Well, I watched the first round of the Jeopardy! IBM Challenge, and I am certainly impressed, but a part of me was also disappointed every time Watson got a question wrong or made a mistake. He has some serious problems, but the most important is how much of a closed system he is. Watson doesn't fail questions because he can't find relations to other things. That he can do, but he doesn't understand grammar well enough to reconstruct answers. Yes, I understand the line they keep repeating at IBM is that he's an "information seeking tool", but the AI is so narrow I think playing Jeopardy is all it can do. Now this isn't a rant against AI or IBM, they did an excellent job, but before Watson will be really good at answering questions it needs not one AI but a couple of others. It needs brothers.
It needs a Watson to understand grammar and read language, and a Watson to understand ideas, so when it reads a sentence it knows exactly what that sentence wants. You can tell it has a little of this already, but it was not its primary goal.
Next it needs itself, so once it knows what to look for it can find exactly what it wants. It seems like it is already good at finding evidence, but that's not enough. Once it finds that evidence it needs to verify it without a doubt. I understand Watson only has 3 seconds on Jeopardy, but for an enterprise-ready version this high-level recursive verification will be important.
Lastly, Watson needs a whole new AI to create complex answers to questions that require more than a simple yes or no. In the first round Watson said something to the effect of "What is leg". Trebek corrected him: "No. It's 'What is he was missing a leg.'" So you see he can find the association, but doesn't understand the grammar of the question well enough to give an answer in the right form.
My prediction is that in the next 10 years we will see quite a few powerful narrow AIs like Watson, but the holy grail will become combining these AIs together to form a more powerful intelligence. There may also be some competition between the combining method and the bottom-up general intelligence approach, but I think the two are separate. General Intelligence can learn anything, it has a wide range, but isn't particularly good at anything. Specific Intelligence will be far more important business-wise, I think.

We may start to see robots or AI that include some General Intelligence, but at first it will only be there to round out the Specific AIs so they don't crash, or in other words to provide fault tolerance for conditions outside the Specific AI's range. Where we will see huge benefits is where General AI is given the lead role of controlling all of its Specific AIs, and can even create its own Specific AIs to do things better. Sort of like how humans have our own specific AIs for walking and talking, but we don't think about them; they're hardcoded into us and don't require serious conscious effort. Creating a Specific AI will be like a human learning a new skill: sure, we can do anything, but the more we practice the less it becomes a mental effort of understanding, and we just do.
I think about my own introduction to Reverse Code Engineering: a year ago I had to analyze every little thing a thousand times, but now I just understand the code. I can read it like I would read a sentence (at a first-grade level, mind you) with little critical thinking; but if I come across a new concept my General Intelligence kicks in, I find what I already know, and try to fit the new thing in. Think of AGI (Artificial General Intelligence) as a default exception handler or a default Windows Procedure: it handles whatever you haven't programmed in.
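To make that Windows Procedure comparison concrete, here is a minimal sketch in C against the Win32 API (the function name WndProc and the particular messages handled are just illustrative): the cases the switch handles explicitly play the role of the "Specific AI", and DefWindowProc is the "General AI" fallback that catches every message the programmer never anticipated.

#include <windows.h>

/* The explicitly handled messages are the hardcoded "Specific AI" skills;
   DefWindowProc is the "General AI" default handler that deals with
   anything unanticipated, so the program doesn't break on an unknown message. */
LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch (msg)
    {
    case WM_PAINT:      /* a skill we explicitly programmed in */
        {
            PAINTSTRUCT ps;
            BeginPaint(hwnd, &ps);
            EndPaint(hwnd, &ps);
        }
        return 0;

    case WM_DESTROY:    /* another known, specific case */
        PostQuitMessage(0);
        return 0;

    default:            /* everything else falls through to the general handler */
        return DefWindowProc(hwnd, msg, wParam, lParam);
    }
}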
Rough Draft, might redo this article in the future, with pretty pictures and better formatting and stuff =P