Cognizer - Understanding
AI 1.0

By now, most of us know the term artificial intelligence (AI) and that the concept has been around for decades, dating back to the 1950s. We know that it has gone through a few winters, and I am not going to rehash old history. But the reality is that today’s AI grew out of research from roughly 20 years ago, and its value has expanded dramatically since. We also probably know that this growth relied on increases in computing power, data availability at scale and some very sophisticated algorithms.

So, with that said, let’s take a look at AI 1.0 and why we say its algorithms are engineered. In the early ‘80s, AI started getting popular again. With the advent of Expert Systems, LISP and sophisticated Rules Engines, programmers began to show some value from these technologies. All we had to do was carefully observe a process that a human could do and encode that process in some form of computer procedure. Voila! It was intelligent. Well, maybe not.

It turned out to be brittle, fragile and error prone. The world is just too chaotic. If exactly the same conditions arose and we needed exactly the same results, with no other rules firing at the same time, maybe we would get the right response. But probably not. Mostly, it was a mess.

Although we should have learned our lessons back in the ‘80s, we are now trying to do the same thing with bots. If the customer says “what is today’s balance?” and the bot was expecting “what is the current account balance?,” the result is often an epic fail. Solutions like Siri and Alexa try to mitigate this by surrounding these requests with some machine learning. But if I ask Siri “where could I buy a case for my iPhone?” and she gives me the contact information for three “case workers,” that does not resolve my query.
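To see why this brittleness is baked in, here is a minimal sketch of an AI 1.0-style rule in Python. The rule table, utterances and responses are all hypothetical and invented for illustration; the point is that the "intelligence" is an exact-match lookup, so any paraphrase of the same intent falls straight through to a failure case.

```python
# Hypothetical AI 1.0-style "bot": intents are hand-encoded as exact
# patterns, so the system only understands the phrasings it was given.
RULES = {
    "what is the current account balance?": "Your balance is $1,234.56.",
}

def answer(utterance: str) -> str:
    # Exact-match lookup after trivial normalization. There is no model
    # of meaning here, only the rule list the programmer wrote down.
    return RULES.get(utterance.strip().lower(), "Sorry, I don't understand.")

# The expected phrasing matches the rule...
print(answer("What is the current account balance?"))
# ...but the same question, phrased differently, is an epic fail.
print(answer("What is today's balance?"))
```

Lowercasing and stripping whitespace buys a little robustness, but every further variation (word order, synonyms, typos) needs yet another hand-written rule, which is exactly how these systems became a mess.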

At the very best, these solutions tend to get it right about 50% of the time. As I said in my previous blog, that is the same as a coin flip, and very few CEOs want to risk their companies on a coin flip. These systems didn’t work then, they don’t work now and they are probably not going to provide much value in the future either.

Businesses were about to give up on AI when along came Geoffrey Hinton of the University of Toronto. Hinton realized something others had overlooked, and that insight led to the emergence of “AI 2.0.”