By Jack Porter
July 29, 2019
Where AI 1.0 relied on brittle, engineered procedures, AI 2.0 saw data scientists focus on advanced math. This started out with basic statistics and grew into hundreds of algorithms that could predict some form of trend or classification.
Used independently, these algorithms are rarely able to achieve accuracies above about 50%. This is primarily because most data science problems are non-linear; the data space is not consistent. If the problem you are trying to solve is churn at a bank, some customers may be leaving because they were turned down for a car loan, others because of banking fees, and still others because they moved, divorced or even died. Each of these behaviors has its own pattern and therefore needs its own model for predictions.
Data scientists tried to solve this by creating “ensembles” of models, each predicting one form of behavior. This helped a lot, but it required them to understand the underlying parameters of each behavior pattern. Often these patterns were very sophisticated, involving hundreds or thousands of features over time.
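As a rough sketch of that ensemble idea, here is a toy example using scikit-learn. The churn features, labels and model choices are placeholders for illustration, not a recipe from any real bank project.

```python
# Toy sketch of an "ensemble": several models, each better at a different
# pattern, combined by soft voting. Features and labels are synthetic.
import numpy as np
from sklearn.ensemble import VotingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 6))                  # e.g., fees, loan denials, tenure, ...
y = (X[:, 0] + 0.5 * X[:, 1] > 1).astype(int)   # stand-in "churned" label

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("linear", LogisticRegression(max_iter=1000)),         # broad linear trend
        ("forest", RandomForestClassifier(n_estimators=100)),  # non-linear pockets
    ],
    voting="soft",
)
ensemble.fit(X_train, y_train)
print("held-out accuracy:", ensemble.score(X_test, y_test))
```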
In 2006, Geoffrey Hinton published his seminal work on deep belief networks and unsupervised pre-training, which became the inflection point for Deep Learning. This was not the first research in this area. In fact, Hinton had co-authored an important paper on backpropagation two decades earlier. But the 2006 work was timed well, and the concept of Deep Learning was born. Then, in 2012, Hinton, Ilya Sutskever and Alex Krizhevsky used Deep Learning in the acclaimed ImageNet competition and blew the record away. Their Deep Learning model improved the error rate of image recognition by a whopping 10.8 percentage points, a 41% relative improvement over the next competitor. With that, Deep Learning was off to the races.
Since that time, Deep Learning has gone from the 23% error rate of Hinton’s team to 29 out of 38 teams in the ImageNet competition achieving error rates below 5% by 2017. All of them used Deep Learning. In fact, in 2019, ImageNet researchers consistently get error rates below 2%.
But Deep Learning’s amazing performance is not restricted to image recognition. It is basically good at any type of classification problem where there is plenty of labeled data. This includes voice recognition, cancer screening, autonomous cars and robotics. It can be used against business data for customer engagement, fraud, anti-money laundering and retention. It is used in diverse industries such as banking, pharmaceuticals, chemical, oil and gas, and agriculture. In each of these situations, good data scientists with lots of data can get the classification to more than 90% accuracy. Again, this is a game changer.
The concept of Deep Learning is that the data scientist creates a stacked neural network and “feeds forward” labeled data. During classification, if the model’s output does not match the label, the error is backpropagated through the network, adjusting the weights of each neuron along the way. This process is iterated using gradient descent until the error collapses and the outcome converges.
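Here is a minimal sketch of that feed-forward, backpropagate, gradient-descent loop using PyTorch. The data, network size and hyperparameters are illustrative toys, not a real benchmark.

```python
# Minimal training loop: feed forward, compare with the label,
# backpropagate the error, update weights by gradient descent.
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(256, 20)            # toy labeled data
y = (X.sum(dim=1) > 0).long()       # toy labels

model = nn.Sequential(              # a small "stacked" network
    nn.Linear(20, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 2),
)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

for epoch in range(100):
    logits = model(X)               # feed forward
    loss = loss_fn(logits, y)       # error between prediction and label
    optimizer.zero_grad()
    loss.backward()                 # backpropagate the error
    optimizer.step()                # gradient-descent weight update
print("final loss:", loss.item())
```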
As Deep Learning took hold, it began to diversify into several architectures. Convolutional networks were used for spatial problems such as image recognition, Recurrent Neural Networks for longitudinal analysis and Self-Organizing Maps for dimensionality reduction. Today, there are more than 25 unique architectures and many more variations. Deep Learning’s advantage is that the “Feature Detection” is done automatically. Data scientists do not have to guess what is causing the predictive behavior; the network picks this up by itself as the weights of the neurons converge.
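To make the automatic “Feature Detection” point concrete, here is a sketch of a small convolutional network in PyTorch. The layer sizes are purely illustrative and this is a generic example, not any specific published architecture: the stacked convolutional layers learn their own feature detectors from raw pixels, so nobody hand-engineers the features.

```python
# A tiny convolutional classifier for 32x32 RGB images, 10 classes.
import torch.nn as nn

cnn = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),   # learns low-level edges
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),  # learns mid-level shapes
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 8 * 8, 10),      # classifier head on the learned features
)
```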
The problem with Deep Learning is that it requires a lot of data, and the data must be labeled. There are many research projects trying to reduce this requirement, but it is still a big problem. In addition, Deep Learning is really only focused on classification, either spatial or temporal, but not at the same time. This means it is really great at classifying images, but not great at predicting sequences of data.
This is where our brain is formidable. Unlike AI based on engineered procedures or mathematically calculated classifications, our brain is a “Prediction Engine.” It is great at constructing a model of the world, then predicting future outcomes and identifying anomalies based on those predictions. It can do this with very little data, transferring what it has learned in one domain to recognize similar behavior in another.
This is what scientists consider intelligence, and this will be the basis of the next generation of AI, which will work much more like our human brains.
By Jack Porter
July 29, 2019
By now, we probably all know the term artificial intelligence (AI) and that the concept of AI has been around for a long time (since the 1950s). We know that it has gone through a few winters. I am not going to rehash old history. But the reality is that today’s AI came from research about 20 years ago, and its value has grown enormously. We also probably know that AI’s rise relied on the increase in computing power, data availability at scale and some very sophisticated algorithms.
So, with that said, let’s take a look at AI 1.0 and why we say its algorithms are engineered. In the early ‘80s, AI started getting popular again. With the advent of Expert Systems, LISP and sophisticated rules engines, programmers began to show some value from these technologies. All we had to do was carefully observe a process that a human could perform and encode that process in some form of computer procedure. Voila! It was intelligent. Well, maybe not.
It turned out to be brittle, fragile and error prone. The world is just too chaotic. If exactly the same conditions arose and we needed exactly the same results, with no other rules firing at the same time, maybe we would get the right response. But probably not. Mostly, it was a mess.
Although we should have learned our lessons back in the ‘80s, we are now trying to do the same thing with bots. If the customer says “what is today’s balance?” and the bot was expecting “what is the current account balance?”, the result is often an epic fail. Solutions like Siri and Alexa try to address this by surrounding these requests with some machine learning. But when I asked Siri “where could I buy a case for my iPhone?” and she gave me the contact information for three “case workers,” that did not resolve my query.
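A toy illustration of why these engineered, exact-match approaches break (purely hypothetical code, not any vendor’s bot):

```python
# An AI 1.0-style engineered rule: the response only fires when the input
# matches exactly what the engineer anticipated, which is why it is brittle.
RULES = {
    "what is the current account balance?": "Your balance is $1,234.56.",
}

def answer(utterance: str) -> str:
    # Exact-match lookup; anything unanticipated falls through.
    return RULES.get(utterance.strip().lower(), "Sorry, I don't understand.")

print(answer("What is the current account balance?"))  # works
print(answer("What is today's balance?"))               # epic fail
```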
At the very best, these solutions tend to get it right about 50% of the time. As I said in my previous blog, that is the same as a coin flip. Very few CEOs want to risk their companies on a coin flip. These systems didn’t work then, they don’t work now and they are probably not going to provide a lot of value in the future either.
Businesses were about to give up on AI when along came Geoffrey Hinton from the University of Toronto. Hinton saw something others had overlooked, and that insight led to the emergence of “AI 2.0.”
By Randy Friedman
July 29, 2019
Your business knowledge is perhaps your most valuable corporate asset, so it makes sense to monetize it in every way possible. Putting your business knowledge to use in real time is strategically vital. That is hard to do when it’s trapped in documents.
Documents are not knowledge. Documents contain knowledge.
For most organizations, the vast majority of their business knowledge resides in documents: emails, proposals, presentations, contracts, manuals, etc. These documents are digitized and stored in sophisticated enterprise content management systems such as Box.com, FileNet and OpenText, and in related systems such as Office 365, Google Drive, etc. These systems help users search for documents, using filters, tags, metadata and the like. “Search” is the operative term. Would you be surprised to learn that up to 35% of your team’s time is wasted searching for information? (Source: KM World)
Right now, the only way an executive or manager can extract knowledge from a document (for example, to prepare for a meeting or perform a task) is manually.
We search, then find and read the relevant documents.
We determine which facts are more pertinent and relevant to our task at hand.
We apply that knowledge to the task and ideally remember it for the next time.
Simply put, knowledge is trapped in documents.
Experts cite the need for AI to capture, retain and quickly disseminate knowledge from documents to solve business problems and optimize business processes like these:
Customer Service and Support Operations
Governance, Compliance and Ethical Issues
Human Capital Management/Recruiting Operations
IT Service Management and Help Desk Operations
Sales and Lead Management Operations
Security, Risk and Fraud Operations
Sourcing, Procurement and Vendor Management
Supply Chain Operations
Systems like Box™ need to be augmented with additional capabilities to activate your business knowledge in real time. In fact, your entire enterprise needs to be augmented with AI.
Many experts point to AI to close these gaps. But since there are many forms of AI, which form works best to solve this challenge?
Cognizer, the Corporate Brain, answers this question. To solve these types of problems, our team created a new type of AI: Natural Language Intelligence. With our proprietary Natural Language Intelligence, Cognizer learns, retains and proactively disseminates knowledge from documents and enterprise systems in real time. Building the Augmented Enterprise requires the Corporate Brain.
Searching for what you need to know to perform at your best is like having a second job. Who has the time? Stop searching. Start knowing.
By Randy Friedman
July 30, 2019
The Financial Services industry is perhaps the most information intensive of all. Financial firms have no factories, no inventory, no materials, just smart people and information systems. And those smart people are expensive: financial services workers earn more on average than workers in nearly every other industry. One would expect them to be among the most efficient users of information, but the facts are quite different. It’s not that they lack information. They have too much of it, and they have many complex regulations with which they must comply.
Knowledge and speed are the competitive drivers of success. So why are banks so slow to onboard a new corporate customer?
My team interacts with large financial service firms all the time, and they see many applications for Cognizer, the Corporate Brain. Compliance with Know Your Customer (KYC) and Anti-Money Laundering (AML) is just one of many regulatory Cognizer use cases they see relieving severe financial pain.
Let’s explore why: $500 million per year. In a recent article, Thomson Reuters reported that most large financial institutions spend up to $500 million each year on KYC and customer due diligence.
For personal accounts, information extracted from documents is needed to prove one’s identity. For business accounts, each officer and director needs to do the same. Periodic surveillance is required, which means updating those documents every year or two. Most folks have several banking relationships. Most corporations have dozens.
Worse than the cost is the time. Onboarding a new corporate client takes two to three months on average. Inefficient processes are causing customers to walk.
Security is also a major problem. The documents customers share with financial firms are highly sensitive, and that data is highly regulated. Yet most banks ask you to send those documents via email. Ahem, I said “email.”
Expediting KYC and AML compliance, together with our partner Box.com, is just one of the many problems that Cognizer, the Corporate Brain, solves. The Corporate Brain extracts intelligence from documents stored in Box.com, email systems, file folders, etc.
Every piece of vital information, such as your Social Security or driver’s license number, is secured by Cognizer using the same access control settings that Box.com imposes on the source documents. If you are not entitled to see a passport document in a bank customer’s shared Box.com folder, you cannot see the information Cognizer extracts from that document.
The idea is simple. The documents never move via email; they stay secured in a Box.com shared folder. Cognizer works with Box.com to manage the workflows and compliance using AI. That is simple to say, but complex AI is required to make it work, and that is part of Cognizer’s value proposition: we deliver intelligence from documents without search.
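As a purely hypothetical sketch (not Cognizer’s or Box’s actual API), the underlying idea is that every extracted fact carries a pointer back to its source document and is only shown to users the content system would already allow to open that document:

```python
# Hypothetical illustration of permission inheritance for extracted facts.
from dataclasses import dataclass

@dataclass
class Fact:
    text: str
    source_document_id: str

def can_view(user_id: str, document_id: str, acl: dict) -> bool:
    # acl maps document_id -> set of user_ids the content system allows
    return user_id in acl.get(document_id, set())

def visible_facts(user_id, facts, acl):
    # A fact is returned only if the user could open its source document.
    return [f for f in facts if can_view(user_id, f.source_document_id, acl)]

acl = {"passport.pdf": {"compliance_officer"}}
facts = [Fact("Passport number: <redacted>", "passport.pdf")]
print(visible_facts("compliance_officer", facts, acl))  # sees the fact
print(visible_facts("sales_rep", facts, acl))           # sees nothing
```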
Saving just 20% of the time it takes a major financial services firm to comply with AML and KYC amounts to $100 million per year. I repeat: $100 million. Now, imagine cutting that time in half.
Who has the time to search? That’s why we say, “Stop Searching. Start Knowing.”
To explore how Cognizer works or how it can help your organization, Click Here.
By Randy Friedman
July 30, 2019
Every week I have dozens of meetings and conversations with executives, investors and industry experts about Cognizer, the Corporate Brain. Everyone is fascinated by the potential power of our proprietary AI – Natural Language Intelligence. One of the most common questions is this: “What problems and use cases does Cognizer address?”
I tend to chuckle because preparing for the meeting I am having with them is perhaps the best use case of all. Let’s explore why.
One of my primary responsibilities is working with customers and partners. As a startup, Cognizer.ai must win deals to succeed. Each meeting in the deal cycle requires extensive preparation, research and knowledge. If you are not fluent in the topics your “buyer” prioritizes, you will not propel the opportunity forward and the buyer’s journey ends abruptly.
Think of each meeting as a live “performance” in which you are either moving the opportunity forward or you’re not. You’re either winning against your competition…or losing. The stakes are high. Sales professionals know it’s not just about what happens during a meeting; success is often tied to what you do before each meeting.
First, you need to learn about the company:
Does anyone on your team have any prior relationship, dealings with or connection to the company with whom you are meeting? To know, you need to search, find, read and digest information from internal documents, emails, systems, etc.
What are the company’s pain points and priorities, their competition, recent successes and challenges, etc.? To know, you need to search, find, read and digest information from internal and external sites, such as their corporate site, LinkedIn, Wikipedia, the SEC.
There are endless questions that need answers. In each case, you need to find the answers, integrate them into your agenda and probably memorize them.
Second, you need to learn about the team who will be making a decision about your deal. Decisions are not often made by just one person.
With whom are you meeting?
Do you have any prior connections with them?
What are their roles and responsibilities?
What are their prior work experiences?
What are their interests and hobbies?
Once again, to know, you need to search, find, read, digest and memorize.
Consider how long this takes. A recent Forbes article answers this question. “Here is the bottom line: Sales reps are only spending one-third of their time selling.”
The financial cost is astounding. Forbes cites this fact: “The average field sales rep is paid $105,482 a year. If 64.8% of the time is spent on non-revenue generating activities, the typical company spends $68,352 per rep per year to pay him or her for tasks they were not hired to do.”
Cisco, a large technology firm, has more than 24,500 professionals in sales and marketing. Semantech, another technology firm, has about 2,000 sales and marketing people. Both pay well above the average for talent. For every 1,000 sales professionals, companies like these spend more than $68 million a year on non-selling activities, much of it simply preparing to do their jobs. Hmmm. That seems like a big problem worth solving.
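A quick back-of-the-envelope check of that figure, using only the Forbes numbers quoted above:

```python
# Where the ~$68 million per 1,000 reps comes from.
avg_salary = 105_482            # average field sales rep pay (Forbes)
non_selling_share = 0.648       # share of time not spent selling (Forbes)

cost_per_rep = avg_salary * non_selling_share   # ~ $68,352 per rep per year
print(round(cost_per_rep))
print(round(cost_per_rep * 1_000))              # ~ $68.4M per 1,000 reps per year
```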
What if each professional already knew all the information they needed to perform their work? That is just one of the many problems that Cognizer, the Corporate Brain, solves. The Corporate Brain extracts intelligence from documents stored in Box.com, email systems, file folders, etc. It merges your organization’s business knowledge with hundreds of millions of facts preconfigured in its knowledge base.
That’s why we say, “Stop Searching. Start Knowing.” Who has the time to search?
To explore how Cognizer works or how it can help your organization, Click Here.
By Jack Porter
July 30, 2019
In both AI 1.0 and AI 2.0, when creating an AI model, the intelligence came from the human data scientist doing the engineering or the math. That means not only that a very smart data scientist was required to create these sophisticated models, but also that the models were limited by the capabilities of that human. As data sets grew from thousands of rows to trillions of rows, the human became the pinch point. Big Data and Deep Learning got us pretty close to “what should be” predictions, but they are barely scratching the surface of “what could be” predictions.

The human brain is amazing. With one hundred billion neurons and hundreds of trillions of synapses, the brain can perform an estimated 38 thousand trillion operations a second. Only our fastest supercomputers come anywhere close to that. And where those supercomputers use 10–20 megawatts of electricity, our brain does this with about 20 watts of energy.

But it is not just the calculations that are impressive. It is the creativity and innovation that are most fascinating. We are great at thinking outside the box, applying learning from one area to another and innovating remarkable new ideas. The way AI 1.0 and AI 2.0 work, they will never be able to do that.

To take AI to the next level, we need a new way. And that new way is going to look a lot like the way our brains work. Our new models need to be much more generalized, more about the patterns and less about the math.

Our brains do three things that our AIs are going to need to master. First, the brain builds a “model of the world,” storing many patterns across the cerebral cortex. Second, when the brain perceives something new, it stores that too. It sees these patterns across time in what is called a temporal model; time is critical to the brain, and all patterns are stored in this context. Finally, the brain is constantly predicting what is going to happen next. If the prediction is correct, the pattern is reinforced. If there is an anomaly, the brain captures the difference as a new pattern.

For our AIs to reach the next level, they are going to need to do the same thing. We are going to need to build sophisticated models of the world that rely on time, patterns and constant prediction. Like the brain, these models will probably be hierarchical. Instead of neurons being simply binary (on and off), they will project several states. They will also update not only through feed-forward data paths, but also through feedback and related neural context.

Then there is consciousness. For many years, most scientists believed that when our computers reached a certain number of operations per second, consciousness would simply emerge from calculation. However, our fastest supercomputers now exceed 200 quadrillion operations per second. Not only has this prediction not come true, but there is no sign that it was a cogent assumption. Some new research suggests that consciousness may be a quantum effect. If that is the case, we probably have a long way to go before our computers are conscious.

In the next few blogs, we are going to share our thinking about where AI is going, how it is going to get there and how long it is going to take. We will be quite transparent about what is going on in our research labs, which strategies we find are working and which are not. We hope you will enjoy this journey as much as we do.
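As a toy illustration of the prediction-engine idea described above (not a model of the brain, and not Cognizer’s algorithm), here is a tiny sketch that builds a “model of the world” from transition counts, predicts the next step, reinforces correct predictions and captures anomalies as new patterns:

```python
# Toy "prediction engine": learn temporal patterns, predict the next step,
# reinforce correct predictions, and flag anomalies as new patterns.
from collections import defaultdict

transitions = defaultdict(lambda: defaultdict(int))  # the model of the world

def predict(prev):
    nxt = transitions.get(prev)
    return max(nxt, key=nxt.get) if nxt else None

def observe(prev, current):
    predicted = predict(prev)
    if predicted == current:
        transitions[prev][current] += 2   # correct prediction: reinforce pattern
    else:
        transitions[prev][current] += 1   # anomaly: capture as a new pattern
        print(f"anomaly: after {prev!r} expected {predicted!r}, saw {current!r}")

sequence = ["wake", "coffee", "work", "wake", "coffee", "work", "wake", "tea"]
for a, b in zip(sequence, sequence[1:]):
    observe(a, b)
```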