
In AI, Common Sense Is Not All that Common

June 14, 2021


When something is described as “common,” it generally means it occurs, is found, or is done often; it is supposed to be the prevalent state of things. Nevertheless, for centuries (if not longer) people have lamented that common sense is not all that common. The 18th-century writer Voltaire wrote, “Common sense is not so common.” The 19th-century newspaper editor Horace Greeley agreed. “Common sense,” he wrote, “is very uncommon.” The 19th-century English poet Samuel Taylor Coleridge added, “Common sense, in an uncommon degree, is what the world calls wisdom.” If common sense among humans seems to be a rare thing, it shouldn’t surprise you to learn that common sense is also rare among thinking machines. To demonstrate how challenging common sense can be for cognitive technologies, New York University professors Ernest Davis and Gary Marcus (@GaryMarcus) suggest asking a series of questions that are easy for humans to answer but difficult for computers: “Who is taller, Prince William or his baby son Prince George? Can you make a salad out of a polyester shirt? If you stick a pin into a carrot, does it make a hole in the carrot or in the pin?”[1] As Davis and Marcus point out, “These types of questions may seem silly, but many intelligent tasks, such as understanding texts, computer vision, planning, and scientific reasoning require the same kinds of real-world knowledge and reasoning abilities.”

 

Rob Toews (@_RobToews), a venture capitalist at Highland Capital Partners, writes, “Relative to what we would expect from a truly intelligent agent — relative to that original inspiration and benchmark for artificial intelligence, human cognition — AI has a long way to go.”[2] He adds, “Critics like to point to these shortcomings as evidence that the pursuit of artificial intelligence is misguided or has failed. The better way to view them, though, is as inspiration: as an inventory of the challenges that will be important to address in order to advance the state of the art in AI.” He agrees with Davis and Marcus that one of artificial intelligence’s shortcomings is a lack of common sense. He quotes DARPA’s Dave Gunning, who stated, “The absence of common sense prevents an intelligent system from understanding its world, communicating naturally with people, behaving reasonably in unforeseen situations, and learning from new experiences. This absence is perhaps the most significant barrier between the narrowly focused AI applications we have today and the more general AI applications we would like to create in the future.”

 

What is Common Sense?

 

Toews observes, “Humans’ ‘common sense’ is a consequence of the fact that we develop persistent mental representations of the objects, people, places and other concepts that populate our world — what they’re like, how they behave, what they can and cannot do.” Michael Stiefel, a principal at Reliable Software, and Daniel Bryant (@danielbryantuk), Director of DevRel at Ambassador Labs, add, “Common sense is all the background knowledge we have about the physical and social world that we have absorbed over our lives. It includes such things as our understanding of physics (causality, hot and cold), as well as our expectations about how humans behave. Leora Morgenstern compares common sense to ‘What you learn when you’re two or four years old, you don’t really ever put down in a book.'”[3] Toews notes that AI systems are incapable of forming such mental models. He explains, “They do not possess discrete, semantically grounded representations of, say, a house or a cup of coffee. Instead, they rely on statistical relationships in raw data to generate insights that humans find useful.”
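To make the contrast concrete, here is a minimal, hypothetical sketch (in Python, with object and property names invented for illustration; it is not drawn from any of the systems discussed) of what a discrete, semantically grounded representation of everyday objects might look like, and how a common-sense question could be answered from explicit properties rather than from statistical co-occurrence:

```python
# Illustrative only: a toy "mental model" of everyday objects as explicit,
# semantically grounded representations (properties and affordances),
# rather than statistical associations mined from raw text.

from dataclasses import dataclass, field


@dataclass
class ObjectModel:
    name: str
    material: str
    edible: bool
    affordances: set = field(default_factory=set)  # things you can do with it


# Two objects a person "just knows" about.
carrot = ObjectModel("carrot", material="plant", edible=True,
                     affordances={"eat", "chop", "pierce"})
pin = ObjectModel("pin", material="steel", edible=False,
                  affordances={"pierce_with"})


def what_gets_a_hole(tool: ObjectModel, target: ObjectModel) -> str:
    """Toy common-sense rule: the harder object pierces the softer one."""
    hardness = {"steel": 3, "plant": 1, "fabric": 0}
    if hardness[tool.material] > hardness[target.material]:
        return target.name
    return tool.name


print(what_gets_a_hole(pin, carrot))  # -> "carrot"
```

The point of the sketch is that the answer follows from what the system explicitly represents about pins and carrots, not from patterns extracted out of raw data.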

 

Some pundits have posited that, in many practical situations, massive knowledge is essentially common sense. This assumption underlies IBM Watson’s technology and MIT’s Open Mind Common Sense (OMCS) project, as well as other deep learning systems. The underlying presumption is that the internet’s knowledge is now so vast that most important facts probably exist in a document or database, and that detecting common sense in that material is easier than developing an explicit object model of common sense. These systems simulate common sense by searching and ranking answers, or by training machine learning models to generate common-sense answers. A couple of years ago, Thomas Serre (@tserre), a professor in the Department of Cognitive, Linguistic and Psychological Sciences at Brown University, wrote, “Because computers can effortlessly sift through data at scales far beyond human capabilities, deep learning is not only about to transform modern society, but also about to revolutionize science — crossing major disciplines from particle physics and organic chemistry to biological research and biomedical applications.”[4]
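As a rough illustration of the “search and rank” idea, the following hypothetical sketch scores candidate answers by how well they overlap with facts in a tiny knowledge store. Real systems such as Watson or OMCS-based tools are vastly more sophisticated, so treat this only as a caricature of the approach:

```python
# Hypothetical sketch of "simulating" common sense by retrieval and ranking:
# score each candidate answer by word overlap with facts in a small store.

FACTS = [
    "a pin is small, hard and sharp",
    "a carrot is a firm vegetable that can be pierced",
    "salads are made from edible ingredients such as vegetables",
    "a polyester shirt is clothing and is not edible",
]


def rank_answers(question, candidates):
    """Rank candidates by how many of their words (plus the question's) appear in facts."""
    q_words = set(question.lower().split())
    scored = []
    for cand in candidates:
        words = q_words | set(cand.lower().split())
        score = sum(1 for fact in FACTS for w in words if w in fact.split())
        scored.append((cand, score))
    return sorted(scored, key=lambda pair: pair[1], reverse=True)


print(rank_answers("can you make a salad out of a polyester shirt",
                   ["yes", "no, a shirt is not edible"]))
```

The limitation the article goes on to describe is visible even here: the system has no notion of what a shirt or a salad *is*; it only measures how words co-occur in the stored text.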

 

Nevertheless, true common sense eludes artificial intelligence systems. Marcus insists, “The great irony of common sense — and indeed AI itself — is that it is stuff that pretty much everybody knows, yet nobody seems to know what exactly it is or how to build machines that possess it. Solving this problem is, we would argue, the single most important step towards taking AI to the next level. Common sense is a critical component to building AIs that can understand what they read; that can control robots that can operate usefully and safely in the human environment; that can interact with human users in reasonable ways. Common sense is not just the hardest problem for AI; in the long run, it’s also the most important problem.” The conundrum, of course, is that the complexity and ambiguity found in the world defy any effort to model them.

 

Towards Common Sense

 

As just discussed, the common-sense conundrum is intractable in unbounded, open-world domains, which Enterra Solutions® avoids. Enterra® constrains the scope of common-sense reasoning by choosing applications within business domains where knowledge and common-sense breadth can be bounded. Our applications further limit the types of rules and queries that are used, which narrows the scope even more. For example, in our Insight Engine, all the knowledge pertains to expected outcomes (a very limited subset of all domain knowledge), and common sense is used to explain the insights when any expectations are violated within one business domain, such as the consumer packaged goods (CPG) sector. Early in our development of the Enterra Cognitive Core™, we partnered with Cycorp® and have benefited from an exclusive relationship for use of its ontology within the CPG sector. Toews observes, “For over thirty-five years, AI researcher Doug Lenat and a small team at Cyc have devoted themselves to digitally codifying all of the world’s commonsense knowledge into a set of rules. These rules include things like: ‘you can’t be in two places at the same time,’ ‘you can’t pick something up unless you’re near it,’ and ‘when drinking a cup of coffee, you hold the open end up.’ As of 2017, it was estimated that the Cyc database contained close to 25 million rules and that Lenat’s team had spent over 1,000 person-years on the project.”
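To give a flavor of what codifying a common-sense rule looks like (Cyc actually expresses its rules in its own logical language, CycL; the Python below is only an illustrative stand-in with made-up assertions), here is the “you can’t be in two places at the same time” rule written as an explicit consistency check:

```python
# Toy illustration of a hand-codified common-sense rule, in the spirit of
# (but not the syntax of) Cyc: an object cannot be in two places at once.

from collections import defaultdict

# (entity, location, time) assertions in a tiny knowledge base.
assertions = [
    ("truck_17", "warehouse_A", "2021-06-14T09:00"),
    ("truck_17", "store_42",    "2021-06-14T09:00"),  # conflicts with the first
    ("truck_18", "warehouse_A", "2021-06-14T09:00"),
]


def violations(facts):
    """Return entities asserted to be in more than one place at the same time."""
    seen = defaultdict(set)
    for entity, place, time in facts:
        seen[(entity, time)].add(place)
    return {key: places for key, places in seen.items() if len(places) > 1}


print(violations(assertions))
# {('truck_17', '2021-06-14T09:00'): {'warehouse_A', 'store_42'}}
```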

 

For bounded business applications, Cyc provides cognitive systems with something very close to common sense. However, for unbounded applications, Toews admits, “Cyc has not led to artificial intelligence with common sense.” Again, the basic problem for Cyc, and similar efforts, is the unbounded complexity of the real world. For every common sense “rule” one can think of, there is an exception or a nuance that itself must be articulated. These tidbits multiply endlessly. Somehow, the human mind is able to grasp and manage this wide universe of knowledge that we call common sense — and, however it does it, it is not through a brute-force, hand-crafted knowledge base.

 

Concluding Thoughts

 

Although Cyc may not be perfect, it far surpasses anything else available, especially when an area of inquiry is bounded. Enterra uses ontologies in most of our solutions to link the patterns found in big data (usually discovered via machine learning) to a business understanding of those patterns. Enterra’s rules-based ontologies and rules engines (i.e., advanced semantic knowledge bases and rules repositories) understand data and the relationships between data elements, and they allow inferences and deductions to be drawn using forward and backward chaining. Enterra’s Generalized Ontologies are industry-specific ontologies that capture the knowledge and best practices of a market sector or discipline. The bottom line is this: A little common sense goes a long way in preventing poor machine learning results.
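As a generic illustration of forward chaining (this is not Enterra’s engine, just a minimal sketch with invented facts and rules), a rules engine repeatedly applies if-then rules to known facts until nothing new can be derived:

```python
# Minimal, generic forward-chaining sketch: repeatedly apply if-then rules
# to a set of known facts until no new conclusions can be derived.

facts = {"promotion_ran", "shelf_stock_low"}

# Each rule: (set of premises, conclusion)
rules = [
    ({"promotion_ran"}, "demand_spike_expected"),
    ({"demand_spike_expected", "shelf_stock_low"}, "replenishment_needed"),
]


def forward_chain(facts, rules):
    """Derive every conclusion reachable from the starting facts."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived


print(forward_chain(facts, rules))
# {'promotion_ran', 'shelf_stock_low', 'demand_spike_expected', 'replenishment_needed'}
```

Backward chaining works in the opposite direction: it starts from a goal (here, “replenishment_needed”) and searches for rules and facts that would establish it.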

 

Footnotes
[1] Ernest Davis and Gary Marcus, “Commonsense Reasoning and Commonsense Knowledge in Artificial Intelligence,” New York University.
[2] Rob Toews, “What Artificial Intelligence Still Can’t Do,” Forbes, 1 June 2021.
[3] Michael Stiefel and Daniel Bryant, “Is Artificial Intelligence Closer to Common Sense?” InfoQ, 19 October 2020.
[4] Thomas Serre, “Deep Learning: The Good, the Bad, and the Ugly,” Annual Review of Vision Science, Vol. 5:399-426, September 2019.
[5] Mary Shacklett, “Artificial intelligence can’t yet learn common sense,” TechRepublic, 13 May 2020.
