Periodically the AI field has entered an “AI Winter” where the dominant paradigm seems to have run its course and researchers look for new options.
Are we entering another AI Winter?
Three recent books suggest not so much renewed stormy weather as a need to broaden perspectives … some looking backward, some merely looking around.
The basic questions raised are simple: Is Deep Learning (the state of the art in machine learning) sufficient? Is it the path toward more intelligent machines (even AGI – artificial general intelligence)?
Russell is widely known as the co-author of Artificial Intelligence: A Modern Approach (3rd ed. 2009), the definitive textbook in the field. In the past few years he has been exploring the concept of “beneficial AI” and this book further articulates that concept.
“The history of AI has been driven by a single mantra: ‘The more intelligent the better.’ I am convinced that this is a mistake.” – Russell, Human Compatible (2019)
Russell’s concern is that the current path of increasing AI autonomy, fueled by more data, opaque algorithms, and enhanced computing, will lead to a loss of control by humans. His outlook is not as bleak as Bostrom’s Superintelligence (2014); his solution is a design concept: make intelligent systems defer to human preferences.
Russell has three guiding principles:
- The machine’s only objective is to maximize the realization of human preferences.
- The machine is initially uncertain about what those preferences are.
- The ultimate source of information about human preferences is human behavior.
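The three principles can be read as a simple loop: start uncertain about what the human wants, update that uncertainty from observed behavior, and act on the inferred preference. Here is a minimal toy sketch of that loop in Python – the fruit-choice scenario, the noise model, and all names are illustrative assumptions, not anything from Russell's book:

```python
# Toy sketch (not from the book) of Russell's three principles:
# the machine's only objective is the human's preference, which it
# does not know, and it infers that preference from human behavior.

# Hypothetical setup: the human prefers exactly one of three items.
PREFERENCES = ["apple", "banana", "cherry"]

def update_beliefs(beliefs, observed_choice, noise=0.1):
    """Bayesian update: assume the human picks the preferred item
    with probability (1 - noise), and anything else otherwise."""
    posterior = {}
    for pref, prior in beliefs.items():
        if observed_choice == pref:
            likelihood = 1 - noise
        else:
            likelihood = noise / (len(beliefs) - 1)
        posterior[pref] = prior * likelihood
    total = sum(posterior.values())
    return {p: v / total for p, v in posterior.items()}

# Principle 2: the machine starts maximally uncertain.
beliefs = {p: 1 / len(PREFERENCES) for p in PREFERENCES}

# Principle 3: human behavior is the source of information.
for choice in ["banana", "banana", "apple", "banana"]:
    beliefs = update_beliefs(beliefs, choice)

# Principle 1: act to maximize the inferred human preference.
best_guess = max(beliefs, key=beliefs.get)
print(best_guess)  # the machine's current best estimate
```

The point of the sketch is only the shape of the design: the objective lives in the human, not in the machine, and the machine's confidence about it grows (but never becomes certainty) as it observes more behavior.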
Putting humans at the center of intelligent machines seems reasonable and certainly desirable. But will it be effective and advance AI?
The concern of Marcus (a long-standing and vocal critic of Deep Learning) and Davis is related to Russell’s, but the focus is different: not a control problem but a myopia problem – AI “doesn’t know what it’s talking about”; it doesn’t actually “understand” anything.
“The cure for risky AI is better AI, and the royal road to better AI is through AI that genuinely understands the world.” – Marcus & Davis, Rebooting AI (2019), p. 199
And the way to understand the world is through “common sense”. In part this looks back to the symbolic (logic) representations of GOFAI (“Good Old Fashioned AI”), and in part it is about training AI on “time, space, causality, basic knowledge of physical objects and their interactions, basic knowledge of humans and their interactions.” Getting there requires us to train AI the way children learn (an observation Turing made in 1950).
Smith picks up the issue of “understanding the world” and argues that AI must be “in the world” in a more visceral way – “deferring” to the world (reality) as we do. Two key concepts stand out: judgment and ontology.
Judgment: Smith makes the distinction between “reckoning” (which most machine learning systems accomplish – calculation and prediction) and “judgment” which he views as the essence of intelligence and the missing component in AI.
Ontology: Smith contends that machine learning has “broken ontology.” It has given us a view of the world as more “ineffably dense” than we have ever perceived. The complexity and richness of the world require us to conceptualize the world differently.
The arguments about judgment and ontology converge in a discussion about knowledge representation and point the way for machine learning to transcend its current limitations:
“If we are going to build a system that is itself genuinely intelligent, that knows what it is talking about, we have to build one that is itself deferential – that itself submits to the world it inhabits, and does not merely behave in ways that accord with our human deference.” – Smith, The Promise of AI (2019)
This book celebrates the power of machine learning while lamenting its shortcomings. However:
“I see no principled reason why systems capable of genuine judgment might not someday be synthesized – or anyway may not develop out of synthesized origins.” – Smith, The Promise of AI (2019)
Good books. All worth your time IMHO.