Nature vs Nurture: AI Style

There’s an interesting debate going on in the AI community. It has actually been happening for a while; it’s just that recently it has become more public and more personal.

Not exactly the old nature vs. nurture discussion, but something similar. The essential question is: what is intelligence, and how do AI agents become intelligent (or more intelligent)? Let’s simplify the debate and make it about two people: Rich Sutton and Gary Marcus.

Rich Sutton is the leading proponent of reinforcement learning (the trial-and-error, reward-based learning often associated with humans).

Gary Marcus is a cognitive scientist who has often called for the return of symbolic AI (e.g. GOFAI) and is currently advancing what he calls “robust AI”.

In a nutshell:

Sutton, in “The Bitter Lesson” (2019), argues that intelligence is computation and that all we need to do is leverage computational scale; intelligent agents and systems will emerge. Building in human knowledge, what AI folks call “priors”, is a “distraction” and not worth the effort. General systems always win.

Score one for nurture.

Marcus, in “The Next Decade in AI: Four Steps Towards Robust Artificial Intelligence” (2020), does not dispute that computation has a role in intelligence; he just doesn’t think it’s sufficient or even efficient. People learn, in part, because they have “priors”: innate understanding and knowledge. We use those as building blocks to create more knowledge as we experience new situations. Why not, says Marcus, use those priors in AI to speed understanding and knowledge creation?

Score one for nature and nurture.

From my perspective, both approaches are interesting and, frankly, valid in their own way. The difference for me is the outcome. Given my bias and interest in machine information behaviour, an agent built following Sutton’s strategy will behave differently from one built following Marcus’s. And that’s OK. I think. Grin.

Join the debate.

….Mike

Green AI?

Is machine learning an environmental hazard or an environmental solution?

The answer is, apparently, “yes”.

Recently a number of papers have focused attention on machine learning and climate change. Interesting findings.

“Tackling Climate Change with Machine Learning” (Rolnick et al., 2019) is a manifesto published in advance of the NeurIPS conference. This extensive and detailed report outlines many ways in which applying ML can have a positive impact on addressing significant aspects of climate change. In summary:

“ML can enable automatic monitoring through remote sensing (e.g. by pinpointing deforestation, gathering data on buildings, and assessing damage after disasters). It can accelerate the process of scientific discovery (e.g. by suggesting new materials for batteries, construction, and carbon capture). ML can optimize systems to improve efficiency (e.g. by consolidating freight, designing carbon markets, and reducing food waste). And it can accelerate computationally expensive physical simulations through hybrid modeling (e.g. climate models and energy scheduling models).”

A report from the AI Now Institute, “AI and Climate Change: How they’re connected, and what we can do about it” (Dobbe & Whittaker, 2019), is not so optimistic:

“The estimated 2020 global footprint [of the tech industry] is comparable to that of the aviation industry, and larger than that of Japan, which is the fifth biggest polluter in the world. Data centers will make up 45% of this footprint (up from 33% in 2010) and network infrastructure 24%.”

They conclude that overall, “we see little action to curb emissions, with the tech industry playing a significant role in the problem.”

While the Rolnick et al. report illustrates that applying ML to environmental challenges has been and will continue to be productive, the story is a bit different when looking at the environmental costs of training the ML models to do this very work.

Strubell et al., “Energy and Policy Considerations for Deep Learning in NLP” (2019), estimate that “training BERT [a widely used NLP model] on GPU is roughly equivalent to a trans-American flight.” The authors of “Green AI” (Schwartz et al., 2019) note that the amount of compute required to train a model has increased 600,000 times (!) since 2013. More and more data, millions of parameters, and hundreds of GPUs. And it’s getting worse. They advocate “making efficiency an evaluation criterion for research alongside accuracy and related measures. In addition, we propose reporting the financial cost or ‘price tag’ of developing, training, and running models to provide baselines for the investigation of increasingly efficient methods.”
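
To make the scale of such numbers concrete, here is a minimal back-of-the-envelope sketch in the spirit of the Strubell et al. methodology (power draw × time × data-centre overhead × grid carbon intensity). Every figure below is an illustrative assumption of mine, not a measurement of any real model.

```python
# Back-of-the-envelope estimate of training energy and emissions.
# All numbers are illustrative assumptions, not measurements.

gpu_count = 64            # number of GPUs used for training (assumed)
gpu_power_kw = 0.25       # average power draw per GPU, in kW (assumed)
training_hours = 80       # wall-clock training time (assumed)
pue = 1.58                # data-centre power usage effectiveness (assumed)
kg_co2_per_kwh = 0.4      # grid carbon intensity, kg CO2 per kWh (assumed)

# Total energy drawn from the grid, including data-centre overhead.
energy_kwh = gpu_count * gpu_power_kw * training_hours * pue
co2_kg = energy_kwh * kg_co2_per_kwh

print(f"Estimated energy: {energy_kwh:,.0f} kWh")
print(f"Estimated emissions: {co2_kg:,.0f} kg CO2")
```

Multiply a calculation like this across thousands of experiments and hyperparameter searches and the aviation-scale comparisons above start to look less surprising.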

Whatever directions are taken, the ML community, and the tech industry more generally, are going to have to take their environmental impact much more seriously. ML can play the role of environmental solution, but not at the ever-increasing expense of being an environmental hazard.

…Mike

Contesting the State of AI

Periodically the AI field has entered an “AI Winter” where the dominant paradigm seems to have run its course and researchers look for new options.

Are we entering another AI Winter?

Three recent books suggest not so much renewed stormy weather as a need to broaden perspectives … some looking backward, some merely looking around.

The basic questions raised are simple: Is Deep Learning (the state of the art in machine learning) sufficient? Is it the path towards more intelligent machines (even AGI, artificial general intelligence)?

Stuart Russell. Human Compatible (2019).

Russell is widely known as the co-author of Artificial Intelligence: A Modern Approach (3rd ed. 2009), the definitive textbook in the field. In the past few years he has been exploring the concept of “beneficial AI” and this book further articulates that concept.

“The history of AI has been driven by a single mantra: ‘The more intelligent the better.’ I am convinced that this is a mistake.”

Russell. Human Compatible (2019)

Russell’s concern is that the current path of increasing AI autonomy, fueled by more data, opaque algorithms, and enhanced computing, will lead to a loss of control by humans. His prescription is not as bleak as Bostrom’s Superintelligence (2014): Russell’s solution is a design concept, making intelligent systems defer to human preferences.

Russell has three guiding principles:

  1. The machine’s only objective is to maximize the realization of human preferences.
  2. The machine is initially uncertain about what those preferences are.
  3. The ultimate source of information about human preferences is human behavior.

Putting humans at the center of intelligent machines seems reasonable and certainly desirable. But will it be effective and advance AI?
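
Principles 2 and 3 are, at heart, an inference problem: the machine must learn what people want by watching what they do. Here is a toy sketch (my own construction, not Russell’s formulation) of a machine that starts out uncertain about a human’s preference between two options and updates its belief from observed behavior.

```python
# Toy illustration of principles 2 and 3: the machine begins uncertain
# about which of two options the human prefers, and updates its belief
# from observed choices. The 90% "reliability" figure is an assumption.

# Prior belief: equal probability that the human prefers "A" or "B".
belief = {"A": 0.5, "B": 0.5}

def likelihood(choice, preferred):
    # Assume a human who prefers an option chooses it 90% of the time.
    return 0.9 if choice == preferred else 0.1

def update(belief, observed_choice):
    # Bayes' rule over the two preference hypotheses.
    posterior = {h: p * likelihood(observed_choice, h) for h, p in belief.items()}
    total = sum(posterior.values())
    return {h: p / total for h, p in posterior.items()}

# The machine observes the human's behavior and revises its belief.
for choice in ["A", "A", "B"]:
    belief = update(belief, choice)

print(belief)  # belief has shifted toward a preference for "A"
```

Even in this toy form, the design move is visible: the machine never assumes it already knows what the human wants; it defers to observed behavior.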

Gary Marcus & Ernest Davis. Rebooting AI (2019).

The concern of Marcus (a long-standing and vocal critic of Deep Learning) and Davis is related to Russell’s, but the focus is different: not a control problem but a myopia problem. AI “doesn’t know what it’s talking about”; it doesn’t actually “understand” anything.

“The cure for risky AI is better AI, and the royal road to better AI is through AI that genuinely understands the world.” p. 199

Marcus & Davis. Rebooting AI (2019)

And the way to understand the world is through “common sense”. In part this looks back to the symbolic (logic) representations of GOFAI (“Good Old Fashioned AI”), and in part it is about teaching AI about “time, space, causality, basic knowledge of physical objects and their interactions, basic knowledge of humans and their interactions.” Getting there requires us to train AI the way children learn (an observation Turing made in 1950).

Brian Cantwell Smith. The Promise of AI (2019)

Smith picks up the issue of “understanding the world” and argues that AI must be “in the world” in a more visceral way, “deferring” to the world (reality) as we do. Two key concepts stand out: judgment and ontology.

Judgment: Smith makes the distinction between “reckoning” (which most machine learning systems accomplish: calculation and prediction) and “judgment”, which he views as the essence of intelligence and the missing component in AI.

Ontology: Smith contends that machine learning has “broken ontology.” It has given us a view of the world as more “ineffably dense” than we have ever perceived. The complexity and richness of the world require us to conceptualize the world differently.

The arguments about judgment and ontology converge in a discussion about knowledge representation and point the way for machine learning to transcend its current limitations:

“If we are going to build a system that is itself genuinely intelligent, that knows what it is talking about, we have to build one that is itself deferential – that itself submits to the world it inhabits, and does not merely behave in ways that accord with our human deference.”

Smith. The Promise of AI (2019)

This book celebrates the power of machine learning while lamenting its shortcomings. However:

“I see no principled reason why systems capable of genuine judgment might not someday be synthesized – or anyway may not develop out of synthesized origins.”

Smith. The Promise of AI (2019)

Good books. All worth your time IMHO.

….Mike

Training Datasets, Classification, and the LIS Field

At the core of machine learning are training datasets. These collections, most commonly images, have labels (metadata) describing their contents, and an algorithm uses them to learn how to classify that content. A portion of the dataset is reserved for validation: testing the learned model with new, previously unseen data. If all goes well, the model is then ready to classify entirely new data from the real world.
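
As a minimal sketch of that train/validate workflow, here is a short example using scikit-learn’s small bundled digits dataset as a stand-in for a labelled image collection; the dataset, model, and split sizes are my own illustrative choices, not anything specific to the datasets discussed below.

```python
# Minimal sketch of the train/validate workflow: labelled data in,
# a portion held out for validation, a classifier learned from the rest.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Labelled data: 8x8 pixel digit images (X) and their labels (y).
X, y = load_digits(return_X_y=True)

# Reserve a portion of the dataset for validation: data the model
# never sees during training.
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# Learn a classifier from the labelled training portion.
model = LogisticRegression(max_iter=5000)
model.fit(X_train, y_train)

# Evaluate on the held-out portion. Whatever bias or mislabelling exists
# in the dataset is carried through both training and evaluation.
print("Validation accuracy:", accuracy_score(y_val, model.predict(X_val)))
```

Note that nothing in this workflow questions the labels themselves: if the metadata is biased or wrong, the model learns that too.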

There are many such datasets and they are used repeatedly by AI researchers and developers to build their models.

And therein lies the problem.

Issues with datasets (e.g. lack of organizing principles, biased coding, poor metadata, and little or no quality control) result in models that are trained on those problems and reflect them in their operation.

While over-reliance on common datasets has long been a concern (see Holte, “Very simple classification rules perform well on most commonly used datasets”, Machine Learning, 1993), the issue has received widespread attention because of the work of Kate Crawford and Trevor Paglen. Their report, Excavating AI: The Politics of Images in Machine Learning Training Sets, and their online demonstration tool, ImageNet Roulette (no longer available as of September 27th), identified extraordinary bias, misidentification, racism, and homophobia. Frankly, it will shock you.

Kate Crawford and Trevor Paglen (with their misidentified classifications from ImageNet Roulette)

Calling their work the “archeology of datasets”, Crawford and Paglen uncovered what is well known to the LIS field: all classification is political, social, and contextual. In essence, any classification system is wrong and biased even if it is useful (see Bowker & Star, Sorting Things Out, 1999).

From an LIS perspective, how is ImageNet constructed? What is the epistemological basis, the controlled taxonomy, and the subclasses? Who added the metadata, under what conditions, and with what training and oversight?

ImageNet’s labelling was crowdsourced using Amazon’s Mechanical Turk. Once again, therein lies the problem.

While ImageNet did use the WordNet taxonomy to control classifications, it is not clear how effectively this was managed. The results uncovered by Crawford and Paglen suggest not very effectively. This year many training datasets were taken offline or made unavailable, and many were severely culled (ImageNet will remove 600,000 images). However, these datasets are important; ML relies on them.

Bottom line: the LIS field has extensive expertise and practical experience in creating and managing classification systems and the requisite metadata. We are good at this, we know the pitfalls, and this is a clear and compelling opportunity for LIS researchers and practitioners to be centrally involved in the creation of ML training datasets.

…Mike