Book Reviews:
Technology & Futures

R. Gregory Turner, AIA, LEED AP, MBA, APF

Out of the Mountains

David Kilcullen

Oxford University Press

2015

An Australian military veteran and expert in intelligence and warfare, Kilcullen presents an analysis ostensibly about urban guerrilla warfare, but his perspective is actually much more far-reaching.  Regardless of your domain of interest (e.g., warfare, urban planning, demography), you will want to learn the four forces he identifies as driving the future of life on our planet.  These are population growth, urbanization, littoralization (i.e., the clustering of populations along coastlines), and electronic connectedness.  They provide the context for all else in the book.  As an aside, it is interesting to consider these forces in conjunction with Harvey Cox’s tracing of societal evolution from tribe to town to “technopolis” in The Secular City.

Kilcullen’s chilling account of the 2008 hotel attacks in Mumbai, India, demonstrates how the four forces cited above provide fertile ground for non-state agents to disrupt civilized societies and foster the uncertainty that leads to the contest for “competitive control.”  The theory of competitive control illustrates well why modern nations have such difficulty dealing with non-state actors.  It holds, simply, that the armed actor whom a population sees as best able to establish a “predictable, consistent, wide-spectrum normative system of control” will dominate.  Predictability, not popularity or the content of the rules themselves, is key.  This explains the durability of many dictatorships.

To my interest, Kilcullen’s four forces both illuminate the recent past and allow one to extrapolate a future for urban development across the globe.  Consider littoralization, for one: dense concentrations of population along coastlines increase vulnerability not only to seaborne attacks such as the one at Mumbai, but to extreme weather events and pandemics as well.  How anticipation of such threats shapes cities will be a large issue in the coming decades.

David Kilcullen is an Australian author, strategist, and counterinsurgency expert who is currently the non-executive chairman of Caerus Associates, a strategy and design consulting firm that he founded.  He is a professor at Arizona State University and at the University of New South Wales, Canberra.


What to Expect When You’re Expecting Robots: The Future of Human-Robot Collaboration

Laura Major and Julie Shah

Basic Books

2020

In this optimistic assessment of a human future integrated with robots, Major and Shah present a practical, as opposed to visionary, outlook on living with our machine colleagues.  They are more interested in how we can make such a partnership functional and effective than in either evangelizing for automation or fearing its ascent.

The authors describe three parties to the human-robot society: the robots, the human supervisors or users who control the robots to some degree, and the bystanders.  The last of these groups is mostly us: those who interact with the automatons but don’t really know much about their intelligence or operational capacities.  Each of these groups, however, needs training and enhancement to make the human-robot society work.

Industry and the military provide good examples of how to make human-robot collaboration successful, but only to an extent.  Both applications rely on extensive training and enhanced machine intelligence to work well, but training sufficient to create an effective human-robot society on a wide basis is currently lacking.  “Public” robots, and those in work environments, will need to function in places where they often meet bystanders, circumstances that are chaotic from the machine’s point of view.  Conversely, humans have little idea what responses to expect from robots.  Both must become far more sophisticated than they are at present to achieve efficacy and safety.

Key to effective human-robot collaboration will be data sharing and operational standards, probably through some sort of regulatory agency (similar to the FAA’s oversight of the airline industry).  Also, changes to our physical environment will be necessary to help both humans and robots read cues effectively.  This is the “three-body” issue, so-called because three key things—observability, predictability, and directability—must be designed into robots.  Observability refers to the machine’s ability to see us in a variety of dynamic environments (e.g., on a sidewalk).  Predictability requires robots to understand the likely behaviors of humans during an encounter in such an environment.  Directability means that, when a robot is confused, a system or person must be able to step in to resolve the situation to both human and robot satisfaction.
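
To make the three attributes concrete, here is a minimal sketch of my own (it is not from the book, and it is not any real robotics API; every name in it is hypothetical) of how observability, predictability, and directability might surface in a sidewalk robot’s control logic:

# Hypothetical sketch of the three attributes Major and Shah describe; the
# names and structure are my own illustration, not a real robotics API.
from dataclasses import dataclass
from enum import Enum, auto

class Intent(Enum):
    PROCEED = auto()
    YIELD_TO_HUMAN = auto()
    STOP_AND_WAIT = auto()

@dataclass
class SidewalkRobot:
    confidence: float = 1.0   # the robot's certainty about its surroundings

    def observe(self, scene: list[str]) -> None:
        """Observability: sense humans in a dynamic environment (a sidewalk)."""
        self.confidence = 0.4 if "crowd" in scene else 0.9

    def predict(self, scene: list[str]) -> Intent:
        """Predictability: anticipate likely human behavior in an encounter."""
        return Intent.YIELD_TO_HUMAN if "pedestrian" in scene else Intent.PROCEED

    def direct(self, operator_command: Intent | None = None) -> Intent:
        """Directability: when the robot is confused, let a person step in."""
        if self.confidence < 0.5:
            return operator_command if operator_command else Intent.STOP_AND_WAIT
        return Intent.PROCEED

robot = SidewalkRobot()
robot.observe(["crowd", "pedestrian"])        # a busy sidewalk lowers confidence
print(robot.predict(["pedestrian"]))          # Intent.YIELD_TO_HUMAN
print(robot.direct(Intent.STOP_AND_WAIT))     # a supervisor resolves the confusion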

Compared with others in this domain, Major and Shah see little chance of robots ever superseding humans.  Their view seems to be that robots are limited by their need to be programmed, whereas humans are outfitted with highly adaptive and responsive systems that will probably never be equaled or exceeded by machines.  Robots will always be tools.  Another limitation is robots’ path dependency, in which their programs fit certain data sets so tightly that they cannot deviate when faced with real environments.
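
The point about path dependency is essentially the familiar problem of overfitting.  A toy sketch of my own (using NumPy; it is not from the book) shows how a model that fits its training data exactly can fail badly just outside it:

# Toy illustration (mine, not the authors') of "path dependency": a model
# that fits its training set exactly can go badly wrong just beyond it.
import numpy as np

# Six clean "training" points from one period of a sine wave.
x_train = np.linspace(0.0, 1.0, 6)
y_train = np.sin(2 * np.pi * x_train)

# A degree-5 polynomial threads through all six training points exactly...
coeffs = np.polyfit(x_train, y_train, deg=5)
print(np.polyval(coeffs, x_train).round(3))   # reproduces y_train

# ...but in a "real environment" beyond the data, it deviates wildly.
print(np.polyval(coeffs, 1.5))                # roughly -25
print(np.sin(2 * np.pi * 1.5))                # the true value: ~0.0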

Major and Shah spend a lot of ink discussing the need to develop standards that will govern the interface between robots and humans in the future.  They don’t state this, but I will: there will be competition to establish the standards to which inter-robot and human-robot communications must conform.  Whoever gets there first will rule the world.

Laura Major is the CTO of the Hyundai-Aptiv Autonomous Driving Joint Venture.  Julie Shah is director of the Interactive Robotics Group at MIT.


Common Sense, the Turing Test, and the Quest for Real AI

Hector Levesque

MIT Press

2017

In Common Sense, the author distinguishes between Good Old Fashioned Artificial Intelligence (“GOFAI”) and Adaptive Machine Learning (“AML”).  The former is defined as AI that can show common sense; the latter comprises machines of practical utility.

Learning occurs, and knowledge is gained, through experience or language.  Applying this knowledge to one’s behavior is what we call thinking, and it is thinking that makes the behavior intelligent.  This application of knowledge to behavior, this thinking, is what creates common sense.  Got that?

According to Levesque, the key issue regarding AI is not whether the machine can think; it’s whether it can behave like a thinking person.  Levesque argues that AML cannot achieve human-level intelligent behavior because even with extensive training, it is not prepared to deal with the unanticipated.  Unlike humans, it has no common sense to fall back on when applying its knowledge base.  The key question is, “How is the system going to behave when training fails?”  If all of your expertise derives from sampling, as it does with AML, you may never get to see events that are rare but impactful (long tails).
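
The long-tail point is easy to demonstrate.  In a quick sketch of my own (not from the book), a learner can draw a hundred thousand “training” samples and still, more likely than not, never observe a one-in-a-million event:

# Quick illustration (mine, not Levesque's) of the long-tail problem: a
# learner that only sees samples may never encounter a rare, impactful event.
import random

random.seed(42)
RARE_EVENT_PROB = 1e-6   # a one-in-a-million occurrence

hits = sum(random.random() < RARE_EVENT_PROB for _ in range(100_000))
print(hits)              # very likely 0: the "training" never saw the event

# Analytically, the chance of never seeing the event in n samples is (1 - p)**n.
n, p = 100_000, RARE_EVENT_PROB
print((1 - p) ** n)      # about 0.905: a 90% chance of a permanent blind spot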

In the end, the author warns that we may not want to pay for truly integrated autonomous AI.  We just want some of its features.

Hector Levesque is Professor Emeritus in the Department of Computer Science at the University of Toronto.  His research focuses on knowledge representation and reasoning in artificial intelligence.