Just nine giant tech companies in the U.S. and China are behind the vast majority of advancements in artificial intelligence worldwide. In her new book, The Big Nine: How the Tech Titans and Their Thinking Machines Could Warp Humanity (PublicAffairs, March 5), Amy Webb envisions three possible futures, ranging from optimistic to apocalyptic, that could result from the actions we take–or don’t take–to control the development of A.I. and shape its global impact. In this excerpt, she puts forth a series of tough ethical questions that the humans building A.I. systems should use to guide their work.
The rules–the algorithm–by which every culture, society, and nation lives, and has ever lived, were always created by just a few people. Democracy, communism, socialism, religion, veganism, nativism, colonialism–these are constructs we’ve developed throughout history to help guide our decisions. Even in the best cases, they aren’t future-proof. Technological, social, and economic forces always intervene and cause us to adapt.
The Ten Commandments make up an algorithm intended to create a better society for humans alive roughly 3,000 years ago. One of the commandments is to take a full day of rest each week and to do no work at all that day. In modern times, most people don’t work the same days or hours from week to week, so it would be impossible not to break the rule. As a result, people who follow the Ten Commandments as a guiding principle are flexible in their interpretation, given the realities of longer workdays, soccer practice, and email. Adapting works well for us and for our societies, allowing us to stay on track. Agreeing on a basic set of guidelines allows us to optimize for ourselves.
There would be no way to create a similar set of commandments for A.I. We couldn’t write out all of the rules needed to correctly optimize for humanity, because while thinking machines may be fast and powerful, they lack flexibility. There is no easy way to simulate exceptions, or to try to think through every single contingency in advance. Whatever rules might get written, there would always be a future circumstance in which some people would want to interpret the rules differently, ignore them completely, or create amendments to manage something unforeseen.
Knowing that we cannot possibly write a set of strict commandments to follow, should we instead focus our attention on the humans building the systems? These people–A.I.’s tribes–should be asking themselves uncomfortable questions, beginning with:
- What is our motivation for A.I.? Is it aligned with the best long-term interests of humanity?
- What are our own biases? What ideas, experiences, and values have we failed to include in our tribe? Whom have we overlooked?
- Have we included people unlike ourselves for the purpose of making the future of A.I. better–or have we simply included diversity on our team to meet certain quotas?
- How can we ensure that our behavior is inclusive?
- How are the technological, economic, and social implications of A.I. understood by those involved in its creation?
- What fundamental rights should we have to interrogate the data sets, algorithms, and processes being used to make decisions on our behalf?
- Who gets to define the value of human life? Against what is that value being weighed?
- When and why do those in A.I.’s tribes feel that it’s their responsibility to address the social implications of A.I.?
- Does the leadership of our organization and our A.I. tribes reflect many different kinds of people?
- What role do those commercializing A.I. play in addressing the social implications of A.I.?
- Should we continue to compare A.I. to human thinking, or is it better for us to categorize it as something different?
- Is it OK to build A.I. that recognizes and responds to human emotion?
- Is it OK to make A.I. systems capable of mimicking human emotion, especially if they’re learning from us in real time?
- At what point are we all OK with A.I. evolving without humans directly in the loop?
- Under what circumstances could an A.I. simulate and experience common human emotions? What about pain, loss, and loneliness? Are we OK causing that suffering?
- Are we developing A.I. to seek a deeper understanding of ourselves? Can we use A.I. to help humanity live a more examined life?
There are nine big tech companies–six American and three Chinese–that are overwhelmingly responsible for the future of artificial intelligence. In the U.S., they are Google, Microsoft, Amazon, Facebook, IBM, and Apple (the “G-MAFIA”). In China, they are the BAT: Baidu, Alibaba, and Tencent.
The G-MAFIA has started to address the problem of guiding principles through various research and study groups. Within Microsoft is a team called FATE–for Fairness, Accountability, Transparency, and Ethics in AI. In the wake of the Cambridge Analytica scandal, Facebook launched an ethics team that was developing software to make sure that its A.I. systems avoided bias. (Notably, Facebook did not go so far as to create an ethics board focused on A.I.) DeepMind created an ethics and society team. IBM publishes regularly about ethics and A.I. In the wake of a scandal at Baidu–the search engine prioritized misleading medical claims from a military-run hospital, where a treatment resulted in the death of a 21-year-old student–Baidu CEO Robin Li admitted that employees had made compromises for the sake of Baidu’s earnings growth and promised to focus on ethics in the future.
The Big Nine produces ethics studies and white papers, convenes experts to discuss ethics, and hosts panels about ethics–but those efforts are not intertwined enough with the day-to-day operations of the various teams working on A.I.
The Big Nine’s A.I. systems are increasingly accessing our real-world data to build products that show commercial value. The development cycles are quickening to keep pace with investors’ expectations. We’ve been willing–if unwitting–participants in a future that’s being created hastily and without first answering all those questions. As A.I. systems advance and more of everyday life gets automated, we actually have less and less control over the decisions being made about and for us.
Source: 16 Uncomfortable Questions Everyone Needs to Ask About Artificial Intelligence