How the U.S.-China Power Competition Is Shaping the Future of AI Ethics
October 18, 2018
Highlights
As artificial intelligence applications develop and expand, countries and corporations will diverge over how and when the technology should be employed. First movers like the United States and China will have an advantage in setting international standards.

China will push back against existing Western-led ethical norms as its level of global influence rises and the major powers race to become technologically dominant.

In the future, ethical decisions that prevent the adoption of artificial intelligence in certain fields could limit some countries' political, security and economic advantages.

Controversial new technologies such as automation and artificial intelligence are quickly becoming ubiquitous, prompting ethical questions about their use in both the private and state spheres. A broader shift in the global balance of power will drive the regulations and societal standards that, in turn, shape technological adoption. As countries and corporations race for technological dominance, they will engage in a tug of war between competing sets of values while striving to establish ethical standards. Western values have long dominated the setting of those standards because the United States has traditionally been the world's most influential force in innovation. But China, which has prioritized economic growth and technological development over the past several decades, is likely to play a bigger role in shaping tech ethics in the future.

The field of artificial intelligence will be one of the biggest arenas in which different players work to establish regulatory guardrails and answer ethical questions. Science fiction writer Isaac Asimov wrote his influential laws of robotics in the first half of the 20th century, and reality is now catching up to fiction. The ethical questions surrounding AI and its potential applications are numerous: What constitutes bias within an algorithm? Who owns data? What privacy measures should be employed? And just how much control should humans retain over AI-driven automation? Many of these questions have no easy answer. And as the great power competition between China and the United States ramps up, they prompt a further question: Who is going to answer them?

Questions of right and wrong are rooted in the cultural values of a place. From an economic perspective, the Western ideal has long been the laissez-faire economy. And ethically, Western norms have prioritized privacy and human rights. But China is challenging those norms and ideals, using a powerful state hand to run its economy and often sacrificing privacy in the name of development. Societal trust in technology also varies from place to place, influencing the commercial and military use of artificial intelligence.

Different Approaches to Privacy

One area where countries seeking to set global ethical standards for technology have focused their attention is the use and monetization of personal data. From a scientific perspective, more data generally means better, smarter AI, so those with access to data and a willingness to use it could hold a future advantage. But ethical concerns over data ownership and the privacy of individuals, and even of corporations, can and do limit how data is shared and used.

How various governments are handling the question of data privacy is an early gauge of how far AI applications can go in private and commercial use. It is also a question that reveals a major divergence in values. With its General Data Protection Regulation, which took effect in May 2018, the European Union has taken an early global lead in protecting the rights of individuals. Several U.S. states have passed or are working to pass similar legislation, and the U.S. government is considering an overarching federal policy on individual data privacy rights.

China, on the other hand, has demonstrated a willingness to prioritize the betterment of the state over personal privacy. The Chinese public is generally supportive of initiatives that harness personal data and apply algorithms to it. For example, there has been little domestic objection to a new state-driven initiative that monitors behavior, from purchases to social media activity to travel, and uses AI to assign each person a corresponding "social score." The score translates into a level of "trustworthiness" that grants or denies access to certain privileges. The program, meant to be fully operational by 2020, will track citizens, government officials and businesses. Similarly, facial recognition technology is already in use, though not ubiquitously, throughout the country and is projected to play an increasingly important role in Chinese law enforcement and governance. Such reliance on algorithmic systems would make China one of the first countries to lean so heavily on the decision-making capabilities of computers.

When Ethics Cross Borders and Machine Autonomy Increases

Within a country's borders, the use of AI for domestic security and governance may well raise questions from human rights groups, but those questions are amplified when the technology's use crosses borders and affects international relationships. One example is Google's reported project to develop a censored search app for the Chinese market. By adhering to China's rules and regulations to gain access to its market, Google could be seen as perpetuating the Chinese government's values and views on censorship. The company left China in 2010 over that very issue.

And these current issues are relatively small compared with the questions looming on the horizon. Ever-improving algorithms and applications will soon raise questions about how much autonomy machines "should" have, going far beyond today's credit scores, loans or even social scores. Take automated driving, a seemingly more innocuous application of artificial intelligence and automation. How much control should a human retain while in a vehicle? If no human is involved, who is responsible if and when there is an accident? The answer varies depending on where the question is asked. In societies with greater trust in technology, such as Japan, South Korea or China, removing key components such as steering wheels from cars will likely meet less resistance. In the United States, despite its technological prowess, and even as General Motors petitions regulators to put cars without steering wheels on the road, the current administration appears wary.

Defense, the Human Element and the First Law of Robotics

Closely paraphrased, Asimov's first law of robotics holds that a robot must never harm a human, whether through action or inaction. Asimov was known as a futurist and thinker, and his rule still resonates. In terms of global governance and international policy, decisions over the limits of AI's decision-making power will be vital in determining the future of the military. How much human involvement, after all, should be required in decisions that could result in the loss of human life? Advances in AI will drive the development of remote and asymmetric warfare, forcing the U.S. Department of Defense into ethical decisions prompted by both Silicon Valley and the Chinese government.

At the dawn of the nuclear age, the scientific community questioned the ethics of putting nuclear knowledge to military use. More recently, companies in Silicon Valley have been asking similar questions about whether their technological developments should be used in warfare. Google has been vocal about its objections to working with the U.S. military. After controversy and internal dissent over the company's role in Project Maven, a Pentagon-led effort to incorporate AI into U.S. defense strategy, Google CEO Sundar Pichai penned the company's own rules of AI ethics, which require, much as Asimov intended, that it not develop AI for weaponry or for uses that would cause harm. Pichai also stated that Google would not contribute to the use of AI in surveillance that pushes the boundaries of "internationally accepted norms." Google has since pulled out of bidding for the Pentagon's Joint Enterprise Defense Infrastructure (JEDI) cloud computing contract, and Microsoft employees have issued a public letter objecting to their own company's intent to bid on the same contract. Meanwhile, Amazon CEO Jeff Bezos, whose company remains in the running for JEDI, has bucked the trend, arguing that technology companies must partner with the U.S. military to ensure national security.

There are already certain ethical guidelines in place for integrating AI into military operations. Western militaries, including that of the United States, have pledged to always keep a "human in the loop" in operations involving armed unmanned vehicles, so as to avoid the ethical and legal consequences of AI-driven attacks. But these rules may evolve as the technology improves. The desire for quick decisions, the high cost of human labor and basic efficiency needs are all bound to test countries' commitment to keeping a human in the loop. After all, an AI could function as a non-human commander, conceivably making command-and-control decisions better than any human general could.

Even if the United States abides by these guidelines, other countries, China among them, may have far less motivation to do so. China has already challenged international norms in a number of arenas, including the World Trade Organization, and may well see it as a strategic imperative to employ AI in controversial ways to advance its military might. It's unclear where China will draw the line and how that line will align with Western military norms. But it's relatively certain that if one great power begins implementing cutting-edge technology in controversial ways, the others will be forced to consider whether they are willing to let a competitor set ethical norms.
