Searching for Privacy in the Internet of Bodies
– Eleonore Pauwels and Sarah W. Denton
The pervasive network of sensors that captures data within our homes and cities is in the process of morphing – and that's where the privacy issue really gets personal.
It’s the year 2075 and the newest generation doesn’t remember life before AI. Even more frightening, they don’t know the meaning of personal privacy – at least not in the way their grandparents remember it. Someone is always watching you, whether it be the government, your employer, insurance companies, the bad date you had last week, or some random hacker. Personalized surveillance is just a fact of life now. Nothing lives or dies without being monitored.
In the post-privacy world, your DNA can be collected without your knowledge and your biometrics are available in the Cloudmind. This collective intelligence platform, created by the World Technology Service back in 2030, optimizes every data point and piece of information about you to tailor services from personalized healthcare to crime prevention. Guardians, small drones that track your body and mind, follow you everywhere and stream your face, voice, location, emotions, and thoughts to the Cloudmind in real time. Non-compliance isn’t an option; guardians were mandated by the state back in 2040 to combat human trafficking, banking fraud, identity theft, and a myriad of other problems.
***
Privacy is a concept that has existed – and evolved – for as long as humans have roamed the earth. Indeed, questions about both what is private and what should be private have been asked throughout time, with the answers often updated across eras, cultures, and contexts. This vision of 2075 may or may not come to fruition, but personal privacy is now being questioned on terms unknown to previous generations. Increasingly, a world of devices connected to the internet will work with artificial intelligence to form personal algorithmic avatars of all of us. We may soon be facing a privacy problem that we – literally – can’t keep to ourselves.
Artificial intelligence, commonly referenced in acronym form, is a term that would have sounded entirely self-contradictory before its birth in the 1950s. Even today, many have trouble imagining how a machine could think or learn – abilities that we inextricably associate with living beings. The term generally refers to “the use of digital technology to create systems that are capable of performing tasks commonly thought to require intelligence.” Machine learning, usually considered a subset of AI, describes “the development of digital systems that improve their performance on a given task over time through experience.” Deep neural networks are machine-learning architectures loosely modeled on the way networks of neurons in the brain process information. At its core, what AI does is optimize data: trained on massive amounts of information curated by humans, machine-learning algorithms learn to predict various aspects of our daily lives and reveal hidden insights in the process.
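For readers who want to see that abstraction made concrete, here is a minimal sketch of the train-then-predict loop in Python, using the scikit-learn library. The health-flavored features and “risk” labels are invented purely for illustration; real systems learn from vastly larger and messier datasets.

```python
# A toy illustration of machine learning: a model "improves through
# experience" by being fitted to labeled examples, then predicts labels
# for data it has never seen. The features and labels are invented.
from sklearn.linear_model import LogisticRegression

# Human-curated training examples: [daily_steps, resting_heart_rate]
X_train = [
    [12000, 58], [9500, 62], [3000, 88],
    [2000, 95], [11000, 60], [2500, 90],
]
y_train = [0, 0, 1, 1, 0, 1]  # 0 = "low risk", 1 = "elevated risk" (made up)

# "Training" = estimating parameters from the examples above.
model = LogisticRegression().fit(X_train, y_train)

# "Prediction" = applying what was learned to someone the model never saw.
print(model.predict([[4000, 85]]))        # e.g. [1]
print(model.predict_proba([[4000, 85]]))  # estimated class probabilities
```

The point is not this particular model but the pattern: given enough examples, the system infers a rule that no human ever wrote down explicitly.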
The result? Functional capabilities that were previously unimaginable are now real, upgrading industries from defense and education to medicine and law enforcement. Companies like Zipline are using AI technology in autonomous drones to deliver critical medical supplies to rural hospitals in Africa. Police are leveraging the predictive power of AI to identify crime hotspots and sort through faces in a crowd in real time. AI empowers high-efficiency “smart cities.” It helps businesses minimize waste. And it helps countries wage war. Possibilities abound, both heartening and troubling. To stay on the sunny side for a moment, AI could even become a powerful tool for development, akin to a new technological diplomacy; think of the goodwill engendered when a country or company makes available an image-recognition app that uses AI to help farmers identify diseases that affect their crops. What we are witnessing is just the beginning of the AI revolution.
The Networks of Our Lives
But how, your non-artificially intelligent mind may be wondering, are we able to first collect those mountains of data that feed AI? A primary way is through the Internet of Things, or IoT, a term that future-of-work expert Jacob Morgan describes as “the concept of basically connecting any device with an on and off switch to the internet (and/or to each other).” He continues: “This includes everything from cellphones [to] coffee makers, washing machines, headphones, lamps, wearable devices, and almost anything else you can think of. This also applies to components of machines, for example a jet engine of an airplane or the drill of an oil rig.” In short, IoT refers to a constellation of billions of devices that offer exponentially more data points on how things perform and how the world turns. It’s so large, and so seemingly innocuous, that you are probably not even aware of its existence. But this pervasive network of sensors that captures data within our homes and cities is also in the process of morphing – and here’s where the privacy issue really gets personal.
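Before turning to that, it helps to see how simple the underlying pattern is – which is part of why the network has spread so quietly. As a hedged sketch (the endpoint URL, device identifier, and payload fields below are hypothetical, and a real appliance might speak MQTT or CoAP rather than HTTP), the basic IoT loop looks something like this:

```python
# A sketch of the basic IoT loop: sample a sensor, push a timestamped
# reading to a cloud endpoint, repeat. The endpoint and payload schema
# are hypothetical stand-ins, not any vendor's actual API.
import json
import time
import urllib.request

ENDPOINT = "https://example.com/iot/ingest"  # hypothetical ingestion API

def read_temperature_celsius() -> float:
    """Stand-in for a real sensor driver."""
    return 21.5

while True:
    reading = {
        "device_id": "coffee-maker-0042",  # invented identifier
        "timestamp": time.time(),
        "temperature_c": read_temperature_celsius(),
    }
    req = urllib.request.Request(
        ENDPOINT,
        data=json.dumps(reading).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)  # stream the data point to the cloud
    time.sleep(60)  # one reading per minute, forever
```

Multiply that loop by billions of devices and the mountains of data feeding AI start to make sense.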
Our personal information – the quirks that help define who we are and trace the shape of our lives – is being captured wholesale, and it will increasingly be used for various purposes without our direct knowledge or consent. On an individual level, what this means is that our privacy is receding, and we are being exposed. The evolution of AI is occurring in parallel with technical advances in other fields, such as genomics, epidemiology, and neuroscience. That means not only are your coffee maker and your plane’s engine sending information to the cloud, but so are wearable sensors like Fitbits, intelligent implants, brain-computer interfaces, and even portable DNA sequencers.
When optimized using AI, this trove of data provides information superiority to fuel truly life-saving innovations. Consider research studies conducted by Apple and Google: the former’s Heart Study app “uses data from Apple Watch to identify irregular heart rhythms, including those from potentially serious heart conditions such as atrial fibrillation,” while the Google-powered Project Baseline declares, “We’ve mapped the world. Now let’s map human health.” Never before has our species been equipped to monitor and sift through human behaviors and physiology on such a grand scale. We might call this set of networks the “Internet of Living Things (IoLT),” or the “Internet of Bodies.”
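Neither company has published its methods in full, but the underlying idea – flagging irregular rhythms in streams of wearable data – can be suggested with a toy sketch. The threshold and readings below are invented, a crude stand-in for the clinically validated models such studies actually use:

```python
# A toy stand-in for wearable rhythm screening: flag a window of
# inter-beat intervals (seconds between heartbeats) whose beat-to-beat
# variability exceeds a threshold. Real systems use far more
# sophisticated, clinically validated models; the numbers are invented.
from statistics import pstdev

def is_irregular(intervals: list[float], threshold_s: float = 0.12) -> bool:
    """Flag the window as irregular if the standard deviation of the
    inter-beat intervals exceeds the threshold."""
    return pstdev(intervals) > threshold_s

steady = [0.80, 0.82, 0.79, 0.81, 0.80, 0.78]   # ~75 bpm, regular
erratic = [0.60, 1.10, 0.70, 1.30, 0.55, 0.95]  # highly variable

print(is_irregular(steady))   # False
print(is_irregular(erratic))  # True
```

Even this crude filter hints at why a watch on your wrist can become a medical instrument – and why the data it streams is so intimate.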
There is great promise here, but also great peril, especially when it comes to ownership and control of our most intimate data. When computer code analyzes not only shopping patterns and dating preferences, but our genes, cells, and vital signs, the entire story of you takes its place within an array of fast-growing and increasingly interconnected databases of faces, genomes, biometrics, and behaviors. The digital representation of your characteristic data could help create the world’s largest precision medicine dataset – or it could render everyone more vulnerable to exploitation and intrusion than ever before. What might governments seek to do with such information and capabilities? How might large corporations, using their vast computing and machine-learning platforms, try to commodify these streams of information about humans and ecosystems? Indeed, behavioral and biological features are beginning to acquire a new life on the internet, often with uncertain ownership and an uncertain future.
A New Cyber-Biopower
At the end of the electrifying 1970s in France, Michel Foucault coined the term “biopower” to describe how nation-states rely on an “explosion of numerous and diverse techniques for achieving the subjugation of bodies and the control of populations.” The ongoing digital and AI revolution magnifies his concerns. While we are not entering an Orwellian world or a dystopian episode of Black Mirror just yet, we cannot and should not ignore that the weakening boundary – and the weakening distinction – between “private” and “public” is a reality.
Consider the Chinese students whose pictures and saliva samples have been collected on campus to feed a database of faces and genomes. One Chinese facial recognition software company, CloudWalk, is developing AI technology that tracks individuals’ movements and behavior to assess their chances of committing a crime. Chinese police forces have debuted AI-augmented glasses to identify individuals in real time. Notably, however, Chinese citizens are also beginning to resist such breaches of personal privacy.
There are other examples in which governments and companies are tapping into this Internet of Bodies, sometimes without informed consent or democratic deliberation. The National Institution for Transforming India, also called NITI Aayog, is helping the Indian government to aggregate private and public data on projects ranging from the optimization of agriculture to healthcare. The Indian government has also mandated enrollment in Aadhaar, its country-wide biometric identification database. What India intends to do if and when it applies AI technology to such a database is uncertain. What is certain is that national and international governance structures are not well-equipped to handle the concerns over privacy, ownership, and ethics that are already beginning to emerge.
Could the Internet of Bodies be used toward the “optimization” of the next generation’s biology in line with prescribed ideals? The emergence of at-home genetic- and DNA-sequencing services, such as 23andMe, has spurred complementary services like InsideDNA, a cloud-based platform that features over 1,000 bioinformatics tools. What if governments begin mandating compliance with national genetic testing regimes? What will happen when AI converges with genome-editing technology?
And what of the afterlife in our AI world? Does this realm remain “private”? It turns out that power over our cyber-biological lives does not even vanish with death. Today, an industry flourishes around “deep fakes,” forging speech and images to impersonate individuals – including deceased ones. Think of chat bots that harness people’s social media footprints to become online ghosts. One AI startup, Luka, has launched a program that allows the public to converse with a chatbot modeled on a friend of the company’s co-founder who was killed in a car accident. A recent Oxford study calls for thorough ethical guidelines, arguing that “digital remains should be treated with the same care as physical remains.”
Reserving ethical judgment on a “post-privacy” world, societies as a whole will have to determine to what extent the very concept of privacy can be modernized – so that it does not break, but stretches to accommodate a reality in which it looks very different than it has in the past.
The Privacy-Security Quagmire
As long as individuals and societies have had privacy concerns, they have had security concerns, too. While state-sponsored cyberattacks and cyber-espionage are not new, the proliferation of internet-connected devices, the growth of the Internet of Bodies, and current and future AI technology will only exacerbate existing cybersecurity vulnerabilities. These vulnerabilities are highly complex, often poorly understood, and therefore often neglected. What if skilled hackers were able to penetrate smart-city technologies, DNA databases, or perhaps neural imaging databases? These would be the holy grail of population-level datasets, encompassing everything from our daily activities to our mental and physical health statuses.
Equally troubling, rising tech platforms are often our last line of defense to ensure the security of the massive, precious datasets that fuel our e-commerce, and soon, our smart cities and much more. That is, the same multinationals that reign over data and its liquidity are also charged with cybersecurity – creating potential conflicts of interest on a global scale. The revelation that the private data of about 87 million Facebook users was harvested by Cambridge Analytica, a consulting firm that worked for the Trump campaign, has fueled new levels of public anxiety about the ability of tech giants to exploit our personal information. That tension, of course, comes with the fact that the private tech sector is also enabling most of the positive benefits that AI can and will usher in for individuals and societies, from helping to predict natural disasters to finding new warning signs for disease outbreaks. Thinking about how to ensure data liquidity and security will become ever more important as governments aim to reap such benefits.
Whether the government in question is totalitarian or democratic through and through, security risks alone should be cause for action – even where concern for the privacy and civil liberties of citizens is not. As you might imagine, AI- and data-governance around the world is a highly variable landscape, and in many instances, regulation is seriously lagging behind technological advances.
In collaboration with the European Commission, a few nations in Europe – led by France, the UK, and Estonia – are currently delineating their positions on the intersection of AI with data privacy, liquidity, and security. In April, 25 European countries pledged to join forces to shape a “European approach” to AI, clearly in an effort to compete with American and Asian tech platforms. At the same time, the EU’s new regulatory approach to data privacy and security, the General Data Protection Regulation (GDPR), is intended to “protect and empower all EU citizens’ data privacy and to reshape the way organizations across the region approach data privacy.” Under the GDPR, personal data may be collected and processed only on a valid legal basis – most prominently, the informed consent of the individual. Meanwhile, some argue that the GDPR could hinder economic competition and innovation.
While the EU has so far taken a broader, preventive approach to the misuse or unauthorized use of personal data, the U.S. has relied on responsive enforcement and industry self-regulation in the absence of any statute that holistically covers the subject. Existing U.S. privacy law has not been updated to reflect the realities of AI. The U.S. and China both remain largely silent in this area, seeking to protect their competitive advantage in technological innovation. Ultimately, new developments in data-sharing and optimization will challenge both approaches and will require new tools to address the resulting privacy issues.
In China, which is increasingly challenging the U.S. position at the front of the AI pack, the state is implementing AI technologies for surveillance and customer service with particular rapidity. Beijing recently passed a cybersecurity law that calls for storing data inside the country, ostensibly to protect internet-connected devices from security threats; but the law creates headaches for foreign tech companies, and its privacy protections lag far behind the pace of innovation.
Transparency and Courage
In a stirring new op-ed – on AI, of all subjects – Henry Kissinger writes that the contemporary world order “is now in upheaval amid a new, even more sweeping technological revolution whose consequences we have failed to fully reckon with, and whose culmination may be a world relying on machines powered by data and algorithms and ungoverned by ethical or philosophical norms.” Competing interests and abundant complexities notwithstanding – or, rather, because of them – now is the time to collectively define a responsible way to govern AI and data-optimization within our democracies.
Future conflicts over who owns, steals, or benefits from genetic secrets have to be balanced by open-source efforts to ensure that data and this new generation of technological tools primarily serve the public good. An international network of technology leaders, ethicists, policymakers, members of civil society, and writers and artists, too, needs to come together and articulate a set of globally applicable policies and norms that protect basic human and civil rights in the algorithmic age. They also need to be transparent and courageous, explaining to the public how AI and the Internet of Bodies are transforming our privacy and ourselves. Only then will we be able to determine how to design AI and related technologies in a socially responsible and sustainable manner.
***
Eleonore Pauwels (@AI_RRI_Ethics) is director of the AI Lab within the Science and Technology Innovation Program at the Wilson Center. Her research focuses on the governance and democratization of converging technologies and how AI, genome-editing, and cyber-biosecurity raise opportunities and challenges across sectors.
Sarah W. Denton (@realthinkHer) is a research assistant in the Science and Technology Innovation Program at the Wilson Center and a research fellow at the Institute for Philosophy and Public Policy at George Mason University.
Cover photo: A bioengineer in Silicon Valley uses AI software to classify and analyze clusters of genomic data. (Courtesy of Eleonore Pauwels)