The AI Inventions That Threaten Our Rights
Generative, Biometric and Military Applications Need Boundaries Now
(Part I)
While climate change is still number one on the list of issues we must grapple with this year for the sake of human rights and humanity itself, there are others where the tipping point is nigh.
Almost all of them involve new technologies and particularly artificial intelligence, another specialist field that generalist human rights groups are racing to catch up on. We’re plenty apprehensive about these technologies, but why say they are near a tipping point?
The growth of the internet, the movement of the world’s economies, utilities and communications onto digital platforms, and the rise of social media are all monumental shifts in human social relations. Anyone paying attention to technological changes in the twenty-first century has experienced euphoria (Arab Spring! Wikipedia! Zoom!) and despair (cell phone addiction! algorithmic censorship! the destruction of privacy!). New developments overwhelm us as successive waves of technological innovations outpace our ability to master them.
Ground-breaking technology almost always outpaces social regulation, at least for a while, and in that volcanic interim period it tends to find its best and worst applications. Think of nuclear power. If we are lucky, we eventually scare ourselves into regulation, rather than have some Great Disaster force it. In the twenty-first century, though, the pace of technological discovery has picked up radically, leaving large regulatory gaps, enormous concentrations of corporate power, and great uncertainty for the rest of us. Some of these developments pose high risks for rights, and we can save ourselves much disaster if we regulate soon, rather than depend on laissez-faire market management.
Which types of technology merit the focus and investment in expertise by activist groups? Probably not the rise of new weight-loss drugs, surveillance doorbells, AI hummingbird feeders or self-driving baby strollers, however much these might cause alarm and make the news.
The rise of generative AI, however, is a current development that deserves deep scrutiny and regulation, given that its potential to super-charge disinformation and hate speech promises to aggravate an already serious crisis of epistemic integrity and social trust. This highly imperfect technology is being integrated into every conceivable application with very weak regard for ethics or legal boundaries, giving even its creators qualms.
The unregulated capture and use of biometric and behavioral data is another high-risk development for human rights, as it can be easily weaponized in a wide variety of ways: for social surveillance and tracking, for censorship, for fraud, and even potentially for thought-control. And as usual, the experiments on these technologies are being conducted on those with the least power to object, and the applications are being sold as important for consumer convenience, fraud prevention and child protection.
Finally, the incorporation of AI into means of warfare is another frontier that requires global advocacy, as it threatens a world where human control (and accountability) for violence is increasingly attenuated. While it is often sold as a way to protect the lives of combatants and conduct strikes remotely with precision, there is little framework for evaluating and controlling its effects on civilian death and destruction.
The problem with each of these three sets of rapidly evolving technologies is that they promise astounding benefits as well as serious harms. That makes it difficult to produce a unified position, because a ban — or even stiff regulation — could mean giving up something important, either right now or in the near future.
Yet the regulation of technology takes such trade-offs into account all the time. Often the argument is made in terms of not letting regulation stifle future development or market competitiveness. This argument has some merit, but it is often overstated. Social planning that directs technology towards, and not against, human values can yield a much safer world and still produce innovation. Not to be flippant, but there is a reason it’s so interesting to watch cooking shows where the contestants are asked to invent gourmet meals out of gumdrops, kale and sardines — the human ability to innovate around boundaries is pretty boundless.
As these new AI applications unfold — generative AI, biometric data use and automated warfare — it has become quite visible how dire the consequences of an unregulated market could be. Any group that is serious about human rights needs to take them on, and quickly, before dangerous practices become too deeply entrenched to question.
More later — For now, I need to ask ChatGPT a few questions.
Dinah PoKempner is a bar-registered, accomplished, and published expert in international law, human rights, and organizational management. Read more of Dinah’s work on Twitter, LinkedIn, and DinahPoKempner.com