In Steven Spielberg’s 2001 movie, A.I. Artificial Intelligence, scientists program a robotic boy to understand and express a full range of human emotions, including love. As a test case, the boy is adopted into a family, where he learns to connect with the couple who become his parents. After a series of unexpected events, the family’s living arrangement becomes unsustainable. The mother begins to fear the boy and abandons him in the woods, consigning him to an uncertain fate. The boy sets out to navigate a complex world where he’s neither fully human nor fully machine.
Fast forward thousands of years to a time when alien life forms have arrived on planet Earth. Here, they discover the body of the robotic boy at the bottom of a frozen river and seek to reverse engineer his design. This quasi-human creation is their only connection to the Earthling inhabitants who preceded them, and they wish to understand his emotions. He was programmed by humans, they reason, so traces of their humanness still exist within his code.
Beyond the film’s impressive special effects, its evocative music, and the spectrum of feelings it inspires, this movie also teaches a lesson: software bears the marks of the people who write the code. The assumptions, biases, and predetermined social perspectives that we possess get baked into the algorithms, creating smart machines that lack the objectivity we expect them to exhibit. They inherit our prejudices and act accordingly. Nowhere is this being discussed more widely, it seems, than in the application of AI to the law. The articles listed here, found in popular magazines and journals, describe various ways that AI is being used — and misused — to predict crime, sentence offenders, and determine the likelihood of criminal recidivism. They also explore the limits of AI, the ethics of using AI to mete out justice, and the regulations that some are proposing to counteract the harmful effects of machine bias.
Artificial Intelligence is Now Used to Predict Crime. But is it Biased? (Smithsonian)
Can Crime Be Predicted by an Algorithm? from Hello World by Hannah Fry (Penguin)
Bias Detectives: The Researchers Striving to Make Algorithms Fair (Nature)
Machine Bias: Risk Assessments in Criminal Sentencing (ProPublica)
We Need an FDA for Algorithms (Nautilus)
AI Research is in Desperate Need of an Ethical Watchdog (Wired)
One State’s Bail Reform Exposes the Promise and Pitfalls of Tech-Driven Justice (Wired)
Courts Are Using AI to Sentence Criminals. That Must Stop Now. (Wired)
Management AI: Bias, Criminal Recidivism, And the Promise of Machine Learning (Forbes)
Trust but Verify: A Guide to Algorithms and the Law (Harvard Journal of Law & Technology)
[VIDEO] The Truth About Algorithms (Aeon)