With the advent of new technologies, there are always periods of adjustment, issues to be addressed, and bugs to be fixed, even after the technologies have undergone rigorous testing. Artificial intelligence (AI) is the “new kid on the block” when it comes to technology, but it is quickly entrenching itself in our everyday lives. ChatGPT has become de rigueur, and AI-generated results are appearing with increasing frequency in Google search results. While AI has the capability to make our lives more efficient, minimize human error, and better analyze data, this increased productivity and efficiency comes at a cost to the environment, and that impact is especially felt here in Texas.
Take It Down: AI & Deepfakes
Last week, Senator Ted Cruz (R-Texas) and several bipartisan colleagues introduced a bill to protect victims of cyberbullying and “revenge porn,” also known as non-consensual intimate imagery. The bill, the Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks Act, or TAKE IT DOWN Act, would, if passed, require a “covered platform,” such as a website, online service, or mobile app, to establish a removal process and to take down the imagery and any copies of it within 48 hours of a valid request from the identifiable individual.
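For the platforms themselves, the operative requirement is mechanical: a valid removal request starts a 48-hour clock. As a minimal sketch only, assuming a hypothetical `TakedownRequest` record (nothing here is prescribed by the bill itself), a compliance check might look like this:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# The 48-hour removal window described in the TAKE IT DOWN Act.
REMOVAL_WINDOW = timedelta(hours=48)

@dataclass
class TakedownRequest:
    """Hypothetical record of a valid request from an identifiable individual."""
    content_id: str
    received_at: datetime
    removed_at: datetime | None = None

    @property
    def deadline(self) -> datetime:
        # The clock starts when the valid request is received.
        return self.received_at + REMOVAL_WINDOW

    def is_overdue(self, now: datetime | None = None) -> bool:
        now = now or datetime.now(timezone.utc)
        return self.removed_at is None and now > self.deadline

# Example: a request received 49 hours ago and still unresolved is overdue.
request = TakedownRequest("img-123", datetime.now(timezone.utc) - timedelta(hours=49))
print(request.is_overdue())  # True
```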
ChatGPT, AI, and the Spread of Misinformation
ChatGPT and AI have been hot topics in the legal field of late. Legal research providers such as Lexis and Westlaw have invested millions of dollars in this emerging technology. While both companies plan to combine their legal databases with AI technology, these features have not yet been released to the public; they are still being tested and refined for use by legal professionals.
Concerns Swirl Around Facial Recognition Technology
A couple of years ago, we wrote about privacy issues surrounding emerging facial recognition technologies. In the intervening 700 or so days, the conversation has shifted dramatically. With political upheaval and a renewed commitment to racial justice emerging across the nation, the conversation around facial recognition artificial intelligence has taken on a sense of urgency.
Many people are likely familiar with basic facial recognition AI through their phones. For example, Facebook may alert you to a photo that includes you, uploaded without your knowledge by a friend, through an automated system that asks “Is this you?” Another example is Google Photos, which automatically builds albums of friends, family members, and pets, allowing you to identify them by name for easy searching.
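Under the hood, consumer features like these typically reduce each detected face to a numeric “embedding” and compare distances between embeddings. As a minimal sketch, using the open-source face_recognition library (one popular implementation, not necessarily what Facebook or Google actually use, and with hypothetical file names), matching works roughly like this:

```python
import face_recognition

# Reduce the face in each image to a 128-dimensional embedding.
known_image = face_recognition.load_image_file("you.jpg")
unknown_image = face_recognition.load_image_file("friends_upload.jpg")

known_encoding = face_recognition.face_encodings(known_image)[0]

for encoding in face_recognition.face_encodings(unknown_image):
    # Faces "match" when their embeddings are closer than a distance
    # threshold (the library's default tolerance is 0.6).
    match = face_recognition.compare_faces([known_encoding], encoding)[0]
    distance = face_recognition.face_distance([known_encoding], encoding)[0]
    print(f"Is this you? {match} (distance: {distance:.2f})")
```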
However, the most lucrative markets for this type of tech are law enforcement and defense. In Detroit, the police have recently come under fire for two known instances in which Black men were arrested for crimes they did not commit on the basis of the department’s facial recognition AI. The Detroit Chief of Police acknowledged the software has an incredible 96% false identification rate, which for some has raised questions about the software’s value to the community. The Detroit Police Department has promised to draft a policy governing the use of this tech, which is produced by the company DataWorksPlus. In the meantime, a Congressional inquiry has been launched to examine the two facial recognition programs produced by DataWorksPlus, which are used by law enforcement in at least five states.
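To put that figure in perspective: a 96% false identification rate means that only about 4 of every 100 matches the software returns point to the right person. A few lines of arithmetic, with an assumed search volume used purely for illustration, show what that looks like at scale:

```python
# Illustrative only: the search volume is assumed, not a Detroit figure.
searches_returning_a_match = 1_000   # hypothetical annual volume
false_identification_rate = 0.96     # figure acknowledged by Detroit's chief of police

correct_matches = searches_returning_a_match * (1 - false_identification_rate)
false_matches = searches_returning_a_match * false_identification_rate

print(f"Correct identifications: {correct_matches:.0f}")  # 40
print(f"False identifications:  {false_matches:.0f}")     # 960
```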
Tech companies working to produce this type of software are coming under pressure to halt its sale and production, not just from Congress and justice reform advocates, but from their own employees. One example is IBM, which has removed its general-purpose facial recognition offerings from the market and is urging other companies to do the same.
Arguments against the use of facial recognition technology by government entities, including law enforcement, have previously focused on the inaccuracy of such tech. As we see from Detroit, that remains an issue. However, as this type of AI improves, concern has increasingly shifted toward the awesome power of accurate facial recognition tech and its ability to obliterate privacy. As a result, some local jurisdictions have begun to specifically outlaw the use of facial recognition tech. These municipalities are mostly cities in California and Massachusetts, including San Francisco and Boston, but now also include Portland, Maine.
One advancement is that facial recognition AI increasingly focuses on the space immediately around the eyes, so that would-be law-breakers and other evil-doers will have a harder time hiding their identities. It also means that wearing a mask while you’re shopping might not stop corporate security from identifying you.
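In code terms, this means restricting the match to a periocular crop. As a rough sketch, again using the face_recognition library (the image file name is hypothetical and the padding value is an arbitrary choice for illustration), isolating the eye region of a detected face might look like this:

```python
import face_recognition
from PIL import Image

image = face_recognition.load_image_file("masked_shopper.jpg")  # hypothetical file

# face_landmarks() returns named point lists, including 'left_eye' and 'right_eye'.
for landmarks in face_recognition.face_landmarks(image):
    eye_points = landmarks["left_eye"] + landmarks["right_eye"]
    xs = [x for x, y in eye_points]
    ys = [y for x, y in eye_points]

    pad = 20  # arbitrary margin around the eyes, in pixels
    periocular = Image.fromarray(image).crop(
        (min(xs) - pad, min(ys) - pad, max(xs) + pad, max(ys) + pad)
    )
    periocular.save("periocular_region.jpg")  # the part of the face a mask doesn't cover
```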
Robot Justice
In Steven Spielberg’s 2001 movie, AI Artificial Intelligence, scientists program a robotic boy to understand and express a full range of human emotions, including love. The boy is adopted into a family as a test case and learns to connect with the couple who become his parents. After a series of unexpected events, the family’s living arrangement becomes unsustainable. The mother begins to fear the boy and abandons him in the woods, consigning him to an uncertain fate. The boy sets out to navigate a complex world where he’s neither fully human nor fully machine.
Fast forward thousands of years to a time when alien life forms have arrived on planet Earth. Here, they discover the body of the robotic boy at the bottom of a frozen river and seek to reverse engineer his design. This quasi-human creation is their only connection to the Earthling inhabitants who preceded them, and they wish to understand his emotions. He was programmed by humans, they reason, so traces of their humanness still exist within his code.
In addition to the film’s impressive special effects, its evocative music, and the spectrum of feelings it inspires, this movie also teaches a lesson: software bears the marks of the people who write the code. All of the assumptions, biases, and predetermined social perspectives that we possess get baked into the algorithms, creating smart machines that lack the objectivity we expect them to exhibit. They inherit our prejudices and act accordingly (a toy simulation following the reading list below illustrates how this can happen). Nowhere is this being discussed more widely, it seems, than in the application of AI to the law. The articles listed here, found in popular magazines and journals, describe various ways that AI is being used, and misused, to predict crime, sentence offenders, and determine the likelihood of criminal recidivism. They also explore the limits of AI, the ethics of using AI to mete out justice, and the regulations that some are proposing to counteract the harmful effects of machine bias.
Artificial Intelligence is Now Used to Predict Crime. But is it Biased? (Smithsonian)
Can Crime Be Predicted by an Algorithm? from Hello World by Hannah Fry (Penguin)
Bias Detectives: The Researchers Striving to Make Algorithms Fair (Nature)
Machine Bias: Risk Assessments in Criminal Sentencing (ProPublica)
We Need an FDA for Algorithms (Nautilus)
AI Research is in Desperate Need of an Ethical Watchdog (Wired)
One State’s Bail Reform Exposes the Promise and Pitfalls of Tech-Driven Justice (Wired)
Courts Are Using AI to Sentence Criminals. That Must Stop Now. (Wired)
Management AI: Bias, Criminal Recidivism, And the Promise of Machine Learning (Forbes)
Trust but Verify: A Guide to Algorithms and the Law (Harvard Journal of Law & Technology)
[VIDEO] The Truth About Algorithms (Aeon)
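To make the “baked-in bias” point concrete, here is a deliberately toy simulation in the spirit of the articles above. All of the data is synthetic and every variable name is invented: two groups reoffend at exactly the same rate, but one group is arrested more often for the same behavior, and a model trained on arrest counts inherits that skew.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic population: a group label and a true outcome that is
# identical across groups (everyone reoffends at a 30% base rate).
group = rng.integers(0, 2, n)
reoffends = rng.random(n) < 0.30

# "Prior arrests" is the model's only input. If group 1 is policed more
# heavily, its arrest counts are inflated regardless of behavior; the
# human bias is baked into the training data itself.
prior_arrests = rng.poisson(lam=1 + 2 * reoffends + 1.5 * group)

model = LogisticRegression().fit(prior_arrests.reshape(-1, 1), reoffends)
flagged_high_risk = model.predict(prior_arrests.reshape(-1, 1))

# False positive rate by group: people who did NOT reoffend
# but were flagged as high-risk anyway.
for g in (0, 1):
    innocent = (group == g) & ~reoffends
    print(f"Group {g} false positive rate: {flagged_high_risk[innocent].mean():.1%}")
```

By construction the two groups are equally likely to reoffend, yet the model’s mistaken “high-risk” flags fall far more often on the more heavily policed group, which is essentially the pattern ProPublica documented in its Machine Bias investigation.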