The striking of the moratorium on state AI regulation is good news for many states, including Texas, that have already enacted laws to police AI. In June, Governor Greg Abbott signed into law the Texas Responsible Artificial Intelligence Governance Act (Act). The Act, passed as HB 149, establishes regulations for the use of AI within Texas. It was designed to facilitate and advance “responsible development of [AI] systems,” protect individuals from foreseeable risks associated with AI, and create transparency in the development or use of AI, including such use by state agencies.
Is AI an Acronym for Adverse Impact?
With the advent of new technologies, there are always periods of adjustment, issues that need to be addressed, and bugs to be fixed, even after the technologies have undergone rigorous testing. Artificial intelligence (AI) is the “new kid on the block” when it comes to technology, but it is quickly entrenching itself in our everyday lives. ChatGPT has become de rigueur, and AI-generated results are appearing with more frequency in Google search results. While AI has the capability to make our lives more efficient, minimize human errors, and better analyze data, this increased productivity and efficiency comes at a price: the environment. That impact is especially felt here in Texas.
Take It Down: AI & Deepfakes
Last week, Senator Ted Cruz (R-Texas) and several bipartisan colleagues introduced a bill to protect victims of cyberbullying and “revenge porn,” also known as non-consensual intimate imagery. The bill, the Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks Act, or TAKE IT DOWN Act, would, if passed, require a “covered platform,” such as a website, online service, or mobile app, to establish procedures for removing such imagery and to take down the imagery and any copies within 48 hours of a valid request from the identifiable individual.
ChatGPT, AI, and the Spread of Misinformation
ChatGPT and AI have been hot topics in the legal field lately. Many legal research databases, such as Lexis and Westlaw, have invested millions of dollars in this new and emerging technology. Although both companies plan to integrate AI into their legal databases, these features have not yet been released to the public; they are still being tested and refined for use by legal professionals.
Concerns Swirl Around Facial Recognition Technology
The author takes a lot of photos of her dog, and Google knows.
A couple of years ago, we wrote about privacy issues surrounding emerging facial recognition technologies. In the intervening 700 or so days, the conversation has shifted dramatically. With political upheaval and a renewed commitment to racial justice emerging across the nation, the conversation around facial recognition artificial intelligence has taken on a sense of urgency.
Many people are likely familiar with basic facial recognition AI through their phones. For example, Facebook may alert you to a photo that includes you, uploaded without your knowledge by a friend, through an automated system that asks “Is this you?” Another example is Google Photos, which automatically builds albums of friends, family members, and pets, allowing you to identify them by name for easy searching.
However, the most lucrative markets for this type of tech are law enforcement and defense. In Detroit, the police have recently come under fire for two known instances of Black men being arrested, on the basis of the department’s facial recognition AI, for crimes they didn’t commit. The Detroit Chief of Police acknowledged the software has an incredible 96% false identification rate, which for some has raised questions about the software’s value to the community. The Detroit Police Department has promised to draft a policy on the use of this tech, which is produced by the company DataWorksPlus. In the meantime, a Congressional inquiry has been launched to examine the two facial recognition programs produced by DataWorksPlus, which are used by law enforcement in at least five states.
Tech companies working to produce this type of software are coming under pressure to stop its sale and production, not just by Congress or justice reform advocates, but by their own employees. One example is IBM, which has removed its general purpose facial recognition offerings from the market, and is urging other companies to do the same.
Arguments against the use of facial recognition technology by government entities, including law enforcement, have previously focused on the inaccuracy of such tech. As we see from Detroit, that remains an issue. However, as this type of AI improves, concern has increasingly shifted toward the awesome power of accurate facial recognition tech and its ability to obliterate privacy. As a result, some local jurisdictions have begun to specifically outlaw the use of facial recognition tech. These municipalities are mostly cities in California and Massachusetts, including San Francisco and Boston, but now also include Portland, Maine.
One advancement is that facial recognition AI increasingly focuses on the area immediately around the eyes, so would-be law-breakers and other evil-doers will find it harder to hide their identities. It also means that wearing a mask while you’re shopping might not stop corporate security from identifying you.