Beware of identity fraud with generative AI

Although some companies and organizations have called for a complete ban or pause on AI, that is no longer realistic: applications and software products are rapidly integrating AI, in part to preempt regulation. Meta and Google, for example, recently added AI to their flagship products, to mixed reviews from customers. Instead of trying to ban or slow down emerging technologies, Congress should pass a federal Biometric Information Privacy Act, or BIPA, to ensure that the misuse of a person’s identity with the help of generative AI is punishable by law.

Recently, business executives and school principals have been impersonated using generative AI, leading to scandals involving non-consensual intimate images, sexual harassment, blackmail and financial scams. When used in scams and hoaxes, generative AI provides an incredible advantage to cybercriminals, who often combine it with social engineering techniques to enhance the ruse.

For example, before the New Hampshire primary earlier this year, some residents received a robocall in which President Biden appeared to urge them not to vote in the state’s presidential primary and to “save” their vote for November. In fact, Biden’s voice had been cloned. In his 2024 State of the Union address, Biden called for a ban on AI voice impersonation.

Apps and software that can create convincing audio and video impersonations, like Snapchat’s face-swapping feature, are already available on your smartphone and computer. Drawing on vast datasets available online, applications powered by generative AI let users create original content without the expensive equipment, professional actors or musicians once required for such production. And like most technology designed for the general public, it’s fun to play with. But that enthusiasm quickly grew into an entire industry of deepfake pornography featuring celebrities and private individuals alike.

In 2019, and again in 2021, Representative Yvette Clarke of New York introduced the Deep Fakes Accountability Act, which would update fraud and false-claims laws and increase penalties to cover audio and visual impersonation created with generative AI. In Massachusetts, Representative Dylan Fernandes of Falmouth has championed a BIPA-style bill that lawmakers are now considering as part of a broader data privacy law.

Generative AI represents a worrying intersection of advanced technology and privacy violations. New York Times technology reporter Kashmir Hill recently tracked the rise of new biometric products in her best-selling book, “Your Face Belongs to Us.” Tech companies like Clearview AI and Palantir use facial recognition algorithms that have drawn widespread criticism for indiscriminately scraping billions of images from social media platforms and other online sources without users’ consent, she wrote. By compiling these vast databases of people, facial recognition companies enable their customers, including law enforcement, to carry out mass surveillance and identify individuals without their knowledge or permission.

This uncontrolled access to personal data raises serious ethical questions about privacy, consent and the risk of abuse. Additionally, the lack of transparency around generative AI models, and companies’ refusal to disclose what types of data are stored and how they are shared, puts individual rights and national security at risk. Without strict regulation, widespread public adoption of this technology threatens civil liberties and is already enabling new cybercrime tactics, including impersonating colleagues during real-time video conferences.

These issues highlight the urgent need for comprehensive privacy legislation in the digital age. Regulation meant to stop the spread of generative AI outright, however, is futile. Just as the federal government does not ban 3D printers because users can make 3D-printed weapons, Congress should address the inappropriate use of this emerging technology by requiring active consent.

BIPA originated in Illinois in 2008 and protects residents’ biometric data from being incorporated into databases without affirmative consent. The law also penalizes companies for each individual violation. In a landmark class action filed in 2015, Facebook agreed in 2021 to pay $650 million to settle claims that it had captured users’ facial prints and automatically tagged them in photos. As part of the settlement, each affected Illinois resident received at least $345.

Biometric data, such as fingerprints, facial scans, voiceprints, iris scans and even DNA, uniquely identifies individuals. Biometric data sometimes serves as a password for access to sensitive information and spaces. Biometric identification on cell phones and computers, such as face ID to unlock a phone, fingerprint scans to unlock a laptop, or palm scans to pay at Whole Foods, is now ubiquitous, but it remains controversial among privacy advocates. Although biometric technology can provide convenience and increased security, its misuse or mishandling can result in serious violations of privacy and personal autonomy.

Unlike passwords or PINs, which can be changed if compromised, biometric data is inherent to an individual and cannot be changed. Additionally, the collection and storage of biometric data by multinational technology companies, like Palantir, raises concerns about surveillance and possible misuse of that data by governments, corporations, or malicious actors. Mass surveillance and the compilation of detailed profiles of individuals without their consent could lead to discrimination, identity theft, or even a surveillance state.

Furthermore, the confidentiality of biometric information is essential to maintaining individual autonomy and freedom of expression. Without adequate protection, individuals may feel pressured to give up their biometric data in various contexts, compromising their ability to control their personal information and make informed decisions about its use. And instead of forcing you to track down every company that might have used your data in order to “opt out,” BIPA requires active opt-in.

As generative AI technology continues to advance, threats to personal identity will become increasingly sophisticated, requiring robust protection, education, and detection mechanisms to mitigate the damage. The privacy of biometric information is crucial to protecting fundamental rights in the digital age. Governments, businesses and technology developers must prioritize privacy regulations, transparent data practices and secure storage systems to ensure that biometric data is used responsibly and ethically, respecting individuals’ rights to privacy and personal autonomy.

It’s also possible that these generative AI systems will prove more problematic than profitable, raising the question of which types of emerging technologies harm personal and public safety.

Joan Donovan is an assistant professor of journalism and emerging media studies at Boston University and founder of the Critical Internet Studies Institute.