Deep Problems of GenAI

Generative artificial intelligence is one of the biggest technology trends of 2024. Naturally, every new technology introduces challenges and downsides alongside its praised advantages. Vgency analyses the risks of GenAI in 2024.

Deepfake

Years before Vgency was founded in 2020, our team members had already recognized fraudulent deepfakes as one of the most dangerous aspects of GenAI. Our first public articles from 2021 are more relevant than ever.

An Introduction to Deepfake
Deepfake is a fascinating technology with both good and dangerous use cases. Whichever aspects come to dominate, it will likely become a big hype and demonstrates the importance of regulating AI.
Voice Deepfakes Can Be Worse
While video deepfakes are fascinating and effective, voice deepfakes should not be underestimated. At first glance, audio seems to be just another part of the video it comes along with. It often turns out that audio and sound are the more complex aspects of audiovisual media.
The Charm and Harm of Deepfake
The term deepfake has a negative association, like fake news or fake media. There is an important difference: you don't need AI to create fake news. Everyone can make something up and spread false information verbally, as graffiti, or with pen and paper. Deepfake tools are available to everyone.

Some years ago, creating high-quality deepfakes required special software, a good amount of training data, and knowledgeable experts. Back then, we compared the deepfake creation process to professional video production, which involved a significant amount of manual editing. Synthetic voice creation was so challenging that it often made more sense to hire a human voice imitator.

All this has changed in only a few years. Synthetic voice creation is easier than ever. One emerging trend is the creation of authentic translations that use the same natural-sounding voice across different languages. The downside of this technology is malicious voice deepfakes, which are meanwhile even easier to create than image and video deepfakes.
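
How low the barrier has become is best shown in code: with open-source voice-cloning models such as Coqui's XTTS, a few seconds of reference audio are enough. Below is a minimal sketch of the legitimate translation use case; the model name, file paths, and the Coqui TTS package are illustrative assumptions, not tools used in the cases discussed in this article:

```python
# Minimal voice-cloning sketch with the open-source Coqui TTS package
# (pip install TTS). Model name and file paths are illustrative.
from TTS.api import TTS

# Load a multilingual voice-cloning model (XTTS v2).
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

# Clone the voice from a short reference clip and speak a German
# translation with that same voice -- the "authentic translation" use case.
tts.tts_to_file(
    text="Guten Tag, hier spricht Ihre vertraute Stimme.",
    speaker_wav="reference_clip.wav",  # a few seconds of the target voice
    language="de",
    file_path="translated_output.wav",
)
```

The exact same pipeline, pointed at a scraped voice sample instead of a consenting speaker, is what makes malicious voice deepfakes so accessible.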

In general, less data is needed to generate realistic AI content faster and more easily than ever before. This means more potential victims are exposed to malicious deepfakes, as the first month of 2024 clearly indicated with three prominent examples:

  1. Joe Biden robocalls
The Biden Deepfake Robocall Is Only the Beginning
An uncanny audio deepfake impersonating President Biden has sparked further fears from lawmakers and experts about generative AI’s role in spreading disinformation.
  2. A $25 million scam in Hong Kong
A company lost $25 million after an employee was tricked by deepfakes of his coworkers on a video call: police
A Hong Kong-based employee attended a video call with deepfake versions of the company’s UK-based CFO.
  3. Pornographic deepfakes of Taylor Swift
The Taylor Swift deepfake debacle was frustratingly preventable | TechCrunch
You know you’ve screwed up when you’ve simultaneously angered the White House, the TIME Person of the Year and pop culture’s most rabid fanbase.

Deepfakes of officials, pornographic deepfakes of female celebrities, and deepfake scams in video conferences or phone calls: none of this is new. What is new are three major cases within the first month of a new year. If this is how 2024 will look, it is because early warnings have been ignored.

GenAI seems to be in deep trouble because of vulnerabilities in AI software, a lack of technical safeguards, and lawmakers lagging behind. It seems fair to assume that these three major events will result in legal regulations that put pressure on GenAI. This might cool down the hype a bit. Investors, be aware.

Data Compliance

Suitable training data is required to enable stunning GenAI creations. There are a lot of unknowns about how AI companies leverage data collections and what their sources are. The AI race moves at a high pace, and developers need to meet deadlines. The internet and social media seem too appealing as free sources of image and voice data. Or maybe our voice assistants and other smart devices gather personal data for AI training.

Amazon Is Using Your Conversations With Alexa to Train AI
Not only will the tech giant’s little assistant be listening to you, it’ll sometimes use those conversations to train the company’s newest product.

Training data is crucial for the success of AI. Investors need to be mindful and ask specifically whether data is being acquired legally. Where does the data come from? Who owns the data? Is copyright respected? Is there consent?

Every AI company needs to have compliance processes in place to clarify these questions and possible concerns. Otherwise, the company's AI IP might be based on unlawfully acquired data.
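
To make these questions actionable, a company can keep a machine-checkable provenance record next to every dataset. A minimal sketch follows; the record fields and the check are our own illustrative assumptions, not an established compliance standard:

```python
# Illustrative training-data provenance record; fields are assumptions.
from dataclasses import dataclass

@dataclass
class DatasetProvenance:
    source: str             # where the data comes from
    owner: str              # who owns the data
    license: str            # license or contract governing its use
    consent_obtained: bool  # did individuals consent to AI training?

    def is_compliant(self) -> bool:
        """Naive check mirroring the questions above: known source and
        owner, an explicit license, and documented consent."""
        return all([self.source, self.owner, self.license,
                    self.consent_obtained])

# Example: a scraped collection without a license or consent fails the check.
scraped = DatasetProvenance("web crawl", "unknown", "", False)
assert not scraped.is_compliant()
```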

The New York Times believes its copyright-protected content was acquired unlawfully and therefore filed a lawsuit against OpenAI and Microsoft at the end of December 2023. Even open source GenAI projects are exposed to legal consequences: as early as January 2023, Getty Images filed a court case in the UK against the open source company Stability AI.

We can expect more legal disputes in the coming months and years. Hundreds, if not thousands, of publishers and other content owners might follow this example, which could result in copyright compensation claims. As a consequence, significant VC money invested in AI might flow in different directions.

The New York Times is suing OpenAI and Microsoft for copyright infringement
The lawsuit says ChatGPT “recites Times content verbatim.”
‘Like Pouring Rocket Fuel’: Clarkson Law Firm Attorneys on the Future of Generative AI Litigation | The Recorder
The Clarkson Law Firm’s Ryan Clarkson, Tim Giordano and Tracey Cowan spoke to The Recorder about their reaction to OpenAI’s newest features, including a Copyright Shield, and the potential AI-related litigation they see on the horizon.

Wild, Wild West

The Wild, Wild West days of GenAI seem to be coming to an end in 2024. The problems around deepfakes and copyright violations are too significant to ignore. GenAI has created its own problems, and they need to be solved urgently.

Relying on AI alone to solve its self-induced problems would only result in a cat-and-mouse game between AI methods. Still, we will likely see AI methods in the future that can reverse engineer the algorithms and sources behind AI-generated content, which would reveal copyright violations and other useful information.
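
A simple building block in this direction already exists: perceptual hashing can flag generated images that are near-duplicates of copyrighted originals. A minimal sketch, assuming the third-party imagehash and Pillow packages (the distance threshold and file names are illustrative):

```python
# Flag AI-generated images that are suspiciously close to known
# copyrighted originals. Requires: pip install Pillow imagehash
from PIL import Image
import imagehash

def looks_like_copy(generated_path: str, original_path: str,
                    max_distance: int = 8) -> bool:
    """A small Hamming distance between perceptual hashes suggests the
    generated image is a near-duplicate of the original."""
    gen_hash = imagehash.phash(Image.open(generated_path))
    orig_hash = imagehash.phash(Image.open(original_path))
    return (gen_hash - orig_hash) <= max_distance  # hash subtraction = distance

# Usage (paths are placeholders):
# if looks_like_copy("model_output.png", "stock_photo.png"):
#     print("Potential copyright violation - review manually.")
```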

In addition, a catalog of measures will be needed. Regulations will be part of it. Technical safeguards and verification mechanisms are additional solutions, e.g. in web browsers, on social media, and at CDN providers. We also recommend requiring powerful AI to be open source.
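
One concrete verification mechanism is cryptographically signed content credentials, the approach behind the C2PA standard: a publisher signs its media, and browsers, platforms, or CDNs verify the signature before labeling content as authentic. A deliberately simplified sketch using the Python cryptography package (real content credentials embed signed manifests, not raw signatures):

```python
# Simplified content-credential check: a publisher signs media bytes,
# a browser/CDN verifies them. Requires: pip install cryptography
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Publisher side: sign the raw bytes of the published media.
private_key = Ed25519PrivateKey.generate()
media_bytes = b"...raw bytes of the published image or video..."
signature = private_key.sign(media_bytes)

# Verifier side (browser, platform, CDN): check against the public key.
public_key = private_key.public_key()
try:
    public_key.verify(signature, media_bytes)
    print("Content credential valid: media unmodified since signing.")
except InvalidSignature:
    print("Verification failed: media was altered or is unsigned.")
```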

How to Regulate Deepfake
Regulation of AI is a trend, and deepfake as a subset of generative AI receives special attention. Passing new laws that follow technology trends is already a challenge for legislators. Keeping up with the dynamics of AI is an even greater task.

💡
Need help or additional information? Contact us with your feedback.