How to Regulate Deepfakes

Regulating AI is a growing trend, and deepfakes, as a subset of generative AI, receive special attention.

Passing new laws that follow technology trends is already a challenge for legislators. Keeping up with the dynamics of AI is an even greater task. Vgency agrees with the voices pointing out how difficult it is to keep pace with AI.

AI-generated deepfakes are moving fast. Policymakers can’t keep up
Tech companies are in a race to roll out AI chatbots and other tools. As technology gets better at faking reality, there are big questions over how to regulate it.

Vgency also agrees with Microsoft president Brad Smith, who stated that deepfakes are one of the biggest concerns around AI. While we agree, let's also look at priorities: first came virtual backgrounds in Microsoft Teams, which are a kind of deepfake, as we already wrote at the end of 2021. At the end of May 2023, Microsoft announced the availability of avatars for Microsoft Teams. These virtualization features are not necessarily harmful, but there is potential for misuse if the person behind an avatar or deepfake uses a fake identity.

Deepfakes are biggest AI concern, says Microsoft president
In Washington speech, Brad Smith calls for steps to ensure people know when a photo or video is generated by AI

The idea of flagging AI-generated content doesn't solve the problem of fake profiles. Vgency believes that people want to use avatars and even deepfake versions of themselves, and there are good and useful use cases for that. Flagging such content as AI-generated makes sense, and we support the idea. However, we also need proper verification so that only officially identified people can operate the avatars and deepfakes they are authorized to use.
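As a minimal sketch of what such verification could look like, the following Python example signs an "avatar manifest" that both flags content as AI-generated and binds it to an identity attested by an issuer (for example an ID-verification service). All function and field names here are our own illustration, not an existing API; a real deployment would build on an established content-provenance standard.

```python
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def issue_manifest(issuer_key: Ed25519PrivateKey,
                   avatar_bytes: bytes,
                   verified_identity: str) -> dict:
    """The issuer signs the avatar hash together with the identity
    it has verified, plus an explicit AI-generated flag."""
    payload = {
        "ai_generated": True,                          # the flag
        "identity": verified_identity,                 # the verification
        "avatar_sha256": hashlib.sha256(avatar_bytes).hexdigest(),
    }
    message = json.dumps(payload, sort_keys=True).encode()
    return {"payload": payload,
            "signature": issuer_key.sign(message).hex()}


def verify_manifest(issuer_pub: Ed25519PublicKey,
                    avatar_bytes: bytes,
                    manifest: dict) -> bool:
    """Any platform can check both the flag and the identity binding."""
    payload = manifest["payload"]
    if payload["avatar_sha256"] != hashlib.sha256(avatar_bytes).hexdigest():
        return False                                   # avatar was swapped
    message = json.dumps(payload, sort_keys=True).encode()
    try:
        issuer_pub.verify(bytes.fromhex(manifest["signature"]), message)
        return True
    except InvalidSignature:
        return False


# Usage: the issuer signs once; any platform can verify offline.
issuer = Ed25519PrivateKey.generate()
avatar = b"...rendered avatar data..."
manifest = issue_manifest(issuer, avatar, "Jane Doe, verified 2023-06-01")
assert verify_manifest(issuer.public_key(), avatar, manifest)
```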

Apple is slowly introducing virtual avatars after acquiring the Zurich-based startup Faceshift in 2015. We are tracking Apple's progress with great interest, not only because Vgency is also based in the Zurich area, but because Apple owns complementary solutions that can be used for verification: Face ID and iCloud.

Apple Has Acquired Faceshift, Maker Of Motion Capture Tech Used In Star Wars
As the market for virtual reality technology continues to grow, Apple has made an interesting acquisition that could further its role in the space. TechCrunch has confirmed that Apple has snapped up Faceshift, a startup based in Zurich that has developed technology to create animated avatars and oth…

Face ID does more than unlock your iPhone. It scans the user's real face and creates a profile based on image and depth data for a specific user account. Naturally, when creating an avatar on an Apple device, users would want to use these avatars, called Memoji, with Apple communication tools such as Messages or FaceTime. This connects the Memoji with the user's Apple ID, which is part of the user's iCloud account. Face ID + Apple ID + Memoji result in a verified avatar.
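To make that chain explicit, here is a purely conceptual Python sketch; every type and check below is hypothetical and only models the idea, not Apple's actual APIs.

```python
from dataclasses import dataclass


@dataclass
class FaceIDResult:            # on-device biometric match
    matched: bool
    device_owner: str          # local user the face profile belongs to


@dataclass
class AppleAccount:            # cloud identity (Apple ID / iCloud)
    apple_id: str
    owner: str


@dataclass
class Memoji:                  # avatar registered to an account
    avatar_id: str
    bound_apple_id: str


def avatar_is_verified(scan: FaceIDResult,
                       account: AppleAccount,
                       memoji: Memoji) -> bool:
    """Face ID + Apple ID + Memoji: the avatar counts as verified only
    if the live face matches the device owner, that owner holds the
    Apple ID, and the Memoji is bound to the same Apple ID."""
    return (scan.matched
            and scan.device_owner == account.owner
            and memoji.bound_apple_id == account.apple_id)
```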

This already sounds like a good solution. At Vgency, we don't think it's good enough, though. First of all, even if Apple can create accurate depth data from its TrueDepth cameras to identify the user, there are limitations. One example: Face ID has not been able to distinguish identical twins. Even if such edge cases will likely be fixed with improved sensor and processing technology, there is another issue: vendor lock-in and closed IP.

Identical twins told not to use ‘Face ID’ on banking apps
Use passcodes instead, warns NatWest, in case siblings impersonate each other to break into accounts

Open Source by Law

AI develops so quickly that it's difficult to keep up. As we already explained in our first article, detecting deepfakes with AI-based methods is a cat-and-mouse game because AI-based deepfake detectors are trained on available deepfake data. The latest deepfake technology will always be one step ahead of deepfake detection. The same applies to the flagging of AI-generated content that Brad Smith suggested. It's pointless to consider strict deepfake regulations like China's if the latest deepfake technology creates content that cannot be detected as a deepfake. The European Union also doesn't seem to understand that labeling is pure activism that won't prevent the misuse of AI by cybercriminals.

EU Demands Facebook, TikTok, and Google Start Labeling AI Content to Fight Deepfakes
Dozens of big tech companies must comply or face fines, while Twitter faces further sanctions for refusing to voluntarily comply with digital content laws.
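To illustrate why a metadata label on its own is so weak, the following Python sketch (using the Pillow imaging library) stores an "ai_generated" tag in a PNG and then shows that simply re-encoding the pixels discards it. The tag name is our own example; watermarks embedded in the pixels themselves are harder to strip, but as argued above they face the same arms race.

```python
from PIL import Image, PngImagePlugin

# Create a dummy "deepfake" frame and label it as AI-generated
# via a PNG text chunk (a metadata-only label).
fake = Image.new("RGB", (64, 64), color="gray")
meta = PngImagePlugin.PngInfo()
meta.add_text("ai_generated", "true")
fake.save("labeled.png", pnginfo=meta)

# An attacker simply copies the pixels into a fresh image:
# identical content, but every metadata tag is gone.
labeled = Image.open("labeled.png")
print(labeled.text)                      # {'ai_generated': 'true'}

stripped = Image.new(labeled.mode, labeled.size)
stripped.putdata(list(labeled.getdata()))
stripped.save("unlabeled.png")

print(Image.open("unlabeled.png").text)  # {}
```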

Vgency suggests a different approach: lawmakers shall require that relevant AI technology be open source. And if it isn't, it shall be legal to reverse-engineer any powerful AI technology and publish the results as open source. Patents on such AI would not cover the new open-source code created through reverse engineering.

This simple approach would have the following positive impacts on AI:

  • Investors would need to be more mindful about how they bet their money on AI technology
  • The hype around AI would cool down, with a stronger focus on actual value
  • Allowing reverse engineering of AI would reveal flaws and weaknesses that would otherwise remain undetected
  • The collective intelligence of an open-source community empowers people to study and investigate AI, resulting in more and better ways to control AI
  • The human factor is strengthened
  • More jobs are created to control AI

Contact us with your feedback.