BYOMSPM

Build-Your-Own Master’s Degree in Product Management

Here you'll find my thoughts on a collection of podcasts, articles, and videos related to product management, organized like a semester of a Master's degree.

Module 4 / Tech / Implications of AI-Generated Content



I’ve previously studied the implementation and implications of various applications of AI, but I wanted to dive into some more recent articles on the specific implications of AI-generated content. This article covers themes related to trust, copyright, and disinformation/fraud.

Grade I gave myself for this assignment: 91/100

Trust

First I read an Economist article called “AI-generated content is raising the value of trust,” which argues that the rise of AI-generated content is making it more valuable to be able to establish trust with consumers about the origin of digital content (that is, who created it and how).

According to this article, software to detect AI-generated content lags significantly behind generation technology.

The article argues that humans need to adapt to a new reality in which the origin of digital content can no longer be deciphered. We already know that a written account of an event does not prove it happened; by the same token, we will have to accept that photographic or videographic evidence no longer necessarily correlates with fact (The Economist).

Besides technical means, the article suggests another way consumers can affirm trust in the integrity of content’s origin: the reputation of whoever posted it, which will likely become increasingly important to maintain.

Copyright

The next article I read, also from The Economist, aims to explore whether AI models infringe on copyright laws by using material protected by copyright to train their models (“Does generative artificial intelligence infringe copyright?”).

The article explains that the “fair use” doctrine allows protected content to be used in certain instances without permission from its creators (for example, a teacher using a snippet of a book in teaching materials). The core question in the AI copyright lawsuits underway right now is whether AI models fall under the scope of that doctrine.

Possible outcomes of the lawsuits include AI companies being allowed to train models on copyrighted material without permission, being barred from training on copyrighted material altogether, or being required to ask permission before using it.

Disinformation & Fraud

The last two articles I read, which are also from The Economist, discuss how AI can be used to perpetuate disinformation & fraud, along with some ways to combat them.

One article (“An AI-risk expert thinks governments should act to combat disinformation”) explains that AI is much better at creating fake videos than humans are, and that it can also help humans create fake photos and text more quickly than they otherwise could. The second article (“AI could accelerate scientific fraud as well as progress”) outlines how generative text tools are known to produce fake titles and summaries of scientific papers, while other AI tools can be used to write parts of papers or even fabricate images that support a paper’s hypothesis.

On the potential for fake videos to spread political disinformation and materially affect national elections, the first article notes that most people are already skeptical of disinformation in the media, making deepfakes less potent. That said, the stakes are high.

The first article outlines a few strategies to combat AI-powered disinformation, specifically in the media:

  • Governments should incentivize corporate development of watermarking and detection tools
  • Investment is needed in “prebunking,” which educates audiences about the motives of possible disinformation campaigns
  • Government oversight of models should be improved

Thanks for reading.


Works Cited

“AI could accelerate scientific fraud as well as progress.” The Economist. 1 February 2024. https://www.economist.com/science-and-technology/2024/02/01/ai-could-accelerate-scientific-fraud-as-well-as-progress.

“AI-generated content is raising the value of trust.” The Economist. 18 January 2024. https://www.economist.com/leaders/2024/01/18/ai-generated-content-is-raising-the-value-of-trust.

“An AI-risk expert thinks governments should act to combat disinformation.” The Economist. 6 February 2024. https://www.economist.com/by-invitation/2024/02/06/an-ai-risk-expert-thinks-governments-should-act-to-combat-disinformation.

“Does generative artificial intelligence infringe copyright?” The Economist. 2 March 2024. https://www.economist.com/the-economist-explains/2024/03/02/does-generative-artificial-intelligence-infringe-copyright.

