
Supreme Court Showdown: YouTube Case May Set Precedent for AI Protections – What It Means for ChatGPT and Beyond

The U.S. Supreme Court is set to rule on whether Alphabet Inc.’s YouTube can be sued over the video recommendations it makes to users. The case tests whether a U.S. law that shields technology platforms from legal responsibility for content posted online by their users also applies when companies use algorithms to target users with recommendations. The outcome could have implications beyond social media platforms, influencing the emerging debate over whether companies that develop generative AI chatbots should be protected from legal claims such as defamation or privacy violations.

One such generative AI chatbot is ChatGPT from OpenAI, a company in which Microsoft Corp is a major investor; another is Bard from Alphabet’s Google. According to technology and legal experts, the algorithms that power generative AI tools like ChatGPT and its successor model, GPT-4, operate similarly to those that suggest videos to YouTube users. As a result, the Supreme Court’s decision on whether Section 230 of the Communications Decency Act of 1996, which generally shields technology platforms from liability for third-party content posted by their users, applies to recommendation algorithms could be relevant to the liability of AI chatbots.

“The debate is really about whether the organization of information available online through recommendation engines is so significant to shaping the content as to become liable,” said Cameron Kerry, a visiting fellow at the Brookings Institution think tank in Washington and an expert on AI. “You have the same kinds of issues with respect to a chatbot.”

During arguments in February, Supreme Court justices expressed uncertainty over whether to weaken the protections enshrined in Section 230. Justice Neil Gorsuch noted that AI tools that generate “poetry” and “polemics” likely would not enjoy such legal protections. The case is just one aspect of a wider conversation about whether Section 230 immunity should apply to AI models trained on troves of existing online data but capable of producing original works.


As it stands, Section 230 protections generally cover third-party content posted by users of a technology platform, not information a company itself helped to develop. Courts have not yet weighed in on whether a response generated by an AI chatbot would be covered.

It remains to be seen how the Supreme Court will rule on the YouTube case and whether its decision will have any impact on the liability of generative AI chatbots. Nonetheless, the case highlights the need for clarity around the legal protections afforded to AI technologies as they develop and become more widespread.

Representatives for OpenAI and Google declined to comment on the matter. For now, the tech industry and legal experts will be watching closely to see how the Supreme Court’s decision in the YouTube case unfolds and what it could mean for the future of AI development and regulation.
