AI Law - International Review of Artificial Intelligence Law
G. Giappichelli Editore

08/05/2024 - AI, the First Amendment and the “Law of the horse”

argument: Commenti/Comments - Constitutional Law

Jack M. Balkin of Yale Law School emphasizes the need for the U.S. to establish a regulatory framework for Artificial Intelligence (AI) that reflects Europe's comprehensive approach. He argues that AI, as an autonomous entity, should not be granted legal personhood, although AI-generated content merits First Amendment protection. Cass R. Sunstein of Harvard Law School further explores AI's relationship with the First Amendment, questioning AI's constitutional rights and the implications of autonomous AI actions. Both scholars stress the importance of adapting legal frameworks to address the unique challenges posed by AI, advocating for a nuanced approach that respects free speech principles while ensuring accountability and legal clarity in AI's application and development.


written by Marco Perilli

In a recent interview with Yale Law School, Jack M. Balkin, Knight Professor of Constitutional Law and the First Amendment, elegantly highlighted the urgency for the U.S. to develop a regulatory framework for Artificial Intelligence (AI) that parallels the comprehensive strategies already in place across Europe. He suggests that the judiciary is ill-equipped to tackle AI's legal puzzles without a robust statutory framework backed by Congressional authority and the expertise of a specialized administrative body.

The Professor’s stance on AI’s legal personhood is clear. Unlike corporations, which are entities composed of human individuals and thus can be assigned legal rights, AI, as an autonomous entity, does not fit this description. Balkin's view is that AI-generated content should be protected under the First Amendment because it is the product of human input and creativity; however, the AI itself should not be awarded legal autonomy.

This perspective on AI’s legal status brings us to the heart of liability issues. For instance, in the healthcare sector, professionals who utilize AI are responsible for its outcomes and are potentially liable for negligence. Balkin's insights extend across various sectors where AI systems could pose legal challenges if their operations lead to harm or legal infractions.

Continuing this exploration, Balkin delves into the copyright implications of AI. He identifies authorship of AI-generated works, the legality of using copyrighted content for AI training, and the status of AI outputs that integrate elements of copyrighted works as areas ripe for legal debate.

The relationship between artificial intelligence and the First Amendment of the U.S. Constitution is also the subject of an in-depth analysis in the preliminary draft of the research paper “Artificial Intelligence and the First Amendment” by Cass R. Sunstein, Professor at Harvard Law School. He discusses the non-human nature of AI and the significant First Amendment questions that emerge from restrictions on AI activities or the distribution of AI-generated materials.

Sunstein’s exploration brings forth the concept of "the law of the horse," a metaphor that criticizes the creation of specialized legal frameworks for specific domains like cyberspace or AI. The phrase, originally coined by Judge Frank H. Easterbrook in 1996, posits that the internet should not require its own set of laws, as the general principles of law are adaptable to this new domain. Easterbrook's skepticism towards specialized legal treatment for emerging technologies met with resistance in 1999 from Lawrence Lessig, professor at Harvard Law School and a renowned scholar of cyberlaw and intellectual property, who highlighted the internet’s unique challenges and opportunities and advocated for a coherent and adaptable legal theory for cyberspace, an argument that extends naturally to AI.

Sunstein raises open questions that challenge traditional legal notions: Does AI have constitutional rights? Who should be held liable if AI acts autonomously? Can AI be subjected to viewpoint-based restrictions, or should we frame the discourse around the rights of human interactants? Further complexities arise when contemplating sanctions against individuals responsible for AI capabilities, especially if they did not foresee their algorithms disseminating unprotected speech.

An intriguing aspect of Sunstein’s work is the analogy to the United States Supreme Court's treatment of video games. The Court has recognized that like books, plays, and movies, video games communicate ideas and social messages, and therefore, human interaction with them is protected by the First Amendment. This precedent is significant in understanding the protection of AI-generated content, not because AI itself has rights, but because it affects the rights of human users.

This leads to a critical point: laws that restrict AI from producing or disseminating material critical of government figures would likely be unconstitutional. The reasoning is anchored not in the rights of the AI but in the rights of the human beings who engage with the AI-generated content.

Sunstein explores two potential arguments in this regard: one advocating that the First Amendment categorically forbids viewpoint discrimination, including as it pertains to AI; and another that emphasizes the rights of the audience, drawing from principles established in Kleindienst v. Mandel. That case suggests that restrictions on speech must be justified, especially where they hinder the public's access to information, regardless of the speech's origin.

In his conclusion, Sunstein asserts that while AI does not currently enjoy First Amendment rights, the freedoms of individuals to access AI-created content are indirectly protected. Any restrictions on AI that impact human communicators or recipients should be carefully scrutinized in light of established First Amendment principles. The distinction between viewpoint-based restrictions, content-based (but viewpoint-neutral) restrictions, and content-neutral restrictions is crucial in evaluating the legality of limitations placed on AI.

____________________________

AI's autonomy raises the specter of systems acting independently of human oversight, a departure from traditional legal accountability structures in which human intent can be ascribed and liability clearly defined. The law must grapple with the question of who becomes the defendant when AI acts autonomously and causes harm: the developers, the users, or the AI entity itself? Sunstein's research paper pivots the legal perspective to consider not just the creators and disseminators of AI-generated speech, but also the recipients, who have a First Amendment right to receive information.

The legal implications of AI’s autonomy and its impact on free speech principles are vast and complex. As AI systems grow more sophisticated, the potential for these entities to create content that reflects their learning and interactions with the world — content that may not be directly traceable to any human author — challenges our conventional understanding of speech and authorship.

In considering these challenges, the legal system may be compelled not only to apply existing principles but also to anticipate future developments that may require the creation of new legal doctrines or the adaptation of old ones. This dynamic legal landscape calls for a proactive stance from lawmakers, courts, and legal scholars to ensure that as AI continues to advance, it does so in a manner that respects the foundational principles of our legal system. The doctrinal debate on whether artificial intelligence requires the development of new legal institutions and figures, or whether it can be subjected to and regulated by existing ones, is also very much alive in Europe.