The International Authors Forum (IAF) has developed a set of principles that must be upheld in any policy where the use of AI technologies intersects with authors’ work and intellectual property (IP).
These principles are intended to provide guidance for discussions and policymaking, acknowledging that a broad legal framework that includes copyright, privacy, data protection, and competition and consumer protection laws will be required to regulate the responsible use of AI technologies. We believe it is crucial to address the potential impact of AI on the irreplaceable value that authors bring to society.
The following principles outline our stance:
Authorisation, Fair Compensation and Transparency: Copyright laws around the world enable human authors to be equitably compensated for their work and to receive transparent information about how their work is used. It is essential to uphold these laws: authors must share in the financial benefits derived from their creations. Any exploitation of their works, however transient, requires authorisation and remuneration; failure to obtain these is a breach of copyright law. Market-based solutions such as licensing remain an effective mechanism to support new forms of exploitation in a way that honours an author’s right to exploit their economic rights – or to refuse to do so – and ensures that authors are appropriately remunerated. Authors must also have the right to transparent information on when and how their work is utilised in AI systems and, subject to applicable laws, the option to grant or withhold permission for use of their work.
Ensuring creators can earn a living: The development of new technologies, including generative AI, should not come at the cost of creators’ rights and livelihoods. AI should be considered a tool that complements and relies on human work and authorship. The rights of human creators must be safeguarded, and their contributions to the development of AI technologies acknowledged and compensated. Any technological development must not conflict with the normal exploitation of copyright works nor harm rightsholders. These are vital measures to ensure that the value gap between creators and online platforms is not widened and that remuneration is fairly distributed.
Respect for Creators’ Moral Rights: Regardless of the technology or the method by which authors’ works are used, their moral rights must be upheld. These include the rights to attribution and credit for their work, and the right to object to certain uses of it. Authors must therefore receive clear attribution when their work is used, and they must have a means to object to the use of their work in AI technology – including any derogatory treatment.
Privacy rights: The image rights, data protection and privacy rights of creators must be upheld to ensure that their faces, voices and likenesses cannot be simulated by AI systems in performance synthesis without authorisation, acknowledgement or compensation. As data subjects, their consent must be sought – and they must be able to refuse and retract it – when their performance, voice or likeness is recorded, used or reproduced by machine learning systems and equivalent technology.
Transparency obligations: There must be an obligation of transparency and labelling on AI developers to provide clear and concise information, publicly, about the copyright status of their AI systems and the operation of those systems: how the systems are trained, how their products work and interface with the underlying data used to inform the algorithm, and how the AI-rendered results (outputs) may be used. This would help consumers and users understand their rights and responsibilities when using AI systems and their outputs. These obligations should apply equally to users of AI systems and their outputs. It follows that AI system developers must apply labelling to all AI outputs – as watermarks and in metadata – in a way that is easily understandable to an untrained user.
Accountability: The cost of contesting and seeking redress for AI-related harms, including enforcing the rights of rightsholders and creators, must be limited – for example through the creation of dedicated AI reporting systems, adjudicators, or ombudsmen. Clear and easy accountability mechanisms must be introduced to ensure the safe development and use of AI systems. AI systems must also provide mechanisms for third-party audits and certification to demonstrate their compliance with regulatory frameworks and principles, technical tools and checklists to assess AI products against regulations and principles, and a clear classification of AI risk and suggested risk assessments.
Careful Consideration of Licensing Terms: We emphasise the importance of carefully considering licensing terms, not only for the initial use of works but also for subsequent reuse by AI programs. Authors’ works have been – and continue to be – used to train generative AI technologies. These works are embedded in the DNA of generative AI technology, and AI outputs are derivative of the works they were trained on. This ongoing use of authors’ works should be taken into account when defining the terms of licensing for AI uses. AI developers must also provide ways for authors to terminate any such agreement, whereby their works will be removed from AI systems and ‘unlearned’ upon termination.
Awareness of AI Limitations and Societal Impact: It is essential for all stakeholders to acknowledge the limitations of AI within a wider social context. While AI capabilities are impressive, there is a risk of reproducing or exacerbating societal biases. AI-generated content, lacking human perspective, can perpetuate harmful stereotypes and disseminate false information. It is crucial to thoroughly document the extent and nature of these biases and to consider them when applying AI and developing policies governing its use. Ongoing human oversight, scrutiny, and expertise are necessary for the ethical and safe development of AI systems, since these cannot self-regulate and may struggle to apply contextual understanding and nuance to information. Moreover, to avoid producing poor-quality, increasingly derivative works, AI-generated content depends on broad, continuously updated and reliable datasets.
Generative AI generates its output to replicate, as closely as possible, the patterns in the works it has “ingested” as input. Whether those inputs or outputs are “true” is not a factor in its language models or generative algorithms. By design, it will generate output that replicates the falsehoods, myths, and misunderstandings that are most common in the input corpus. By design, it will generate output that replicates as closely as possible the biases and bigotries in the input.
Because the output of generative AI is based on the patterns in the entirety of the input corpus, the output can only be attributed to the entirety of the corpus, and not to any one specific source. As a result, it is impossible for readers, viewers, or listeners to assess the credibility or truth value of the sources of the output.
Because generative AI generates output that is as consistent as possible with the patterns in the input, it will by design produce output that seems as plausible as possible, regardless of its truth or falsehood.
For these reasons, the least appropriate uses of generative AI are journalism and the generation of factual claims or answers to factual questions – any use where the truth or credibility of the output is important.
Responsible Pursuit of AI Applications: Policymakers and industry stakeholders should responsibly pursue AI applications that support and enhance creators’ contributions. The development of AI should not diminish or sideline the social, cultural, and economic value that creators provide, and without which these AI systems would not be possible. Neglecting creators’ contributions would have a detrimental impact on the creative community and the industries they support. Unless we regulate the application of AI now, we may lose a generation of creators necessary to bring human emotion, experience, and originality to art, and be left with creative industries that are significantly diminished.
IAF envisions a future where human authors can utilise AI creatively, while ensuring that their rights and creative contributions are duly acknowledged and protected.