Why Users and LLM Operators Must Both Be Accountable for AI Output
This content was originally published on Medium (April 7th, 2026) and has been adapted by The Honking Goose platform.
Large language models, commonly referred to as LLMs or AI, are becoming increasingly integrated into daily life. Legal systems around the world are still struggling to adapt to these technologies, and existing frameworks may not be enough to address their long-term impact without new regulations that balance the rights of the public, copyright holders, users, and LLM operators. In my view, responsibility for AI-generated content should be shared. Users should not be able to dismiss harmful or unlawful content by saying, “ChatGPT wrote it, not me,” while LLM operators should not be treated as passive hosts when their systems generate the responses. Instead, users should be responsible for the prompts they submit and the content they choose to publish, while LLM operators should be treated as publishers of the outputs their systems produce.
Is AI a Tool or a Publisher?
One area of debate is whether AI should be treated as a mere software tool, like Microsoft Word, or as the publisher of the content it produces. I propose a new category called Generative Reference Material: a type of software that can answer a user’s query based on the body of reference material it was trained on.
AI does more than a pen and paper, or even Microsoft Word, can do. Most importantly, it does not merely host user-generated content; the software generates new content based on its reference database.
Section 230 of the Communications Decency Act has been used to defend social media companies for hosting content created by their users, but it does not protect them from liability for content they themselves publish. No special exception should apply to AI-generated output as compared with human-written content that a company publishes. Similarly, AI companies should be treated as the publishers of LLM outputs, but they should not be held responsible for the prompts written by users.
Under this framework, the user and the LLM operator would share responsibility for the prompt and the resulting output.
Operator Responsibilities
AI companies chose the safety guardrails for their systems. Those choices were likely influenced by existing laws, but the companies themselves created the rules that govern how their LLMs operate. They also built the internal safety systems that define and enforce what content is and is not allowed. Today, they have the ability to update those rules as needed to comply with the law.
I acknowledge that some problems are harder to solve than others. Determining whether generated content infringes copyright or qualifies as fair use goes far beyond simply checking whether each sentence is a direct quote from the training data.
LLM operators have a responsibility to block unsafe and illegal requests. They also have an equal responsibility not to generate unsafe or illegal content, even when a user asks for it or provides unclear instructions. When there is doubt, they should refuse the request.
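To make the “refuse when in doubt” principle concrete, here is a minimal sketch of how such a gate might work. Everything in it is an assumption of mine for illustration: the `moderation_score` classifier, the threshold value, and the `generate_reply` call are hypothetical placeholders, not any real provider’s API.

```python
# Minimal sketch of a "refuse when in doubt" gate around an LLM.
# moderation_score() and generate_reply() are hypothetical stand-ins.

DOUBT_THRESHOLD = 0.4  # deliberately low: ambiguity resolves to refusal

def moderation_score(text: str) -> float:
    """Return an estimated probability (0.0 to 1.0) that text is unsafe."""
    raise NotImplementedError  # stand-in for a real safety classifier

def generate_reply(prompt: str) -> str:
    """Stand-in for the underlying language model."""
    raise NotImplementedError

def handle_prompt(prompt: str) -> str:
    # Screen the user's request before generating anything.
    if moderation_score(prompt) >= DOUBT_THRESHOLD:
        return "I can't help with that request."
    reply = generate_reply(prompt)
    # Screen the output too: a harmless-looking prompt can still
    # produce unsafe content, and the operator owns the output.
    if moderation_score(reply) >= DOUBT_THRESHOLD:
        return "I can't help with that request."
    return reply
```

The point of the sketch is the symmetry: the same check runs on both the prompt and the output, mirroring the shared responsibility described above.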
LLM operators also have a responsibility to provide reporting tools so that anyone can report unsafe content. Once a report is reviewed, access to the conversation should be removed from both the public and the user account that created it, as appropriate. They should review user intent, and if they reasonably believe the user intentionally requested unsafe or illegal content, the operator should have a legal duty to report the user to the authorities and provide the conversation content along with subscriber information.
LLM operators should also have a legal responsibility to provide an incognito mode in which chats are deleted from their systems immediately after a safety review. Automated checks would likely be sufficient, especially if the chat cannot be shared. At the same time, anonymous access to LLMs should not be allowed. An account should be required so that unsafe or illegal chats can be reported along with subscriber information.
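As a rough illustration of what that lifecycle could look like, the sketch below screens an incognito chat once and then deletes it unless the automated check flags it. The helper functions are hypothetical placeholders I made up, not any vendor’s actual system.

```python
# Sketch of an incognito-mode chat lifecycle: screen once, then delete
# unless flagged. All three helpers are hypothetical placeholders.

def automated_safety_check(transcript: str) -> bool:
    """Return True if the transcript appears unsafe or illegal."""
    raise NotImplementedError  # stand-in for an automated classifier

def flag_and_report(chat_id: str, transcript: str, account_id: str) -> None:
    """Retain the chat and escalate it, with subscriber info, for review."""
    raise NotImplementedError

def delete_permanently(chat_id: str) -> None:
    """Remove the chat from the operator's systems."""
    raise NotImplementedError

def finish_incognito_chat(chat_id: str, transcript: str, account_id: str) -> None:
    if automated_safety_check(transcript):
        # Flagged chats are kept and reported; incognito is not anonymous.
        flag_and_report(chat_id, transcript, account_id)
    else:
        # Clean chats are deleted immediately after the safety review.
        delete_permanently(chat_id)
```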
LLM operators have a responsibility to create systems and human review processes that monitor for users discussing or planning acts of violence or terrorism.
They should also be treated as mandatory reporters and required to report risks of imminent harm to the user or others, including suicide, self-harm, or homicide, as well as suspected child abuse, vulnerable adult abuse, or elder abuse.
These responsibilities may seem strict, but LLM operators are in a unique position compared with traditional technology companies. They can monitor the use of their services and intervene to prevent harm. These tools have the potential to do both great good and serious harm. Responsible stewardship of LLMs is essential to maintaining a functioning society.
User Responsibilities
Users are legally responsible for the content they publish, regardless of whether they wrote it themselves, a friend wrote it, or it was generated by AI. Authorship is not required to establish responsibility for publishing content that violates copyright or other laws. This principle already exists today without the need for AI-specific legislation.
If a user creates and shares a ChatGPT link, they are effectively publishing that content, similar to sharing a Google Docs or Google Drive link that contains material that infringes on a copyright holder’s rights.
If a user makes threats to an LLM, are they breaking the law? If you think about it like a private Microsoft Word document, similar to a diary or journal that is never shared, the answer is potentially no. On the other hand, does the fact that the content is stored on a remote server instead of a personal computer change the analysis?
Many Microsoft Word users now use OneDrive, often without fully realizing it, which automatically copies their private documents to Microsoft’s servers. Would a threatening statement written in a Word document become illegal simply because it is stored in the cloud?
Now consider a chat with ChatGPT. Are all users aware that their messages are transmitted over the internet for processing? Some users may assume the app is processing everything locally on their phone rather than sending it to remote servers.
These are complicated legal and technological questions that courts and policymakers are still working to resolve.
Reporting mechanisms for unsafe content should be built into AI applications. When a user reports unsafe content, the conversation should be reviewed by trust and safety teams, the content should be made inaccessible, and any shared links should be disabled if they were created.
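A minimal sketch of that report-handling flow might look like the following; the helper names are hypothetical placeholders for an operator’s internal tooling, not a real API.

```python
# Sketch of a report-handling flow: review, lock down, disable sharing.
# Every helper here is a hypothetical placeholder.

def review_finds_unsafe(conversation_id: str) -> bool:
    """Stand-in for a trust-and-safety team's review decision."""
    raise NotImplementedError

def make_inaccessible(conversation_id: str) -> None:
    """Remove the conversation from the owning account."""
    raise NotImplementedError

def disable_shared_links(conversation_id: str) -> None:
    """Deactivate any public share links that were created."""
    raise NotImplementedError

def handle_report(conversation_id: str) -> None:
    if review_finds_unsafe(conversation_id):
        make_inaccessible(conversation_id)
        disable_shared_links(conversation_id)
```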
AI safety teams could also have mandatory reporting obligations in situations where a user intentionally prompts the system to generate clearly illegal content, whether the report comes from the user themselves or from a third party. This type of process would align with how many online platforms already handle harmful or unlawful material.
Clear reporting processes, defined escalation procedures, and transparent enforcement standards would help establish consistent expectations for both users and platform operators.
Intentionally attempting to bypass AI safety guardrails in order to create or share illegal content should be considered an offense. An exception should exist for registered AI safety researchers, provided they responsibly disclose their findings to the AI company and do not publish the prompts or outputs from those conversations.
I believe users have full responsibility for the prompts they submit and share responsibility for the outputs when the LLM reasonably follows their instructions. For example, if a user asks ChatGPT to create an outline about the water cycle and the AI instead produces terroristic threats, the user should not be held responsible for that unexpected output. However, that changes if the user chooses to publish or distribute the content.
In other words, responsibility may depend on both intent and outcome: what the user asked for, how the system responded, and whether the user took steps to share or report the resulting content.
Copyright and Reproduction of Training Data
If a human accesses a copyrighted work and is inspired by it, they are not required to pay the copyright holder a royalty beyond any payment they made to access the material. If the copyrighted work is substantially reproduced, however, an agreement must be reached. The same standard should apply to the operation and use of LLMs: copyright law should apply with neither higher standards nor special treatment simply because an LLM was involved.
While many copyright holders would prefer that LLMs not be trained on their copyrighted works at all, a legal standard called fair use applies. For a general overview, I am relying on the Wikipedia page on fair use (which is largely United States centric; see Fair use — Wikipedia). This is a very complex area of copyright law. Fair use is a defense that can be raised against a copyright infringement claim.
Fair use allows for limited reproduction of copyrighted works for transformative purposes, including criticism, commentary, news reporting, teaching, scholarship, or research. When an expression meets this standard, it does not infringe on the copyright holder’s rights. The expression must be transformative, and mere reproduction is not allowed; the amount of expression reproduced must be proportionate to the transformative purpose. A collection of quotes without original expression may not meet the legal standard of a transformative work, and creating something new does not automatically meet the standard of fair use.
While fair use may apply in a specific situation, it is a legal defense that is often decided in court, and defending against a copyright claim with a fair use defense may still result in an expensive legal bill. Corporations often have in-house legal teams, but humans are limited in how many cases they can handle at once. An individual accused of copyright infringement may not have the resources to fight and must either settle or face a civil judgment they will never be able to pay. Being allowed to reproduce a work under fair use does not mean that an individual or corporation can afford to defend that right, particularly when facing not one but many copyright infringement claims.
Facts and ideas are not eligible for copyright protection; only the expression of them is protectable. If, during the training process, an LLM learns the facts and ideas in a copyrighted work, that use is permissible; however, LLM operators are required not to reproduce the expression of those ideas. They could potentially use other LLMs to extract the facts and ideas in a way that removes the expression during the training process, preventing direct quotes. LLM operators may object to such a requirement, since it would prevent their models from ever quoting copyrighted expression directly.
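As a sketch of what that kind of preprocessing might look like, the pipeline below has a separate model restate each source document before it enters the training set. The `paraphrase_model` call and the prompt wording are my own assumptions, not a description of how any company actually trains.

```python
# Sketch: strip copyrighted *expression* from training data by having
# a separate model restate only the facts and ideas in each document.
# paraphrase_model() is a hypothetical placeholder for any LLM call.

EXTRACT_PROMPT = (
    "Restate the facts and ideas in the following text in your own words. "
    "Do not quote or closely paraphrase the original wording.\n\n{text}"
)

def paraphrase_model(prompt: str) -> str:
    """Stand-in for a call to a separate paraphrasing LLM."""
    raise NotImplementedError

def prepare_training_corpus(documents: list[str]) -> list[str]:
    # Train on the restatements, never on the original expression.
    return [
        paraphrase_model(EXTRACT_PROMPT.format(text=doc))
        for doc in documents
    ]
```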
The amount of a copyrighted work used in proportion to the transformative use also matters. For example, I could not write, “I disagree with the political beliefs of the author J.K. Rowling, her book is terrible,” then reproduce an entire chapter, or even the entire book, without further commentary and claim fair use. LLM operators must take measures to prevent the reproduction or extraction of their training data. I am not an LLM developer, but I imagine this is a complex problem that goes well beyond adding “Do not reproduce any copyrighted material from the training data verbatim” to every prompt.
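One naive way to attack this on the output side, rather than in the prompt, is to scan generated text for long runs of words that appear verbatim in an index of protected texts. The sketch below does exactly that with a simple n-gram lookup; a real system would need fuzzy matching, paraphrase detection, and far more scale, and the 12-word window is an arbitrary choice of mine.

```python
# Naive sketch: flag output containing a long verbatim run of words
# that also appears in an index of protected texts. This only catches
# exact quotes; real systems would need much more sophistication.

NGRAM = 12  # arbitrary: runs of 12+ identical words count as quotation

def build_index(protected_texts: list[str]) -> set[tuple[str, ...]]:
    index = set()
    for text in protected_texts:
        words = text.lower().split()
        for i in range(len(words) - NGRAM + 1):
            index.add(tuple(words[i:i + NGRAM]))
    return index

def quotes_protected_text(output: str, index: set[tuple[str, ...]]) -> bool:
    words = output.lower().split()
    return any(
        tuple(words[i:i + NGRAM]) in index
        for i in range(len(words) - NGRAM + 1)
    )
```

Even this toy filter makes the point: the check has to run against the protected text itself, after generation; an instruction in the prompt cannot guarantee anything.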
Finally, the effect of the reproduction on the original work’s value is considered. If the reproduction of the copyrighted work is so substantial that it becomes a substitute rather than a transformative use, it will not survive a fair use complaint. For example, copying a physical book into a digital format, even if arguably transformative, would act as a substitute and would not be fair use. This is likely the biggest risk to AI companies: as long as training data can be extracted through clever prompting, there is a risk of another infringing response.
In summary, fair use might protect LLM operators and their users, but it is not automatic; it is complicated and, pragmatically, expensive. If every copyright holder (or even a substantial number of them) decided to individually file a copyright infringement lawsuit against OpenAI (or any AI company), it might not be possible to defend, or even afford to settle, every case.
Societal Impact
As emerging AI technologies are unleashed on the world, we continue to observe their real-world impact: effects on the environment, strain on community utility grids, and changes in human behavior. We need a way to manage these harms without writing a new law for every specific issue. In the United States we already have the Federal Trade Commission and the Federal Communications Commission; we could create another agency that writes and enforces policies that apply equally to every AI operator. We should not need a new law to ban every new type of unsafe response. It is impossible to regulate that way. If we ban AI from promoting the milk crate challenge (see: Milk crate challenge — Wikipedia) by passing a law today, what stops AI from recommending a bookshelf challenge (standing on top of bookshelves and then falling, an idea I just made up; please don’t do this) instead tomorrow? Our current legal system is not designed for the internet, let alone artificial intelligence. We have to do something, we have to start somewhere, however small, and we have to protect global society even if it means drastic action like a constitutional amendment.
Conclusion
In the end, the legal and ethical questions surrounding large language models are too important to leave unresolved. These systems should be allowed to exist and continue developing, but they should not receive special immunity, nor should they be judged by standards that ignore how existing law already treats publishing, responsibility, and harm. What is needed is a clear framework that recognizes the shared responsibilities of both LLM operators and users.
Operators should be responsible for the systems they design, the safeguards they implement, and the outputs they knowingly enable or fail to prevent. Users should be responsible for the prompts they submit, the content they choose to publish, and the ways they attempt to use these systems to cause harm. Neither side should be allowed to evade accountability by shifting all blame to the other.
AI is now part of daily life, and its social, legal, and economic effects will only grow. That reality demands a practical, enforceable framework that protects the public, respects copyright, preserves legitimate innovation, and creates consistent rules for responsibility. If we want these tools to remain available for beneficial use, then we must be willing to govern them seriously. Safe and ethical use of LLMs is not the responsibility of one party alone. It is a shared obligation.
Disclaimer
I am not an attorney, have not attended law school, and do not hold a law degree. Nothing in this article is intended to constitute legal advice, and it should not be relied upon as a substitute for advice from a qualified legal professional. The information, analysis, and opinions expressed here are provided for general informational and discussion purposes only. Laws can vary significantly by jurisdiction and may change over time, so the accuracy or applicability of any legal discussion will depend on the specific facts and location involved. Reading this article does not create an attorney-client relationship. If you need guidance about a legal issue or your rights and obligations in a particular situation, you should consult a licensed attorney in your jurisdiction.