AI and the Death of Authenticity

[Image: a darkened hand touching a tablet, with a small digital ghost labelled "AI" hovering above it]

By Mike Kiser

We live in an age that values authenticity: being true to who you are and what you value. It is ironic, then, that one of the most prominent innovations of the past few years, large language models (LLMs) and generative AI, is in the process of undermining authenticity itself.

Human Authorship, Technology Editorship

Hold on just a minute or two, you may be saying to yourself: “We’ve long used innovation as an assistive tool!” This is certainly true; we’ve grown accustomed to using technology to assist with a variegated selection of activities that no one thinks twice about. In each of these activities, though, the technology assists the human.

Take the assistive writing features of modern word processors: the human produces the content, and the program corrects spelling and ensures that sentences are well formed and grammatical. In this process, the technology performs the editing function: copy editing and some level of line editing. To put it succinctly, the human is the author, and the technology is the editor.

The recent wave of generative AI reverses those roles: the human prompts the system to generate content and then edits the output to fit its purpose. The AI becomes the author, and the human is relegated to the role of editor.

This exchange of roles leads to ethical issues around authenticity: humans are tempted to continue claiming authorship of the generated material, presenting work that is not their own as if it were.

Inauthenticity in Academia

Academia is largely concerned with truth, with what is authentic. Generative AI undermines that by providing students with questionable content. Because LLMs generate text by predicting what is likely to come next, they can produce plausible-sounding sources and data that are simply false; when prompted for sourcing, generative AI has produced convincing citations out of thin air. Universities and other school systems will need to educate students about the danger of assuming that a document that reads well and appears substantiated must be true; they must safeguard what is authentic, what is true.
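To make that mechanism concrete, here is a deliberately toy sketch of next-token sampling in Python. The vocabulary and probabilities below are invented for illustration (a real LLM works over tens of thousands of tokens), but the objective is the same: choose what is likely, not what is true.

```python
import random

# A toy stand-in for an LLM's next-token distribution after the prompt
# "According to a 2019 study in". These probabilities are invented for
# illustration; they reflect plausibility in training data, not facts.
NEXT_TOKEN_PROBS = {
    "Nature,": 0.35,
    "Science,": 0.30,
    "the Journal of Applied Psychology,": 0.20,
    "a leading peer-reviewed journal,": 0.15,
}

def sample_next_token(probs):
    """Sample the next token in proportion to its likelihood, which is
    the core of how generative text models produce output."""
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    return random.choices(tokens, weights=weights, k=1)[0]

prompt = "According to a 2019 study in"
completion = sample_next_token(NEXT_TOKEN_PROBS)

# The result reads as well-sourced, yet no study was ever consulted:
print(prompt, completion, "...")
```

Nothing in this loop ever checks whether the cited study exists; plausibility is the only criterion.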

In addition, generative AI creates another challenge for authenticity: outsourcing the task of producing well-written papers to AI is now prevalent in higher education. Multiple universities have returned to in-person written or oral exams to ensure that the knowledge and written materials are, indeed, produced by the student rather than an online tool.

Historically, identifying reused or plagiarized material was fairly straightforward, with online tools available to detect plagiarism and paper reuse. With the rise of generative AI, this check has become much more difficult. OpenAI and others have been working on watermarking generated content, but it remains a work in progress (with adversarial approaches to thwart any safeguards already surfacing). One professor attempted to use ChatGPT itself to identify students who might have used it, resulting in the entire class being accused of cheating (inaccurately, of course).
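One published idea for such watermarking, sketched below in simplified form, is a statistical "green list": generation secretly favors a pseudorandom subset of words, and a detector later checks whether a text contains suspiciously many of them. This is an illustration of the general technique only, with an invented vocabulary and key, not OpenAI's or any vendor's actual scheme.

```python
import hashlib

# A tiny illustrative vocabulary; a real system would use the model's
# full token vocabulary and a secret key known only to the provider.
VOCAB = ["the", "a", "study", "shows", "results", "data",
         "model", "suggests", "evidence", "analysis"]

def green_list(prev_word, key="invented-demo-key"):
    """Deterministically mark half the vocabulary 'green', seeded by
    the previous word and a secret key. A watermarking generator
    biases its sampling toward these words."""
    def score(word):
        digest = hashlib.sha256(f"{key}:{prev_word}:{word}".encode()).hexdigest()
        return int(digest, 16)
    ranked = sorted(VOCAB, key=score)
    return set(ranked[: len(VOCAB) // 2])

def green_fraction(words, key="invented-demo-key"):
    """Detection: the share of words falling in the green list for
    their context. Unwatermarked text hovers near 0.5; heavily
    watermarked text scores much higher."""
    hits = sum(1 for prev, cur in zip(words, words[1:])
               if cur in green_list(prev, key))
    return hits / max(len(words) - 1, 1)

# An adversary who paraphrases or swaps words dilutes the signal,
# which is one reason these safeguards remain a work in progress.
print(green_fraction("the data suggests the model shows results".split()))
```

As the final comment notes, light paraphrasing degrades the statistical signal, which is exactly the adversarial cat-and-mouse game the previous paragraph describes.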

Inauthenticity in the Creation of Art and Writing

If academia draws a clear line between right and wrong in exams and classwork, the world outside academic institutions presents an ethical quagmire when it comes to generated content.

For example, ghostwriting (by humans) has been a practice for centuries. At times it is an open secret that someone’s book or other content was actually written by a third party (and at times the ghostwriter even receives attribution); in other instances, the lines of authorship are not nearly as clear. Ghostwriting, though, is governed by contract law: the true author holds intellectual property rights over the content they create, which they then sell to the public author of the piece.

With the use of generative AI, that contractual line may become blurred: who owns the rights to the generated content? Is it the AI system? The “prompter” of the system? Or might it be the creators of the content that the AI system drew from? (See the ongoing Getty Images lawsuit as an example of this.)

There are no easy solutions to these situations; the nature of the content, the forum for which it was produced, the perceived benefit of claiming authorship, and a host of other factors all bear on the ethics of claiming authorship in a given case. A New York Times bestseller is a different prospect than a post on a low-slung personal blog, but the ethical concerns remain the same.

The core truth is that with generative AI, the AI system steps forward as the author of the artwork or the written creation, while the human user steps back into the role of editor.

Safeguarding Authenticity

Flipping these roles of author and editor erodes the authenticity of any created material and raises a series of ethical questions that must be asked. Going forward, how can we ensure that the knowledge demonstrated in created material actually originates with a human and not with an LLM? How can we trust, or prove, that someone was the creator of a particular document or piece of art? How can we (or should we) credit the AI systems we use with proper attribution of their role?

Ultimately, how we guard against the temptation to claim authorship of these creations, when we are merely their editors, will determine whether authenticity and truth in our society continue to erode or are safeguarded for future generations.

Author

Mike Kiser

Director of Strategy and Standards, SailPoint

Mike Kiser is insecure. He has been this way since birth, despite holding a panoply of industry positions over the past 20 years—from the Office of the CTO to Security Strategist to Security Analyst to Security Architect—that might imply otherwise. In spite of this, he has designed, directed, and advised on large-scale security deployments for a global clientele. He is currently in a long-term relationship with fine haberdashery, is a chronic chronoptimist (look it up), and delights in needlessly convoluted verbiage. Mike speaks regularly at events such as the European Identity Conference and the RSA Conference, is a member of several standards groups, and has presented identity-related research at Black Hat and Def Con. He is currently the Director of Strategy and Standards at SailPoint Technologies and an active IDPro member.
