The rapid and continuing rise in the development and use of Artificial Intelligence (AI) devices raises many questions, concerns and hopes for the future of humankind, if not the present. Generative AI (GenAI) – sometimes conflated with AGI (Artificial General Intelligence), which is a different thing altogether – combined with LLMs (Large Language Models), is rapidly becoming part of daily human activity in inquiry, writing, and communication. We no longer communicate with each other on the internet (assuming you have access to a computer and the internet); we are communicating with the internet. One begins to wonder if human intercourse is being replaced by silicon substitutes. Where all this takes us is uncertain, and for many, the potential for unintended adverse consequences for humankind is worrisome indeed. Perhaps more worrisome is that many of us don’t seem to see what all the fuss is about, or aren’t paying attention at all.

If that isn’t worrisome enough, the most important question remains: will authors of the human variety lose all hope of being valued for their writing – creating original, authentic works, finding their audience, generating reasonable income and recognition for their efforts, and protecting their intellectual property? Ah, there’s the rub. (And if you missed the rub, do you think the author of that sentence was Doug Jordan, or an AI bot?)
There are many concerns for the future of the literary arts in the face of ‘competition’ from artificial authors. Here are a few:
- Unauthorized (interesting word) and uncompensated use of copyrighted material by developers of AI technologies in ‘educating’ the machines behind Large Language Models[1].
- Piracy – the theft of published copyrighted material, re-presented and often re-sold as the product of someone (or something) other than the original author.
- Plagiarism – the representation of another person’s language, thoughts, ideas, or expressions as one’s own original work; although many types of plagiarism may not meet the legal threshold of copyright infringement, passing off another’s work as one’s own is regarded as an ethical breach.
- Beyond the loss of income for the originator of the written pieces, and the loss of intellectual property rights, even jobs for professional writers (low-paying as they are) are at risk of elimination. With the prospect of ‘AI writers’ being cheaper than human writers, and therefore a competitive advantage for producers of AI-generated content, human authors may find no work at all. (The capital equipment and energy costs of running the vast server farms that host the LLM/GenAI functionality are not nothing, but they are hard to factor in because they are distributed across a vast market of other users.)
- Creativity and authenticity become a confused mess. With the current state of GenAI/LLM technology, impressive as it is at producing human-seeming text, such writing is not imaginative, creative, original material; it is a regurgitation of bits and fragments of the style and usage of existing writers. As an article for the [US] Authors Guild puts it: ‘These AI-created “new” works, as such, are not really new or truly creative. Rather, the AI system ingests existing works that have been broken down into data points, and then rehashes them—albeit in a very complex way. As Dr. Alison Gopnik, a professor of psychology and part of the AI research group at the University of California, Berkeley, puts it: “We might call it ‘artificial intelligence,’ but a better name might be ‘extracting statistical patterns from large data sets.’”’
- Authentic original or ersatz copy? With increasing production of AI ‘new’ material based on the catalogued works of human authors, the question becomes: will the consuming public know the difference between the work of a ChatGPT surrogate author and a new original work by the human author? And what difference might that make? Will humans, in general, value human-created material over AI material, or not care?
As to human-created authenticity, it may well be that ChatGPT-style devices will never (never?) be capable of original new material because they are not truly intelligent, merely regurgitation machines. Until AI programmers can produce algorithms that emulate the full range of human intelligence – including intellect, wit, perspicacity, acuity, intuition and humour – ChatGPT-style devices cannot be creative in the literary arts (or any of the cultural arts) the way humans can. But don’t count on it. And anyway, readers may not care.
So what is a ‘real’ author to do? Count on discerning readers to recognize and value authenticity in writing? Hope that these readers are willing to pay the higher price of the human author’s book? (Did you know that, by some estimates, perhaps 45% of books sold even now were produced or partially produced by a generative AI device? When you see a book for sale at your local Shoppers Drug Mart book corner for $4.99, or less, did you wonder whether it was an authorized product of the original author?)
Should authors depend upon their governments to produce robust copyright laws to protect their intellectual property rights, and vigorously enforce them?
Darwinian authors probably need to pay attention: adapt or perish.
There is an emerging practice in the generative AI writing world known as “distant writing”: the human author acts as a narrative designer – defining requirements, style, and constraints – while Large Language Models (LLMs) generate the actual text. The method relies on ‘prompt engineering’ and iterative refinement, with the human meta-author curating and taking responsibility for the final product. In other words, the author sets the parameters of the narrative he or she wants to produce (plot, characters, scene, narrative style) and hands them to the LLM bot to craft a first draft – whole work, chapter by chapter, or paragraph by paragraph. The author reviews the resulting draft, accepts, rejects or suggests revisions, and asks the bot to produce a second draft, progressing iteratively until a final product emerges – or the thing goes in the trash for a complete rewrite. One has to wonder how different that is from conventional creative writing, in which the author produces a first draft, self-edits and revises, and then submits the work to an external editor; rinse and repeat.
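For the technically curious, here is a minimal sketch of what that drafting loop might look like in code. It assumes the OpenAI Python client and a ChatGPT-class model; the narrative brief, the revision note and the number of rounds are invented for illustration – this is the shape of the idea, not anyone’s actual tooling.

```python
# A hypothetical sketch of a 'distant writing' loop, not anyone's actual tooling.
# Assumes the OpenAI Python client (pip install openai) and an API key in the
# OPENAI_API_KEY environment variable. The brief and revision note are invented.
from openai import OpenAI

client = OpenAI()

# The meta-author's brief: plot, characters, scene, narrative style.
brief = (
    "Draft the opening scene of a literary novel. Protagonist: a retired "
    "engineer in Kanata. Tone: wry, reflective, first person. About 800 words."
)
revision_note = "Tighten the prose and sharpen the narrator's voice."

draft = None
for round_number in range(3):  # a few rounds of drafting and revision
    messages = [
        {"role": "system", "content": "You are a drafting assistant working for a human author."},
        {"role": "user", "content": brief},
    ]
    if draft is not None:
        # Feed the previous draft back along with the meta-author's notes.
        messages.append({"role": "assistant", "content": draft})
        messages.append({"role": "user", "content": revision_note})

    response = client.chat.completions.create(model="gpt-4o", messages=messages)
    draft = response.choices[0].message.content
    # In real 'distant writing' the human meta-author would review the draft
    # here and decide whether to accept it, change the notes, or start over.

print(draft)
```

In practice the revision notes would change each round, based on the meta-author’s reading of the latest draft – which is where the human judgment, and the responsibility, comes in.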
Will the ‘distant writing’ method save time over conventional writing? Perhaps – in researching the factual elements of the narrative, in lightning-fast composition and accurate typing (spelling and grammar are already mastered by the LLM), and, if you’re lucky, in producing a final draft in the early rounds. But I doubt it. Will the final product look and feel like the author’s own work, or like a slightly distorted version of it?
I’ve considered testing this method in a small way. Frame the project in terms of narrative goals and content. Give the project to a trusted and skilled engineering friend to conduct the distant writing process using ChatGPT (or another tool): have the bot absorb Doug Jordan’s narrative style by reading the contents of his previous novels into memory, then produce a first draft and subsequent iterations under the direction of the surrogate author until he is satisfied. In parallel, Doug Jordan will produce his own draft narrative using his usual writing (and typing!), editing and revising methods. Then compare the two outputs.
Hmmm,
Or maybe I’ll simply try a low-tech version of AI and merely dictate my draft, leaving MS Word to type the first draft. Or maybe get truly adventurous and ask MS Copilot to suggest revisions!
Will I feel somehow this is dishonest, and not a true version of my authentic creative self?
For that matter, will my readers feel abused by this deception?
But what if it sells really well??
Doug Jordan, reporting to you from Kanata
© Douglas Jordan & AFS Publishing. All rights reserved. No part of these blogs and newsletters may be reproduced without the express permission of the author and/or the publisher, except upon payment of a small royalty, 5¢.
[1] Large Language Models (LLMs) are machine learning models, trained on vast bodies of human text, that seek to emulate the writing styles (and, in auditory applications, the sounds) of human communication and composition.
The largest and most capable LLMs are generative pretrained transformers (GPTs), which are widely used in generative chatbots such as ChatGPT. LLMs can be fine-tuned for specific tasks or guided by prompt engineering. These models acquire predictive power regarding the syntax, semantics, and ontologies inherent in human language corpora, but they also inherit the inaccuracies and biases present in the data on which they are trained.