I Wrote This Column
Terry H. Schwadron
Jan. 30, 2023
As soon as we hear of a new scientific or technological development, we seem to react in an automatic, predictable, and nearly immediate manner:
What’s the practical application? How can we make money from this? Is it going to help people cheat?
Of course, the reactions tell us more about our embedded social and ethical values than they do about the latest invention or tech extension. A host of thinkers through the ages would note that our darkest reactions reflect a human tendency toward greed and selfishness that requires taming through regulations, laws, and restrictions if we are to come anywhere close to preserving a fair society.
They also tell us something about our communal fear of the New, our resistance to Change, and our discomfort with the Different.
We’ve been hearing a lot about computer-generated writing lately, with the public experimentation on ChatGPT, an artificial intelligence tool, or “generative pre-trained transformer,” that can take written prompts designating a topic and various stylistic parameters and turn out a credible essay, letter, form, or opinion article.
Indeed, the surprisingly successful results of this latest, still-developing experiment from OpenAI, underwritten by $10 billion from Microsoft, are both making fans of the New smile and worrying educators and others protective of jobs and the integrity of writing.
Public Testing
Since its public introduction on the OpenAI website, millions have tried it. You can too, just by registering on the site. OpenAI recommends that users check whether the responses are accurate or useful.
Even the initial results have seemed to astonish those posting comments on social media. The software is capable of learning from these experiments, presumably with an eye to closing loopholes that have allowed a series of errors in interpreting instructions, inaccuracies in writing and style, and the rest. That’s why the company made the software available for the public to try out.
Like it or not, ChatGPT turns out to be able to produce working computer code, college-level essays, scientific explanations, historical reviews, legal arguments — even some jokes — without working up a sweat. The sweating is left to professors and writers suddenly facing a future of job-security worries.
Tech reviews say ChatGPT seems different from previous artificial intelligence efforts — “Smarter. Weirder. More Flexible,” explained one tech writer. “ChatGPT can remember what a user has told it before, in ways that could make it possible to create personalized therapy bots, for example,” although its knowledge is “restricted to things it learned before 2021, making some of its answers feel stale.”
Rep. Jake Auchincloss (D-Mass.) delivered a two-paragraph speech written by ChatGPT on the House floor last week on a bill to create a U.S.-Israel artificial intelligence research center. It was believed to be the first computer-written speech delivered in the chamber — an effort that still required multiple edits and refinements — but Auchincloss, 34, said it was important because he is of a generation that will be dealing with AI into the future.
Maybe we should just skip the obvious — that most political speeches seem automated.
Fear Always
Of course, the real point is that, like the introduction of radio, television, the internet, social media, cell phones and the rest, this development marks the beginning of a new tool — whatever its eventual uses. It could be used for good or evil, or both, and it is not something we should either shun or fully embrace at its earliest appearance.
But we do need to grapple with the possibility of fraud in school essays, as well as with the ease of producing advertising copy. As we have learned from social media, the push of a distribution button makes it simple to spread misinformation and propaganda as well as credible information. With authorship suddenly in question, our antennae go up immediately at the possibility of fraud.
If it can immediately replace illegible doctors’ notes or ease the generation of accurate contract language, the software probably can serve to make certain rote parts of our lives easier, much as online forms for taxes, wills, and other everyday chores have done.
For me, writing always has been its own reward. Good writing requires good thinking, and writing is a means of assessing what information is at hand, what is missing, and how well I can express myself. If I’m not thinking clearly, it seems obvious that I cannot persuade you to agree with me. If I’m not organizing my thoughts well, I cannot fully understand the information presented to me by books, life, or various presentations. Clearly, I look to writing not as a chore but as a way of life.
That doesn’t have to change with an automated writer. The machine still needs to know what to write. Sure, a student could pass a test with a fraudulent essay; but last year, the same student could have gotten someone else to write the class paper, just as in previous years rewriting from the encyclopedia might have passed. The point is that the student went to school to learn something, and fraud just means passing without the learning. If it is that easy to pass the essay test, perhaps we’re asking the wrong essay questions.
One yoga journal that tried using ChatGPT to set up a yoga sequence, linked to music or aromas and tone, noted that it’s one thing to string together a bunch of sentences on a topic and another to mimic an expert. “While it may seem like a time saver, a computer program cannot capture the subtleties involved in carefully preparing bodies in specific ways for more intense poses and creating easeful and graceful transitions from one pose to another. As a yoga teacher facing a roomful of unique bodies, you need patience, practice, and presence — not your iPhone.”
Rather than ruing the end of human-generated writing (or human searches on the internet), perhaps we should look at the potential of AI systems to fundamentally reshape the process of creation.
##