First Tech, Then Rules


Terry H. Schwadron

March 28, 2023

A new research study suggests that the introduction of AI language and writing programs will have a significant impact on the American workplace.

A University of Pennsylvania/OpenAI study suggests that some tasks at 80 percent of U.S. workplaces, including schools and health care settings, could be affected as companies formally adopt programs like ChatGPT or its rivals, and that those changes could affect productivity, jobs, and, eventually, public policy. The study suggests that a fifth of workers could see half their tasks go away.

It hardly seems news that automation of any kind could displace human work, nor that the constant desire for higher productivity and corporate profit in our society drives technological research towards the next big thing.

Periodic changes in work cut for good and bad, of course. The study does trigger thought about whose jobs stand to undergo change: As The New York Post noted, mathematicians, interpreters, accountants, legal secretaries, writers, and the public relations industry could see a high impact, while maintenance workers, cooks, mechanics, floor-layers, meat-packers, and stonemasons would see little change.

Unaddressed by the study, but evident from early experimentation with these writing and language programs, are the implications for student cheating, plagiarism, putting words in someone else’s mouth, and the sometimes troublesome, regurgitated narration of past events as told through the voice of mechanical internet searches.

Once again, we are launching significant technology before thinking through how we best use it or what regulations are needed to guard against abuse.

Routinizing Information

Banks and lenders like car dealerships have long automated contracts for loans, just filling in blanks for terms and signatures. E-file tax programs are built around making annual returns more routine and less time-consuming. Marketers keep multiple versions of routine mass press releases, and hospital nurses have loads of repetitive paperwork to hand out when you or I happen to pop up for a particular treatment.

Hell, most of our political speeches seem as if they have been written by and for machines that act in a fully prescribed manner. Just plug in “in the style of Donald Trump” and watch how predictably the machine spits out a message of electoral vengeance and not-so-veiled promises of death and destruction for those opposing him.

For years, then, we’ve been using copy-machine workarounds to deliver repetitive information. A computer program good enough to spit out more tailored information on demand is just another step.

But would we ask a machine to investigate a crime or to diagnose illnesses or to do journalistic reporting to connect previously unrelated matters? We need to recognize AI as a tool and not a replacement vehicle.

It offends our sense of human control to think that machines might be better than us at crafting a research paper, writing an essay or a piece of music, or creating a plan for curing cancer. We want to believe that the world is too complicated to let machines summarize what to say, even if they do it accurately.

A recent Politico essay reported on the cultural difficulty of asking ChatGPT to report on the history of slavery attitudes among the Founding Fathers. As with searches themselves, a lot depends on what the person ordering up the automated text actually asks for.

A couple of friends have asked the machine for a story about themselves, which came back riddled with inaccuracies that included their deaths.

So, we may not be quite ready for prime time, but we’re getting there.

Tech First, Then Warnings

Timnit Gebru, of the Distributed Artificial Intelligence Research Institute, which analyzes the risks of AI systems, warned on 60 Minutes this month that we are releasing more technology without having thought through needed oversight. Our society makes it too easy to see technology used to misinform or to distort — as we have seen with election information and social media.

We had congressional hearings last week about TikTok, basically standing in for other Big Tech companies, exactly because years after watching the introduction of phone apps and marketing aimed at the young, we’re now overly concerned about both mental health among youths bombarded with suicide messages and the proliferation of privacy-invasive collection of personal data. Of course, in TikTok’s case, Congress’s ire is up because its parent company is Chinese-owned, adding a layer of espionage fear to the mix.

Our capitalist system rewards the company that gets new technology out the door fastest and cheapest, not the company that approaches such an undertaking in coordination with those keeping health, labor, and socio-political protection in mind.

The result is that we already have widespread concern among educators that students are turning in machine-written essays as their own work, for example. Only now, after recognizing a problem, are we seeing some effort to rethink what we might ask of students or to ask the same companies that will profit from AI for tools that can distinguish machine-written language from human work.

A study that projects that this kind of artificial intelligence language programming will transform tasks in the workplace is only a beginning. We need to decide how to adjust jobs to take advantage of automating more routine information gathering, for example, and to challenge our human workforce to raise higher-level questions about the use of such programming.

A robotic arm in a manufacturing plant that frees a worker for a higher-order job not only adds to productivity and potential factory safety but pushes us to think anew. A factory that cites robotics as an excuse for layoffs is a lazy company showing it cares more about profit than workers.

The same is true for the white-collar workforce now finding itself in the teeth of technology.

##

www.terryschwadron.wordpress.com
