Writing is hard and time-consuming. Thanks to ChatGPT, it just became far easier and faster. Rudimentary chatbots have been writing simple articles, such as sports game summaries, for years. ChatGPT, from artificial intelligence supernova OpenAI, leaps far beyond them, trained on billions of internet texts and able to generate passable prose from prompts and questions. It can even write software code. But it’s a Large Language Model, or LLM, so it is still not great at math, or lots of other things. It makes funny mistakes.
So what. It’s powerful enough to begin transforming dozens of tasks and businesses. And it doesn’t need to be perfect or even human-like to be the source of unimaginable mischief, too.
The Good
Joel Mokyr’s book Lever of Riches springs to mind. Just as computers could amplify and outdo many human capabilities, such as calculation, leading to boundless new products and even new industries, ChatGPT can amplify and outdo an even more human activity – writing. The quantity of text will explode. But will the quality of text – and of the embedded ideas – follow? That’s a bigger question.
Arnold Kling offers five new GPT-based businesses “you should start today.” Mark Mills calls it “useful computing” and reminds us that even stunts – ChatGPT can pass a medical exam! – often point toward much bigger transformations.
Mills also notes the computing power dedicated to training A.I. models has grown 300,000-fold over the last six years. Which helps explain the deep integration of OpenAI and Microsoft’s Azure cloud computing infrastructure. The idea that A.I. is approaching human-level general intelligence is wrong for many reasons – among them, its voracious consumption of energy and space when compared to the elegant efficiency of the human brain.
Behind the scenes, ChatGPT is based on fancy pattern-matching math. But it’s also a new user interface, which might subsume large portions of Google, Wikipedia, Apple Siri, and Amazon Alexa, thus giving search a jolt in both form and function. Search is educational; it helps us learn. Search is augmented memory; it massively expands our lowly biological data storage. But now, ChatGPT’s search-plus will also amplify and democratize human output.
Regular people will get access to previously arcane tools. Wolfram Alpha has already pointed the way on turning language into math. Replit is a fantastically intuitive coding environment. As Andrej Karpathy says, “The hottest new programming language is English.”
ChatGPT will amplify machine output, too. Teams are already replacing the backend of software stacks with LLMs. ChatGPT can also turn unstructured data into structured data – in other words, create a database from a data jumble.
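How might that extraction work in practice? Here is a minimal sketch, assuming the OpenAI Python client (v1-style API); the model name, prompt, and JSON field names are my own illustrations, not anything specified in the article:

```python
# Illustrative sketch: using an LLM to turn free-form notes into database rows.
# Assumes the OpenAI Python client (v1 style) and an OPENAI_API_KEY in the
# environment; the schema below is hypothetical.
import json
from openai import OpenAI

client = OpenAI()

def extract_records(jumble: str) -> list[dict]:
    """Ask the model to pull structured rows out of unstructured text."""
    prompt = (
        "Extract every person mentioned below as JSON objects with the keys "
        '"name", "role", and "company". Return only a JSON array.\n\n' + jumble
    )
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # keep the output as deterministic as possible
    )
    # The model may still wrap its answer in prose; real code would validate.
    return json.loads(resp.choices[0].message.content)

notes = "Met Jane Doe, CTO at Acme, and her colleague Bob Lee from sales."
print(extract_records(notes))  # e.g. [{"name": "Jane Doe", "role": "CTO", ...}]
```

The point is not the particular schema but the pattern: a sentence of instructions replaces what used to be a custom parser.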
Lots of ink has been spilled, yet again, about the jobs ChatGPT will render obsolete. And there will be many. Used creatively, however, ChatGPT could benefit nearly anyone and be a central component of a new productivity boom. It might debug software for coders. Researchers can process 10 times more information. Writers and managers can generate new ideas and basic outlines, allowing them to do more of what they do best. It can offload tons of busy paperwork. People who don’t like to write text or code, but who might have other amazing talents – in art, mechanics, sports, or sales – can now turn their ideas and personalities into written output, and spoken output, and visual output.
The Bad and the Ugly
Such a powerful and versatile tool will also amplify bad things, such as disinformation and the supposed solution – censorship.
Marc Andreessen, as usual, is correct: “The censorship pressure applied to social media over the last decade pales in comparison to the censorship pressure that will be applied to AI.”
Think about today’s chief sources of disinformation, filtered, fused, and flung in even greater volumes across the globe. For example, take bogus data or statements from Hamilton 68, the Centers for Disease Control and Prevention, or 51 former intelligence officials. Then turn these falsities into thousands of articles and wikis and millions of clips, tweets, and toks. Now, you don’t need New York Times reporters or human trolls to push propaganda. You can mash up and automate the entire process.
The time and energy required to police these bots will also increase, and Elon Musk’s Twitter 2.0 is already on the job. GPTZero is a new tool that sniffs out A.I.-based text, and OpenAI itself just yesterday released a similar tool. New anti-spam counter-technologies will arise. We’ll especially need a larger number of honest humans – call them truth tellers – to sift through the exafloods of mis/dis/information. These analysts will need to play a far more sophisticated game of Fact Chess to rebut the phony Fact Checkers™. The Department of Homeland Security, British Army 77th Brigade, and so-called disinformation watchdogs will then redouble their censorship efforts.
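How do such detectors work? One common signal is perplexity: machine-generated prose tends to be statistically unsurprising to a language model. Here is a rough sketch of that idea, scoring text with GPT-2 via Hugging Face’s transformers library (illustrative only; not GPTZero’s or OpenAI’s actual method):

```python
# Illustrative sketch of perplexity-based A.I.-text detection -- the rough
# idea behind detector tools, not any vendor's actual implementation.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

model = GPT2LMHeadModel.from_pretrained("gpt2")
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    # Machine-written text tends to score *lower* (less surprising) than
    # human writing under a language model.
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean cross-entropy per token
    return float(torch.exp(loss))

# Lower scores hint at machine authorship; any cutoff is a judgment call,
# which is why these detectors produce both false positives and negatives.
print(perplexity("The quick brown fox jumps over the lazy dog."))
```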
Another potential danger of the ChatGPT era will be the amplification of an already dangerous trend – the abandonment, even abolition, of human judgment. Across every organization and institution, we are suffering from a lack of action and accountability. Endless loops of compliance consultants check that things are being done “properly,” but what is getting done?
Ideally, A.I. should routinize the thankless tasks but elevate essentially human efforts. Entrepreneurial action and accountability must ultimately be human.
Some of A.I.’s great contributions may, for example, come in medicine and healthcare. But only if we allow physicians, patients, scientists, and entrepreneurs to build and use A.I. as they see fit.
A travesty of the Covid era was top-down, one-size-fits-all, medicine-by-decree, which denied experimentation, learning, and individual risk-reward calculations. It leveraged the imperious relationship of Medicare and giant health systems over doctors and patients. It accelerated the sad decline of doctors into mere box-checkers employing protocols written by Dr. Fauci and politicized medical associations. Now put Dr. Fauci in control of A.I., and inhumane medicine gets a hundred times worse. When mistakes are centralized, systemic risk explodes.
The rise of A.I. must thus be met with new rights and institutions which ensure it is a decentralizing force, as opposed to a consolidator of power. Who programs the A.I.? Which information is the A.I. allowed to consider? The cost of A.I. will determine which way many of these (de)centralizing questions tip.
OpenAI co-founder Sam Altman says it well.
"You should be able to write up a few pages of here's what I want, here are my values, here's how I want the AI to behave, and it reads it and thinks about it and acts exactly how you want, because it should be your AI."
Will Altman follow his wise words with action?
A version of this article first appeared at AEIdeas.