The authors compare ChatGPT to Roald Dahl’s Great Automatic Grammatizator
While OpenAI’s ChatGPT may look nothing like the Great Automatic Grammatizator, with its befuddling arrangement of wires, switches, thermionic valves, and electrical contraptions, its workings remain uncannily similar to the strange machine that Roald Dahl (1916-1990) conjured in his classic short story, ‘The Great Automatic Grammatizator’ (1954).
Much like the Great Grammatizator, ChatGPT can spit out coherent articles or stories (or, at least, seemingly coherent ones) at a phenomenal rate. Intriguingly, much as in Dahl’s fantasy, these can easily pass as original works by humans. In Dahl’s story, the young and ambitious Adolph Knipe, the protagonist and creator of the Grammatizator, leverages the machine’s unique ability to write stories, novels, and articles to good effect. With some fine-tuning of the machine, Knipe is able to publish an incredible volume of work as his own, past the scrutiny of the most fastidious of editors. Knipe’s machinations, fueled by his secret desire to become a famous writer, require him to keep his dealings clandestine.
Does ChatGPT, like the Grammatizator, herald a similar outcome today, and with it the end of human creativity and industry? We address this question here. Let us begin by making sense of how ChatGPT works.
At its core, ChatGPT belongs to a class of large language models (LLMs) called transformers (ChatGPT stands for Chat Generative Pre-trained Transformer), coupled with a chat, or conversational, interface. A deep learning model, built from many layers of artificial neural networks (ANNs), sits at its heart. These networks can extract patterns from massive quantities of data and, more importantly, “understand” the context of a human-machine conversation through the mechanism of self-attention.
Self-attention, very simply, reflects a transformer’s ability to focus on different parts of an input sequence (here, text) in developing its “understanding” of a given context. “Understanding”, here, is not understanding as we know it; loosely speaking, it refers to the statistical association of words and phrases in a dataset. For instance, the word ‘water’ is most frequently associated with words like ‘drinking’ or ‘cleaning’. So the transformer understands water to be associated with either the act of drinking or cleaning, depending on the larger context. The transformer “learns” these associations during its training phase, first through supervision (predicting text it has already seen) and later through reinforcement (a reward and punishment system guided by human feedback). Learning, in turn, entails searching, scraping, and sifting through copious amounts of data (from the Internet and other sources), as mentioned before.
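For the technically inclined, here is a minimal sketch of the self-attention computation (scaled dot-product attention) in Python. The toy sequence, dimensions, and random weights are our illustrative assumptions; they are nothing like ChatGPT’s actual scale or parameters.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the chosen axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence of token vectors X.

    Each token "attends" to every other token: the attention weights encode
    how strongly the words in the sequence are associated with one another.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv        # queries, keys, values
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)          # pairwise association scores
    weights = softmax(scores, axis=-1)       # each row sums to 1
    return weights @ V, weights              # context-aware token vectors

# Toy example: 3 "tokens" with 4-dimensional embeddings (illustrative only)
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 4))
Wq, Wk, Wv = (rng.normal(size=(4, 4)) for _ in range(3))
out, attn = self_attention(X, Wq, Wk, Wv)
print(attn.round(2))  # which token attends to which
```

The attention weights are, in the end, just numbers derived from co-occurrence patterns in the data: association, not comprehension.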
Over time, the transformer adapts and gets better at responding to queries. We interpret the responses of a “well-trained” transformer (such as ChatGPT) as intelligent behavior, or as reflections of true understanding by a machine.
Machines are phenomenal at doing certain tasks, but are they equally great at being? We feel this is an important question to ask ourselves before we start crediting machines with superhuman (or, at least, human) capabilities. Our inherent penchant for attributing human characteristics to the inanimate is self-serving in this regard; it ends up adding to the intrigue surrounding entities like ChatGPT. This intrigue can become grist for the mill of a cleverly crafted marketing program, leading to the extensive adoption and usage of contrivances like ChatGPT.
This intrigue can be easily augmented through crafty tweaks to the machine. For instance, introducing a delay before typing out the response to a deeply philosophical question (e.g., what is the meaning of life?) may create the aura of a machine actually thinking. Further, toying with the blinking rate of the cursor (a slow rate portraying, perhaps, deep introspection; a faster rate signaling that the thinking is nearly complete) can produce dramatic effects. A maverick developer may even engineer a mechanism to introduce a Freudian slip into an ongoing conversation at an opportune moment, thereby creating a pretend window into the machine’s “true” feelings or mind. However, an impression of thinking is not the same as true thinking; the methods required to achieve each are markedly different, as we discuss below.
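To see how cheaply such an impression can be manufactured, consider a toy sketch in Python (the pauses, their durations, and the canned reply below are purely our illustrative assumptions, not anything a real chatbot is known to do):

```python
import sys
import time

def typewriter(text, think_seconds=2.0, char_delay=0.04):
    """Print a canned reply with an artificial 'thinking' pause and a
    character-by-character typewriter reveal. Purely an illusion: nothing
    is computed during the pause. All timing values are illustrative.
    """
    time.sleep(think_seconds)        # fake "deep introspection"
    for ch in text:
        sys.stdout.write(ch)
        sys.stdout.flush()
        time.sleep(char_delay)       # the slow reveal suggests deliberation
    print()

typewriter("The meaning of life is a question each of us must answer.")
```

Nothing happens during the pause; the “thinking” is pure theatre.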
LLMs like ChatGPT work primarily with data, on the basis of statistical considerations. In this regard, LLMs have been christened, perhaps a tad simplistically, ‘stochastic parrots’ that merely blurt out canned responses to queries. More significantly, being a parrot here implies a lack of thinking, imagination, introspection, questioning, and deliberation: things that are deeply human. Socrates exalted dialogue, debate, questioning, and thinking above written text as instrumental to learning.
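What “parroting statistics” means can be seen in a deliberately simplistic sketch: a bigram model in Python that generates text purely from word-follows-word counts. The tiny “corpus” is our invented assumption; a real LLM is vastly larger and subtler, but the statistical principle is the same.

```python
import random
from collections import defaultdict

# Tiny training "corpus" (an illustrative assumption, not real training data)
corpus = ("water is for drinking . water is for cleaning . "
          "thinking is for humans . questioning is for humans .").split()

# Count which word follows which: the model's entire "understanding"
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def parrot(start, length=8):
    """Generate text by repeatedly sampling a likely next word.

    There is no meaning here, only frequencies of association: the more
    often word b followed word a in the data, the likelier it is emitted.
    """
    word, out = start, [start]
    for _ in range(length):
        if word not in follows:
            break
        word = random.choice(follows[word])  # sample proportional to counts
        out.append(word)
    return " ".join(out)

print(parrot("water"))
```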
The famous Socratic method forms the very foundation of courtroom deliberations and of its business-school offshoot, the celebrated case method of learning and teaching à la Harvard Business School. The Socratic method is characterized not by meek acquiescence to viewpoints or acceptance of arguments, but by tumultuous debates, extensive deliberations, courage of conviction, assiduous questioning, deep introspection, and critical openness. Deep thinking, however, does not come easy. Like all great faculties, it needs to be consciously and continuously honed. It is easy to be lured by the easy fixes provided by thingamajigs like ChatGPT; the temptation is very real.
Unquestioned submission, subservience, indolent dependence, and acceptance of the ready-made output of these contraptions will, ultimately, leave us with the short end of the stick. We will end up refining mediocrity, wherein ChatGPT (or the likes of it) becomes our only saving grace (much like the Grammatizator ended up being for all the run-of-the-mill writers). Apathy towards thinking may even set off a vicious cycle (less thinking, greater dependence on ChatGPT, hence even less thinking, and so on) driving us to a point of no return. Ultimately, the brand ‘I’ will be at stake, much like the writers in Dahl’s fantasy who stopped writing. These writers relegated writing, their very raison d’être, to the Grammatizator, and signed a contract with Knipe’s agency that prevented them from ever writing again. In essence, they gave up their most important asset: their name, or their brand identity.
While the macabre situation we paint here for the human intellect may seem unreal, distant, or, at best, a fantasy, the early signs are already beginning to show. What if we declared that this very article was written by ChatGPT? Going forward, we must retain faith in our own faculties and make it our life’s mission to question and think, especially when our thinking itself is at stake. To think is to be (cogito, ergo sum, as Descartes put it).
So, taking the cue from Descartes, and thereafter following Heidegger (from his book What Is Called Thinking?) and Nilkantha Bagchi, the “broken intellectual” played by Ritwik Ghatak himself in his critically acclaimed Bengali film Jukti Takko Aar Gappo (Reason, Debate and a Story) (1974), we implore one and all to: Bhabo, bhabo, bhaba practice koro (think, think more, and keep practicing thinking).
The piece has been written by S Subramanyeswar, group CEO – India, and chief strategy officer – Asia Pacific, MullenLowe Group, and Rahul Kumar Sett, chairperson, Center of Excellence in Brand Management at the Indian Institute of Management Nagpur.
(The article was first published on Campaign India)