Writing, Aesthetic Judgment, and the Spectre of ChatGPT

Abstract

The speed at which ChatGPT has penetrated higher education has been nothing short of astounding. ChatGPT is able to respond to prompts or commands and generate original content: in other words, it can write. For writers and readers, ChatGPT may trigger anxieties about the very essence of authorship and originality, which in turn reflect certain deeply held notions of subjectivity: between the lines of texts that we cherish lies an author we admire (or disagree with), a poet that moves us, a journalist we respect–perhaps ourselves, our colleagues, our students. It is hard not to feel unsettled by the current moment. In this essay I wish to reflect on the very practice of writing itself, and the values we ascribe to it, at this very moment at which its upending seems likely. I approach writing, and indeed reading, as fundamentally intersubjective aesthetic practices: to write is to make a judgment, to deem an experience worthy of capturing, worthy of sharing. Like all judgments, it is an outward plea for assent. To read is to accept that invitation, to yield oneself to another's perspective. Are we ready to cede the exercise of aesthetic judgment to artificial intelligence?

Citation: Tsigkas, Alexios. “Writing, Aesthetic Judgment, and the Spectre of ChatGPT.” The Jugaad Project, Vol. 5, No. 2, 2023, www.thejugaadproject.pub/writing-aesthetic-judgment [date of access]

A spectre is haunting universities–the spectre of ChatGPT. All the powers of old academia have entered into a holy alliance to exorcise this spectre; or, perhaps, not (Marx & Engels 2002, 218). Pardon my somewhat clichéd appropriation of this most famous of quotes but, in my defense, what inspired this essay was a course paper on Karl Marx, submitted and ostensibly authored by a student yet obviously churned out by ChatGPT. For lack of a better term, the essay felt...robotic. Competent, for sure, but a close reading showed little more than a superficial sum of commentary already available online. This is how I was introduced to ChatGPT. The now infamous AI-powered chatbot is able to respond to prompts or commands and generate original content on virtually any imaginable topic by mining the web for relevant data. The speed at which ChatGPT, released less than a year ago by OpenAI, has penetrated higher education has been nothing short of astounding; in more than a decade of teaching I've never experienced anything like it. The need to reflect on our teaching and evaluation practices feels urgent. Perhaps less urgent, but equally pertinent, is a collective reckoning of sorts with the very practice of writing itself and the values we ascribe to it, especially for those of us who variously partake in the knowledge economy. It does feel like a crisis, even as we're told not to panic but to focus, instead, on the opportunities artificial intelligence can afford us.

If writing, yours or others', is your vocation, it is more than likely that the spectre of ChatGPT triggers anxieties around motivation and originality, but also authorship and judgment. The shift from human to robot can be unsettling, and never more so than when the robot turns creative. While the more common algorithms that form the digital infrastructure of our daily lives are largely computational, GPT models (short for generative pre-trained transformer) rely on processes of so-called deep learning to go one step further: layered data input gives way to original data output—the texts "written" by ChatGPT. The input on which ChatGPT is trained is nothing short of, well, the internet at large, the sum total of documented and digitised human knowledge: fiction, non-fiction, journalism, academic articles, blogs, tweets; the world wide web is its oyster. It can synthesise and curate existing data on the web into something original: one may ask ChatGPT to write a paper, a cover letter, a short story, or an op-ed; further, one may request that ChatGPT emulate the tone or writing style of a particular author, journalist, or public figure.

Cinderella, according to ChatGPT. A screenshot from OpenAI's webpage: openai.com/gpt-4

This moment of writerly apprehension reveals certain deeply held notions of subjectivity; human subjectivity, that is: between the lines of the texts that we cherish lies a flesh and blood person, be it an author we admire, a poet that moves us, a journalist we respect—perhaps ourselves, our students. ChatGPT encapsulates our collective unease with artificial intelligence. What are we to make of a ChatGPT-penned text? What, if anything, is different about the chatbot? In fact, so much has already been written about ChatGPT that as I type these words, I wonder whether I have something new to add, something original—the all too familiar, all too human, imposter syndrome that afflicts writers (might chatbots suffer from imposter syndrome too one day?).

Let us take a step back. While artificial intelligence is certainly not new, the stakes have been raised now that machine learning technologies have become generative—productive, rather than simply operational. ChatGPT can read and write, and a fellow bot, DALL-E, another machine learning model developed by OpenAI, is able to generate all kinds of digital imagery (art?). In other words, artificial intelligence has entered the domain of aesthetics. In what follows, then, drawing on my own research on the judgment of taste, I approach reading and writing as aesthetic practices and texts themselves as aesthetic objects. I turn to the anthropology of art, specifically the work of Alfred Gell, to think through our relationship to the written word and to each other as we partake therein, spurred by texts into a shared aesthetic experience.

In an essay titled "The Technology of Enchantment and the Enchantment of Technology", Gell suggests that we knock art from its lofty aesthetic pedestal and examine it as if it were a technical system instead (1992). He asks that we take the object of aesthetic appreciation (an artwork, a text, a technique) seriously, in all its complex materiality, and yet implores ethnographers to adopt a "methodological philistinism" that resists our collective fascination with art, our collective willingness, in other words, to succumb to the allure of artworlds (ibid 42). It would be advisable to similarly approach tools such as ChatGPT and artificial intelligence at large with a healthy dose of philistinism, especially so in this moment of collective AI-induced moral panic. I suggest that we try to move beyond the sense of doom that permeates much commentary on AI, but also to resist a resigned embrace of what is to come. Rather than fixate on what AI can do, what it will soon be able to do or what it will never be able to do, I would like to reflect on how much we are willing to let it do on our behalf.

Gell writes of the ornate prow-boards that adorn the canoes of the Trobriand Islanders: "It is the fact that an impressive canoe-board is a physical token of magical prowess on the part of the owner of the canoe which is important, as is the fact that he has access to the services of a carver whose artistic prowess is also the result of his access to superior carving magic [...] the canoe-board is not dazzling as a physical object, but as a display of artistry explicable only in magical terms, something which has been produced by magical means" (1992, 46). This is what he calls the enchantment of technology, the effect that technique has on the beholder.

Notice that enchantment, the distribution of technical magic, is an intrinsically intersubjective process; there's the owner of the canoe and there's the craftsman, but also, I would add, the potential captivated observer, be it the anthropologist or a bystander. What these agents have in common is a shared, tacit acknowledgement of the magical nature of the canoe-board. Enchantment for Gell is a form of socialization whereby individuals become attuned to the dynamics of the collective.

The technology of enchantment described by Gell is not dissimilar to how aesthetic philosophy has treated the riddles of aesthetic judgment. In my own research I grapple with how taste judgments are made and shared, and the ways in which our judgments seek and achieve assent. Scholars of taste have long argued that the acquisition of aesthetic dispositions, of taste standards and preferences, is itself a process of socialization whereby we internalise the collective consensus, become enchanted by it—Pierre Bourdieu's famous habitus in other words. So, to be enchanted is simply to be social; to be enchanted by art is to be aware of the possibilities and limits of productive activity: "In reconstructing the processes which brought the work of art into existence, he [the spectator] is obliged to posit a creative agency which transcends his own and, hovering in the background, the power of the collectivity on whose behalf the artist exercised his technical mastery." If we accept that the work of art transcends the collective, we must also acknowledge that it "... is inherently social [...] it is a physical entity which mediates between two beings, and therefore creates a social relation between them, which in turn provides a channel for further social relations and influences" (1992, 52).

My invocation of Bourdieu's habitus–the social within–is deliberate. The concept of habitus was central to Bourdieu's attempt to articulate a theory of practice. Bourdieu argued against a sociological objectivism that reduces the social world to a readymade "... spectacle offered to an observer ..." and "... intended solely for knowledge ..." akin "... to a mere recording" (1980[1990], 52). Sound familiar? More recent scholarship on taste maintains that a focus on practice can illuminate our varied relationships to the appraised object: sociologist Antoine Hennion argues that taste is "... a problematic modality of attachment to the world," material and social alike (2005, 131). He advocates for "a pragmatic conception" of taste, which "... aims at restoring the performative nature of the activity of taste, instead of making it an observance," a spectacle, knowledge readily available to the 'objective' observer, be it a sociologist or a chatbot (ibid 135). Like taste, writing is performative, too, in the Austinian sense of the term, words doing things, words bringing worlds into being. Thus, to think of writing and reading as aesthetic practices is to highlight their relational and performative nature, to think of writers, readers, and texts as mutually constituted, rather than predetermined. A focus on practice reminds us to pay attention to questions of scale, pace, and materiality, including the various tools and technologies deployed in the making of texts–media such as pens and paper, keys and screens, the rules of grammar and genre, techniques such as narrative, metaphor, or rhyme... shall we add ChatGPT prompts to the list? It thus means to examine our varied attachments to the written word, to each other, but also to a world in flux, constantly evolving rather than fixed in datasets.
Ultimately, I turn to theories of practice and aesthetic judgment so as to reflect on the very nature of authorship (and perhaps authority, too) in this moment when its undoing feels likely.

My aim in foregrounding these approaches is, also, to offer a more nuanced analysis of the social mechanics of enchantment. I'm aware that the above reading of Gell's work could come across as suspiciously functionalist: I do not wish to reify 'the social'—a common trap for sociologically minded critics of artificial intelligence. Enchantment, to be sure, does not amount to agreement. Taste is constantly contested, negotiated, argued for and against. What I aim to foreground in this discussion, instead, is the intersubjectivity of aesthetic experience and, by extension, the intersubjectivity of judgment. Thus, to argue that enchantment is social is simply to trace the many relationships enacted in acts of creation, as well as in moments of appreciation. Our aesthetic judgments are not merely individual; they constantly strive for recognition, to transcend subjectivity and achieve objectivity. In his Critique of Judgment, Immanuel Kant called it subjective universality (1790[1987]); historian of science Steven Shapin argues that when it comes to taste, objectivity is, in fact, intersubjectivity and this is how aesthetic judgments can be shared, how they can, how they do transcend the individual and become social (2012).

A robot, writing. AI generated: hotpot.ai/art-generator

Artificial Intelligence muddles the relational fabric of enchantment, the cycle of creation and appreciation. What happens to the web of enchantment when the canoe-board craftsman is replaced by an algorithm, or a chatbot, or any AI tool for that matter? Students of algorithmic systems and artificial intelligence more broadly might well remind us that this web of enchantment is akin to a sociotechnical assemblage, neither fully human, nor fully technological–we should neither reify the social nor mythologise the machine. But what kind of thing is an algorithm, really? Surely, I would be remiss to reinforce strict divisions between the human and non-human, a division that reveals more about the fault lines of our collective imagination than the nature of things. And yet, in true ethnographic fashion, I take the intentions and proclamations of machine learning proponents seriously: humans are not as efficient as machines, they contend; we would be better off outsourcing our tasks, our judgments, to code. In other words, while AI is not, and perhaps will never be, ‘pure’ technology, attempts to contain the human, the social, are integral to its very constitution. To optimise is to codify, to remove any barriers to the uninterrupted flow of data. In the words of AI researcher Donald Michie, with machine learning, "a reliability and competence of codification can be produced which far surpasses the highest level that the unaided human expert has, ever, perhaps even could ever, attain" (quoted in Crawford 2021, 7).

The implications for academia are obvious and there’s no shortage of relevant content online—so much so that ChatGPT could breezily produce a self-reflexive essay on itself. Academics and educators are being urged to adjust, to think and to teach with, rather than against artificial intelligence—and rightly so, since machine learning is clearly not going anywhere anytime soon but also, significantly, because to moralise our apprehension would be to overlook the many nuanced ways in which technologies are received, contested, negotiated, mediated and appropriated. Does it count as cheating if my student (or I, for that matter) resorts to an online thesaurus? Or right-clicks on a word just typed to review synonym suggestions? Or uses any of the many AI-powered writing tools already available online that stop short of generating original text? Why does the arrival of ChatGPT in the halls of academia and beyond feel like such a watershed moment?

Yes, we task algorithms with all sorts of judgments that were otherwise the domain of clearly demarcated fields of professional expertise. Are we ready to cede the exercise of aesthetic judgment, too, to artificial intelligence? Even if ChatGPT's output is ostensibly original, its 'thinking' is not: it aggregates, averages, and recycles, and this is what makes the prospect of generative artificial intelligence so uncanny and yet so devoid of aesthetic agency. We have heard it, we have read it, before. Despite the remarkable influence that GPT technologies may wield in our shared sociotechnical worlds, what ChatGPT lacks is an agentive capacity of a different sort, one that transcends reader, writer, and text as it simultaneously brings them together in aesthetic community. Even as ChatGPT is able to tap into the technologies of enchantment described by Alfred Gell—it does, after all, identify recurring patterns and sequences from the depths of our collective data—the judgment it delivers is purely derivative, no more than a self-referential data loop, echoing what has already been said, what has already been written.

If asked, ChatGPT will willingly admit that it can neither feel, nor experience. It surely sounds reassuring that robots can't feel; we deem empathy an all too human quality, something that machines, no matter how sophisticated, lack. We strive to make sense of the world and each other, and perhaps feel compelled to capture and share our experience through words: we write, first for ourselves, then in the hope that our words will resonate with someone else; we read, in search of validation, in the hope that someone else might help us, in turn, to make sense of our own experiences and the world. Like Trobriand canoe-boards, there is something similarly enchanting about texts: despite the apparent solitude of the act, I have tried to show that reading and writing are collective enactments, which transcend both writer and reader, even as they bring them together in a delicate, tentative, intersubjective bond.

To write, then, is to make a judgment, to deem an experience worthy of capturing, worthy of sharing through words. Like all judgments, it is an outward plea for assent. To read is to accept that invitation, to yield oneself to another's perspective. This is what I wish to convey as I suggest that the acts of writing and reading are fundamentally intersubjective aesthetic practices. In the words of author Annie Ernaux, to write is to “shatter the loneliness of experiences endured and repressed, and enable beings to reimagine themselves” (2023).

Code may well carry the promise of objectivity. Yet to yield to the algorithm, it seems to me, is to abandon our search for intersubjectivity, to assume, somewhat arrogantly, that we have irrevocably resolved the conundrums of aesthetic judgment. Ironically, data objectivity takes us squarely back to the proverbial "there's no accounting for taste," to each their own—in other words, pure subjectivity: our data knows us best. Is this what we want? Can we trust artificial intelligence with a judgment so consequential, the task of sorting out our knowledge, our very experiences, and churning them out regurgitated? Gell concludes that "... what is uncertain is not the world but the knowledge we have about it" (1992, 57). The Trobriand Islanders don't know how a new canoe-board will turn out. They share a common aesthetic language and faith in the technical mastery of the carver, but each canoe-board enchants anew. Let's prioritise uncertainty over conviction. ChatGPT sounds a little too sure of itself.

References

Austin, J. L. 1962. How to Do Things With Words. Oxford: Oxford University Press.

Bourdieu, Pierre. 1980[1990]. The Logic of Practice, trans. by Richard Nice. Stanford: Stanford University Press.

Crawford, Kate. 2021. Atlas of AI. New Haven: Yale University Press.

Ernaux, Annie. 2023. Nobel Prize lecture. NobelPrize.org. Nobel Prize Outreach AB. Accessed 24 Oct. 2023. https://www.nobelprize.org/prizes/literature/2022/ernaux/lecture/

Gell, Alfred. 1992. “The Technology of Enchantment and the Enchantment of Technology.” In Anthropology, Art, and Aesthetics, ed. by Jeremy Coote and Anthony Shelton, 40–63. New York: Oxford University Press.

Hennion, Antoine. 2005. “Pragmatics of Taste.” In The Blackwell Companion to the Sociology of Culture, ed. by Mark D. Jacobs and Nancy Weiss Hanrahan. Oxford: Blackwell.

Kant, Immanuel. 1790[1987]. Critique of Judgment, trans. by Werner S. Pluhar. Indianapolis: Hackett.

Marx, Karl & Friedrich Engels. 1888[2002]. The Communist Manifesto. London: Penguin Books.

Shapin, Steven. 2012. “The Sciences of Subjectivity.” Social Studies of Science 42(2): 170–184.
