On Friction and Ambiguity
On process vs. product and how one student moved from a broad interest in music to a research question with a manageable scope.
In early March, I read one of Jeppe Klitgaard Stricker’s posts here on Substack. In The Synthetic Knowledge Crisis, Stricker writes that “AI-generated content…increasingly looks like knowledge while bypassing the very mechanisms that give knowledge its legitimacy.” He then reflects on the ways in which universities are complicit in the shift to emphasizing visible, quantifiable output over the messier process that often accompanies learning and research. He points, among other factors, to the pressure to publish, which is of course tied to research funding and university rankings.
There is much more to say here. Educators are encouraged to list measurable learning outcomes in course syllabi—even in fields such as literature and philosophy that do not yield quantifiable results. As an educator (I still think of myself as a teacher even though I am now editing full-time), I would much rather assess students on process than on product. The problem is that process is not visible—it is often messy and slow. And process is not rewarded in a system that prioritizes quick, measurable results.
Process vs. product is the focus of a piece Stricker and I recently co-wrote, a collaboration resulting from the conversations he and I had following my reading of his Substack. That piece, titled “The Disappearance of the Unclear Question,” was just published in UNESCO Ideas Lab as part of their “Futures in Education” series.
In “The Disappearance of the Unclear Question,” Stricker and I reflect on the effects of AI on the formulation and revision of research questions. We do not reject the use of AI. Chatbots can certainly be useful in helping students narrow research questions. However, we caution against bypassing too much of the cognitive friction that accompanies learning.
Cognitive friction is a term coined by software designer Alan Cooper in his 1999 book The Inmates Are Running the Asylum: Why High Tech Products Drive Us Crazy and How to Restore the Sanity. Cooper defines cognitive friction as “the resistance encountered by a human intellect when it engages with a complex system of rules that change as the problem permutes.” The term has since circulated widely, especially among thinkers writing about AI in education. Jane Rosenzweig, who directs Harvard’s Writing Center, has a great post titled “When the friction is the point.” Rosenzweig discusses the “productive friction” that accompanies the writing process and allows students to feel ownership of their work. The concept is also central to the piece Stricker and I wrote for UNESCO.
Chatbots can be effective in helping students refine research questions, but Stricker and I caution against conflating prompting with the broader work of refining a question. Research is a messy, iterative process. We can certainly use LLMs in an iterative way—in fact, the models may lend themselves to that approach. They have the potential to greatly assist students who might not otherwise be able to narrow a research question to a manageable scope.
However, if not paired with careful reading of published research, AI can let students gloss over the hard work of understanding. The speed of chatbots can also be a disadvantage. It is the slowness of the process—the messiness of following bibliographic trails, addressing contradictions, and taking notes—that allows students to interpret and connect ideas.
A widely discussed recent MIT Media Lab study found that people who used ChatGPT to draft essays had difficulty recalling their work and showed less brain activity than those who drafted without LLMs. Andrew R. Chow, who interpreted the MIT study for Time, notes that students who wrote essays using ChatGPT showed less ownership of their work. These results are unsurprising to those of us who have taught writing. (Aside: since this Substack is a sandbox for ideas, I'll add that ownership and authorship in the age of AI is a topic I will explore in a future post.)
Stricker and I anchor “The Disappearance of the Unclear Question” using an example from a student I taught in an introductory research writing class for undergraduates. That course was carefully scaffolded, structured around the process of moving from a topic of interest to a final paper that examined a question with a reasonable scope. Students had the entire semester to conduct research, compile an annotated bibliography, write drafts, and revise. I gave students substantial freedom to arrive at their own questions, which is challenging for many of them (challenging, in fact, for experienced writers as well).
The example Stricker and I use comes from a student I taught in 2018. Despite the lapse of time, I recalled this student's paper well—precisely because of the productive friction that accompanied her process. The student was a musician and decided in the first week of class that she wanted to write about something related to music. Her interest in music was her broad, unmanageable starting point. In her initial exploration of the topic (searching Google and academic databases), she came across the Mozart Effect, the disproven theory that playing classical music for infants improves spatial reasoning. Her essay, however, could not simply explain the theory; producing a summary is not answering a question. I often cautioned students that they could not pass my class by writing essays that could be mistaken for Wikipedia entries.
I also remembered the paper because I went on to use it as an example in subsequent semesters.
Through discussions (with peers, with me, and with librarians) as well as a great deal of reading, writing, and revising, the student produced a final paper that arrived at a nuanced conclusion and opened further questions (a good research paper should not arrive at a dead end). She examined the intersection of marketing and research to analyze why the public still clung to the Mozart Effect long after it had been disproven. She then discussed the theory's broader consequences and concluded that, although debunked, it had the beneficial effect of expanding the audience for classical music.
This first-year undergraduate concluded her paper with the lines: “There will never be a shortcut to intelligence just as there will never be a shortcut to scientific discovery, but if anything, this research has shown that the circuitous route to prove something can reap the greatest benefits of all.” I wonder how different her process, and her conclusion, would have been had she been able to use chatbots along the way. Stricker and I expand on these ideas in our essay.