AI and “writing in your own words”. On conversing with students

See the cover photo of the last blog – still not AI-created

Here comes the AI post related to teaching, building on the first one from a few weeks back. In the meantime, I had some nice conversations with some of you on AI, and gained some more insights related to students and their ways of dealing with it. Reminder: I think that a) AI is overrated vis-à-vis human intelligence, and b) it is REALLY important that we human brains & bodies continue doing the work of thinking, lest our thinking muscles atrophy. Thinking requires time, error, friction – it is a process first, and an outcome second.

With that, let me pivot to the core area of my life where AI plays a role – teaching. Smart AI experts will tell you that “100% of students use it”. See, for example, Bowen and Watson (2024), Teaching with AI. A practical guide to a new era of human learning, page 4 (in the first edition – just noticing that there is already a second one). That is not true based on what students tell me (I am assuming here that it would be hard for them to lie to my face). But of course, many students do use it.

What should we professors do about it? Interestingly, this is one of the few areas where our institutions have given us extremely vague guidelines, some would even say: freedom, which, as you may recall, is under threat in many other ways. Everything is at the discretion of the professor; increasingly, we are offered workshops on how to integrate and benefit from AI in teaching. But it really feels more like the administration thinks this is a problem faculty should figure out in the classroom, and they are watching (I would predict they get more proactive once the possibility of a lawsuit enters the picture). Indeed, I have had several conversations with colleagues – we are all dealing with this somehow, in uncoordinated ways, which must be confusing for students, to say the least.

So, a plurality of measures has been developed by thousands of college professors. Personally, I was pleased to see that the straightforward approach of “my teaching is now based on AI” has not been favorably received by students. In at least one case at Northeastern University, they thought they deserved more for their hefty tuition.

My own approach is limited, but has evolved, and in a way that I increasingly enjoy. I started by treating AI pretty much like plagiarism – you cannot steal work (plagiarism); nor can you have a machine produce your work for you. In other words, YOU should do YOUR work. How to deal with this exactly? Our online course management system has both a plagiarism checker and an AI checker, so I can produce reports if either appears in a student’s submission. Because the AI detection is not 100% reliable, I send the report to the student with a reduced grade (say: 60% of the text seems to be AI generated, so a grade of 10 points is reduced to 4) and ask them to come see me and tell me their story. I then ask them questions like: Did you use AI or did you write this yourself? If self-written, how did you write it? Any thoughts on why the AI detection flagged this section of the text? And so forth. At the end of this process, if a student tells me they did NOT use AI and I find this plausible, I change the grade. In other words, the AI detector is not treated as the ultimate authority; it only stands as the final word if the student does not come talk to me (which happens a lot).

These encounters with students have been eye-opening to me in several ways. First, more students come to see me, and I welcome the fact that this policy creates a new incentive for conversation; we often talk about other things as well. Second, several students have openly admitted using AI. They did it because of time pressure or other chaos in their lives that prevented them from completing the assignment properly. These students are honest, profoundly sorry, and often among the best students in class. I see them take responsibility for a mistake, and many of them vow never to do this again. I usually give them a second chance. After all, these students have a very clear understanding of what they are supposed to do.

Third, some students come in and are surprised, shocked and even ashamed that their work should contain AI use. They say they would never cheat and take pride in their work. We then try to find out what happened in the flagged text passages. Some say they use Grammarly to “clean up their language”. According to the detection tools, using this program should NOT be flagged, but apparently the boundaries between all these programs are getting blurrier by the day. For many of my students, English is not their first language. And even those for whom English is the first language use Grammarly. I often suggest not using it for their next submission to see what happens. And I tell them that it would be good for them to improve their writing independently, without relying on a program (as we had to do in the olden days when Grammarly did not exist – this usually earns me some confused looks).

Fourth, there are students who say things that I at first don’t fully understand. As the conversation goes on, I realize that they have a completely different idea of what “writing” and “doing research” mean. One student told me they did not use AI, except for structuring the essay. Ok, that is AI use. But why does the student think it is not? So I keep asking, and I hear things like: “I only ask xxx for the outline, and then I fill it in.” Fill it in with what exactly? And how is “structuring the essay” not part of the writing process? Again, my advice: try the next one without using AI.

In another case, a student said they did not use AI but then mused that perhaps AI use came up in the sections where they summarized literature, because they had translated that summary from Spanish into English. I asked: how did you translate it? Of course, the translation was done by Google, not by the student. I suggested that this meant they had not translated it, but had it done by a program; I explained that translation, done by humans, is complex work and that we have a program at the university for becoming a translator/interpreter. The student did not know that and became really interested in checking out the program.

Relatedly, I also realized that “summarizing literature” does not mean to some of my students what it means to me, namely “reading it (if not in its entirety, then cursorily), taking notes, and then writing summarizing sentences about it”. Instead, they find summaries of that literature already written online, copy and paste them, and then edit them. The notion of “writing in your own words”, which I use in abundance in my syllabi and assignment prompts, and which in my own college days really meant sitting down with paper and pen to do just that, has, if not evaporated, then profoundly transformed. Here, I nostalgically recall my final MA exams, handwritten, over 6 hours. What an achievement that felt like.

Bottom line: I think it is true that for kids, teens and people in their 20s, there is very little separation between life on- and offline. They do almost all their writing for college through internet-connected computers and phones. What are their own ideas in this context, and what are others’? They have a thought, check something real quick, include it in what they are writing, and why should they not have a chat with a chatbot about it, which will probably be helpful in the process? Many of us college professors also realize another parallel: what AI checkers detect as AI produced – generic writing, repetitive sentence structure, certain word choices, etc. – is EXACTLY how many of our real-life, flesh-and-blood students WRITE. And why is that? Probably because their lives take place in the same spaces where Large Language Models are being trained. The interconnectedness is undeniable.

I assign AI use in limited ways and to create a learning moment (for example: think about topics you want to write your research paper about; write three choices down; then ask AI for paper topics; then make a final choice and reflect on this process). So far, I can say that these reflective engagements with AI have helped me understand my students better, and I think they have clarified some things for them in a very uncertain space. Further, I have seen that MANY students have a clear understanding of authorship and their own creativity and intelligence. For example, some wrote in their reflections they “don’t trust” AI or prefer to come up with their own ideas because they find them more interesting. I feel energized by such answers and commit to supporting these young minds in their unique intellectual development.

To conclude, let me share one interesting chatbot answer. In a class on Postcolonialism, I asked students to answer the question why they did not speak Miccosukee (the language of one Native American tribe in South Florida). It was a surprising question to them, because the Miccosukee language community is very small; accordingly, many answered that they did not know anybody speaking it, that it was not taught in school, and that while they knew about the Miccosukee, they also knew that US history is based on erasure of indigenous peoples. In class, we expanded on the discussion re: colonial erasure. Here, I am sharing parts of the answer I got from ChatGPT to this question:      

“That’s a thoughtful question! I don’t speak the Miccosukee language primarily because: … My language abilities are based on the data I was trained on, and Miccosukee is an endangered language with limited publicly available resources … Indigenous languages like Miccosukee are often preserved by the community through oral tradition and selective teaching. That means the language may not be widely published or digitized—both of which are necessary for me to learn from. … Some Indigenous communities … view their languages as sacred or culturally sensitive. Not all of them want AI models or outsiders to access or replicate it, and that choice deserves respect.”

An “honest” answer in terms of recognizing the limitations of LLMs. AI “knows” that it knows nothing beyond the universe of truths and falsehoods the internet has become. Great answer, ChatGPT! As for us humans, let’s continue to cast the net of learning wider than that.

A family of black swans – probably unaware of AI and still living their lives

AI or not AI? Thinking about knowledge beyond internet uploads (part 1)

Against the odds, not AI-created (but a machine did help)

Does your life feel like it has been taken over by “Artificial Intelligence”? I have a very remote relationship to it but cannot say that it is not influential (while I was typing “Artificial”, Word already proposed “intelligence” to follow – machines telling you what is right have been around for a long time, after all).

A warning: this blog comes in two parts. As I started writing, I realized I had been thinking about AI a lot, but never really articulated a position outside of my brain or beyond discussions on one of the many new articles or expert assessments about it. Hence, a lot had been piling up inside me, and the post was getting longer and longer. The first part is about AI as we academics may be dealing with it in our own work, as researchers and knowledge producers; the second is about AI and teaching.  

Thinking back, AI became relevant for me for the first time in late 2022, when ChatGPT was launched; at the time I simply felt exhausted. It seemed to be a threat to all that I, and many like me, do. A bit like: you are standing somewhere, minding your own business, when suddenly a massive flood comes at you. All you can do is try to keep standing there, but you might be swept away and must focus all your energies on not drowning.

What is it that we do and that felt so threatened? Thinking. Creating knowledge. All of a sudden, it seemed like this complicated work could be done much better, and much faster, by a new invention (I try to resist the all-too-common anthropomorphizing of AI here). Why do something that can be done much better and more conveniently FOR you? Sounds like the story of modernization – thinking of the washing machine, the calculator, the computer – what could possibly be problematic about that? Perhaps that your thinking muscles, if you don’t use them, might regress.

There have been – and continue to be – so many prognoses about how AI will change our lives. Some are sensational, some dystopian, others ooze authority about how to best harness this new tool for your benefit and for the best purposes. In the academic world, it seems that many people follow the “harness for your benefit/best purposes” strategy.

I recall the first time I was directly affected by this kind of AI use: a colleague who had agreed to write a letter for my promotion file sent a draft of that letter to me and asked for my feedback (Was everything ok? Did I want to add anything?). I had one or two comments but really did not want to further burden the colleague, as I was truly grateful that they had taken the time. In that colleague’s reply, I learned that the letter had been composed by AI. That felt awkward. The letter sounded fine to me, a bit general, but checking all the boxes that a letter of support should check. My gratitude shrank, to be honest. Was I not worth the effort of a REAL person writing a recommendation for me? I get it, of course: writing substantive letters takes time, and we are all chronically time-constrained. Some time later, I heard colleagues talk about their routine use of AI for the many letters of recommendation they are asked to write for their students. The matter-of-fact way they mentioned this was so surprising to me that I could not even verbally express my surprise. Students and former students: as of this writing, I can attest that ALL of the letters I have ever written for you have been human-composed. By myself. I admit that I use a certain pattern (where do I know you from, what can I say about your academic achievements, which are your particular strengths …), and I cannot guarantee each letter was of high quality; but I took the time.

I kept thinking about this AI-composed letter by my colleague and can say I got used to this new reality. I am not bitter (to quote our favorite Miami comedian Dave Barry)! In retrospect, I should have asked HOW they used it – and could have learned something. Also, I admit that the many letters that I have to READ (like, for applicants to our Graduate Program) are often pretty generic, and sometimes of a quality that suggests AI assistance might have improved them significantly. Still: a piece of advice for all of you trying to get a job or get into a program: I learned from The Professor is In that when a lot depends on it – for example, when writing a cover letter for a job that you ACTUALLY want – AI is not going to get you on the shortlist.

When listening to colleagues – and I typically converse with social science and humanities folks about this – the most common AI use that I hear about is saving time in the complex world of knowledge production. For example, it helps to get an overview of relevant literature in a particular area – you don’t have to read everything from A to Z, and AI may be useful in getting comprehensive coverage or leading you to literature you were not yet aware of. As an expert in a particular field, you have the ability to assess what AI generates for you, so clearly, this new tool enhances the knowledge production you already know how to engage in. Sidenote: my AI-averse significant other sometimes checks AI tools in the field of his expertise and often gets responses to his questions that he is unsatisfied with (ranging from outright wrong to superficial to not based on pertinent literature). I guess this tells us that training a program on “what is out there on the internet”, aka Large Language Models (LLMs), does not mean you get good information, but rather a lot of replicated information. This cannot be terribly surprising to anyone – even journalist Andres Oppenheimer, who tells us that he has been using chatbots for getting his news, now warns us that these are not always right (and in fact, get “wronger” with each upgrade).

Is this thought still allowed today: can we conceive of knowledge as something beyond an internet upload? Is it perhaps something inside a human (or even non-human) brain and body and between people with brains and bodies? Something that is alive, evolving, context-dependent, molded by those who create it, use it, reuse it, apply it?

Like all of you, I get a lot of news about AI. There is the “no alternative” kind. It makes no sense to avoid it (I have done a lot of avoiding, as you can perhaps tell – but let me call that “keeping a sane space for human thinking and interaction”). Rather, “it” will take over. The developments are so rapid, the only thing we can do is try to keep up. Are the bots already leading, and humanity is running after them (if they let us)? I happen to think there is no such thing as “Artificial Intelligence”. What we rather have is “Predatory Pattern Recognition”, programmed by human intelligence. Predatory, because whatever online-available knowledge LLMs are trained on was produced by someone (except that we now hear increasingly about AI-generated published research – sigh). In their answers, AI tools come across as friendly, knowledgeable counterparts (anthropomorphization again – making people “feel comfortable” is a big part of the selling point). And how does the program “know”? Could it at least make transparent what data it draws on? I would call this “citing your sources”. In this context, Academia.edu has been in the headlines recently. Many academics use it to make their work more widely known, and some, especially those without institutional affiliation, use it as their website. It recently changed its terms of service, pretty much stating that it allows itself to do anything with the data you upload, including using it to train LLMs. Many academics erased their accounts. Academia.edu then backpedaled, but it is probably time to move to scholar-controlled online networks.

Drawing by Nick Sousanis

Through a colleague, I found this visual created by Nick Sousanis (very cool guy – check him out, he wrote his dissertation as a comic!). It seems like a good encapsulation of what AI does for us and why it is not a good idea to use it if you are interested in learning to think and keep those thinking muscles in good shape. While time might be saved, having the right answer to a question (outcome) is not the same as learning (process – nonlinear, sometimes painful).

A few final thoughts: First, I have approached the issue from my own experience, and accordingly, inexperience in other fields. These other fields might be more interesting. For example, in “What if there’s no AGI?” Bryan Macmahon takes a look at the financial implications of the AI hype. He talks about the limits of LLMs that their own creators have been aware of for a long time. But since they still wanted to make money, they lured investors into sinking huge sums into the “next internet” – a problem that could result in another economic bubble (the burst of which will harm all of us, not just the investors).

Second, if you want to know how bad AI is for the environment, watch this video conversation by Inside Climate News, “Is AI throwing climate change under the bus?”. Admittedly, the question is a bit misguided, as the conversation is about measures to STOP climate change being thrown under the bus, but the short answer is: Yes, exclamation mark! The data centers use tons of electricity and water. Longer answer – it depends on how this demand is going to be satisfied, by fossil fuels or renewables (guess what plans the current US administration has on that). So, how about thinking about AI use as if you were replacing your nice hybrid, or even electric, car with the worst gas-guzzler you can imagine? To make it more normative: do you really not care how much this goes in the wrong direction, beyond the comfort of it all?

Rodell Warner

Finally, have I come across anything REALLY good about AI? Yes. When it is used for creativity, as Caribbean visual artist Rodell Warner demonstrates in this post: Brief and Candid Notes on Artificial Archive. He writes about the many limitations of historical photographs taken of Caribbean people, especially people of color. They are depicted as the white elite – who took the photos – saw them: more as a labor force and part of the estate equipment than as actual humans. Warner uses text-to-image AI to imagine what ordinary 19th-century Caribbean people might have looked like in their own image, as humans who show their personality, express feelings, have fun, and are portrayed for their uniqueness and beauty – check out these stunning photos. It is an odd experience because the photos deviate so much from what we expect to be shown – humans de-humanized by enslavement and indentureship. I have no idea how this transformation works technically, but it was the first time that I have found AI interesting, enhancing our human imagination.

Where do YOU stand on AI? More on AI and teaching soon.