
PWR 91NF: Writing and Representing Ourselves in the Time of ChatGPT



Recently, Antonio Guterres, Secretary-General of the United Nations, expressed concern that “AI can amplify bias, reinforce discrimination and enable new levels of authoritarian surveillance,” noting that the key concerns stem from AI’s unpredictability and potential for misuse. Meanwhile, and more locally here at Stanford, researchers tested ChatGPT and other popular Large Language Models (LLMs) and found that these models offer different financial advice based on “first and last names suggestive of race or gender,” with Black women receiving “the least advantageous outcome” (Haim, Salinas & Nyarko, 2024). Lastly, in a study attempting to invert stereotypical global health images, researchers prompted Midjourney, an image-generating AI program, to produce images of “Black African doctors providing care for white suffering children”; the results were sobering: across multiple attempts, Midjourney produced AI-generated images of a White doctor surrounded by Black children (Alenichev, Kingori & Grietens, 2023).

These are but a few examples of how Generative AI tools exhibit forms of racial, gender, or language-based bias. In such instances, LLMs clearly fall short in offering a fair representation of the lived experiences of BIPOC, women, and other minoritized or socioeconomically disadvantaged groups. The ubiquity of these tools, and the speed at which they produce content by scraping an overwhelmingly US-centric internet, are now creating “homogenized American narratives” (Rettberg, 2024). Indeed, there are ongoing attempts by the tools’ parent companies (such as Google or OpenAI) to rectify such wrong or “embarrassing” results, and there is much enthusiasm about the potential of Generative AI/LLMs to enhance our lives (in Medicine, Public Health, Education, Environmental Sciences, etc.). Yet popular discourse tends to focus on the “hype” of AI, with little attention to the lived experiences of the individuals who will bear the consequences of such biased AI-generated content.

This course offers you the opportunity to reflect more deeply on the risks that Generative AI poses in reinforcing stock narratives about marginalized groups. In this class, we will work together, via our course readings, guest-lecture visits, and course assignments, to find practical ways to counter the normative narratives that feed these Generative AI tools. You will engage with recently published texts from researchers such as Timnit Gebru, Safiya Noble, Emily Bender, and Julian Nyarko (Stanford Institute for Human-Centered AI), as well as podcasts like the BBC’s Digital Human. Together we’ll examine how these authors effectively communicate their messages to a public audience, while considering how some of the solutions they propose can be applied to a specific problem you have identified in a Generative AI tool of your choice. A key component of this class will be rhetorically analyzing the results of the culture- and identity-focused prompts you feed into your chosen Generative AI tool. Some students may choose to build on their multilingual knowledge and examine the extent to which minoritized languages are underrepresented when using an LLM for knowledge finding, versus a “traditional” search engine.

Major Assignments

Rhetorical Analysis of Results (4-6 pages)

Building on your PWR1 skills, you will choose a Generative AI tool/LLM (either text-generating or image-generating, closed-source or open-source) and rhetorically analyze the type of narratives/discourse it produces based on the different queries and word choices you experiment with (i.e., auditing the tool). For example, you might examine whether these text- or image-generating tools fall short by failing to represent the rhetorical traditions, languages, or practices of communities of color and other minoritized groups. Your analysis will be informed by the readings we discuss in class as well as one foundational text in Discourse Analysis.

Presentation of Solutions and Interventions (5-7 mins.)

Based on your earlier rhetorical analysis and auditing of a Generative AI tool, you will now propose ways to intervene and counter some of the dominant, normative cultural narratives you encountered (e.g., feeding the tool more diverse images/examples, or proposing a different model). What are the overtly advertised “merits” of these platforms, and what is their less-exposed underbelly? These solutions will be informed by further readings we have discussed in class, as well as guest-lecture visits. Here you will build on your PWR2 skills and orally present your findings to your peers in a mini-conference format.

Public-Facing Project (choose one)

Based on the first two assignments, you can choose to write a piece for Medium that explores some of the culturally undermining elements of Generative AI you uncovered in your audit of one specific tool. Alternatively, you may develop an ad campaign that raises awareness of the impact of Generative AI on minoritized communities (think of a community you are part of). This ad campaign can take the form of a TikTok video, an Instagram slide deck, or an infographic you can post on campus. You will then write a reflective piece that examines your rhetorical choices given your intended audience and purpose.

*Note: You can use library computers if you do not wish to use your personal computer for assignments 1 and 2. For image-generating tools you can use Midjourney, Stable Diffusion, or DALL·E, but you are not limited to these tools.


Prerequisite: WR-1 requirement or permission of the instructor

Grade Option: Letter (ABCD/NP)

Course Feature: Science Communication Track

WAYS: SI; EDP 

This course does not fulfill the WR-1 or WR-2 requirement.