I was in a discussion yesterday about introducing young people (17-18) to generative text models. I noted that "we need AI literacy like media literacy." (NB: the context of this discussion was generative text.) Any discussion about teaching how to use generative text models must begin with an introduction to the risks. One idea I had was to ask a generative model a question and fact check its points in front of students, allowing them to see fact checking as part of the process. From the outset, it must be clear that while AI-generated text may be convincing, it may not be accurate.
This would be the first in a series of lessons that I would want to convey. Having clear expectations on the veracity of AI-generated text is, in my book, a foundation: something that should be taught at the earliest possible opportunity. From there, ideas like prompting can be explored: the idea that, with natural language, you can tune the response to a question. "Explain this to me like I am five", or "Convey this point in bullet points", or variants thereof are interesting. I am especially intrigued by how language models can convey the same concept in different ways.
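The prompting idea could even be shown concretely in class. Here is a minimal sketch of how the same question can be reframed with natural-language instructions to tune the style of an answer; the template names and the `build_prompt` function are illustrative assumptions for teaching purposes, not any particular model's API, and no real model is called:

```python
# A toy demonstration of prompting: the same question, wrapped in
# different natural-language instructions that steer the style of
# the eventual answer. (Illustrative only; no model API is called.)

PROMPT_TEMPLATES = {
    "eli5": "Explain this to me like I am five: {question}",
    "bullets": "Convey this point in bullet points: {question}",
    "plain": "{question}",
}

def build_prompt(style: str, question: str) -> str:
    """Wrap a question in a style-tuning instruction."""
    template = PROMPT_TEMPLATES.get(style, PROMPT_TEMPLATES["plain"])
    return template.format(question=question)

print(build_prompt("eli5", "How do language models work?"))
# Explain this to me like I am five: How do language models work?
```

Students could write their own templates and compare how the same underlying question yields different responses.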
Some foundational points I would aim to teach include:
- Expectations around the veracity of content.
- The importance of fact checking.
- AI-generated text as part of the process rather than the process itself.
Regarding the third point, something excites me about Google's TextFX experiment, in which a variety of tools are offered to remix language: make acronyms, find related words, and more. Lupe Fiasco worked with Google on the tool, which is positioned to aid in tasks like songwriting. I am no songwriter, but I appreciated how Fiasco used the tool to come up with new combinations of words.
What are those wow moments? For a young person who writes lyrics, that could be it: an AI tool that encourages them to think about new words to include in a song. We need young people to feel those wow moments while also knowing the tools' limitations.
Continuing with points I would aim to teach:
- How are language models able to answer questions? For an intro class, compare to how humans learn: seeing the same thing, or similar things, over and over helps us build patterns that eventually constitute knowledge.
- Why are AI models sometimes wrong? Sometimes, the text on which they are trained is wrong; they don't "know" anything, but have learned "about" some concepts. Getting the articulation right is critical.
- What are the other modalities of AI (computer vision in particular)? How do they compare to, or differ from, text?
- How does generative text compare to search? Search is a concept that many young audiences will be able to intuit.
- What are other concerns?
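The first point, that repeated exposure builds patterns, could be demonstrated in class with a toy word-prediction model. This is a sketch for intuition only: real language models use neural networks, not simple counting, but the idea that frequently repeated patterns become predictions carries over:

```python
from collections import Counter, defaultdict

def train_bigrams(text: str):
    """Count which word tends to follow each word in the training text."""
    words = text.lower().split()
    following = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1
    return following

def predict_next(model, word: str):
    """Predict the most frequently seen follower of `word`."""
    counts = model.get(word.lower())
    if not counts:
        return None  # the model has never seen this word
    return counts.most_common(1)[0][0]

# "the cat" appears three times, so the model learns that pattern.
model = train_bigrams("the cat sat on the cat mat near the cat")
print(predict_next(model, "the"))  # -> cat
```

Students can immediately see both the strength (the model picks up the repeated pattern) and the limitation (it returns nothing for words it has never seen, and it "knows" nothing about what a cat is).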
Lessons should keep coming back to these points to reinforce knowledge.
My kind friend with whom I was discussing this topic last night -- nothing quite like a midnight discussion about AI literacy! -- noted the importance of making a young person feel like they are building something. Generative text should not be positioned, or used, as a tool to entirely replace tasks; that could be disempowering. Rather, it should be taught as a creativity aid. Such a class should involve an exercise of making something.
Given the versatility of language models, a class could take multiple tracks: some students could write song lyrics, others could write stories, others still could see if they could "break" the system with different prompts.
These are a few ideas. I have not written about this topic thus far. I'm excited to learn more. If you have thoughts, please send me an email at readers [at] jamesg [dot] blog.