Category: Artificial Intelligence (AI)

Keywords in Higher Ed: AI Authoring Tools

During my graduate coursework in composition and rhetoric, I came across a book titled Keywords in Writing Studies, edited by Paul Heilker and my own professor, Peter Vandenberg.

The book’s concept is given in its title: Keywords offers a concise collection of essay entries, each grounded in substantial research and dedicated to unpacking an operative term of the field through its related studies, theories, and applications.

As a student who has kept nearly every required textbook, I can attest to the utility of such a cogent textbook concept, and I would now like to transfer its reader-friendly approach to the great wide realm of instructional technologies, starting within the smaller realm of AI authoring tools for teaching and learning.

I anticipate my keywords approach will be much messier and less formal in its scholarship, as the body of published works, studies, and opinions on AI authoring is sprawling and immense. The goal, however, is to offer an ongoing collection of resources that facilitates your own research and dialogue around important questions about technology in teaching and learning.

With this keywords approach in mind, let’s begin!

AI authoring tools & learning

AI authoring tools such as ChatGPT, Bard, and DALL-E 3 pose immediate questions for rethinking how we teach core learning tasks and skills, particularly those that ask students to compose original work.
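
To make the challenge concrete, here is a minimal sketch of how little effort a generic take-home prompt demands from such a tool. It assumes the openai Python package (v1.x) and an API key; the model name and essay prompt are placeholders, not a recommendation:

```python
# Minimal sketch: generating an "original" essay with the openai package.
# Assumes `pip install openai` and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder; any available chat model works
    messages=[
        {"role": "user",
         "content": "Write a 500-word reflective essay on a community service experience."},
    ],
)

# A few seconds later: fluent, essay-like prose with no learning behind it.
print(response.choices[0].message.content)
```

The point is not the specific library but the negligible cost: any assignment a generic prompt can satisfy is an assignment these tools can complete.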

Though no teaching solution can directly safeguard against cheating, or, worse, verify whether a student is actually demonstrating their learning, many conversations in higher education circle back to designing assessments that ask students to think critically about information and build digital literacy. Such classroom-rooted strategies and conversations about AI authoring are also recommended by Turnitin, the leading developer of AI writing detection products.

Difficulties in regulating AI use & ethical concerns

Studies have noted areas of AI use that pose challenges for demarcating its ethical scope and regulation. Key questions raised by machine learning and data science include responsibility for use, bias and discrimination in development, transparency in development, and responsibility for stakeholder action or policy.

From a corporate standpoint, the move towards regulation is difficult, if not impossible, because restrictions cannot be implemented at a scale that matches the technology’s user base. Though public statements and calls to pause development have been made, much AI development takes place in the private sector, and those in a position to draft regulations do not necessarily understand the nature and scope of the technology well enough to impose effective boundaries.

Ethical considerations with AI authoring tools that relate more directly to teaching and learning include biases against non-native English speakers and outputs that bypass creative attribution, such as the popular queries for Greg Rutkowski-styled images that mimic his aesthetic without his consent.

Academic integrity & teaching with AI

Because of its dominance in the assessment tools arena and Loyola’s adoption of several of its products, Turnitin’s resources on academic integrity and AI writing fall within the purview of technology-based assessment in higher education. Its latest webinar offering, on how to include AI in institutional policy, provides a puzzle map for approaching the complex issue of AI.

An Exigence for Faculty Development

A silver lining of AI authoring is the prompt it gives us to enrich faculty development through dialogue and creative learning design.

Though some find AI authoring tools a cause for panic, many specialized faculty in the fields of medicine and the sciences are excited about the opportunities AI provides for teaching and learning.

Reflections are emerging in faculty panels, such as this one at the University of Mississippi, and in professional higher ed groups, such as the AI in Education Google group.

While Loyola Instructional Technology and Research Support does not decide on the adoption of learning tools for the institution, we do invite ideas for teaching strategies, further research, and learning designs.

Teaching Strategies for “the ChatGPT wave”: Transferable Lessons from Proctoring Tools

Read time: 5 minutes

In my popular culture research, a cultural movement often carries the referent of a “wave.” For example, the Hallyu movement of the 1980s to 2000s (the dates are debatable depending on the scholar you consult) refers to a “wave” of Korean popular culture spreading beyond the nation’s borders.

In my day-to-day work, I might use the referent “wave” to refer to the conversation en vogue in the fields of teaching, learning, and academic integrity: in this instance, let’s use the referent “the ChatGPT wave.”

But first, a quick blast from the past [three years] for context:

Higher education conversations about assessment in digital learning environments rarely avoid a debate on academic integrity. From my experience, and likely yours, this debate maps onto a spectrum ranging from “enforce academic integrity with the latest and most stringent means available” to “recognize that no perfect enforcement is possible and that pursuing it does not seem productive for student learning.”

My emphasis here is on two points, to be revisited very soon: (1) no tool can flawlessly enforce academic honesty; and (2) fixating on catching cheaters rather than fostering student learning leads to costly outcomes for all.

Perhaps this diversity of positions on assessment and academic integrity emerged most sharply during the emergency move to online learning in the COVID-19 pandemic. Its immediate legacy might be summed up in a few phases: faculty unrest for a technology-based solution to prevent students from cheating, a hasty adoption of an inadequate solution, uncomfortable and stressful assessments for both the faculty administering that solution and the students examined with it, and then a quick abandonment of the solution due to privacy violations (some of which are still in legal dispute, well within our region).

As we embark on the amazing frontier of AI (artificial intelligence) authoring tools, let us brace ourselves for the ChatGPT wave by remembering to prioritize student learning rather than hunting for cheaters. Here are some teaching strategies for AI authoring tools like ChatGPT, very much informed by our recent misadventures with proctoring tools:

Remember that a tool is not a human. Just as the highly touted and speedily adopted proctoring tools of yesteryear could not guarantee against or completely prevent cheating by a human student, ChatGPT and AI tools share an obvious quality: ChatGPT is not a human student. A human demonstrates learning for a specific learning outcome, whether by sharing a sentiment or committing an error that is irrevocably human. Looking for signs of life might mean creating space for students to show their human selves, perhaps by engaging them in conversation about something fun to them, posing a writing prompt more specific to their own experience, or assigning something creative or audio-recorded. If you assign work that is generic and without connection to your students, expect machine-like responses.

Revise your learning objectives and corresponding activities for someone who wants to learn. As an instructor, I find my essential job description, whether I am teaching professional business writing or instructional design, is to facilitate meaningful learning experiences for my students. Many times, this essential charge prompts reflection on and revision of my coursework and assessment designs. Rising to the occasion of facilitating meaningful learning is an easy move when students want to learn. National enrollment in higher education has seen better days, so being interesting seems like a project of mutual interest for faculty.

Find help for the things you don’t know. Since my start in the field of teaching and learning support, I have seen resources and services grow rapidly in support of faculty teaching online and with instructional tools. It is highly likely that your institution extends such resources and services to you, if only you seek them out. “Closed mouths don’t get fed,” as the saying goes, and in my experience, if you don’t ask for help, you will only fall further behind. Technologies are always updating and departments may shift in structure, but you can control your own course (pun intended) by seeking out the people whose job descriptions literally include helping you.

Learn about the tool’s development and limitations, and share this with your students. OpenAI, the developer behind ChatGPT, is transparent about the tool’s testing process and its limitations as an AI authoring tool. Key limitations noted so far include a proclivity for outputs that are “toxic or biased” or that contain made-up facts, and a linguistic and cultural bias “towards the cultural values of English-speaking people.” Having a conversation with your students about these limitations makes for transparency in your class while addressing the serious possibilities for mis-presentations of self; who wants to be seen as toxic or treacherous? (A small demo sketch follows this list.)
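
For instructors who want to make those limitations visible in class, here is a minimal demo sketch. It again assumes the openai Python package and an API key; the model name and prompt are hypothetical, chosen only to surface the made-up-facts problem:

```python
# Classroom-demo sketch: ask the model for sources, then have students verify them.
# Assumes `pip install openai` and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

demo = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder; substitute the chat model you use
    messages=[
        {"role": "user",
         "content": "List three peer-reviewed articles on proctoring software and student privacy."},
    ],
)

# The answer can look authoritative while citing articles that do not exist;
# checking each citation against a library database makes a good in-class exercise.
print(demo.choices[0].message.content)
```

The exercise turns an abstract disclaimer into something students can test for themselves.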

If we have learned anything from the Test Cheating Scare of 2020, let us brace for this ChatGPT wave with clarity of purpose as instructors, and aim for human exchanges with our students.