Should Educators Put Disclosures on Teaching Materials When They Use AI?

Many teachers and professors are spending time this summer experimenting with AI tools to help them prepare slide presentations, craft tests and homework questions, and more. That’s in part because of the wave of new tools and updated features incorporating ChatGPT that companies have released in recent weeks.

As more instructors experiment with using generative AI to make teaching materials, an important question bubbles up: Should they disclose that to students?

It’s a fair question given the widespread concern in the field about students using AI to write their essays or bots to do their homework for them. If students are required to make clear when and how they’re using AI tools, should educators be required to do the same?

When Marc Watkins heads back into the classroom this fall to teach a digital media studies course, he plans to make clear to students how he’s now using AI behind the scenes in preparing for classes. Watkins is a lecturer of writing and rhetoric at the University of Mississippi and director of the university’s AI Summer Institute for Teachers of Writing, an optional program for faculty.

“We need to be open and honest and transparent if we’re using AI,” Watkins says. “I think it’s important to show them how to do this, and how to model this behavior going forward.”

While it may seem logical for teachers and professors to clearly disclose when they use AI to develop instructional materials, just as they are asking students to do in assignments, Watkins points out that it’s not as simple as it might seem. At colleges and universities, there’s a culture of professors grabbing materials from the web without always citing them. And he says K-12 teachers frequently use materials from a range of sources including curriculum and textbooks from their schools and districts, resources they’ve gotten from colleagues or found on websites, and materials they’ve purchased from marketplaces such as Teachers Pay Teachers. But teachers rarely share with students where these materials come from.

Watkins says that a few months ago, when he saw a demo of a new feature in a popular learning management system that uses AI to help make materials with one click, he asked a company official whether they could add a button that would automatically watermark AI-generated materials to make their origin clear to students.

The company wasn’t receptive, though, he says: “The impression I’ve gotten from the developers — and this is what’s so maddening about this whole situation — is they basically are like, well, ‘Who cares about that?’”

Many educators seem to agree: In a recent survey conducted by Education Week, about 80 percent of the K-12 teachers who responded said it isn’t necessary to tell students and parents when they use AI to plan lessons, and most said the same applied to designing assessments and tracking behavior. In open-ended answers, some educators said they see it as a tool akin to a calculator, or like using content from a textbook.

But many experts say it depends on what a teacher is doing with AI. For example, an educator might skip a disclosure when using a chatbot to improve the draft of a text or a slide, but might want to make it clear when using AI to help grade assignments.

So as teachers are learning to use generative AI tools themselves, they’re also wrestling with when and how to communicate what they’re trying.

Leading By Example

For Alana Winnick, educational technology director at Pocantico Hills Central School District in Sleepy Hollow, New York, it’s important to make it clear to colleagues when she uses generative AI in a way that is new — and which people may not even realize is possible.

For instance, when she first started using the technology to help her compose email messages to staff members, she included a line at the end stating: “Written in collaboration with artificial intelligence.” That’s because she had turned to an AI chatbot to ask it for ideas to make her message “more creative and engaging,” she explains, and then she “tweaked” the result to make the message her own. She imagines teachers might use AI in the same way to create assignments or lesson plans. “No matter what, the thoughts need to start with the human user and end with the human user,” she stresses.

But Winnick, who wrote a book on AI in education called “The Generative Age: Artificial Intelligence and the Future of Education” and hosts a podcast by the same name, thinks putting in that disclosure note is temporary, not some fundamental ethical requirement, since she thinks this kind of AI use will become routine. “I don’t think [that] 10 years from now you’ll have to do that,” she says. “I did it to raise awareness and normalize [it] and encourage it — and say, ‘It’s ok.’”

To Jane Rosenzweig, director of the Harvard College Writing Center at Harvard University, whether to add a disclosure depends on how a teacher is using AI.

“If an instructor was to use ChatGPT to generate writing feedback, I would absolutely expect them to tell students they are doing that,” she says. After all, the goal of any writing instruction, she notes, is to help “two human beings communicate with each other.” When she grades a student paper, Rosenzweig says she assumes the text was written by the student unless otherwise noted, and she imagines that her students expect any feedback they get to be from the human instructor, unless they are told otherwise.

When EdSurge asked readers of its higher ed newsletter whether teachers and professors should disclose when they’re using AI to create instructional materials, a few replied that they saw doing so as important, both as a teachable moment for students and for themselves.

“If we’re using it simply to help with brainstorming, then it might not be necessary,” said Katie Datko, director of distance learning and instructional technology at Mt. San Antonio College. “But if we’re using it as a co-creator of content, then we should apply the developing norms for citing AI-generated content.”

Seeking Policy Guidance

Since the release of ChatGPT, many schools and colleges have rushed to create policies on the appropriate use of AI.

But most of those policies don’t address the question of whether educators should tell students how they’re using new generative AI tools, says Pat Yongpradit, chief academic officer for Code.org and the leader of TeachAI, a consortium of several education groups working to develop and share guidance for educators about AI. (EdSurge is an independent newsroom that shares a parent organization with ISTE, which is involved in the consortium.)

A toolkit for schools released by TeachAI recommends: “If a teacher or student uses an AI system, its use must be disclosed and explained.”

But Yongpradit says that his personal view is that “it depends” on what kind of AI use is involved. If AI is just helping to write an email, he explains, or even part of a lesson plan, that might not require disclosure. But there are other activities he says are more core to teaching where disclosure should be made, like when AI grading tools are used.

Even if an educator decides to cite an AI chatbot, though, the mechanics can be tricky, Yongpradit says. While there are major organizations including the Modern Language Association and the American Psychological Association that have issued guidelines on citing generative AI, he says the approaches remain clunky.

“That’s like pouring new wine into old wineskins,” he says, “because it takes a past paradigm for taking and citing source material and puts it toward a tool that doesn’t work the same way. Stuff before involved humans and was static. AI is just weird to fit it in that model because AI is a tool, not a source.”

For instance, the output of an AI chatbot depends greatly on how a prompt is worded. And most chatbots give a slightly different answer every time, even when given the exact same prompt.

Yongpradit says he recently attended a panel discussion where an educator urged teachers to disclose AI use since they ask their students to do the same, garnering cheers from the students in attendance. But to Yongpradit, those situations are hardly equivalent.

“These are totally different things,” he says. “As a student, you’re submitting your thing as a grade to be evaluated. The teachers, they know how to do it. They’re just making their work more efficient.”

That said, “if the teacher is publishing it and putting it on Teachers Pay Teachers, then yes, they should disclose it,” he adds.

The important thing, he says, will be for states, districts and other educational institutions to develop policies of their own, so the rules of the road are clear.

“With a lack of guidance, you have a Wild West of expectations.”
