Is it ethical to disclose that you’ve used AI-generated content in your course?
Consider it from the perspective of a course buyer. Would knowing someone used AI to create their course influence your decision to buy it?
It’s a sticky one, isn’t it?
And it’s a bit like the wild west out there… courses teaching people how to create a course using AI content are everywhere!
AI is absolutely incredible. It can generate human-like text, images, and even music, but I think it’s super important that we consider the ethical implications that go along with using it – especially in relation to courses.
I asked ChatGPT what it thought about these ethical considerations:
“Disclosing the use of AI-generated content in online courses is not only ethical but also essential for maintaining transparency and fostering trust between educators and learners. Students have the right to know the tools and technologies utilized in their education, as well as the origins of the content they engage with. Providing such disclosure empowers learners to understand the educational process fully and enables them to critically assess the information presented. Ultimately, openly acknowledging the integration of AI technology contributes to a culture of integrity and responsible innovation in online learning environments.” -ChatGPT
Isn’t it fascinating that even ChatGPT thinks there are ethical considerations to using AI-generated content in a course? Well, not really – because all ChatGPT does is remix the human-generated opinions it was trained on.
This leads me to one of the biggest ethical considerations when it comes to AI generated content – plagiarism.
Unintentional plagiarism is a very real ethical concern with AI-generated content.
If the ideas or concepts AI generates aren’t yours (or the AI’s) then who do they belong to?
And what does using this content open you up to? Possible intellectual property violations and murky fair-use territory.
Also, OpenAI has so far never denied using user chats for training. Their privacy policy says:
“…ChatGPT does not copy or store training information in a database. Instead, it learns about associations between words, and those learnings help the model update its numbers/weights. The model then uses those weights to predict and generate new words in response to a user request.”
So, anything you’ve uploaded to ChatGPT may be used in response to a user request. And while they may change a word or two – the original concept would be yours. How does that make you feel?
It makes me feel uncomfortable.
Another ethical consideration is that AI-generated content isn’t always accurate.
Think about it. If you Google a topic you know well and read through a bunch of the results Google serves up for you – how many of those are accurate? I bet the answers vary wildly.
AI tools are only as good as the data used to train them, and when that data conflicts, it’s weighted and then either rejected or folded into the model.
So the AI you’re using may be regurgitating information that sounds plausible but isn’t necessarily correct. Inaccurate information that you’re happily copying and pasting into your course content. And depending on your course topic that may be mildly annoying – or downright dangerous.
Which leads me to my next ethical consideration, embedded bias and discrimination.
Because AI is trained on internet data created by humans – who all carry their own biases – alongside old and historical data, prejudice gets baked into every AI system. This is a fact.
And I think we could all do with a lot less prejudice, bias and discrimination in this world.
These are just a few of the possible ethical implications that go hand-in-hand with using AI-generated content.
And personally, I agree with ChatGPT. I think it’s vital that we disclose whether we’ve used any AI-generated content.
What do you think?