Ethics, Creativity and the Impacts of Artificial Intelligence
On Wednesday, Oct. 4, 2023, we’ll be hosting a STEM Café at Fatty’s with Andy Jeon, Ph.D., NIU assistant professor of marketing, and David Gunkel, Ph.D., NIU professor of media studies. NIU STEAM educator Judy Dymond interviewed Professor Jeon to learn a little more about artificial intelligence and get a preview of the October café.
There has been much conversation among academics and companies about ChatGPT, now in its fourth generation. Tell us about this emerging technology. What is ChatGPT? How does it work?
ChatGPT (Generative Pre-trained Transformer) is based on the transformer architecture. Just as a ‘Transformer’ robot shifts shapes, the machine-learning ‘transformer’ shifts attention to different words, especially when trained on gigantic datasets, to understand language deeply. Each version of GPT, including the fourth generation (GPT-4), improves upon its predecessors through refined architectures, larger model sizes, and advances in training techniques on vast amounts of text data.
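The idea of “shifting attention to different words” can be illustrated with a minimal sketch. This is not OpenAI’s implementation, just a toy single-query, scaled dot-product attention step in plain Python, showing how a query vector puts more weight on the keys it matches most closely:

```python
import math

def attention(query, keys, values):
    # Scaled dot-product attention for a single query:
    # score each key against the query, softmax the scores
    # into weights, and return the weighted average of the values.
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    m = max(scores)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]

# A query that points the same way as the second key "attends"
# mostly to the second value.
q = [1.0, 0.0]
keys = [[0.0, 1.0], [1.0, 0.0]]
values = [[10.0, 0.0], [0.0, 10.0]]
out = attention(q, keys, values)
```

In a real transformer this happens in parallel for every token, with learned projection matrices producing the queries, keys, and values; the sketch only shows the core weighting step.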
How do you see this technology evolving? What do you think ChatGPT will look like in the future?
We need to be very vigilant and watchful about the next generations of this technology. There are two issues: 1) a technical concern about its potential applications (what are the practical uses of this technology?), and 2) an ethical issue connected with how people use it. Many companies are afraid they are falling behind and wondering what they need to do to keep up with competitors that are beginning to use it. But they also feel somewhat cautious about the product, especially regarding the line between what ChatGPT can do and what it should avoid doing.
What are potential users saying about this technology?
The first question that potential users are asking is, “Where can this technology be applied?” There is this market interest, but also the ethical application issue. It is cutting-edge technology and can have novel applications. Right now, however, there is the transparency issue. If there is a news article, for example, should it be disclosed that it was generated by AI?
When it comes to election advertisements, for example, companies such as Google are planning to require disclosure that generative AI created the advertisement.
The final output of ChatGPT can be a very creative piece of art. For example, in South Korea, there was a claim that some artists used ChatGPT for web cartoons, and many readers were outraged. The point of a cartoon is for people to enjoy the story and the humor, but readers seem to draw the line there: they think it is important to reveal when AI created the humor.
ChatGPT is also used for marketing campaigns, so it is important that marketers know how it is perceived. If a blog article, for example, is not disclosed as AI-generated, at some point people will distrust the content, even content written by a human, because they assume the internet is polluted with ChatGPT-created material.
Users need to come up with their own ideas for applying the technology to their specific needs. To do this, companies need to thoroughly understand how generative AI like ChatGPT works, and then fine-tune their use of ChatGPT toward their goals.
What conversations are you having with your students about ChatGPT?
Our minds are adjusting to this technology. If we rely on it too much, our thinking will change: users will not be exercising their own imagination or all parts of the human brain, and our capacity for creative thinking may be compromised. If ChatGPT says that it is creative, how do we know whether it is or not?
ChatGPT develops based on new data. But imagine that the internet fills up with AI-generated content; then ChatGPT is trained on the content it created. The risk is that ChatGPT will end up in a feedback loop, reinforcing its own patterns and biases. Progress would stall if the whole internet were polluted with AI content that ChatGPT is then trained on. So what will ChatGPT 20 look like if it keeps training on its own data?!
To ensure progress, the model needs to be updated with fresh insights from human creativity.
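The feedback loop described above can be shown with a toy simulation. This is purely illustrative, not how ChatGPT is actually trained: each “generation” fits token frequencies to the previous generation’s output and samples new text from them, and with no fresh human input the diversity of the vocabulary tends to collapse over time.

```python
import random
from collections import Counter

def train_and_generate(corpus, n_samples, rng):
    # "Train" by counting token frequencies in the corpus, then
    # "generate" new content by sampling from that distribution.
    freqs = Counter(corpus)
    tokens = sorted(freqs)
    weights = [freqs[t] for t in tokens]
    return rng.choices(tokens, weights=weights, k=n_samples)

rng = random.Random(42)
# Start with 10 distinct "ideas", evenly represented.
corpus = list("abcdefghij") * 2
for generation in range(50):
    # Each generation trains only on the previous generation's output.
    corpus = train_and_generate(corpus, 20, rng)

diversity = len(set(corpus))  # usually far fewer than the original 10
```

Sampling noise compounds across generations: once an “idea” happens to be under-sampled, the next model sees less of it and produces even less, until it disappears. Fresh human-created data is what re-injects the lost diversity.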
What are some concerns and cautions for users?
The more I use generative AI, the more I am scared.
For example, if ChatGPT says that a piece of poetry I wrote is creative, how does it know whether it is or not?
Philosophically, at some point in the future there may be a humanoid with no physical or mental difference from humans; the only difference will be its artificial origin. In that situation, what difference remains? Perhaps the soul, which may be a matter of belief rather than empirical evidence.
What if we have a robot that develops consciousness? How do we react? I ask my students this question in class: would you like to see a customer service chatbot that seems to be self-aware? The answers are mixed.
There are many ethical issues to pursue!
Don’t miss the STEM Café at Fatty’s on Oct. 4 to learn more.