Tales from the Classroom: Applicability of Generative AI Across Various Sectors

The key issues I encounter with AI and machine learning typically revolve around abuse of the technology, potential job losses, and AI models going rogue. The webinar provided more context on these issues, explained how the technology may be misunderstood, and described how these concerns are being addressed. What I learned was how broad AI is: there are many classifications of AI and, equally, many ways it can be used.

From an educator’s standpoint, I was happy to confirm my belief that there is value in using AI for learning and development, and that its contribution goes beyond what we know about ChatGPT. As one of the speakers mentioned, ChatGPT is a very small part of the AI pie, and there are many other models being developed to address very specific learning and development needs.

Generative AI models are being used to help train professionals in various fields, and I see many advantages in this. Most training programs involve self-paced learning using multimedia material. While these materials are helpful, there are many instances when learners have questions that the materials cannot directly address. Generative AI models developed for training provide interaction during the learning process, which helps keep learners engaged. As an educator, I believe that learning does not occur in a vacuum; in fact, I believe that learning is a social activity. While I don’t consider generative AI models a substitute for collaborative learning, they seem to be an adequate alternative when independent learning is required.

I was pleasantly surprised that many view generative AI models as tools that will help humanize work. The webinar explained how AI can help customer contact agents, who are typically expected to provide solutions quickly while simultaneously processing huge amounts of information. I have made many calls myself to contact centers, typically when I am already irritable and in need of an immediate solution. From an agent’s standpoint, they can’t solve problems they don’t understand, which means they spend a good amount of time gathering the relevant information, including verifying the customer’s identity, before they can resolve an issue. Because this process takes time, it annoys an already irate customer even more. Optimizing this step will make the agent more efficient: AI can take out the grunt work so that the agent can spend more time providing solutions, the kind of work that truly matters to the customer.

Finally, I was pleased to discover that the development of guardrails in AI is an important issue that practitioners in the field are actively working on. That AI will develop sentience is a concern and fear of many, which I attribute to Hollywood films. Nevertheless, I don’t think these fears are misplaced, particularly as some generative AI models handle and train on personal and sensitive data. One speaker also mentioned that while those currently in leadership positions seem responsible and act with integrity, there is still the fear that they will be replaced by others who may not act so responsibly and may abuse the information or the technology to their advantage.

Initially, I was also hesitant about AI and how it might contribute to education. I can understand how ChatGPT may be used as a tool for cheating. However, I cannot help but compare this to the logic used when the Internet was introduced: the fear that it would negatively impact learning because it made it easy for anyone to “copy and paste” information. We have since seen how the Internet has aided students by enhancing their learning. The Internet democratized information, and I see AI doing the same, benefiting those who have limited access to information (e.g., schools and students in rural areas).

I am also excited to learn that many are using generative AI as a tool to remove the pain points of work. Information processing takes time and effort, and as humans, we have a limited capacity to recall data instantly. Having something that processes information into a relevant, usable output may aid in decision-making.

Generative AI was not developed to replace humans, but to help us. This is especially true for work built around human interactions, such as customer contact or customer relations. While chatbots have been developed to help manage customer communications, I find that the current iterations are still lacking in many respects. They carry limited information, and what they relay is often canned or standard and rarely provided in a context that is relevant to me. Many times, the information they relay is no different from what I would find on a company’s website. There are even times when I find chatbots cumbersome because they are often the first line of communication. So instead of being helpful, communicating with them feels wasteful, because I already know they won’t provide the information or assistance I need for my issue. This is why I don’t see the current iterations replacing humans in customer communication.

The discussion helped me understand that there are measures in place to address the risks of technology abuse and potential job losses. More importantly, I see that practitioners have a long-term, beneficial view of generative AI, particularly of how it can help us in our lives. Like the Internet, AI is a tool. When used properly, it can add value to our learning, our growth, and our lives.

While the subject matter experts addressed the fear of AI models going rogue, some questions remain regarding the sufficiency and robustness of existing guardrails:

Do these guardrails really work?

Are they effective and sufficient in achieving their goals?

Are they protecting against the correct risks? Perhaps more importantly, have the developers identified all the risks involved? Is there a body responsible for identifying the risks?

How do we ensure that there is transparency on how AI models are being trained?

What information or data are being used to train them? Are they using private or sensitive information? How do we ensure that the data used were collected with the appropriate consent of the entities that own them? Furthermore, how do we ensure that the information is used only as intended, and that the generative AI model will not abuse it?

As a student and researcher, I see generative AI’s potential to help me learn. I also see that AI can help me identify where to start with my research, perhaps even guide me in the right direction. It has helped me create outlines for my research.

However, I have also seen downsides, particularly with ChatGPT. The information provided by the platform is not always accurate; I have seen output generated by ChatGPT citing sources that do not exist. As in any research, it is always important to check sources and triangulate information to ensure accuracy.

As a teacher, I have been challenged by generative AI to find ways to improve learning and enhance student engagement. Information is readily available, and students can easily write prompts in ChatGPT to generate class requirements. However, as one webinar speaker mentioned, teachers should learn to ask probing questions to help gauge whether learning has occurred.

In general, I am excited to learn more about generative AI and its application in both industry and education.

Tales from the Classroom is a special blog series where I share research and articles I produce in my DBA (Doctorate in Business Administration) program.