By Annabel Simmons

Recently, a pressing question has been on the minds of many: Will artificial intelligence (AI) reshape the future of education?
AI has gained mainstream recognition as Generative AI (Gen-AI) tools have become increasingly accessible to the public. Growing awareness of AI has also fostered controversy over how the technology will continue to evolve. Its future remains highly uncertain, with possible implications ranging from groundbreaking to destructive. Yet despite the emphasis on the technology’s future impacts, AI is not just a distant promise; it is actively reshaping the world. This transformation is especially prominent in academic institutions, where the technology’s presence is forcing teachers and students to grapple with both the opportunities and the ethical dilemmas of AI, and with how to navigate the tool’s use in education.
Although widespread public awareness of AI is relatively recent, most people have unknowingly used AI for over a decade. Google Maps, search engines and autocorrect are all common examples of AI. But what is AI, really? According to Faisal Kalota in the journal Education Sciences, AI broadly refers to the techniques that enable machines and computer systems to behave with human-like intelligence.
“While AI has advantages over human intelligence, such as increased speed, the ability to communicate with many different systems effectively, and the ability to reconfigure itself, human intelligence can efficiently achieve complex goals through things such as motivation, emotion, creativity, and mutual understanding,” Kalota said.
In the journal, Kalota said that AI is classified into three categories based on its capabilities: artificial narrow intelligence (ANI), artificial general intelligence (AGI) and artificial super intelligence (ASI). ANI, also known as Weak AI, is the only form of AI that actually exists today; the other forms are merely theoretical. As its name implies, ANI is extremely limited in functionality: it uses predefined algorithms and data sets to perform specific tasks with great efficiency. It lacks consciousness and awareness and cannot perform outside of its programmed tasks. Many subsets of AI fall under the larger framework of ANI, each with distinct functions, one of them being Gen-AI.
“Generative AI is a machine learning model that can generate new data instead of making predictions,” Kalota said. “The new data can be audio, code, images, text, simulations, and video.”
Gen-AI has existed for over seventy years but was brought into the public eye when the AI research company OpenAI released ChatGPT in November 2022. After its debut, ChatGPT reached over 100 million users in only two months. The years 2023 through 2025 then marked the most rapid advancement in Gen-AI, bringing about the development of thousands of competing AI tools.
Kalota explained that ChatGPT is a very sophisticated form of Gen-AI that uses artificial neural networks (ANN) and large language models (LLMs) to generate an output based on a given prompt. Essentially, it is a chatbot trained on text pulled from the web, which allows it to communicate with users through human-like dialogue.

Upon the release of ChatGPT, students quickly began using the site to aid with academic tasks such as coursework, studying and research. An engineering student at the University of Arkansas said that he has used Gen-AI as an academic resource for over two years, primarily as a search engine to explain details missing from his notes. The student explained that he tries to avoid using Gen-AI to complete homework assignments unless he has no other option, but he often still ends up turning to it. Although he regrets it afterwards, the ability of Gen-AI to provide immediate answers is overwhelmingly alluring.
“I do try to use it in a way that’s actually helpful for my learning,” the student explained. “But I mean, sometimes, something’s due at midnight. And you know, you don’t think you’re going to get done, so it’s just easy to just boot up ChatGPT. It won’t get it 100% right, but you still get it done. I’m cautious of it for sure, but sometimes things happen.”
The use of Gen-AI by students has become normalized, raising concerns about potential overreliance on this technology in academic settings. The U of A student admitted that he believes most students, especially engineering majors, turn to Gen-AI immediately to save time when completing coursework. He estimated that at least 90 percent of engineering majors heavily rely on Gen-AI, adding that he is shocked by the students who do not use these tools because they likely have significantly less free time as a result. Despite his own use, the student expressed concern about his reliance on Gen-AI. During his freshman year, he had never used any form of Gen-AI.
“I think freshman year I studied a lot more,” he said. “Maybe that’s because I felt like all I had to rely on was myself. So I kind of knew that I needed to be more prepared. It does make me nervous thinking about how AI is affecting my own learning. It can be easy to get through some coursework without really understanding it. And I think that kind of makes me nervous when thinking about engineering because if you think about an engineer who maybe just used AI throughout their whole college career, just getting it to do everything for them, and they’re an engineer, but they don’t really know what they’re talking about. I mean, that’s kind of nerve wracking, you know.”
The use of Gen-AI by students is only one way that the technology has infiltrated schools, leading to prominent discussions concerning the evolving role of AI in education. Due to the vast capabilities of AI, educational institutions have been attempting to understand how to use the technology to support school administration, teachers and students. In June 2024, former Provost Terry Martin created the AI Task Force and Working Groups to develop guiding principles and procedures on how to utilize AI at the U of A. Dr. Chase Rainwater, head of the Department of Industrial Engineering, served as the chair of the task force.
Rainwater said that it became necessary to begin looking into AI when students and faculty gained legitimate and useful access to the technology around 2023. Although AI had already been a part of higher education, there were no policies or official integrations of it.
“Now, the tools that kind of dragged us into this I think were predominantly in the generative AI space, and I think they still are, but agentic AI and physical AI are still very relevant,” Rainwater said. “So, we weren’t limited to generative AI, although there was an urgency and continues to be a bit of a spotlight on that because of the influence it has on our classes.”
Further explaining why the U of A began to examine AI, Rainwater said there is an understanding that most students coming to the university have already been exposed to AI in K-12 settings, so AI is coming in the door regardless. He also said that many employers now expect students to be equipped with the skills to operate AI-integrated tools.
“There was almost like a three-level effect there, and a response was necessary,” he said. “As we saw at many of our peer institutions, there needed to be a formal addressing of this.”
The original AI Task Force ran from 2024 to 2025, with the charge to understand how AI was already impacting the academic, research and operational spaces. Rainwater explained that the task force had significant findings in that year, with diverse results depending on the part of campus being studied. The only consistent finding was that the use of AI was widespread throughout the university, which made clear that AI needed to be officially addressed at the U of A in a number of areas. Notably, the need for extensive, campus-wide training and education on AI use became evident.
“We didn’t have tools that were protected at the time,” Rainwater said. “We did not have as much guidance to give to faculty about how they should be instructing students on the use of AI or at least warning and protecting them.”
In May 2025, Rainwater was designated the 2025-2026 Provost Fellow for AI to implement recommendations from the AI Task Force report. Since then, he has been helping lead four AI working groups on teaching/learning, research, data security and ethics and training. He also participates in the AI Executive Steering Committee alongside many of the senior leaders on campus, including the Provost. Rainwater said that he is currently working on new initiatives and long-term plans to determine how AI will be formally incorporated into higher education over the next 10 years.
However, he also claimed that AI is already being integrated across campus in many ways, noting that faculty, staff and students have all found beneficial ways to incorporate AI tools into their academic work. It has primarily been used to increase productivity and efficiency in research, teaching and learning.
“We have a lot of smart, really talented staff on campus, and some of them have already, on their own, identified things that make their job easier, so they can get more done and help us achieve things that we couldn’t achieve before,” he said. “We’ve seen faculty using it to expand the offerings that they give to students, both in terms of the lecture and in terms of assessments. We’ve also seen faculty encourage and help students integrate AI as a learning resource, a tutor, if you will. We’re still just at the tip of the iceberg, I think, because, you know, this is very class dependent in terms of the amount of AI that’s even appropriate for a particular class.”
Despite these positive observations, the use of AI in education also comes with many risks. Rainwater explained that Gen-AI models thrive on taking in information and learning from it. Hence, it is important for students and educators to understand how AI tools could interfere with individual privacy and privacy laws, such as the Family Educational Rights and Privacy Act (FERPA). He said that faculty and staff, in particular, must be conscious of using models that are secure and protected so that they do not risk leaking personal and sensitive data, including student information. Gen-AI also has many inherent limitations; it can produce biased outputs and potentially generate malicious, deceptive or false content. Because he does not think AI will disappear anytime soon, Rainwater said he aims to bring awareness to both its positives and its negatives.
“I think AI is as much a part of the future of higher education as it is a future of everything in our lives at this point,” he said. “As to whether it’s…just an added tool or…a radical transition in the composition of a college campus, I think time will tell. At the moment, as humans, we have a lot of influence on it. I understand that there’s a narrative of where that goes away, and that’s an interesting debate to have, and I’m not saying it’s wrong, but at the moment, we kind of control a lot of what is going to happen here and so I choose to think about ways that we can positively make use of this.”
Meanwhile, many educators have resisted the increasing presence of Gen-AI in academic settings. Professor LewEllyn Hallett has been teaching at the U of A since 2013 and currently serves as the Associate Director of the Rhetoric and Composition Program. Prior to working as an educator, she had a 35-year career as a writer. To stay informed on the evolving role of AI, she has attended many workshops and webinars regarding AI’s use in education. As a passionate writer and educator, Hallett is troubled by how AI could affect the Rhetoric and Composition Program, its courses and its students. In addition, she stresses the ethical implications of Gen-AI use.
“I don’t think you can use AI ethically, technically, because there’s so many issues behind this,” she said. “It uses people’s material, people’s writing, art, all kinds of words and images…without any kind of permission or credit. And that’s problematic. That undercuts writers, artists.”
As a writing instructor, Hallett said that she fears that students’ reliance on Gen-AI to help with writing could impede the development of important skills, such as their ability to truly think and communicate. Additionally, she said that the writing AI produces lacks authenticity, voice and human perspective. AI’s writing may sound good, but it’s flat and generic, she said. Using Gen-AI to cut down the amount of work one has to do can undermine the learning experience, leading to missed opportunities.
“When I see students use it for research, like undergraduate students, I feel like they are missing that experience of looking for something,” Hallett said. “And, you know how when you go down the various trails that research will put you on, sometimes it’s not going where you want it to go, but you still learn something interesting. Students are just handing over that learning and experience to AI because it can do it fast. That is also giving up another skill set to something outside of ourselves.”
Hallett said that she worries AI could eliminate jobs in higher education by replacing many aspects of course design and instruction. She said that if Gen-AI is being used to create assignments, organize material and respond to student work, it may be tempting for universities to replace instructors with this technology altogether, especially for online courses.
“My overall stance on it would be that the costs outweigh the benefits,” Hallett said. “I think we absolutely should regulate it. We should learn from the past. The argument I hear a lot of times is, ‘Well, everybody’s all alarmed now; we were also alarmed when TVs came into everybody’s living room, and then when we had computers, and then the World Wide Web, and then we had social media, and then we had cell phones, and every step, some people were always alarmed.’ But, we were right to be alarmed because it changed us. Every one of those things changed us, changed culture, changed the way our minds work, changed our ability to do certain things.”
Like Hallett, many other educators recognize the overwhelming risks of AI. Dr. Maggie Fernandes is an Assistant Professor of Rhetoric and Composition at the U of A. Alongside colleague Dr. Megan McIntyre, Fernandes has been tracking how academics are responding to Gen-AI, particularly in the classroom, since the release of ChatGPT in 2022. Their work has centered on many fundamental questions concerning the technology, including its ethical challenges, environmental impact and technical limitations.
Ultimately, they encourage the refusal of Gen-AI in writing studies, which they describe as “the range of ways that individuals and/or groups consciously and intentionally choose to refuse Gen-AI.”
“Dr. McIntyre and I, we’ve been trying to look at both the response that is pushing people, teachers included, but also students into using this technology when it’s still very new and also trying to chart out paths for those who don’t want to use it,” she said. “So, our main project has been trying to understand the effect of this technology on education and trying to think of alternative ways of understanding this technology beyond use.”
They found that there is an overwhelming push for the adoption of Gen-AI in higher education based on broad speculation about the technology’s future. Fernandes said that a narrative has circulated that Gen-AI is inevitably the future of education, which has led many institutions to quickly seek ways to help students use Gen-AI responsibly rather than resist it. She explained that this push for adoption has undermined the importance of choice by suggesting that there is only one way forward.
“The other reason why I think there’s a big push is these technologies are intensely marketed,” Fernandes said. “They’re marketed to universities, in part because they’re not profitable yet. So, getting people to use them is part of the game and part of the stage of this technology that we are still at. And so, all of this is about selling a product, not about making education better.”
With many ad-based subscription models emerging, college students have become one of the major demographics to whom Gen-AI tools are marketed. Because of this, Fernandes feels it is important not to demonize the use of the technology by students who are curious about it. For many, the technology is being pushed on them without any information about its risks.
“What I really think is important is for everybody to be able to make choices for themselves and for students to have the opportunity to learn how to write and read and think free from these technologies, but to ultimately have information so that they can make informed decisions about using these tools,” she said.
In her own classes, Fernandes does not ban the use of Gen-AI. Instead, her goal is to give students enough information about the technology so that they can make their own informed decisions. She also encourages students to embrace struggle in her classes so that they do not feel the need to turn to Gen-AI tools.
“The more people understand these technologies, the better positioned we are to respond to them,” she said.
Fernandes said that she appreciates the U of A’s current reserved approach and its attempt to fully understand Gen-AI tools, but that there is also an opportunity for the university to address the harms of Gen-AI. Fernandes explained that one of the most significant harms associated with Gen-AI is the proliferation of data centers. Gen-AI relies on data centers to train and deploy its applications, and these facilities require massive amounts of water and electricity to do so. This has many environmental implications, including the exacerbation of global warming. Data centers can also raise electric bills and deplete the water supplies of nearby communities.
“What we really need to be thinking about is if an entire university pivots to Gen AI in every class, what effect does that have on our state, on our local communities, and how do we need to be mindful about what that is,” Fernandes questioned. “I think the university can do more to start having those conversations…How do we actually grapple with it ethically?”
Over the next few years, Fernandes said that she thinks critical awareness of the harms and limitations of Gen-AI tools will increase, leading to more efforts to get these tools out of the classroom.
“The idea that this technology is inevitable is tied up in the marketing of this technology,” she said. “We can’t separate those two things, largely because that’s a really effective way to sell things to people, and it does seem impressive enough that we can imagine the future being radically shifted because of it. I don’t think anything’s inevitable.”
Some versions of this technology will probably prevail, she said. However, she explained that she does not think Gen-AI tools will be profitable in the future; even according to the companies that design LLMs like ChatGPT, the problems these tools have are basically unfixable. Fernandes said that OpenAI recently acknowledged that chatbots will always hallucinate, as this is a fundamental part of the technology.
“I think that what is necessary going forward is to reject that inevitability narrative, not on the basis that this technology will go away if we ignore it,” she said. “What I think we should do, instead, is really look seriously at its problems and remind ourselves that there are multiple ways to respond to the same problem.”
As the role of AI in the classroom continues to unfold, one thing remains clear: the future of AI in higher education is not predetermined. Rather, the fate of the technology is in the hands of students, educators and institutions.