
Let’s Have a Chat about ChatGPT


ChatGPT has taken the world by storm since its launch in November 2022. It is a chatbot developed by OpenAI, built on its GPT-3.5 language model. What sets it apart from earlier AI chatbots is the sheer amount of data it has been trained on; the quality of its responses has made waves, with headlines reporting that ChatGPT has passed key professional exams. It has consequently also caused concern in academia that it may be used to cheat in exams and assignments. We speak to two academics from the University of Malta, Dr Claudia Borg and Dr Konstantinos Makantasis, about how academia should adapt. Are such advances a threat to be curbed or an opportunity to be exploited?

An article in the Times of Malta in mid-January highlighted growing concern amongst academics regarding the potential use of ChatGPT by students to cheat. Many responses to the challenge seemed to focus on prevention, such as mentioning a service that can detect AI-generated text. Speaking to Dr Claudia Borg (Senior Lecturer in Artificial Intelligence, Faculty of ICT, UM) and Dr Konstantinos Makantasis (Lecturer in Artificial Intelligence, Faculty of ICT, UM) about the challenge, however, they presented a different perspective from the outset. Drawing on their own experience, they addressed AI as a reality that must be incorporated into education rather than opposed.

Makantasis started with an analogy, recalling when the Internet first became mainstream. Twenty years ago, when he was a student, Internet access was very expensive and limited to a few households. Examinations therefore focused on how well students had understood the material presented during lectures. Once the Internet became more widespread, those same questions no longer made sense, as one could simply Google factual answers. Exam questions were adapted accordingly: they became problem-oriented instead of simply fact-oriented.

‘Now we have to go one step further with ChatGPT. The potential of using it lies in combining the knowledge of different subjects across fields. Instead of simply asking for the answer to a question, we can ask students how to exploit the tools at their disposal to pose dynamic questions themselves. It becomes about asking the right questions as well. Curiosity has always been a driving force for human progress, and learning to ask the right questions has always been a part of it,’ explains Makantasis.

Borg elaborates further on this train of thought. One can take it for granted that students, when given an assignment, will use all the tools at their disposal to complete it. Yesterday it was Google, today it is ChatGPT, and tomorrow it will be something else. However, one can incorporate that reality into assignments. Examiners can ask students to explain how they made use of programs like ChatGPT, to provide their own assessment of the output they received, and to describe how they improved the answers and arrived at a final product. The submitted assignment would then document the whole process, requiring students to demonstrate their understanding of such tools and how they used them.

‘When students end up going out to work in industry, they are going to make use of AI tools and whatever else is available. It is no use ignoring them and continuing to assess in an outdated format. We have to keep education relevant to what is out there. It was the same with the invention of the calculator,’ adds Dr Borg.

Evolution or Devolution?

One also wonders, however, whether certain skills are being lost in the process, as happened with mental maths when the calculator was invented. Makantasis acknowledges the value of skills such as mental maths but points to the similar concerns raised when Google became popular: there was widespread worry that people's memories would weaken. He argues, however, that this is a question of resource efficiency. We are strengthened by our ability to spend fewer mental resources on basic tasks and allocate them to other tasks; it is a question of prioritisation rather than decline.

Borg likens this to evolution, pointing out that most people have lost the ability to light a fire in the wild to cook their meals. The need for that skill has declined, replaced by other realities. While such skills may become necessary again, we should not worry that society will be unable to relearn them as necessity dictates.

‘I take it for granted that in 10 years, I will no longer need to type. I will simply speak to my computer, and perhaps use some form of gestures as commands. I think certain things will continue to become more convenient, easier to use, and more widespread. Such technologies will also become available in a wider range of languages, including Maltese,’ explains Borg.

Both Borg and Makantasis underline that they are not concerned about the development of such AI tools per se, but rather that people underestimate the need to reskill themselves to use them. AI will no longer simply be a subject for specialists; it will become a core subject that needs to be integrated into the whole education system. It will be so prevalent that we cannot afford to have people remain unaware of how AI works and how to make use of it.

Further on the topic of risks, the academics look beyond plagiarism to data privacy and the ability of the companies developing such software to control the information to which students might be exposed. At the end of the day, the companies behind such chatbots are private entities with a profit incentive. Few people realise that the artificial intelligence behind the Facebook algorithm, for example, can have damaging effects, such as placing people in echo chambers.


An echo chamber is a metaphor for a situation where ideas, opinions, or viewpoints are amplified and reinforced by communication and repetition within a closed system. People exposed to only a limited number of perspectives find their beliefs shaped and reinforced by those around them, creating a sort of ‘echo’ of the same viewpoints and making it difficult to consider new ideas and perspectives. The datasets on which an AI model is trained similarly delineate the limits of what text it is able to generate and what answers it is likely to provide. People’s conversations with chatbots are also likely to give the program cues that dictate its tone on certain issues, which may reinforce users’ beliefs if they do not know how the chatbot really works.


While safeguards are built into the chatbots to avoid providing false information on topics such as COVID-19, such safeguards tend to be custom-designed for the particular challenges of the day. Because the AI is trained on large quantities of data, and because its answers are based on probability and imitation, it cannot think or guarantee genuinely correct answers. There is therefore an ever-present risk that such chatbots may misinform or be misused, especially because crises may unfold faster than programmers can react to them. How to democratise these tools safely is thus one of the key questions of the day, and of special importance given how prominent they may become in education.
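To make ‘answers based on probability and imitation’ concrete, here is a minimal, purely illustrative sketch: a toy bigram model in Python. It is vastly simpler than the GPT models behind ChatGPT, but the principle is the same at a much smaller scale. Each next word is sampled according to how often it followed the previous word in the training text, so the output can only echo patterns already present in the data. The corpus and function names are invented for illustration.

```python
import random
from collections import Counter, defaultdict

# Toy training corpus, standing in for a chatbot's training data.
corpus = (
    "the model predicts the next word . "
    "the model imitates its training text . "
    "the model cannot think ."
).split()

# Count how often each word follows each word (a bigram model).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start: str, length: int = 10) -> str:
    """Sample a continuation word by word, weighted by observed frequency."""
    words = [start]
    for _ in range(length):
        candidates = follows[words[-1]]
        if not candidates:  # this word was never followed by anything in training
            break
        words.append(random.choices(list(candidates),
                                    weights=list(candidates.values()))[0])
    return " ".join(words)

print(generate("the"))
# e.g. "the model imitates its training text . the model cannot think"
# The program only recombines what it has seen; it has no notion of truth.
```

Running the sketch a few times yields fluent-sounding recombinations of the corpus and nothing else. Scaled up by billions of parameters, the same idea produces a system that is impressively fluent but still predicts rather than understands.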

Understanding the Tools at our Disposal

Nonetheless, the discussion is underpinned by a recognition that using AI tools will pose a unique challenge to every field of study. In this regard, the Faculty of Information & Communication Technology has the advantage of already understanding and working with artificial intelligence. The academics therefore acknowledge the concerns of other departments and the scale of the challenge ahead.

‘There is no one-size-fits-all solution to plagiarism or to integrating such a tool. Even when talking about modern assessment approaches, each field has its own techniques. There can be significant differences between departments in the same faculty, let alone between the faculties themselves. Therefore, there is a need to acknowledge and respect academic diversity in this regard. When we come up with proposals and solutions for using AI tools, we need to think of sensitive solutions, and those ideas need to be tailored to the needs of each user group,’ says Borg.

‘Awareness of how these tools work is essential. The first step must be for the various fields to see how these tools may be of service to them, and what their limitations and opportunities are. Once people understand how this thing works, they begin to innovate on their own. The first step, therefore, is to acknowledge AI as a tool,’ Makantasis points out.

Looking ahead, both academics acknowledge the exciting developments in the field while highlighting that little progress has been made towards actual thinking machines. To date, these are merely prediction tools, simulating what a sentient response might look like. Nonetheless, with the proverbial genie out of the bottle, the changes to education and society set in motion by chatbots are already being felt, and the time to work together to adapt to them is now.

