The camera sweeps across a crowd of people, locates the face of a possible suspect, isolates it, and analyses it. Within seconds, the police apprehend the suspect through the capricious powers of Facial Recognition technology and Artificial Intelligence (AI).
A recent survey by the European Union’s agency for fundamental rights revealed how European citizens felt about this technology. Half of the Maltese population would be willing to share their facial image with a public entity, which is surprising given that on average only 17% of Europeans felt comfortable with this practice. Is there a reason for Malta’s disproportionate performance? Artificial Intelligence expert, Prof. Alexiei Dingli (University of Malta), returns to THINK to share his insights.
Facial Recognition uses biometric data to map people’s faces from a photograph or video (biometric data is human characteristics such as fingerprints, gait, voice, and facial patterns). AI is then used to match that data to the right person by comparing it to a database. The technology is now advanced enough to scan a large gathering to identify suspects against police department records.
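Under the hood, most matching systems of this kind reduce each face to a numerical "embedding" vector and compare it against a database of enrolled vectors. The sketch below is purely illustrative (the names, vectors, and threshold are invented; real systems derive embeddings with a trained neural network), but it shows the matching step in miniature:

```python
import math

def cosine_similarity(a, b):
    """Similarity of two face-embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def identify(probe, database, threshold=0.9):
    """Return the best-matching identity, or None if no match is close enough."""
    best_name, best_score = None, threshold
    for name, embedding in database.items():
        score = cosine_similarity(probe, embedding)
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# Toy database of enrolled faces (vectors are made up for illustration)
db = {"alice": [0.9, 0.1, 0.3], "bob": [0.2, 0.8, 0.5]}
print(identify([0.88, 0.12, 0.31], db))  # closest to "alice"
```

The threshold is what separates a confident match from "no result", and tuning it badly is one source of the mismatches discussed later in this article.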
Data is the new Oil
Facial Recognition and AI have countless uses. They could help prevent crime and find missing persons. They can unlock your phone, analyse and influence your consumption habits, and even track attendance in schools to ensure children are safe. But shouldn’t there be a limit? Do people really want their faces used by advertisers, or by a government keen to know about their flirtation with an opposing political party? In essence, by giving up this information, will our lives become better?
‘Legislation demands that you are informed,’ points out Dingli. Biometric data can identify you, meaning that it falls under GDPR. People cannot snap pictures of others without their consent; private data cannot be used without permission. Dingli goes on to explain that ‘while shops are using it [Facial Recognition Technology] for security purposes, we have to ask whether this data can lead to further abuses. You should be informed that your data is being collected, why it is being collected, and whether you consent or not. Everyone has a right to privacy.’
Large corporations rely on their audiences’ data. They tailor their ad campaigns based on this data to maximise sales. Marketers need this data, from your Facebook interests to the tracking cookies on websites. ‘It’s no surprise then,’ laughs Dingli, ‘that data is the new oil.’
The EU’s survey also found that participants are less inclined to share their data with private companies than with government entities. Dingli speculates that ‘a government is something which we elect; this tends to give it more credibility than, say, a private company. The Facebook-Cambridge Analytica data breach scandal of 2018 is another possible variable.’
China has embraced Facial Recognition far more than the Western world. Millions of cameras are used to establish an individual citizen’s ‘social score’. If someone litters, their score is reduced. The practice is controversial and raises the issue of errors. Algorithms can mistake one citizen for another. While an error rate in single digits might not seem like a large margin, even a measly 1% error rate can prove catastrophic for mismatched individuals. A hypothetical 1% error rate in China, with a population of over 1.3 billion, would mean that well over ten million Chinese citizens would be mismatched.
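The scale of that risk is simple arithmetic: expected mismatches are population multiplied by error rate. A quick back-of-the-envelope check (the 1% rate is the article’s hypothetical, not a measured figure):

```python
population = 1_300_000_000  # China, roughly, as cited above
error_rate = 0.01           # the hypothetical 1% mismatch rate

mismatched = population * error_rate
print(f"{mismatched:,.0f} citizens mismatched")  # 13,000,000
```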
Is privacy necessary?
‘I am convinced that we do not understand our rights,’ Prof. Dingli asserts. ‘We do not really value our privacy, and we find it easy to share our data.’ Social media platforms like Facebook have made their way into our daily lives without people understanding how they work. The same can be said for AI and Facial Recognition: they have already seeped into our lives, and many of us are already using them, completely unaware. But the question is, how can we guarantee that AI is designed and used responsibly?
Dingli smiles, ‘How can you guarantee that a knife is used responsibly? AI, just like a knife, is used by everybody. The problem is that many of us don’t even know we are using AI. We need to educate people. Currently, our knowledge of AI is formed through Hollywood movies. All it takes is a bit more awareness for people to realise that they are using AI right here and now.’
Everyone has a right to privacy, and corporations are morally bound to respect that right, but individuals are also responsible for the way they treat their own data. A knife, just like data, is a tool. It can be used for good or ill. We are responsible for how we use these tools.
To Regulate or Not to Regulate?
Our data might not be tangible, but it is a highly valued commodity. Careless handling of our data, either through cyberattacks or our own inattention, can lead to identity theft. While the technology behind AI and Facial Recognition is highly advanced, it is far from perfect and is still prone to error. The misuse of AI can endanger human rights by manipulating groups of people through the dissemination of disinformation.
Regulating AI is one possibility; it would establish technical standards and could protect consumers. However, it may also stifle research. Given that AI is a horizontal field of study, disciplines such as architecture and medicine must consider the implications of a future in which its use is restricted. An alternative to regulation is the creation of ethical frameworks, which would enable researchers to continue expanding AI’s capabilities within moral boundaries. These boundaries would include respecting the rights of participants and drawing a line at research that could be used to cause physical or emotional harm or damage to property.
While the debate regarding regulation rages on, we need to take a closer look at things within our control. While we cannot control where AI and Facial Recognition technology will take us, we can control whom we share our data with. Will we entrust it to an ethical source who will use it to better humanity, or the unscrupulous whose only concern is profit?
The Facebook-Cambridge Analytica data breach involved millions of Facebook users’ data being harvested without their consent by Cambridge Analytica and later used for political advertising. See: Chan, R. (2019). The Cambridge Analytica whistleblower explains how the firm used Facebook data to sway elections. Business Insider. Retrieved 8 July 2020, from https://www.businessinsider.com/cambridge-analytica-whistleblower-christopher-wylie-facebook-data-2019-10.
Malta’s Ethical AI Framework: Parliamentary Secretariat for Financial Services, Digital Economy and Innovation. (2019). Malta Towards Trustworthy AI: Malta’s Ethical AI Framework. Malta.AI. Retrieved 8 July 2020, from https://malta.ai/wp-content/uploads/2019/10/Malta_Towards_Ethical_and_Trustworthy_AI_vFINAL.pdf
If you had a rich malleable canvas that could flip rules on their heads and expose truths we take for granted, wouldn’t you use it? Jasper Schellekens writes about the games delving deep into some of our most challenging philosophical questions.
The famous Chinese philosopher Confucius once said, ‘I hear and I forget. I see and I remember. I do and I understand.’ Confucius would likely have been a miserable mystic in modern mainstream education, which demands that students sit and listen to teachers. But it’s not all bad. Technological advancements have brought us something Confucius could never have dreamed of: digital worlds.
A digital world offers interaction within the boundaries of a created environment. It allows you to do things, even if the ‘thing’ amounts to little more than pressing a key. Research at the Institute of Digital Games (IDG) focuses on developing a deeper understanding of how these concepts can be used to teach through doing by looking at how people interact with gameworlds, studying how games can impact them (Issue 24), and designing games that do exactly that.
Doing it digital
Two millennia later, John Dewey, one of the most prominent American scholars of the 20th century, proposed an educational reform that focused on learning through doing and reflection instead of the ‘factory model’ that was the norm. Dewey’s idea was embraced, and has become a pedagogical tool in many classrooms, now known as experiential learning.
Let’s not pretend that Confucius was thousands of years ahead of his time—after all, apprenticeships have always been an extremely common form of learning. But what if we were to transplant this method of experimentation, trial and error, into a digital world?
It would allow us to do so much! And we’re talking about more than figuring out how to plug in to Assassin’s Creed’s tesseract or getting the hang of swinging through New York City as Spiderman. While these are valuable skills you don’t want to ignore, what we’re really interested in here are virtual laboratories, space simulations, and interactive thought experiments.
Games make an ideal vehicle for experiential learning precisely because they provide a safe and relatively inexpensive digital world for students to learn from.
Think of the value of a flight simulator to train pilots. The IDG applied the same idea to create a virtual chemistry lab for the Envisage Project. They threw in the pedagogical power tools of fun and competition to create what’s known as serious games.
Serious games are at the heart of many of the IDG’s research projects. eCrisis uses games for social inclusion and teaching empathy. iLearn facilitates the learning process for children with dyslexia and Curio is developing a teaching toolkit to foster curiosity. However, the persuasive power of videogames stretches further than we might think.
In a videogame world, players take intentional actions based on the rules set by the creators. These ‘rules’ are also referred to as ‘game mechanics’. Through these rules, and experiential learning, players can learn to think in a certain, often conventional, way.
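The point is easiest to see in miniature: a game rule is just a procedure that decides which player actions the world accepts, and players internalise whatever it allows. A toy sketch (the grid world and its single rule below are invented for illustration, not taken from any game discussed here):

```python
# A 5x5 grid world where the only rule is: you may not step off the board.
GRID_SIZE = 5
MOVES = {"up": (0, -1), "down": (0, 1), "left": (-1, 0), "right": (1, 0)}

def apply_move(position, move):
    """Apply a move if the rule allows it; otherwise the player stays put."""
    dx, dy = MOVES[move]
    x, y = position[0] + dx, position[1] + dy
    if 0 <= x < GRID_SIZE and 0 <= y < GRID_SIZE:
        return (x, y)      # legal: the world accepts the action
    return position        # illegal: the rule silently refuses it

pos = (0, 0)
for move in ["left", "down", "down", "right"]:
    pos = apply_move(pos, move)
print(pos)  # (1, 2): the "left" off the board was refused
```

After a few refused moves, a player stops trying to walk off the board: the rule has taught them, through doing, where ‘here’ can be.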
Which brings us to HERE.
Prof. Stefano Gualeni is fond of using games to criticise conventions: in Necessary Evil a player takes on the role of an NPC (Non-Player Character) monster, in Something Something Soup Something the definition of soup is questioned, while in HERE Gualeni breaks down what ‘here’ means in a digital world.
HERE sees the player explore the philosophical concept of ‘indexicality’, the idea that meanings depend on the context in which they occur. A fitting example is the extended index finger, which means different things depending on where it is placed and what movement it makes. Point one way or another to indicate direction, place over the lips to request silence, or shake it from side to side to deny or scold.
The game explores the word ‘here’ in the digital world. It sheds light on how much we take for granted, and how a lot of concepts are not as straightforward as we think.
In HERE, you play as ‘Wessel the Adventurer’, a cat of acute perception who is sent on a quest by a wizard to find magic symbols and open an enchanted cave. Playing on the tropes of role-playing games, the expectations of the adventurer are thus framed in a conventional manner, but not everything is as it seems.
By subverting players’ expectations of role-playing games, HERE gives them the opportunity to discover what they have been (perhaps unwittingly) taught. Players are confronted with a puzzle involving the many versions of ‘here’ that can co-exist in a digital world. Among their prizes is Gualeni himself performing a philosophical rap.
Experiential learning isn’t the only way to learn, but video games, with their interactivity and ability to manipulate the gameworld’s rules with ease, offer a ripe environment for it. The digital realm adds a very malleable layer of possibility for learning through doing and interacting with philosophical concepts. HERE is not alone in this approach.
Words often fall short of the concepts they are trying to convey. How do you explain why people trust each other when there are so many opportunities to betray that trust? Telling people they have cognitive biases is not as effective as showing them how they act on those biases.
Explorable Explanations is a collection of games curated by award-winning game developer Nicky Case that dig into these concepts through play. The Evolution of Trust is one of them, breaking down the complex psychological and social phenomena contributing to the seemingly simple concept of trust in society. Adventures in Cognitive Biases is able to show us how we are biased even when we don’t think we are. HERE delves into our understanding of language and the world around us, showing us (instead of telling us) that learning doesn’t have to be boring. Now go learn something and play HERE.
To try the game yourself visit www.here.gua-le-ni.com