The unusual suspects

However much technology advances, it has long been said that creative tasks will remain out of its reach. Jasper Schellekens writes about one team’s efforts to build a game that proves that notion wrong.

The murder mystery plot is a classic in video games; take Grim Fandango, L.A. Noire, and the epic Witcher III. But as fun as they are, they share a downside: they rarely offer much replayability. Once you find out the butler did it, there isn’t much point in playing again. However, a team of academics and game designers is joining forces to pair open data with computer-generated content, creating a game that gives players a new mystery to solve every time they play.

Dr Antonios Liapis

The University of Malta’s Dr Antonios Liapis and New York University’s Michael Cerny Green, Gabriella A. B. Barros, and Julian Togelius want to break new ground by using artificial intelligence (AI) for content creation. 

They’re handing the design job over to an algorithm. The result is a game in which all characters, places, and items are generated using open data, making every play session, every murder mystery, unique. That game is DATA Agent.

Gameplay vs Technical Innovation 

AI often only enters the conversation in the form of expletives, when people play games such as FIFA and players on their virtual team don’t make the right turn, or when there is a glitch in a first-person shooter like Call of Duty. But the potential applications of AI in games are far greater than merely making objects and characters move through the game world realistically. AI can also be used to create unique content; it can be creative.

While creating content this way is nothing new, the focus on using AI has typically been purely algorithmic, with content generated through computational procedures. No Man’s Sky, a space exploration game that took the world (and crowdfunding platforms) by storm in 2015, generated a lot of hype around its use of such procedures to create varied content for each player. The makers of No Man’s Sky promised their players galaxies to explore, but enthusiasm waned in part due to the monotonous gameplay. DATA Agent learnt from this example. The game instead taps into existing information available online, from Wikipedia, Wikimedia Commons, and Google Street View, and uses it to create a whole new experience.

Data: the Robot’s Muse  

A human designer draws on their experiences for inspiration. But what are experiences if not subjectively recorded data on the unreliable wetware that is the human brain? Similarly, a large quantity of freely available data can be used as a stand-in for human experience to ‘inspire’ a game’s creation. 

According to a report by UK non-profit Nesta, machines will struggle with creative tasks. But researchers in creative computing want AI to create as well as humans can.

However, before we grab our pitchforks and run AI out of town, it must be said that games using online data sources are often rather unplayable. Creating content from unrefined data can lead to absurd and offensive gameplay situations. Angelina, a game-making AI created by Mike Cook at Falmouth University, produced A Rogue Dream. This game uses Google Autocomplete functions to name the player’s abilities, enemies, and healing items based on an initial prompt by the player. Problems occasionally arose as nationalities and gender became linked to racial slurs and dangerous stereotypes. Apparently there are awful people influencing autocomplete results on the internet.

DATA Agent uses backstory to mitigate problems arising from absurd results. A revised user interface also makes playing the game more intuitive and less like poring over musty old data sheets. 

So what is it really? 

In DATA Agent, you are a detective tasked with finding a time-traveling murderer now masquerading as a historical figure. DATA Agent creates a murder victim based on a person’s name and builds the victim’s character and story using data from their Wikipedia article.

This makes the backstory a central aspect of the game. It is carefully crafted to explain the context of the links between the entities found by the algorithm. Firstly, it serves to explain expected inconsistencies. Some characters’ lives did not historically overlap, but they are still grouped together as characters in the game. It also clarifies that the murderer is not a real person but rather a nefarious doppelganger. After all, it would be a bit absurd to have Albert Einstein be a witness to Attila the Hun’s murder. Also, casting a beloved figure as a killer could sour the game’s enjoyment, or even start riots. Not to mention that some of the people on Wikipedia are still alive, and no university could afford the inevitable avalanche of legal battles.

Rather than increase the algorithm’s complexity to identify all backstory problems, the game instead makes the issues part of the narrative. In the game’s universe, criminals travel back in time to murder famous people. This murder shatters the existing timeline, causing temporal inconsistencies: that’s why Einstein and Attila the Hun can exist simultaneously. An agent of DATA is sent back in time to find the killer, but time travel scrambles the information they receive, and they can only provide the player with the suspect’s details. The player then needs to gather intel and clues from other non-player characters, objects, and locations to try and identify the culprit, now masquerading as one of the suspects. The murderer, who, like the DATA Agent, is from an alternate timeline, also has incomplete information about the person they are impersonating and will need to improvise answers. If the player catches the suspect in a lie, they can identify the murderous, time-traveling doppelganger and solve the mystery!

De-mystifying the Mystery 

The murder mystery starts where murder mysteries always do, with a murder. And that starts with identifying the victim. The victim’s name becomes the seed for the rest of the characters, places, and items. Suspects are chosen based on their links to the victim and must always share a common characteristic. For example, Britney Spears and Diana Ross are both classified as ‘singer’ in the data used. The algorithm searches for people with links to the victim and turns them into suspects. 

But a good murder mystery needs more than just suspects and a victim. As Sherlock Holmes says, a good investigation is ‘founded upon the observation of trifles.’ So the story must also have locations to explore, objects to investigate for clues, and people to interrogate. These are the game’s ‘trifles’, and that’s why the algorithm also searches for related articles for each suspect. Related articles about places are converted into in-game locations, related articles about people become NPCs, and everything else is made into game items.
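For the technically curious, the sketch below shows in Python how linked articles might be sorted into suspects, NPCs, locations, and items. It is a minimal illustration only: the Article structure, the hard-coded catalogue, and the trait-matching rule are assumptions standing in for the real open-data queries, not the authors’ actual pipeline.

```python
# Illustrative sketch of DATA Agent-style case generation (not the authors' code).
# Assumes linked Wikipedia articles have already been fetched and typed; the
# hard-coded catalogue below stands in for real open-data queries.

from dataclasses import dataclass, field

@dataclass
class Article:
    title: str
    kind: str                                   # e.g. 'person', 'place', 'other'
    traits: set = field(default_factory=set)    # e.g. {'singer'}
    links: list = field(default_factory=list)   # titles of linked articles

def build_case(victim: Article, catalogue: dict) -> dict:
    """Turn a victim's linked articles into suspects, NPCs, locations, and items."""
    case = {"victim": victim.title, "suspects": [], "npcs": [],
            "locations": [], "items": []}
    for title in victim.links:
        linked = catalogue[title]
        if linked.kind == "person":
            # Suspects must share a characteristic with the victim (e.g. 'singer');
            # other linked people become non-player characters.
            if linked.traits & victim.traits:
                case["suspects"].append(linked.title)
            else:
                case["npcs"].append(linked.title)
        elif linked.kind == "place":
            case["locations"].append(linked.title)
        else:
            case["items"].append(linked.title)
    return case

# Toy catalogue echoing "The Case of Britney Spears"
catalogue = {
    "Diana Ross": Article("Diana Ross", "person", {"singer"}),
    "Jamie Lynn Spears": Article("Jamie Lynn Spears", "person", {"actress"}),
    "McComb, Mississippi": Article("McComb, Mississippi", "place"),
    "...Baby One More Time": Article("...Baby One More Time", "other"),
}
victim = Article("Britney Spears", "person", {"singer"}, links=list(catalogue))
print(build_case(victim, catalogue))
```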

The Case of Britney Spears 

This results in games like “The Case of Britney Spears” with Aretha Franklin, Diana Ross, and Taylor Hicks as the suspects. In the case of Britney Spears, the player could interact with NPCs such as Whitney Houston, Jamie Lynn Spears, and Katy Perry. They could also travel from McComb in Mississippi to New York City. As they work their way through the game, they would uncover that the evil time-traveling doppelganger had taken the place of the greatest diva of them all: Diana Ross.

Oops, I learned it again 

DATA Agent goes beyond refining the technical aspects of organising data and gameplay. In an age when so much freely available information is ignored because it is presented in an inaccessible or boring format, data games could be game-changing (pun intended).

In 1985, Broderbund released their game Where in the World is Carmen Sandiego?, where the player tracked criminal henchmen and eventually mastermind Carmen Sandiego herself by following geographical trivia clues. It was a surprise hit, becoming Broderbund’s third best-selling Commodore game as of late 1987. It had tapped into an unanticipated market, becoming an educational staple in many North American schools. 

Facts may have lost some of their lustre since the rise of fake news, but games like Where in the World is Carmen Sandiego? are proof that learning doesn’t have to be boring. And this is where products such as DATA Agent could thrive. After all, the game uses real data and actual facts about the victims and suspects. The player’s main goal is to catch the doppelganger’s mistake in their recounting of facts, which requires careful attention, the kind of attention you may not have when reading a textbook. This type of increased engagement with material has been linked to improved information retention. In the end, when you’ve travelled through the game’s various locations, found a number of items related to the murder victim, and uncovered the time-travelling murderer, you’ll hardly be aware that you’ve been taught.

‘Education never ends, Watson. It is a series of lessons, with the greatest for the last.’ – Sir Arthur Conan Doyle, His Last Bow. 


Escape the (Virtual) Room!

 

Natalia Mallia

Virtual Reality (VR) has created a whole new realm of experiences. By placing people into varied situations and environments, VR enables them not only to explore, but to challenge themselves and gain skills in ways never thought possible. With applications in medical and psychological treatment, VR is now being used to train surgeons, treat PTSD, and help people understand what it’s like to be on the autism spectrum. The key to these applications is VR’s ability to immerse its users.

Many agree that immersion needs two key ingredients: a sense of presence and interaction with the environment. Interaction comes in three main forms. Selection is about differentiating between items in the environment. Navigation allows travelling from one point to another and observing the environment. Finally, manipulation lets users grab, move and rotate selectable items. In addition to this, VR applications need a setting. Supervised by Dr Vanessa Camilleri and Prof. Alexiei Dingli, I chose to use escape rooms (adventure games where multiple puzzles are solved to leave a room) to experiment with these interaction techniques. 

I used escape rooms because they’re highly interactive and naturally immersive systems. And since interaction isn’t a one-size-fits-all scenario, I also applied procedural content generation (PCG) techniques to create the escape rooms themselves.

People selected items using a reticle, a small circle in the middle of the screen which expanded or contracted to indicate which objects they could interact with. They navigated the space by looking around through the VR headset and moving their joystick. They manipulated puzzles from a separate screen that I layered on top of the escape room. This allowed them to inspect objects to their heart’s content, while also reducing the amount of clutter in the room.
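As a rough illustration of how reticle selection can work under the hood, the sketch below casts a ray along the user’s (normalised) gaze direction and reports the nearest object it passes close enough to; the reticle is then scaled up over selectable items. The object list, radii, and scale values are invented for the example and are not taken from the project itself.

```python
import math

def gaze_select(camera_pos, gaze_dir, objects, max_dist=10.0):
    """Return the nearest interactable object along the gaze ray, if any.
    Assumes gaze_dir is a unit vector."""
    best = None
    for obj in objects:
        # Vector from the camera to the object's centre
        to_obj = [o - c for o, c in zip(obj["pos"], camera_pos)]
        along = sum(t * g for t, g in zip(to_obj, gaze_dir))  # distance along the ray
        if 0 < along < max_dist:
            # Point on the ray closest to the object's centre
            closest = [c + g * along for c, g in zip(camera_pos, gaze_dir)]
            off_axis = math.dist(closest, obj["pos"])
            if off_axis < obj["radius"] and (best is None or along < best[0]):
                best = (along, obj)
    return best[1] if best else None

# Made-up scene: a key straight ahead, a lamp off to the side
objects = [{"name": "key", "pos": (0.0, 1.0, 3.0), "radius": 0.3},
           {"name": "lamp", "pos": (2.0, 1.5, 4.0), "radius": 0.5}]
hit = gaze_select((0.0, 1.0, 0.0), (0.0, 0.0, 1.0), objects)
reticle_scale = 1.5 if hit else 1.0   # expand the reticle over a selectable item
print(hit["name"] if hit else None, reticle_scale)
```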

Since there was no previous work on PCG escape rooms, I had to pave my own way. I used a genetic algorithm, an optimisation technique that mimics biological evolution to home in on the best solution to a problem, to determine which puzzles and items would be placed in the escape room. I also programmed the game to create the rest of the room, placing floors, ceilings, and everything else the algorithm didn’t consider. This made the space look like it had been made by an actual person, despite being created through AI.
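To give a flavour of the idea, here is a toy genetic algorithm that evolves a set of puzzles towards a target room difficulty. The puzzle pool, fitness function, and parameters are illustrative assumptions for this sketch, not the project’s actual implementation.

```python
# Toy genetic algorithm: evolve which puzzles go in a room (illustrative only).
import random

PUZZLE_POOL = [("padlock", 2), ("riddle", 3), ("wiring", 5),
               ("cipher", 4), ("jigsaw", 1), ("safe", 6)]
ROOM_SIZE = 3          # puzzles per room
TARGET_DIFFICULTY = 9  # desired total difficulty of the room

def fitness(room):
    # Rooms closer to the target difficulty score higher (0 is a perfect match).
    return -abs(sum(d for _, d in room) - TARGET_DIFFICULTY)

def crossover(a, b):
    # Take the start of one parent and fill up from the other parent.
    cut = random.randint(1, ROOM_SIZE - 1)
    child = a[:cut] + [p for p in b if p not in a[:cut]]
    return child[:ROOM_SIZE]

def mutate(room):
    # Occasionally swap one puzzle for a random one from the pool.
    if random.random() < 0.2:
        room[random.randrange(ROOM_SIZE)] = random.choice(PUZZLE_POOL)
    return room

population = [random.sample(PUZZLE_POOL, ROOM_SIZE) for _ in range(20)]
for _ in range(50):                       # evolve for 50 generations
    population.sort(key=fitness, reverse=True)
    parents = population[:10]             # keep the fittest half
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(10)]
    population = parents + children

print(sorted(population, key=fitness, reverse=True)[0])
```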

From the results gathered, most people found that the system allowed them to explore the VR environment in a very natural way. Players said that the room’s generated interaction was consistent, reliable, and fun. 

Understanding immersion is critical for VR’s future applications. If honing these techniques means creating a few games along the way, so be it!

This research was carried out as part of a Bachelor of Artificial Intelligence at the Faculty of ICT, University of Malta.

Author: Natalia Mallia

Drawing with our eyes

Matthew Attard

Drawing can be defined as the active exploration of an individual’s mental imagery. John Berger described it as ‘an autobiographical record of one’s discovery of an event—seen, remembered, or imagined.’

The initial hunch for my research revolved around the idea of drawing with one’s eyes instead of hands by using an eye-tracker.

The approach intrigued me for three reasons. Firstly, it allowed me to explore the notion that an artist’s skills are in their tools, their hands; the eye-tracker-based technique ‘levelled the playing field’ between artist and non-practitioner by removing hands from the equation. Secondly, through eye-drawing practice, I could notice a shift in the drawing methods used. Normal drawing involves hand-eye coordination and a degree of intuitive eye movements. In ‘eye-drawing’, these movements have to be suppressed into following contours across the observed scene, while also restraining the impulse to fall back on accustomed curvilinear hand motions. Thirdly, all this means that eye-drawing cannot be approached in the same way as ‘normal’ drawing: eye-drawn objects are directly tied to the place and time of their execution and acquire a technological aesthetic.

I explored these concepts in several experiments. I ran communal ‘life’ eye-drawing classes with first-year students reading for an MFA (Faculty of Media and Knowledge Sciences, University of Malta [UM]). Their resulting visuals were surprisingly individualistic, highlighting each student’s character, a quality I observed to be constant throughout all eye-drawings.

Using an eye-tracker to draw led to some exciting possibilities. I tested a preliminary algorithm, developed by my colleague Neil Mizzi (Faculty of ICT, UM), that ‘corrected’ an eye-drawing by comparing it to a real-world picture. The technique could be applied in future eye-drawing devices designed to help physically impaired individuals draw from real-world images using just their eyes.

It can be argued that art is a subjective experience, both in its creation and perception. Eye-drawing can exploit this subjectivity, revealing ‘signature’ gestures through a new way of looking.

This research was carried out as part of a Masters by research at the Department of Digital Arts, Faculty of Media and Knowledge Sciences, University of Malta (UM).

Author: Matthew Attard

Robot see, robot maps

by Rachael N. Darmanin

The term ‘robot’ tends to conjure up images of well-known metal characters like C-3PO, R2-D2, and WALL-E. The robotics research boom has finally brought real robots into our homes, workspaces, and recreational places. The pop culture icons we loved have now been replaced with the likes of robot vacuums such as the Roomba and home automation systems such as smoke detectors and Wi-Fi-enabled thermostats like the Nest. Nonetheless, building a fully autonomous mobile robot is still a momentous task. In order to travel purposefully around its environment, a mobile robot has to answer the questions ‘where am I?’, ‘where should I go next?’ and ‘how am I going to get there?’

Like humans, mobile robots must have some awareness of their surroundings in order to carry out tasks autonomously. A map comes in handy for humans; a robot can build that map itself while exploring an unknown environment, a process called Simultaneous Localisation and Mapping (SLAM). To decide which location to explore next, however, the robot needs an exploration strategy, while a path planner guides it to that location, growing the map as it goes.

Rachael N. Darmanin

Rachael Darmanin (supervised by Dr Ing. Marvin Bugeja) used a software framework called Robot Operating System (ROS) to develop a robot system that can explore and map an unknown environment on its own. Darmanin used a differential-drive wheeled mobile robot, dubbed PowerBot, equipped with a laser scanner (LIDAR) and wheel encoders. The algorithms responsible for localising the robot analyse the sensors’ data and construct the map. In her experiments, Darmanin implemented two different exploration strategies, the Nearest Frontier and the Next Best View, on the same system to map the Control Systems Engineering Laboratory. Each experiment ran for approximately two minutes, until the robot finished its exploration and produced a map of its surroundings. This was then compared to a known map of the environment to evaluate the robot’s mapping accuracy. The Next Best View approach generated the most accurate maps.
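To give a flavour of how such a strategy works, the sketch below implements a bare-bones version of Nearest Frontier on a toy occupancy grid: a breadth-first search from the robot’s cell to the closest free cell that borders unexplored space. The grid and starting position are made up for the example; the real system ran on LIDAR-built maps inside ROS.

```python
# Toy Nearest Frontier sketch on an occupancy grid
# (0 = free, 1 = occupied, -1 = unknown); values are illustrative only.
from collections import deque

def nearest_frontier(grid, start):
    """Breadth-first search from the robot's cell to the closest frontier:
    a free cell that borders at least one unknown cell."""
    rows, cols = len(grid), len(grid[0])
    neighbours = lambda r, c: [(r + dr, c + dc)
                               for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                               if 0 <= r + dr < rows and 0 <= c + dc < cols]
    def is_frontier(r, c):
        return grid[r][c] == 0 and any(grid[nr][nc] == -1
                                       for nr, nc in neighbours(r, c))
    queue, seen = deque([start]), {start}
    while queue:
        r, c = queue.popleft()
        if is_frontier(r, c):
            return (r, c)                 # next goal handed to the path planner
        for nr, nc in neighbours(r, c):
            if (nr, nc) not in seen and grid[nr][nc] == 0:
                seen.add((nr, nc))
                queue.append((nr, nc))
    return None                           # no frontiers left: map fully explored

grid = [[0, 0, -1],
        [0, 1, -1],
        [0, 0, -1]]
print(nearest_frontier(grid, (0, 0)))     # -> (0, 1), the closest frontier cell
```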

Mobile robots with autonomous exploration and mapping capabilities have massive relevance to society. They can aid exploration of hazardous environments, such as nuclear disaster sites, or access uncharted archaeological sites. They could also help in search and rescue operations, navigating disaster-stricken environments. For her doctorate, Darmanin is now looking into how multiple robots can work together to survey a large area, with a few other solutions in between.


This research was carried out as part of a Master of Science in Engineering, Faculty of Engineering, University of Malta. It was funded by the Master it! Scholarship Scheme (Malta). This scholarship is part-financed by the European Union European Social Fund (ESF) under Operational Programme II Cohesion Policy 2007–2013, Empowering People for More Jobs and a Better Quality of Life.