What’s your Face Worth?

AI and Facial Recognition

While most European citizens remain wary of AI and Facial Recognition, Maltese citizens do not seem to grasp the repercussions of such technology. Artificial Intelligence expert, Prof. Alexiei Dingli (University of Malta), returns to THINK to share his insights.

The camera sweeps across a crowd of people, locates the face of a possible suspect, isolates it, and analyses it. Within seconds the police apprehend the suspect through the formidable powers of Facial Recognition technology and Artificial Intelligence (AI).

A recent survey by the European Union’s agency for fundamental rights revealed how European citizens felt about this technology. Half of the Maltese population would be willing to share their facial image with a public entity, which is surprising given that on average only 17% of Europeans felt comfortable with this practice. What explains Malta’s outlier response?

Facial Recognition uses biometric data to map people’s faces from a photograph or video (biometric data is human characteristics such as fingerprints, gait, voice, and facial patterns). AI is then used to match that data to the right person by comparing it to a database. The technology is now advanced enough to scan a large gathering to identify suspects against police department records. 
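In code, that matching step boils down to a nearest-neighbour search. The sketch below is a deliberately simplified illustration, not any real vendor’s system: it assumes faces have already been reduced to fixed-length numerical ‘embeddings’ (real systems derive these with deep neural networks and use far more dimensions), and it simply finds the closest entry in a database by cosine similarity.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def identify(probe, database, threshold=0.9):
    """Return the name of the closest database entry, or None if nothing is close enough."""
    best_name, best_score = None, threshold
    for name, embedding in database.items():
        score = cosine_similarity(probe, embedding)
        if score >= best_score:
            best_name, best_score = name, score
    return best_name

# Toy database of 3-dimensional "face embeddings" (real systems use 128+ dimensions).
database = {
    "Alice": [0.9, 0.1, 0.2],
    "Bob":   [0.1, 0.9, 0.3],
}
print(identify([0.88, 0.12, 0.21], database))  # very close to Alice's embedding
print(identify([0.5, 0.5, 0.5], database))     # matches no one well enough
```

The threshold is where the trouble starts: set it too low and strangers are ‘recognised’, set it too high and genuine matches are missed.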

Data is the new Oil

Facial Recognition and AI have countless uses. They could help prevent crime and find missing persons. They can unlock our phones, analyse and influence our consumption habits, and even track attendance in schools to ensure children are safe. But shouldn’t there be a limit? Do people really want their faces used by advertisers? Or by a government keen to know about their flirtation with an opposing political party? In essence, by giving up this information, will our lives become better?

‘Legislation demands that you are informed,’ points out Dingli. Biometric data can identify you, meaning that it falls under GDPR. People cannot snap pictures of others without their consent; private data cannot be used without permission. Dingli goes on to explain that ‘while shops are using it [Facial Recognition Technology] for security purposes, we have to ask whether this data can lead to further abuses. You should be informed that your data is being collected, why it is being collected, and whether you consent or not. Everyone has a right to privacy.’

Large corporations rely on their audiences’ data. They tailor their ad campaign based on this data to maximise sales. Marketers need this data, from your Facebook interests to tracking cookies on websites. ‘It’s no surprise then,’ laughs Dingli, ‘that Data is the new oil.’ 

The EU’s survey also found that participants are less inclined to share their data with private companies than with government entities. Dingli speculates that ‘a government is something which we elect; this tends to give it more credibility than, say, a private company. The Facebook-Cambridge Analytica data breach scandal of 2018 is another possible variable.’

China has embraced Facial Recognition far more than the Western world. Millions of cameras are used to establish individual citizens’ ‘social scores’. If someone litters, their score is reduced. The practice is controversial and raises the issue of errors. Algorithms can mismatch one citizen for another. While an error rate in the single digits might not seem like a large margin, even a measly 1% error rate can prove catastrophic for mismatched individuals. A hypothetical 1% error rate in China, with a population of over 1.3 billion, would mean that well over ten million Chinese citizens are mismatched.
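The arithmetic behind that estimate is easy to check:

```python
population = 1_300_000_000  # China's population, as cited above
error_rate = 0.01           # a hypothetical 1% mismatch rate

mismatched = int(population * error_rate)
print(f"{mismatched:,} mismatched citizens")  # 13,000,000 -- well over ten million
```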

Is privacy necessary?

‘I am convinced that we do not understand our rights,’ Prof. Dingli asserts. ‘We do not really value our privacy, and we find it easy to share our data.’ Social media platforms like Facebook made their way into our daily lives without people understanding how they work. The same can be said for AI and Facial Recognition. They have already seeped into our lives, and many of us are already using them, completely unaware. But the question is, how can we guarantee that AI is designed and used responsibly?

Dingli smiles, ‘How can you guarantee that a knife is used responsibly? AI, just like knives, is used by everybody. The problem is that many of us don’t even know we are using AI. We need to educate people. Currently, our knowledge of AI is formed through Hollywood movies. All it takes is a bit more awareness for people to realise that they are using AI right here and now.’

Everyone has a right to privacy, and while corporations are morally bound to respect that right, individuals are also responsible for the way they treat their own data. A knife, just like data, is a tool. It can be used for good or for ill. We are responsible for how we use these tools.

To Regulate or Not to Regulate?

Our data might not be tangible, but it is a highly valued commodity. Careless handling of our data, either through cyberattacks or our own inattention, can lead to identity theft. While the technology behind AI and Facial Recognition is highly advanced, it is far from perfect and is still prone to error. The misuse of AI can endanger human rights by manipulating groups of people through the dissemination of disinformation.   

Regulating AI is one possibility; it would establish technical standards and could protect consumers. However, it may also stifle research. Given that AI is a horizontal field of study, disciplines from architecture to medicine would have to consider the implications of a future with restricted use. An alternative to regulation is the creation of ethical frameworks, which would enable researchers to continue expanding AI’s capabilities within moral boundaries. These boundaries would include respecting the rights of participants and drawing a line at research that could be used to cause physical or emotional harm or damage to property.

While the debate regarding regulation rages on, we need to take a closer look at things within our control. While we cannot control where AI and Facial Recognition technology will take us, we can control whom we share our data with. Will we entrust it to an ethical source who will use it to better humanity, or the unscrupulous whose only concern is profit? 

Further Reading:

The Facebook-Cambridge Analytica data breach involved millions of Facebook users’ data being harvested without their consent by Cambridge Analytica and later used for political advertising.

Chan, R. (2019). The Cambridge Analytica whistleblower explains how the firm used Facebook data to sway elections. Business Insider. Retrieved 8 July 2020, from https://www.businessinsider.com/cambridge-analytica-whistleblower-christopher-wylie-facebook-data-2019-10.

Malta’s Ethical AI Framework: Parliamentary Secretariat for Financial Services, Digital Economy and Innovation. (2019). Malta Towards Trustworthy AI. Malta’s Ethical AI Framework. Malta.AI. Retrieved 8 July 2020, from https://malta.ai/wp-content/uploads/2019/10/Malta_Towards_Ethical_and_Trustworthy_AI_vFINAL.pdf

We’re exploring Here!

If you had a rich malleable canvas that could flip rules on their heads and expose truths we take for granted, wouldn’t you use it? Jasper Schellekens writes about the games delving deep into some of our most challenging philosophical questions.

The famous Chinese philosopher Confucius once said, ‘I hear and I forget. I see and I remember. I do and I understand.’ Confucius would likely have been a miserable misfit in modern mainstream education, which demands that students sit and listen to teachers. But it’s not all bad. Technological advancements have brought us something Confucius could never have dreamed of: digital worlds.

A digital world offers interaction within the boundaries of a created environment. It allows you to do things, even if the ‘thing’ amounts to little more than pressing a key. Research at the Institute of Digital Games (IDG) focuses on developing a deeper understanding of how these concepts can be used to teach through doing: looking at how people interact with gameworlds, studying how games can impact them (Issue 24), and designing games that do exactly that.

Doing it digital 

Over two millennia later, John Dewey, one of the most prominent American scholars of the 20th century, proposed an educational reform that focused on learning through doing and reflection instead of the ‘factory model’ that was then the norm. Dewey’s idea was embraced and has become a pedagogical tool in many classrooms, now known as experiential learning.

Let’s not pretend that Confucius was thousands of years ahead of his time—after all, apprenticeships have always been an extremely common form of learning. But what if we were to transplant this method of experimentation, trial and error, into a digital world?

It would allow us to do so much! And we’re talking about more than figuring out how to plug in to Assassin’s Creed’s tesseract or getting the hang of swinging through New York City as Spider-Man. While these are valuable skills you don’t want to ignore, what we’re really interested in here are virtual laboratories, space simulations, and interactive thought experiments.

Games make an ideal vehicle for experiential learning precisely because they provide a safe and relatively inexpensive digital world for students to learn from.

Think of the value of a flight simulator in training pilots. The IDG applied the same idea to create a virtual chemistry lab for the Envisage Project. They threw in the pedagogical power tools of fun and competition to create what are known as serious games.

Serious games are at the heart of many of the IDG’s research projects. eCrisis uses games for social inclusion and teaching empathy. iLearn facilitates the learning process for children with dyslexia and Curio is developing a teaching toolkit to foster curiosity. However, the persuasive power of videogames stretches further than we might think.

In a videogame world, players take intentional actions based on the rules set by the creators. These ‘rules’ are also referred to as ‘game mechanics’. Through these rules, and experiential learning, players can learn to think in a certain, often conventional, way.

Which brings us to HERE.

Prof. Stefano Gualeni is fond of using games to criticise conventions: in Necessary Evil a player takes on the role of an NPC (Non Player Character) monster, in Something Something Soup Something the definition of soup is questioned, while in HERE Gualeni breaks down what ‘here’ means in a digital world.

What’s Here?  

HERE sees the player explore the philosophical concept of ‘indexicality’, the idea that meanings depend on the context in which they occur. A fitting example is the extended index finger, which means different things depending on where it is placed and what movement it makes. Point one way or another to indicate direction, place over the lips to request silence, or shake it from side to side to deny or scold. 

The game explores the word ‘here’ in the digital world. It sheds light on how much we take for granted, and how a lot of concepts are not as straightforward as we think. 

In HERE you play as ‘Wessel the Adventurer’, a cat of acute perception sent on a quest by a wizard to find magic symbols and open an enchanted cave. Playing on the tropes of role-playing games, the adventurer’s expectations are framed in a conventional manner, but not everything is as it seems.

By subverting players’ expectations of role-playing games, HERE gives them the opportunity to discover what they have been (perhaps unwittingly) taught. They are confronted with a puzzle involving the many versions of ‘here’ that can co-exist in a digital world. Among their prizes is Gualeni himself performing a philosophical rap.

Explorable Explanations 

Experiential learning isn’t the only way to learn, but video games, with their interactivity and ability to manipulate the gameworld’s rules with ease, offer a ripe environment for it. The digital realm adds a very malleable layer of possibility for learning through doing and interacting with philosophical concepts. HERE is not alone in this approach. 

Words often fall short of the concepts they are trying to convey. How do you explain why people trust each other when there are so many opportunities to betray that trust? Telling people they have cognitive biases is not as effective as showing them acting on those biases.

Explorable Explanations is a collection of games curated by award-winning game developer Nicky Case that dig into these concepts through play. The Evolution of Trust is one of them, breaking down the complex psychological and social phenomena contributing to the seemingly simple concept of trust in society. Adventures in Cognitive Biases shows us how we are biased even when we don’t think we are. HERE delves into our understanding of language and the world around us, showing us (instead of telling us) that learning doesn’t have to be boring. Now go learn something and play HERE.

To try the game yourself visit www.here.gua-le-ni.com

Enter the swarm

Author: Jean Luc Farrugia 

Jean Luc Farrugia

Once upon a time, the term ‘robot’ conjured up images of futuristic machines from the realm of science fiction. However, we can find the roots of automation much closer to home.

Nature is the great teacher. In the early days, when Artificial Intelligence was driven by symbolic AI (whereby entities in an environment are represented by symbols which are processed by mathematical and logical rules to make decisions on what actions to take), Australian entrepreneur and roboticist Rodney Brooks looked to animals for inspiration. There, he observed highly intelligent behaviours; take lionesses’ ability to coordinate and hunt down prey, or elephants’ skill in navigating vast lands using their senses. These creatures needed no maps, no mathematical models, and yet left even the best robots in the dust. 

This gave rise to a slew of biologically-inspired approaches. Successful applications include domestic robot vacuums and space exploration rovers. 

Swarm Robotics is an approach that extends this concept by taking a cue from collaborative behaviours used by animals like ants or bees, all while harnessing the emerging IoT (Internet of Things) trend that allows technology to communicate.

Supervised by Prof. Ing. Simon G. Fabri, I designed a system that enabled a group of robots to intelligently arrange themselves into different patterns while in motion, just like a herd of elephants, a flock of birds, or even a group of dancers! 

Farrugia’s robots in action.

I built and tested my system using real robots, which had to transport a box to target destinations chosen by the user. Unlike previous work, the algorithms I developed are not restricted by formation shape. My robots can change shape on the fly, allowing them to adapt to the task at hand. The system is quite simple and easy to use.

The group consisted of three robots designed using inexpensive off-the-shelf components. Simulations confirmed that it could be used for larger groups. The robots could push, grasp, and cage objects to move them from point A to B. To cage an object the robots move around it to bind it, then move together to push it around. Caging proved to be the strongest method, delivering the object even when a robot became immobilised, though grasping delivered more accurate results.
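The caging behaviour can be illustrated geometrically. The snippet below is a simplified sketch, not the project’s actual control algorithm: the robots take evenly spaced positions on a circle around the object, and the formation’s centre is then stepped towards the target, carrying the object along with it.

```python
import math

def caging_positions(cx, cy, radius, n_robots):
    """Place n robots at evenly spaced angles on a circle around the object at (cx, cy)."""
    return [(cx + radius * math.cos(2 * math.pi * i / n_robots),
             cy + radius * math.sin(2 * math.pi * i / n_robots))
            for i in range(n_robots)]

def step_towards(pos, target, speed):
    """Move a point a fixed distance towards the target, without overshooting it."""
    dx, dy = target[0] - pos[0], target[1] - pos[1]
    dist = math.hypot(dx, dy)
    if dist <= speed:
        return target
    return (pos[0] + speed * dx / dist, pos[1] + speed * dy / dist)

# Three robots cage an object at the origin, then carry it to (10, 0).
object_pos, goal = (0.0, 0.0), (10.0, 0.0)
while object_pos != goal:
    object_pos = step_towards(object_pos, goal, speed=0.5)
robots = caging_positions(object_pos[0], object_pos[1], radius=1.0, n_robots=3)
```

Because the formation encloses the object rather than gripping it, losing one robot merely opens a gap, which hints at why caging kept working even with an immobilised robot.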

Collective transportation can have a great impact on the world’s economy. From the construction and manufacturing industries, to container terminal operations, robots can replace humans to protect them from the dangerous scenarios many workers face on a daily basis. 

This research project was carried out as part of the M.Sc. in Engineering (Electrical) programme at the Faculty of Engineering. A paper entitled “Swarm Robotics for Object Transportation” was presented at the UKACC Control 2018 conference and is available on the IEEE Xplore digital library.

https://www.facebook.com/ThinkUM/videos/493872941442263/

The unusual suspects

As technology advances, it has often been said that creative tasks will remain out of machines’ reach. Jasper Schellekens writes about one team’s efforts to build a game that proves that notion wrong.

The murder mystery plot is a classic in video games; take Grim Fandango, L.A. Noire, and the epic Witcher III. But as fun as they are, they do have a downside—they don’t often offer much replayability. Once you find out the butler did it, there isn’t much point in playing again. However, a team of academics and game designers have joined forces to pair open data with computer-generated content, creating a game that gives players a new mystery to solve every time they play.

Dr Antonios Liapis

The University of Malta’s Dr Antonios Liapis and New York University’s Michael Cerny Green, Gabriella A. B. Barros, and Julian Togelius want to break new ground by using artificial intelligence (AI) for content creation. 

They’re handing the design job over to an algorithm. The result is a game in which all characters, places, and items are generated using open data, making every play session, every murder mystery, unique. That game is DATA Agent.

Gameplay vs Technical Innovation 

AI often only enters the conversation in the form of expletives, when people play games such as FIFA and players on their virtual team don’t make the right turn, or when there is a glitch in a first-person shooter like Call of Duty. But the potential applications of AI in games are far greater than merely making objects and characters move through the game world realistically. AI can also be used to create unique content—it can be creative.

While creating content this way is nothing new, the focus on using AI has typically been purely algorithmic, with content being generated through computational procedures. No Man’s Sky, a space exploration game that took the world (and crowdfunding platforms) by storm in 2015, generated a lot of hype around its use of computational procedures to create varied and different content for each player. The makers of No Man’s Sky promised their players galaxies to explore, but enthusiasm waned in part due to the monotonous game play. DATA Agent learnt from this example. The game instead taps into existing information available online from Wikipedia, Wikimedia Commons, and Google Street View and uses that to create a whole new experience.

Data: the Robot’s Muse  

A human designer draws on their experiences for inspiration. But what are experiences if not subjectively recorded data on the unreliable wetware that is the human brain? Similarly, a large quantity of freely available data can be used as a stand-in for human experience to ‘inspire’ a game’s creation. 

According to a report by UK non-profit Nesta, machines will struggle with creative tasks. But researchers in creative computing want AI to create as well as humans can.

However, before we grab our pitchforks and run AI out of town, it must be said that games using online data sources are often rather unplayable. Creating content from unrefined data can lead to absurd and offensive gameplay situations. Angelina, a game-making AI built by Mike Cook at Falmouth University, created A Rogue Dream. This game uses Google Autocomplete to name the player’s abilities, enemies, and healing items based on an initial prompt by the player. Problems occasionally arose as nationalities and gender became linked to racial slurs and dangerous stereotypes. Apparently there are awful people influencing autocomplete results on the internet.

DATA Agent uses backstory to mitigate problems arising from absurd results. A revised user interface also makes playing the game more intuitive and less like poring over musty old data sheets. 

So what is it really? 

In DATA Agent, you are a detective tasked with finding a time-traveling murderer now masquerading as a historical figure. DATA Agent creates a murder victim based on a person’s name and builds the victim’s character and story using data from their Wikipedia article.

This makes the backstory a central aspect to the game. It is carefully crafted to explain the context of the links between the entities found by the algorithm. Firstly, it serves to explain expected inconsistencies. Some characters’ lives did not historically overlap, but they are still grouped together as characters in the game. It also clarifies that the murderer is not a real person but rather a nefarious doppelganger. After all, it would be a bit absurd to have Albert Einstein be a witness to Attila the Hun’s murder. Also, casting a beloved figure as a killer could influence the game’s enjoyment and start riots. Not to mention that some of the people on Wikipedia are still alive, and no university could afford the inevitable avalanche of legal battles.

Rather than increase the algorithm’s complexity to identify all backstory problems, the game instead makes the issues part of the narrative. In the game’s universe, criminals travel back in time to murder famous people. This murder shatters the existing timeline, causing temporal inconsistencies: that’s why Einstein and Attila the Hun can exist simultaneously. An agent of DATA is sent back in time to find the killer, but time travel scrambles the information they receive, and they can only provide the player with the suspect’s details. The player then needs to gather intel and clues from other non-player characters, objects, and locations to try and identify the culprit, now masquerading as one of the suspects. The murderer, who, like the DATA Agent, is from an alternate timeline, also has incomplete information about the person they are impersonating and will need to improvise answers. If the player catches the suspect in a lie, they can identify the murderous, time-traveling doppelganger and solve the mystery!

De-mystifying the Mystery 

The murder mystery starts where murder mysteries always do, with a murder. And that starts with identifying the victim. The victim’s name becomes the seed for the rest of the characters, places, and items. Suspects are chosen based on their links to the victim and must always share a common characteristic. For example, Britney Spears and Diana Ross are both classified as ‘singer’ in the data used. The algorithm searches for people with links to the victim and turns them into suspects. 

But a good murder-mystery needs more than just suspects and a victim. As Sherlock Holmes says, a good investigation is ‘founded upon the observation of trifles.’ So the story must also have locations to explore, objects to investigate for clues, and people to interrogate. These are the game’s ‘trifles’ and that’s why the algorithm also searches for related articles for each suspect. The related articles about places are converted into locations in the game, and the related articles about people are converted into NPCs. Everything else is made into game items.
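The selection logic can be sketched with a toy, hard-coded stand-in for the open data sources. This is an illustrative mock-up, not the authors’ code: the real game queries Wikipedia and its siblings, but the rule it applies — people linked to the victim who share a classification become suspects — looks roughly like this:

```python
# A toy stand-in for open data: each article has classifications and linked articles.
# The names mirror the "Case of Britney Spears" example discussed below.
articles = {
    "Britney Spears":  {"classes": {"singer"}, "links": {"Diana Ross", "Aretha Franklin", "McComb"}},
    "Diana Ross":      {"classes": {"singer"}, "links": {"Britney Spears", "New York City"}},
    "Aretha Franklin": {"classes": {"singer"}, "links": {"Britney Spears"}},
    "McComb":          {"classes": {"place"},  "links": set()},
    "New York City":   {"classes": {"place"},  "links": set()},
}

def pick_suspects(victim):
    """Suspects are entries linked to the victim that share a classification with them."""
    victim_classes = articles[victim]["classes"]
    suspects = []
    for name in articles[victim]["links"]:
        entry = articles.get(name)
        if entry and entry["classes"] & victim_classes:
            suspects.append(name)
    return sorted(suspects)

print(pick_suspects("Britney Spears"))  # only the linked fellow singers qualify
```

McComb, being classified as a place rather than a singer, is filtered out here; in the full game it would instead become a location to explore.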

The Case of Britney Spears 

This results in games like “The Case of Britney Spears” with Aretha Franklin, Diana Ross, and Taylor Hicks as the suspects. In the case of Britney Spears, the player could interact with NPCs such as Whitney Houston, Jamie Lynn Spears, and Katy Perry. They could also travel from McComb in Mississippi to New York City. As they work their way through the game, they would uncover that the evil time-traveling doppelganger had taken the place of the greatest diva of them all: Diana Ross.

Oops, I learned it again 

DATA Agent goes beyond refining the technical aspects of organising data and gameplay. In the age where so much freely available information is ignored because it is presented in an inaccessible or boring format, data games could be game-changing (pun intended). 

In 1985, Broderbund released their game Where in the World is Carmen Sandiego?, where the player tracked criminal henchmen and eventually mastermind Carmen Sandiego herself by following geographical trivia clues. It was a surprise hit, becoming Broderbund’s third best-selling Commodore game as of late 1987. It had tapped into an unanticipated market, becoming an educational staple in many North American schools. 

Facts may have lost some of their lustre since the rise of fake news, but games like Where in the World is Carmen Sandiego? are proof that learning doesn’t have to be boring. And this is where products such as DATA Agent could thrive. After all, the game uses real data and actual facts about the victims and suspects. The player’s main goal is to catch the doppelganger’s mistakes in their recounting of facts, requiring careful attention. The kind of attention you may not have when reading a textbook. This type of increased engagement with material has been linked to improved information retention. In the end, when you’ve travelled through the game’s various locations, found a number of items related to the murder victim, and uncovered the time-travelling murderer, you’ll hardly be aware that you’ve been taught.

‘Education never ends, Watson. It is a series of lessons, with the greatest for the last.’ – Sir Arthur Conan Doyle, His Last Bow. 

Of robots and rights

Author: Dr Jackie Mallia

Dr Jackie Mallia

In 2019, Malta will create a National Strategy for Artificial Intelligence, or ‘AI’, in order to establish the country as a hub for investment in AI. Speaking about AI at the Delta Summit late last year, Prime Minister Dr Joseph Muscat stated that ‘not only can we not stop change, but we have to embrace it with anticipation since it provides society with huge opportunities.’ He followed up with similar declarations at the Malta Innovation Summit, also observing that in the future ‘we may reach a stage where robots may be given rights under the law.’

This latter statement seemed to generate unease. Reading some of the negative comments posted online, I realised that for many, the mention of ‘AI’ still conjures up images of the Terminator movies. 

Although a machine possessing self-awareness, sentience, and consciousness may take decades to materialise, AI is already pervasive in our lives. Many of us make use of intelligent assistants, be it Amazon’s Alexa or Apple’s Siri. Others use Google Nest to adjust their home’s temperature. Then there are the millions with Netflix accounts whose content is ranked in order of assumed preference. All of it is convenient, and all of it is due to AI. But some of the scepticism towards the technology may be warranted. High-profile failures include Google Home Minis allegedly sending their owners’ secretly recorded audio to Google. Facebook’s chatbots, Alice and Bob, developed their own language to conduct private conversations, leading to their shutdown. In addition, there were two well-documented fatal autonomous car accidents in 2018.

AI is still evolving, but at the same time, it is becoming ubiquitous, which leads us to some very important questions. What is happening to the data that such systems are collecting about us? What decisions are the devices taking, and to what extent are we even aware of them? Do we have a right to know the basis upon which such decisions are taken? If a machine’s ‘intelligence’ is based on big data being fed to it in an automated manner, how do we ensure it remains free from bias? Can decisions taken by a machine be explained in a court of law? Who is liable? 

A focus on the regulation of AI is not misplaced. The issues are real and present. But the answer is not to turn away from innovation. Progress will happen whether we want it to or not. Yes, we need ‘to embrace it,’ as Muscat stated, but we must do so in the most responsible way possible through appropriate strategy and optimal legislation.   

Dr Jackie Mallia is a lawyer specialising in Artificial Intelligence and a member of the Government of Malta’s AI Taskforce.

Blood in the brain

Artificial intelligence (AI) has now made its way into the medical world. But it’s not as scary as it sounds. Most forms of AI are simply programs which have been developed to carry out very specific tasks–and they do them very well.

As part of my final-year project, I used AI to develop a program that can diagnose different types of brain haemorrhages. Brain haemorrhages are life-or-death situations where blood vessels in the brain burst and bleed into surrounding tissues, killing brain cells. Speed is key in preventing long-term brain damage, but treatment options depend on the size and location of the haemorrhage. This is when computerised tomography, or CT scans, come in.

Using X-rays, CT scans can image the brain in seconds. Last year, John Napier (another final-year project student) created an AI system to detect brain haemorrhages from CT scans. Building on this, I (under the supervision of Prof. Ing. Carl James Debono, Dr Paul Bezzina, and Dr Francis Zarb) developed a system to take the output from Napier’s system and further analyse the intensity, shape, and texture of haemorrhages to identify them as one of three types.

Kirsty Sant

The AI was trained on 24 pre-classified CT scans. By presenting each scan image to the artificial neural network along with the correct answer, the system can absorb the information and learn. This process trains it to become familiar with the types of haemorrhage. Two different structures of artificial neural network were used, with 220 variants each–resulting in 440 variants being used to train and test the model.

Then it was time to test this system. Six scans were given as unknowns and the network successfully classified over 88% of the haemorrhages using only three of the 440 variants.
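The workflow (learn from labelled scans, then classify unknowns) can be illustrated with a far simpler stand-in for the project’s neural networks: a nearest-centroid classifier over intensity, shape, and texture feature vectors. Every number and label below is invented, purely to show the shape of the train-then-test process.

```python
def train(examples):
    """Average the feature vectors of each haemorrhage type into one centroid per type."""
    groups = {}
    for label, features in examples:
        groups.setdefault(label, []).append(features)
    return {label: tuple(sum(col) / len(vecs) for col in zip(*vecs))
            for label, vecs in groups.items()}

def classify(centroids, features):
    """Assign an unknown scan to the type whose centroid is nearest (squared distance)."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: sq_dist(centroids[label], features))

# Hypothetical (intensity, shape, texture) features for three haemorrhage types.
training = [
    ("epidural",      (0.90, 0.20, 0.10)), ("epidural",      (0.85, 0.25, 0.15)),
    ("subdural",      (0.60, 0.70, 0.30)), ("subdural",      (0.65, 0.75, 0.35)),
    ("intracerebral", (0.30, 0.40, 0.80)), ("intracerebral", (0.35, 0.45, 0.85)),
]
model = train(training)
print(classify(model, (0.88, 0.22, 0.12)))  # lands nearest the epidural centroid
```

A real diagnostic network learns far richer decision boundaries, but the principle is the same: labelled examples shape the model, and unseen scans are judged against what it has absorbed.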

The purpose of this system is to verify radiologists’ diagnoses. However, we hope to develop it to diagnose haemorrhages independently, which would help treat patients faster. The system can also be adapted to other illnesses–CT scans are commonly used to image the abdomen and chest. The applications, and the life-saving potential, are endless.

This research was carried out as part of a Bachelor’s degree in Computer Engineering at the Faculty of ICT, University of Malta.

Author: Kirsty Sant 

Underwater Eyes

Water covers 70% of Earth’s surface, but our oceans and seas might as well be alien planets. According to estimates, we’ve only explored about 5% of them so far. Crazy depths and dangerous conditions prevent humans from venturing into the unknown simply because we would be unable to survive. However, these limitations are being overcome. Drone technology can safely explore what lurks beneath the waves, and the Physical Oceanography Research Group from the Department of Geosciences at the University of Malta (UM) are doing just that.

Enter Powervision’s PowerRay Underwater Drone, an intelligent robot. It can capture real-time, high-res images beneath the sea’s surface. It has a wide-angle lens and instrumentation capable of determining temperature, sea depth, and even the presence of fish. Coupled with image processing and machine learning techniques being developed by the group, the drone maps the sea floor, determining its make-up as well as identifying locations where different fish species originate.
The small, lightweight drone can travel at up to 1.5m/s and is currently being tested off the coast of Malta near Buġibba. This area has already been mapped manually by divers, which means that, when ready, the drone and human maps can be compared to evaluate the drone’s performance. If the AI algorithm produces accurate results, it will be used to chart unmapped regions—a first for Malta.

PowerRay Underwater Drone exploring the depths

But its applications don’t end there. The drone can also be used to monitor the condition of other expensive marine instruments which spend a lot of time underwater. Without having to put on a diving suit, it allows the team to check on deployed water temperature sensors, tide gauges, and acoustic Doppler current profilers. This helps to optimise and plan maintenance, which in turn prolongs the hardware’s lifetime.

The UM team also want to use the technology to detect marine litter. They plan to identify litter ‘hotspots’ in order to raise awareness and organise clean-up campaigns—a valuable initiative to support vital efforts to clean up our oceans.

Author: Kirsty Callan