To Do or Not to Do a PhD? – Book Review

Are you debating whether or not to start a PhD? Have you just started and are already overwhelmed? You might be asking yourself, why am I putting myself through this? Don’t panic! We got in touch with Dr Sarah Cuschieri, author of To Do or Not to Do a PhD, to ask that very question.


ZOOMing out


The COVID-19 pandemic has had a profound impact on the way we lead our lives. The widespread global adoption of remote workplaces and classrooms has introduced us to a new way of life. The question is whether this new norm will persist in the years following the pandemic. To answer it, David Mizzi takes a look at the nature of work and the raison d’être of pursuing tertiary education.


Let’s not panic about our teens just yet

The benefits of research can be lost if we amplify only one argument in a nuanced, complex topic. How youth interact with social media is as complex as it gets. Dr Velislava Hillman, director and senior researcher at the Data, Media, and Society Research Centre, Malta, writes.

The COVID-19 pandemic kept many children and teenagers at home, with parents struggling to recreate routine. Yet with or without public health risks, teenagers’ social media use was shrouded in moral panic and gloom. Mainstream media headlines do not help; take ‘Social Media Creates ‘Instant Loneliness’ for teenagers’ and ‘Loneliness: An Epidemic In The Making?’. All too often, research and policy look at risks separately from opportunities.

In Malta, this division happens often. Run-of-the-mill surveys bring out numbers without context. Left in the hands of hungry news writers, these numbers can lead to wild interpretations that raise unnecessary fear in readers. The truth is that there is no clear evidence of any causal relationship between loneliness and social media use. Young people – and many adults too – do feel social or emotional loneliness, but the real reasons remain elusive. To give a more balanced view of supposedly social-media-induced loneliness among teenagers, here are five questions to ask before allowing any concern to seep in.


Firstly, what’s the evidence? Comparing the findings of a quantitative study on loneliness carried out by the Faculty for Social Wellbeing at the University of Malta with its coverage in the mainstream media, the gap is striking. There is no solid proof that teenagers suffer ‘instant’ loneliness, let alone that social media causes it. The study found that loneliness tends to particularly affect older people with lower education, unemployed and retired individuals, and those living alone (a bit of a giveaway), among other factors. A person’s risk of loneliness, the study summarises, ‘is reduced if they: form part of a younger age group; are highly educated; are in employment; are of a single marital status; live with their parent(s) or guardian(s)…’ etc.

A third of the teenagers (ages 11–19) who took part in this survey said they experienced some sort of loneliness (with no connection to social media whatsoever). The survey (a method that has its own limitations) included 115 teenagers in total. Because the research instrument uses a single, unidimensional measure of overall loneliness, it prevents researchers from understanding why the survey participants responded as they did – responses which may reflect temporary bias (e.g. unique life events, having a stomach ache, or answering right after a fight with a friend).

The second question is: who is interpreting the results (researchers, journalists, parents, NGOs who need funding to carry out their work)? Mainstream media covered similar studies in the past (e.g. studies on youth and online gaming), as they make a compelling read even when the evidence is inconclusive. But while the intention may be to create awareness, stoking moral panic will not provide the support that is needed in these situations.

The third question to consider: when does feeling lonely become problematic? A headline such as ‘Loneliness: An Epidemic in the Making?’ sounds as though feeling lonely is somehow wrong. The referenced study by the Faculty for Social Wellbeing highlights that it is OK to feel lonely. And while presenting the number of people who said they feel ‘moderately lonely’, the findings make no claims about the cause or duration of that feeling.

Fourthly, are social media users seen as passive consumers or as complex individuals? Alarmist headlines (as quoted above) and book titles (like Jean Twenge’s iGen: Why Today’s Super-Connected Kids Are Growing Up Less Rebellious, More Tolerant, Less Happy or Adam Alter’s Irresistible: The Rise of Addictive Technology and the Business of Keeping Us Hooked) create a dangerous bandwagon. The media has been heavily criticised for construing children and young people as a passive audience of media messages, carried away by content that adults somehow seem immune to. However, children have their own moral compass; they detect liars like no other device can, and show resilience when faced with adversity. Examples galore: from Pocahontas to Malala and Taylor Swift (with her support for LGBTQ rights and speaking up against sexual harassment).

Of course, accepting youth as ‘tech savvy’ is another extreme to avoid. The point is not to segregate audiences by grouping them as ‘addicted’, ‘digital natives’, or ‘lonely’, but to present all the evidence with its accompanying limitations and drawbacks, and to emphasise the nuances that exist among usage patterns, perspectives, and individuals.


Finally, what’s the point of creating moral panics? NGOs and mainstream media make every effort to raise awareness of existing problems and to improve society. This is great. However, such work also relies on external funding – for selling shrinking newspapers, for running educational and support programmes, and for conducting further research. Amplifying complex issues that are far from clear-cut feeds that same dangerous bandwagon. Generalising and turning survey responses into sensationalised headlines is never productive.

An average family will never spend a whole day reading academic work to understand what exactly has been discovered. All institutional actors, media, and stakeholders concerned about young people’s wellbeing should ensure a full display of the existing evidence and interpret it in a balanced way. And again, there is no clear evidence of a causal relationship between loneliness and social media use – the tool is not inherently harmful.

What should we do in these unprecedented times?

‘Isolation, physical distancing, the closure of schools and workplaces are challenges that affect us, and it is natural to feel stress, anxiety, fear and loneliness at this time,’ pointed out Hans Kluge, the World Health Organisation’s Regional Director for Europe. Instead of adding to the anxiety and fears about screen time, let’s use the COVID-19 pandemic to explore the beneficial use of social and digital media. Some tips:

  • Enable discussions with young ones; learn together about the access to and spread of misinformation (false or misleading information shared without intent to harm) and disinformation (false information spread deliberately to mislead and harm).
  • Find strategies for fact-checking and finding good quality information.
  • Connect with others and provide space for children and youths to enjoy their usual friendships, albeit digitally.
  • Listen to them with less judgement and critique. Instead, learn how they feel and what they use their digital technologies for.

Further Reading

Azzopardi, A. (2019). Loneliness: An epidemic in the making? Malta Independent.

Clark, M., Azzopardi, A., & Bonnici, J. (2019). The Prevalence of Loneliness in Malta: A nationally representative study of the Maltese population. The Faculty for Social Wellbeing, University of Malta. 

Conneely, V. (2020). Social media creating ‘instant loneliness’ for teenagers. Times of Malta.

Malala Fund | Working for a world where all girls can learn and lead. Malala.org. (2013). 

Pocahontas: Beyond the Myth. Smithsonian Channel. (2020). 

Zacharek, S., Dockterman, E., & Edwards, H. (2017). TIME Person of the Year 2017: The Silence Breakers. Time.com. 

What’s your Face Worth?

AI and Facial Recognition

While most European citizens remain wary of AI and Facial Recognition, Maltese citizens do not seem to grasp the repercussions of such technology. Artificial Intelligence expert Prof. Alexiei Dingli (University of Malta) returns to THINK to share his insights.

The camera sweeps across a crowd of people, locates the face of a possible suspect, isolates, and analyses it. Within seconds the police apprehend the suspect through the capricious powers of Facial Recognition technology and Artificial Intelligence (AI).

A recent survey by the European Union’s agency for fundamental rights revealed how European citizens felt about this technology. Half of the Maltese population would be willing to share their facial image with a public entity, which is surprising given that, on average, only 17% of Europeans felt comfortable with this practice. Is there a reason for Malta’s disproportionate willingness? Artificial Intelligence expert Prof. Alexiei Dingli (University of Malta) returns to THINK to share his insights.

Facial Recognition uses biometric data to map people’s faces from a photograph or video (biometric data refers to human characteristics such as fingerprints, gait, voice, and facial patterns). AI is then used to match that data to the right person by comparing it against a database. The technology is now advanced enough to scan a large gathering and identify suspects against police department records.
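
To make that matching step concrete, here is a minimal, purely illustrative sketch in Python. It assumes each face has already been converted into a numeric ‘embedding’ by some recognition model; the embeddings, record names, and threshold below are made up for illustration and do not describe any real system discussed in this article.

```python
# Illustrative sketch only: a toy version of the matching step described above.
# Real systems use deep neural networks to turn a face image into a numeric
# "embedding"; here the embeddings and the watch-list are random placeholders.
import numpy as np

def cosine_similarity(a, b):
    """How alike two face embeddings are (1.0 means identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def best_match(probe, database, threshold=0.8):
    """Compare one captured face against every record in the database and
    return the closest hit if it clears the (assumed) decision threshold."""
    scores = {name: cosine_similarity(probe, emb) for name, emb in database.items()}
    name, score = max(scores.items(), key=lambda item: item[1])
    return (name, score) if score >= threshold else (None, score)

# Hypothetical data: 128-dimensional embeddings, a common size for face models.
rng = np.random.default_rng(0)
database = {"record_a": rng.normal(size=128), "record_b": rng.normal(size=128)}
probe = database["record_a"] + rng.normal(scale=0.05, size=128)  # a noisy new capture

print(best_match(probe, database))  # expected: ('record_a', ~0.99)
```

Even in this toy setup, the threshold is the crux: set it too low and unrelated faces start to ‘match’, which is exactly the kind of error discussed later in this article.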

Data is the new Oil

Facial Recognition and AI have countless uses. They could help prevent crime and find missing persons. They can unlock your phone, analyse and influence your consumption habits, and even track attendance in schools to ensure children are safe. But shouldn’t there be a limit? Do people really want their faces used by advertisers? Or by the government, to learn about their flirtation with an opposing political party? In essence, by giving up this information, will our lives become better?

‘Legislation demands that you are informed,’ points out Dingli. Biometric data can identify you, meaning that it falls under GDPR. People cannot snap pictures of others without their consent; private data cannot be used without permission. Dingli goes on to explain that ‘while shops are using it [Facial Recognition Technology] for security purposes, we have to ask whether this data can lead to further abuses. You should be informed that your data is being collected, why it is being collected, and whether you consent or not. Everyone has a right to privacy.’

Large corporations rely on their audiences’ data. They tailor their ad campaigns based on this data to maximise sales. Marketers need this data, from your Facebook interests to the tracking cookies on websites. ‘It’s no surprise then,’ laughs Dingli, ‘that data is the new oil.’

The EU’s survey also found that participants are less inclined to share their data with private companies than with government entities. Dingli speculates that ‘a government is something which we elect; this tends to give it more credibility than, say, a private company. The Facebook-Cambridge Analytica data breach scandal of 2018 is another possible variable.’

China has embraced Facial Recognition far more than the Western world. Millions of cameras are used to establish individual citizens’ ‘social scores’. If someone litters, their score is reduced. The practice is controversial and raises the issue of errors. Algorithms can mistake one citizen for another. While an error rate in the single digits might not seem like a large margin, even a measly 1% error rate can prove catastrophic for mismatched individuals. A hypothetical 1% error rate in China, with a population of over 1.3 billion, would mean roughly 13 million Chinese citizens mismatched.
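
As a back-of-the-envelope check of that figure (both numbers are illustrative assumptions, not measured values; real-world error rates vary widely by system, image quality, and conditions):

```python
# Back-of-the-envelope check of the mismatch figure quoted above.
# Both numbers are illustrative assumptions, not measured values.
population = 1_300_000_000   # roughly China's population
error_rate = 0.01            # the hypothetical 1% mismatch rate
print(f"{population * error_rate:,.0f} people potentially mismatched")  # 13,000,000
```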

Is privacy necessary?

‘I am convinced that we do not understand our rights,’ Prof. Dingli asserts. ‘We do not really value our privacy, and we find it easy to share our data.’ Social media platforms like Facebook made their way into our daily lives without people understanding how they work. The same can be said for AI and facial recognition: they have already seeped into our lives, and many of us are already using them, completely unaware. But the question is, how can we guarantee that AI is designed and used responsibly?

Dingli smiles, ‘How can you guarantee that a knife is used responsibly? AI, just like knives, is used by everybody. The problem is that many of us don’t even know we are using AI. We need to educate people. Currently, our knowledge of AI is formed through Hollywood movies. All it takes is a bit more awareness for people to realise that they are using AI right here and now.’

Everyone has a right to privacy, and corporations are morally bound to respect that right; individuals, in turn, are responsible for the way they treat their own data. A knife, just like data, is a tool. It can be used for both good and ill. We are responsible for how we use these tools.

To Regulate or Not to Regulate?

Our data might not be tangible, but it is a highly valued commodity. Careless handling of our data, whether through cyberattacks or our own inattention, can lead to identity theft. While the technology behind AI and Facial Recognition is highly advanced, it is far from perfect and still prone to error. The misuse of AI can also endanger human rights by manipulating groups of people through the dissemination of disinformation.

Regulating AI is one possibility; it would establish technical standards and could protect consumers. However, it may also stifle research. Given that AI is a horizontal field of study, disciplines as varied as architecture and medicine would have to consider the implications of a future with restricted use. An alternative to regulation is the creation of ethical frameworks, which would enable researchers to continue expanding AI’s capabilities within moral boundaries. These boundaries would include respecting the rights of participants and drawing a line at research that could be used to cause physical or emotional harm or damage to property.

While the debate regarding regulation rages on, we need to take a closer look at what lies within our control. We cannot control where AI and Facial Recognition technology will take us, but we can control whom we share our data with. Will we entrust it to an ethical actor who will use it to better humanity, or to the unscrupulous, whose only concern is profit?

Further Reading:

The Facebook-Cambridge Analytica data breach involved millions of Facebook users’ data being harvested without their consent by Cambridge Analytica and later used for political advertising:

Chan, R. (2019). The Cambridge Analytica whistleblower explains how the firm used Facebook data to sway elections. Business Insider. Retrieved 8 July 2020, from https://www.businessinsider.com/cambridge-analytica-whistleblower-christopher-wylie-facebook-data-2019-10.

Malta’s Ethical AI Framework: Parliamentary Secretariat for Financial Services, Digital Economy and Innovation. (2019). Malta Towards Trustworthy AI. Malta’s Ethical AI Framework. Malta.AI. Retrieved 8 July 2020, from https://malta.ai/wp-content/uploads/2019/10/Malta_Towards_Ethical_and_Trustworthy_AI_vFINAL.pdf