Google’s deepest dreams and nightmares


By Ryan Abela

Of all the ideas in Artificial Intelligence, neural networks have always fascinated me the most. Loosely inspired by the biology of the human brain, artificial neural networks consist of very simple mathematical functions connected to each other through a set of adjustable parameters. These networks can tackle problems ranging from solving mathematical equations to more abstract tasks such as detecting objects in a photo or recognising someone’s voice.
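
To make this concrete, here is a minimal sketch of one of those ‘very simple mathematical functions’: a single artificial neuron that weights its inputs, sums them, and squashes the result. Python and NumPy are my choice for illustration; the article names no particular tools, and the numbers below are made up.

```python
import numpy as np

def neuron(inputs, weights, bias):
    """One artificial neuron: a weighted sum passed through a squashing function."""
    z = np.dot(weights, inputs) + bias   # combine inputs via adjustable parameters
    return 1.0 / (1.0 + np.exp(-z))      # sigmoid keeps the output between 0 and 1

# Three inputs, three weights, one bias: the 'variable parameters' of this neuron.
x = np.array([0.5, -1.2, 3.0])
w = np.array([0.8, 0.1, -0.4])
print(neuron(x, w, bias=0.2))
```

A network is nothing more than thousands of these units feeding their outputs into one another; all of its apparent intelligence lives in the values of those weights.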


Artificial neural networks normally need some training before they are of any use. Say we need a neural network that can detect whether there is an apple in a photo. We could feed in thousands of different pictures, some with apples and some without, and fine-tune the parameters of the artificial neural network until it starts classifying these photos correctly.
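
A rough sketch of what ‘fine-tuning the parameters’ means in practice is shown below. This is a toy classifier, not a real apple detector: it assumes each photo has already been reduced to four numeric features, and the data and labels are entirely synthetic. The loop simply nudges the parameters in whichever direction reduces the mistakes.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical training data: each row stands in for a photo reduced to
# 4 numeric features; each label is 1 ('apple') or 0 ('not an apple').
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(float)   # synthetic stand-in for real labels

w = np.zeros(4)                              # the parameters we fine-tune
b = 0.0
lr = 0.1

for _ in range(500):
    pred = sigmoid(X @ w + b)                # the network's current guesses
    grad_w = X.T @ (pred - y) / len(y)       # how wrong, and in which direction
    grad_b = np.mean(pred - y)
    w -= lr * grad_w                         # nudge parameters toward fewer mistakes
    b -= lr * grad_b

accuracy = np.mean((sigmoid(X @ w + b) > 0.5) == y)
print(f"training accuracy: {accuracy:.2f}")
```

Real image classifiers work on raw pixels and have millions of parameters, but the principle is the same: show examples, measure the error, adjust, repeat.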

Google and Facebook use some of these techniques in their photo applications. A couple of months ago, Google released an app that can find your photos of specific objects when you search using words like ‘dog’ or ‘house’. To do this, Google built an artificial neural network trained on images of dogs, houses, and countless other objects. But here comes the fun part: later that month, some Google software engineers wrote an article about how to analyse and visualise what is going on inside the neural network.

Neural networks have been used for decades and rest on a well-understood mathematical foundation. Yet what goes on inside one is very hard to visualise, because a classification model is essentially represented by thousands of variables (connections) whose values appear quite random. In their experiment, Google’s software engineers effectively ran the network in reverse: they fed it an image of pure random noise and repeatedly adjusted that image to strengthen whatever patterns the network thought it had detected. They then took the experiment one step further by applying the same technique to an ordinary photo, passing it through the neural network a number of times so that the detected patterns grew stronger with every pass.
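
The core of the trick can be sketched in a few lines. The following is my own reconstruction using PyTorch and a pretrained torchvision VGG16 network; the article does not say which framework or model Google actually used. Starting from noise, the loop nudges the image itself, rather than the network’s parameters, so that some layer’s activations grow stronger.

```python
# A sketch of the 'run the network in reverse' idea. Assumptions: PyTorch,
# a pretrained torchvision VGG16, and an arbitrary choice of layer.
import torch
from torchvision import models

layers = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features[:20].eval()
for p in layers.parameters():
    p.requires_grad_(False)                 # the network stays fixed...

img = torch.rand(1, 3, 224, 224, requires_grad=True)   # ...the *image* is what we adjust
optimizer = torch.optim.Adam([img], lr=0.05)

for _ in range(100):
    optimizer.zero_grad()
    activations = layers(img)
    loss = -activations.norm()              # gradient ascent: amplify whatever is detected
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        img.clamp_(0.0, 1.0)                # keep pixel values in a valid range
```

Swap the random noise for a real photograph and the very same loop produces the dreamscapes described next.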

The results of these experiments amazed the whole world. Photos passed through Google’s artificial neural network produced hallucinogenic, surrealist imagery, with countless dog faces, eyes, and buildings emerging from the photo. Google named the technique Deep Dream, and now anyone can Deep Dream a photo and turn it into a dreamscape. Dalí: eat your heart out.


Deep Dream your own photo on http://deepdreamgenerator.com or with apps like http://dreamify.io. THINK magazine interns got carried away and Deep Dreamed all our cover artwork. Find them on Twitter under #ThinkDream or on Facebook at http://bit.ly/ThinkDream
