Pale Machine

Game Review by Costantino

Our idea of digital games certainly doesn’t fit Pale Machine. The latest work of Ben Esposito — a multimedia artist based in Los Angeles — comprises a physical CD with eight songs and eight wacky game experiments, one accompanying each track on the album. The title track (or game) is a sequence of absurd vignettes: first you are somehow controlling a bottle rolling on a desk. A few seconds later, you are awkwardly maneuvering a hyper-extendable tongue, which soon enough will occupy the whole screen. The game then completely changes the controls, and now you become a giant hand floating in the sky of a suburb.

It is hard to grasp, but Pale Machine is a tribute to many other works: games like WarioWare and Keita Takahashi’s Katamari Damacy and Noby Noby Boy. One can also hear echoes of Japanese electronic musician Nobukazu Takemura, as well as of the chiptune band YMCK. But the uniqueness of Pale Machine lies in its ability to join together interaction design and music composition. It provides an intense and inspiring experience, perfectly appropriate for an artistic setting.

http://bo-en.info/URLpalemachine.html


Stalking E.T.

Alessio Magro
There are over 100 billion galaxies in our universe. Each galaxy has billions of stars. Each star could have a planet. Planets can harbour life. Alessio Magro writes about his experience hunting for E.T. Illustrations by Sonya Hallett

 

In 1982, four years before I was born, the world fell in love with Spielberg’s E.T. the Extra-Terrestrial. Fifteen years later, the movie Contact, an adaptation of Carl Sagan’s novel, hit the big screen. Although at the time I was too young to appreciate its scientific, political, and religious themes, I was captivated, and it fired my imagination. I questioned whether we are alone in this vast space. What would happen if E.T. did call? Are we even listening? If so, how? And is it all a waste of time and precious money? Instead of deflating me, these questions inspired me to start a journey that led to my collaboration with SETI, the Search for Extraterrestrial Intelligence. I participated in ongoing efforts to try and find intelligent civilisations on other worlds.

The debate on whether we are alone started ages ago; it goes back at least to Thales in Ancient Greece. Only recently has technology allowed us to try and open up communication channels with any existing advanced extraterrestrial civilisations. If we do not try, we will never answer this question.

For the past fifty years we have been scanning the skies using large radio telescopes and listening for signals which cannot be generated naturally. The main assumption is that any advanced civilisation will follow a similar technological path as we did. For example, they will stumble upon radio communication as one of the first wireless technologies.

SETI searches are usually in the radio band. Large telescopes continuously scan and monitor vast patches of the sky. Radio emissions from natural sources are generally broadband, encompassing a vast stretch of the electromagnetic spectrum — waves from visible light to microwaves and X-rays — whilst virtually all human radio communication has a very narrow bandwidth, making it easy to distinguish between natural and artificial signals. Most SETI searches therefore focus on searching for narrow band signals of extraterrestrial origin.

Narrow bands are pinned down by analysing a telescope’s observing band — the frequency range it can detect. This range is broken down into millions or billions of narrow frequency channels, and every channel is searched at the same time for sharp peaks. This requires a large amount of computational power, supplied by supercomputing clusters, specialised hardware systems, or millions of desktop computers. The famous SETI@home screen-saver, launched as the millennium turned, extracted spare computing power from desktops signed up to the programme.
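
For readers who like to see the mechanics, here is a minimal sketch of that kind of channelised peak search, assuming the raw samples already sit in a NumPy array; the channel count, threshold, and injected test tone are illustrative, not the parameters of any real SETI pipeline.

```python
import numpy as np

def narrowband_search(samples, n_channels=2**20, threshold_sigma=10.0):
    """Channelise raw telescope samples with an FFT and flag narrow
    frequency bins that rise far above the surrounding noise."""
    # Keep a whole number of FFT blocks and average their power spectra;
    # averaging suppresses random noise but leaves a steady carrier standing.
    n_blocks = len(samples) // n_channels
    blocks = samples[:n_blocks * n_channels].reshape(n_blocks, n_channels)
    spectrum = (np.abs(np.fft.rfft(blocks, axis=1)) ** 2).mean(axis=0)

    # A narrowband signal shows up as a bin many sigma above the typical level.
    noise_level = np.median(spectrum)
    noise_std = spectrum.std()
    return np.where(spectrum > noise_level + threshold_sigma * noise_std)[0]

# Illustrative use: Gaussian noise with one faint injected tone.
rng = np.random.default_rng(0)
data = rng.normal(size=2**22)
data += 0.05 * np.sin(2 * np.pi * 0.1234 * np.arange(data.size))
print(narrowband_search(data))   # prints the index of the injected channel
```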

E.T. civilisations might also transmit powerful broadband pulses, so SETI could also search across wider signal frequencies. These are more difficult to tease apart from natural emissions, so they require more thorough analysis. The problem is that as broadband signals — natural or otherwise — travel through interstellar space they get dispersed: higher frequencies arrive at the telescope before lower ones, even though both were emitted at the same time. The amount of dispersion, the dispersion gradient, depends on the distance between the transmitter and receiver. The signal can only be searched after this effect is accounted for by a process called dedispersion, and to detect E.T. signals thousands of gradients have to be tried to cover all possible distances. This process is nearly identical to the one used to search for pulsars: very dense, rapidly rotating stars emitting highly energetic beams at their magnetic poles. To a telescope, a pulsar looks like a lighthouse, with a regular pulse across the entire observation band.
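
A toy version of incoherent dedispersion looks roughly like this; it uses the standard cold-plasma delay constant of about 4.15 ms·GHz², but the data layout and trial values are assumptions made for illustration, not the author’s actual pipeline.

```python
import numpy as np

K_DM = 4.15  # ms GHz^2 cm^3 / pc: the usual cold-plasma dispersion constant

def dedisperse(dynspec, freqs_ghz, dm, dt_ms):
    """Incoherent dedispersion of a dynamic spectrum.

    dynspec   : 2-D array, shape (n_channels, n_samples), power vs (frequency, time)
    freqs_ghz : centre frequency of each channel in GHz
    dm        : trial dispersion measure (pc cm^-3), a proxy for distance
    dt_ms     : sampling time in milliseconds
    """
    f_ref = freqs_ghz.max()
    # Each channel lags the highest frequency by this many milliseconds.
    delays_ms = K_DM * dm * (freqs_ghz ** -2 - f_ref ** -2)
    shifts = np.round(delays_ms / dt_ms).astype(int)

    aligned = np.empty_like(dynspec)
    for chan, shift in enumerate(shifts):
        aligned[chan] = np.roll(dynspec[chan], -shift)  # undo the delay
    return aligned.sum(axis=0)  # frequency-summed, dedispersed time series

# A search repeats this for thousands of trial DM values and keeps the
# trials whose summed time series contains an unusually strong peak.
```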

For the past four years I have been developing a specialised system which can perform all this processing in real time, meaning that any interesting signal is detected immediately. Researchers no longer need to wait for vast computers to churn through the data, which also reduces the amount of disk space needed to store it. It also allows detections to be acted on instantly, reducing the risk of losing any non-periodic, short-duration signals. To tackle the large computational requirements I used Graphics Processing Units (GPUs) — typically unleashed to work on video game graphics — since a single device can do the work of at least 10 laptops. This system can be used to study pulsars, search for big explosions across the universe, search for gravitational waves, and, of course, to stalk E.T.

The Electromagnetic Spectrum. Higher frequencies mean higher energies but shorter wavelengths. X-rays and Gamma rays are on the higher end of the spectrum making them so dangerous.

E.T. we love you

Hunting for planets orbiting other stars, known as exoplanets, has recently become a major scientific endeavour. Over 3,500 planet candidates have been found by the Kepler space telescope, of which 961 are confirmed. Finding so many planets is leading scientists to believe that the galaxy is chock-full of them. The current estimate: 100 billion in our galaxy, with at least one planet per star. For us E.T. stalkers, this is music to our ears.

Life could be considered inevitable. However, not all planets can harbour life, or at least life as we know it. Humans need liquid water and a protective atmosphere, amongst other things. Life-supporting planets need to be approximately Earth-sized and orbit within their parent star’s habitable zone. This Goldilocks zone is neither so far from the star that the planet freezes, nor so close that it fries. SETI searches target these exoplanets, performing long-duration observations of the ones most similar to Earth.

“The big question is: where do we look for E.T.? I would prefer rephrasing to: at which frequency do we listen for E.T.?”

By focusing on these planets, SETI is gambling. It misses huge portions of the sky to concentrate on areas that could turn up nothing. SETI could instead perform wide-field surveys, which sweep large chunks of the sky for any interesting signals. Recent developments in radio telescope technology allow the entire sky to be observed at once, making 24/7 SETI monitoring systems possible. Wide-field surveys lack the resolution needed to pinpoint where a signal comes from, so follow-up observations are required. In any case, a one-off signal would never be convincing.

For radio SETI searches, the big question is: where do we look for E.T.? I would prefer rephrasing to: at which frequency do we listen for E.T.? Imagine being stuck in traffic and you are searching for a good radio station without having a specific one in mind. Now imagine having trillions of channels to choose from and only one having good reception. One would probably give up, or go insane. Narrowing down the range of frequencies at which to search is one of the biggest challenges for SETI researchers.

The Universe is full of background noise from naturally occurring phenomena, much like the hiss between radio stations. Searching for artificial signals is like looking for a drop of oil in the Pacific Ocean. Fortunately, there exists a ‘window’ in the radio spectrum with a sharp noise drop, affectionately called the ‘water hole’. SETI researchers search here, reasoning that E.T. would know about this and deliberately broadcast there. Obviously, this is just guesswork and some searches use a much wider frequency range.

Two years ago we decided to perform a SETI survey. Using the Green Bank Telescope in West Virginia (USA), the world’s largest fully steerable radio dish, we scanned the same area the Kepler telescope was observing in its search for exoplanets. This area was partitioned into about 90 chunks, each of which was observed for some time. Within these areas, we also targeted 86 star systems with Earth-sized planets. We then processed around 3,000 DVDs’ worth of data to try and find signs of intelligent life. We developed the system ourselves at the University of Malta, but we came out empty-handed.

 

 

A camera shy E.T.

Should we give up? Is it the right investment of energy and resources? These questions have plagued SETI from the start. So far there is no sign of E.T., but we have made some amazing discoveries along the way.

Radio waves were discovered and entered mainstream use in the late 19th century, so we would be invisible to any civilisation more than about 100 light years away. Light (and radio) travels just under 9.5 trillion kilometres per year. Signals from Earth have only travelled 100 light years; broadcasts would take 75,000 years to reach the other side of our galaxy. To compound the problem, technological advances might soon make most radio signals obsolete. Judging by our own example, aliens would have a very small time window in which to detect earthlings. The same reasoning works the other way: E.T. might be using technologies that are too advanced for us to detect. As the author Arthur C. Clarke put it, ‘any sufficiently advanced technology is indistinguishable from magic’.
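
The numbers above are easy to sanity-check with a few lines of arithmetic (the galactic dimensions used here are rounded textbook values, not figures from the article):

```python
# Rough sanity check of the distances quoted above (rounded values throughout).
SECONDS_PER_YEAR = 365.25 * 24 * 3600
C_KM_PER_S = 299_792.458                       # speed of light
light_year_km = C_KM_PER_S * SECONDS_PER_YEAR
print(f"one light year ~ {light_year_km:.2e} km")    # ~9.46e12: 'just under 9.5 trillion'

# Radio broadcasting is roughly a century old, so our signals fill a sphere
# only about 100 light years in radius.
signal_radius_ly = 100

# The Milky Way is ~100,000 light years across and the Sun sits ~27,000 light
# years from its centre, so the far side is roughly:
far_side_ly = 100_000 / 2 + 27_000
print(f"far side of the galaxy ~ {far_side_ly:,.0f} light years away")  # ~77,000
```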

The Wow! signal is a brief, strong radio burst of unknown origin detected by the Big Ear Telescope during a SETI search in 1977. If it originated from deep space, it could be either a new astrophysical phenomenon or an alien signal.

At the end of the day, it is all a probability game, and a tough one to play. Frank Drake and Carl Sagan both tried, coming up with a number of factors that influence the chance of two civilisations communicating. One is that we live in a very old universe, over 13 billion years old, and for communication to happen the civilisations’ time windows need to overlap. Another is that the technological signatures we know how to look for might already be obsolete for advanced alien life. Add to these the assumed number of planets in the Universe and the probability of an intelligent species evolving. For each factor, several estimates have been made, and new astrophysical, planetary, and biological discoveries keep shifting the numbers, which range from a pessimistic near-zero to a universe teeming with life.
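
These factors are usually written out as the Drake equation, N = R* · fp · ne · fl · fi · fc · L. A toy calculation shows how wildly the answer swings with the guesses you feed it; the two sets of values below are purely illustrative.

```python
def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """Drake equation: expected number of detectable civilisations in the galaxy.

    R_star : rate of star formation (stars per year)
    f_p    : fraction of stars with planets
    n_e    : habitable planets per planetary system
    f_l    : fraction of those that develop life
    f_i    : fraction of those that develop intelligence
    f_c    : fraction of those that broadcast detectable signals
    L      : how long, in years, such a civilisation keeps broadcasting
    """
    return R_star * f_p * n_e * f_l * f_i * f_c * L

# Two illustrative guesses: a gloomy one and a generous one.
print(drake(1, 0.2, 1, 0.1, 0.01, 0.01, 1_000))    # ~0.002: effectively alone
print(drake(3, 1.0, 1, 0.5, 0.1, 0.1, 100_000))    # ~1,500: a crowded galaxy
```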

The problem with a life-bloated galaxy is that we have not found it. Aliens have not contacted us, despite what conspiracy theorists say. There is a fatalistic opinion that intelligent life is destined to destroy itself, while a simpler solution could be that we are just too damned far apart. The Universe is a massive place. Some human tribes have only been discovered in the last century, and by SETI standards they have been living next door the whole time. The Earth is a grain of sand in the cosmic ocean, and we have not even fully explored it yet.

“Signals from Earth have only travelled 100 light years; broadcasts would take 75,000 years to reach the other side of our galaxy”

Still, the lack of alien chatter is troubling. Theorists have come up with countless ideas to explain the lack of evidence for intelligent alien existence. The only way to solve the problem is to keep searching with an open mind. Future radio telescopes, such as the Square Kilometre Array (SKA), will allow us to scan the entire sky continuously. They require advanced systems to tackle the data deluge. I am part of a team working on the SKA and I will do my best to make this array possible. We will be stalking E.T. using our most advanced cameras, and hopefully we will catch him on tape.



The Sky’s Limits

Europe has a dream: a single European sky. By unifying its air traffic, it wants to clean up its skies and make them safer. To find out how, Dr Sedeer El-Showk interviewed researchers at the University of Malta.

Sedeer El-Showk

Every day around 30,000 aircraft take to Europe’s skies. Choreographing this airborne dance is daunting. At the moment, it is orchestrated by the disparate air traffic management systems of each European country, with control handed over at border crossings. The aeronautics research team at the University of Malta is part of an ambitious EU project to change that by establishing a single European sky, enabling EU air traffic controllers to manage increasing amounts of traffic with greater safety, lower costs, and a reduced environmental impact.

A passion for flight

Ask Prof. Ing. David Zammit-Mangion (Department of Electronic Systems Engineering, UoM) what he loves and he will reply, ‘anything that flies’. He has come a long way since his childhood dreams of flight, when he would build model aeroplanes and scamper over fences to photograph real ones. Now he leads a major research team with an important role in Clean Sky, the EU’s €1.6 billion flagship project which aims to reduce the environmental impact of air transport.

Prof. Ing. David Zammit-Mangion

The enthusiasm for flying never left Zammit-Mangion. As an adult, he eventually took to the skies himself, learning to fly during his doctoral research at Cranfield University in the UK, where he designed a cockpit instrument to monitor the take-off performance of aircraft. ‘My dream was to twin my passion with my profession,’ he said. It is a formula that has worked. Zammit-Mangion’s familiarity with commercial operations, safety procedures, and aircraft equipment has given his research an edge by enabling him to quickly estimate the cost and feasibility of different approaches. ‘When it comes to addressing problems, you need to have a very broad understanding of the whole industry,’ he says, and his hands-on industrial experience and hours logged in the cockpit have proven invaluable.

Clean Sky is central to meeting the environmental goals embedded in the vision of a unified European sky. Launched in 2008, its goal is to reduce the excess noise and greenhouse gas emissions created by aeroplanes. Air transport is responsible for around 2% of global carbon dioxide (CO2) emissions, but traffic is expected to more than double by 2030. By improving air traffic management (ATM) and aircraft technology, the 600-member Clean Sky project aims to ensure that emissions increase at a slower rate than demand.

Clearing the air

Aeroplanes currently follow flight paths through set air corridors, which can make routes unnecessarily long. They also may have to climb or descend in stages and wait in a holding pattern at their destination. These inefficient practices increase the amount of fuel used, leading to higher costs and greater greenhouse gas emissions. Each kilogram of jet fuel burned releases roughly three kilograms of CO2 into the atmosphere, along with other greenhouse gases like nitrogen oxides. This happens high in the atmosphere, where these gases end up taking part in a variety of physical and chemical processes that cause them to have a greater environmental impact than they would closer to the ground. Given that many airliners burn around 50 kg of fuel per minute, even relatively small optimisations can have a significant impact.
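
Those two figures combine into a quick back-of-the-envelope estimate; the two-hour flight below is an assumed example, not a number from the article.

```python
fuel_burn_kg_per_min = 50   # typical airliner figure quoted above
co2_per_kg_fuel = 3         # ~3 kg of CO2 per kg of jet fuel burned

co2_kg_per_min = fuel_burn_kg_per_min * co2_per_kg_fuel   # 150 kg of CO2 per minute

flight_minutes = 120        # an assumed two-hour flight
fuel_t = flight_minutes * fuel_burn_kg_per_min / 1000
co2_t = flight_minutes * co2_kg_per_min / 1000
print(f"{fuel_t:.1f} t fuel, {co2_t:.1f} t CO2")   # 6.0 t fuel, 18.0 t CO2
# Even a 1% routing saving on such a flight avoids roughly 180 kg of CO2.
```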

“Each kilogram of jet fuel burned releases three kilograms of CO2 into the atmosphere, along with other greenhouse gases like nitrogen oxides”

Improving air travel routes is not a simple task. It is what engineers call a ‘multi-criterion, multi-parameter problem’. In other words, you have to balance lots of factors, like the type and mass of the aeroplane, weather conditions, route limitations, and air traffic control constraints. At the same time, you need to optimise several competing objectives, such as fuel use, flight time, and environmental impact. Zammit-Mangion describes it as ‘a very complex mathematical problem’.

That sort of complexity might sound like a nightmare to most people, but it is just the sort of thing Ing. Kenneth Chircop thrives on. ‘My real love is for engineering mathematics,’ said Chircop. He studied engineering for his degree, but then his passion for mathematical challenges drove him to join the aeronautics research team. ‘At the end of the day, I wanted to do something heavy in mathematics again.’

As their contribution to Clean Sky, the team developed a software package called Green Aircraft Trajectories under ATM Constraints (GATAC) to help optimise flight routes. Instead of just performing a single optimisation, GATAC provides an optimisation framework which aircraft operators can use with their own models. By plugging in models of aircraft and engine performance, emissions levels, noise production, and so on, users can work out optimal trajectories to match their constraints and conditions. The core software developed at UoM incorporates various models from different research partners, but users are also free to plug in their own. Aircraft manufacturer Airbus uses GATAC with its own proprietary models. ‘It’s great to see that foreign partners look at us as equals,’ said Chircop. ‘They trust us to develop state-of-the-art technology. We have delivered, and they trust us to keep delivering. We’re really proud of that; it’s what makes us tick and want to do more.’
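
GATAC itself is a pluggable framework built around far richer models, but the core ‘multi-criterion’ idea can be illustrated with a deliberately tiny sketch: invented fuel, time, and noise models of a single cruise-speed variable, traded off with a weighted sum.

```python
import numpy as np

# Invented stand-in models of one decision variable (cruise speed, arbitrary units).
def fuel(v):  return 0.05 * v**2 + 50 / v     # burns more when very fast or very slow
def time(v):  return 100 / v                  # faster means a shorter flight
def noise(v): return 0.01 * v**1.5            # faster means louder

speeds = np.linspace(2.0, 30.0, 600)

def best_speed(w_fuel, w_time, w_noise):
    """Weighted-sum scalarisation: one point on the fuel/time/noise trade-off."""
    cost = w_fuel * fuel(speeds) + w_time * time(speeds) + w_noise * noise(speeds)
    return speeds[np.argmin(cost)]

# Sweeping the weights traces out the trade-off curve an operator can pick from.
for w_time in (0.1, 1.0, 10.0):
    v = best_speed(w_fuel=1.0, w_time=w_time, w_noise=0.5)
    print(f"time weight {w_time:>4}: cruise at speed {v:.1f}")
```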

Air traffic over Europe. Courtesy of Flightradar.com
Dr Ing. Andrew Sammut

Bringing it home

This work has brought more than just international recognition to Malta; the country will also enjoy practical benefits. Kenneth Chircop is spearheading Clean Flight — a national research project financed by the Malta Council for Science and Technology’s national research and innovation programme 2011 — to apply the lessons from Clean Sky to Maltese airspace. ‘Our impact on the national scene can be remarkable,’ said Chircop, describing the gains to be made by optimising the arrival and departure routes aeroplanes use at Malta’s airport.

As an island nation, Malta relies heavily on air traffic to connect it to the rest of the world. In 2013, Malta International Airport saw over 30,000 arrivals and departures, up from roughly 26,000 only seven years ago. Despite this, its air traffic systems need an overhaul; while the technology is state-of-the-art, some of the procedures are out of date. For example, aeroplanes arriving at and departing from an airport follow standard, published routes, called STARs (Standard Terminal Arrival Routes) and SIDs (Standard Instrument Departures) respectively, which can simplify airspace management. ‘The SIDs in Maltese airspace were designed years ago when fuel was relatively cheap, and the impact combustion made on the environment was not given due importance,’ said Chircop, ‘and we don’t even have STARs.’ Updating these procedures presented a clear opportunity to reduce fuel use and greenhouse gas emissions in Maltese airspace.

Together with their partner, Maltese aeronautics consultancy QuAero Ltd, Chircop, Zammit-Mangion, and the rest of the team analysed the flight paths taken by aircraft in Maltese airspace and discovered that they were scattered and inefficient. They developed a tool to design and analyse the best arrival and departure routes, which they used to calculate revised routes for Malta’s airport. Based on fuel savings estimates for the Boeing 737 and Airbus A320, the two most common aircraft in Maltese airspace, the new routes could save 465 tonnes of fuel for departing aircraft and 200 tonnes for arrivals every year. The fuel reductions mean less money spent and lower CO2 emissions in Maltese airspace. Not only does that directly benefit Malta’s environment, but it also offers indirect benefits by reducing the pressure on Malta’s carbon emission caps.

In addition to improving the course followed by flights, the team has helped improve climbs and descents. Planes can approach the airport in many different ways: for example, a smooth, continuous descent, a series of steps interrupted by level flight, or a close approach at full altitude followed by a quick descent. Determining which approach is optimal is a dynamic problem that has to factor in the weight of the aeroplane and its cargo, weather conditions, operational constraints, air traffic, and so forth. Current optimisation methods try to balance flight time and fuel use, but do not take the other factors into account. The Clean Flight team developed a new approach using computer algorithms which can improve the efficiency of climbs and descents in around 10 minutes on a single computer. ‘So 15 minutes before departure, for example, an air traffic controller can calculate the optimal route for the flight at the current conditions,’ said Chircop. Altogether, this work could save 1,500 tonnes of fuel every year.

Ing. Kenneth Chircop

Upwards and onwards

The sky is the limit for this aeronautics team. As Clean Sky winds to a close, the EU is preparing to launch Clean Sky 2, and the UoM team will probably continue to play a significant role in the initiative. On the national front, the optimisation system developed in the Clean Flight project will be tested in actual flight trials over the coming months – a major step forward in a field where such tests are incredibly expensive and safety is always a paramount concern. According to Chircop, it is an indication that the potential benefits are large. ‘We’re pushing to get this technology into the field so we can see it making actual gains, instead of simply on paper,’ he said. Meanwhile, the GATAC software package is already being used by key industrial players, according to Zammit-Mangion. Looking forward, it clearly has a scope beyond Clean Sky, and may even come to be used by other industries like maritime shipping, which faces similar problems.

The team is also working on a project to test unmanned aerial vehicles (UAVs) flying alongside commercial aircraft in an air traffic control environment. Although the UAV technology was developed in Italy, the Maltese team will test its operational aspects. If successful, the project could open the door to the integration of UAVs into the wider aviation community.

The aeronautics team has put Malta on the map when it comes to aviation research, a major accomplishment for a nation with no significant track record in the field until ten years ago. ‘We’re well-established and recognised in European and global research circles,’ said Zammit-Mangion, describing the team’s success. With the network of partners they have built up and the quality of the team’s research, the future is looking up.


Dr Sedeer El-Showk is a freelance science writer. He blogs at Inspiring Science and for Nature’s Scitable network.


 

Documentary on Maltese researchers by Science in the City

Decoding Language

Albert Gatt, Gordon Pace, and Mike Rosner

Maltese needs to be saved from digital extinction. Dr Albert Gatt, Prof. Gordon Pace, and Mike Rosner write about their work making digital tools for Maltese, interpreting legalese, and building a Maltese-speaking robot

In 2011 an IBM computer called Watson made the headlines after it won an American primetime television quiz called Jeopardy. Over three episodes the computer trounced two human contestants and won a million dollars.

Jeopardy taps into general world knowledge, with contestants being presented with ‘answers’ to which they have to find the right questions. For instance, one of the answers, in the category “Dialling for Dialects”, was: While Maltese borrows many words from Italian, it developed from a dialect of this Semitic language. To which Watson correctly replied with: What is Arabic?

Watson is a good example of state of the art technology that can perform intelligent data mining, sifting through huge databases of information to identify relevant nuggets. It manages to do so very efficiently by exploiting a grid architecture, which is a design that allows it to harness the power of several computer processors working in tandem.

“Maltese has been described as a language in danger of ‘digital extinction’”

This ability alone would not have been enough for it to win an American TV show watched by millions. Watson was so appealing because it used English as an American would.

Consider what it takes for a machine to understand the above query about Maltese. The TV presenter’s voice causes the air to vibrate and hit the machine’s microphones. If Watson were human, the vibrations would jiggle the hairs inside his ear, and the brain would chop up the component sounds and analyse them into words extremely rapidly. The problem for a computer is that there is more to language than just sounds and words. A human listener needs to do much more: for example, to figure out that ‘it’ in the question probably refers to ‘Maltese’ (rather than, say, ‘Italian’, which is possible though unlikely in this context). They would also need to figure out that ‘borrow’ is being used differently from when one talks about borrowing a sister’s car. After all, Maltese did not borrow words from Italian on a short-term basis. Clearly the correct interpretation of ‘borrow’ depends on the listener having identified the intended meaning of ‘Maltese’, namely, that it is a language. Watson was equipped with Automatic Speech Recognition technology to turn sound into words; the rest takes deeper linguistic processing.

To understand language any listener needs to go beyond mere sound. There are meanings and structures throughout all language levels. A human listener needs to go through them all before saying that they understood the message.

Watson was not just good at understanding; he was pretty good at speaking too. His answers were formulated in a crisp male voice that sounded quite natural, an excellent example of Text-to-Speech synthesis technology. In a fully-fledged human or machine communicating system, going from text to speech requires formulating the text of the message. The process could be thought of as the reverse of understanding, involving much the same levels of linguistic processing.

 

Machine: say ‘hello’ to Human

The above processes are all classified as Human Language Technology, which can be found everywhere: from Siri or Google Now on smartphones to word processors that can check spelling and grammar or translate.

Seamless human-machine interaction relies on language. The challenge for companies and universities is that, unlike artificial languages (such as those used to program computers or those developed by mathematicians), human languages are riddled with ambiguity. Many words and sentences have multiple meanings, and the intended sense often depends on context and on our knowledge of the world. A second problem is that we do not all speak the same language.

 

Breaking through Maltese

Maltese has been described as a language in danger of ‘digital extinction’. This was the conclusion of a report by META-NET, a European consortium of research centres focusing on language technology. The main problem is a lack of Human Language Technology — resources like word processing programs that can correctly recognise Maltese.

Designing an intelligent computer system with a language ability is far easier in some languages than it is in others. English was the main language in which most of these technologies were developed. Since researchers can combine these ready-made software components instead of developing them themselves, it allows them to focus on larger challenges, such as winning a million dollars on a TV program. In the case of smaller languages, like Maltese, the basic building blocks are still being assembled.

Perhaps the most fundamental building block for any language system is linguistic data in a form that can be processed automatically by a machine. In Human Language Technology, the first step is usually to acquire a corpus, a large repository of text or speech, in the form of books, articles, recordings, or anything else that happens to be available in the right form. Such repositories are exploited using machine-learning techniques to help systems grasp how the language is typically used. To return to the Jeopardy example, there are now programs that can resolve pronouns such as ‘it’ to identify their antecedents, the elements to which they refer. Here, the program should identify that ‘it’ refers to Maltese.
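
Real coreference resolvers are statistical systems trained on annotated corpora; the toy below only gives the flavour of the task on the Jeopardy clue, with hand-labelled mentions and a crude scoring rule invented for the example.

```python
# The clue, with two candidate antecedents for the pronoun "it".
clue = ("While Maltese borrows many words from Italian, "
        "it developed from a dialect of this Semitic language.")

# Hand-labelled mentions: (name, token position, grammatical role).
candidates = [("Maltese", 1, "subject"), ("Italian", 7, "object")]
PRONOUN_POSITION = 9   # token position of "it"

def resolve(pronoun_position):
    """Score earlier mentions: recent mentions score well, subjects score better."""
    def score(mention):
        name, position, role = mention
        salience = 10 if role == "subject" else 0   # hand-tuned for this example
        return position + salience
    earlier = [m for m in candidates if m[1] < pronoun_position]
    return max(earlier, key=score)[0]

print(resolve(PRONOUN_POSITION))   # -> "Maltese": subject preference beats recency
```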

For the Maltese language, researchers have developed a large text and speech repository, electronic lexicons (a language’s inventory of its basic units of meaning), and related tools to analyse the language (available for free). Automatic tools exist to annotate this text with basic grammatical and structural information. These tools require a lot of manual work; however, once in place, they allow for the development of sophisticated programs. The rest of this article looks at some of the ongoing research built on these basic building blocks.

 

From Legalese to Pets

Many professions benefit from automating tasks using computers. Lawyers and notaries are the next professionals who might benefit from an ongoing project at the University of Malta. These experts draft contracts on a daily basis, yet machine support is still largely limited to word processing, spell checking, and email, with no support for a deeper analysis of the contracts they write or the identification of their potential legal consequences, which arise partly through their interaction with other laws.

Contracts pose the same challenges as any other text when developing Human Language Technology resources. A saving grace is that they are written in ‘legalese’, which lessens some of the problems. Technology has advanced enough to allow tools that analyse a text and extract information about the basic elements of a contract, leaving the professional free to analyse its deeper meaning.

Deeper analysis is another big challenge in contract analysis. It is not restricted to just identifying the core ‘meaning’ or message, but needs to account for the underlying reasoning behind legal norms. Such reasoning is different from traditional logic, since it talks about how things should be as opposed to how they are. Formal logical reasoning has a long history, but researchers are still trying to identify how one can think precisely about norms and the definitions they affect. Misunderstood definitions can land a person in jail.

Consider the following problem. What if a country legislates that: ‘Every year, every person must hand in Form A on 1st January, and Form B on 2nd January, unless stopped by officials.’ Exactly at midnight between the 1st and 2nd of January, the police arrest John for not having handed in Form A. He is kept under arrest until the following day, when his case is heard in court. The prosecuting lawyer argues that John should be found guilty because, by not handing in Form A on 1st January, he has violated the law. The defence lawyer argues that, since John was under arrest throughout the 2nd of January, he was being stopped by officials from handing in Form B, absolving him of part of his legal obligation. Hence, he is innocent. Who is right? If we were to analyse the text of the law logically, which reading should be adopted? The logical reasoning behind legal documents can be complicated, which is precisely why tools are needed to support the lawyers and notaries who draft such texts.
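
One way to see why drafting tools help is to force the two readings into something executable. The sketch below is not a deontic-logic engine, just an illustration of how the scope of the ‘unless stopped by officials’ clause decides John’s fate.

```python
# John's situation: he handed in neither form; officials stopped him only on 2 January.
handed_in = {"A": False, "B": False}
stopped   = {"A": False, "B": True}

def guilty(wide_scope_exception):
    """Is John in violation of the law?

    wide_scope_exception=True  : 'unless stopped by officials' lifts the WHOLE
                                 obligation if he was stopped at any point (defence).
    wide_scope_exception=False : the exception excuses each form separately,
                                 only on the day it applies (prosecution).
    """
    if wide_scope_exception:
        if any(stopped.values()):
            return False                     # the whole obligation is excused
        return not all(handed_in.values())
    # Narrow scope: judge each form on its own.
    return any(not handed_in[f] and not stopped[f] for f in ("A", "B"))

print(guilty(wide_scope_exception=False))   # True  -> the prosecution's reading
print(guilty(wide_scope_exception=True))    # False -> the defence's reading
```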

Figuring out legal documents might seem very different to what Watson was coping with. But there is an important link: both involve understanding natural language (normal every day language) for something, be it computer, robot, or software, to do something specific. Analysing contracts is different because the knowledge required involves reasoning. So we are trying to wed recent advances in Human Language Technology with advances in formal logical reasoning.

Illustration by Sonya Hallett

Contract drafting can be supported in many ways, from a simple cross-referencing facility, enabling an author to identify links between a contract and existing laws, to identifying conflicts within the legal text. Since contracts are written in natural language, linguistic analysis is vital to properly analyse a text. For example, in a rental contract, a clause about keeping dogs would need a cross-reference to legislation about pet ownership.

We (the authors) are developing tools that integrate with word processors to help lawyers or notaries draft contracts. Results are presented as recommendations rather than automated changes, keeping the lawyer or notary in control.

 

Robots ’R’ Us

So far we have only discussed how language is analysed and produced. Of course, humans are not simply language-producing engines; a large amount of human communication involves body language. We use gestures to enhance communication — for example, to point to things or mime actions as we speak — and facial expressions to show emotions. Watson may be very clever indeed, but is still a disembodied voice. Imagine taking it home to meet the parents.

“Robby the Robot from the 1956 film Forbidden Planet, refused to obey a human’s orders”

Robotics is forging strong links with Human Language Technology. Robots can provide bodies for disembodied voices, allowing them to communicate in a more human-like manner.

Robots have captured the public imagination since the beginning of science fiction. Robby the Robot from the 1956 film Forbidden Planet, for example, refused to obey a human’s orders, a key plot element, because they conflicted with ‘the three laws of robotics’ laid down by Isaac Asimov in 1942. These imaginary robots are not only human-shaped; they think and even make value judgements.

Actual robots tend to be more mundane. Industry uses them to cut costs and improve reliability. For example, the Unimate Puma, which was designed in 1963, is a robotic arm used by General Motors to assemble cars.

The Unimate Puma 200
The Unimate Puma 200

The Puma became popular because of its programmable memory, which allowed quick and cheap reconfiguration to handle different tasks. But the basic design could not cope with unanticipated changes, which inevitably ended in failure. Current research is closing the gap between Robby and Puma.

Opinions may be divided on the exact nature of robots, but three main qualities define one: a physical body; the capability for complex, autonomous action; and the ability to communicate. Very roughly, advances in robotics push along these three highly intertwined axes.

At the UoM we are working on research that pushes forward all three, though it might take some time before we construct a Robby 2. We are developing languages for communicating with robots that are natural for humans to use, but are not as complex as natural languages like Maltese. Naturalness is a hard notion to pin down. But we can judge that one thing is more or less natural than another. For example, the language of logic is highly unnatural, while using a restricted form of Maltese would be more natural. It could be restricted in its vocabulary and grammar to make it easier for a robot to handle.

Take the language of a Lego EV3 Mindstorms robot and imagine a three-instruction program: the first instruction starts its motors, the second waits until light intensity drops below a specific amount, the third stops. The reference to light intensity is not a natural way to communicate with a robot; when we talk to people, we are not expected to phrase what we say in terms of their hardware. What the program is really telling the robot is: move forward until you reach a black line. Unlike the literal version, this more natural one employs concepts at a much higher level and hence is accessible to anybody with a grasp of English.
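
Spelled out, the literal three-instruction program has roughly this shape. The helper functions are hypothetical stand-ins for whatever EV3 library the robot actually runs, and the threshold is an arbitrary example value.

```python
# The literal, hardware-level version of "move forward until you reach a black line".
# start_motors(), reflected_light() and stop_motors() are hypothetical stand-ins
# for whatever EV3 library the robot actually runs.
BLACK_THRESHOLD = 15   # reflected light intensity (percent) treated as "black"

def follow_instruction(start_motors, reflected_light, stop_motors):
    start_motors()                               # 1. start the motors
    while reflected_light() > BLACK_THRESHOLD:   # 2. wait until light intensity drops
        pass
    stop_motors()                                # 3. stop

# A tiny simulation so the sketch runs without a robot:
readings = iter([80, 78, 60, 40, 12])
follow_instruction(start_motors=lambda: print("motors on"),
                   reflected_light=lambda: next(readings),
                   stop_motors=lambda: print("motors off"))
```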

The first step is to develop programs that translate commands spoken by people into underlying machine instructions understood by robots. These commands will typically describe complex physical actions that are carried out in physical space. Robots need to be equipped with the linguistic abilities necessary to understand these commands, so that we can tell a robot something like ‘when you reach the door near the table go through it’.

To develop a robot that can understand this command, a team with a diverse skillset is needed: language, translation, the robot’s design, its ability to move, and AI (Artificial Intelligence) all need to work together. The robot must turn language into action. It must know that it needs to go through the door, not through the table, and that it should first perceive the door and then move through it. A problem arises if the door is closed, so the robot must know what a door is used for, how to open and close it, and what the consequences are. For this it needs reasoning ability and the necessary physical coordination. Opening a door might seem simple, but it involves complex hand movements and just the right grip. Robots need to master complex behaviours and movements to operate in the real world.

The point is that a robot that can understand these commands is very different to the Puma. To build it we must first solve the problem of understanding the part of natural language dealing with spatially located tasks. In so doing the robot becomes a little bit more human.

A longer-term aim is to engage the robot in two-way conversation and have it report on its observations — as Princess Leia did with R2-D2 in Star Wars, if R2-D2 could speak.

Lego Mindstorms EV3 brick

Language for the World

Human Language Technologies are already changing the world, from automated announcements at airports, to smartphones that speak back to us, to automatic translation on demand. They help humans interact with machines and with each other. But the revolution has only just begun. We are beginning to see programs that link language with reasoning, and as robots become mentally and physically more adept, the need to talk to them as partners will become ever more urgent. There are still a lot of hurdles to overcome.

To make the right advances, language experts will need to work with engineers and ICT experts. Then having won another million bucks on a TV show, a future Watson will get up, shake the host’s hand, and maybe give a cheeky wink to the camera.

Attack of The Friday Monsters: A Tokyo Tale

Game Review by Costantino

Not a 50-hour blockbuster, not a 30-second casual game: Attack of The Friday Monsters is an experiment with a new, middle-sized format. The game presents a day in the life of an 8-year-old kid. The oneiric, nostalgic storyline is a masterfully paced, intense adventure that feels just right.

Downloadable from the Nintendo 3DS eShop, the game is set in a ‘70s Japanese town, where our hero Sohta and his family have just moved in. Told from the kid’s perspective, the events are open to interpretation: apparently, Godzilla-like monsters attack every Friday. On the same day, a TV show also packed with monsters is produced and aired in town. What is the secret behind these attacks? And is there a connection between fact and fiction?

Don’t expect to engage in massive monster fights in Attack of The Friday Monsters. The game focuses on talking with villagers, meeting new friends, and strolling around a beautiful countryside town. It really makes you feel like a kid again, encouraging a relaxed kind of roleplay.

At €7.99, Attack of The Friday Monsters proves that digital downloads can be a great way to introduce audiences to new formats and concepts. It offers a poetic take on games.

Will robots take over the world?

Unlikely, at least for the next 100 years. Academics and sci-fi writers take three rough approaches. We will become one with the bots by integrating computers into our bodies, achieving the next stage of evolution. Or robots will become so powerful so quickly that we’ll become their slaves, helpless to stop them — think The Matrix. Or robots face certain technological hurdles that will take ages to overcome.

Let’s analyse those hurdles. Computing power: no problem. Manufacturing expense: no problem. Artificial intelligence: could take decades, but we are already mapping and replicating the human brain in computers. Energy: very difficult, since powering such energy-hungry devices on the move needs batteries or portable generation that have a long way to go. The desire to enslave humanity: would require Asimov’s trick or a mad computer scientist to programme it into the bot’s code. Conclusion: unlikely, sleep easy tonight.

INDIE GAMES

How can a video game ask questions about life, art, and frustration? Giuliana Barbaro-Sant met up with Dr Pippin Barr to talk about his game adaptation of Marina Abramović’s artwork The Artist is Present.

In each creative act, a personal price is paid. When the project you have been working on so hard falls to pieces because of funding, it is hard to accept its demise. The feeling of failure, betrayal, and loneliness is an easy trap to fall into. This is the independent game maker’s industry: a bloodthirsty world rife with competition, sucking pockets dry from the very beginning of the creative process.

Maltese game makers face a harsher reality. Not all game makers are lucky enough to make it to the finish line, publish, and make good money; in fact, most never do. Yet, if and when they get there, it is often thanks to the passion and dedication they put into their creation — together with the continuous support of others.

Dr Pippin Barr has always had a passion for making things, be it playing with blocks or doodling. His time lecturing at the Center for Computer Game Research at the IT University of Copenhagen, together with his recent team-up with the newly opened Institute of Digital Games at the University of Malta, has given that passion yet another form: Pippin makes games. At the Museum of Modern Art in New York he exhibited his best-known work: the game rendition of Marina Abramović’s The Artist is Present. He thought of the idea while planning lectures about how artists invoke emotions through laborious means in their artworks. In The Artist is Present, Marina Abramović sits still in front of hundreds of thousands of people and simply stares into their eyes for as long as each participant desires.

There is more to this performance than meets the eye. Beyond the simplistic façade, Barr saw real depth. Through eye contact, the artist and audience forge a unique connection. All barriers drop, and human emotion flows with a rawness that games are so ‘awful’ at embodying. Yet, paradoxically, there is a military discipline in the preparation behind the performance that games embrace only too well. Not only does the artist have to physically programme herself to withstand over 700 hours of performing, but audience members also prepare for the experience in their own way, disciplining themselves as they patiently wait for their turn.

“It’s a pretty lonely road and it can be tough when you’re stuck with yourself”

Pippin Barr

‘Good research is, after all, creative,’ according to Pippin Barr. By combining his academic background with his creative impulse, he made an art game — a marriage between art and video games. These are games about games, which test their values and limits. Barr relishes the very idea of questioning the way things work. His self-reflexive games serve as a platform for him to call into question life’s so-called certainties, in a way that is powerful enough to strike a chord in both himself and the player. He is looking to create a deep emotional resonance, which gives the player a chance to ‘get’ the game through a unique personal experience. Sometimes, players write about his games and capture what Pippin Barr was thinking about, as he put it, ‘better than I could myself’, or read deeper than his own thoughts.

As far as gameplay goes, The Artist is Present is fairly easy to manoeuvre in. The look is fully pixellated yet captures the ambience of the Museum. The first screen places the player in front of its doors, and you are only allowed in if you are playing during the actual exhibition’s opening hours in America; until then, there is no option but to wait until around 4:30 pm our time (GMT+1). The frustration keeps building: once inside, you still have to wait behind a long queue of strangers to experience the performance work. This mirrors the real-world participants who had to wait to experience The Artist is Present. If they were lucky, they sat in front of the artist and gazed at her for as long as they wanted.

Interestingly, Marina Abramović also played the game. She told Barr about how she was kicked out of the queue when she tried to catch a quick lunch in the real world as she was queuing in the digital one. Very unlucky, but the trick is to keep the game tab open. Other than that, good luck!

Despite that little hiccup, Abramović did not give up on the concept of digitalising the experience of her art. After The Artist is Present, Barr and Abramović set forth on a new quest: the making of the Digital Marina Abramović Institute. Released last October, it has proven to be a great challenge for those who cannot help but switch windows to check up on their Facebook notifications – not only are the instructions in a scrolling marquee, but you have to keep pressing the Shift button on your keyboard to prove you are awake and aware of what is happening in the game. It is the same kind of awareness that is expected out of the physical experience of the real-life Institute.

The quirkiness of Barr’s games reflects their creator. Besides The Artist is Present, in Let’s Play: Ancient Greek Punishment he adapted Greek myths such as that of Sisyphus to experiment with the frustration of not being rewarded. In Mumble Indie Bungle, he toyed with the cultural background of indie game bundles by creating ‘terrible’ versions with ‘misheard titles’ (and so, ‘misheard’ game concepts) of renowned indie games. One of his 2013 projects is an iPhone game called Snek, an adaptation of the good old Nokia 3310 Snake. In his version, Pippin Barr turns the smooth, ‘naturally’ perfect touch interface of the device on its head by using the gyroscope instead. The interaction with the Apple device becomes thoroughly awkward, as the player has to move around very unnaturally to meet the requirements of the game.

This dedicated passion for challenging boundaries ultimately drives creators and artists alike to step out of their comfort zone and make things. These things challenge the way society thinks and its value systems. Game making is no exception, especially for independent developers. An artist yearns for the satisfaction that comes with following a creative impulse and succeeding. In Barr’s case, being ‘part of the movement to expand game boundaries and show players (and ourselves) that the possibilities for what might be “allowed” in games is extremely broad.’

Accomplishing so much, against the culture industry’s odds, is a great triumph for most indie developers. For Pippin Barr, the real moment of success is when the game is finished and is being played. Then he knows that someone sat with the game and actually had an experience — maybe even ‘got it’.

 

Follow Pippin Barr on Twitter: @pippinbarr or on: www.pippinbarr.com

Giuliana Barbaro-Sant is part of the Department of English Master of Arts programme.

An Intelligent Pill

Carl Azzopardi
Doctors regularly need to use endoscopes to take a peek inside patients and see what is wrong. Their current tools are pretty uncomfortable. Biomedical engineer Ing. Carl Azzopardi writes about a new technology that would involve just swallowing a capsule.

Michael* lay anxiously in his bed, looking up at his hospital room ceiling. ‘Any minute now’, he thought, as he nervously waited for his parents and doctor to return. Michael had been suffering from abdominal pain and cramps for quite some time, and the doctors could not figure it out through simple examinations. He could not take it any more. His parents had taken him to a gut specialist, a gastroenterologist, who, after asking a few questions, had simply suggested an ‘endoscopy’ to find out what was wrong. Being new to this, Michael had immediately gone home to look it up. The search results did not thrill him.

The word ‘endoscope’ derives from the Greek words ‘endo’, inside, and ‘scope’, to view. Simply put, it means looking inside our body using instruments called endoscopes. In 1804, Phillip Bozzini created the first such device. The Lichtleiter, or light conductor, used hollow tubes to reflect light from a candle (or sunlight) into bodily openings — rudimentary.

Modern endoscopes are light years ahead. Constructed from sleek, black polyurethane elastomers, they consist of a flexible ‘tube’ with a camera at the tip. The tube is flexible to let it wind through our internal piping, optical fibres shine light inside our bodies, and a hollow channel allows forceps or other instruments to be used during the procedure. Two of the more common types of flexible endoscope used nowadays are gastroscopes and colonoscopes, which examine the stomach and colon. As expected, they are inserted through your mouth or rectum.

Michael was not comforted by such advancements. He was not enticed by the idea of having a flexible tube passed through his mouth or colon. The door suddenly opened. Michael jerked his head towards the entrance to see his smiling parents enter. Accompanying them was his doctor holding a small capsule. As he handed it over to Michael, he explained what he was about to give him.

Enter capsule endoscopy. Invented in 2000 by an Israeli company, the procedure is simple: the patient just needs to swallow a small capsule. That is it. The patient can go home; the capsule does all the work automatically.

The capsule is equipped with a miniature camera, a battery, and some LEDs. It travels through the patient’s gut, and while on its journey it snaps around four to thirty-five images every second, transmitting them wirelessly to a receiver strapped around the patient’s waist. Eventually the patient passes the capsule out, and on his or her next visit to the hospital the doctor can download all the images saved on the receiver.

The capsule sounds like simplicity itself. No black tubes going down patients’ internal organs, no anxiety. Unfortunately, the capsule is not perfect.

“The patient just needs to swallow a small capsule. That is it. The patient can go home, the capsule does all the work automatically”

First of all, capsule endoscopy cannot replace flexible endoscopes. Doctors can only use the capsules to diagnose a patient: they can see the pictures and figure out what is wrong, but the capsule has no forceps to take samples for analysis in a lab. Flexible endoscopes can also have cauterising probes passed through their hollow channels, which use heat to burn off dangerous growths; the capsule has no such means. These features make gastroscopies and colonoscopies the ‘gold standard’ for examining the gut. One glaring limitation remains, however: flexible endoscopes cannot reach the small intestine, which lies squarely in the middle between the stomach and colon. Capsule endoscopy can examine this part of the digestive tract.

A second issue with capsules is that they cannot be driven around. Capsules have no motors. They tend to go along for the ride with your own bodily movements. The capsule could be pointing in the wrong direction and miss a cancerous growth. So, the next generation of capsules are equipped with two cameras. This minimises the problem but does not solve it completely.

The physical size of the pill makes these limitations hard to overcome; engineers are finding it tricky to squeeze in mechanisms for sampling, treatment, or motion control. On the other hand, solutions to a third problem do exist. This difficulty relates to too much information. The capsule captures around 432,000 images over the 8 hours it snaps away, and the doctor then needs to go through nearly all of them to spot the problematic few. It is a daunting task that takes a lot of time, increases costs, and makes it easier to miss signs of disease.

A smart solution lies in looking at image content. Not all images are useful. A large majority are snapshots of the stomach uselessly churning away, or else of the colon, far down from the site of interest. Doctors usually use capsule endoscopy to check out the small intestine. Medical imaging techniques come in handy at this point to distinguish between the different organs. Over the last year, the Centre for Biomedical Cybernetics (University of Malta) has carried out collaborative research with Cardiff University and Saint James Hospital to develop software which gives doctors just what they need.

Following some discussions between these clinicians and engineers, it quickly became clear that images of the stomach and large intestine were of little use for this kind of screening.

Identifying the boundaries of the small intestine and extracting just those images would simplify and speed up screening. The doctor would just look at these images, discarding the rest.

Engineers Carl Azzopardi, Kenneth Camilleri, and Yulia Hicks developed a computer algorithm that could first and foremost tell the difference between digestive organs. An algorithm is a bit of code that performs a specific task, like calculating employees’ paychecks. In this case, the custom program developed uses image-processing techniques to examine certain features of each image, such as colour and texture, and then uses these to determine which organ the capsule is in.

Take colours, for instance. The stomach has a largely pinkish hue, the small intestine leans towards yellowish tones, while the colon (unsurprisingly, perhaps) changes to a murky green. Such differences can be used to classify the different organs. Additionally, to sort quickly through thousands of images, the information in each needs to be compacted; a carefully chosen histogram amplifies the differences in colour while compressing the data. These steps make the image processing easier and quicker.
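
As a rough illustration of the colour side of such a feature, the sketch below compresses a frame into a small, normalised colour histogram; the bin count and the coarse 3-D RGB histogram are illustrative choices, not the exact histogram used in the study.

```python
import numpy as np

def colour_feature(frame_rgb, bins=8):
    """Compress a capsule frame into a small, normalised colour histogram.

    frame_rgb: H x W x 3 uint8 array. Returns a vector of length bins**3.
    """
    pixels = frame_rgb.reshape(-1, 3).astype(float)
    hist, _ = np.histogramdd(pixels, bins=(bins, bins, bins), range=((0, 256),) * 3)
    hist = hist.ravel()
    return hist / hist.sum()       # normalise so frame size does not matter

# Illustrative use: a fake pinkish 'stomach' frame versus a yellowish one.
rng = np.random.default_rng(1)
pinkish = rng.normal([200, 140, 150], 10, (64, 64, 3)).clip(0, 255).astype(np.uint8)
yellowish = rng.normal([190, 170, 90], 10, (64, 64, 3)).clip(0, 255).astype(np.uint8)
print(np.abs(colour_feature(pinkish) - colour_feature(yellowish)).sum())  # clearly non-zero
```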

Texture is another unique organ quality. The small intestine is covered with small finger-like projections called villi. The projections increase the surface area of the organ, improving nutrient absorption into the blood stream. These villi give a particular ‘velvet-like’ texture to the images, and this texture can be singled out using a technique called Local Binary Patterns. This works by comparing each pixel’s intensity to its neighbours’, to determine whether these are larger or smaller in value than its own. For each pixel, a final number is then worked out which gauges whether an edge is present or not (see image).
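
A minimal NumPy version of the basic 8-neighbour pattern just described might look like this (library implementations, for instance in scikit-image, add interpolation and rotation-invariant variants):

```python
import numpy as np

def lbp(gray):
    """Basic 3x3 Local Binary Patterns: compare the 8 neighbours of every
    interior pixel with its centre value and pack the 0/1 flags into an
    8-bit code per pixel."""
    g = gray.astype(int)
    centre = g[1:-1, 1:-1]
    # Offsets of the 8 neighbours, walked clockwise from the top-left corner.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(centre)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        codes |= (neighbour >= centre).astype(int) << bit
    return codes

def lbp_histogram(gray):
    """The texture descriptor: a normalised histogram of the per-pixel codes."""
    hist = np.bincount(lbp(gray).ravel(), minlength=256).astype(float)
    return hist / hist.sum()
```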

Classification is the last and most important step in the whole process. At this point the software needs to decide whether an image belongs to the stomach, small intestine, or large intestine. To identify images automatically, the program is trained to link the features described above with the different organ types by being shown a small subset of images, known as the training set. Once trained, the software can automatically classify new images from different patients on its own. The software developed by the biomedical engineers was tested using colour alone, texture alone, and then both features together; factoring in both gave the best results.
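
The final step can be sketched with an off-the-shelf classifier; the choice of a support vector machine here is illustrative, and the feature functions are the ones from the sketches above, not the team’s actual code.

```python
import numpy as np
from sklearn.svm import SVC

# Assumes colour_feature() and lbp_histogram() from the sketches above, plus a
# small hand-labelled training set: 0 = stomach, 1 = small intestine, 2 = colon.
def features(frame_rgb):
    gray = frame_rgb.mean(axis=2).astype(np.uint8)
    return np.concatenate([colour_feature(frame_rgb), lbp_histogram(gray)])

def train(frames, labels):
    X = np.array([features(f) for f in frames])
    return SVC(kernel="rbf").fit(X, labels)

def classify(model, new_frames):
    X = np.array([features(f) for f in new_frames])
    return model.predict(X)   # one organ label per frame; organ boundaries fall
                              # where the predicted label changes
```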

“The software is still at the research stage. That research needs to be turned into a software package for a hospital’s day-to-day examinations” 

Dr Yulia Hicks
Prof. Ing. Kenneth Camilleri

After the images have been labeled, the algorithm can draw the boundaries between digestive organs. With the boundaries in place, the specialist can focus on the small intestine. At the press of a button countless hours and cash are saved.

 

The software is still at the research stage. That research needs to be turned into a software package for a hospital’s day-to-day examinations. In the future, the algorithm could even be embedded directly in the capsule. An intelligent capsule would be born, creating a recording process capable of adapting to the needs of the doctor: it would show them just what they want to see.

Ideally the doctor would have it even easier with the software highlighting diseased areas automatically. The researchers at the University of Malta want to start automatically detecting abnormal conditions and pathologies within the digestive tract. For the specialist, it cannot get better than this.

The result? A shorter and more efficient screening process that could turn capsule endoscopy into an easily accessible and routine examination. Shorter specialist screening times would bring down costs in the private sector and lessen the burden on public health systems. Michael would not need to worry any longer; he’d just pop a pill.

* Michael is a fictitious character


The author thanks Prof. Thomas Attard and Joe Garzia. The research work is funded by the Strategic Educational Pathways Scholarship (Malta). The scholarship is part-financed by the European Union — European Social Fund (ESF) under Operational Programme II — Cohesion Policy  2007–2013, ‘Empowering People for More Jobs and a Better Quality of Life’

Time to buy a smart watch?

Tech Review

Just a few years back, mobile phones could make and receive calls, store a few numbers, and that was it. Over the last few years, phones have grown ‘smarter’: they can surf the web, take photos, keep up to date with Facebook and Twitter, play games and music, read books, and much more.
Many argue that our watches are next in line for such a transformation, and considering the excitement brought about by the recent announcement of Samsung’s smartwatch, the Galaxy Gear, few will argue against that. Samsung is not the only player vying for the potentially big returns of smartwatches. Another heavyweight in the technology business, Sony, has been on board for a few years and has just announced its SmartWatch 2.
Many small start-ups have also joined the furore, delivering watches such as the Pebble, the Martian Passport, the Kreyos Meteor, the Wimm One, the Strata Stealth, and the rather unimaginatively named I’m watch.
All these smartwatches provide basic features such as instant notifications of incoming calls, SMSes, Facebook updates, and tweets through a Bluetooth connection with a paired phone. They often also allow mail reading and music control.
With so many players and no clear winner, the technology still needs to mature. Sony and Samsung use colour LED-based displays, whose drawbacks are poor visibility in direct sunlight and a weak one-day battery life. Others use electronic ink, the same screen technology as e-readers, with excellent visibility and much improved battery life, though sadly only in black and white or limited colour.

User interaction also varies. While the Pebble and the Meteor favour a button-based interface, all other players utilise touch and voice control.
The differences do not stop there. Not all watches are waterproof – and do you really want to be taking off your watch every time you wash your hands? Also, some watches, like the I’m watch, provide a platform for app development, with new apps available for download every day.
One big player is still missing. Rumours of Apple’s imminent entry into the smartwatch business have been circulating for a couple of years.
While guessing Apple’s watch name is easy (the iWatch), the technology has been kept under wraps. As with other Apple products, their watch will not be first to market. Are they again waiting for the technology to evolve enough to bring out another game changer like the iPod, the iPhone, and more recently, the iPad? Only time will tell.

My biggest problem with any smartwatch available is that none seem truly ‘smart’. Smartwatches seem like little dumb accessories to their smart big brothers — the phones. I am waiting for a watch to become smart enough to replace my phone before jumping on the smartwatch bandwagon.