Decoding Language

Albert Gatt, Gordon Pace, Mike Rosner

Maltese needs to be saved from digital extinction. Dr Albert Gatt, Prof. Gordon Pace, and Mike Rosner write about their work building digital tools for Maltese, interpreting legalese, and making a Maltese-speaking robot

In 2011 an IBM computer called Watson made the headlines after it won an American primetime television quiz called Jeopardy. Over three episodes the computer trounced two human contestants and won a million dollars.

Jeopardy taps into general world knowledge, with contestants being presented with ‘answers’ to which they have to find the right questions. For instance, one of the answers, in the category “Dialling for Dialects”, was: While Maltese borrows many words from Italian, it developed from a dialect of this Semitic language. To which Watson correctly replied with: What is Arabic?

Watson is a good example of state-of-the-art technology that can perform intelligent data mining, sifting through huge databases of information to identify relevant nuggets. It manages to do so very efficiently by exploiting a grid architecture, which is a design that allows it to harness the power of several computer processors working in tandem.

“Maltese has been described as a language in danger of ‘digital extinction’”

This ability alone would not have been enough for it to win an American TV show watched by millions. Watson was so appealing because it used English as an American would.

Consider what it takes for a machine to understand the above query about Maltese. The TV presenter’s voice would cause the air to vibrate and hit the machine’s microphones. If Watson were human, the vibrations would jiggle the hairs inside its ear, and the brain would chop up the component sounds and analyse them into words extremely rapidly; Watson was equipped with Automatic Speech Recognition technology to do exactly that. The problem is that there is more to language than just sounds and words. A listener needs to do much more. For example, to figure out that ‘it’ in the question probably refers to ‘Maltese’ (rather than, say, ‘Italian’, which is possible though unlikely in this context). They would also need to figure out that ‘borrow’ is being used differently than when one speaks of borrowing one’s sister’s car; after all, Maltese did not borrow words from Italian on a short-term basis. Clearly, the correct interpretation of ‘borrow’ depends on the listener having identified the intended meaning of ‘Maltese’, namely, that it is a language.

To understand language, any listener needs to go beyond mere sound: there are meanings and structures at every level of language, and a listener needs to work through them all before they can claim to have understood the message.

Watson was not just good at understanding; he was pretty good at speaking too. His answers were formulated in a crisp male voice that sounded quite natural, an excellent example of Text-to-Speech synthesis technology. In a fully-fledged communication system, human or machine, going from thought to speech first requires formulating the text of the message. The process can be thought of as the reverse of understanding, involving much the same levels of linguistic processing.

 

Machine: say ‘hello’ to Human

The above processes are all classified as Human Language Technology, which can be found in many devices: from Siri or Google Now in smartphones to word processing programs that can check spelling and grammar, or translate.

For human-machine interaction to become seamless, machines need language. The challenge for companies and universities is that, unlike artificial languages (such as those used to program computers or those developed by mathematicians), human languages are riddled with ambiguity. Many words and sentences have multiple meanings, and the intended sense often depends on context and on our knowledge of the world. A second problem is that we do not all speak the same language.

 

Breaking through Maltese

Maltese has been described as a language in danger of ‘digital extinction’. This was the conclusion of a report by META-NET, a European consortium of research centres focusing on language technology. The main problem is a lack of Human Language Technology — resources like word processing programs that can correctly recognise Maltese.

Designing an intelligent computer system with a language ability is far easier in some languages than in others. English was the main language in which most of these technologies were developed. Because researchers can combine ready-made software components instead of developing them from scratch, they can focus on larger challenges, such as winning a million dollars on a TV programme. In the case of smaller languages, like Maltese, the basic building blocks are still being assembled.

Perhaps the most fundamental building block for any language system is linguistic data in a form that can be processed automatically by a machine. In Human Language Technology, the first step is usually to acquire a corpus, a large repository of text or speech, in the form of books, articles, recordings, or anything else that happens to be available in the correct form. Such repositories are exploited using machine-learning techniques, to help systems grasp how the language is typically used. To return to the Jeopardy example, there are now programs that can resolve pronouns such as ‘it’ to identify their antecedents, the element to which they refer. The program should identify that ‘it’ refers to Maltese.
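As a toy illustration (not the actual resolver used in these tools), one classic heuristic is ‘subject preference’: a pronoun tends to refer back to the grammatical subject of the preceding clause. A minimal sketch in Python, with a hand-picked candidate list:

```python
# Toy pronoun resolver using the 'subject preference' heuristic: return
# the first candidate noun before the pronoun, i.e. the likely subject
# of the preceding clause. The candidate list is hand-picked here; real
# systems learn candidates and preferences from annotated corpora.

def resolve_pronoun(words, pronoun="it", candidates=("Maltese", "Italian")):
    for word in words:
        if word == pronoun:
            break
        if word in candidates:
            return word          # first candidate found = clause subject
    return None

clue = "While Maltese borrows many words from Italian , it developed from a dialect".split()
print(resolve_pronoun(clue))  # Maltese
```

A recency-based heuristic would wrongly pick ‘Italian’ here, which is why real systems weigh several cues at once.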

For the Maltese language, researchers have developed a large text and speech repository, electronic lexicons (a language’s inventory of its basic units of meaning), and related tools to analyse the language, all available for free. Automatic tools exist to annotate this text with basic grammatical and structural information. These tools require a lot of manual work; however, once in place, they allow for the development of sophisticated programs. The rest of this article looks at some of the ongoing research using these basic building blocks.

 

From Legalese to Pets

Many professions benefit from automating tasks using computers. Lawyers and notaries are the next professionals who might benefit, thanks to an ongoing project at the University of Malta. These experts draft contracts daily, yet machine support is still largely limited to word processing, spell checking, and email, with no support for a deeper analysis of the contracts they write or for identifying their potential legal consequences, which arise partly through their interaction with other laws.

Contracts pose the same challenges as any other text when developing Human Language Technology resources. A saving grace is that they are written in ‘legalese’, which lessens some of the problems. Technology has advanced enough to allow tools that analyse a text and extract information about the basic elements of contracts, leaving the professional free to analyse their deeper meaning.

That deeper analysis is the other big challenge. It is not restricted to identifying the core ‘meaning’ or message, but needs to account for the underlying reasoning behind legal norms. Such reasoning is different from traditional logic, since it talks about how things should be as opposed to how they are. Formal logical reasoning has a long history, but researchers are still trying to work out how to think precisely about norms and the definitions they rest on; a misunderstood definition can land a person in jail.

Consider the following problem. What if a country legislates that: ‘Every year, every person must hand in Form A on 1st January, and Form B on 2nd January, unless stopped by officials.’ Exactly at midnight between the 1st and 2nd of January the police arrest John for not having handed in Form A. He is kept under arrest until the following day, when his case is heard in court. The prosecuting lawyer argues that John should be found guilty because, by not handing in Form A on 1st January, he has violated the law. The defence lawyer argues that, since John was under arrest throughout the 2nd of January, he was being stopped by officials from handing in Form B, absolving him of part of his legal obligation. Hence, he is innocent. Who is right? If we were to analyse the text of the law logically, which reading should be adopted? The logical reasoning behind legal documents can be complicated, which is precisely why tools are needed to support the lawyers and notaries who draft such texts.
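To see how the ambiguity can be pinned down, here is a minimal sketch (our own toy model, not the project’s actual formalism) that encodes John’s case and checks guilt under two readings of ‘unless stopped by officials’: one where the exception excuses each duty separately, and one where it excuses the obligation as a whole.

```python
# Toy model of John's case. We record what was filed and when he was
# stopped by officials, then evaluate guilt under both scope readings.

handed_in = {"A": False, "B": False}          # John filed neither form
stopped = {1: False, 2: True}                 # under arrest on 2nd January

def violated(form, day):
    """A duty is violated if the form was not filed and no excuse applies."""
    return not handed_in[form] and not stopped[day]

# Reading 1: the exception excuses each duty separately.
guilty_narrow = violated("A", 1) or violated("B", 2)

# Reading 2: the exception excuses the obligation as a whole, so being
# stopped on either day discharges both duties.
excused = any(stopped.values())
guilty_wide = (not handed_in["A"] or not handed_in["B"]) and not excused

print(guilty_narrow, guilty_wide)  # True False
```

The two readings disagree on John’s guilt, which is exactly the kind of conflict a drafting tool should flag.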

Figuring out legal documents might seem very different from what Watson was coping with. But there is an important link: both involve understanding natural language (normal, everyday language) so that something, be it computer, robot, or software, can do something specific. Analysing contracts is different because the knowledge required involves reasoning. So we are trying to wed recent advances in Human Language Technology with advances in formal logical reasoning.

Illustration by Sonya Hallett

Contract drafting can be supported in many ways, from a simple cross-referencing facility, enabling an author to identify links between a contract and existing laws, to identifying conflicts within the legal text. Since contracts are written in a natural language, linguistic analysis is vital to properly analyse a text. For example, a clause in a rental contract about keeping dogs would need to cross-reference legislation about pet ownership.
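A hypothetical sketch of the simplest such facility: scan a clause for topic keywords and suggest legislation to cross-reference. The keyword index and law titles below are invented for illustration.

```python
# Toy cross-referencing index: topic keywords -> relevant legislation.
# Both the keywords and the law titles are invented for this sketch.

LAWS = {
    "dog": "Pet Ownership Regulations",
    "pet": "Pet Ownership Regulations",
    "deposit": "Tenancy Deposit Rules",
}

def cross_references(clause):
    """Return the laws a clause should cross-reference, based on keywords."""
    words = clause.lower().replace(",", "").split()
    return sorted({law for word, law in LAWS.items() if word in words})

clause = "The tenant may keep one dog on the premises"
print(cross_references(clause))  # ['Pet Ownership Regulations']
```

A real tool would of course need linguistic analysis rather than bare keywords, precisely because contracts are written in natural language.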

We (the authors) are developing tools that integrate with word processors to help lawyers or notaries draft contracts. Results are presented as recommendations rather than automated changes, keeping the lawyer or notary in control.

 

Robots ’R’ Us

So far we have only discussed how language is analysed and produced. Of course, humans are not simply language-producing engines; a large amount of human communication involves body language. We use gestures to enhance communication — for example, to point to things or mime actions as we speak — and facial expressions to show emotions. Watson may be very clever indeed, but is still a disembodied voice. Imagine taking it home to meet the parents.

“Robby the Robot from the 1956 film Forbidden Planet, refused to obey a human’s orders”

Robotics is forging strong links with Human Language Technology. Robots can provide bodies for disembodied sounds allowing them to communicate in a more human-like manner.

Robots have captured the public imagination since the beginning of science fiction. For example, Robby the Robot from the 1956 film Forbidden Planet refused to obey a human’s orders, a key plot element: he disobeyed because they conflicted with ‘the three laws of robotics’, as laid down by Isaac Asimov in 1942. These imaginary robots are not only human-shaped and anthropomorphic; they think and even make value judgements.

Actual robots tend to be more mundane. Industry uses them to cut costs and improve reliability. For example, the Unimate Puma, which was designed in 1963, is a robotic arm used by General Motors to assemble cars.

The Unimate Puma 200

The Puma became popular because of its programmable memory, which allowed quick and cheap reconfiguration for different tasks. But the basic design could not cope with unanticipated changes, which inevitably ended in failure. Current research is closing the gap between Robby and the Puma.

Opinions may be divided on the exact nature of robots, but three main qualities define one: one, a physical body; two, the capacity for complex, autonomous action; and three, the ability to communicate. Very roughly, advances in robotics push along these three highly intertwined axes.

At the University of Malta we are working on research that pushes forward all three, though it might take some time before we construct a Robby 2. We are developing languages for communicating with robots that are natural for humans to use, but not as complex as natural languages like Maltese. Naturalness is a hard notion to pin down, but we can judge one thing to be more or less natural than another. For example, the language of logic is highly unnatural, while a restricted form of Maltese would be more natural: restricted in vocabulary and grammar to make it easier for a robot to handle.

Take the language of a Lego EV3 Mindstorms robot and imagine a three-instruction program: the first instruction starts its motors, the second waits until light intensity drops below a specific amount, and the third stops. The reference to light intensity is not a natural way to communicate with a robot; when we talk to people, we are not expected to know how our words relate to their internal hardware. What the program is really telling the robot is: move forward until you reach a black line. Unlike the literal version, this more natural one employs concepts at a much higher level and is hence accessible to anybody with a grasp of English.
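The three-instruction program can be sketched as follows, run against a tiny simulated robot rather than a real EV3 so the example is self-contained; the floor is just a list of light-intensity readings the robot drives over.

```python
# Simulated run of the three-instruction EV3 program described above.
# This is a toy model, not the actual EV3 instruction set.

def run_program(floor, threshold=20):
    """Start motors; wait until light intensity drops below threshold; stop."""
    position = 0
    motors_on = True                          # instruction 1: start motors
    while motors_on and position < len(floor) - 1:
        position += 1                         # robot rolls forward one step
        if floor[position] < threshold:       # instruction 2: wait for dark
            motors_on = False                 # instruction 3: stop
    return position

# Bright floor (readings ~80) with a black line (reading 5) at index 4.
floor = [80, 82, 79, 81, 5, 80]
print(run_program(floor))  # 4: the robot stops on the black line
```

The natural-language command ‘move forward until you reach a black line’ compiles down, in effect, to exactly this loop over sensor readings.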

The first step is to develop programs that translate commands spoken by people into underlying machine instructions understood by robots. These commands will typically describe complex physical actions that are carried out in physical space. Robots need to be equipped with the linguistic abilities necessary to understand these commands, so that we can tell a robot something like ‘when you reach the door near the table go through it’.

To develop a robot that can understand this command, a team with a diverse skillset is needed: language, translation, the robot’s design, its ability to move, and AI (Artificial Intelligence) all need to work together. The robot must turn language into action. It must know that it needs to go through the door, not through the table, and that it should first perceive the door and then move through it. A problem arises if the door is closed: the robot must know what a door is used for, how to open and close it, and what the consequences are. For this it needs reasoning ability and the necessary physical coordination. Opening a door might seem simple, but it involves complex hand movements and just the right grip. Robots need such complex behaviours and movements to operate in the real world.

The point is that a robot that can understand these commands is very different from the Puma. To build it, we must first solve the problem of understanding the part of natural language dealing with spatially located tasks. In so doing, the robot becomes a little more human.

A longer-term aim is to engage the robot in two-way conversation and have it report on its observations — as Princess Leia did with R2-D2 in Star Wars, if R2-D2 could speak.

Lego Mindstorms EV3 brick

Language for the World

Human Language Technologies are already changing the world, from automated announcements at airports, to smartphones that speak back to us, to automatic translation on demand. They help humans interact with machines and with each other. But the revolution has only just begun: we are beginning to see programs that link language with reasoning, and as robots become mentally and physically more adept, the need to talk with them as partners will become ever more urgent. There are still many hurdles to overcome.

To make the right advances, language experts will need to work with engineers and ICT experts. Then, having won another million bucks on a TV show, a future Watson will get up, shake the host’s hand, and maybe give a cheeky wink to the camera.

Attack of The Friday Monsters: A Tokyo Tale

Game Review: Costantino

Not a 50-hour blockbuster, not a 30-second casual game: Attack of The Friday Monsters is an experiment with a new, middle-sized format. The game presents a day in the life of an 8-year-old kid. The oneiric, nostalgic storyline is a masterfully paced, intense adventure that feels just right.

Downloadable from the Nintendo 3DS eShop, the game is set in a ‘70s Japanese town, where our hero Sohta and his family have just moved in. Told from the kid’s perspective, the events are open to interpretation: apparently, Godzilla-like monsters attack every Friday. On the same day, a TV show also packed with monsters is produced and aired in town. What is the secret behind these attacks? And is there a connection between fact and fiction?

Don’t expect to engage in massive monster fights in Attack of The Friday Monsters. The game focuses on talking with villagers, meeting new friends, and strolling in a beautiful countryside town. It really makes you feel like a kid again, encouraging a relaxed kind of roleplay.

At €7.99, Attack of The Friday Monsters proves that digital downloads can be a great way to introduce audiences to new formats and concepts. It offers a poetic take on games.

Will robots take over the world?

Unlikely, at least for the next 100 years. Academics and sci-fi writers take three rough approaches. We will become one with the bots, integrating computers into our bodies and achieving the next stage of evolution. Or robots will become so powerful so quickly that we’ll become their slaves, helpless to stop them — think The Matrix. Or robots face certain technological hurdles that will take ages to overcome.

Let’s analyse those hurdles. Computing power: no problem. Manufacturing expense: no problem. Artificial intelligence: could take decades, but we are already mapping and replicating the human brain through computers. Energy: very difficult to power such energy-hungry devices in a mobile way; battery or portable energy generation has a long way to go. The desire to enslave humanity: would require Asimov’s trick or a mad computer scientist to programme it into the bot’s code. Conclusion: unlikely, sleep easy tonight.

INDIE GAMES

How can a video game ask questions about life, art, and frustration? Giuliana Barbaro-Sant met up with Dr Pippin Barr, who told us about his game adaptation of Marina Abramović’s artwork The Artist is Present.

In each creative act, a personal price is paid. When the project you have been working on so hard falls to pieces because of funding, it is hard to accept its demise. The feeling of failure, betrayal, and loneliness is an easy trap to fall into. This is the independent game maker’s industry: a bloodthirsty world rife with competition, sucking pockets dry from the very beginning of the creative process.

Maltese game makers face a harsher reality. Few game makers are lucky enough to make it to the finish line, publish, and make good money; most rarely do. Yet, if and when they get there, it is often thanks to the passion and dedication they put into their creation — together with the continuous support of others.

Dr Pippin Barr always had a passion for making things, be it playing with blocks or doodling. His time lecturing at the Center for Computer Game Research at the IT University of Copenhagen, together with his recent team-up with the newly opened Institute of Digital Games at the University of Malta, only served to channel this passion into a new form: Pippin makes games. At the Museum of Modern Art in New York he exhibited his best-known work: the game rendition of Marina Abramović’s The Artist is Present. He thought of the idea while planning lectures about how artists invoke emotions through laborious means in their artworks. In The Artist is Present, artist Marina Abramović sat still in front of hundreds of thousands of people and just stared into their eyes for as long as each participant desired.

There is more to this performance than meets the eye. Beyond the simplistic façade, Barr saw real depth. Through eye contact, the artist and audience forge a unique connection. All barriers drop, and human emotion flows with a great rawness that games are so ‘awful’ at embodying. Yet, paradoxically, there is a military discipline in the preparation behind the performance that games embrace only too well. Not only does the artist have to physically programme herself to withstand over 700 hours’ worth of performing, but the audience also prepares for the experience in their own way, disciplining themselves as they patiently wait for their turn.

“It’s a pretty lonely road and it can be tough when you’re stuck with yourself”

Pippin Barr

‘Good research is, after all, creative,’ according to Pippin Barr. By combining his academic background with his creative impulse, he made an art game — a marriage between art and video games. These are games about games, which test their values and limits. Barr relishes the very idea of questioning the way things work. His self-reflexive games serve as a platform for him to call into question life’s so-called certainties, in a way that is powerful enough to strike a chord in both himself and the player. He is looking to create a deep emotional resonance, which gives the player a chance to ‘get’ the game through a unique personal experience. Sometimes, players write about his games and capture what Pippin Barr was thinking about, as he put it, ‘better than I could myself’, or read deeper than his own thoughts.

As far as gameplay goes, The Artist is Present is fairly easy to manoeuvre. The look is fully pixellated yet captures the ambience of the Museum. The first screen places the player in front of its doors, and you are only allowed in if you are playing during the actual exhibition’s opening hours in America. Until then, there is no option but to wait till around 4:30 pm our time (GMT+1). The frustration keeps building: after entering, you still have to wait behind a long queue of strangers to experience the performance. This mirrors real-world participants, who had to wait to experience The Artist is Present. If they were lucky, they sat in front of the artist and gazed at her for as long as they wanted.

Interestingly, Marina Abramović also played the game. She told Barr about how she was kicked out of the queue when she tried to catch a quick lunch in the real world as she was queuing in the digital one. Very unlucky, but the trick is to keep the game tab open. Other than that, good luck!

Despite that little hiccup, Abramović did not give up on the concept of digitalising the experience of her art. After The Artist is Present, Barr and Abramović set forth on a new quest: the making of the Digital Marina Abramović Institute. Released last October, it has proven to be a great challenge for those who cannot help but switch windows to check up on their Facebook notifications – not only are the instructions in a scrolling marquee, but you have to keep pressing the Shift button on your keyboard to prove you are awake and aware of what is happening in the game. It is the same kind of awareness that is expected out of the physical experience of the real-life Institute.

The quirkiness of Barr’s games reflects their creator. Besides The Artist is Present, in Let’s Play: Ancient Greek Punishment he adapted the Greek myth of Sisyphus to experiment with the frustration of never being rewarded. In Mumble Indie Bungle, he toyed with the cultural background of indie game bundles by creating ‘terrible’ versions of renowned indie games with ‘misheard titles’ (and so ‘misheard’ game concepts). One of his 2013 projects was an iPhone game called Snek, an adaptation of the good old Nokia 3310 Snake. In his version, Barr turned the smooth, ‘naturally’ perfect touch interface of the device on its head by using the gyroscope: interaction with the Apple device becomes thoroughly awkward, as the player has to move around very unnaturally to meet the requirements of the game.

This dedicated passion for challenging boundaries ultimately drives creators and artists alike to step out of their comfort zone and make things. These things challenge the way society thinks and its value systems. Game making is no exception, especially for independent developers. An artist yearns for the satisfaction that comes with following a creative impulse and succeeding. In Barr’s case, that means being ‘part of the movement to expand game boundaries and show players (and ourselves) that the possibilities for what might be “allowed” in games is extremely broad.’

Accomplishing so much, against the culture industry’s odds, is a great triumph for most indie developers. For Pippin Barr, the real moment of success is when the game is finished and is being played. Then he knows that someone sat with the game and actually had an experience — maybe even ‘got it’.

 

Follow Pippin Barr on Twitter: @pippinbarr or on: www.pippinbarr.com

Giuliana Barbaro-Sant is part of the Department of English Master of Arts programme.

An Intelligent Pill

Doctors regularly need to use endoscopes to take a peek inside patients and see what is wrong. Their current tools are pretty uncomfortable. Biomedical engineer Ing. Carl Azzopardi writes about a new technology that would involve just swallowing a capsule.

Michael* lay anxiously in his bed, looking up at his hospital room ceiling. ‘Any minute now’, he thought, as he nervously awaited his parents and doctor to return. Michael had been suffering from abdominal pain and cramps for quite some time, and the doctors could not figure it out through simple examinations. He could not take it any more. His parents had taken him to a gut specialist, a gastroenterologist, who, after asking a few questions, had simply suggested an ‘endoscopy’ to examine what was wrong. Being new to this, Michael had immediately gone home to look it up. The search results did not thrill him.

The word ‘endoscope’ derives from the Greek ‘endo’ (inside) and ‘scope’ (to view). Simply put, endoscopy means looking inside our body using instruments called endoscopes. In 1804, Phillip Bozzini created the first such device: the Lichtleiter, or light conductor, used hollow tubes to reflect light from a candle (or sunlight) into bodily openings — rudimentary, but it worked.

Modern endoscopes are light years ahead. Constructed out of sleek, black polyurethane elastomers, they consist of a flexible ‘tube’ with a camera at the tip. The tubes are flexible to let them wind through our internal piping, optical fibres shine light inside our bodies, and a hollow channel allows forceps or other instruments to be used during the procedure. Two of the more common types of flexible endoscope used nowadays are gastroscopes and colonoscopes, used to examine the stomach and colon. As expected, they are inserted through your mouth or rectum.

Michael was not comforted by such advancements. He was not enticed by the idea of having a flexible tube passed through his mouth or colon. The door suddenly opened. Michael jerked his head towards the entrance to see his smiling parents enter. Accompanying them was his doctor, holding a small capsule. As he handed it over, the doctor explained what he was giving him.

Enter capsule endoscopy. Invented in 2000 by an Israeli company, the procedure is simple: the patient just needs to swallow a small capsule. That is it. The patient can go home; the capsule does all the work automatically.

The capsule is equipped with a miniature camera, a battery, and some LEDs. As it travels through the patient’s gut, it snaps around four to thirty-five images every second and transmits them wirelessly to a receiver strapped around the patient’s waist. Eventually the patient passes the capsule out, and on his or her next visit to the hospital the doctor can download all the images saved on the receiver.

The capsule sounds like simplicity itself. No black tubes going down patients’ internal organs, no anxiety. Unfortunately, the capsule is not perfect.

“The patient just needs to swallow a small capsule. That is it. The patient can go home, the capsule does all the work automatically”

First of all, capsule endoscopy cannot replace flexible endoscopes. Doctors can only use the capsules to diagnose a patient: they can see the pictures and figure out what is wrong, but the capsule has no forceps to take samples for analysis in a lab. Flexible endoscopes can also have cauterising probes passed through their hollow channels, which use heat to burn off dangerous growths; the capsule has no such means. These features make gastroscopies and colonoscopies the ‘gold standard’ for examining the gut. One glaring limitation remains: flexible endoscopes cannot reach the small intestine, which lies squarely in the middle between the stomach and colon. Capsule endoscopy can examine this part of the digestive tract.

A second issue is that capsules cannot be steered. They have no motors, and simply go along for the ride with the body’s own movements. The capsule could be pointing in the wrong direction and miss a cancerous growth. The next generation of capsules is equipped with two cameras, which minimises the problem but does not solve it completely.

The physical size of the pill makes these limitations hard to overcome: engineers are finding it tricky to fit in mechanisms for sampling, treatment, or motion control. On the other hand, solutions to a third problem do exist. This difficulty relates to too much information. The capsule captures around 432,000 images over the 8 hours it snaps away, and the doctor then needs to go through nearly all of them to spot the problematic few: a daunting task that uses up a lot of time, increases costs, and makes it easier to miss signs of disease.

A smart solution lies in looking at image content. Not all images are useful. A large majority are snapshots of the stomach uselessly churning away, or else of the colon, far down from the site of interest. Doctors usually use capsule endoscopy to check out the small intestine. Medical imaging techniques come in handy at this point to distinguish between the different organs. Over the last year, the Centre for Biomedical Cybernetics (University of Malta) has carried out collaborative research with Cardiff University and Saint James Hospital to develop software which gives doctors just what they need.

Following discussions, these clinicians and engineers quickly realised that images of the stomach and large intestine were mostly useless for capsule endoscopy screening. Identifying the boundaries of the small intestine and extracting just those images would simplify and speed up screening: the doctor would look only at these images, discarding the rest.

Engineers Carl Azzopardi, Kenneth Camilleri, and Yulia Hicks developed a computer algorithm that can, first and foremost, tell the difference between digestive organs. An algorithm is a bit of code that performs a specific task, like calculating employees’ paycheques. In this case, the custom program uses image-processing techniques to examine features of each image, such as colour and texture, and then uses these to determine which organ the capsule is in.

Take colour, for instance. The stomach has a largely pinkish hue, the small intestine leans towards yellowish tones, while the colon (unsurprisingly perhaps) changes to a murky green. Such differences can be used to tell the organs apart. Additionally, to sort quickly through thousands of images, each image needs to be compacted: a specific histogram is used to amplify differences in colour and compress the information, making the processing easier and quicker for the algorithm.
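As a simplified sketch of the colour cue (the real system uses compressed colour histograms rather than the toy rules below), each frame can be reduced to its mean red, green, and blue values and matched against the organ hues just described:

```python
# Toy colour-based organ classifier. A frame is a list of (R, G, B)
# pixels; we reduce it to its mean channel values and apply hand-written
# hue rules. The thresholds here are invented for illustration.

def mean_rgb(pixels):
    n = len(pixels)
    return tuple(sum(p[c] for p in pixels) / n for c in range(3))

def classify_by_colour(pixels):
    r, g, b = mean_rgb(pixels)
    if r > g > b and r - g > 40:
        return "stomach"            # pinkish: red clearly dominates
    if r > g > b:
        return "small intestine"    # yellowish: red and green close
    return "colon"                  # greenish/murky otherwise

pink_frame = [(210, 130, 120)] * 4
yellow_frame = [(190, 170, 90)] * 4
green_frame = [(90, 140, 80)] * 4
print([classify_by_colour(f) for f in (pink_frame, yellow_frame, green_frame)])
```

Colour alone is not enough in practice, which is where texture comes in.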

Texture is another quality unique to each organ. The small intestine is covered with small finger-like projections called villi, which increase the surface area of the organ, improving nutrient absorption into the blood stream. The villi give the images a particular ‘velvet-like’ texture, which can be singled out using a technique called Local Binary Patterns. This works by comparing each pixel’s intensity to its neighbours’, to determine whether each is larger or smaller in value than its own. For each pixel, a final number is then worked out which gauges whether an edge is present or not (see image).
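A minimal version of the Local Binary Pattern computation for a single pixel, assuming the conventional clockwise neighbour ordering:

```python
# Local Binary Pattern for one pixel: compare the centre to its eight
# neighbours, clockwise from the top-left, and pack the larger/smaller
# answers into one 8-bit code. Histograms of these codes over an image
# summarise its texture.

def lbp_code(image, row, col):
    centre = image[row][col]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dr, dc) in enumerate(offsets):
        if image[row + dr][col + dc] >= centre:   # neighbour at least as bright?
            code |= 1 << bit
    return code

# A vertical edge: dark column on the left, bright column on the right.
patch = [[10, 50, 90],
         [10, 50, 90],
         [10, 50, 90]]
print(lbp_code(patch, 1, 1))  # 62
```

The code 62 (binary 00111110) records that only the top, right-hand, and bottom neighbours are at least as bright as the centre, capturing the edge.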

Classification is the last and most important step in the whole process. At this point the software needs to decide whether an image belongs to the stomach, small intestine, or large intestine. To make this automatic, the program is trained on a small subset of images, known as the training set, learning to link the features described above with the different organ types. Once trained, the software can classify new images from different patients on its own. The biomedical engineers first tested classification using colour or texture alone, then using both features together. Combining both gave the best results.
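The article does not say which classifier the team used; as an illustrative stand-in, a simple nearest-centroid classifier captures the train-then-classify idea. The feature vectors here would be the concatenated colour and texture descriptors; all names below are hypothetical.

```python
import numpy as np

ORGANS = ("stomach", "small_intestine", "large_intestine")

def train_centroids(features, labels):
    """Training: average the feature vectors of each organ in the training set."""
    features = np.asarray(features, dtype=float)
    labels = np.asarray(labels)
    return {organ: features[labels == organ].mean(axis=0) for organ in ORGANS}

def classify(feature, centroids):
    """Classification: pick the organ whose centroid lies nearest the new image."""
    feature = np.asarray(feature, dtype=float)
    return min(centroids,
               key=lambda organ: np.linalg.norm(feature - centroids[organ]))
```

Combining colour and texture simply means concatenating the two descriptors into one longer vector before training, which is one plausible reading of why using both features together performed best.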

“The software is still at the research stage. That research needs to be turned into a software package for a hospital’s day-to-day examinations” 

Dr Yulia Hicks
Prof. Ing. Kenneth Camilleri

After the images have been labeled, the algorithm can draw the boundaries between digestive organs. With the boundaries in place, the specialist can focus on the small intestine. At the press of a button countless hours and cash are saved.

 

The software is still at the research stage. That research needs to be turned into a software package for a hospital’s day-to-day examinations. In the future, the algorithm could even be embedded directly in the capsule itself. The result would be an intelligent capsule: a recording process that adapts to the doctor’s needs, showing them just what they want to see.

Ideally the doctor would have it even easier with the software highlighting diseased areas automatically. The researchers at the University of Malta want to start automatically detecting abnormal conditions and pathologies within the digestive tract. For the specialist, it cannot get better than this.

The result? A shorter and more efficient screening process that could turn capsule endoscopy into an easily accessible and routine examination. Shorter specialist screening times would bring down costs in the private sector and lessen the burden on public health systems. Michael would not need to worry any longer; he’d just pop a pill.

* Michael is a fictitious character

[ct_divider]

The author thanks Prof. Thomas Attard and Joe Garzia. The research work is funded by the Strategic Educational Pathways Scholarship (Malta). The scholarship is part-financed by the European Union — European Social Fund (ESF) under Operational Programme II — Cohesion Policy  2007–2013, ‘Empowering People for More Jobs and a Better Quality of Life’

Time to buy a smart watch?

Tech Review

Just a few years back, mobile phones could make and receive calls, store a few numbers, and little else. Over the last few years, phones have grown ‘smarter’: they can surf the web, take photos, keep up to date with Facebook and Twitter, play games and music, read books, and much more.
Many argue that our watches are next in line for such a transformation. And judging by the excitement surrounding Samsung’s recent announcement of its smartwatch, the Galaxy Gear, few will argue against that. Samsung is not the only player vying for the potentially big returns of smartwatches. Another heavyweight in the technology business, Sony, has been on board for a few years and has just announced its SmartWatch2.
Many small start-ups have also joined the furore delivering watches such as the Pebble, the Martian Passport, the Kreyos Meteor, the Wimm One, the Strata Stealth and the rather unimaginatively named: I’m watch.
All these smartwatches provide basic features such as instant notifications of incoming calls, SMSes, Facebook updates, and tweets through a Bluetooth connection with a paired phone. They often also allow mail reading and music control.
With so many players and no clear winner, the technology still needs to mature. Sony and Samsung use colour LED-based displays. Their drawbacks are poor visibility in direct sunlight and a battery life of barely a day. Others use electronic ink, the same screen technology as e-readers, with excellent visibility and much improved battery life, though sadly in black and white or limited colour.

User interaction also varies. While the Pebble and the Meteor favour a button-based interface, all other players utilise touch and voice control.
The differences do not stop there. Not all watches are waterproof – and do you really want to be taking off your watch every time you wash your hands? Also, some watches, like the I’m watch, provide a platform for app development, with new apps available for download every day.
One big player is still missing. Rumours of Apple’s imminent entry into the smartwatch business have been circling for a couple of years.
While guessing the name of Apple’s watch is easy (the iWatch), the technology has been kept under wraps. As with other Apple products, their watch will not be first to market. Are they again waiting for the technology to evolve enough to bring out another game changer like the iPod, the iPhone and, more recently, the iPad? Only time will tell.

My biggest problem with any smartwatch available is that none seem truly ‘smart’. Smartwatches seem like little dumb accessories to their smart big brothers — the phones. I am waiting for a watch to become smart enough to replace my phone before jumping on the smartwatch bandwagon.

Power of the Wind

My passion for renewable energies was sparked during my undergraduate studies in Mechanical Engineering at the University of Malta. Thanks to ERASMUS, I studied at the University of Strathclyde, which had a Renewable Energy course that, at the time, was not offered in Malta.

I spent the last year of my bachelor studies designing and testing part of a wind tunnel to simulate atmospheric wind conditions. This test setup allowed for more realistic wind turbine experiments than previous efforts.

Although I wanted to further my career in wind energy, I opted first to broaden my knowledge in the field of renewables by enrolling for the Masters in Sustainable Energy Technology at Delft University of Technology in the Netherlands in August 2010.

Over the first year, I worked on several projects. They included designing a smart grid, which was presented at the European Joint Research Centre (JRC). I also helped develop an innovative thermal energy plant that generates electricity by exploiting the temperature difference between the ocean surface and deep water (more than 1 km down) in tropical seas.

Over the second year, I returned to wind energy research. At DUWIND, Delft University’s renowned wind energy research institute, I looked into the effect wind turbines can have on each other. When wind turbine blades cut through the wind they can change its direction, which can reduce the efficiency of nearby turbines, making them produce less energy. My results showed that a turbine’s effect on its neighbours diminishes when the wind distortion it causes is limited, either by the wind’s inherent instability or by other properties such as its proximity to the ground. By exploiting these qualities of the wind, a wind farm’s efficiency can be improved by up to 15%.

After my Masters I worked for a year in Eindhoven as a flow and thermal analyst at Segula Technologies Consultancy, where I developed new components for a company’s cutting-edge lithography machines and worked on fuel cell system development for BOSAL engineering. I have now secured a Ph.D. scholarship in wind turbine blade aerodynamics, continuing the work I started during my Masters at DUWIND. This time I am looking into the influence of small flow control devices on the performance of large (10 MW) wind turbines.

 

Baldacchino was awarded a STEPS scholarship for his Masters studies, which is part-financed by the EU’s European Social Fund under Operational Programme II — Cohesion Policy 2007–2013.