Traffic at the intersection of sensor data and social media

Dublin has one of the most advanced intelligent traffic systems in Europe. The city's intersections are full of sensors (induction loops counting cars, remotely controlled traffic lights, and traffic cameras), and its bus fleet is equipped with on-board satellite positioning units, all streaming data in real time. Now the city is turning to crowdsourcing, plugging social media data into its Insight system, powered by research done at IBM Research-Ireland. Two technologies use data from the commuting public to give urban traffic controllers a better view of city roads, so they can respond to incidents and issue alerts more quickly – improving both the accuracy of the Insight system and the availability of traffic alerts for Dubliners.


Carbon nanotubes at 9 nm

A Q&A with IBM Research’s Shu-Jen Han

PhD, Materials Science & Engineering, Stanford
PhD Minor, Electrical Engineering, Stanford
Area of focus: Nanotechnology, including nanomaterials and nanoelectronics

How are silicon and carbon similar when it comes to transistors? 

Let's start with carbon, because it has so many different allotropes, from carbon nanotubes and graphene to diamond. Diamond, for example, is an electrical insulator, not a semiconductor – which is what we need for a transistor. Graphene is a two-dimensional sheet of pure carbon (yes, one atom thick) that conducts current well, but it has no bandgap, so transistors made with graphene cannot be switched off. Carbon nanotubes are a rolled-up form of graphene. They are somewhat similar to silicon in that both have a bandgap and can be used as the centerpiece of the transistor – the channel.
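To make the bandgap point concrete, here is a minimal Python sketch using the standard tight-binding estimate for a semiconducting nanotube's bandgap, E_g ≈ 2·γ0·a_CC/d. The constants and diameters below are textbook values, assumptions for illustration rather than figures from this Q&A.

```python
# Back-of-the-envelope bandgap of a semiconducting carbon nanotube,
# using the common tight-binding estimate E_g ~= 2 * gamma0 * a_cc / d.
# (A textbook approximation, not a formula from the article itself.)

GAMMA0 = 2.7   # eV, carbon-carbon hopping energy
A_CC = 0.142   # nm, carbon-carbon bond length

def cnt_bandgap_ev(diameter_nm: float) -> float:
    """Approximate bandgap (eV) of a semiconducting nanotube of given diameter."""
    return 2 * GAMMA0 * A_CC / diameter_nm

# A typical device-grade nanotube is roughly 1-1.5 nm across:
for d in (1.0, 1.2, 1.5):
    print(f"d = {d:.1f} nm  ->  E_g ~ {cnt_bandgap_ev(d):.2f} eV (silicon: 1.12 eV)")
```

The inverse dependence on diameter is the key contrast with graphene: roll the sheet into a narrow enough tube and a usable bandgap appears.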

Staring at the sun

IBM Research solar camera watches for, predicts solar energy

The best solar panels convert only about 20 percent of the sunlight hitting their surface into usable electricity. On a perfect day at sea level, such a panel could generate approximately 200 watts of electricity per square meter. Introduce clouds, shade from trees, or dust in the wind and that power drops even further – making solar a variable energy source for the grid, or anything else powered by photovoltaic panels. So, our physical analytics team at IBM Research built a camera the size and shape of a basketball that can predict solar radiation for the Department of Energy – and more recently, for the University of Michigan's solar car team.
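For a rough sense of those numbers, here is a minimal sketch of the arithmetic, assuming the usual clear-sky benchmark of about 1,000 W/m² of irradiance at sea level (an assumption, not a figure from the article):

```python
# Usable power per square meter of panel, given irradiance and conversion
# efficiency. The 1,000 W/m^2 clear-sky figure is an assumed benchmark.

def panel_output_w_per_m2(irradiance_w_m2: float, efficiency: float) -> float:
    """Electrical output per square meter of panel surface."""
    return irradiance_w_m2 * efficiency

clear_sky = panel_output_w_per_m2(1000, 0.20)  # ~200 W/m^2, as in the text
overcast  = panel_output_w_per_m2(300, 0.20)   # heavy cloud cuts irradiance sharply
print(f"clear sky: {clear_sky:.0f} W/m^2, overcast: {overcast:.0f} W/m^2")
```

The efficiency term is roughly fixed by the panel; it is the irradiance term that clouds, shade, and dust knock around – which is exactly what the camera is built to predict.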

Measuring the sun

Analyzing cloud and car movement for a perfect solar forecast 

The 3,000 km World Solar Challenge route runs north to south through the Australian outback, from Darwin to Adelaide. That means the critical component in the race – the sun – will slowly arc over the cars’ panels from left to right. The sun’s position, its radiation, and other weather elements are what our physical analytics team at IBM Research will measure and predict for the University of Michigan’s solar car team during the WSC, October 18-25.

We hope our short-term and long-range solar forecasts help UM dodge clouds and find the perfect place to charge their car's battery before sundown, improving their chances of winning the race. But the forecasting challenge we're solving with cognitive computing could also impact the solar energy industry at large. Maybe solar-powered mass transit someday?
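As a minimal illustration of why the sun's arc matters so much over this route, the sketch below estimates the sun's elevation at local noon for Darwin and Adelaide in mid-October, using the standard declination approximation. The latitudes and date are illustrative assumptions, not race data.

```python
# Noon solar elevation at the two ends of the route, mid-October,
# via the common declination approximation (illustrative only).
import math

def declination_deg(day_of_year: int) -> float:
    """Approximate solar declination (degrees) for a given day of the year."""
    return -23.45 * math.cos(math.radians(360 / 365 * (day_of_year + 10)))

def noon_elevation_deg(latitude_deg: float, day_of_year: int) -> float:
    """Sun's elevation above the horizon at local solar noon."""
    return 90 - abs(latitude_deg - declination_deg(day_of_year))

OCT_21 = 294  # roughly mid-race
for city, lat in (("Darwin", -12.46), ("Adelaide", -34.93)):
    print(f"{city}: noon sun elevation ~ {noon_elevation_deg(lat, OCT_21):.0f} deg")
```

Near Darwin the October sun passes almost directly overhead, while near Adelaide it sits noticeably lower – one reason position and radiation both have to be forecast along the way.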


Meet an IBM researcher: Michael Nidd

Name: Mike Nidd
Location: IBM Research - Zurich
Nationality: Canadian
Focus: Services Research

Many large firms across any number of industries outsource their data centers to IT service providers like IBM. The rationale is obvious: managing thousands of servers is not a core competency for a retailer or a mining company, while IT vendors have the skills and resources to manage millions of square feet of data center space more efficiently.

After these often billion-dollar deals are signed, the real work begins as the vendors migrate the systems. In some instances this could mean literally shipping the hardware to another secure facility; in others, remote management, or a complete rip and replace.


Mimicking Neurons With Math

Dr. Takayuki Osogami
Artificial neural networks have long been studied in the hope of achieving a machine with the human capability to learn. Today's artificial neural networks build on Hebb's rule, which Dr. Donald Hebb proposed in 1949 to describe how neurons adjust the strength of their connections. Since Hebb, other "rules" of neural learning have been introduced to refine his, such as spike-timing dependent plasticity (STDP). All of this helps us understand our brains, but it makes developing artificial neurons even more challenging.
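For readers unfamiliar with STDP, here is a minimal sketch of the classic rule: a synapse strengthens when the presynaptic neuron fires just before the postsynaptic one, and weakens when it fires just after. The amplitudes and time constants below are illustrative assumptions, not parameters from Takayuki's model.

```python
# Classic STDP weight update: potentiation for pre-before-post spikes,
# depression for post-before-pre, both decaying exponentially with the
# spike-time gap. All constants are illustrative.
import math

A_PLUS, A_MINUS = 0.01, 0.012      # learning-rate amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0   # decay time constants (ms)

def stdp_weight_change(dt_ms: float) -> float:
    """Weight update for spike-time difference dt = t_post - t_pre (ms)."""
    if dt_ms > 0:   # pre fired before post: strengthen the synapse
        return A_PLUS * math.exp(-dt_ms / TAU_PLUS)
    else:           # pre fired after post: weaken it
        return -A_MINUS * math.exp(dt_ms / TAU_MINUS)

for dt in (2, 10, 50, -2, -10, -50):
    print(f"dt = {dt:+4d} ms -> dw = {stdp_weight_change(dt):+.5f}")
```

The closer the two spikes in time, the bigger the change – which is why the rule is so good at capturing temporal order, and so awkward to bolt onto conventional artificial networks.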

A biological neural network is too complex to map exactly onto an artificial one. But IBM mathematician Takayuki Osogami and his team at IBM Research-Tokyo might have figured out a way forward by developing artificial neurons that mathematically mimic STDP to learn words, images, and even music. Takayuki's mathematical neurons form a new kind of artificial neural network, a dynamic Boltzmann machine (DyBM), that can learn information from multiple contexts through training.

The team taught seven artificial neurons the word "SCIENCE" (one artificial neuron per bit) in the form of a bitmap image. So, the image of "SCIENCE" becomes:
[Figure: the word "SCIENCE" rendered as a 7 x 35 bitmap of 1s and 0s]
The “1s” equate to the lines making up the letters, while the “0s” translate to the white space around the letters.

Together, these seven neurons read and write 7-bit patterns at once. The word "SCIENCE" is expressed as a sequence of 35 such patterns – a 245-bit monochrome bitmap image. The seven neurons read and memorized each 7-bit piece of the image; "0100010", for example, is the tenth column of the whole image in the learning order, and the neurons recall the pieces in the order they learned them. By memorizing the word from left to right and from right to left, the neurons could recognize "SCIENCE" forward and backward, or in any order – like how we might solve a word jumble or crossword puzzle.
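A minimal sketch of that column-by-column encoding, using a small made-up bitmap fragment rather than the actual "SCIENCE" image from the paper:

```python
# Reading a bitmap one column at a time, each bit feeding one of seven
# neurons. The 7x5 bitmap below is a hypothetical fragment for illustration.

bitmap = [
    "01110",
    "10001",
    "10000",
    "10000",
    "10000",
    "10001",
    "01110",
]

def columns(rows: list[str]) -> list[str]:
    """Return the image column by column as 7-bit strings."""
    return ["".join(row[c] for row in rows) for c in range(len(rows[0]))]

for i, col in enumerate(columns(bitmap), start=1):
    print(f"column {i}: {col}")   # each string is one 7-bit training pattern
```

Each printed string is one time step of the training sequence; the full "SCIENCE" image is simply 35 of these presented in order.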

Figure 1: The DyBM successfully learns two target sequences and retrieves a particular sequence when an associated cue is presented. (Credit: Scientific Reports)

More neurons. More memories.

Things get more complicated (and interesting) when these artificial neurons learn about different topics in different formats, such as the human evolution image below. Takayuki's team put 20 artificial neurons to the task of learning this image, which shows how we humans have evolved, from left to right. Why 20 artificial neurons this time? Each column of the human evolution image consists of 20 bits, so the team matched one neuron to each bit.

These neurons learned how the pieces of the image line up in the correct order of evolution – from apes to Homo sapiens. As Takayuki runs simulations, the neurons learn more over time, detecting the mistakes they make and correcting them with each pass. With each simulation, the neurons generate an image to show their progress in re-creating the original. It took just 19 seconds for the 20 neurons to learn the image correctly, as mapped out below.
Figure 2: The DyBM learned the target sequence of human evolution. (Credit: Scientific Reports)
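The overall rhythm of that process – replay the sequence, count the mistakes, correct them, repeat – can be sketched with a deliberately trivial stand-in for the DyBM:

```python
# Train-generate-correct loop with a toy next-pattern predictor standing in
# for the real DyBM. The 4-bit sequence is made up for illustration.

target = ["0110", "1001", "1111", "0001"]  # toy target sequence

memory: dict[str, str] = {}  # current pattern -> predicted next pattern

for epoch in range(1, 100):
    errors = 0
    for prev, nxt in zip(target, target[1:]):
        guess = memory.get(prev, "0" * len(nxt))      # generate a prediction
        errors += sum(a != b for a, b in zip(guess, nxt))  # detect mistakes
        memory[prev] = nxt                             # correct the memory
    if errors == 0:
        print(f"sequence reproduced perfectly after {epoch} passes")
        break
```

The real model adjusts continuous synaptic weights rather than overwriting a lookup table, but the loop structure – predict, measure error, update, repeat until the generated sequence matches the target – is the same.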

Images and text are one thing, but neurons encompass all senses. So Takayuki's team put 12 of their artificial neurons to work learning music. Using a simplified version of the German folk song Ich bin ein Musikante, each neuron was assigned to one of 12 notes (Fa, So, La, Ti, Do, Re, Mi, Fa, So, La, Ti, Do). After 900,000 training sessions, the neurons learned the sequential patterns of tones well enough to generate a simplified version of the song.

The neurons learn music much like we might: through repetition, from beginning to end, to the point of memorization. Currently, the 12 neurons generate only quarter notes, but simply doubling the neurons to 24 would give the system the ability to comprehend half notes.
Figure 3: The DyBM learned the target music. (Credit: Scientific Reports)
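A minimal sketch of the one-neuron-per-note setup described above; the octave markings and the short melody are illustrative assumptions, not the actual training score from the paper.

```python
# One neuron per note: a melody becomes a sequence of 12-bit one-hot
# patterns. Primes distinguish the repeated note names across octaves.

NOTES = ["Fa", "So", "La", "Ti", "Do", "Re", "Mi",
         "Fa'", "So'", "La'", "Ti'", "Do'"]  # 12 neurons, low to high

def one_hot(note: str) -> str:
    """12-bit pattern with a single 1 at the note's neuron."""
    bits = ["0"] * len(NOTES)
    bits[NOTES.index(note)] = "1"
    return "".join(bits)

melody = ["Do", "Re", "Mi", "Fa'", "Mi", "Re", "Do"]  # made-up phrase
for note in melody:
    print(f"{note:>3}: {one_hot(note)}")
```

Seen this way, a song is just another bit-pattern sequence like the "SCIENCE" bitmap – which is why the same machinery can memorize both.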

Takayuki's DyBM not only memorizes and recalls sequential patterns; it can also detect anomalies in sequential patterns and make predictions about future ones. It could be used to predict driving risks through car-mounted cameras, generate new music, or even detect and correct grammatical errors in text. Takayuki, whose work is currently funded by the Japan Science and Technology Agency's Core Research for Evolutionary Science and Technology (CREST) program, hopes to advance the DyBM by integrating it with reinforcement learning techniques so that it can act optimally on the basis of such anomaly detection and prediction.
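As a minimal illustration of the anomaly detection idea – the general principle, not Takayuki's method – a model that has learned a sequence can flag any observation that diverges sharply from its prediction. The threshold and data below are illustrative assumptions.

```python
# Anomaly detection by prediction error: flag time steps where the observed
# pattern disagrees with the learned model's prediction by too many bits.

def hamming(a: str, b: str) -> int:
    """Number of bit positions where two equal-length patterns differ."""
    return sum(x != y for x, y in zip(a, b))

THRESHOLD = 2  # flag if more than 2 of 7 bits disagree (assumed cutoff)

predicted = ["0100010", "0111110", "0100010"]  # what the trained model expects
observed  = ["0100010", "1011101", "0100010"]  # middle pattern is corrupted

for t, (p, o) in enumerate(zip(predicted, observed)):
    status = "anomaly" if hamming(p, o) > THRESHOLD else "ok"
    print(f"t={t}: {status} ({hamming(p, o)} bits of error)")
```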

The scientific paper "Seven neurons memorizing sequences of alphabetical images via spike-timing dependent plasticity" by Takayuki Osogami and Makoto Otsuka appeared in Scientific Reports (Nature Publishing Group) on September 16, 2015, DOI: 10.1038/srep14149.