20 December 2015

25 sci-tech developments that offer a peek into the future

Scientists at universities across the world continue to experiment with radical ideas in fields such as genetics, quantum computing, robotics, chemistry and physics. Many of their scientific and technological developments of 2015, clubbed here under five subheadings of convenience (Cutting Edge, Science and Medicine, Robotics, The Universe, and Technology and Society), are clearly works in progress.

Nevertheless, all of them reflect the potential to change the way we think and work on our planet, and in the universe we live in. What follows is a handpicked collection of some of 2015’s finest moments of epiphany.

Cutting Edge

Invisibility cloaks

While many dream of a cloak that would make them invisible to the world, a true Harry Potter-like invisibility cloak remains a distant prospect. Nevertheless, a growing body of research is steadily leading the way towards making one.

Consider the work of Debashis Chanda at the University of Central Florida. The cover story in the March edition of the journal Advanced Optical Materials explains how Chanda and his fellow optical and nanotech experts were able to develop a larger swathe of multi-layer 3D metamaterial operating in the visible spectral range.

The broad theory behind invisibility cloaks is to manipulate light by controlling and bending it around an object to make the latter seem invisible to the human eye. It is the scattering of light—visible, infrared, X-ray, etc.—that interacts with matter to help us detect and observe objects. However, the rules that govern these interactions in natural materials can be circumvented in metamaterials whose optical properties arise from their physical structure rather than their chemical composition.

By improving the technique, Chanda and his team hope to be able to create larger pieces of the material with engineered optical properties, which would make it practical to produce for real-life device applications.

In April, a group of researchers from the Karlsruhe Institute of Technology (KIT) in Karlsruhe, Germany, said they have developed a portable invisibility cloak that can be taken into classrooms and used for demonstrations. It can’t hide a human, but it can make small objects disappear from sight without specialized equipment.

Scientists hoping to divert light around an object to render it invisible must compensate for the longer detour by making the light travel faster. To address this challenge, the KIT team constructed their cloak from a light-scattering material. By scattering light, the material slows the effective propagation speed of the light waves through the medium; the light can then be sped up again to make up for the longer path around the hidden object. In this cloak, the object to be concealed is placed inside a hollow metal cylinder coated with acrylic paint, which diffusely reflects light. The tube is embedded within a block of polydimethylsiloxane, a commonly used organic polymer, doped with titanium dioxide nanoparticles that make it scatter light.
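
Here is that timing constraint in miniature, as a back-of-envelope Python sketch with illustrative numbers (the path lengths and speeds are assumptions, not figures from the KIT paper): light detouring around the hidden cylinder travels a longer path, so the cloak shell must carry it proportionally faster to stay in step with light crossing the bulk.

```python
# Toy timing budget for a scattering cloak (illustrative numbers).
straight_path = 1.0   # normalized path length through the bulk medium
detour_path = 1.3     # assumed 30% longer path around the hidden object
v_bulk = 0.5          # effective light speed in the scattering medium,
                      # in units of c; diffusion slows it well below c

# Equal transit times require: detour_path / v_shell = straight_path / v_bulk
v_shell = v_bulk * detour_path / straight_path
print(f"Shell must run {v_shell / v_bulk:.2f}x faster than the bulk")
# Because the bulk is already slowed far below c, the required shell
# speed (here 0.65c) stays physically attainable.
```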

There have been similar attempts. On 17 September, for example, scientists at the US department of energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) and the University of California Berkeley said they have devised an ultra-thin invisibility “skin cloak” that can conform to the shape of an object and conceal it from detection with visible light. Working with brick-like blocks of gold nano-antennas, the Berkeley researchers fashioned a “skin cloak” barely 80 nanometers in thickness. The surface of the “skin cloak” was meta-engineered to reroute reflected light waves so that the object was rendered invisible to optical detection when the cloak was activated.

On 21 September, scientists at the Nanyang Technological University (NTU) in Singapore said in a statement that they have developed a thermal cloak that can render an object thermally invisible by actively redirecting incident heat. To construct the cloak, the researchers deployed 24 small thermoelectric modules, which are semiconductor heat pumps controlled by an external input voltage. The modules operate via the Peltier effect, whereby a current running through the junction between two conductors can remove or generate heat. When many modules are attached in series, they can redirect heat flow.
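
The Peltier relation the modules exploit is textbook physics: the heat pumped at a junction scales with the drive current. A toy calculation follows, with an assumed Seebeck coefficient and current rather than values from the NTU study:

```python
# Illustrative Peltier arithmetic (assumed values, not the NTU study's).
seebeck = 200e-6   # Seebeck coefficient, V/K (typical thermoelectric)
T = 300.0          # junction temperature, K
current = 0.5      # drive current, A

q_pumped = seebeck * T * current   # Peltier heat flow: Q = S * T * I
print(f"Heat pumped per junction: {q_pumped * 1e3:.0f} mW")
# Reversing the current reverses the heat flow, which is what lets an
# externally controlled array of modules steer heat around a region.
```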

The researchers also found that their active thermal cloaking is not limited by the shape of the object being hidden: when applied to a rectangular air hole, the thermoelectric devices redistributed heat just as effectively as around the circular hole used in their initial demonstration. Baile Zhang and his team plan to apply the thermal cloaks in electronic systems.

Transparent devices

If you recall all the high-tech transparent gadgetry Tom Cruise used in Minority Report, you may wonder why it is yet to become a reality even though it’s well over a decade since that movie was released. Batteries pose a big problem, according to a 14 November press statement, since they are made of thick materials.

With a technique known as spin-spray layer-by-layer (SSLbL) assembly, Yale researchers have created ultrathin and transparent films from single-walled carbon nanotubes (SWNT) and vanadium pentoxide (V2O5) nanowires to serve as battery anodes and cathodes—in a bid to work around the issue.

The work was done at the lab of André Taylor, associate professor of chemical and environmental engineering, and the results were published online in the journal ACS Nano. Forrest Gittleson, a post-doctoral associate at Yale in chemical and environmental engineering, is the lead author.

The researchers acknowledge that there are still challenges to overcome before transparent devices can be mass-produced, the biggest obstacle being “improving the conductivity of these thin electrodes”. To address the issue, the researchers created a new “sandwich” architecture that integrates conductive SWNT layers and active cathode materials to enhance performance. The next step, Taylor said, is creating a transparent separator/electrolyte—the third major component of a battery, and the medium through which lithium ions travel between the anode and cathode.

“Nature has already demonstrated that complex systems can be transparent,” Gittleson said. “In fact, earlier this year, they discovered a new glass frog species with translucent skin in Costa Rica. If nature can achieve it through evolution, we should be able to with careful engineering.”

Companies are indeed interested in transparent devices. On 18 November 2014, Apple Inc. was granted a patent for an invention relating to a method and system for displaying images on a transparent display of an electronic device. Furthermore, the display screens may allow for overlaying of images over real-world viewable objects, as well as a visible window to be present on an otherwise opaque display screen. Apple credited Aleksandar Pance as the sole inventor of the granted patent.

In June, Samsung Electronics unveiled the first commercial use of its mirror and transparent OLEDs (organic light emitting diodes) at the Retail Asia Expo 2015. Samsung had rolled out the first mass-produced transparent LCD panels in 2011 and Philips’ HomeLab R&D outfit had demonstrated an LCD mirror TV in 2004.

Meanwhile, Ubiquitous Energy, a start-up that was spun off from Massachusetts Institute of Technology in 2014, has developed a “transparent solar cell technology to market to eliminate the battery life limitations of mobile devices”, according to a 28 May release by Ubiquitous Energy co-founder and CEO Miles Barr. Implemented as a fully transparent film that covers a device’s display area, the company’s “ClearView Power technology” transmits light visible to the human eye, while selectively capturing and converting ultraviolet and near-infrared light into electricity to power the device and extend its battery life.

Face recognition

On 9 June, Microsoft Corp. launched a site, twinsornot.net, for users to upload their photos, assuring them that it would not retain the uploaded pictures. A user could simply upload two photos and have the site assess how similar the people in them look, giving a score from 0 to 100. Running in the background was Microsoft’s Face API (application programming interface) which, for instance, can detect up to 64 human faces in an image.

Face recognition typically provides the functionalities of automatically identifying or verifying a person from a selection of detected faces. It is widely used in security systems, celebrity recognition and photo tagging applications.

Optionally, face detection can also extract a series of face-related attributes from each face, such as pose, gender and age. The inspiration came from Microsoft’s April launch of another site, ‘How Old Do I Look?’ (how-old.net), which lets users upload a picture and have the API predict the age and gender of any faces recognized in it.

Facial recognition is not new, and technology companies are very interested in this for obvious reasons—knowing who their users are and using that analysis to build better and more marketable products around it.

On 20 June 2014, a University of Central Florida research team said it has developed a facial recognition tool that promises to be useful in rapidly matching pictures of children with their biological parents, and potentially identifying photos of missing children as they age.

Facebook’s DeepFace uses technology designed by an Israeli start-up called face.com, which Facebook acquired in 2012. And according to a paper posted in March on arxiv.org, a repository operated by Cornell University, three researchers from Google Inc. have developed a similar deep neural net architecture and learning method, one that uses a facial alignment system based on explicit 3D modelling of faces. Google calls it FaceNet.
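
Both systems boil face verification down to comparing learned “embedding” vectors. The sketch below is a hedged illustration of that idea, not Facebook’s or Google’s code: the trained network is mocked by a stand-in function, the 128-dimension figure follows FaceNet’s paper, and the 0-to-100 score mapping is an assumption modelled on twinsornot.net’s output.

```python
import numpy as np

def embed(face_pixels: np.ndarray) -> np.ndarray:
    """Stand-in for a trained deep network's embedding of a face."""
    rng = np.random.default_rng(int(face_pixels.sum()) % 2**32)
    v = rng.normal(size=128)          # FaceNet used 128-D embeddings
    return v / np.linalg.norm(v)      # L2-normalized, as in the paper

def similarity_score(face_a: np.ndarray, face_b: np.ndarray) -> float:
    """Map embedding distance to a 0-100 score (assumed mapping)."""
    d = np.linalg.norm(embed(face_a) - embed(face_b))   # d lies in [0, 2]
    return round(100 * (1 - d / 2), 1)

same = np.ones((160, 160))
other = np.full((160, 160), 2.0)
print(similarity_score(same, same))    # 100.0 for identical faces
print(similarity_score(same, other))   # lower for different faces
```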

Quantum computing

Conventional computers use bits, the basic unit of information in computing—zeros and ones. A quantum computer, on the other hand, deals with qubits that can encode a one and a zero simultaneously—a property that will eventually allow them to process a lot more information than traditional computers, and at unimaginable speeds.
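
A toy simulation makes the idea concrete. The sketch below, a minimal illustration rather than real quantum hardware, prepares a qubit in an equal superposition and samples measurements, which collapse it to 0 or 1 with the textbook probabilities:

```python
import numpy as np

# One qubit: a|0> + b|1>, with |a|^2 + |b|^2 = 1.
a, b = 1 / np.sqrt(2), 1 / np.sqrt(2)   # equal superposition
probs = [abs(a) ** 2, abs(b) ** 2]      # measurement probabilities

rng = np.random.default_rng(0)
samples = rng.choice([0, 1], size=1000, p=probs)
print(f"P(0) ~ {np.mean(samples == 0):.2f}, "
      f"P(1) ~ {np.mean(samples == 1):.2f}")   # both near 0.5
```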

Developing a quantum computer, however, is easier said than done. The main hurdle is stability: calculations take place at the quantum level, where the slightest interference can disrupt the process. Tackling this instability is one of the main reasons why the effort to build a quantum computer is so expensive.

The concept of quantum computing, first proposed by American theoretical physicist Richard Feynman in 1982, is nevertheless not new to governments and companies like IBM, Microsoft Corp. and Google Inc. On 2 January 2014, The Washington Post reported that the US National Security Agency (NSA) was building a quantum computer that could break nearly every kind of encryption.

Google, for its part, teamed up with US space agency Nasa’s Quantum Artificial Intelligence Laboratory in August 2013 to work on a machine built by D-Wave Systems, in the hope that quantum computing may someday dramatically improve the agency’s ability to solve problems much faster. (Nasa and Google’s partnership began 10 years ago and is not restricted to quantum computing; D-Wave’s claim of having built a pure quantum computer is disputed, since the company is not building a “gate model” device.)

On 3 September 2013, Google said it would continue to collaborate with D-Wave scientists and experiment with the Vesuvius machine at the Nasa Ames campus in Mountain View, California, “which will be upgraded to a 1000 qubit ‘Washington’ processor”. Bloomberg reported on 9 December this year that, at Nasa Ames, Google and D-Wave are developing a new computer that can solve some types of complex problems that are next to impossible for conventional computers.

In a related development, Princeton University researchers said they have built a rice-grain-sized laser powered by single electrons tunnelling through artificial atoms known as quantum dots. It is being touted as a major step towards building quantum-computing systems out of semiconductor materials.

The researchers built the device, which uses about one-billionth the electric current needed to power a hair dryer, while exploring how to use quantum dots, which are bits of semiconductor material that act like single atoms, as components for quantum computers. “It is basically as small as you can go with these single-electron devices,” said Jason Petta, an associate professor of physics at Princeton who led the study, which was published in the journal Science on 16 January.

‘Material’ computers

Manu Prakash, an assistant professor of bioengineering at Stanford, and his students have developed a synchronous computer that operates using the unique physics of moving water droplets. The goal is to design a new class of computers that can precisely control and manipulate physical matter, according to an 8 June press statement.

The team’s aim with the new computer, nearly a decade in the making, is not to compete with digital computers that process information or operate word processors but “to build a completely new class of computers that can precisely control and manipulate physical matter”. “Imagine if when you run a set of computations that not only information is processed but physical matter is algorithmically manipulated as well. We have just made this possible at the mesoscale (between micro- and macro-scale),” Prakash said.

The ability to precisely control droplets using fluidic computation could have a number of applications in high-throughput biology and chemistry, and possibly new applications in scalable digital manufacturing. The results were published in the June edition of Nature Physics. Prakash recruited a graduate student, Georgios ‘Yorgos’ Katsikis, who is the first author on the paper.

A clock for a fluid-based computer, the researchers point out, has to be easy to manipulate and able to influence multiple droplets at a time. The system also needed to be scalable so that, in the future, a large number of droplets could communicate with each other. A rotating magnetic field did the trick: Katsikis and Prakash built arrays of tiny iron bars on glass slides.

Every time the field flips, the polarity of the bars reverses, drawing the magnetized droplets in a new, predetermined direction. Every rotation of the field counts as one clock cycle, like a second-hand making a full circle on a clock face, and every drop marches exactly one step forward with each cycle.

A camera records the interactions between individual droplets, allowing the observation of computation as it occurs in real time. The presence or absence of a droplet represents the 1s and 0s of binary code, and the clock ensures that all the droplets move in perfect synchrony, and thus the system can run virtually forever without any errors.
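
A toy model of that synchrony (a sketch of the idea, not the Stanford team’s code) might look like this: each field rotation is one clock cycle, and every droplet advances exactly one position along its track per cycle, like bits marching through a shift register.

```python
# 1 = droplet present, 0 = absent, at successive positions on a track.
tracks = {"A": [1, 0, 1, 0, 0], "B": [0, 1, 0, 0, 0]}

def clock_cycle(track):
    """One rotation of the magnetic field: every droplet steps forward."""
    return [0] + track[:-1]

for cycle in range(1, 4):
    tracks = {name: clock_cycle(t) for name, t in tracks.items()}
    print(f"cycle {cycle}: {tracks}")   # all droplets stay in lockstep
```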

According to Prakash, the most immediate application might involve turning the computer into a high-throughput chemistry and biology laboratory. Instead of running reactions in bulk test tubes, each droplet can carry some chemicals and become its own test tube, and the droplet computer offers unprecedented control over these interactions.

Science and Medicine

Cheap and fast eye check-ups

More than 4 billion people across the world require eyeglasses, and more than half of them lack access to eye tests. Traditional diagnostic tools are cumbersome, expensive and fail to take advantage of today’s mobile computing power.

This prompted an MIT spinout, EyeNetra, to develop smartphone-powered eye-test devices. The devices are being readied for a commercial launch, with the management wanting to introduce them in hospitals, optometric clinics, optical stores and even homes in the US.

EyeNetra is also pursuing opportunities to collaborate with virtual-reality companies seeking to use the technology to develop “vision-corrected” virtual-reality displays. “As much as we want to solve the prescription glasses market, we could also (help) bring virtual reality to the masses,” said EyeNetra co-founder Ramesh Raskar, an associate professor of media arts and sciences at the MIT Media Lab, who co-invented the device, in a 19 October press statement. The device, called Netra, is a plastic, binocular-like headset.

Using the company’s app, users can attach a smartphone to the front and peer through the headset at the phone’s display. Patterns, such as separate red and green lines or circles, appear on the screen.

The app calculates the difference between what a user sees as “aligned” and the actual alignment of the patterns. This reveals any refractive errors such as nearsightedness, farsightedness and astigmatism (an eye condition that causes blurred or distorted vision), providing the information needed for an eyeglasses prescription.
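
As a hedged illustration of the principle, not the product’s actual calibration, the conversion from alignment offset to prescription can be pictured as a linear mapping; the pixel pitch and gain constant below are assumptions.

```python
# Toy Netra-style conversion (assumed constants, illustrative only).
pixel_pitch_mm = 0.08   # assumed display pixel pitch
gain_d_per_mm = 2.0     # assumed calibration: diopters per mm of shift

def refractive_error(shift_pixels: float) -> float:
    """Estimate refractive error from the user's alignment offset."""
    return shift_pixels * pixel_pitch_mm * gain_d_per_mm

print(f"{refractive_error(-12):.1f} D")   # ~-1.9 D, i.e. nearsighted
```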

In April, EyeNetra launched Blink—an on-demand refractive test service in New York, where employees bring the start-up’s optometry tools, including the Netra device, to people’s homes and offices. In India, EyeNetra has launched Nayantara, a similar programme to provide low-cost eye tests to the poor and uninsured in remote villages, far from eye doctors.

One, of course, must mention the work being done by Adaptive Eyecare, which was founded by Oxford Physics professor Joshua Silver, who is now director of the non-profit Centre for Vision in the Developing World at the University of Oxford.

During a 2009 TED Talk, Silver demonstrated his low-cost glasses, which can be tuned by the wearer. His spectacles have “adaptive lenses”, which consist of two thin membranes separated by silicone gel. The wearer simply looks at an eye chart and pumps in more or less fluid to change the curvature of the lens, which adjusts the prescription.

A plant-based cancer drug

Elizabeth Sattely, an assistant professor of chemical engineering at Stanford, and her graduate student Warren Lau have isolated the machinery for making a widely-used cancer-fighting drug from an endangered plant, according to a 10 September press statement.

They then put that machinery into a common, easily grown laboratory plant, which was then able to produce the chemical. The drug Sattely chose to focus on is produced by a leafy Himalayan plant called the mayapple.

Within the plant, a series of proteins work in a step-by-step fashion to churn out a chemical defence against predators. That chemical defence, after a few modifications in the lab, becomes a widely-used cancer drug called etoposide. The starting material for this chemical defence is a harmless molecule commonly present in the leaf. When the plant senses an attack, it begins producing proteins that make up the assembly line.

One by one, those proteins add a little chemical something here, subtract something there, and after a final molecular nip and tuck, the harmless starting material is transformed into a chemical defence. The challenge was figuring out the right proteins for the job.

Sattely and her team tested various combinations of 31 proteins until they eventually found 10 that made up the full assembly line. They put genes that make those 10 proteins into a common laboratory plant, and that plant began producing the chemical they were seeking.

The eventual goal is not simply moving molecular machinery from plant to plant. Now that she’s proven the molecular machinery works outside the plant, Sattely wants to put the proteins in yeast, which can be grown in large vats in the lab to better provide a stable source of drugs.

The technique could potentially be applied to other plants and drugs, creating a less expensive and more stable source for those drugs, the researchers say. Sattely’s work was published on 10 September in the journal Science.

3D-printed heart model may do away with transplants

Work by a group at Carnegie Mellon could one day lead to a world in which transplants are no longer necessary to repair damaged organs.

“We have been able to take MRI (magnetic resonance imaging) images of coronary arteries and 3D images of embryonic hearts and 3D bioprint them with unprecedented resolution and quality out of very soft materials like collagens, alginates and fibrins,” said Adam Feinberg, associate professor of materials science and engineering and biomedical engineering at Carnegie Mellon University, in a press statement.

Feinberg leads the regenerative biomaterials and therapeutics group, and the group’s study was published in the 23 October issue of the journal Science Advances. Traditional 3D printers build hard objects typically made of plastic or metal, and they work by depositing material onto a surface layer by layer to create the 3D object.

Printing each layer requires sturdy support from the layers below, so printing with soft materials like gels has been limited. The challenge with soft materials is that they collapse under their own weight when 3D printed in air, explained Feinberg. So, the team developed a method of printing soft materials inside a support bath material.

“Essentially, we print one gel inside of another gel, which allows us to accurately position the soft material as it’s being printed, layer by layer,” he said.

One of the major advances of this technique, termed FRESH, or Freeform Reversible Embedding of Suspended Hydrogels, is that the support gel can be easily melted away and removed by heating to body temperature, which does not damage the delicate biological molecules or living cells that were bioprinted.

As a next step, the group is working towards incorporating real heart cells into these 3D printed tissue structures, providing a scaffold to help form contractile muscle. Bioprinting is a growing field but, to date, most 3D bioprinters cost over $100,000 and/or require specialized expertise to operate, limiting wider adoption.

Feinberg’s group, however, has been able to implement their technique on a range of consumer-level 3D printers, which cost less than $1,000, by utilizing open-source hardware and software. The 3D printer designs are being released under an open-source licence.

Can we regrow teeth?

Why can’t humans regrow teeth lost to injury or disease the way nature does? By studying how structures in embryonic fish differentiate into either teeth or taste buds, Georgia Tech researchers hope to one day be able to turn on the tooth regeneration mechanism in humans.

The research was conducted by scientists from the Georgia Institute of Technology in Atlanta and King’s College London, and published on 19 October in the journal Proceedings of the National Academy of Sciences.

The studies in fish and mice, according to the researchers, suggest the possibility that “with the right signals, epithelial tissue in humans might also be able to regenerate new teeth”.

“We have uncovered developmental plasticity between teeth and taste buds, and we are trying to understand the pathways that mediate the fate of cells towards either dental or sensory development,” said Todd Streelman, a biology professor at Georgia Tech.

But growing new teeth wouldn’t be enough, Streelman cautions. Researchers would also need to understand how nerves and blood vessels grow into teeth to make them viable.

“The exciting aspect of this research for understanding human tooth development and regeneration is being able to identify genes and genetic pathways that naturally direct continuous tooth and taste bud development in fish, and study these in mammals,” said professor Paul Sharpe, a co-author from King’s College.

New way to fix a broken heart?

Coronary artery disease is the leading cause of death worldwide, but there is currently no effective method to regenerate new coronary arteries in diseased or injured hearts. Stanford researchers, according to a 19 October study published in the journal eLife, have identified a progenitor cell type that could make it possible.

The study was carried out with mice but, as the blood vessels of the human heart are similar, it could lead to new treatments for the disease or to restore blood flow after a heart attack, the researchers say.

“Current methods to grow new blood vessels in the heart stimulate fine blood vessels rather than re-establishing the strong supply of blood provided by the main arteries. We need arteries to restore normal function,” said senior author Kristy Red-Horse from the department of biological sciences at Stanford.

The researchers reveal that the smooth muscle of the arteries is derived from cells called pericytes. The small capillary blood vessels throughout the developing heart are covered in pericytes. Pericytes are also found throughout the adult heart, which suggests that they could be used to trigger a self-repair mechanism.

A problem with cell or tissue transplantation can be that the cells don’t integrate, or that they differentiate into slightly different cell types than intended. As pericytes are spread all over the heart on all the small blood vessels, they could be targeted to stimulate artery formation without the need for transplantation, the researchers point out.

The team is now investigating whether pericytes differentiate into smooth muscle as part of this process and whether it can be activated or sped up by introducing Notch 3 (a protein) signalling molecules.

“Now that we are beginning to really understand coronary artery development, we have initiated studies to reactivate it in injury models and hope to some day use these same methods to help treat coronary artery disease,” said Red-Horse.

Measuring the ageing process

There are times when we intuitively feel that even people born within months of each other are ageing differently. Indeed they are, say the researchers of a long-term health study in New Zealand that sought clues to the ageing process in young adults.

In a paper appearing the week of 6 July in the Proceedings of the National Academy of Sciences, the team from the US, UK, Israel and New Zealand introduced a panel of 18 biological measures (known as biomarkers) that may be combined to determine whether people are ageing faster or slower than their peers.

The data came from the Dunedin Study, a landmark longitudinal study that has tracked more than a thousand people born in 1972-73 in the same town from birth. Health measures like blood pressure and liver function were taken regularly, along with interviews and other assessments.

According to first author Dan Belsky, an assistant professor of geriatrics at Duke University’s Centre for Ageing, the progress of ageing shows in human organs just as it does in eyes, joints and hair—but sooner.

Based on a subset of these biomarkers, the research team set a “biological age” for each participant, which ranged from under 30 to nearly 60 among the 38-year-olds. Most participants clustered around an ageing rate of one year per year, but others were found to be ageing as fast as three years per chronological year.

As the team expected, those who were biologically older at age 38 also appeared to have been ageing at a faster pace. A biological age of 40, for example, meant that the person was ageing at a rate of 1.2 years per year over the 12 years the study examined.
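
The arithmetic behind that pace figure is straightforward. A quick worked check, under the assumption (implied by the study design) that biological and chronological age coincided at the start of follow-up, age 26:

```python
# Pace of ageing = change in biological age per chronological year.
chrono_start, chrono_end = 26, 38     # the 12 years the study examined
bio_age_at_38 = 40.4                  # illustrative "biological age"

pace = (bio_age_at_38 - chrono_start) / (chrono_end - chrono_start)
print(f"Ageing pace: {pace:.1f} biological years per year")   # ~1.2
```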

The ageing process, according to the researchers, isn’t all genetic. Studies of twins have found that only about 20% of ageing can be attributed to genes, Belsky said. “There’s a great deal of environmental influence,” he added.

This gives “us some hope that medicine might be able to slow ageing and give people more healthy active years”, said senior author Terrie Moffitt, the Nannerl O Keohane professor of psychology and neuroscience at Duke.

Sunblock that stays on the outside

Researchers at Yale University have developed a sunscreen that doesn’t penetrate the skin, eliminating serious health concerns associated with commercial sunscreens, according to a 28 September news release.

Most commercial sunblocks may prevent sunburn, but they can go below the skin’s surface and enter the bloodstream; they can trigger hormonal side-effects and could even promote the kind of skin cancers they are designed to prevent.

The new sunblock made by the Yale researchers uses bio-adhesive nanoparticles that stay on the surface of the skin. The results of the research appeared in the 28 September online edition of the journal Nature Materials. Using mouse models, the researchers tested their sunblock’s ability to block direct ultraviolet rays and the sunburn they cause.

Altering brain chemistry to raise the pain threshold

Scientists at the University of Manchester have shown for the first time that the number of opiate receptors in the brain increases to combat severe pain in people suffering from arthritis.

Receptors in our brains respond to natural painkilling opiates such as endorphins, but the researchers in Manchester have now shown that these receptors increase in number to help cope with long-term, severe pain. By applying heat to the skin using a laser stimulator, Dr Christopher Brown and his colleagues showed that the more opiate receptors there are in the brain, the higher the ability to withstand the pain, according to a 23 October study.

The researchers used positron emission tomography (PET) imaging on 17 patients with arthritis and nine healthy controls to show the spread of the opioid receptors.

Val Derbyshire, a patient with arthritis, said in a press statement: “As a patient who suffers chronic pain from osteoarthritis, I am extremely interested in this research. I feel I have developed coping mechanisms to deal with my pain over the years, yet still have to take opioid medication to relieve my symptoms.”

A cranial fingerprint?

A person’s brain activity appears to be as unique as his or her fingerprints, a new Yale University-led imaging study shows. These brain “connectivity profiles” alone allowed researchers to identify individuals from the functional magnetic resonance imaging (fMRI) images of the brain activity of more than 100 people, according to the study published on 12 October in the journal Nature Neuroscience.

The researchers compiled fMRI data from 126 subjects who underwent six scan sessions over two days. Subjects performed different cognitive tasks during four of the sessions. In the other two, they simply rested. Researchers looked at activity in 268 brain regions—specifically, coordinated activity between pairs of regions.

Highly coordinated activity implies two regions are functionally connected. Using the strength of these connections across the whole brain, the researchers were able to identify individuals from fMRI data alone, whether the subject was at rest or engaged in a task. They were also able to predict how subjects would perform on tasks.
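
Conceptually, the identification step is a nearest-neighbour match between connectivity matrices. Here is a minimal sketch of that idea on synthetic data (an illustration, not the Yale group’s pipeline):

```python
import numpy as np

rng = np.random.default_rng(1)
n_subjects, n_regions = 5, 268   # the study used 268 brain regions

# Each subject's "true" profile, plus two noisy scan sessions of it.
base = [rng.normal(size=(n_regions, n_regions)) for _ in range(n_subjects)]
day1 = [b + 0.3 * rng.normal(size=b.shape) for b in base]
day2 = [b + 0.3 * rng.normal(size=b.shape) for b in base]

def identify(profile, database):
    """Return the index of the most strongly correlated stored profile."""
    flat = profile.ravel()
    scores = [np.corrcoef(flat, d.ravel())[0, 1] for d in database]
    return int(np.argmax(scores))

hits = sum(identify(day2[i], day1) == i for i in range(n_subjects))
print(f"Identified {hits}/{n_subjects} subjects correctly")
```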

The researchers hope that this ability might one day help clinicians predict or even treat neuro-psychiatric diseases based on individual brain connectivity profiles. Data for the study came from the Human Connectome Project led by the WU-Minn Consortium, which is funded by the 16 National Institutes of Health (NIH) Institutes and Centers that support the NIH Blueprint for Neuroscience Research and by the McDonnell Center for Systems Neuroscience at Washington University.

Robotics

Robots that crowdsource learning

In July, scientists from Cornell University led by Ashutosh Saxena announced the development of a Robo Brain—a large computational system that learns from publicly available Internet resources.

The system, according to a 25 August statement by Cornell, was downloading and processing about 1 billion images, 120,000 YouTube videos and 100 million how-to documents and appliance manuals. Information from the system, which Saxena had described at the 2014 Robotics: Science and Systems Conference in Berkeley, is being translated and stored in a robot-friendly format that robots will be able to draw on when needed.

The India-born Indian Institute of Technology Kanpur graduate also launched a website for the project at robobrain.me, which displays things the brain has learnt; visitors can make additions and corrections. Robo Brain employs what computer scientists call structured deep learning, where information is stored in many levels of abstraction. (Deep learning is a set of algorithms, or instruction steps for calculations, in machine learning.)

There have been similar attempts to make computers understand context and learn from the Internet. For instance, since January 2010, scientists at Carnegie Mellon University have been working to build a never-ending machine-learning system that acquires the ability to extract structured information from unstructured web pages.

If successful, the scientists say it will result in a knowledge base (or relational database) of structured information that mirrors the content of the Web. They call this system the never-ending language learner, or NELL.
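
The kind of knowledge base NELL aims at can be pictured as a store of (entity, relation, value) beliefs extracted from web text. A toy sketch with made-up triples, purely illustrative:

```python
# Made-up beliefs in the (entity, relation, value) form NELL accumulates.
beliefs = [
    ("mug", "can_be_found_in", "kitchen"),
    ("mug", "used_for", "holding_coffee"),
    ("kitchen", "is_a", "room"),
]

def query(relation: str):
    """Relational lookup over the accumulated beliefs."""
    return [(s, o) for s, r, o in beliefs if r == relation]

print(query("can_be_found_in"))   # [('mug', 'kitchen')]
```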

We also have IBM’s Watson, which beat Jeopardy! champions in 2011 and has now joined hands with the United Services Automobile Association (USAA) to help members of the military prepare for civilian life. In January 2014, IBM said it would spend $1 billion to launch the Watson Group, including a $100 million venture fund to support start-ups and businesses that are building Watson-powered apps using the “Watson Developers Cloud”.

Can robots, or cobots, make good teammates?

At the Yale Social Robotics Lab, run by professor of computer science Brian Scassellati, robots are learning the skills needed to be good teammates, allowing people to work more safely, more efficiently and more effectively. The skills include stabilizing parts, handing over items, organizing a workspace, or helping people use a tool better, according to a 21 September press release.

Sharing a workspace with most robots can be dangerous, the researchers point out.

“It’s only now that this is becoming feasible, to develop robots that could safely operate near and around people,” said Brad Hayes, the PhD candidate who headed the project. “We are trying to move robots away from being machines in isolation, developing them to be co-workers that amplify the strengths and abilities of each member of the team they are on.”

One way of building team skills is to have the robot figure out how to help during a task by simulating hundreds of thousands of different possibilities and then guessing whether each would be helpful. Given that this tends to take a very long time, the researchers suggest another approach: showing the robot directly how to help.

“Here you are naturally demonstrating to the robot and having it retain that knowledge,” Hayes explained. “It can then save that example and figure out if it’s a good idea to generalize that skill to use in new situations.”

Hayes thinks the technology has value for both the workplace and the home, particularly for small-scale, flexible manufacturing or for people who have lost some of their autonomy and could use help with the dishes or other chores.

Collaborative robots, also known as cobots, are the new buzz in robotics. Complementary to industrial robots, they work safely alongside humans in an uncaged environment, opening up new opportunities for industry. They have to be safe, easy to use, flexible and affordable. Examples include the Baxter robot by Rethink Robotics, the UR5 arm by Universal Robots, and Robonaut 2 by Nasa and General Motors.

And, of course, there is the YuMi robot, whose name is short for ‘you and me’. It was unveiled by industrial robotics company ABB at the Hannover Messe on 13 April. ABB touts it as the world’s first truly collaborative robot, able to work side by side with humans on the same tasks while still ensuring the safety of those around it.

The company says YuMi is “capable of handling anything from a watch to a tablet PC and with the level of accuracy that could thread a needle”.


The Universe

Earth to Mars via a petrol station on the moon

Living on Mars will certainly not be an easy task for human beings. They will have to tackle major issues such as higher radiation, the lack of anything resembling an atmosphere, a weaker gravitational pull that can affect our skeletal structure, likely infection from unknown microbes, lack of food, and the effect of loneliness on the mind. But first, they have to reach Mars, a journey that will take about 180 days. This means they will need enough fuel, or will have to refuel somewhere.

Studies have suggested that lunar soil and water ice in certain craters of the moon may be mined and converted to fuel. Assuming that such technologies are established at the time of a mission to Mars, a 14 October MIT study has found that taking a detour to the moon to refuel would reduce the mass of a mission upon launch by 68%.

The researchers developed a model to determine the best route to Mars, assuming the availability of resources and fuel-generating infrastructure on the moon. Based on their calculations, they found the most mass-efficient path involves launching a crew from Earth with just enough fuel to get into orbit around the Earth.

A fuel-producing plant on the surface of the moon would then launch tankers of fuel into space, where they would wait in orbit. The tankers would eventually be picked up by the Mars-bound crew, which would then head to a nearby fuelling station to gas up before ultimately heading to Mars.

Olivier de Weck, a professor of aeronautics and astronautics and of engineering systems at MIT, says the plan deviates from Nasa’s more direct “carry-along” route.

The results, based on the PhD thesis of Takuto Ishimatsu, now a post-doctoral researcher at MIT, are published in the Journal of Spacecraft and Rockets. Ishimatsu’s network flow model explores various routes to Mars—ranging from a direct carry-along flight to a series of refuelling pit stops along the way—and assumes a future scenario in which fuel can be processed on, and transported from, the moon to rendezvous points in space.
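
The intuition for why a fuel stop pays off comes from the ideal rocket equation: the propellant a single tank must carry grows exponentially with the total delta-v it has to supply. The sketch below uses assumed delta-v and engine figures, not the study’s model, so the saving it prints is illustrative rather than the paper’s 68%.

```python
import math

ve = 450 * 9.81                  # assumed exhaust velocity (hydrolox), m/s
payload = 10.0                   # tonnes pushed toward Mars
dv_leo, dv_tmi = 9000.0, 4000.0  # assumed delta-v: to orbit, then to Mars

# Tsiolkovsky: initial mass = final mass * exp(delta_v / ve)
carry_along = payload * math.exp((dv_leo + dv_tmi) / ve)
refuelled = payload * math.exp(dv_leo / ve)   # Mars leg burns lunar fuel

print(f"Carry-along launch mass: {carry_along:.0f} t")
print(f"With in-space refuelling: {refuelled:.0f} t "
      f"({(1 - refuelled / carry_along) * 100:.0f}% lighter)")
```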

When did life on Earth begin?

Geochemists at the University of California, Los Angeles (UCLA) have found evidence that life likely existed on Earth at least 4.1 billion years ago—300 million years earlier than previous research suggested, according to a 19 October press release.

The discovery indicates that life may have begun shortly after the planet formed 4.54 billion years ago. The research was published in the online edition of the journal Proceedings of the National Academy of Sciences.


“Twenty years ago, this would have been heretical; finding evidence of life 3.8 billion years ago was shocking,” said Mark Harrison, co-author of the research and a professor of geochemistry at UCLA. The new research suggests that life existed prior to the massive bombardment of the inner solar system that formed the moon’s large craters 3.9 billion years ago.

Scientists had long believed the Earth was dry and desolate during that time period. Harrison’s research—including a 2008 study in Nature he co-authored with Craig Manning, a professor of geology and geochemistry at UCLA, and former UCLA graduate student Michelle Hopkins—is proving otherwise.

The researchers, led by Elizabeth Bell—a post-doctoral scholar in Harrison’s laboratory—studied more than 10,000 zircons originally formed from molten rocks, or magmas, from Western Australia.

Zircons are heavy, durable minerals related to the synthetic cubic zirconia used for imitation diamonds. They capture and preserve their immediate environment, meaning they can serve as time capsules.

The scientists identified 656 zircons containing dark specks that could be revealing and closely analysed 79 of them with Raman spectroscopy—a technique that shows the molecular and chemical structure of ancient microorganisms in three dimensions.

One of the 79 zircons contained graphite (pure carbon) in two locations. The graphite is older than the zircon containing it, the researchers said. They know the zircon is 4.1 billion years old, based on its ratio of uranium to lead but they don’t know how much older the graphite is.
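
The uranium-to-lead clock itself is textbook radiometric dating: each uranium-238 atom that decays ultimately becomes lead-206, so the ratio of lead to remaining uranium fixes the age. A short worked example, with an illustrative ratio chosen to land near the reported figure:

```python
import math

half_life_u238 = 4.468e9             # years
lam = math.log(2) / half_life_u238   # decay constant, per year

def age_from_ratio(pb206_per_u238: float) -> float:
    """Solve Pb/U = exp(lambda * t) - 1 for the age t."""
    return math.log(1 + pb206_per_u238) / lam

print(f"{age_from_ratio(0.89) / 1e9:.2f} billion years")   # ~4.10
```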

Water on Mars!

Researchers have discovered an enormous slab of ice just beneath the surface of Mars, measuring 130 feet thick and covering an area equivalent to that of California and Texas combined. The ice may be the result of snowfall tens of millions of years ago on Mars, scientists said in a 15 September press statement. The research was published in the journal Geophysical Research Letters.

Combining data gleaned from two powerful instruments aboard Nasa’s Mars Reconnaissance Orbiter, or MRO, researchers determined why a “crazy-looking crater” on Mars’ surface is terraced, and not bowl-shaped like most craters of this size.

Although scientists have known for some time about Mars’s icy deposits at its poles and have used them to look at its climatic history, knowledge of icy layers at the planet’s mid-latitudes, analogous to earthly latitudes falling between the Canadian-US border and Kansas, is something new.

On 16 October, Nasa confirmed that new findings from MRO had provided “the strongest evidence yet that liquid water flows intermittently on present-day Mars”. The findings came five months after scientists found the first evidence for liquid water on the red planet.

The findings are in sync with Nasa’s ambitious project to send humans to Mars in the 2030s, in accordance with the Nasa Authorization Act, 2010, and the US National Space Policy. The discovery of liquid water, therefore, could be a big boost for astronauts visiting the planet.

Based on further research and findings, humans could well drink water on Mars, use it to create oxygen and rocket fuel, or water plants in greenhouses with it.

Most Earth-like worlds yet to be born. Boo

When our solar system was born 4.6 billion years ago, only 8% of the potentially habitable planets that will ever form in the universe existed. The bulk of those planets—92%—are yet to be born, and planet formation will continue long after the sun burns out 6 billion years hence. This conclusion is based on an assessment of data collected by Nasa’s Hubble Space Telescope and the prolific planet-hunting Kepler space observatory.

“Our main motivation was understanding the Earth’s place in the context of the rest of the universe,” said the 20 October study’s author Peter Behroozi of the Space Telescope Science Institute (STScI) in Baltimore, Maryland.

The data show that the universe was making stars at a fast rate 10 billion years ago, but the fraction of the universe’s hydrogen and helium gas that was involved was very low. Today, star birth is happening at a much slower rate than long ago, but there is so much leftover gas available that the universe will keep cooking up stars and planets for a very long time to come.

Kepler’s planet survey indicates that Earth-sized planets in a star’s habitable zone, the perfect distance that could allow water to pool on the surface, are ubiquitous in our galaxy.

Based on the survey, scientists predict that there should be 1 billion Earth-sized worlds in the Milky Way galaxy at present, a good portion of them presumed to be rocky. That estimate skyrockets when you include the other 100 billion galaxies in the observable universe.

This leaves plenty of opportunity for untold more Earth-sized planets in the habitable zone to arise in the future. The last star isn’t expected to burn out until 100 trillion years from now. That’s plenty of time for literally anything to happen on the planet landscape, the researchers say.

Ocean rise: How much? How soon? Oh no

Seas around the world have risen an average of nearly 3 inches since 1992, with some locations rising more than 9 inches due to natural variation, according to the latest satellite measurements from Nasa and its partners.

In 2013, the UN Intergovernmental Panel on Climate Change issued an assessment based on a consensus of international researchers that stated global sea levels would likely rise from 1 to 3 feet by the end of the century.

The new data, according to a 26 August Nasa press statement, reveal that the height of the sea surface is not rising uniformly everywhere. Regional differences in sea level rise are dominated by the effects of ocean currents and natural cycles such as the Pacific Decadal Oscillation.

But, as these natural cycles wax and wane, they can have major impacts on local coastlines. Scientists estimate that about one-third of sea level rise is caused by the expansion of warmer ocean water, one-third is due to ice loss from the massive Greenland and Antarctic ice sheets, and the remaining third results from melting mountain glaciers.

However, the fate of the polar ice sheets could change that ratio and produce more rapid increases in the coming decades.

Technology and Society

Plastic-eating worms that can annihilate waste

The world over, people throw away billions of plastic cups, and only a very small percentage of them gets recycled. How does one tackle this plastic menace? With the help of a mealworm, say Stanford researchers.

A mealworm, the larvae form of the darkling beetle, can subsist on a diet of styrofoam and other forms of polystyrene, according to two companion studies co-authored by Wei-Min Wu, a senior research engineer in the department of civil and environmental engineering at Stanford. It’s the microorganisms in the mealworm’s gut that biodegrade the plastic in the process.

The papers, published in Environmental Science and Technology in September, are the first to provide detailed evidence of bacterial degradation of plastic in an animal’s gut.

“There’s a possibility of really important research coming out of bizarre places,” said Craig Criddle, a professor of civil and environmental engineering who supervises plastics research by Wu and others at Stanford. “Sometimes, science surprises us. This is a shock.”

The new research on mealworms is significant because styrofoam was thought to have been non-biodegradable and more problematic for the environment. Researchers led by Criddle, a senior fellow at the Stanford Woods Institute for the Environment, are collaborating on ongoing studies with the project leader and papers’ lead author, Jun Yang of Beihang University in China, and other Chinese researchers.

Together, they plan to study whether microorganisms within mealworms and other insects can biodegrade plastics such as polypropylene (used in products ranging from textiles to automotive components), microbeads (tiny bits used as exfoliants) and bioplastics (derived from renewable biomass sources such as corn or biogas methane).

The researchers plan to explore the fate of these materials when consumed by small animals, which are, in turn, consumed by other animals. Another area of research could involve searching for a marine equivalent of the mealworm to digest plastics, Criddle said. Plastic waste is a particular concern in the ocean, where it fouls habitats and kills countless seabirds, fish, turtles and other marine life.

Fresh milk, off the grid

How does one preserve milk? Most of us do it by refrigeration and boiling, but what does one do if electricity is only sporadically available? The answers may lie in a 19 May study by Tel Aviv University (TAU) researchers.

Published in the journal Technology, the study finds that short-pulsed electric fields can be used to kill milk-contaminating bacteria. Through a process called electroporation, bacterial cell membranes are selectively damaged.

According to lead investigator Alexander Golberg of TAU’s Porter School of Environmental Studies, applying this process intermittently prevents bacteria proliferation in stored milk, potentially increasing its shelf life.

According to the study, pulsed electric fields, an emerging technology in the food industry that has been shown to effectively kill multiple food-borne microorganisms, could provide an alternative, non-thermal pasteurization process.

The stored milk is periodically exposed to high-voltage, short pulsed electric fields that kill the bacteria. The energy required can come from conventional sources or from the sun. The technology is three times more energy-efficient than boiling and almost twice as energy efficient as refrigeration, the researchers say.
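
Taking those reported ratios at face value, the comparison works out as below; the baseline boiling figure is an assumed placeholder, since the article quotes only the ratios.

```python
# Energy bookkeeping from the reported ratios (baseline is assumed).
boiling_kj = 300.0        # assumed daily energy for boiling
pef_kj = boiling_kj / 3   # "three times more energy-efficient"
fridge_kj = pef_kj * 2    # PEF is "almost twice" as efficient as a fridge

for method, kj in [("boiling", boiling_kj),
                   ("refrigeration", fridge_kj),
                   ("pulsed electric fields", pef_kj)]:
    print(f"{method:>22}: {kj:.0f} kJ/day")
```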

Crowdsourcing interactive story plots with AI

Researchers at the Georgia Institute of Technology have developed an artificial intelligence (AI) system that crowdsources plots for interactive stories. While current AI models for games have a limited number of scenarios and depend on a data set already programmed into a model by experts, Georgia Tech’s AI system generates numerous scenes for players to adopt.

“Our open interactive narrative system learns genre models from crowdsourced example stories so that the player can perform different actions and still receive a coherent story experience,” Mark Riedl, lead investigator and associate professor of interactive computing at Georgia Tech, said in a 19 September news release.

A test of the AI system, called Scheherazade IF (Interactive Fiction)—a reference to the fabled Persian queen and storyteller—showed that it can achieve near human-level authoring.

“When enough data is available and that data sufficiently covers all aspects of the game experience, the system was able to meet or come close to meeting human performance in creating a playable story,” Riedl added.

The researchers evaluated the AI system by measuring the number of “common sense” errors (e.g. scenes out of sequence) found by players, as well as players’ subjective experiences for things such as enjoyment and coherence of story. The creators say that they are seeking to inject more creative scenarios into the system.
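
The underlying representation can be pictured as a plot graph: story events plus ordering constraints generalized from the crowdsourced examples. The toy sketch below illustrates that idea (it is not the Georgia Tech system, and the bank-robbery domain is a stand-in genre), with a coherence check that mirrors the scenes-out-of-sequence errors the evaluation counted.

```python
import random

events = ["enter bank", "wait in line", "hand teller a note",
          "collect money", "exit bank"]
# Ordering constraints a plot graph would generalize from examples.
must_precede = [("enter bank", "wait in line"),
                ("wait in line", "hand teller a note"),
                ("hand teller a note", "collect money"),
                ("collect money", "exit bank")]

def coherent(story):
    """A story is coherent if no scene appears out of sequence."""
    return all(story.index(a) < story.index(b) for a, b in must_precede)

story = events[:]
random.shuffle(story)
verdict = "coherent" if coherent(story) else "scenes out of sequence"
print(story, "->", verdict)
```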

Right now, the AI plays it safe with the crowdsourced content, producing what one might expect in different genres. But opportunities exist to train Scheherazade (just as its namesake implies) to surprise and immerse players in future interactive experiences.

Beyond online storytelling for entertainment, the research could also support digital storytelling used in online course education or corporate training. The research paper, Crowdsourcing Open Interactive Narrative (co-authored by Matthew Guzdial, Brent Harrison, Boyang Li and Mark Riedl), was presented at the 2015 Foundations of Digital Games Conference in Pacific Grove, California.

Teaching computers to ‘see’ what humans do

Researchers from Georgia Tech’s school of interactive computing and institute for robotics and intelligent machines have developed a new method that teaches computers to “see” and understand what humans do in a typical day, according to a 28 September news release.

The researchers gathered more than 40,000 pictures taken every 30 to 60 seconds, over a six-month period, by a wearable camera and predicted with 83% accuracy what activity that person was doing. The idea, according to the researchers, is to give users the ability to be able to track all of their activities—not just physical ones like walking and running, which most wearables like Fitbit do.
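
A minimal sketch of the kind of pipeline such a result implies (my illustration on synthetic data, not the Georgia Tech method): featurize each egocentric photo along with its timestamp, then train a classifier to label the wearer’s activity. Treating the hour of day as a useful cue is an assumption here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([rng.normal(size=(n, 16)),      # stand-in image features
                     rng.uniform(0, 24, size=n)])   # hour-of-day feature
y = (X[:, -1] // 6).astype(int)                     # toy activity labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_tr, y_tr)
print(f"Held-out accuracy: {clf.score(X_te, y_te):.2f}")
```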

The ability to literally see and recognize human activities has implications in a number of areas—from developing improved personal assistant applications like Siri to helping researchers explain links between health and behaviour, the researchers say.
