22 December 2015

Bankruptcy code will change how business is done: TK Viswanathan

If you want a good market to attract venture capitalists, you need a good legal framework, says the author of the bankruptcy code

The Insolvency and Bankruptcy Code, 2015, was tabled in the Lok Sabha on Monday. It will make insolvency a time-bound process, make it easier for businesses to find non-bank funding, and create a data bank of serial defaulters.

T.K. Viswanathan, 67, is a former law secretary who has also held the post of secretary general of the 15th Lok Sabha, and his legal skills have often been tapped by the government. Besides the committee tasked with formulating India’s bankruptcy code, Viswanathan heads a panel on ways to fight cyber crime.
In an interview, T.K. Viswanathan, author of India’s bankruptcy code, spoke about the need for its quick implementation and how it will make life easier for businesses in India.
Edited excerpts:
The key motivating factor behind this bankruptcy code is to create an insolvency framework. How will this change the way business is done in India?
We are excessively depending on the banks for finance. If somebody has to raise loans, he has to approach the banks. But banks are not in a position to cater to the huge demands of the masses. In the US, people raise money from the bond market. That is because the unsecured creditors also have a vital say in the winding-up of a company. They are assured that their investment will be safe and they will be able to take back what they have invested. So, the system of winding up everything is very robust.
Unfortunately, the two Acts we have—the Presidency Towns Insolvency Act and the Provincial Insolvency Act, enacted in the last century—were never operationalized. We have tried to deal with bank defaults in different ways. We had SICA (Sick Industrial Companies Act) and BIFR (Board for Industrial and Financial Reconstruction), which was abused by debtors, who used it only to obtain an automatic stay. It applied only to industrial undertakings.
If you want to have a good market where venture capitalists will come and fund the ventures, you need a good legal framework. We have tried to address the problem of bank loan defaults through Sarfaesi Act (The Securitisation and Reconstruction of Financial Assets and Enforcement of Security Interest Act, 2002). It was in response to the abuse of SICA as secured creditors felt unsafe.
We have to look holistically at supporting those who want to go to the market and raise funds. We wanted to put a legal framework in place.
Coming to insolvency of corporations, we have the Companies Act dealing with winding up. There is an official liquidator’s office working under the supervision of the court. The system takes many years. If the company has to be liquidated, ideally the whole process should not take a long time and the proceeds should be distributed to the creditors. That doesn’t happen. It takes 10-15 years and siphoning of assets also takes place. And right now, the test for finding out if a company is sick or not is the erosion of 51% of net worth. Detection at 51% is too late, as you will not be able to revive it.
So, we would like to have a system where, at a very early stage, if the company is not able to pay its debts, then action can be taken so that it can be revived. So that the firm is not wound up, employees are not retrenched and there is no lockout. At the earliest indication of default, a debtor can trigger the proceedings.
So, at one level, you are doing a fundamental mindset reset of this country.
It is very much required. It will change the way business is done in India to a great extent. More money will come into the market.
But you have to be prudent in managing your financial affairs. If a default triggers, then you will not have excuses like before. In the initial stages, there may be a few hiccups, but it will be beneficial. It will be useful for the start-up economy.
If this goes through, will the flow of unsecured credit to small and medium enterprises, who do not have any collateral, increase?
More money will come in for them because they will get an assurance that they have a say. We are moving away from institutional finance to market. It will be transactional, not personal. This is a reform which is long overdue. It will take away most of the problems that banks are facing today.
You have proposed a new class of insolvency professionals to assist companies. Will they have to get registered with any body?
The insolvency professionals will be drawn from different fields like investment bankers, lawyers, cost accountants, chartered accountants and engineers. They will take over the management of the company and restore its health. They will be given 180 days during which they have to decide if the company can be revived or if there is no hope. If it can be revived, then a resolution has to be planned. The management committee will come under suspension and the creditors’ committee will take over.
We are providing a regulator to regulate these professionals. We are also providing a transitional provision until they are formally recognized. The central government will recognize who will be identified as insolvency professionals.
Why will people want to become insolvency professionals?
First charge on the assets of the company will be the fees due to an insolvency professional. They will be well paid. Also, they will be interested because of the professional challenge that this entails.
Many bureaucrats are heading sick PSUs (public sector units). They may be best placed to become insolvency professionals. An insolvency professional needs to have credibility; people have to trust him so that he can raise funds.
You have proposed a timeline of 180 days for deciding on an insolvency application. Is it too short?
Six months is a long time. It is enough to know if the company can become viable. You are detecting the sickness at a very early stage. So, it is not very difficult to determine if it is worthwhile to put in more money.
There is scope for a 90-day extension. If you still can’t resolve it, then you have to just liquidate the company. The creditors’ committee will take a decision on dissolving it and the tribunal will stamp it.
In other countries, the courts don’t take too much time. In India, the number of litigants is enormous. We don’t have a proper way of disposing of cases.
So, every decision has to go to a tribunal?
Every decision will be a decision dictated by market forces. It will not be a judicial decision where there are many adjournments. It has to be a time-bound process.
Once you decide to go ahead with liquidation, it will happen quickly. Because it will be done by the insolvency professional who will act as a liquidator.
Dissipation of assets takes place at that time. Management has to change because there is a lot of siphoning of money that takes place.
How do you prevent someone from approaching the court and seeking a stay?
It is not possible under the system we have devised. The courts will not interfere. The matter will go to a tribunal, the National Company Law Tribunal, and the appeal will go to the National Company Law Appellate Tribunal.
How will the information utilities work?
We need to have information utilities where the credit rating is easily available. We do not have a unified system where all details are in one place, like CIBIL (Credit Information Bureau India Ltd). Companies are already giving a lot of information to the MCA (ministry of corporate affairs). All this information will be available and easily accessible.
Will you be using existing utilities?
Initially, we will use existing utilities. We want to encourage more private utilities to come up. The regulator will supervise.
You spoke of a fresh start for individuals. Will banks write off these debts?
The threshold is very low at Rs.60,000. The legal cost of recovering such an amount is not commensurate with the recovery, so it is better to write it off. It will be a part of the credit history, and it will be taken into account that you have defaulted. You can’t have serial defaulters; there are fly-by-night operators.
How will individual insolvency be handled?
At present, insolvency is a matter for the district courts. They are all handling many cases and do not have any specialized knowledge. It will now go to the debt recovery tribunals. When this starts, many cases will come.
As an individual, how do you prevent misuse of these laws against you by someone for personal reasons?
There are penalties in the bill if someone targets an individual maliciously and for ulterior purposes.
If the bankruptcy code becomes a law, do you think there will be a sea change in the legal infrastructure of the country?
Bankruptcy code is a beginning. There are many more steps that can be taken. There is a plethora of tribunals. They have to be streamlined. The whole process has to be relooked at. Services of the tribunals have to be outsourced. That is a large part of the judicial reform.
Then the other is judicial impact assessment. When a bill is introduced in Parliament, it should have a judicial impact assessment accompanying it. Every legislation adds to the dockets of the courts. More and more cases are going to arise. But the government is not providing any budgetary provision in the bill to fund the expenditure and the burden on the judicial system.

Tread carefully on minimum wage reform

Setting a wage floor fails without other reforms and adequate enforcement 

Milton Friedman famously called minimum wage laws a form of discrimination against low-skilled workers. Serious doubts have been cast on that tenet of neoclassical economics over the past few decades, so much so that the economic and ethical benefits of such legislation can fairly be held to outweigh its downsides.
With the labour ministry having moved a cabinet note seeking to merge four wage-related laws and set a mandatory national minimum wage—part of a thrust to refurbish and consolidate 44 labour laws into four or five broad labour codes—the National Democratic Alliance government is looking to reap these benefits.
But as with all such propositions, it should bear in mind that conditions apply.
The traditional case against wage floors is succinct: it is distortionary pricing that reduces demand for the workers it covers, since employers operate with the rationale of economic efficiency.
David Card and Alan Krueger of America’s National Bureau of Economic Research presented research in 1993 that suggested otherwise and caused widespread rethinking of this erstwhile truism.
A substantial volume of new research followed, showing that modest minimum wage hikes caused no discernible change in employment levels.
Pushback from economists such as David Neumark and William Wascher notwithstanding, the new position has proved resilient.
Yet, the Indian instance is a prime example of why this may not always hold true. With the informal economy accounting for 90% of the workforce and 50% of the national product, employers hold disproportionate bargaining power.
The resulting wage levels leave enough wiggle room for them to absorb the costs of corrective legislation.
Any minimum wage legislation also runs the risk of being mere tokenism. In the decades since the Minimum Wages Act of 1948, the government has shown a distinct lack of ability to enforce it across the vast informal sector.
This inability to enforce existing social security provisions, inadequate as they are, across swathes of the formally employed labour force is a corollary of this. Despite the establishment of a framework for the formulation of social security schemes via the Unorganised Workers’ Social Security Act of 2008, there has been little progress.
Finally, most empirical studies concerning the impact of minimum wages on poverty in developing countries show different effects at various points of the wage distribution curve.
While aggregate poverty levels may decrease, there is a possibility that some low-income households will be pushed into poverty. In the absence of a safety net, they run the risk of being permanently relegated to an economic underclass.
If the government wants to still go ahead, it should set an optimal minimum wage rate—a modest hike followed by periodic inflation-linked revisions—in relation to the median national income and update anachronistic labour laws to make it easier for employers to adjust their workforce in accordance with market conditions.
Yet, there is little transparency so far on how the government means to arrive at the minimum wage.
Weak demand and uncertain macroeconomic conditions coupled with the need to factor in substantial economic diversity across states mean it will be a delicate balance to achieve.
As for updating labour laws, it has a long tradition of being politically verboten.
Come the budget session in February, the government will push for easing some restrictions here as part of the proposed industrial relations code.
Given its current lack of political capital, the opposition’s obstructionism and the emotive nature of the issue, the prognosis doesn’t seem positive.
Minimum wage reform is simply one measure in an interlocking system necessary for poverty alleviation.
When implemented in isolation, or when guided by political considerations instead of economic, it runs the risk of being ineffective at best and counterproductive at worst.
The government must bear this in mind in the coming year where it hopes to double down on economic reforms.
Is the proposed minimum wage reform likely to be successful?

Isro’s small steps towards developing its own reusable rocket

Next year, Isro will begin with the launch of the fourth satellite in IRNSS in January, which will be followed by a tech demo of the reusable launch vehicle
On Tuesday, Elon Musk’s aerospace company SpaceX sent a Falcon rocket toward orbit with 11 small satellites, and then landed the 15-story leftover booster back on Earth safely.
It marks a landmark in the world’s attempts to develop reusable launch vehicle technologies that will bring down the costs of space missions drastically.
“If one can figure out how to effectively reuse rockets just like airplanes, the cost of access to space will be reduced by as much as a factor of a hundred. A fully reusable vehicle has never been done before. That really is the fundamental breakthrough needed to revolutionize access to space,” Musk had said earlier this year.
India, too, is taking slow steps towards developing its own reusable rocket, using a Winged Reusable Launch Vehicle Technology Demonstrator (RLV-TD) to test out technologies including hypersonic flight, autonomous landing, powered cruise flight and air-breathing hypersonic propulsion.
Next year, Indian Space Research Organisation (Isro) will begin with the launch of the fourth satellite in Indian Regional Navigational Satellite System (IRNSS) in January, which will be followed by the tech demo of the reusable launch vehicle.
“The technology demonstration programme will take place probably around the first week of February, when a scaled model of the reusable vehicle will be flown to space and brought down to the Bay of Bengal,” said an Isro spokesperson.
The demo was earlier scheduled for March when Isro would have tested if the 12-tonne vehicle can reach five times the speed of sound, whether it can re-enter the atmosphere and land on the sea using its computer system. But the demonstration was postponed and is now scheduled to take place next year. The take-off in this demonstration will be vertical like a rocket, and landing will be like that of an aircraft.
This year, the space agency completed studies related to the rocket by carrying out various simulations. It also validated the onboard software and conducted a successful test of the solid booster motor (HS9) with Secondary Injection Thrust Vector Control system.
“A reusable vehicle is very important. There are two ways to look at it: one is the time factor and one is the cost factor. With a reusable vehicle, two launches can be carried out in quick succession. This would save huge costs and will reduce the time to prepare the launch site,” said Ajey Lele, fellow at the New Delhi-based Institute for Defence Studies and Analyses.

21 December 2015

Don’t give up on fiscal consolidation

Deviation from the consolidation path will affect government’s credibility
It is important for policymakers to not shift focus from long-term objectives for short-term gains. The mid-year economic review, released by the finance ministry last week, argued in favour of relaxing the fiscal deficit target in order to address demand issues in the economy. The government would do well to avoid any such temptation.
While the government is confident of meeting the fiscal deficit target of 3.9% of the gross domestic product (GDP) in the current year, bringing it down to 3.5% of the GDP in the next fiscal year will be challenging.
As has been highlighted in this space before, there are a number of reasons why the path of fiscal consolidation will get increasingly difficult from here on.
For example, in the current year, the government benefited a great deal from lower oil prices, both in terms of containing the subsidy outflow and mopping up revenue by increasing taxes. It is highly unlikely that such gains will be available even next year. Further, the implementation of the Seventh Pay Commission recommendations is likely to increase expenditure by 0.65% of the GDP.
The fall in nominal GDP growth because of a sharp decline in international commodity prices is also complicating fiscal management. In the current year, for example, the economy is expected to grow at 8.2% in nominal terms, compared with the budget estimate of 11.5%. This, according to the review, will raise the deficit by 0.2% of the GDP.
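The denominator effect behind that number can be sketched with simple arithmetic (the growth rates are those quoted above; the base-year GDP is an arbitrary index, and the rupee deficit is held fixed purely for illustration):

```python
# Illustrative sketch: how lower nominal GDP growth inflates the
# deficit-to-GDP ratio even when the rupee deficit is unchanged.
# Only the growth rates (11.5% budgeted vs 8.2% expected) come from
# the review; the other figures are placeholders.

base_gdp = 100.0                 # previous year's nominal GDP (index)
budgeted_gdp = base_gdp * 1.115  # budget assumed 11.5% nominal growth
actual_gdp = base_gdp * 1.082    # review expects 8.2% nominal growth

deficit = 0.039 * budgeted_gdp   # rupee deficit fixed at the budget stage

budgeted_ratio = deficit / budgeted_gdp  # 3.9% by construction
actual_ratio = deficit / actual_gdp      # same deficit, smaller base

print(round(budgeted_ratio * 100, 2))                   # 3.9
print(round((actual_ratio - budgeted_ratio) * 100, 2))  # ~0.12
```

This naive recalculation yields roughly 0.1 percentage point; the review's 0.2% figure presumably also folds in the revenue shortfall that accompanies slower nominal growth.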
In fact, nominal growth has slipped below the cost of borrowing, and if things remain this way, it will affect the debt dynamics for both the government and the private sector. But the current situation could well be an aberration because of the collapse in commodity prices, and things may soon stabilize. India has not faced the problem of low nominal growth in recent times, even when real growth was low, as the economic mismanagement of the previous government ensured that inflation always remained high.
But since nominal growth has declined significantly, the review suggested that the medium-term fiscal framework should be reconsidered. In the area of monetary policy, it also raises the question of whether there is space for greater flexibility in interpreting inflation objectives.
It is important to appreciate that the present macroeconomic stability has been attained after a fair amount of difficulty and must be preserved, especially in the current global environment. Any deviation from the fiscal consolidation path will affect the credibility of the government and will raise questions about its ability to reach the desired goal of consolidation. This government has already pushed the target for bringing down the fiscal deficit to 3% of the GDP by a year. Further delay can dent financial market confidence in a significant way. To be sure, the quality of expenditure has improved and increase in capital outlay will benefit economic activity. However, it is equally important that the government adheres to the fiscal consolidation roadmap and manages this transition well.
Similarly, there is absolutely no need to redefine monetary policy objectives. This debate has been settled. It will not help anyone’s cause if monetary policy is adjusted by looking at the nominal GDP growth or lower capacity utilization in the industrial sector on one day and consumer price inflation on the other. This will only induce uncertainty and complicate matters both for the financial markets and the real economy.
Therefore, the government should avoid picking the easy route of pushing growth by running a higher deficit. Instead, what the government needs to do at this stage is to nurture the economic recovery through structural reforms, and by removing supply side barriers, which will help economic activity and boost revenues. The government also needs to think about reviving the disinvestment programme which can provide a steady inflow for capital expenditure.
The very idea of reconsidering the fiscal consolidation roadmap should immediately be taken off the table as it could potentially backfire by hurting market confidence.
Should the government adhere to the fiscal consolidation roadmap?

End the political siege on reforms

It may sound impossible to reach a consensus, but just remember that politics is the art of the impossible
Last Friday, Supreme Court judge A.K. Sikri weighed in on the eyeball-to-eyeball confrontation between the Congress and the Bharatiya Janata Party (BJP), something that has caused legislative paralysis for the past two sessions of Parliament. Unfortunately, the intervention, which can be key to resolving the face-off, was buried in the cacophony that passes for debate, both in the electronic media and between the two political parties.
Speaking to reporters on the sidelines of the annual general meeting of industry lobby Federation of Indian Chambers of Commerce and Industry, he very pertinently observed, “If a few parties can come together to form a government with a minimum common programme, why can’t all the parties think of a minimum common programme in the interest of the country?”
The judge then went on to elaborate: “Why not have certain issues, those issues which will be for economic development, maybe social issues also, and other kinds of issues? Why should we not shed our party politics, come together, sit together, all the leaders of the different parties, and devise some common programme for the development of the country? It may include GST (goods and services tax), it may include SEZs (special economic zones), it may include even land acquisition.”
In Saturday’s edition, Mint ran a survey conducted by instaVaani, which had some revealing results. Not that three out of four people surveyed believed the Congress was wrong in obstructing Parliament; that is significant, but obvious to all (except perhaps the Congress).
More importantly, the survey revealed that an overwhelming proportion—nine out of 10 people—say political parties need to collectively take responsibility and accord priority to legislative business. In other words, judge Sikri and the people—both of whom have elected our politicians—are sending out a singular message: The time has come to strike a consensus.
It may sound impossible, given the complete breakdown in communication between both sides. But just remember that politics is the art of the impossible. Even better, there is evidence that this has been done—all it requires is sage leadership on both sides. And revealingly, the example involves the two central actors in the ongoing drama, the Congress and the BJP.
In 1995, Gujarat inked a loan agreement with the Asian Development Bank (ADB) to fund long overdue structural reforms in the state. It was path-breaking as it was the first time that a state government had directly negotiated with a multilateral lending agency—normally the Union government would negotiate and then pass on the funds. Now, this trend is passé.
The ADB had one condition though: there should be political consensus on the economic reform agenda, which included politically tricky issues like overhaul of the state’s fiscal rules, public sector reform and a greater role for the private sector in economic activity.
What transpired was seminal. Both the Congress and the BJP, which between them have ruled the state since it was formed, jointly signed off on a reform blueprint—committing not to disturb the agenda, regardless of who was in power. This consensus paved the way for reform, a key reason the state turned around and was able to be part of India’s unprecedented growth trajectory when it launched at the turn of the millennium.
On the 20th anniversary of the pact, it may be a perfect moment for both political parties to do an encore. For the Congress, it can understandably hurt as it would have to grudgingly hand over the baton to the BJP, which is now taking over as the central pole of Indian politics. To the BJP, it means overcoming their instinctive desire for payback for the targeting of its current leadership all these years.
It may sound naive to think so. But then, history tells us it can be done and the instaVaani survey says it should be done.
The writing is on the wall. Very rarely do countries get a second chance at redemption. India is one of the lucky ones. Let us not blow it once again.

Analysis of General Studies Paper 3 (IAS Mains Exam 2015)

The paper was one of those where you feel you have prepared the topic and know everything about it, yet do not know exactly what to write. The Indian economy portion was easier and direct. UPSC latched on to every popular phrase, like Make in India, Digital India, Skill India, Namami Gange, gold monetisation, solar energy, ISIS and cyber security, to make you feel that you knew most of the paper.
Questions related to agriculture were easy, and those who wrote them well will be rewarded well. Infrastructure-related questions were moderate. In science & technology, nothing except IRNSS was easy.
Environment-related questions were on the easier side. Internal security was current affairs-based but complex.
Note: Most of the questions (1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 15, 16, 18, 19, 20) can be attempted from the compilation on our blog, http://samvegias.blogspot.in/. Details will be provided in a few days.

Topic                                 Question nos.      Maximum marks
Indian economy                        1, 7, 8            37.5
Agriculture & current affairs         2, 3, 4, 5, 6      62.5
Infrastructure                        9, 10              25
Science & technology                  11, 12, 13, 14     50
Environment & disaster management     15, 16             25
Internal security                     17, 18, 19, 20     50

20 December 2015

25 sci-tech developments that offer a peek into the future

Scientists at universities across the world continue to experiment with radical ideas in fields such as genetics, quantum computing, robotics, chemistry and physics. Many of their scientific and technological developments of 2015, clubbed here under five subheadings of convenience (Cutting Edge, Science and Medicine, Robotics, The Universe, and Technology and Society), are clearly works in progress.

Nevertheless, all of them reflect the potential to change the way we think and work on our planet, and the universe we live in. A handpicked collection of some of 2015’s finest moments of epiphany.

Cutting Edge

Invisibility cloaks

While many would dream of a cloak that makes one invisible to the world, a true Harry Potter-like invisibility cloak is still a distant dream. Nevertheless, a lot of research is clearly leading the way towards the inevitability of making one.

Consider the work of Debashis Chanda at the University of Central Florida. The cover story in the March edition of the journal Advanced Optical Materials explains how Chanda and his fellow optical and nanotech experts were able to develop a larger swathe of multi-layer 3D metamaterial operating in the visible spectral range.

The broad theory behind invisibility cloaks is to manipulate light by controlling and bending it around an object to make the latter seem invisible to the human eye. It is the scattering of light—visible, infrared, X-ray, etc.—that interacts with matter to help us detect and observe objects. However, the rules that govern these interactions in natural materials can be circumvented in metamaterials whose optical properties arise from their physical structure rather than their chemical composition.
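The unusual optics metamaterials permit can be illustrated with a textbook example (this is generic Snell's-law arithmetic, not the UCF team's design): a material with a negative refractive index bends refracted light to the same side of the normal as the incoming ray, behaviour no natural material exhibits.

```python
import math

# Textbook illustration (not the UCF design): Snell's law with a
# negative refractive index, the signature trick of metamaterials.
# n1*sin(theta1) = n2*sin(theta2); a negative n2 puts the refracted
# ray on the same side of the normal as the incident ray.

def refraction_angle_deg(n1, n2, incident_deg):
    """Refraction angle in degrees; the sign shows which side of the normal."""
    s = n1 * math.sin(math.radians(incident_deg)) / n2
    return math.degrees(math.asin(s))

normal_glass = refraction_angle_deg(1.0, 1.5, 30.0)    # ordinary glass
metamaterial = refraction_angle_deg(1.0, -1.5, 30.0)   # negative-index medium

print(round(normal_glass, 1), round(metamaterial, 1))  # 19.5 -19.5
```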

By improving the technique, Chanda and his team hope to be able to create larger pieces of the material with engineered optical properties, which would make it practical to produce for real-life device applications.

In April, a group of researchers from the Karlsruhe Institute of Technology (KIT) in Karlsruhe, Germany, said they have developed a portable invisibility cloak that can be taken into classrooms and used for demonstrations. It can’t hide a human, but it can make small objects disappear from sight without specialized equipment.

Scientists hoping to divert light around an object to render it invisible must find a way to offset the increased path length with a higher propagation speed. To address this challenge, the KIT team constructed their cloak from a light-scattering material. By scattering light, the material slows the effective propagation speed of light waves through the medium. The light can then be sped up again to make up for the longer path around the hidden object. In this cloak, the object to be concealed is placed inside a hollow metal cylinder coated with acrylic paint, which diffusely reflects light. The tube is embedded within a block of polydimethylsiloxane, a commonly used organic polymer, doped with titanium dioxide nanoparticles that make it scatter light.
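The timing condition the KIT team had to satisfy can be sketched numerically (the path lengths and host-medium speed below are illustrative placeholders, not the experiment's actual parameters): light on the detour must travel faster in proportion to the extra path length, which stays physical only because scattering has already pulled the baseline speed well below c.

```python
# Back-of-the-envelope check of the cloaking condition: equal transit
# times for the straight path and the detour. Numbers are illustrative,
# not taken from the KIT experiment.

c = 3.0e8  # vacuum speed of light, m/s

straight_path = 0.10  # m, path light would take with no object present
detour_path = 0.13    # m, longer path around the hidden cylinder

# In the diffusive (light-scattering) host, the effective propagation
# speed is far below c:
v_host = 0.5 * c

# Equal transit times require the detour speed to be scaled up by the
# ratio of path lengths:
v_detour = v_host * (detour_path / straight_path)

assert v_detour < c  # the speed-up stays physical because v_host << c
print(round(v_detour / c, 2))  # 0.65
```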

There have been similar attempts. On 17 September, for example, scientists at the US department of energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) and the University of California Berkeley said they have devised an ultra-thin invisibility “skin cloak” that can conform to the shape of an object and conceal it from detection with visible light. Working with brick-like blocks of gold nano-antennas, the Berkeley researchers fashioned a “skin cloak” barely 80 nanometers in thickness. The surface of the “skin cloak” was meta-engineered to reroute reflected light waves so that the object was rendered invisible to optical detection when the cloak is activated.

On 21 September, scientists at the Nanyang Technological University (NTU) in Singapore said in a statement that they have developed a thermal cloak that can render an object thermally invisible by actively redirecting incident heat. To construct the cloak, the researchers deployed 24 small thermoelectric modules, which are semiconductor heat pumps controlled by an external input voltage. The modules operate via the Peltier effect, whereby a current running through the junction between two conductors can remove or generate heat. When many modules are attached in series, they can redirect heat flow.
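The heat pumping such modules perform can be sketched with the standard textbook model of a Peltier element (the parameter values below are generic placeholders, not NTU's device):

```python
# Standard textbook model of a single thermoelectric (Peltier) module:
# the heat pumped from the cold side depends on the drive current, so
# an external voltage can steer heat flow -- the principle behind the
# active cloak. Parameter values are generic placeholders.

def heat_pumped_from_cold_side(current, t_cold, t_hot,
                               seebeck=0.05,      # V/K, module Seebeck coefficient
                               resistance=2.0,    # ohm, electrical resistance
                               conductance=0.5):  # W/K, thermal conductance
    """Net heat (W) removed from the cold side of one module."""
    peltier = seebeck * t_cold * current      # Peltier pumping term
    joule = 0.5 * resistance * current ** 2   # half the Joule heat returns
    backflow = conductance * (t_hot - t_cold) # ordinary conduction leak
    return peltier - joule - backflow

# Reversing the current reverses the Peltier term, so the same module
# can either remove or inject heat -- letting a ring of modules
# redirect heat flow around a hidden region.
q_forward = heat_pumped_from_cold_side(1.0, t_cold=300.0, t_hot=310.0)
q_reverse = heat_pumped_from_cold_side(-1.0, t_cold=300.0, t_hot=310.0)
print(q_forward, q_reverse)  # 9.0 -21.0
```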

The researchers also found that their active thermal cloaking was not limited by the shape of the object being hidden. When applied to a rectangular air hole, the thermoelectric devices redistributed heat just as effectively as in the circular one. Baile Zhang and his team plan to apply the thermal cloaks in electronic systems.

Transparent devices

If you recall all the high-tech transparent technology Tom Cruise used in Minority Report, you may wonder why it is yet to become a reality even though it is well over a decade since that movie was released. Batteries pose a big problem, according to a 14 November press statement, since they are made of thick materials.

With a technique known as spin-spray layer-by-layer (SSLbL) assembly, Yale researchers have created ultrathin and transparent films from single-walled carbon nanotubes (SWNT) and vanadium pentoxide (V2O5) nanowires to serve as battery anodes and cathodes—in a bid to work around the issue.

The work was done at the lab of André Taylor, associate professor of chemical and environmental engineering, and the results were published online in the journal ACS Nano. Forrest Gittleson, a post-doctoral associate at Yale in chemical and environmental engineering, is the lead author.

The researchers acknowledge that there are still challenges to overcome before transparent devices can be mass-produced, the biggest obstacle being “improving the conductivity of these thin electrodes”. To address the issue, the researchers created a new “sandwich” architecture that integrates conductive SWNT layers and active cathode materials to enhance performance. The next step, Taylor said, is creating a transparent separator/electrolyte—the third major component of a battery, and the medium through which lithium ions travel between the anode and cathode.

“Nature has already demonstrated that complex systems can be transparent,” Gittleson said. “In fact, earlier this year, they discovered a new glass frog species with translucent skin in Costa Rica. If nature can achieve it through evolution, we should be able to with careful engineering.”

Companies are indeed interested in transparent devices. On 18 November 2014, Apple Inc. was granted a patent for an invention relating to a method and system for displaying images on a transparent display of an electronic device. Furthermore, the display screens may allow for overlaying of images over real-world viewable objects, as well as a visible window to be present on an otherwise opaque display screen. Apple credited Aleksandar Pance as the sole inventor of the granted patent.

In June, Samsung Electronics unveiled the first commercial use of its mirror and transparent OLEDs (organic light emitting diodes) at the Retail Asia Expo 2015. Samsung had rolled out the first mass-produced transparent LCD panels in 2011 and Philips’ HomeLab R&D outfit had demonstrated an LCD mirror TV in 2004.

Meanwhile, Ubiquitous Energy, a start-up that was spun off from Massachusetts Institute of Technology in 2014, has developed a “transparent solar cell technology to market to eliminate the battery life limitations of mobile devices”, according to a 28 May release by Ubiquitous Energy co-founder and CEO Miles Barr. Implemented as a fully transparent film that covers a device’s display area, the company’s “ClearView Power technology” transmits light visible to the human eye, while selectively capturing and converting ultraviolet and near-infrared light into electricity to power the device and extend its battery life.

Face recognition

On 9 June, Microsoft Corp. launched a site, twinsornot.net, for users to upload their photos, assuring that it would not retain the pictures that were uploaded. A user could simply upload two photos and the site would assess how similar the people in them are, returning a score from 0 to 100. Running in the background was Microsoft’s Face API (application programming interface) which, for instance, can detect up to 64 human faces in an image.

Face recognition typically provides the functionalities of automatically identifying or verifying a person from a selection of detected faces. It is widely used in security systems, celebrity recognition and photo tagging applications.

Optionally, face detection can also extract a series of face-related attributes from each face such as pose, gender and age. The inspiration came from the April launch of another Microsoft site, ‘How Old Do I Look?’ (how-old.net), which lets users upload a picture and have the API predict the age and gender of any faces recognized in it.

Facial recognition is not new, and technology companies are very interested in this for obvious reasons—knowing who their users are and using that analysis to build better and more marketable products around it.

On 20 June 2014, a University of Central Florida research team said it has developed a facial recognition tool that promises to be useful in rapidly matching pictures of children with their biological parents, and potentially identifying photos of missing children as they age.

Facebook’s DeepFace uses technology designed by an Israeli start-up called face.com, a company that Facebook acquired in 2012. According to a paper published in March on arxiv.org, a repository operated by Cornell University, three researchers from Google Inc. have developed a similar deep neural net architecture and learning method that uses a facial alignment system based on explicit 3D modelling of faces. Google calls it FaceNet.
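FaceNet’s core idea is simple to state: map each face to a point on a high-dimensional unit sphere, then call two faces the same person when the squared distance between their points is below a threshold. The sketch below illustrates that distance-threshold idea with synthetic 128-dimensional vectors; the embeddings and the threshold value are made-up stand-ins, since real embeddings come from a trained deep network.

```python
import numpy as np

# FaceNet-style verification: two faces count as "same person" when the
# squared L2 distance between their embeddings falls below a threshold.
def same_person(emb_a, emb_b, threshold=1.0):
    return float(np.sum((emb_a - emb_b) ** 2)) < threshold

rng = np.random.default_rng(0)
anchor = rng.normal(size=128)
anchor /= np.linalg.norm(anchor)              # normalize to the unit sphere

close = anchor + 0.05 * rng.normal(size=128)  # small perturbation: same face
close /= np.linalg.norm(close)

far = rng.normal(size=128)                    # unrelated vector: different face
far /= np.linalg.norm(far)

print(same_person(anchor, close))  # -> True
print(same_person(anchor, far))    # -> False
```

Unrelated unit vectors in 128 dimensions are nearly orthogonal, so their squared distance sits near 2, comfortably above any sensible threshold; that geometric fact is what makes the threshold test work.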

Quantum computing

Photo: Princeton

Conventional computers use bits, the basic unit of information in computing—zeros and ones. A quantum computer, on the other hand, deals with qubits that can encode a one and a zero simultaneously—a property that will eventually allow them to process a lot more information than traditional computers, and at unimaginable speeds.
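What “encoding a one and a zero simultaneously” means can be sketched in a few lines of linear algebra. This toy example (ours, not from any of the groups mentioned) represents a qubit as a two-component state vector and applies a Hadamard gate to put it into an equal superposition.

```python
import numpy as np

# A qubit is a unit vector in C^2: |psi> = a|0> + b|1>, with |a|^2 + |b|^2 = 1.
ket0 = np.array([1.0, 0.0])                     # a definite "zero"

# The Hadamard gate turns |0> into an equal mix of |0> and |1>.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

psi = H @ ket0
probs = np.abs(psi) ** 2   # Born rule: probability of measuring 0 or 1

print(probs)  # -> [0.5 0.5]: both outcomes are encoded at once
```

Until it is measured, the qubit genuinely carries both amplitudes; with n qubits the state vector has 2^n components, which is the source of the speed-ups quantum computers promise.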

Developing a quantum computer, however, is easier said than done. The main hurdle is stability: since calculations take place at the quantum level, the slightest interference can disrupt the process. Tackling this instability is one of the main reasons why the effort to make a quantum computer is expensive.

The concept of quantum computing, as conceived by American theoretical physicist Richard Feynman in 1982, though, is not new to governments and companies like IBM, Microsoft Corp. and Google Inc. On 2 January 2014, The Washington Post reported that the US’s National Security Agency, or NSA, was building a quantum computer that could break nearly every kind of encryption.

Google, on its part, teamed up with US space agency Nasa’s Quantum Artificial Intelligence Laboratory in August 2013 (Nasa and Google’s partnership began 10 years ago and is not restricted to quantum computing) to work on a machine by D-Wave Systems (whose claims of having built a pure quantum computer are disputed since the company is not building a “gate model” device) in the hope that quantum computing may someday dramatically improve the agency’s ability to solve problems much faster.

On 3 September 2013, Google said it would continue to collaborate with D-Wave scientists and experiment with the Vesuvius machine at the Nasa Ames campus in Mountain View, California, “which will be upgraded to a 1000 qubit ‘Washington’ processor”. Bloomberg reported on 9 December this year that Google and D-Wave are developing a new computer that can solve some types of complex problems that are next to impossible to solve on conventional computers at Nasa Ames.

In a related development, Princeton University researchers said they have built a laser the size of a grain of rice, powered by single electrons tunnelling through artificial atoms known as quantum dots. It is being touted as a major step towards building quantum-computing systems out of semiconductor materials.

The researchers built the device, which uses about one-billionth the electric current needed to power a hair dryer, while exploring how to use quantum dots, which are bits of semiconductor material that act like single atoms, as components for quantum computers. “It is basically as small as you can go with these single-electron devices,” said Jason Petta, an associate professor of physics at Princeton who led the study, which was published in the journal Science on 16 January.

‘Material’ computers

Manu Prakash, an assistant professor of bioengineering at Stanford, and his students have developed a synchronous computer that operates using the unique physics of moving water droplets, according to an 8 June press statement.

The team’s aim with the new computer, nearly a decade in the making, is not to compete with digital computers that process information or operate word processors but “to build a completely new class of computers that can precisely control and manipulate physical matter”. “Imagine if when you run a set of computations that not only information is processed but physical matter is algorithmically manipulated as well. We have just made this possible at the mesoscale (between micro- and macro-scale),” Prakash said.

The ability to precisely control droplets using fluidic computation could have a number of applications in high-throughput biology and chemistry, and possibly new applications in scalable digital manufacturing. The results were published in the June edition of Nature Physics. Prakash recruited a graduate student, Georgios ‘Yorgos’ Katsikis, who is the first author on the paper.

A clock for a fluid-based computer has to be easy to manipulate, and also able to influence multiple droplets at a time, the researchers point out. The system also needed to be scalable so that, in the future, a large number of droplets can communicate with each other. Prakash used a rotating magnetic field to do the trick. Katsikis and Prakash built arrays of tiny iron bars on glass slides.

Every time the field flips, the polarity of the bars reverses, drawing the magnetized droplets in a new, predetermined direction. Every rotation of the field counts as one clock cycle, like a second-hand making a full circle on a clock face, and every drop marches exactly one step forward with each cycle.

A camera records the interactions between individual droplets, allowing the observation of computation as it occurs in real time. The presence or absence of a droplet represents the 1s and 0s of binary code, and the clock ensures that all the droplets move in perfect synchrony, and thus the system can run virtually forever without any errors.
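The clocked, binary character of the droplet machine can be mimicked in a few lines. This toy model (ours, not the Stanford team’s) treats a track of slots as bits, with a droplet’s presence or absence as 1 or 0, and advances every droplet exactly one step per clock cycle.

```python
# Toy model of the synchronous droplet logic: each slot in a track either
# holds a droplet (1) or not (0); every clock cycle (one rotation of the
# magnetic field) marches every droplet exactly one step forward in lockstep.
def tick(track):
    return [0] + track[:-1]

track = [1, 0, 1, 1, 0, 0, 0]   # a binary pattern encoded by droplet positions
for _ in range(3):              # three field rotations = three clock cycles
    track = tick(track)

print(track)  # -> [0, 0, 0, 1, 0, 1, 1]: the pattern shifts forward intact
```

Because every droplet moves on every cycle, the pattern cannot smear or desynchronize, which is the property that lets the real system run indefinitely without errors.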

According to Prakash, the most immediate application might involve turning the computer into a high-throughput chemistry and biology laboratory. Instead of running reactions in bulk test tubes, each droplet can carry some chemicals and become its own test tube, and the droplet computer offers unprecedented control over these interactions.

Science and Medicine

Cheap and fast eye check-ups

More than 4 billion people across the world require eyeglasses. Of these, more than half lack access to eye tests. Traditional diagnostic tools are cumbersome, expensive and fail to take advantage of today’s mobile computing power.

This prompted an MIT spinout, EyeNetra, to develop smartphone-powered eye-test devices. The devices are being readied for a commercial launch, with the company planning to introduce them in hospitals, optometric clinics, optical stores and even homes in the US.

EyeNetra is also pursuing opportunities to collaborate with virtual-reality companies seeking to use the technology to develop “vision-corrected” virtual-reality displays. “As much as we want to solve the prescription glasses market, we could also (help) bring virtual reality to the masses,” said EyeNetra co-founder Ramesh Raskar, an associate professor of media arts and sciences at the MIT Media Lab and co-inventor of the device, in a 19 October press statement. The device, called Netra, is a plastic, binocular-like headset.

Using the company’s app, users can attach a smartphone to the front and peer through the headset at the phone’s display. Patterns, such as separate red and green lines or circles, appear on the screen.

The app calculates the difference between what a user sees as “aligned” and the actual alignment of the patterns. This reveals refractive errors such as nearsightedness, farsightedness and astigmatism (an eye condition that causes blurred or distorted vision), providing the necessary information for eyeglasses prescriptions.

In April, EyeNetra launched Blink—an on-demand refractive test service in New York, where employees bring the start-up’s optometry tools, including the Netra device, to people’s homes and offices. In India, EyeNetra has launched Nayantara, a similar programme to provide low-cost eye tests to the poor and uninsured in remote villages, far from eye doctors.

One, of course, must mention the work being done by Adaptive Eyecare, which was founded by Oxford Physics professor Joshua Silver, who is now director of the non-profit Centre for Vision in the Developing World at the University of Oxford.

During a 2009 TED Talk, Silver demonstrated his low-cost glasses, which can be tuned by the wearer. His spectacles have “adaptive lenses”, which consist of two thin membranes separated by silicone gel. The wearer simply looks at an eye chart and pumps in more or less fluid to change the curvature of the lens, which adjusts the prescription.

A plant-based cancer drug

Photo: Shutterstock

Elizabeth Sattely, an assistant professor of chemical engineering at Stanford, and her graduate student Warren Lau have isolated the machinery for making a widely-used cancer-fighting drug from an endangered plant, according to a 10 September press statement.

They then put that machinery into a common, easily grown laboratory plant, which was able to produce the chemical. The drug Sattely chose to focus on is produced by a leafy Himalayan plant called the Mayapple.

Within the plant, a series of proteins work in a step-by-step fashion to churn out a chemical defence against predators. That chemical defence, after a few modifications in the lab, becomes a widely-used cancer drug called etoposide. The starting material for this chemical defence is a harmless molecule commonly present in the leaf. When the plant senses an attack, it begins producing proteins that make up the assembly line.

One by one, those proteins add a little chemical something here, subtract something there, and after a final molecular nip and tuck, the harmless starting material is transformed into a chemical defence. The challenge was figuring out the right proteins for the job.

Sattely and her team tested various combinations of 31 proteins until they eventually found 10 that made up the full assembly line. They put genes that make those 10 proteins into a common laboratory plant, and that plant began producing the chemical they were seeking.

The eventual goal is not simply moving molecular machinery from plant to plant. Now that she’s proven the molecular machinery works outside the plant, Sattely wants to put the proteins in yeast, which can be grown in large vats in the lab to better provide a stable source of drugs.

The technique could potentially be applied to other plants and drugs, creating a less expensive and more stable source for those drugs, the researchers say. Sattely’s work was published on 10 September in the journal Science.

3D-printed heart model may do away with transplants

Work by a group at Carnegie Mellon could one day lead to a world in which transplants are no longer necessary to repair damaged organs.

“We have been able to take MRI (magnetic resonance imaging) images of coronary arteries and 3D images of embryonic hearts and 3D bioprint them with unprecedented resolution and quality out of very soft materials like collagens, alginates and fibrins,” said Adam Feinberg, associate professor of materials science and engineering and biomedical engineering at Carnegie Mellon University, in a press statement.

Feinberg leads the regenerative biomaterials and therapeutics group, and the group’s study was published in the 23 October issue of the journal Science Advances. Traditional 3D printers build hard objects typically made of plastic or metal, and they work by depositing material onto a surface layer by layer to create the 3D object.

Printing each layer requires sturdy support from the layers below, so printing with soft materials like gels has been limited. The challenge with soft materials is that they collapse under their own weight when 3D printed in air, explained Feinberg. So, the team developed a method of printing soft materials inside a support bath material.

“Essentially, we print one gel inside of another gel, which allows us to accurately position the soft material as it’s being printed, layer by layer,” he said.

One of the major advances of this technique, termed FRESH, or Freeform Reversible Embedding of Suspended Hydrogels, is that the support gel can be easily melted away and removed by heating to body temperature, which does not damage the delicate biological molecules or living cells that were bioprinted.

As a next step, the group is working towards incorporating real heart cells into these 3D printed tissue structures, providing a scaffold to help form contractile muscle. Bioprinting is a growing field, but to date, most 3D bioprinters cost over $100,000 and/or require specialized expertise to operate, limiting wider-spread adoption.

Feinberg’s group, however, has been able to implement their technique on a range of consumer-level 3D printers that cost less than $1,000, by utilizing open-source hardware and software. The 3D printer designs are being released under an open-source licence.

Can we regrow teeth?

Why can’t humans regrow teeth lost to injury or disease the way nature does? By studying how structures in embryonic fish differentiate into either teeth or taste buds, Georgia Tech researchers hope to one day be able to turn on the tooth regeneration mechanism in humans.

The research was conducted by scientists from the Georgia Institute of Technology in Atlanta and King’s College London, and published on 19 October in the journal Proceedings of the National Academy of Sciences.

The studies in fish and mice, according to the researchers, suggest the possibility that “with the right signals, epithelial tissue in humans might also be able to regenerate new teeth”.

“We have uncovered developmental plasticity between teeth and taste buds, and we are trying to understand the pathways that mediate the fate of cells towards either dental or sensory development,” said Todd Streelman, a biology professor at Georgia Tech.

But growing new teeth wouldn’t be enough, Streelman cautions. Researchers would also need to understand how nerves and blood vessels grow into teeth to make them viable.

“The exciting aspect of this research for understanding human tooth development and regeneration is being able to identify genes and genetic pathways that naturally direct continuous tooth and taste bud development in fish, and study these in mammals,” said professor Paul Sharpe, a co-author from King’s College.

New way to fix a broken heart?

Photo: iStock

Coronary artery disease is the leading cause of death worldwide, but there is currently no effective method to regenerate new coronary arteries in diseased or injured hearts. Stanford researchers, according to a 19 October study published in the journal eLife, have identified a progenitor cell type that could make such regeneration possible.

The study was carried out with mice but, as the blood vessels of the human heart are similar, it could lead to new treatments for the disease or to restore blood flow after a heart attack, the researchers say.

“Current methods to grow new blood vessels in the heart stimulate fine blood vessels rather than re-establishing the strong supply of blood provided by the main arteries. We need arteries to restore normal function,” said senior author Kristy Red-Horse from the department of biological sciences at Stanford.

The researchers reveal that the smooth muscle of the arteries is derived from cells called pericytes. The small capillary blood vessels throughout the developing heart are covered in pericytes. Pericytes are also found throughout the adult heart, which suggests that they could be used to trigger a self-repair mechanism.

A problem with cell or tissue transplantation can be that the cells don’t integrate, or that they differentiate into slightly different cell types than intended. As pericytes are spread all over the heart on all the small blood vessels, they could be used as a target to stimulate artery formation without the need for transplantation, the researchers point out.

The team is now investigating whether pericytes differentiate into smooth muscle as part of this process and whether it can be activated or sped up by introducing Notch 3 (a protein) signalling molecules.

“Now that we are beginning to really understand coronary artery development, we have initiated studies to reactivate it in injury models and hope to some day use these same methods to help treat coronary artery disease,” said Red-Horse.

Measuring the ageing process

There are times when we intuitively feel that even people born within months of each other are ageing differently. Indeed they are, say the researchers of a long-term health study in New Zealand that sought clues to the ageing process in young adults.

In a paper appearing the week of 6 July in the Proceedings of the National Academy of Sciences, the team from the US, UK, Israel and New Zealand introduced a panel of 18 biological measures (known as biomarkers) that may be combined to determine whether people are ageing faster or slower than their peers.

The data came from the Dunedin Study, a landmark longitudinal study that has tracked more than a thousand people born in 1972-73 in the same town from birth. Health measures like blood pressure and liver function were taken regularly, along with interviews and other assessments.

According to first author Dan Belsky, an assistant professor of geriatrics at Duke University’s Centre for Ageing, the progress of ageing shows in human organs just as it does in eyes, joints and hair—but sooner.

Based on a subset of these biomarkers, the research team set a “biological age” for each participant, which ranged from under 30 to nearly 60 among the 38-year-olds. Most participants clustered around an ageing rate of one year per year, but others were found to be ageing as fast as three years per chronological year.

As the team expected, those who were biologically older at age 38 also appeared to have been ageing at a faster pace. A biological age of 40, for example, meant that the person was ageing at a rate of 1.2 years per year over the 12 years the study examined.
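The arithmetic behind that 1.2-years-per-year figure can be made explicit. The sketch below assumes, as the study design implies, that biological and chronological age roughly coincided at the baseline age of 26; that baseline assumption is ours, made for illustration.

```python
# The study tracked participants from age 26 to age 38, a 12-year window.
# If biology matched the calendar at the start, a biological age of 40 at
# chronological age 38 implies roughly 1.2 biological years per year.
chronological_span = 38 - 26    # years covered by the study window
biological_span = 40 - 26       # biological years accrued, assuming bio age 26 at baseline
rate = biological_span / chronological_span

print(round(rate, 2))  # -> 1.17, i.e. about 1.2 years per year
```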

The ageing process, according to the researchers, isn’t all genetic. Studies of twins have found that only about 20% of ageing can be attributed to genes, Belsky said. “There’s a great deal of environmental influence,” he added.

This gives “us some hope that medicine might be able to slow ageing and give people more healthy active years”, said senior author Terrie Moffitt, the Nannerl O Keohane professor of psychology and neuroscience at Duke.

Sunblock that stays on the outside

Researchers at Yale University have developed a sunscreen that doesn’t penetrate the skin, eliminating serious health concerns associated with commercial sunscreens, according to a 28 September news release.

Most commercial sunblocks may prevent sunburn, but they can go below the skin’s surface and enter the bloodstream, and are likely to trigger hormonal side-effects and could even promote the kind of skin cancers they are designed to prevent.

The new sunblock made by the Yale researchers uses bio-adhesive nanoparticles that stay on the surface of the skin. The results of the research appeared in the 28 September online edition of the journal Nature Materials. Using mouse models, the researchers tested their sunblock against direct ultraviolet rays and their ability to cause sunburn.

Altering brain chemistry to raise the pain threshold

Scientists at the University of Manchester have shown for the first time that the number of opiate receptors in the brain increases to combat severe pain in people suffering from arthritis.

Receptors in our brains respond to natural painkilling opiates such as endorphins, but the researchers in Manchester have now shown that these receptors increase in number to help cope with long-term, severe pain. By applying heat to the skin using a laser stimulator, Dr Christopher Brown and his colleagues showed that the more opiate receptors there are in the brain, the higher the ability to withstand the pain, according to a 23 October study.

The researchers used positron emission tomography (PET) imaging on 17 patients with arthritis and nine healthy controls to show the spread of the opioid receptors.

Val Derbyshire, a patient with arthritis, said in a press statement: “As a patient who suffers chronic pain from osteoarthritis, I am extremely interested in this research. I feel I have developed coping mechanisms to deal with my pain over the years, yet still have to take opioid medication to relieve my symptoms.”

A cranial fingerprint?

Photo: iStock

A person’s brain activity appears to be as unique as his or her fingerprints, a new Yale University-led imaging study shows. These brain “connectivity profiles” alone allowed researchers to identify individuals from among more than 100 people using functional magnetic resonance imaging (fMRI) images of brain activity, according to the study published on 12 October in the journal Nature Neuroscience.

The researchers compiled fMRI data from 126 subjects who underwent six scan sessions over two days. Subjects performed different cognitive tasks during four of the sessions. In the other two, they simply rested. Researchers looked at activity in 268 brain regions—specifically, coordinated activity between pairs of regions.

Highly coordinated activity implies two regions are functionally connected. Using the strength of these connections across the whole brain, the researchers were able to identify individuals from fMRI data alone, whether the subject was at rest or engaged in a task. They were also able to predict how subjects would perform on tasks.
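The identification procedure is conceptually simple: treat each subject’s set of connection strengths as one long vector, then match a new scan to the database profile it correlates with best. The sketch below runs that matching on synthetic profiles; the subject count, noise level and data are invented for illustration, though the edge count mirrors the study’s 268 regions.

```python
import numpy as np

# Connectome fingerprinting sketch: a "profile" is the vector of connection
# strengths between all region pairs; identification picks the database
# profile with the highest correlation to the new scan.
rng = np.random.default_rng(1)
n_subjects = 5
n_edges = 268 * 267 // 2                 # one edge per pair of 268 regions

day1 = rng.normal(size=(n_subjects, n_edges))               # session-1 profiles
day2 = day1 + 0.5 * rng.normal(size=(n_subjects, n_edges))  # session 2: noisy repeat

def identify(profile, database):
    corrs = [np.corrcoef(profile, ref)[0, 1] for ref in database]
    return int(np.argmax(corrs))         # index of the best-matching subject

predictions = [identify(day2[i], day1) for i in range(n_subjects)]
print(predictions)  # -> [0, 1, 2, 3, 4]: every subject correctly matched
```

Because each subject’s profile is stable across sessions while differing strongly between subjects, the within-subject correlation dominates even with substantial scan-to-scan noise.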

The researchers hope that this ability might one day help clinicians predict or even treat neuro-psychiatric diseases based on individual brain connectivity profiles. Data for the study came from the Human Connectome Project led by the WU-Minn Consortium, which is funded by the 16 National Institutes of Health (NIH) Institutes and Centers that support the NIH Blueprint for Neuroscience Research and by the McDonnell Center for Systems Neuroscience at Washington University.

Robotics

Robots that crowdsource learning

In July, scientists from Cornell University led by Ashutosh Saxena announced the development of a Robo Brain—a large computational system that learns from publicly available Internet resources.

The system, according to a 25 August statement by Cornell, was downloading and processing about 1 billion images, 120,000 YouTube videos and 100 million how-to documents and appliance manuals. Information from the system, which Saxena had described at the 2014 Robotics: Science and Systems Conference in Berkeley, is being translated and stored in a robot-friendly format that robots will be able to draw on when needed.

Saxena, who was born in India and graduated from the Indian Institute of Technology Kanpur, also launched a website for the project at robobrain.me, which displays things the brain has learnt; visitors can make additions and corrections. Robo Brain employs what computer scientists call structured deep learning, where information is stored in many levels of abstraction. Deep learning is a set of algorithms, or instruction steps for calculations, in machine learning.

There have been similar attempts to make computers understand context and learn from the Internet. For instance, since January 2010, scientists at Carnegie Mellon University have been working to build a never-ending machine learning system that acquires the ability to extract structured information from unstructured Web pages.

If successful, the scientists say it will result in a knowledge base (or relational database) of structured information that mirrors the content of the Web. They call this system the never-ending language learner, or NELL.

We also have IBM’s Watson, which beat Jeopardy champions in 2011, and has now joined hands with the United Services Automobile Association (USAA) to help members of the military prepare for civilian life. In January 2014, IBM said it would spend $1 billion to launch the Watson Group, including a $100 million venture fund to support start-ups and businesses that are building Watson-powered apps using the “Watson Developers Cloud”.

Can robots, or cobots, make good teammates?

Photo: ABB

At the Yale Social Robotics Lab, run by professor of computer science Brian Scassellati, robots are learning the skills needed to be good teammates, allowing people to work more safely, more efficiently and more effectively. The skills include stabilizing parts, handing over items, organizing a workspace, or helping people use a tool better, according to a 21 September press release.

Sharing a workspace with most robots can be dangerous, the researchers point out.

“It’s only now that this is becoming feasible, to develop robots that could safely operate near and around people,” said Brad Hayes, the PhD candidate who headed the project. “We are trying to move robots away from being machines in isolation, developing them to be co-workers that amplify the strengths and abilities of each member of the team they are on.”

One way of building team skills is by guiding the robot to figure out how to help during tasks by simulating hundreds of thousands of different possibilities and then guessing if that’s going to be helpful. Given that such tasks tend to take very long, the researchers suggest that the other approach is to show the robot directly how to build team skills.

“Here you are naturally demonstrating to the robot and having it retain that knowledge,” Hayes explained. “It can then save that example and figure out if it’s a good idea to generalize that skill to use in new situations.”

Hayes thinks the technology has value for both the workplace and the home, particularly for small-scale, flexible manufacturing or for people who have lost some of their autonomy and could use help with the dishes or other chores.

Collaborative robots, also known as cobots, are the new buzzword in robotics. They are complementary to industrial robots. Safely working alongside humans in an uncaged environment, they are opening up new opportunities for industry. They have to be safe, easy to use, flexible and affordable. Examples include the Baxter robot by Rethink Robotics, the UR5 arm by Universal Robots and Robonaut 2 (by Nasa and General Motors).

And, of course, there is the YuMi robot, whose name is short for ‘you and me’. It was unveiled by industrial robotics company ABB at the Hannover Messe on 13 April. ABB touts it as the world’s first truly collaborative robot, able to work side-by-side on the same tasks as humans while still ensuring the safety of those around it.

The company says YuMi is “capable of handling anything from a watch to a tablet PC and with the level of accuracy that could thread a needle”.


Universe

Earth to Mars via a petrol station on the moon

Living on Mars will certainly not be an easy task for human beings. They will have to tackle major issues such as higher radiation, the lack of anything resembling an atmosphere, a lower gravitational pull that can affect the skeletal structure, likely infection from unknown microbes, lack of food and the effect of loneliness on the mind. But first, they have to reach Mars, a journey that will take about 180 days. This means they will need enough fuel, or will have to refuel somewhere.

Studies have suggested that lunar soil and water ice in certain craters of the moon may be mined and converted to fuel. Assuming that such technologies are established at the time of a mission to Mars, a 14 October MIT study has found that taking a detour to the moon to refuel would reduce the mass of a mission upon launch by 68%.
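The intuition behind such a large saving is the Tsiolkovsky rocket equation: the fuel a single tank must carry grows exponentially with the total velocity change it has to deliver, so splitting the journey into refuelled legs multiplies into a far smaller launch mass. The delta-v budgets and engine figures below are illustrative assumptions of ours, not numbers from the MIT study.

```python
import math

# Tsiolkovsky rocket equation: m0 = mf * exp(dv / (isp * g0)).
# Because fuel grows exponentially with delta-v, refuelling en route
# sharply cuts the mass that must be launched from Earth.
g0 = 9.81          # standard gravity, m/s^2
isp = 450          # s, illustrative hydrogen/oxygen engine
payload = 10_000   # kg of dry mass to be delivered

def launch_mass(dv, dry_mass):
    return dry_mass * math.exp(dv / (isp * g0))

# Illustrative delta-v budgets in m/s (not the study's figures):
direct = launch_mass(9_000 + 4_000, payload)  # one tank for the whole trip
staged = launch_mass(9_000, payload)          # only enough to reach orbit;
                                              # later legs burn lunar fuel
print(f"{1 - staged / direct:.0%} less mass at launch")  # prints "60% less mass at launch"
```

The exact percentage depends entirely on the assumed delta-v split and engine efficiency; the study’s 68% figure comes from a far more detailed network model, but the exponential mechanism is the same.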

The researchers developed a model to determine the best route to Mars, assuming the availability of resources and fuel-generating infrastructure on the moon. Based on their calculations, they found the most mass-efficient path involves launching a crew from Earth with just enough fuel to get into orbit around the Earth.

A fuel-producing plant on the surface of the moon would then launch tankers of fuel into space, where they would enter gravitational orbit. The tankers would eventually be picked up by the Mars-bound crew, which would then head to a nearby fuelling station to gas up before ultimately heading to Mars.

Olivier de Weck, a professor of aeronautics and astronautics and of engineering systems at MIT, says the plan deviates from Nasa’s more direct “carry-along” route.

The results, which are based on the PhD thesis of Takuto Ishimatsu, now a postdoctoral researcher at MIT, are published in the Journal of Spacecraft and Rockets. Ishimatsu’s network flow model—which explores various routes to Mars, ranging from a direct carry-along flight to a series of refuelling pit stops along the way—assumes a future scenario in which fuel can be processed on, and transported from, the moon to rendezvous points in space.
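The intuition behind the mass savings can be sketched with the Tsiolkovsky rocket equation: the propellant a vehicle must carry grows exponentially with the delta-v it has to supply, so offloading part of the delta-v budget to lunar-derived fuel shrinks the launch mass disproportionately. A toy Python sketch follows; the delta-v figures, exhaust velocity and payload mass below are illustrative assumptions, not numbers from the MIT study:

```python
from math import exp

def propellant_ratio(delta_v, v_exhaust):
    # Tsiolkovsky rocket equation: m0 / mf = exp(delta_v / v_exhaust)
    return exp(delta_v / v_exhaust)

V_EX = 4400.0  # m/s, typical hydrogen/oxygen engine (assumed)

# Hypothetical delta-v budgets in m/s -- illustrative only
DV_DIRECT = 4300.0     # carry all propellant from Earth orbit to Mars
DV_TO_DEPOT = 1200.0   # reach a lunar-fuel depot with a nearly dry vehicle

payload = 100.0  # tonnes of crew vehicle dry mass (assumed)

# Direct "carry-along" route: every kilogram of propellant is lifted from Earth
direct_launch_mass = payload * propellant_ratio(DV_DIRECT, V_EX)

# Refuel route: only the first leg's propellant is lifted from Earth;
# the Mars-injection propellant comes from lunar-derived fuel at the depot
refuel_launch_mass = payload * propellant_ratio(DV_TO_DEPOT, V_EX)

print(f"direct: {direct_launch_mass:.0f} t, via depot: {refuel_launch_mass:.0f} t")
```

This simplified two-leg model ignores the cost of the tanker infrastructure itself; the actual study optimizes fuel flows over a full network of routes, which is how it arrives at its 68% figure.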

When did life on Earth begin?

Geochemists at the University of California, Los Angeles (UCLA) have found evidence that life likely existed on Earth at least 4.1 billion years ago—300 million years earlier than previous research suggested, according to a 19 October press release.

The discovery indicates that life may have begun shortly after the planet formed 4.54 billion years ago. The research was published in the online edition of the journal Proceedings of the National Academy of Sciences.


“Twenty years ago, this would have been heretical; finding evidence of life 3.8 billion years ago was shocking,” said Mark Harrison, co-author of the research and a professor of geochemistry at UCLA. The new research suggests that life existed prior to the massive bombardment of the inner solar system that formed the moon’s large craters 3.9 billion years ago.

Scientists had long believed the Earth was dry and desolate during that time period. Harrison’s research—including a 2008 study in Nature he co-authored with Craig Manning, a professor of geology and geochemistry at UCLA, and former UCLA graduate student Michelle Hopkins—is proving otherwise.

The researchers, led by Elizabeth Bell—a post-doctoral scholar in Harrison’s laboratory—studied more than 10,000 zircons originally formed from molten rocks, or magmas, from Western Australia.

Zircons are heavy, durable minerals related to the synthetic cubic zirconium used for imitation diamonds. They capture and preserve their immediate environment, meaning they can serve as time capsules.

The scientists identified 656 zircons containing dark specks that could be revealing, and closely analysed 79 of them with Raman spectroscopy—a technique that shows the molecular and chemical structure of ancient microorganisms in three dimensions.

One of the 79 zircons contained graphite (pure carbon) in two locations. The graphite is older than the zircon containing it, the researchers said. They know the zircon is 4.1 billion years old, based on its ratio of uranium to lead, but they don’t know how much older the graphite is.
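The uranium-lead clock works because uranium-238 decays to lead-206 at a known rate (its half-life is about 4.47 billion years), so the accumulated lead-to-uranium ratio fixes the crystal’s age. A simplified single-isotope sketch in Python; real U-Pb dating cross-checks the uranium-238 and uranium-235 decay chains, and the ratio used below is illustrative:

```python
from math import log

HALF_LIFE_U238 = 4.468e9            # years
DECAY_CONST = log(2) / HALF_LIFE_U238

def u_pb_age(pb_to_u_ratio):
    # Radiogenic ingrowth: Pb/U = exp(lambda * t) - 1
    # Solving for t gives: t = ln(1 + Pb/U) / lambda
    return log(1.0 + pb_to_u_ratio) / DECAY_CONST

# A lead-to-uranium ratio of roughly 0.89 corresponds to an age
# of about 4.1 billion years under this single-decay model
ratio = 0.889
print(f"inferred age: {u_pb_age(ratio) / 1e9:.2f} billion years")
```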

Water on Mars!

Researchers have discovered an enormous slab of ice just beneath the surface of Mars, measuring 130 feet thick and covering an area equivalent to that of California and Texas combined. The ice may be the result of snowfall tens of millions of years ago on Mars, scientists said in a 15 September press statement. The research was published in the journal Geophysical Research Letters.

Combining data gleaned from two powerful instruments aboard Nasa’s Mars Reconnaissance Orbiter, or MRO, researchers determined why a “crazy-looking crater” on Mars’ surface is terraced, and not bowl-shaped like most craters of this size.

Although scientists have known for some time about Mars’ icy deposits at its poles and have used them to look at its climatic history, knowledge of icy layers at the planet’s mid-latitudes, analogous to earthly latitudes falling between the Canadian-US border and Kansas, is something new.

On 16 October, Nasa confirmed that new findings from MRO had provided “the strongest evidence yet that liquid water flows intermittently on present-day Mars”. The findings came five months after scientists found the first evidence for liquid water on the red planet.

The findings are in sync with Nasa’s ambitious project to send humans to Mars in the 2030s, in accordance with the Nasa Authorization Act, 2010, and the US National Space Policy. The discovery of liquid water, therefore, could be a big boost for astronauts visiting the planet.

Based on further research and findings, humans could well be drinking water on Mars, using it to create oxygen and rocket fuel, or watering plants in greenhouses with it.

Most Earth-like worlds yet to be born. Boo

When our solar system was born 4.6 billion years ago, only 8% of the potentially habitable planets that will ever form in the universe existed. The bulk of those planets—92%—are yet to be born, and planet formation will continue long after the sun burns out 6 billion years hence. This conclusion is based on an assessment of data collected by Nasa’s Hubble Space Telescope and the prolific planet-hunting Kepler space observatory.

“Our main motivation was understanding the Earth’s place in the context of the rest of the universe,” said the 20 October study’s author Peter Behroozi of the Space Telescope Science Institute (STScI) in Baltimore, Maryland.

The data show that the universe was making stars at a fast rate 10 billion years ago, but the fraction of the universe’s hydrogen and helium gas that was involved was very low. Today, star birth is happening at a much slower rate than long ago, but there is so much leftover gas available that the universe will keep cooking up stars and planets for a very long time to come.

Kepler’s planet survey indicates that Earth-sized planets in a star’s habitable zone, the perfect distance that could allow water to pool on the surface, are ubiquitous in our galaxy.

Based on the survey, scientists predict that there should be 1 billion Earth-sized worlds in the Milky Way galaxy at present, a good portion of them presumed to be rocky. That estimate skyrockets when you include the other 100 billion galaxies in the observable universe.

This leaves plenty of opportunity for untold more Earth-sized planets in the habitable zone to arise in the future. The last star isn’t expected to burn out until 100 trillion years from now. That’s plenty of time for literally anything to happen on the planet landscape, the researchers say.

Ocean rise: How much? How soon? Oh no

Seas around the world have risen an average of nearly 3 inches since 1992, with some locations rising more than 9 inches due to natural variation, according to the latest satellite measurements from Nasa and its partners.

In 2013, the UN Intergovernmental Panel on Climate Change issued an assessment based on a consensus of international researchers that stated global sea levels would likely rise from 1 to 3 feet by the end of the century.

The new data, according to a 26 August Nasa press statement, reveal that the height of the sea surface is not rising uniformly everywhere. Regional differences in sea level rise are dominated by the effects of ocean currents and natural cycles such as the Pacific Decadal Oscillation.

But, as these natural cycles wax and wane, they can have major impacts on local coastlines. Scientists estimate that about one-third of sea level rise is caused by the expansion of warmer ocean water, one-third is due to ice loss from the massive Greenland and Antarctic ice sheets, and the remaining third results from melting mountain glaciers.

However, the fate of the polar ice sheets could change that ratio and produce more rapid increases in the coming decades.

Technology and Society

Plastic-eating worms that can annihilate waste

The world over, people throw away billions of plastic cups, and only a small percentage of them gets recycled. How does one tackle this plastic menace? With the help of a mealworm, say Stanford researchers.

A mealworm, the larvae form of the darkling beetle, can subsist on a diet of styrofoam and other forms of polystyrene, according to two companion studies co-authored by Wei-Min Wu, a senior research engineer in the department of civil and environmental engineering at Stanford. It’s the microorganisms in the mealworm’s gut that biodegrade the plastic in the process.

The papers, published in Environmental Science & Technology in September, are the first to provide detailed evidence of bacterial degradation of plastic in an animal’s gut.

“There’s a possibility of really important research coming out of bizarre places,” said Craig Criddle, a professor of civil and environmental engineering who supervises plastics research by Wu and others at Stanford. “Sometimes, science surprises us. This is a shock.”

The new research on mealworms is significant because styrofoam was thought to be non-biodegradable and therefore particularly problematic for the environment. Researchers led by Criddle, a senior fellow at the Stanford Woods Institute for the Environment, are collaborating on ongoing studies with the project leader and papers’ lead author, Jun Yang of Beihang University in China, and other Chinese researchers.

Together, they plan to study whether microorganisms within mealworms and other insects can biodegrade plastics such as polypropylene (used in products ranging from textiles to automotive components), microbeads (tiny bits used as exfoliants) and bioplastics (derived from renewable biomass sources such as corn or biogas methane).

The researchers plan to explore the fate of these materials when consumed by small animals, which are, in turn, consumed by other animals. Another area of research could involve searching for a marine equivalent of the mealworm to digest plastics, Criddle said. Plastic waste is a particular concern in the ocean, where it fouls habitats and kills countless seabirds, fish, turtles and other marine life.

Fresh milk, off the grid


How does one preserve milk? Most of us do this by refrigeration and boiling, but what does one do if the electricity supply is sporadic? The answers may lie in a 19 May study by Tel Aviv University (TAU) researchers.

Published in the journal Technology, the study finds that short-pulsed electric fields can be used to kill milk-contaminating bacteria. Through a process called electroporation, bacterial cell membranes are selectively damaged.

According to lead investigator Alexander Golberg of TAU’s Porter School of Environmental Studies, applying this process intermittently prevents bacteria proliferation in stored milk, potentially increasing its shelf life.

According to the study, pulsed electric fields—an emerging technology in the food industry that has been shown to effectively kill multiple food-borne microorganisms—could provide an alternative, non-thermal pasteurization process.

The stored milk is periodically exposed to short, high-voltage pulsed electric fields that kill the bacteria. The energy required can come from conventional sources or from the sun. The technology is three times more energy-efficient than boiling and almost twice as energy-efficient as refrigeration, the researchers say.

Crowdsourcing interactive story plots with AI

Researchers at the Georgia Institute of Technology have developed an artificial intelligence (AI) system that crowdsources plots for interactive stories. While current AI models for games have a limited number of scenarios and depend on a data set already programmed into a model by experts, Georgia Tech’s AI system generates numerous scenes for players to adopt.

“Our open interactive narrative system learns genre models from crowdsourced example stories so that the player can perform different actions and still receive a coherent story experience,” Mark Riedl, lead investigator and associate professor of interactive computing at Georgia Tech, said in a 19 September news release.

A test of the AI system, called Scheherazade IF (Interactive Fiction)—a reference to the fabled Persian queen and storyteller—showed that it can achieve near human-level authoring.

“When enough data is available and that data sufficiently covers all aspects of the game experience, the system was able to meet or come close to meeting human performance in creating a playable story,” Riedl added.

The researchers evaluated the AI system by measuring the number of “common sense” errors (e.g. scenes out of sequence) found by players, as well as players’ subjective experiences for things such as enjoyment and coherence of story. The creators say that they are seeking to inject more creative scenarios into the system.

Right now, the AI plays it safe with the crowdsourced content, producing what one might expect in different genres. But opportunities exist to train Scheherazade (just as its namesake implies) to surprise and immerse players in future interactive experiences.

The research can support not only online storytelling for entertainment, but also digital storytelling used in online course education or corporate training. The research paper Crowdsourcing Open Interactive Narrative (co-authored by Matthew Guzdial, Brent Harrison, Boyang Li and Mark Riedl) was presented at the 2015 Foundations of Digital Games Conference in Pacific Grove, California.

Teaching computers to ‘see’ what humans do

Researchers from Georgia Tech’s school of interactive computing and institute for robotics and intelligent machines have developed a new method that teaches computers to “see” and understand what humans do in a typical day, according to a 28 September news release.

The researchers gathered more than 40,000 pictures taken every 30 to 60 seconds over a six-month period by a wearable camera, and predicted with 83% accuracy what activity the wearer was doing. The idea, according to the researchers, is to give users the ability to track all of their activities—not just physical ones like walking and running, which most wearables like Fitbit already track.

The ability to literally see and recognize human activities has implications in a number of areas—from developing improved personal assistant applications like Siri to helping researchers explain links between health and behaviour, the researchers say.
