Science- Is the Arctic getting overpopulated?
Updated: 17 May 2013
China gains observer status on the Arctic Council
As lucrative oil reserves, rare mineral deposits and shipping lanes emerge amid the rapidly disappearing Arctic
ice sheet, the eyes of many nations are turning north. This week, the eight member states of the Arctic Council
decided at their meeting in Kiruna, Sweden, to admit six non-Arctic nations as observers, most notably China.
New Scientist examines the implications.
What is this Arctic Council meeting all about?
It marks the eighth time that the Arctic Council – an intergovernmental organisation made up of Canada,
Denmark, Finland, Iceland, Norway, Russia, Sweden and the US – has met. The Council brokers international
agreements on the Arctic, affecting protected areas, climate change research and shipping routes, among other issues.
This week's meeting saw the adoption of a legally binding agreement to alert other members to oil spills and
share responsibility for the clean-up. Greenpeace and other environmental groups that want to see drilling in the
Arctic banned are unimpressed, saying it doesn't go far enough to protect the Arctic.
Others say the Council does play a useful role in protecting the region's environment. Despite its limited legal
clout, says Michael Byers of the University of British Columbia in Canada, the Council forces nations that are
historic rivals – like the US and Russia – to work together. "The environment is driving a lot of cooperation.
Everything that happens is connected to climate change," he says. "The Arctic nations all get that."
What is an observer and who gets to be one?
Observer status simply recognises that a non-Arctic country's interests are significant enough to allow it to sit
in on meetings, albeit in the back row. Observers, for instance, do not have the right to introduce new projects
or raise problems to the council. Six European nations, including the UK, already hold observer status, as do
nine international groups and 11 non-governmental organisations such as the WWF.
Of the seven countries that applied for observer status this year, six – China, Japan, South Korea, India,
Singapore and Italy – were admitted. The European Union's application as a single bloc was denied, although it
is temporarily allowed to participate in meetings. The Canadian government is believed to have blocked its
application, worrying that the EU's ban on seal products would hinder native Inuit hunting traditions.
Why do China and other nations want to be involved?
Byers says it's unlikely that China (or any other nation) plans to use the Arctic as a strategic military position.
The observers' interests are more to do with economics and trade. For instance, under the UN Convention on
the Law of the Sea (UNCLOS), anyone can use shipping lanes on the high seas. As northern passages become
navigable, would-be traders from non-Arctic states hope to have a say in establishing ports, search and rescue
operations, and climate monitoring systems. Arctic fisheries are also important to Asian nations, Byers says.
The US has pushed for the Arctic Council to determine how high-seas fisheries are managed and where
commercial fishing may or may not be allowed in future.
So who owns the Arctic?
UNCLOS rules state that nations have rights over the water stretching up to 200 nautical miles (370 kilometres)
off their coasts. Beyond, the high seas belong to no nation. In the Arctic, only the Lomonosov ridge that runs
between Russia and Canada is still under dispute, with each nation claiming the mineral-rich structure is part of
its continental shelf. Both nations have gathered geological evidence to support their claims. A decision on
who has rights over this particular area is expected in the coming years.
Are the Arctic's resources off limits to everyone but the Arctic Council's eight member states?
The Arctic is believed to hold up to 13 per cent of the world's oil and 30 per cent of its natural gas. Non-Arctic
nations can exploit these resources through private companies that have operations in the Arctic, and through
agreements with Arctic countries. China, for instance, recently signed an agreement with Russian company
Rosneft to drill in the Barents Sea, and has invested heavily in developing Canada's tar sands north of the
Arctic Circle. And late last year, China sent an icebreaker north of the Arctic Circle to scope out the potential for
shipping. China will obviously be a key player in future development of the Arctic, Byers says. Other energy-
poor nations such as Japan are also likely to have a keen interest in developing these resources.
Science- Earth's Twin - Could we bear to look?
Updated: 01 May 2013
Water worlds bring us closer to finding Earth's twin
It is the planetary system that gives two for the price of one.
The star Kepler 62 hosts a pair of planets roughly the size of Earth.
Both are orbiting in the star's habitable zone, the region where temperatures should be neither too hot nor too
cold, but just right for liquid water to exist (see diagram).
Known as Kepler 62e and Kepler 62f, these worlds are being hailed as the most life-friendly planets yet seen
outside our solar system.
But that doesn't mean they would support life quite as we know it.
For one thing, the star is smaller and cooler than our sun, so at least one of the planets would need a potentially
toxic atmosphere to keep warm.
For another, some models suggest that the planets are covered in water, hinting that, if there are extraterrestrials,
they might have evolved almost entirely in marine environments.
Confirming any of these ideas would require taking a more detailed look at the planets, but the system is 1200 light years away.
That's too far for us to take a peek at their atmospheres or get a definite picture of their composition with current instruments.
Still, the discovery brings us closer to seeing a true Earth twin, and it gives clues to what we may find in planetary
systems closer to home as new telescopes come online.
The two planets were discovered by a team working with NASA's Kepler space telescope, which uses dips in
starlight to deduce the presence of planets passing in front of stars.
The team actually saw five planets – b, c, d, e and f – around Kepler 62.
But only e and f are in the star's habitable zone (Science, doi.org/k97).
These worlds are both about 1.5 times the size of Earth, which means they are probably rocky.
Two Earth diameters is the cut-off below which planets should be solid, says Lisa Kaltenegger of Harvard.
Larger than that and planets may be mini-Neptunes – small, gaseous worlds with no defined solid surfaces.
Of course, some astronomers think Kepler 62e and Kepler 62f don't have solid surfaces.
Theoretical models suggest both balls of rock are covered with liquid oceans (arxiv.org/abs/1304.5058).
"If you take Earth, keep the fraction of water the same and just make it bigger, you get enough water to cover the
whole surface, because the volume is four times bigger but the surface doesn't get four times bigger," says Kaltenegger.
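The geometry behind that quote can be checked with a couple of lines of arithmetic. This is a rough sketch only (the exact factors depend on the planet's true radius): water volume grows with the cube of the radius, while the surface it must cover grows only with the square, so the ocean layer deepens.

```python
# Illustrative arithmetic only: volume scales as r^3, surface area as r^2,
# so keeping Earth's water fraction constant deepens the ocean layer.
def scale_factors(radius_ratio):
    volume_ratio = radius_ratio ** 3        # water volume grows with the cube
    surface_ratio = radius_ratio ** 2       # area to spread it over grows with the square
    depth_ratio = volume_ratio / surface_ratio  # equals radius_ratio
    return volume_ratio, surface_ratio, depth_ratio

v, s, d = scale_factors(1.5)  # Kepler 62e and 62f are ~1.5 Earth radii
print(f"volume x{v:.2f}, surface x{s:.2f}, ocean depth x{d:.2f}")
# volume x3.38, surface x2.25, ocean depth x1.50
```

So a planet 1.5 times Earth's radius with the same water fraction holds about 3.4 times the water over only 2.25 times the area, which is why the models can drown the entire surface.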
Kepler 62e is closer to the star than Kepler 62f, and the models hint that it may be hot and muggy all over, its sky
filled with clouds. Kepler 62f, meanwhile, may need an atmosphere rich in powerful greenhouse gases, like carbon
dioxide, to stay warm enough for liquid oceans.
If both planets support alien seas, they may be teeming with life, just as our oceans are, adds William Borucki,
principal investigator for NASA's Kepler mission and leader of the team that found Kepler 62's planets.
The worlds are also massive enough that they should hold on to thick atmospheres, so sea life could perhaps take
to the air.
"We know that at least in our ocean, we have flying fish.
They fly to get away from predators," says Borucki.
"So we might find that they've evolved birds on this ocean planet."
Whether these marine life forms would develop intelligence and craft civilisations like ours is another question,
though, because they would not have easy access to metals, electricity or fire.
Kaltenegger says it is also possible that the Kepler 62 planets have oceans and continents, like Earth.
That raises some fascinating possibilities – including the idea that aliens from the two planets may already have
communicated, or even met. "Imagine looking through a telescope to see another world with life just a few million
miles from your own, or having the capability to travel between them on a regular basis," says astronomer Dimitar
Sasselov, also at Harvard.
"I can't think of a more powerful motivation to become a spacefaring society."
If both worlds host intelligent life, we might be able to snoop on a conversation between Kepler 62e and 62f, says
Jon Jenkins, who is on the Kepler team and with the SETI Institute in Mountain View, California.
If they are using radio signals about as powerful as communications broadcasts on Earth, telescopes here could
pick them up.
It might even be easier to detect a signal going between these planets than broadcasts from a single planet, adds
SETI astronomer Seth Shostak.
"If you wait until they're lined up with respect to us, you might be able to listen in when we're in the path of the
beam," Shostak says.
"Some people say that hearing a signal from another world is like the odds of two bullets hitting each other. But
that's a lot easier to do if you know the bullets are being fired towards you."
The Kepler team also announced last week another potentially life-friendly planet orbiting a star very similar to
ours. Kepler 69c is about 1.7 times the size of Earth, so is also likely to be rocky (The Astrophysical Journal,
doi.org/k9r), says Kepler team member Thomas Barclay of the Bay Area Environmental Research Institute.
Kepler 69c orbits its star at about the same distance that Venus orbits the sun.
The planet could also be a water world, or it could be a super-Venus, with a thick, toxic atmosphere that keeps the
surface too hot for liquid oceans.
Unfortunately, the Kepler 69 system is 2700 light-years away – again too distant for astronomers to determine the
make-up of its atmosphere.
While they remain enigmatic, Kepler 62 and 69 are giving us a taste of what we might expect to find with future
exoplanet surveys, such as NASA's Transiting Exoplanet Survey Satellite (TESS), scheduled for launch in 2017.
Kepler has been looking into deep space in one region of the sky, but TESS will use an array of wide-field cameras
to scan roughly 2 million of the stars closest to ours, searching for Earth-size exoplanets in the habitable zones.
The new Kepler worlds also help to answer the question the mission set out to address: how many stars have
potentially habitable planets?
"Finding these three planets – even if you ignore the others that Kepler has found – shows you it's at least one in a
thousand," says Shostak.
"That means there are about a billion habitable worlds in the Milky Way alone.
Seems like a lot of real estate to me."
This article appeared in print under the headline "Water worlds swell hopes of finding life"
Science- Needs diversity, needs more women
Updated: 24 Apr 2013
How can we get more working-class women into science?
Targeting economically disadvantaged people should be our priority if we're serious about increasing diversity in science
Gender interacts with class disadvantage to hold some women back from a career in science, while others go forward.
Science needs diversity – everyone agrees on that. We all shift uncomfortably when a child, asked what a scientist
looks like, draws a picture of an elderly white man in a lab coat.
In recent years a number of entirely laudable initiatives have sprung up with the express aim of getting more
women into science. But gender diversity is a bit more complicated than it first appears.
If we focus solely on the numbers of women entering science careers, we're in danger of unwittingly encouraging
the same old values that ensure a steady stream of middle-class women from affluent homes – homes where
higher education is seen as the obvious choice.
If we don't work on the breadth of our encouragement, we're in danger of ignoring a vast pool of potential: young,
economically disadvantaged women who at the moment may well feel science is irrelevant to their lives.
Young, middle-class women are, in the main, doing fine. The Institute for Public Policy Research's recent report,
Great Expectations – Exploring the Promises of Gender Equality claims that the gender pay gap has effectively
disappeared among 20-something professionals. Pay inequality, found the IPPR, is greater within each gender than
between men and women.
But the IPPR recommends that we should be cautious of celebrating the increasing numbers of women at the top
as a marker of equality, and instead develop "a more nuanced understanding of how gender interacts with class
disadvantage" to hold some women back, while others go forward.
We would support this argument.
Though not disadvantaged, we're both distinctly working class.
One of us never made it into a science career at all, while the other has frequently found that her social
background, not her gender, has made her the minority.
The numbers back this up. In its spring 2013 summary, King's College London's ASPIRES project notes that 23% of
socially advantaged pupils aspire to scientific careers, compared with only 9% of economically disadvantaged pupils.
So what's the solution? Targeting economically disadvantaged people of both genders should be our priority if
we're to increase the diversity of science.
Diverse role models in science, technology and engineering are a wonderful thing, but we need much more.
Indeed, the IPPR found that successful female role models don't resonate with economically disadvantaged
women – not when a professional career seems so out of reach.
Disadvantaged young people increasingly view a university education as nothing but a source of massive debt
leading to almost certain unemployment, and it will take more than a few working-class accents on TV to change that.
We need to show young people that a career in science is achievable even if you're not the smartest or wealthiest
child in your class. We also need careers advice that demonstrates to youngsters the diversity of science careers –
that it isn't all white coats and pipettes, and that a scientific education can lead to a huge range of careers outside
of the lab.
ASPIRES reports that the greatest driver to a science-based career is a family environment in which science is
talked about, its role in modern life is celebrated, and where scientific hobbies or cultural activities are engaged in:
an environment where science is seen as "part of who we are".
The best "science capital" by far is to witness a close family member succeed in a scientific career.
This normalises science as a career option, making it seem both conceivable and achievable.
But if you are disadvantaged, you are far less likely to know any scientists.
You might desperately want to be a scientist when you're eight, but that interest will dwindle if all the dads you see
are manual workers, and the mums are in low-paid, part-time work.
Being a scientist may seem like something from another world, impossibly out of reach.
A love of science is widespread among youngsters of all social backgrounds, but if a scientific career is still seen
as something for "other people" then love is not enough.
Science capital requires cash. Some 33% of students surveyed for ASPIRES said their scientific aspirations were
influenced by out-of-school hobbies and activities.
Trips to cultural events, extra-curricular activities, magazines, and internet access are all a vital influence.
But for those without easy access to such things, how can we make science-based careers seem obtainable? We
can expand upon the excellent projects aimed at getting women and girls into science.
We need more attention on the initiatives targeting disadvantaged youngsters, of both genders.
Projects doing just that – the Access Project, Generating Genius, and the Prince's Trust's recently announced
Launchbox initiative, in collaboration with the Science Museum – go a long way towards increasing diversity.
Such projects don't yet have the media profile of female-focused initiatives, although Launchbox receives the
financial support and public endorsement of Will.I.Am, which will surely help.
While arguing for more focus on economic rather than gender disadvantage, we don't claim that gender issues in
science are absent.
As soon as women have children, financial circumstances all too often force them out of the workplace – likewise
men who take on the role of primary carer. Individual cases of gender discrimination do regrettably occur, as do
cases of racial, sexual orientation, age, disability and size-based discrimination.
Focus on helping one group into science careers should never be at the cost of ignoring the needs of another.
We need to embed the idea from early on that science is for everyone. In the same way that the whole of society
benefits from science, the whole of society can contribute.
We need to assure people from poor economic backgrounds that science isn't beyond their reach or "just for
other people".
And when they turn up at our receptions, education days and careers talks, we need to welcome them into our too-
frequently middle-class enclaves and listen.
Science- "Beam me up, Scotty" - Soyuz to Space Station in under 6 hours
Updated: 02 Apr 2013
Speedy astronauts make fastest trip yet to the ISS
Victoria Jaggard, physical sciences news editor
(Image: NASA/Carla Cioffi)
Now you can get to the International Space Station in less time than it takes to fly from London to New York.
A Russian Soyuz capsule usually takes at least two days to rendezvous with the ISS, because of the carefully
timed dance of manoeuvres that must take place for a spaceship to safely meet the orbiting laboratory. Using a
new launch process, three astronauts have now made the trip in just under 6 hours.
Russian cosmonauts Pavel Vinogradov and Alexander Misurkin and NASA astronaut Chris Cassidy launched at
20:43 GMT on 28 March from the Baikonur Cosmodrome in Kazakhstan. Soaring high above the western coast
of Peru, they successfully docked with the space station at 02:28 GMT on 29 March - a flight time of just 5 hours, 45 minutes.
After a slight delay getting the pressure equalised between the two craft, the Soyuz hatch opened at 04:35 GMT.
With a flurry of hugs and camera flashes, the record-setting spacefarers greeted the three crew members already aboard the ISS.
The Soyuz itself has not been supercharged and wasn't flying any quicker than normal. The shorter trip
simply required mission managers to be more precise.
When a Soyuz capsule enters orbit, it is on an orbital path a bit lower than the space station's, which means it
circles the Earth faster. As the craft closes in on the ISS, a series of thruster burns boosts the capsule into the
right orbit for docking.
Getting Soyuz to match the station's altitude and speed is a tricky business.
If the capsule has a couple of days before docking, the thruster burns can be spaced out over 34 orbital laps.
Shrinking the time between launch and docking gives the astronauts just 4 orbits to meet up with the ISS,
according to NASA.
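The catch-up works because a lower orbit is a faster orbit. As a rough sketch using Kepler's third law for a circular orbit (the altitudes below are illustrative assumptions, not the mission's actual flight parameters):

```python
import math

MU_EARTH = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_371_000.0      # mean Earth radius, m

def orbital_period_minutes(altitude_km):
    """Circular-orbit period from Kepler's third law: T = 2*pi*sqrt(a^3 / mu)."""
    a = R_EARTH + altitude_km * 1000.0  # semi-major axis = radius of the circle
    return 2 * math.pi * math.sqrt(a**3 / MU_EARTH) / 60.0

# Assumed altitudes for illustration: ISS near 400 km, Soyuz inserted lower.
iss = orbital_period_minutes(400)
soyuz = orbital_period_minutes(200)
print(f"ISS: {iss:.1f} min/lap, Soyuz: {soyuz:.1f} min/lap, "
      f"gain: {iss - soyuz:.1f} min/lap")
```

At these assumed altitudes the lower capsule completes each lap roughly four minutes sooner, so every orbit closes the phase gap to the station - which is why the rendezvous can be compressed into four laps if the launch timing and burns are precise enough.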
The speedier meeting also means the space station has to do some of the work.
On 21 March an uncrewed cargo vehicle already docked with the ISS fired its thrusters to shift the station about
4.8 kilometres higher, putting it in the right position to meet the Soyuz craft.
"Conducting a single-day launch-to-docking takes considerable amounts of planning and maneuvering of the
space station in order to set both the station and the Soyuz on the proper orbit so they can chase each other,"
says NASA spokesman Joshua Buck at the agency's headquarters in Washington DC.
"It also requires a compressed timeline for the Soyuz crew, with them having to conduct two days' worth of
operations within 6 hours."
For safety reasons the astronauts must stay in restrictive pressurised space suits during the faster trip, but the
6-hour journey drastically reduces the time they have to spend in the cramped Soyuz capsule, as well as the
amount of food and fuel they need for the trip.
Now that fast-track launches have been shown to work for Soyuz flights, ISS managers can decide whether to
use the method on a case-by-case basis, says Buck. He adds that SpaceX's Dragon capsule will continue to take
the slow road to the space station.
Science- A vaccine is now available to prevent animal foot and mouth virus infection
Updated: 28 Mar 2013
Vaccine promises to cull foot and mouth slaughter
Smouldering pyres of cattle sacrificed to halt foot and mouth disease could be consigned to history, thanks to a
new synthetic vaccine.
Crucially, it is the first that will allow vets to distinguish between animals that have been vaccinated and those that
have been infected by the virus.
That means it could overcome a key objection to the use of vaccines to deal with outbreaks of the disease.
In 2001, during a foot and mouth disease epidemic, the UK government was given the option of vaccinating
cattle to contain outbreaks and prevent the virus from spreading further.
It rejected the vaccine, choosing instead to slaughter 6 million animals at a cost of £8 billion.
It chose this option because the only vaccine available at the time was a whole, deactivated version of the
foot and mouth virus.
Cattle react to such vaccines by producing the same spectrum of protective antibodies as they would if they were
infected by the live virus. In other words, so long as the virus is dormant, vaccinated cattle are impossible to
distinguish from infected ones.
That's a big problem for countries that export cattle. They can only do so if they are declared free of foot and
mouth, but to testers, a vaccinated herd looks infected.
Now, Bryan Charleston of the Pirbright Institute in Woking, UK, and his colleagues have developed a synthetic
vaccine that produces a different antibody signature in cattle.
The new vaccine consists of the inanimate outer shell of the virus, gutted of the genetic core that allows the live
virus to infect cells, multiply and spread.
To make it, Charleston inserted the genes coding for the outer shell and the enzymes that assemble it into moth cells.
The cells pumped out empty viral shells.
Vaccinated herds only produce antibodies against this shell, whereas genuinely infected animals would
produce antibodies against the core as well.
Rich and poor countries alike could benefit from the new vaccine.
The old vaccine is in short supply and can only be made in secure labs – mostly in Europe and the US – and
must be kept refrigerated to be functional.
The synthetic one can be made without these precautions and stored at temperatures of up to almost 60 °C.
Charleston and his colleagues tested the vaccine on eight calves, which were protected from infection as
efficiently as if they had been given the existing vaccine. "It induced protective antibodies that neutralise the virus for up to nine months," says Charleston.
The vaccine only protects against the "A" serotype of the virus, one of seven that exist worldwide, but new
versions are almost ready to protect against two other serotypes.
This includes the "O" strain, which accounts for 80 per cent of the world's outbreaks.
It was to blame for the 2001 UK outbreak, and massive epidemics three years ago in Japan and South Korea.
The vaccine against each serotype will need to be tested successfully in herds of at least 17 animals to win
European approval, and Charleston warns it could be seven years before the vaccines are available.
"Once available, vaccines of this type would have clear advantages over current technology as a possible
option to help control the disease should we ever have another outbreak," says Nigel Gibbens, the UK's chief veterinary officer.
Journal reference: PLoS Pathogens, DOI: 10.1371/journal.ppat.1003255
Science- Deepest point in the Ocean is teeming with life
Updated: 19 Mar 2013
Deepest point in the ocean is teeming with life
Hollywood director James Cameron found little evidence of life when he descended nearly 11,000 metres to the
deepest point in the world's oceans last year.
If only he had taken a microscope and looked just a few centimetres deeper.
Ronnie Glud at the University of Southern Denmark in Odense, and his colleagues, have discovered unusually
high levels of microbial activity in the sediments at the site of Cameron's dive – Challenger Deep at the bottom of
the western Pacific's Mariana Trench.
Glud's team dispatched autonomous sensors and sample collectors into the trench to measure microbial activity
in the top 20 centimetres of sediment on the sea bed.
The pressure there is almost 1100 times greater than at the surface.
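That figure is consistent with a back-of-envelope hydrostatic estimate, P = ρgh. The density and depth used below are rounded assumptions (real seawater density rises slightly with depth), so this is a sanity check rather than an exact calculation:

```python
# Back-of-envelope hydrostatic pressure: P = rho * g * h (assumed round values).
RHO_SEAWATER = 1030.0   # kg/m^3, rough mean; actual density increases with depth
G = 9.81                # m/s^2
ATM = 101_325.0         # Pa per standard atmosphere

depth = 10_900.0        # m, approximate depth of Challenger Deep
pressure_atm = RHO_SEAWATER * G * depth / ATM
print(f"~{pressure_atm:.0f} atmospheres")  # on the order of 1100, as the article says
```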
For anything calling the trench home, however, finding food is an even greater challenge than surviving the pressure.
Any nourishment must come in the form of detritus falling from the surface ocean, most of which is consumed by
other organisms on the way down.
Only 1 per cent of the organic matter generated at the surface reaches the sea floor's abyssal plains, 3000 to 6000
metres below sea level.
So what are the chances of organic matter making it even deeper, into the trenches that form when one tectonic
plate ploughs beneath another?
Surprisingly, the odds seem high.
Glud's team compared sediment samples taken from Challenger Deep and a reference site on the nearby abyssal plain.
The bacteria at Challenger Deep were around 10 times as abundant as those on the abyssal plain, with every
cubic centimetre of sediment containing 10 million microbes.
The deep microbes were also twice as active as their shallower kin.
These figures make sense, says Glud, because ocean trenches are particularly good at capturing sediment.
They are broad as well as deep, with a steep slope down to the deepest point, so any sediment falling on their
flanks quickly cascades down to the bottom in muddy avalanches.
Although the sediment may contain no more than 1 per cent organic matter, so much of it ends up at Challenger
Deep that the level of microbial activity shoots up.
"There is much more than meets the eye at the bottom of the sea," says Hans Røy of Aarhus University in Denmark.
Last year, he studied seafloor sediments below the north Pacific gyre – an area that, unlike Challenger Deep, is
almost devoid of nutrients.
Remarkably, though, even here Røy found living microbes.
"With the exception of temperatures much above boiling, bacteria seem to cope with everything this planet can
throw at them," he says.
Journal reference: Nature Geoscience, DOI: 10.1038/ngeo1773
Science- Too Many People -Too Little Water
Updated: 05 Mar 2013
Splash and grab: The global scramble for water
04 March 2013 by Fred Pearce
What we call land-grabbing is often more about access to irrigation.
We urgently need to know how much is being purloined
As French troops battled with jihadists in Mali at the start of the year, some people had reason to be thankful for the chaos.
Two million fishers, farmers and herders live on the inner delta of the River Niger, a huge wetland on the fringe of the Sahara.
They hoped the fighting would end foreign investors' plans for irrigation projects that would suck water out of
the river and destroy their livelihoods.
Even before fighting broke out, rumours of impending insurrection had encouraged the food giant Associated
British Foods to abandon a massive sugar cane project.
Since then, "land-grabbers" from the US, Libya, China and elsewhere have departed.
The Mali government's hope of using the river to irrigate up to a million hectares of desert looks doomed.
The wetland – and the people who prosper from it – are saved. For now.
But the same is not true elsewhere.
Wetlands and the people who rely on them are under pressure across Africa.
I have interviewed Kenyans angry at a US property tycoon draining their swamp on the shores of Lake Victoria
and fencing off their wet pastures for a rice farm.
I have also heard the concerns of tribal people in a remote corner of western Ethiopia, where Indian and Saudi
agribusinesses are taking their forests and capturing the headwaters of the Nile.
Usually this is called land-grabbing, but it is as much about water.
In a world of drying rivers and plummeting water tables – and where a quarter of farm production is limited by water shortages – water is valuable stuff.
As Willem Buiter, chief economist at Citigroup, has argued: "Water will become eventually the single most
important commodity asset class, dwarfing oil, copper, agricultural commodities and precious metals."
Yet governments rarely recognise this. In a study of land-grab contracts, Lorenzo Cotula of the London-based
International Institute for Environment and Development found that land-grabbers typically demand water rights, and usually get them.
But governments rarely put a price on the water or limit how much can be taken, even in water-stressed countries.
If your dam or borehole is big enough, you can have every last drop – regardless of the impact on locals.
We urgently need to know more about the scale of this capture, not least because some of the land-grabbers'
favoured crops are among the thirstiest, such as sugar and rice.
But if information on land grabs is poor, then that on water grabs is far worse.
Most of what we know about land grabs comes from press reports collated by GRAIN, an NGO based in
Barcelona, Spain, that campaigns for peasant farmers.
This leads to some over-reporting.
For instance, many Saudi schemes in other Islamic nations have never got beyond grand ministerial declarations.
Under-reporting is a problem, too. I have flown over hundreds of kilometres of tribal lands in northern Paraguay
acquired by Brazilian ranchers but absent from land-grabbing databases.
Similarly missing are grabs by domestic companies in partnership with foreign ones, such as the sugar
plantations that have obliterated Cambodian family rice farms to supply the Tate & Lyle sugar factory in the UK.
For what it's worth, Oxfam puts the total amount of land promised to foreign companies at more than 200 million hectares.
A network of researchers known as the Land Matrix, coordinated by Ward Anseeuw of the University of Pretoria
in South Africa, estimates that perhaps a third of these deals have been completed, with most of those now only
in the early stages of cultivation.
The first attempt to turn this land data into estimates of water grabs was made in January by Maria Cristina Rulli
of the Polytechnic University of Milan in Italy and colleagues (PNAS, vol 110, p 892).
The headline stat was that up to 450 cubic kilometres of water are "appropriated" globally by land-grabbers each year.
At least two-thirds of this is rain falling onto the grabbed land, but the rest is extracted from rivers or aquifers for
irrigation – around 5 per cent of total global extractions for irrigation.
That's a lot.
But there are two problems with this analysis.
The first concerns the land grabs that were included.
Rather than using the Land Matrix data, the researchers used miscellaneous grabs listed by GRAIN and other sources.
Their inclusion criteria are unclear, but this skews the data, turning the UK from a middle-ranking land-grabber to
the biggest of them all, for instance.
The second, bigger, problem is their conversion of land grabs into water grabs.
Land contracts merely allow access to water, albeit often unlimited access.
So the authors assessed how much water would be needed to grow the intended crop on all of the grabbed land.
The trouble with this is that few if any land-grabbers have come close to achieving that so far. And many projects,
including those listed from Mali, may never happen.
Rulli told me that she didn't consider the actual extent of cultivation in her calculations.
But that makes the claim that 450 cubic kilometres of water have already been grabbed extremely misleading.
The figure may represent a theoretical maximum, but right now it could be orders of magnitude too high.
The findings would be best ignored, except they are the only peer-reviewed global water grab assessment in existence and are already being quoted.
The authors are right to highlight that there are a huge number of water grabs, many of them in hungry countries.
But the fact is we are no closer to quantifying how much water is being taken.
We must do better.
This article appeared in print under the headline "Splash and grab"
Fred Pearce is a consultant on environmental issues for New Scientist.
His latest book is The Landgrabbers: The new fight over who owns the Earth (Eden Project Books in the UK, Beacon Press in the US)
Science- For many, social development still involves merely surviving while others profit from them
Updated: 05 Mar 2013
All work and no play: Why Neanderthals were no Picasso
27 February 2013 by April Nowell
Neanderthals had shorter childhoods than us, which profoundly affected their ability to make symbolic art
WATCHING a group of 5-year-olds chasing each other in a park, it is easy to forget that child's play is a serious business.
Through play, children figure out how to interact socially, practise problem-solving and learn to innovate, skills
that will be indispensable to them as adults.
But if experiences gained during play are so crucial for cognitive development, what would it mean if a species
had a shorter childhood?
This is exactly the case for our closest relatives, the Neanderthals.
Behaviourally they were very similar to us, with some important differences which, to paraphrase Sigmund
Freud, may stem from their childhoods.
Neanderthals evolved in Europe some 250,000 years ago, spread to the Middle East and eventually went extinct about 30,000 years ago.
Much like their human counterparts they made complex tools and hunted large game animals.
But they also ate fish, tortoise, hare and a variety of plants, adapting their diets to local conditions.
They had language, created fire, at least occasionally showed compassion for others in their group and
sometimes buried their dead.
The single greatest difference between Neanderthals and humans that we can see in the archaeological record,
however, lies in both the quantity and nature of the artefacts they imbued with an obvious symbolic dimension.
Humans today live in what we call a symbolic culture.
All the objects around us have a symbolic dimension.
The clothes we wear, for instance, send out signals about us that are unrelated to their practical function.
We form symbolic relationships where no biological relationship exists, with a husband, sister-in-law, godchild,
blood-brother, for example.
Language, of course, is another key example: the relationship between the words and the objects and concepts
to which they refer is completely arbitrary, and that is the essence of a symbol.
Neanderthals created few symbolic artefacts.
Before about 50,000 years ago there is very little evidence of any that stand up to scientific scrutiny.
A few Neanderthal sites dating from 50,000 to 30,000 years ago contain some beads, pigments, raptor talons and
indirect evidence for feathers – all presumably for some kind of body decoration.
Burst of creativity
But these artefacts pale next to the record of symbolic material culture created by early humans who first evolved
in Africa 200,000 years ago.
Even if we focus on just the period 50,000 to 30,000 years ago we find that early humans created bone flutes, the
breathtaking cave paintings of the Chauvet cave in France, imaginative personal ornaments such as ivory beads
carved to look like shells, and figurines incised with geometric patterns.
Two examples that stand out for me are the lion-human statues from the Swabian Jura region of Germany
(currently on display at the Ice Age Art exhibition at the British Museum, London) and the painting of a bison-
woman from Chauvet, both fantastical, imaginary creatures.
The ability to reproduce a three-dimensional form on a two-dimensional surface, or to "see" a figure in ivory,
requires a completely different way of imagining the world. Neanderthals created nothing like these artefacts and
I believe this can be explained by the games they played, or more correctly did not play, as children.
Neanderthals matured more slowly than earlier hominins such as Homo erectus, but more quickly than modern humans.
As a result, they had a shorter childhood than us.
We know this because Neanderthals occasionally buried their dead so we have a relatively large collection of
Neanderthal infants and children from which to measure their development.
One study in particular was a game changer. In 2010, Tanya Smith from Harvard University and colleagues
studied Neanderthal and early human teeth, counting daily growth lines to calculate the exact age.
By comparing this to the individual's patterns of growth, Smith concluded that Neanderthals grew relatively
rapidly and spent less time dependent on their parents.
Why should this make a difference to the minds of Neanderthals compared to modern humans?
To understand this, we need to take a closer look at childhood. In general, species like us, with longer
dependency periods, tend to play more and engage in many more types of play.
This influences our minds, because play is an important part of the healthy cognitive development of many
animals, not just humans, and being deprived of opportunities to play can be detrimental.
For example, a study on rats demonstrated that those raised normally but without access to playmates suffered
from the same kinds of problems as rats with damage to their prefrontal cortex, a region of the brain involved in
social behaviour, abstract thinking and reasoning.
In other words, play shapes the brain. But the kind of brain we have also shapes the type of play we engage in.
Humans are unique in that we engage in fantasy play, part of a package of symbol-based cognitive abilities that
includes self-awareness, language and theory of mind. Its benefits include creativity, behavioural plasticity,
imagination and the ability to plan.
Being able to imagine novel solutions to problems and to work out their consequences before implementing
them would have been an enormous advantage for our early human ancestors – this is exactly what we are
practising when we play "what if" games. From what we can tell, it is unlikely that Neanderthals were able to
engage in fantasy play, and it is this level of imagination that underlies the differences in material culture between
Neanderthals and early humans.
We need to add one final piece to the puzzle: the Neanderthal brain.
Neanderthals experienced accelerated brain growth compared to us, according to research by Simon Neubauer
and Jean-Jacques Hublin from the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany, who
concluded that this meant the environment had less impact on the connectivity of their developing brains.
Taking a modern example, accelerated brain growth in children with autism lessens their ability to read social
cues and engage in fantasy play.
The same may have been true for Neanderthals.
This leads us to believe that their perception of the world, and their level of engagement with it, was different from ours.
I think that it is only through years of "training" their unique brains through fantasy play in childhood that
modern humans were able to create fantastical symbolic artworks like the Chauvet bison-woman.
The shorter Neanderthal childhood, combined with their lack of complex fantasy play, influenced the adults they
became, and the artefacts they left behind.
This article appeared in print under the headline "All work and no play left little time for art"
April Nowell is an archaeologist at the University of Victoria, Canada. She focuses on the origins of art, symbol use and language, and on the emergence of modern human behaviour. Her book chapter "Childhood, play and the evolution of cultural capacity in Neanderthals and modern humans" is in the forthcoming book The Nature of Culture (Springer)
Science- Am I just a worker bee or is there a higher purpose in life?
Updated: 05 Mar 2013
The self: Why are you like you are?
28 February 2013 by Michael Bond
You're so vain, you probably think your self is about you.
The truth is slightly more complicated
Read more: "The great illusion of the self"
THE first time a baby smiles, at around 2 months of age, is an intense and beautiful moment for the parents.
It is perhaps the first sure sign of recognition for all their love and devotion.
It might be just as momentous for the baby, representing their first step on a long road to identity and self-awareness.
Identity is often understood to be a product of memory as we try to build a narrative from the many experiences
of our lives.
Yet there is now a growing recognition that our sense of self may be a consequence of our relationships with others.
"We have this deep-seated drive to interact with each other that helps us discover who we are," says
developmental psychologist Bruce Hood at the University of Bristol, UK, author of The Self Illusion (Constable, 2012).
And that process starts not with the formation of a child's first memories, but from the moment they first learn to
mimic their parents' smile and to respond empathically to others.
The idea that the sense of self drives, and is driven by, our relationships with others makes intuitive sense.
"I can't have a relationship without having a self," says Michael Lewis, who studies child development at the
Robert Wood Johnson Medical School in New Brunswick, New Jersey.
"For me to interact with you, I have to know certain things about you, and the only way I can get at those is by
knowing things about me."
There is now evidence that this is the way the brain works.
Some clues come from people with autism.
Although the disorder is most commonly associated with difficulties in understanding other people's nonverbal
social cues, it also seems to create some problems with self-reflection: when growing up, people with autism are
slower to learn to recognise themselves in a mirror and tend to form fewer autobiographical memories.
Tellingly, the same brain regions – areas of the prefrontal cortex – seem to show reduced activity when autistic
people try to perform these kinds of tasks, and when they try to understand another's actions.
This supports the idea that the same brain mechanism underlies both types of skills.
Further support for the idea comes from the work of Antonio Damasio at the University of Southern California,
who has found that social emotions such as admiration or compassion, which result from a focus on the
behaviour of others, tend to activate the posteromedial cortices, another set of brain regions also thought to be
important in constructing our sense of self (PNAS, vol 106, p 8021).
The upshot is that my own self is not so much about me; it's as much about those around me and how we relate
to one another – a notion that Damasio calls "the social me".
This has profound implications. If a primary function of self-identity is to help us build relationships, then it
follows that the nature of the self should depend on the social environment in which it develops.
Evidence for this comes from cultural psychology.
In his book The Geography of Thought (Nicholas Brealey, 2003), Richard Nisbett at the University of Michigan
presented lab experiments suggesting that Chinese and other east Asian people tend to focus on the context of
a situation, whereas Westerners analyse phenomena in isolation – different outlooks that affect the way we think.
Researchers examining autobiographical memory, for example, have found that Chinese people's recollections
are more likely to focus on moments of social or historical significance, whereas people in Europe and America
focus on personal interest and achievement.
Other studies of identity, meanwhile, have found that Japanese people are more inclined to tailor descriptions of
themselves depending on the situation at hand, suggesting they have a more fluid, less concrete sense of
themselves than Westerners, whose accounts tend not to rely on context in this way.
Such differences may emerge at an early age. Lewis points to anthropological reports suggesting that the
"terrible twos" – supposedly the time when a child develops an independent will – are not as dramatic in cultures
less focused on individual autonomy, which would seem to show that culture sculpts our sense of self during
our earliest experiences.
These disparities in outlook and thinking imply that our very identities – "what it is that I am" – are culturally determined.
"I'm a male, I'm an academic, I'm a senior, I'm married, I'm a father and grandfather: all of these things that I define
myself as are really cultural artefacts," says Lewis.
Clearly there is no single pan-cultural concept of selfhood. Yet Hazel Markus, who studies the interaction of
culture and self at Stanford University in California, points out that human personalities do share one powerful
trait: the capacity to continually shape and be shaped by whatever social environment we inhabit.
While the evidence for "the social me" continues to mount, not everyone is convinced that it is always helpful for us.
To the writer and psychologist Susan Blackmore, the self may be a by-product of relationships.
It may simply unfold "in the context of social interaction and learning to relate to others, which may inevitably
lead you to this sense that I am in here" while bringing some unfortunate baggage along with it.
She points out that the self can compel us to cling neurotically to emotions and thoughts that undermine our well-being.
Letting it all go, however, would mean undoing the habit of a lifetime.
This article appeared in print under the headline "Why are you?"
Michael Bond is a New Scientist consultant in London
Science- Sleep Study, Jet Lag, Circadian Rhythms and Sweet Dreams
Updated: 12 Feb 2013
Sleep and dreaming: Why can't we stay awake 24/7?
From fruit flies to dolphins, every creature needs its shut-eye.
Why we sleep is one of the biggest mysteries in biology, though the clues lie in the brain
Read more: "Sleep and dreaming: The how, where and why"
WE SPEND about a third of our life doing it. If deprived of it for too long we get physically ill.
So it's puzzling that we still don't really know why it is that we sleep.
On the face of it the answer seems obvious: so that our brains and bodies can rest and recuperate.
But why not rest while conscious, so that we can also watch out for threats?
And if recuperation means things are being repaired, why can't that take place while we are awake?
Scientists who study how animals eat, learn or mate are unburdened by questions about the purpose of these activities.
But for sleep researchers the big "Why?" is maddeningly mysterious.
Sleep is such a widespread phenomenon that it must be doing something useful.
Even fruit flies and nematode worms experience periods of inactivity from which they are less easily roused,
suggesting sleep is a requirement of the simplest of animals.
But surveying the animal kingdom reveals no clear correlation between sleep habits and some obvious
physiological need. In fact there is bewildering diversity in sleep patterns.
Some bats spend 20 hours a day slumbering, while large grazing mammals tend to sleep for less than 4 hours a day.
Horses, for instance, take naps on their feet for a few minutes at a time, totalling only about 3 hours daily. In
some dolphins and whales, newborns and their mothers stay awake for the entire month following birth.
All this variation is vexing to those hoping to discover a single, universal function of sleep. "Bodily changes in
sleep vary tremendously across species," says Marcos Frank at the University of Pennsylvania in Philadelphia.
"But in all animals studied so far, the [brain] is always affected by sleep."
So most sleep researchers now focus on the brain.
The most obvious feature of sleep, after all, is that consciousness is either lost, or at least, in some animals, reduced.
And lack of sleep leads to cognitive decline, not only in humans, but also rats, fruit flies and pretty much every other species studied.
Much of our slumber is spent in slow-wave sleep, also known as stage 3 or deep sleep (see diagram), during
which there are easily detectable waves of electrical activity across the whole brain, caused by neurons firing in
synchrony about once a second.
This is interspersed with other phases, including rapid-eye-movement sleep, where brain activity resembles
that seen during wakefulness, and transitional stages between the two states.
It is slow-wave sleep that is generally thought to do whatever it is that sleep actually does.
As well as appearing to be the most different to the brain's waking activity, the waves are larger at the beginning
of sleep, when sleep need is presumably greatest, and then gradually reduce.
And if you go without sleep for longer than usual, these slow waves are larger when you do eventually nod off.
Explanations for sleep fall into two broad groups: those related to brain repair or maintenance, and those in
which the sleeping brain is thought to perform some unique, active function.
There has been speculation over the maintenance angle for over a century.
It was once a fashionable idea that some kind of toxin built up in the brain during our waking hours which,
when it reached a certain level, made sleep irresistible.
Such a substance has never been found, but a modern version of the maintenance hypothesis says that during
the day we deplete supplies of large molecules essential for the operation of the brain, including proteins, RNA
and cholesterol, and that these are replenished during sleep.
It has been found in animals that production of such macromolecules increases during slow-wave sleep,
although critics point out that the figures show a mere correlation, not that levels of these molecules control sleep.
The unique function school of thought also has a long pedigree.
Sigmund Freud proposed that the purpose of sleep was wish fulfilment during dreaming, although scientific support for this notion failed to materialise.
There is good evidence, however, for sleep mediating a different kind of brain function - memory consolidation.
Memories are not written in stone the instant an event is experienced. Instead, initially labile traces are held as
short-term memories, before the most relevant aspects of the experience are transferred to long-term storage.
Several kinds of experiment, in animals and people, show that stronger memories form when sleep takes place
between learning and recall.
Some of the most compelling support for this idea came when electrodes placed into rats' brains showed small
clusters of neurons "replaying" patterns of activity during sleep that had first been generated while the rats had
been awake and exploring. "Memory representations are reactivated during sleep," says Jan Born at the University of Tübingen in Germany.
Many labs remain focused on how memory systems are updated during sleep, but since 2003 a new idea has
been gaining traction.
It straddles both categories of theory, concerned as it is with neuronal maintenance and memory processing.
The hypothesis concerns synapses, the junctions between neurons through which they communicate.
We know that when we form new memories, the synapses of the neurons involved become stronger.
The idea is that while awake we are constantly forming new memories and therefore strengthening synapses.
But this strengthening cannot go on indefinitely: it would be too expensive in terms of energy, and eventually
there would be no way of forming new memories as our synapses would become "maxed-out".
The proposed solution is slow-wave sleep. In the absence of any appreciable external input, the slow cycles of
neuronal firing gradually lower synaptic strength across the board, while maintaining the relative differences in
strength between synapses, so that new memories are retained (see diagram).
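A toy model makes the downscaling idea concrete. This is purely illustrative: the weights and the uniform 0.8 scaling factor are invented for the sketch, not taken from the studies cited.

```python
# Toy illustration of synaptic homeostasis: during slow-wave sleep,
# every synaptic weight is scaled down by the same factor. Absolute
# strengths fall (freeing energy and headroom for new learning), but
# the rank order of synapses -- which memories are strongest -- survives.
def downscale(weights, factor=0.8):
    return [w * factor for w in weights]

waking = [1.0, 2.5, 4.0, 3.2]          # strengths after a day of learning
after_sleep = downscale(waking)

assert sum(after_sleep) < sum(waking)  # total synaptic strength drops

ranks = lambda ws: sorted(range(len(ws)), key=ws.__getitem__)
assert ranks(after_sleep) == ranks(waking)  # relative ordering is kept
```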
There is now much evidence to support what is known as the "synaptic homeostasis hypothesis". In humans,
brain scans show that our grey matter uses more energy at the end of the waking day than at the start. Giulio
Tononi and Chiara Cirelli of the University of Wisconsin-Madison, who proposed the hypothesis, have shown
that in rodents and fruit flies, synaptic strength increases during wakefulness and falls during sleep. The pair
have also shown that when people learn a task that uses a specific part of the brain, that part generates more
intense slow waves during subsequent sleep.
This kind of downscaling is best done "offline", says Tononi.
"You can activate your brain in all kinds of ways, because you don't need to behave or learn."
Synaptic homeostasis has not won over everyone, but it is certainly getting a great deal of attention. It is, says
Jan Born, "currently the most influential [theory] among sleep researchers". Frank, however, would like Tononi
and Cirelli to provide more detail about mechanisms.
Neither is Jerry Siegel convinced. A neuroscientist at the University of California, Los Angeles, Siegel is sticking
with his provocative theory that sleep is simply an adaptive way of saving energy when not doing essential
things, such as foraging or breeding, which are in fact more dangerous than napping someplace safe.
For Siegel, sleep habits reflect the variety of animal lifestyles, with different species sleeping for different purposes.
It's certainly possible that a phenomenon as complex as sleep performs a multitude of functions, agrees Jim
Horne, who studies the impact of sleep loss on health at Loughborough University, UK.
And, given the complexity of the human brain, our sleep may well be among the most complicated of all.
Perhaps then it should be no surprise that theories of sleep function are so diverse. Fathoming whether the big
"Why?" of sleep will yield a single, succinct solution or require myriad answers is likely to keep biologists up at
night for a little while yet.
Liam Drew is a neuroscientist at Columbia University in New York
Science- World's Largest Shellfish Reef Discovered on Scotland's Seabed
Updated: 02 Jan 2013
Shellfish reef off UK's west coast could be biggest in world
Clare Carswell Thursday 27 December 2012
The discovery of a large shellfish reef on the west coast of the UK could be the biggest find of its kind in the world, experts believe.
More than 100 million brightly coloured and rare shellfish have been found in Loch Alsh, a sea inlet between Skye and the Scottish mainland.
The reef of flame shells, or Limaria hians, was found to cover an area of 4.6 square miles (7.5 sq km) during a survey commissioned by Marine Scotland.
It is the largest known colony of flame shells in the UK and possibly the world, according to experts.
Scottish Environment Secretary Richard Lochhead said:
"The seas around Scotland are a hotbed of biodiversity and the clean and cold waters support many fascinating and beautiful species.
"With Scottish waters covering an area around five times bigger than our landmass, it's a huge challenge to try and understand more about our diverse and precious sea life.
"This important discovery may be the largest grouping of flame shells anywhere in the world.
"And not only are flame shells beautiful to look at, these enigmatic shellfish form a reef that offers a safe and productive environment for many other species."
Flame shells have a similar shape to scallops with many neon orange tentacles that appear between the two shells.
They group together on the sea bed and their nests create a living reef to support hundreds of other species.
The Loch Alsh survey was carried out by Heriot-Watt University on behalf of Marine Scotland.
Dr Dan Harries, from the university's School of Life Sciences, said:
"Too often, when we go out to check earlier records of a particular species or habitat we find them damaged, struggling or even gone.
"We are delighted that in this instance we found not just occasional patches but a huge and thriving flame shell community extending right the way along the entrance narrows of Loch Alsh.
"This is a wonderful discovery for all concerned."
Ben James, marine survey and monitoring manager at Scottish Natural Heritage, said:
"Our job has been to advise ministers on suitable places for Marine Protected Areas (MPAs) and to do that we need to have enough information about what's in the marine environment.
"Whilst we had some records of flame shells in Loch Alsh, we had no idea how big the bed was.
"We needed more certainty before recommending them as a protected feature of this MPA proposal.
"It's great to have this new information and it's yet another example of the fantastic diversity of Scotland's marine environment."
The Scottish Government applied to the European Union last month to designate an area in the north-east Atlantic as a conservation area.
Hatton Bank, near the Isle of Lewis, is around 9,752 sq miles (15,694 sq km) and features a large volcanic bank which is home to a large variety of corals.
WWF Scotland spokesman Lang Banks said: "These surveys highlight that Scotland's seas and coasts are home to a truly amazing range of stunning wildlife.
"Who needs space travel when we've still to fully explore and understand the oceans and seas here on planet Earth?
"From helping inform the appropriate deployment of marine renewables to supporting the roll out of a network of
MPAs, these survey findings will prove invaluable in helping ensure the recovery of Scotland's seas."
Science- 2012 Review of Life on Earth
Science- Biofuels that Suck Carbon out of the sky
Updated: 18 Dec 2012
Biofuel that's better than carbon neutral
The race is on to create a biofuel that sucks carbon out of the sky and locks it away where it can't warm the planet
THE green sludge burbles away quietly in its tangle of tubes in the Spanish desert.
Soaking up sunshine and carbon dioxide from a nearby factory, it grows quickly.
Every day, workers skim off some sludge and take it away to be transformed into oil.
People do in a single day what it took geology 400 million years to accomplish.
Indeed, this is no ordinary oil.
It belongs to a magical class of "carbon negative" fuels, ones that take carbon out of the atmosphere and lock it away for good.
The basic idea is fairly simple.
You grow plants, in this case algae, which naturally draw CO2 from the atmosphere.
After you extract the oil, you're left with a residue that holds a substantial portion of the carbon.
This residue is the key to carbon negativity.
If you can store the carbon where it won't decompose and return to the air, more CO2 is taken out of the atmosphere than the fuel emits.
Such carbon negative fuels are no accounting sleight of hand - they could be the most realistic short-term solution we have to curb climate change.
And although it is still early days, companies like General Electric, BP and Google are putting their money behind the idea.
Every time you drive your car or hop on a plane to somewhere sunny you're adding a little more carbon to the atmosphere and bringing a global warming crisis just a little bit closer.
Biofuels are one way of reducing the problem, as plants draw CO2 from the atmosphere as they grow, thereby not adding to the carbon footprint.
Today, the most popular biofuel is ethanol made from corn.
In theory, such a fuel should be carbon neutral: that's to say, for every 100 carbon atoms it draws from the atmosphere, it returns exactly 100 when burned.
Unfortunately, however, it's not that simple.
By the time farmers have tilled the soil, poured on fertiliser and harvested the crop - not to mention the natural gas and coal burned to run the ethanol plant itself - they've used an awful lot of fossil fuel, leaving them well short of carbon neutral.
You might think the problem could be simply solved by capturing the carbon emitted during the biofuels production process.
The fermentation process used to produce ethanol, for example, generates an almost pure stream of CO2 as a by-product.
So, earlier this year, agricultural giant Archer Daniels Midland (ADM) started building the US's first large-scale carbon capture and storage project in Decatur, Illinois.
It will siphon CO2 from the company's ethanol plant, compress it and store it underground nearby.
It plans to store over 1 million tonnes of CO2 annually (see diagram).
However, ADM's ethanol still isn't carbon neutral: instead, thanks to all the energy costs of making the ethanol, it's likely to reduce emissions by only about 20 or 30 per cent compared with fossil fuel.
You might be able to solve the problem if you replaced all the fossil fuels used to run the ethanol plant with renewable energy.
But that doesn't solve the other major issue for crop-based biofuels: they compete with food crops for land. In 2010, corn-based ethanol accounted for 8 per cent of US transport fuel, but consumed almost 40 per cent of the country's corn.
If ethanol replaced all fossil fuels, it would either push food prices into the stratosphere or force farmers onto new land - most likely both.
To make a dent in the amount of greenhouse gas in the atmosphere, we need to find ways around this.
"The question is how many of these situations we can find without infringing on other services that the biomass or the land is supplying," says Johannes Lehmann, a soil scientist at Cornell University in Ithaca, New York.
This is exactly why algae are so promising, notably the single-celled, blue-green variety now referred to as cyanobacteria.
They grow much faster than terrestrial crops, potentially yielding 20 times more biomass per day than soybeans; their oil production is easy to ramp up through genetic engineering; best of all, they can grow in seawater or brackish groundwater on non-arable land, so they don't take land away from food production or forest (Science, vol 314, p 1598).
These qualities were especially appealing to Bio Fuel Systems (BFS), a small company in Alicante, Spain, that uses cyanobacteria to make its "Blue Petroleum".
The company's prototype plant, in the Spanish coastal desert, is piggybacked on a cement factory, which emits the CO2 the algae need to grow.
The numbers given to New Scientist by BFS president Bernard Stroiazzo illustrate the fraction of carbon that can be trapped by the process.
To make a single barrel of oil, the algae suck a little over 2 tonnes of CO2 from the smokestack of the cement works.
Not all of that stays out of the atmosphere, though.
The algal cultures need regular mixing, which takes energy, as does supplying fertiliser and creating the oil through a patented process involving high heat and pressure.
All the fossil fuels needed for these processes release about 700 kilogrammes of CO2. Burning the oil itself - in car engines, say - emits another 450 kg.
The rest of the carbon - the equivalent of about 900 kg of CO2 - stays in the leftovers, an inorganic carbonate sludge that can be buried or mixed into concrete.
"That will never go back in the atmosphere," says Stroiazzo.
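Those numbers can be laid out as a per-barrel CO2 balance. In this sketch I have rounded "a little over 2 tonnes" to 2,050 kg so that the quoted figures add up exactly; that rounding is my assumption, not BFS's figure.

```python
# Per-barrel CO2 balance for BFS's process, in kg, from the figures above.
captured   = 2050   # drawn from the cement works' smokestack (assumed rounding)
process    = 700    # fossil CO2 from mixing, fertiliser and the heat/pressure step
combustion = 450    # emitted when the barrel of oil is eventually burned

# "The rest" of the carbon stays locked in the carbonate residue
residue = captured - process - combustion
assert residue == 900              # matches the ~900 kg quoted

# Net flow to the atmosphere per barrel (negative = carbon removed)
net = process + combustion - captured
assert net == -900                 # each barrel withdraws ~900 kg of CO2
```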
BFS's pilot plant produces about 2.5 barrels of crude oil per hectare of algae each day.
At that rate, Stroiazzo says, a system like BFS's could replace the world's entire crude oil consumption, using an area just a quarter the size of the Libyan desert.
Thirty-five million hectares is a lot of land, to be sure, but not overwhelming if it replaces the 90 million barrels of oil we use each day.
It is also about 1 per cent of the world's pasture area; spread over many plants worldwide it quickly becomes feasible.
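The land requirement follows directly from those two figures, as a quick check using only the numbers in the text shows:

```python
# How much land, at 2.5 barrels per hectare per day, to meet world demand?
yield_per_ha = 2.5               # barrels of oil per hectare per day (BFS pilot)
world_demand = 90_000_000        # barrels of oil consumed per day worldwide

hectares = world_demand / yield_per_ha
assert hectares == 36_000_000    # roughly the 35 million hectares quoted

area_km2 = hectares / 100        # 1 km^2 = 100 hectares
# i.e. about 360,000 square kilometres of desert or pasture
```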
But there are a few more factors to consider.
Though they are not selling the oil yet, cost will likely be an issue: BFS's equipment is by no means cheap.
The polycarbonate tubes that house the cultures cost upwards of $1 million per hectare, and stirring the algae requires large amounts of electricity.
This is likely to push the cost of algal biofuel to at least $5 per litre, according to a 2010 International Energy Agency report.
To stay solvent, BFS sells its high-value algal by-products as nutritional supplements, such as omega-3 fatty acids.
While this may work in a nascent biofuels industry, demand for nutritional supplements will falter when the products flood the market, and anyway it doesn't get to the heart of the problem.
Other companies are trying to do that, though. Algae Systems, near San Francisco, suggests cutting costs by culturing its algae in the ocean, in 25-metre plastic bags floating near the shore.
The bags keep the algae at the surface, where the light is most intense, and natural wave action does the mixing.
The firm plans to pipe in nitrogen-rich wastewater to fertilise the algal growth.
Algae Systems is now constructing a pilot plant covering several hectares in Mobile Bay, off the coast of Alabama, which should be operational early next year.
If all the component processes work as well as they have in the research lab, the result should be carbon-negative fuels, says company president Matthew Atwood.
This fuel should be able to undercut fossil petroleum prices within three or four years, he adds.
However, they will need to solve another problem for algal biofuels: fertiliser. Algae gorge on expensive nutrients such as nitrogen and phosphorus.
At relatively small scales, wastewater from cities and croplands can easily supply these, as in Algae Systems's design. But scale up and there simply isn't enough wastewater to go around.
"Human nutrient loading is simply not sufficient," says Stefan Unnasch, an energy analyst and engineer at California consultancy Life Cycle Associates.
"You put more in your car every day than into your toilet." Indeed, producing even a tenth of the US's liquid fuel
from algae would consume more than the entire US supply of both nitrogen and phosphorus, according to
calculations by Ronald Pate, an algal biofuels specialist at Sandia National Laboratory in New Mexico (Applied Energy, vol 88, p 3377).
Researchers may some day find a way to solve the nutrient problem by extracting and reusing nitrogen and phosphorus from the algal residue, but the biggest obstacle to scaling up is more intractable: how do you get your hands on all that CO2?
Even if algae-growers could tap every last smokestack in the US, that would only be enough to produce about 75 billion litres of algal biofuel per year, according to Pate's calculations.
That's less than 10 per cent of the world's current transport fuel needs.
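Pate's smokestack ceiling can be put in perspective against the article's own oil figure. Using total crude consumption as a rough proxy for liquid-fuel demand (an assumption; transport fuel is only a subset, so the true fraction is somewhat higher, but still well under 10 per cent):

```python
LITRES_PER_BARREL = 158.99                 # one oil barrel in litres

algal_fuel = 75e9                          # litres/year from every US smokestack (Pate's ceiling)
world_oil = 90e6 * LITRES_PER_BARREL * 365 # litres/year from the article's 90 Mbbl/day

fraction = algal_fuel / world_oil
print(f"{fraction:.1%} of world oil use")  # → 1.4%
```

Even at this theoretical maximum, smokestack-fed algae barely dent global fuel demand.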
Moreover, tying biofuel production to fossil-fuel-burning industrial smokestacks merely wrings a second round of energy out of CO2.
"This just postpones emissions," says Jonas Helseth, director of Bellona Europa, an environmental foundation based in Brussels, Belgium.
As yet, this problem has no robust solution.
A few companies are developing technologies to extract and concentrate CO2 from the air.
Global Thermostat, based in New York, has patented a process that uses chemicals and low-temperature waste heat - about 90 °C - to capture CO2 from a stream of air.
Its pilot plant has been operating near San Francisco for more than a year, and a second is on the way, says co-founder Graciela Chichilnisky.
The company has already signed an agreement to supply its technology to Algae Systems and is in talks with several other algal biofuel companies, she says.
Solve these problems, and algae may yet be vindicated as the most promising path to carbon-negative biofuels. But until then, a less glamorous method is poised to take off.
The cheapest, most low-maintenance feedstock for biofuels is waste biomass, such as the cobs and straw left over after the corn harvest, perennial grasses such as giant miscanthus, or dead trees.
This raw material has been used to make ethanol, but the difficulty of breaking it down has kept efficiency low.
Cool Planet Energy Systems, based just north of Los Angeles in Camarillo, California, has found a better way to process it.
It has developed a variant of a process called pyrolysis, in which heat, pressure and catalysts convert the biomass directly into the hydrocarbons found in gasoline, diesel oil and jet fuel.
This means the company's fuel can be mixed into regular gasoline to reduce the overall amount of fossil fuel, or in other words, it lowers the carbon intensity of the gasoline.
Earlier this year, researchers at Google - one of the company's investors - road-tested a blend of 5 per cent Cool Planet fuel and 95 per cent gasoline in its GRide cars at its headquarters in Mountain View, California.
The mix reduced the carbon intensity of gasoline by 10 per cent, says vice-president Mike Rocke, meeting California's 2020 Low Carbon Fuel Standard eight years early.
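That result is stronger than it might look. Treating the blend's carbon intensity as a volume-weighted average of its components (a simplification), a quick calculation shows that a 5 per cent blend cutting intensity by 10 per cent implies the Cool Planet component itself has a carbon intensity of roughly minus one times gasoline's:

```python
# Carbon intensity (CI) in relative units, with plain gasoline = 1.0.
blend_ci = 0.90                      # the Google road test: 10 per cent below gasoline
gasoline_share, fuel_share = 0.95, 0.05

# blend_ci = gasoline_share * 1.0 + fuel_share * fuel_ci  =>  solve for fuel_ci.
fuel_ci = (blend_ci - gasoline_share * 1.0) / fuel_share
print(f"{fuel_ci:.2f}")  # → -1.00: the biofuel offsets as much CO2 as gasoline emits
```

A carbon intensity of -1.0 relative to gasoline is consistent with Rocke's description of the fuel as "100 per cent carbon-negative" later in the piece.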
Better yet, carbon gets sequestered. Along with fuel, Cool Planet's pyrolysis process yields large amounts of biochar, a carbon-rich compound that resembles charcoal.
Instead of burying this residue deep underground like ADM or mixing it into cement, however, Cool Planet returns the biochar to the soil.
This has several advantages. It does not depend on the presence of suitable geological formations, and it is easier to transport.
Best of all, the biochar enriches the soil and enhances crop yields because its high surface area helps hold water and nutrients.
"It's like a molecular sponge," says Rocke. Lehmann, a biochar expert, says the stuff can persist in the soil for centuries, which qualifies as carbon sequestration as set by the Intergovernmental Panel on Climate Change.
That's not the only trick that makes the biofuel carbon negative. Instead of wasting fossil fuel on transporting the biomass to a centralised factory to be made into fuel, Cool Planet will build 400 modular units, each capable of producing between 40 and 200 million litres of gasoline per year.
These will use whatever biomass is available within about a 50-kilometre radius.
"Wherever the biomass is, we're going to roll out these plants," says Rocke.
"They're like a Starbucks."
Cool Planet's process only returns half the carbon to the atmosphere and stores the other half as biochar, making the fuel what Rocke terms "100 per cent carbon-negative".
To break into the market, however, the company plans to make a version that is 60 per cent carbon-negative, storing only about a third of the carbon in the plant matter.
At this sweet spot, Rocke reckons the company should be able to sell its fuel for about 40 cents a litre.
To date, the research facility has produced only a few thousand litres of fuel.
However, a pilot plant - bankrolled by investors including Google, BP and GE - will start operation near Los Angeles this month, producing nearly a million litres per year.
And within 20 years, Cool Planet intends to build 2000 of its modules, enough to supply about 10 per cent of the world's current liquid fuel needs.
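The "about 10 per cent" ambition can be sanity-checked against the stated module capacities. Again using the article's 90 million barrels a day of oil as a rough proxy for world liquid-fuel demand (an assumption on my part):

```python
LITRES_PER_BARREL = 158.99

modules = 2000
low, high = 40e6, 200e6                     # litres of gasoline per module per year
world_oil = 90e6 * LITRES_PER_BARREL * 365  # litres/year, from the article's figure

for label, capacity in (("low end", low), ("high end", high)):
    share = modules * capacity / world_oil
    print(f"{label}: {share:.1%}")  # → roughly 1.5% and 7.7%
```

So reaching roughly 10 per cent requires the modules to run near the top of their 40-to-200-million-litre range.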
Cool Planet's results are encouraging. In 2007, the IPCC reported that for the world to escape catastrophic climate change, carbon emissions would have to begin declining by 2015, with an 85 per cent reduction by 2050. We haven't even started.
Since we can't seem to keep the CO2 from entering the atmosphere, we're left with only two ways to avoid trouble.
We could embark on grand geoengineering schemes to cool the planet, all of which bring huge risks of unintended consequences (New Scientist, 22 September, p 30).
Or we could try to pull some of the CO2 back out of the atmosphere, one car trip at a time.
"Even if carbon-negative biofuels turns out to be just a bit player, they will have done at least a little to reduce carbon emissions," says Lehmann.
"It's a no-regret strategy."
Bob Holmes is a consultant for New Scientist
Science- Gut Instincts- Your other Brain
Updated: 18 Dec 2012
Gut instincts: The secrets of your second brain
When it comes to your moods, decisions and behaviour, the brain in your head is not the only one doing the thinking
IT'S been a tough morning.
You were late for work, missed a crucial meeting and now your boss is mad at you.
Come lunchtime you walk straight past the salad bar and head for the stodge.
You can't help yourself - at times of stress the brain encourages us to seek out comfort foods.
That much is well known.
What you probably don't know, though, is that the real culprit may not be the brain in your skull but your other brain.
Yes, that's right, your other brain.
Your body contains a separate nervous system that is so complex it has been dubbed the second brain.
It comprises an estimated 500 million neurons - about five times as many as in the brain of a rat - and is around 9 metres long, stretching from your oesophagus to your anus.
It is this brain that could be responsible for your craving under stress for crisps, chocolate and cookies.
Embedded in the wall of the gut, the enteric nervous system (ENS) has long been known to control digestion.
Now it seems it also plays an important role in our physical and mental well-being.
It can work both independently of and in conjunction with the brain in your head and, although you are not conscious of your gut "thinking", the ENS helps you sense environmental threats, and then influences your response.
"A lot of the information that the gut sends to the brain affects well-being, and doesn't even come to consciousness," says Michael Gershon at Columbia-Presbyterian Medical Center, New York.
If you look inside the human body, you can't fail to notice the brain and its offshoots of nerve cells running along the spinal cord.
The ENS, a widely distributed network of neurons spread throughout two layers of gut tissue, is far less obvious (see diagram), which is why it wasn't discovered until the mid-19th century.
It is part of the autonomic nervous system, the network of peripheral nerves that control visceral functions.
It is also the original nervous system, emerging in the first vertebrates over 500 million years ago and becoming more complex as vertebrates evolved - possibly even giving rise to the brain itself.
Digestion is a complicated business, so it makes sense to have a dedicated network of nerves to oversee it.
As well as controlling the mechanical mixing of food in the stomach and coordinating muscle contractions to move it through the gut, the ENS also maintains the biochemical environment within different sections of the gut, keeping them at the correct pH and chemical composition needed for digestive enzymes to do their job.
But there is another reason the ENS needs so many neurons: eating is fraught with danger.
Like the skin, the gut must stop potentially dangerous invaders, such as bacteria and viruses, from getting inside the body.
If a pathogen should cross the gut lining, immune cells in the gut wall secrete inflammatory substances including histamine, which are detected by neurons in the ENS.
The gut brain then either triggers diarrhoea or alerts the brain in the head, which may decide to initiate vomiting, or both.
You needn't be a gastroenterologist to be aware of these gut reactions - or indeed the more subtle feelings in your stomach that accompany emotions such as excitement, fear and stress.
For hundreds of years, people have believed that the gut interacts with the brain to influence health and disease.
Yet this connection has only been studied over the last century.
Two pioneers in this field were American physician Byron Robinson, who in 1907 published The Abdominal and Pelvic Brain, and his contemporary, British physiologist John Newport Langley, who coined the term "enteric nervous system".
Around this time, it also became clear that the ENS can act autonomously, with the discovery that if the main connection with the brain - the vagus nerve - is severed the ENS remains capable of coordinating digestion.
Despite these discoveries, interest in the gut brain fell until the 1990s when the field of neurogastroenterology was born.
We now know that the ENS is not just capable of autonomy but also influences the brain. In fact, about 90 per cent of the signals passing along the vagus nerve come not from above, but from the ENS (American Journal of Physiology - Gastrointestinal and Liver Physiology, vol 283, p G1217).
The feel-good factor
The second brain also shares many features with the first. It is made up of various types of neuron, with glial support cells. It has its own version of a blood-brain barrier to keep its physiological environment stable.
And it produces a wide range of hormones and around 40 neurotransmitters of the same classes as those found in the brain.
In fact, neurons in the gut are thought to generate as much dopamine as those in the head. Intriguingly, about 95 per cent of the serotonin present in the body at any time is in the ENS.
What are these neurotransmitters doing in the gut?
In the brain, dopamine is a signalling molecule associated with pleasure and the reward system.
It acts as a signalling molecule in the gut too, transmitting messages between neurons that coordinate the contraction of muscles in the colon, for example.
Also transmitting signals in the ENS is serotonin - best known as the "feel-good" molecule involved in preventing depression and regulating sleep, appetite and body temperature.
But its influence stretches far beyond that. Serotonin produced in the gut gets into the blood, where it is involved in repairing damaged cells in the liver and lungs.
It is also important for normal development of the heart, as well as regulating bone density by inhibiting bone formation (Cell, vol 135, p 825).
But what about mood? Obviously the gut brain doesn't have emotions, but can it influence those that arise in your head?
The general consensus is that neurotransmitters produced in the gut cannot get into the brain - although, theoretically, they could enter small regions that lack a blood-brain barrier, including the hypothalamus.
Nevertheless, nerve signals sent from the gut to the brain do appear to affect mood.
Indeed, research published in 2006 indicates that stimulation of the vagus nerve can be an effective treatment for chronic depression that has failed to respond to other treatments (The British Journal of Psychiatry, vol 189, p 282).
Such gut to brain signals may also explain why fatty foods make us feel good.
When ingested, fatty acids are detected by cell receptors in the lining of the gut, which send nerve signals to the brain.
This may not be simply to keep it informed of what you have eaten.
Brain scans of volunteers given a dose of fatty acids directly into the gut show they had a lower response to pictures and music designed to make them feel sad than those given saline.
They also reported feeling only about half as sad as the other group (The Journal of Clinical Investigation, vol 121, p 3094).
There is further evidence of links between the two brains in our response to stress.
The feeling of "butterflies" in your stomach is the result of blood being diverted away from it to your muscles as part of the fight or flight response instigated by the brain.
However, stress also leads the gut to increase its production of ghrelin, a hormone that, as well as making you feel more hungry, reduces anxiety and depression.
Ghrelin stimulates the release of dopamine in the brain both directly, by triggering neurons involved in pleasure and reward pathways, and indirectly by signals transmitted via the vagus nerve.
In our evolutionary past, the stress-busting effect of ghrelin may have been useful, as we would have needed to be calm when we ventured out in search of food, says Jeffrey Zigman at UT Southwestern Medical Center in Dallas, Texas.
In 2011, his team reported that mice exposed to chronic stress sought out fatty food, but those that were genetically engineered to be unable to respond to ghrelin did not (The Journal of Clinical Investigation, vol 121, p 2684).
Zigman notes that in our modern world, with freely available high-fat food, the result of chronic stress or depression can be chronically elevated ghrelin - and obesity.
Gershon suggests that strong links between our gut and our mental state evolved because a lot of information about our environment comes from our gut.
"Remember the inside of your gut is really the outside of your body," he says.
So we can see danger with our eyes, hear it with our ears and detect it in our gut.
Pankaj Pasricha, director of the Johns Hopkins Center for Neurogastroenterology in Baltimore, Maryland, points out that without the gut there would be no energy to sustain life.
"Its vitality and healthy functioning is so critical that the brain needs to have a direct and intimate connection with the gut," he says.
But how far can comparisons between the two brains be taken?
Most researchers draw the line at memory - Gershon is not one of them.
He tells the story of a US army hospital nurse who administered enemas to the paraplegic patients on his ward at 10 o'clock every morning.
When he left, his replacement dropped the practice.
Nevertheless, at 10 the next morning, everyone on the ward had a bowel movement.
This anecdote dates from the 1960s and while Gershon admits that there have been no other reports of gut memory since, he says he remains open to the idea.
Then there's decision-making.
The concept of a "gut instinct" or "gut reaction" is well established, but in fact those fluttery sensations start with signals coming from the brain - the fight or flight response again.
The resulting feeling of anxiety or excitement may affect your decision about whether to do that bungee jump or arrange a second date, but the idea that your second brain has directed the choice is not warranted.
The subconscious "gut instinct" does involve the ENS but it is the brain in your head that actually perceives the threat.
And as for conscious, logical reasoning, even Gershon accepts that the second brain doesn't do that.
"Religion, poetry, philosophy, politics - that's all the business of the brain in the head," he says.
Still, it is becoming apparent that without a healthy, well-developed ENS we face problems far wider than mere indigestion.
Pasricha has found that newborn rats whose stomachs are exposed to a mild chemical irritant are more depressed and anxious than other rats, with the symptoms continuing long after the physical damage has healed.
This doesn't happen after other sorts of damage, like skin irritation, he says.
It has also emerged that various constituents of breast milk, including oxytocin, support the development of neurons in the gut (Molecular Nutrition and Food Research, vol 55, p 1592).
This might explain why premature babies who are not breastfed are at higher risk of developing diarrhoea and necrotising enterocolitis, in which portions of the bowel become inflamed and die.
Serotonin is also crucial for the proper development of the ENS where, among its many roles, it acts as a growth factor.
Serotonin-producing cells develop early on in the ENS, and if this development is affected, the second brain cannot form properly, as Gershon has shown in mutated mice.
He believes that a gut infection or extreme stress in a child's earliest years may have the same effect, and that later in life this could lead to irritable bowel syndrome, a condition characterised by chronic abdominal pain with frequent diarrhoea or constipation that is often accompanied by depression.
The idea that irritable bowel syndrome can be caused by the degeneration of neurons in the ENS is lent weight by recent research revealing that 87 out of 100 people with the condition had antibodies in their circulation that were attacking and killing neurons in the gut (Journal of Neurogastroenterology and Motility, vol 18, p 78).
If nothing else, the discovery that problems with the ENS are implicated in all sorts of conditions means the second brain deserves a lot more recognition than it has had in the past. "Its aberrations are responsible for a lot of suffering," says Pasricha.
He believes that a better understanding of the second brain could pay huge dividends in our efforts to control all sorts of conditions, from obesity and diabetes to problems normally associated with the brain such as Alzheimer's and Parkinson's (see "Mental illnesses of the gut").
Yet the number of researchers investigating the second brain remains small.
"Given it's potential, it's astonishing how little attention has been paid to it," says Pasricha.
Mental illnesses of the gut
A growing realisation that the nervous system in our gut is not just responsible for digestion (see main story) is partly fuelled by discoveries that this "second brain" is implicated in a wide variety of brain disorders.
In Parkinson's disease, for example, the problems with movement and muscle control are caused by a loss of dopamine-producing cells in the brain.
However, Heiko Braak at the University of Frankfurt, Germany, has found that the protein clumps that do the damage, called Lewy bodies, also show up in dopamine-producing neurons in the gut.
In fact, judging by the distribution of Lewy bodies in people who died of Parkinson's, Braak thinks it actually starts in the gut, as the result of an environmental trigger such as a virus, and then spreads to the brain via the vagus nerve.
Likewise, the characteristic plaques or tangles found in the brains of people with Alzheimer's are present in neurons in their guts too.
And people with autism are prone to gastrointestinal problems, which are thought to be caused by the same genetic mutation that affects neurons in the brain.
Although we are only just beginning to understand the interactions between the two brains, already the gut offers a window into the pathology of the brain, says Pankaj Pasricha at Johns Hopkins University in Baltimore, Maryland.
"We can theoretically use gut biopsies to make early diagnoses, as well as to monitor response to treatments."
Cells in the second brain could even be used as a treatment themselves.
One experimental intervention for neurodegenerative diseases involves transplanting neural stem cells into the brain to replenish lost neurons.
Harvesting these cells from the brain or spinal cord is not easy, but now neural stem cells have been found in the gut of human adults (Cell and Tissue Research, vol 344, p 217).
These could, in theory, be harvested using a simple endoscopic gut biopsy, providing a ready source of neural stem cells.
Indeed, Pasricha's team is now planning to use them to treat diseases including Parkinson's.
Emma Young is a writer based in Sheffield, UK
Science- The Dash for Gas - Not all it's fracked up to be
Updated: 11 Dec 2012
The UK's new dash for gas is a dangerous gamble
The British government's new emphasis on gas power and fracking puts the climate and consumers at risk, says environmental policy researcher Paul Ekins
Following hard on the heels of the British government's Energy Bill, with its apparent incentives for large quantities of new power from nuclear and renewables, UK chancellor George Osborne has now unveiled his gas generation strategy – building up to 40 new power plants – and given a clear nod to potential shale gas and the fracking that will be needed to extract it.
Often described as a second "dash for gas", it may be as much a cause for perplexity as anything else.
Do we really need these two large government initiatives tailing each other like London buses?
In light of the go-ahead for gas, it is worth asking whether the government remains committed to carbon emissions reduction targets and to the advice of the Committee on Climate Change which it set up. The CCC wants a lesser role for new gas generation.
The secretary of state for energy and climate change, Ed Davey, a member of the Liberal Democrat party that forms a minority within the coalition government, says yes.
The chancellor, a member of the Conservative party which dominates the coalition, demands delay and a review of crucial decisions, which implies no.
The prime minister David Cameron, a fellow Conservative, appoints climate and renewables sceptics to key positions and vetoes the appointment of the chief executive of the CCC, David Kennedy, to head up the government's Department for Energy and Climate Change.
You have to draw your own conclusions. Less clarity over government policymaking is hard to imagine.
If investment in renewables and nuclear power materialises, their costs, and carbon emissions, will fall, perhaps encouraging a future government to reaffirm the UK's commitment to carbon reduction targets.
If that happens, any gas-fired stations built today will either have to fit carbon capture and storage (CCS) technology or operate at ever-lower capacities and lower marginal returns on electricity.
An investor in such a gas plant, perhaps faced with rising gas prices, would stand to lose money unless paid simply to be there for when the wind was not blowing or the nuclear stations were being serviced.
Perhaps the capacity payments envisaged in the Energy Bill will do that.
The detail is still to be revealed.
If, on the other hand, there is little investment in nuclear and new renewables, then the gas-fired power stations will need to run and run to keep the lights on.
Without CCS, the UK will miss emissions targets by a large margin.
Leaving aside the abdication of the high ground of climate change mitigation this would represent, there is the issue of gas prices and security of supply.
Optimists imagine that the bonanza of shale gas in the US will spread to Europe and the rest of the world, prices will fall, and the UK will see a new age of cheap energy.
There is no evidence that this is likely.
Most experts suggest that even where shale gas exists in quantity – not as common as the initial euphoria about shale gas imagined and current commentary asserts – its extraction would be limited by factors such as public opposition to the local environmental impact and lack of the huge quantity of water that fracking needs.
Burgeoning global demand from India, China and other emerging economies would eat up new gas supplies as fast as they became available, so prices would remain high and supplies potentially constrained.
In this scenario, UK households and industry would be tied to a highly unpredictable roller coaster of gas prices that are generally high and can spike higher due to volatility, and be vulnerable to geopolitical disruptions to supply.
For my money, the climate-responsible, economically prudent and relatively secure energy trajectory for the UK is the low-carbon route, based largely on renewables and the efficient use of energy, but perhaps with some new nuclear for a diminishing base load.
Despite the risks, we will still need some of the gas-fired power stations that the government's gas generation strategy envisages.
They are relatively cheap to build and their owners would be increasingly paid through the 2020s to back up the low-carbon energy when it is not available.
But that role would diminish as energy storage options for renewables develop and electricity grids across Europe become more inter-connected – as is envisaged.
What seems to have become clear in the last few weeks is that the chancellor and prime minister do not care much about climate responsibility, and have been persuaded by the gas industry and other voices that large-scale gas use is a relatively safe bet.
This seems to me a dangerous conclusion for the UK, with little foundation in evidence.
A future government should reverse it.
Science- The Great Thaw - The End of the Ice Age
Updated: 06 Nov 2012
The great thaw: Charting the end of the ice age
05 November 2012
by Anil Ananthaswamy
Just 20,000 years ago, ice ruled the planet.
So why did it relax its grip?
Finally, it looks like the answers are in
DURING the summer of 2008, workers excavating Ground Zero in Lower Manhattan dug right down to the bedrock.
There, they found something unexpected: a huge pothole more than 10 metres deep, the crevices around it crammed with stones of several different kinds of rock.
The consulting geologist immediately recognised these features.
The stones had been carried there from many miles away by a glacier that had ground across the bedrock.
At some point, a swirling torrent of glacial meltwater had carved out the pothole.
From potholes in New York City to forests beneath the sea, evidence of the time ice dominated the world is all around us.
The last great ice age began around 120,000 years ago.
One massive ice sheet, more than 3 kilometres thick in places, grew in fits and starts until it covered almost all of Canada and stretched down as far as Manhattan.
Another spread across most of Siberia, northern Europe and Britain, stopping just short of what is now London.
Elsewhere many smaller ice sheets and glaciers grew, vast areas turned into tundra and deserts expanded as the planet became drier.
With so much ice on land, sea level was 120 metres lower than it is today.
Britain and Ireland were part of mainland Europe. Florida was twice the size it is now, with Tampa stranded far from the coast.
Australia, Tasmania and New Guinea were all part of a single land mass called Sahul.
The planet was barely recognisable.
Then, 20,000 years ago, a great thaw began.
Over the following 10,000 years, the average global temperature rose by 3.5 °C and most of the ice melted.
Rising seas swallowed up low-lying areas such as the English Channel and North Sea, forcing our ancestors to abandon many settlements.
So what drove this dramatic transformation of the planet?
We have long known the thaw began with an increase in summer sunlight in the northern hemisphere, melting ice and snow.
It is what happened next that has remained mysterious.
Soon after the thaw began, for instance, the southern hemisphere began to warm while the northern hemisphere cooled - the opposite of what was expected from the changes in sunshine.
Now, after nearly two centuries of wrestling with seemingly contradictory findings, we think we finally understand how the ice age ended.
It all began in the 1830s, when Louis Agassiz noticed that characteristic features created by glaciers, such as scratches in the bedrock and "erratic" rocks dumped far from their place of origin, could be found far from existing glaciers.
Similar discoveries were soon being made all over the world, from Canada to Chile.
It became clear that there had been a whole series of ice ages.
What had made the ice come and go? In 1864, James Croll proposed that changes in the amount of sunlight reaching different parts of Earth's surface, due to changes in the planet's orbit, were responsible.
He also suggested that the orbital effects had been amplified by various feedback mechanisms, such as the melting of heat-reflecting snow and ice, and changes in ocean currents.
Croll got many of the details wrong, but he was on the right track.
Early in the 20th century, the Serbian astronomer Milutin Milankovitch concluded that summer sunlight in the northern hemisphere must be the crucial factor and spent years painstakingly calculating how this had changed over the past 600,000 years.
His ideas weren't accepted at the time, but in the 1970s studies of ocean-sediment cores revealed that the advances and retreats of the ice ages did indeed coincide with "Milankovitch cycles".
Yet many enigmas remained.
For starters, the changes in sunshine were tiny.
Even if they were amplified by more of the sun's heat being absorbed by the planet as snow and ice melted, it was hard to account for the scale of the global changes.
What's more, when summer sunshine increases in the northern hemisphere, it decreases in the southern hemisphere.
This had led Croll to suggest that ice ages alternate between hemispheres: when the north freezes the south thaws and vice versa.
But it had long been clear that the whole world had warmed at around the same time.
The answer to these puzzles seemed to emerge in the 1980s, when ice cores drilled in Antarctica revealed an astonishingly close correlation between atmospheric carbon dioxide levels and temperature.
"For the last million years, you see these two going up and down, up and down, together through each ice age, and it's almost in perfect lockstep," says Jeremy Shakun of Harvard University.
"It's about as beautiful a correlation as you ever get from nature."
If CO2 levels had risen soon after the thaw began in the north, it would explain why the southern hemisphere began to warm too.
It would also help to explain the magnitude of the changes.
But this promising idea ran into a major problem: by around a decade ago, it had become clear that the Antarctic started warming a few hundred years before CO2 levels began to rise.
So while soaring CO2 levels undoubtedly warmed the planet - they are now thought to be responsible for about half of the warming as the ice age ended - they weren't the initial cause.
"Something else was causing Antarctica to warm," says Daniel Sigman of Princeton University.
This wasn't the only mystery.
In the 1930s, studies of sediments containing the pollen of the alpine flower Dryas octopetala and other plants suggested that almost as soon as Europe began warming, it suddenly got cold again.
This cold phase, called the Oldest Dryas or Mystery Interval, lasted from around 17,500 years ago to 14,700 years ago. Ice cores later showed Greenland cooled at the same time.
Yet during this period Antarctica warmed steadily.
"On the detailed scale, the south seems to warm before the north," says Sigman.
But what would make the southern hemisphere warm even as the northern hemisphere cooled?
It could not be due to orbital changes or rising CO2 levels - but it could be due to changing ocean currents.
As the vast ice sheets began to melt 19,000 years ago, stupendous quantities of fresh water poured into the North Atlantic (see diagram).
Studies of marine sediments off the Irish Sea coast, for example, show that the sea level there rose about 10 metres in just a few hundred years (Science, vol 304, p 1141).
Today in the North Atlantic, salty water arriving from the tropics cools, becomes very dense and sinks to the bottom.
These deep, cold waters flow all the way to the southern hemisphere, while on the surface warm water - including the Gulf Stream - flows north.
This system of currents is called the Atlantic meridional overturning circulation.
The huge quantities of fresh water pouring into the ocean 19,000 years ago would have diluted the salty water, making it less dense.
Result: a slowdown in the overturning circulation.
The proof came in 2004 from a study of ocean sediments.
The ratio of two heavy elements, which indicates the speed of the deep current, showed that the overturning circulation had almost ground to a halt 17,500 years ago (Nature, vol 428, p 834).
The result was a kind of see-saw effect.
With much less heat being carried north by the surface currents, the northern hemisphere cooled.
The tropical and subtropical regions of the southern hemisphere, by contrast, began warming as they were losing less heat to the north.
This explains many puzzling findings.
The slowdown of the Atlantic current can also help explain why CO2 levels rose during the great thaw (see diagram).
By the 1990s, the search for the source of the CO2 was focusing on the Southern Ocean.
Isotopes in ocean sediments suggested that a huge reservoir of CO2 had built up in deep waters during the ice age.
It is thought that a lack of vertical mixing, along with a cover of sea ice, trapped the gas.
During the thaw, however, the ocean was "uncorked" and much of the CO2 escaped into the atmosphere.
Confirmation came earlier this year, thanks to a very detailed isotopic analysis of the CO2 trapped in ice cores from Antarctica.
"The CO2 must have come from the deep ocean," says team member Jochen Schmitt of the University of Bern in Switzerland.
Increased vertical mixing in the Southern Ocean is now widely accepted as being behind the release of CO2.
In 2009, for instance, Bob Anderson of the Lamont-Doherty Earth Observatory in New York reported that the Southern Ocean saw big increases in the growth of plankton with silica shells during the Oldest Dryas, when the southern hemisphere began warming (Science, vol 323, p 1443).
As the growth of these organisms is limited by how much dissolved silica there is in surface waters, the increases must be due to the upwelling of water rich in silica and other nutrients.
But what caused it?
There are two ideas.
Sigman points out that Antarctica began warming at almost the same time as the waters just south of the equator.
By itself, though, the shutdown of the Atlantic current should only have warmed waters in the tropics, not those as far south as Antarctica.
In 2007, his team proposed that when the Atlantic conveyor shut down, it was replaced by a local overturning circulation in the waters around Antarctica.
Dense surface water sank and deep water welled up, releasing both heat and CO2.
"That would explain both the Antarctic warming and the CO2 rise," says Sigman.
Anderson and his colleagues, however, think that the increased upwelling was driven by changes in winds.
Earth has distinct bands of prevailing winds, driven by the temperature differences between the poles and the tropics, coupled with the planet's rotation.
Their positions can change when the temperature differences change.
During the ice age, the band of westerlies in the southern hemisphere - which sailors call the Roaring Forties due to their latitude - would have been further north.
The see-saw effect shifted it southwards over the Southern Ocean, warming Antarctica and stirring up the sea around the frozen continent.
In particular, the wind-driven circular current would have produced more upwelling in the shallower region between South America and Antarctica.
While the details are still being debated, the big picture now seems clear.
"There is still some disagreement about the processes occurring in Antarctica as the last ice age ended," says Anderson.
"But at least the broader features are pretty well accepted."
Earlier this year, Shakun and colleagues drew together many of these strands of research with an analysis of 80 different records of temperature and atmospheric composition over the past 22,000 years (Nature, vol 484, p 49).
Their work pretty much confirms the sequence of events that ended the ice age. It goes like this:
Around 20,000 years ago, the northern ice sheets had spread so far south that just a small increase in sunshine led to extensive melting.
As fresh water poured into the North Atlantic, the overturning circulation shut down, cooling the northern hemisphere but warming the southern hemisphere.
These changes were mostly due to a redistribution of heat - by 17,500 years ago, the average global temperature had risen just 0.3 °C.
Changing winds or currents, or both, then brought more deep water to the surface in the Southern Ocean, releasing CO2 that had been trapped for thousands of years.
As atmospheric levels climbed above 190 parts per million, the whole planet began to warm.
The far north was the slowest to respond, but by around 15,000 years ago, as CO2 levels approached 240 ppm and the Atlantic overturning circulation sped up again, temperatures started to shoot up.
The recovery of the overturning circulation had the opposite effect in the southern hemisphere: warming stalled and the release of CO2 stopped.
Around 12,900 years ago, the see-saw swung again.
Temperatures in northern latitudes suddenly plummeted and remained cold for about 1300 years.
This cold snap, called the Younger Dryas, is thought to have been caused by a colossal meltwater lake in North America, which held more water than all the Great Lakes put together, suddenly flooding into the Atlantic and shutting down the overturning circulation once again.
The Southern Ocean, meanwhile, started releasing CO2 again. Levels in the atmosphere shot up to 260 ppm, causing the whole planet to warm rapidly over the next couple of millennia.
By around 10,000 years ago, Earth had been transformed.
The ice had retreated, the seas had risen and our ancestors were learning how to farm.
Technically, though, the ice age has not actually ended.
The ice has advanced and retreated many times over the past few million years, but some ice has always remained at the poles.
Perhaps not for much longer, though.
It took just a small increase in sunshine and a gradual, 70-ppm rise in CO2 to melt the great ice sheets that once covered Eurasia and America.
Since the dawn of the industrial age, CO2 levels have risen by 130 ppm and counting.
If we haven't already pumped enough CO2 into the atmosphere to melt the ice sheets on Greenland and Antarctica, we might soon.
Fortunately for us, it might take thousands of years for the last great ice sheets to vanish altogether.
If it does happen, though, perhaps one day builders in Antarctica will find massive potholes in the bedrock carved by meltwater, and reflect on another dramatic transformation of the planet.
Anil Ananthaswamy is a consultant for New Scientist based in Berkeley, California
Science- The Ash Tree fungus is new to Europe but not Asia
Updated: 06 Nov 2012
Are Europe's ash trees finished?
17:31 31 October 2012
by Andy Coghlan
A fungus deadly to ash trees has just reached Britain and Ireland, after emerging 20 years ago in Poland.
Already it has devastated ash trees in mainland Europe, sweeping through more than 20 countries powerless to prevent its spread.
How did this fungus develop?
And what, if anything, can be done to stop it in countries like the UK, where ashes account for around a fifth of all trees? By the sound of it, the outlook is not good. New Scientist investigates.
Just how many ash trees have been killed?
Although there are no official figures, ash trees have effectively been wiped out in Poland, where the disease first made its appearance in 1992.
In Lithuania, 99 per cent of the ashes are gone; in Denmark, 90 per cent.
Elsewhere, the impact has been mixed, with some but not all ashes succumbing.
And now it's reached the UK?
Yes, hence the current panic.
Since February, the disease has been spotted in several English nurseries.
The outbreaks were traced to trees and seeds imported from countries that are already affected, so the response has remained low-key, although 100,000 nursery trees and saplings have been destroyed.
The alarm bells really started ringing last month when the disease was spotted in wild ash trees in East Anglia, one of the regions of England that is closest to mainland Europe.
The fungus most likely spread here naturally.
How did it do that?
It probably blew in from the European mainland; the fungal spores can travel great distances by wind.
Alternatively, it may have been brought here by contaminated birds, or even vehicles and people.
How does the fungus kill ash trees?
Fungal spores land on leaves, germinate and begin invading tissue.
It starts with the leaf, then moves into the leaf stalks.
Ultimately the fungus spreads into the tree's trunk.
As it spreads, the fungus chokes off all water channels in the tree, so in its wake tissues wither and die. Eventually, the tree succumbs.
Can it be stopped?
Most countries where it has taken hold simply gave up.
Part of the problem is that the fungus does not spread from infected trees themselves, but from infected leaves shed in the autumn.
The fungus grows on the leaves and leaf stalks as they decay, then produces copious spores in summer, which spread to uninfected ash trees, completing the life cycle.
"There's very little you can do," says Ottmar Holdenrieder of the Swiss Federal Institute of Technology in Zurich, who in 2010 helped uncover the fungus's complete life cycle.
"It's a waste of time to chop down trees."
The infective material is all on the forest floor and cannot be removed or eradicated with fungicides without destroying countless other forms of forest life.
Which fungus causes the disease?
This is a long story. It begins in 2006, when Tadeusz Kowalski of the University of Agriculture in Krakow, Poland, identified a newly discovered fungus, Chalara fraxinea, as the cause of the disease (Forest Pathology, DOI: 10.1111/j.1439-0329.2006.00453.x).
This did not solve the puzzle.
Fungal species often exist in two forms: one that reproduces itself asexually, and one that multiplies sexually by producing spores.
It turned out that Chalara fraxinea is asexual, so the real killer remained at large.
By 2009, Kowalski had found what he thought was the sexual form of C. fraxinea: Hymenoscyphus albidus, which produces spores from tiny toadstool-like growths on ash leaf litter (Forest Pathology, DOI: 10.1111/j.1439-0329.2008.00589.x).
But this was something of a red herring.
H. albidus has been growing on the decaying leaves of Europe's ash trees for centuries, so was unlikely to be the culprit.
It has been known in the UK, for example, since the mid-19th century.
The real killer, unmasked a year later, produced toadstools identical to those of the harmless old H. albidus.
Using painstaking genetic analysis, Holdenrieder, Kowalski and others found that this doppelganger was actually a different, and lethal, species.
They named it Hymenoscyphus pseudoalbidus (Forest Pathology, DOI: 10.1111/j.1439-0329.2010.00645.x).
How did this killer emerge, apparently out of the blue?
Holdenrieder and his colleagues are still investigating that, and should reveal their results next year.
For now, their hunch is that it came from Asia, either via the wind or accidentally brought in on imported ash trees.
There is circumstantial evidence: ash trees in Asia are immune to the disease.
The ideas that H. pseudoalbidus evolved from its close European relative, or that climate change made European ashes more vulnerable to a pathogen that was already there, have both been ruled out.
In short, the fungus is new to Europe, and the serious money is on it arriving from the east.
Does our identification of the culprit help in combating the disease?
The most important thing is that it may now be possible to breed or develop ash trees that are immune to the fungus.
These have already emerged in Lithuania, says Holdenrieder, where 99 per cent of the original ash population died out.
He says that the offspring of survivors are proving resistant.
Alternatively, it might be possible to develop a vaccine, as was done to protect elm trees against Dutch elm disease, but this is a distant goal.
Meanwhile, now that the fungus is in Europe, it is locked in an "arms race" with the ash trees.
The fungus has the whip hand because it breeds far faster.
Can we do anything to stop the fungus evolving?
Newly published work by Holdenrieder's colleague Andrin Gross suggests that the key is to avoid importing any further variants of the fungus, as these help the fungus to continue evolving and overcoming resistance in the trees (Fungal Genetics and Biology, DOI: 10.1016/j.fgb.2012.08.008).
"Ash trees will never be able to adapt if we constantly introduce new variants of the fungus," says Gross.
What's the best hope for UK trees?
Earlier this week, the UK government banned any further imports of ash trees.
It also initiated a huge survey of forests in East Anglia to establish how far the disease may have spread.
"Once we have a handle on how big or small this issue is, we can decide whether to go for eradication or containment," says a spokesman for the UK Forestry Commission.
The best hope is that any fungus is present only in small pockets that can be cleared to prevent further spread.
The worst case is that it is everywhere, in which case it's probably goodbye to the English ash.
"If that's the case, there's nothing they can do about it," says Jim Briercliffe, business development manager of the Horticultural Trades Association (HTA) in Reading, UK, which represents suppliers of plants, seeds and gardening equipment.
The only option will be to replace dead ash with other kinds of tree.
The HTA warned the government in 2009 that the disease was rife in Danish nurseries and could easily reach the UK.
"It is annoying that our warning was ignored," he says.
How is mainland Europe coping?
"My gut feeling is that the whole of Europe will have to live with the disease in the long term," says Holdenrieder.
"Ash tree populations will be reduced to less than 10 per cent of what they were originally."
Meanwhile, researchers are pooling their resources to see what can be done.
A pan-European group of Chalara specialists called FRAXBACK is meeting for the first time in November in Uppsala, Sweden.
Science- The Universe- Astronomers can see into the past and the full story
Updated: 31 Oct 2012
The universe: the full story
29 October 2012
by Abraham Loeb and Jonathan Pritchard
The grand sweep of cosmic history is about to be revealed in the crackle of giant radio waves
ASTRONOMERS have a great advantage over archaeologists: they can see the past.
The finite speed of light means that the further away a cosmic object is, the longer its light takes to reach us.
An image recorded by our telescopes today tells us how that object looked long ago when its light was emitted.
Using our most powerful telescopes on Earth and in space, we can now trace the universe's story back to a time when it was just 500 million years into its 13.7-billion-year life.
Meanwhile, a single flash that bathed the cosmos in light 400,000 years after the big bang provides us with an isolated exposure of the universe in its infancy.
This infant universe is, like a newborn baby, almost featureless, yet to assume the characteristics that will mark it out in later life.
When our telescopic cameras pick up its story again, however, it is recognisably its adult self - stars, galaxies and clusters of galaxies already populate its reaches.
What happened in between, during the universe's turbulent, formative adolescent years?
That has long been a matter of conjecture.
Now, thanks to a combination of new instruments and refined observational techniques, the missing reel of the universe's story is about to be slotted in - presented in the crackle and hiss of giant radio waves.
We already have a rough storyboard for the universe's missing adolescence.
Clues are encoded in the cosmic microwave background, that radiation suddenly liberated 400,000 years into the universe's life.
At this "epoch of recombination" the cosmos had cooled enough for protons and electrons to form neutral hydrogen, which scattered light in all directions.
Tiny variations in this radiation's temperature show that the atoms were spread not uniformly, but in almost imperceptible clumps.
Gravity's pull, our story goes, caused these clumps to consolidate and grow and eventually ignite into stars. These stars, in turn, felt a mutual attraction, slowly forming ever-larger galaxies over the course of a few hundred million years.
As they did so, the universe underwent a final change in its character: high-energy radiation broke down hydrogen formed at the epoch of recombination, freeing up electrons and protons.
This "epoch of reionisation", which is thought to have ended some 700 million years after the big bang, marked the coming of age of the cosmos we see today.
Convincing as this plot development is, in the absence of observational evidence it is a tale too confidently told. Many significant details of the universe's evolution remain sketchy - and in some cases wholly obscure.
For a start, what made the first galaxies?
The first stars, one might reasonably think.
But the first stars must have been oddballs.
Unlike every subsequent generation, they grew up in a pristine environment consisting solely of hydrogen and helium, the only elements the big bang forged in large quantities.
Nuclear reactions within these pioneering stars created the heavier elements, such as carbon, oxygen and silicon, that went into the mix for later stars such as our sun - and, ultimately, our planet and ourselves.
From what we can tell, these first stars were bloated monsters up to 100 times the size of our sun that lived fast, burned bright and died within a few million years.
Was that long enough to form their own galaxies, or to influence their formation - or was it their less flamboyant successors that first found safety and stability in numbers?
Cosmic monsters that have survived into our times also pose puzzles.
The centre of the Milky Way, and every galaxy like it, seems to host a black hole with a mass millions or even billions of times that of the sun.
How did they get that big?
One theory is that they began as star-sized black holes, produced when massive stars exploded and fell in on themselves, and then slowly grew by sucking in gas and surrounding stars.
Yet a typical supermassive black hole would need longer than the age of the universe to swallow enough material.
An alternative theory is that they were simply born big, produced directly by the collapse of massive amounts of primordial gas.
And was it ultraviolet light from the first stars or X-rays emitted by these black holes as they fed that caused the epoch of reionisation?
Our Milky Way was a product of the universe's dark ages - so asking these questions is again an investigation into our own origins.
We might look for answers by building even larger, more sensitive telescopes to look even further back towards the big bang (see diagram).
At least in the foreseeable future, however, that can only give us a partial view: the objects of interest are so far away that even the most gargantuan telescopes currently planned will see only the very brightest.
An alternative is to capture radio emissions from hydrogen atoms.
Neutral hydrogen gas was an abundant, if diffusely spread, feature of the cosmos between the epochs of recombination and reionisation, and it gives off a faint signal of its presence.
The lone electron and proton within each hydrogen atom act like two bar magnets that can lose energy by "flipping" so that their magnetic moments, or spins, point in opposite directions.
When an atom flips, it releases energy as a photon with a defined radio wavelength of 21 centimetres.
Equally, a hydrogen atom can flip into the higher-energy, aligned state by absorbing a passing photon of the same wavelength.
Either way, the emission or absorption of 21-cm radiation over patches of sky is a sure sign that hydrogen atoms are present.
Because hydrogen is ionised by high-energy radiation, whether from brightly burning stars or from supermassive galactic black holes, maps of where it is should provide a detailed picture of where stars and galaxies are not.
The hydrogen signal was predicted by Dutch astronomer Hendrik van de Hulst in 1944, and first picked up from the Milky Way in 1951 by Harold Ewen and Edward Purcell, who placed a horn antenna through a window of the physics department at Harvard University.
Since then it has been used to detect warm hydrogen gas in our galaxy and others nearby - for example, to measure Doppler shifts to higher or lower wavelengths and so find out what parts of galaxies are moving towards and away from us.
A similar Doppler-like shift can be used to record the evolution of hydrogen in the early universe.
As the universe expands over time, so does the wavelength of radiation that travels through it.
The further away a region is, and so the earlier in cosmic time we see it, the more stretching the radiation undergoes.
This allows us to map the ancient hydrogen in three dimensions: two across the sky and, by tuning our receivers to different stretched wavelengths, a third that is equivalent to distance or "look-back time".
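The tuning described above rests on the simple stretch relation: received wavelength equals the rest wavelength multiplied by (1 + z), where z is the redshift of the emitting gas. A minimal Python sketch makes the mapping concrete; the redshift values chosen below are illustrative, not figures from the article:

```python
# Minimal sketch: converting between redshift and the observed wavelength
# of the hydrogen spin-flip line. The rest wavelength is the standard
# 21.106 cm; the example redshift (~8.5) is illustrative.

REST_WAVELENGTH_CM = 21.106  # hydrogen 21-cm line at rest

def observed_wavelength_cm(z):
    """Wavelength after cosmic expansion: lambda_obs = lambda_rest * (1 + z)."""
    return REST_WAVELENGTH_CM * (1 + z)

def redshift_from_wavelength(obs_cm):
    """Invert the relation: recover the redshift of the emitting hydrogen."""
    return obs_cm / REST_WAVELENGTH_CM - 1

# Gas seen at z ~ 8.5, early in the epoch of reionisation, arrives
# stretched to roughly 2 metres:
print(observed_wavelength_cm(8.5))    # ~200 cm
print(redshift_from_wavelength(200))  # ~8.5
```

Tuning a receiver to a particular wavelength therefore selects a particular slice of cosmic time, which is what turns a set of radio maps into a three-dimensional record.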
The result is a film reel of the universe's missing years that can confirm - or, indeed, disprove - our general theoretical picture and answer many of those nagging questions.
Fine details in the patterns of the 21-cm signal over time should reveal whether first-generation stars were a long-lasting feature in galaxy evolution, or brief candles that flickered once never to be seen again.
Different patterns of ionisation are expected from ultraviolet and X-ray emission, so that should tell us whether stars or black holes were the main agents of reionisation.
If black holes were significant participants, the size of the ionised bubbles around them should reveal whether they were born big or ate their way to supersize.
Clues to a host of problems in fundamental particle physics might also be contained in the new hydrogen movies (see "Visible in radio"). So why have we not made them before?
Put simply, it has not been technically feasible. Hydrogen emissions from the first billion years after the big bang are stretched to wavelengths of around 2 metres, several million times longer than a typical visible wavelength.
The longer the wavelength, the larger the telescope required to capture it with the necessary resolution.
For the sort of radio telescope that springs to most people's mind - an overgrown satellite dish - the size rapidly becomes unfeasibly large.
The world's largest single radio telescope, a dish 305 metres in diameter built into a mountainside near Arecibo, Puerto Rico, does not come even close to the sensitivity required.
That is why the newest generation of radio telescopes takes a different approach. In an optical telescope, photons are generally separated by much more than their short wavelengths.
The telescope is really just a bucket in which we collect individual photons to count them. In a radio telescope, by contrast, incoming photons from a distant source overlap and are recorded as a single continuous wave.
This wave can be sampled at different points by many widely distributed small radio antennas rather as an analogue sound signal is sampled in time to make a digital recording, and the samples combined by computer algorithms into a single, coherent signal.
One such array, India's 30-antenna Giant Metrewave Radio Telescope, has been up and running since 1996, but has proved too small: it was designed in a data-starved era when we believed there was much more hydrogen in the universe than in fact there is.
The more antennas within a given area, the greater the telescope sensitivity.
The bigger the "baseline" of the telescope - the maximum distance on the ground covered by antennas - the smaller the distant objects you can resolve.
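The second rule of thumb follows from the standard diffraction estimate, angular resolution ≈ wavelength / baseline. A short Python sketch shows why metre-scale wavelengths demand kilometre-scale arrays; the baseline figures are illustrative, not specifications of any particular telescope:

```python
# Sketch of the resolution/baseline trade-off using the diffraction
# estimate theta ~ lambda / D (in radians). Baselines are illustrative.

import math

def resolution_arcmin(wavelength_m, baseline_m):
    """Approximate angular resolution of an array whose antennas
    span a maximum distance of baseline_m on the ground."""
    theta_rad = wavelength_m / baseline_m
    return math.degrees(theta_rad) * 60  # convert to arcminutes

# A redshifted 21-cm signal stretched to 2 metres needs a baseline of
# several kilometres just to reach arcminute resolution:
print(resolution_arcmin(2.0, 5000))    # ~1.4 arcmin
# Doubling the baseline halves the smallest resolvable angle:
print(resolution_arcmin(2.0, 10000))   # ~0.7 arcmin
```

Sensitivity, by contrast, scales with the total collecting area, which is why the arrays pack in as many antennas as possible within that footprint.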
Imagine several thousand TV antennas connected to a supercomputer and you have an accurate image of a series of huge, long-wavelength radio arrays that are about to sharpen our view of the deep cosmic past (see diagram).
These arrays still won't have it easy.
Earth's ionosphere interferes with radio waves as they pass, distorting the position and shape of distant sources and creating an effect that is like trying to perform optical astronomy from the bottom of a swimming pool.
This must be corrected by referencing coordinates to a network of radio "pulsars", fast-rotating neutron stars that send regular, pinpointed pulses of radiation our way.
Individual bright radio sources and diffuse radio emissions from our own galaxy, which are 10,000 times brighter than the ancient cosmic signal, must also be peeled away - not to mention the busy noise of our own radio transmissions.
The theoretical and computational tools to overcome such difficulties are now largely in place, and the first definitive detection of hydrogen from the epoch of reionisation is expected within the next five years.
These first pictures will still be a little fuzzy.
For a finer-grained view, all eyes will be on the Square Kilometre Array (SKA), on which construction is due to start in South Africa and western Australia in 2016.
The long-wavelength (low-frequency) part of the SKA, to be completed in Australia by about 2020, will consist of around one million radio antennas, with a collecting area of a square kilometre, plugged into one of the world's fastest supercomputers.
It should help to unravel the nature of the very first galaxies that formed in the first few hundred million years after the big bang.
Looking even further ahead, NASA is examining the possibility of 21-cm experiments on the far side of the moon, thus avoiding the problems both of Earth's ionosphere and of human radio activity.
The idea is not as fanciful as it sounds. After all, in comparison to hauling a weighty conventional telescope with mighty mirrors into orbit, a simple radio array would be little more than a few wires connected to a supercomputer and a power source.
Not much for the ultimate cinematic record of the universe's history.
Visible in radio
Hydrogen's 21-centimetre radio emissions should illuminate the universe's dark adolescence (see main story), but address some deeper questions too.
What was inflation?
The 21-cm observations ultimately measure variations in cosmic density seeded at the period of inflation, a breakneck expansion of space thought to have occurred a split second after the big bang.
The cosmic microwave background radiation provides a single 2D projection of these fluctuations 400,000 years after the big bang.
The 21-cm observations will give us a far richer, 3D source of information on what physics looked like under extremely hot, dense conditions far beyond those recreated in earthly accelerators.
How massive are neutrinos?
That neutrinos have a mass comes as a surprise to the standard model of particle physics - but they do, albeit a tiny and as yet unmeasured one.
Characterising these elusive particles is important for understanding physics beyond the standard model, but it is a frustrating process using detectors on Earth.
Measuring their influence on the formation of structures in the universe is a surprisingly effective alternative.
The passage of neutrinos smoothes the distribution of matter, and the scales on which this smoothing occurs at different times tells us how far and fast neutrinos have travelled since the big bang - and hence their mass.
What is dark matter?
Invisible dark matter is thought to make up 80 per cent of cosmic matter, and is needed to explain why galaxies rotate as fast as they do. Most particles postulated to be dark matter can annihilate each other, releasing energy and heating their surrounds.
This would make an imprint in the hydrogen 21-cm radiation, so measuring its pattern across the early cosmos will narrow down where dark matter was - and possibly reveal its nature.
Abraham Loeb is director of the Institute for Theory and Computation and chair of the Astronomy Department at Harvard University.
Jonathan Pritchard is a lecturer in astrostatistics at Imperial College London
Science- At least your brain is on time
Updated: 31 Oct 2012
Brain circuits run their own clocks
21:00 30 October 2012
by Douglas Heaven
Timing is everything.
But exactly how the brain keeps time, which it does very well, has been something of a mystery.
One widely held theory suggests that a single brain region acts as a centralised timekeeper – possibly in the basal ganglia or cerebellum.
However, a study now suggests that timekeeping is decentralised, with different circuits having their own timing mechanisms for each specific activity.
The finding could help explain why certain brain conditions affect our sense of timing, and even raise the possibility of artificially manipulating time perception.
Geoffrey Ghose and Blaine Schneider, at the University of Minnesota in Minneapolis, investigated timing in the brain by training two rhesus macaques to perform tasks in which they moved their eyes between two dots on a screen at regular 1-second intervals.
There were no external cues available to help them keep track of time.
After three months, the monkeys had learned to move their eyes between the two dots with average intervals of 1.003 and 0.973 seconds, respectively.
The researchers then used electrodes to record brain activity across 100 neurons in the lateral intraparietal cortex – associated with eye movement – while the monkeys performed the task.
The activity of these neurons decreased during the interval between each eye movement, and the rate of decrease correlated with the monkeys' timing.
Using this information, Ghose and Schneider were able to predict the interval between eye movements by measuring the preceding decay rate.
For example, in one task, a slower rate of decrease in the neurons' activity corresponded with a macaque overestimating the length of a second.
Likewise, if neuron activity decreased at a faster rate the monkeys moved their eyes before a second was up.
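The relationship the researchers describe — activity decaying towards a threshold, with the decay rate setting the produced interval — can be captured in a toy decay-to-threshold model. This is an illustrative sketch, not the authors' actual analysis; the starting level, threshold and decay rates below are invented for the example.

```python
def interval_from_decay(start=1.0, threshold=0.2, decay_rate=1.6, dt=0.001):
    """Time (s) for activity decaying exponentially at `decay_rate`
    per second to fall from `start` down to `threshold`."""
    activity, t = start, 0.0
    while activity > threshold:
        activity -= decay_rate * activity * dt  # exponential decay step
        t += dt
    return t

# A slower decay rate predicts a longer produced interval (overestimating
# a second); a faster one a shorter interval, mirroring the correlation
# the researchers report.
normal = interval_from_decay(decay_rate=1.6)  # roughly one second
slow = interval_from_decay(decay_rate=1.2)    # noticeably longer
```

In this sketch, reading off the decay rate is enough to predict the interval — which is essentially what Ghose and Schneider did with the recorded neural activity.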
The researchers now want to study what goes on in this brain area while the monkeys are learning the task, to work out how these time intervals arise.
This may help our understanding of why people with brain lesions or Parkinson's can have difficulty keeping time, says Ghose.
As well as indicating that brain circuits may have their own ability to keep time, the results also hint at how our perception of time can be altered during high emotional states.
Stress is associated with changes in the amount of neuromodulators such as adrenalin present in the brain. Adrenalin is known to affect the rate of decay of neuronal activity.
"And in our model, a change in the activity decay rate is all you need to have a different sense of 'what time' it is," says Ghose.
It might be possible to tweak an individual's sense of timing by altering these signals, he says.
The results support the idea that local neuron populations govern timing behaviour, says Catherine Jones at the University of Essex, UK.
"Given the promising findings, it would certainly be of value to investigate human performance on this task."
Science- Mortality and The Evolution of Funerals
Updated: 23 Oct 2012
Death: The evolution of funerals
23 October 2012
by Graham Lawton
When did our ancestors become aware of their own mortality?
The answer may help us understand the origin of our unique way of life, says Graham Lawton
PANSY died peacefully one winter's afternoon, her daughter Rosie and her friends Blossom and Chippy by her side.
As she lay dying her companions stroked and comforted her; after she stopped breathing they moved her limbs and examined her mouth to confirm she was dead.
Chippy tried twice to revive her by beating on her chest. That night Rosie kept vigil by her mother's side.
Pansy's death, in December 2008, sounds peaceful and relatively routine, but in fact it was highly unusual.
Captive chimpanzees are rarely allowed to die at "home"; they are usually whisked away and euthanised.
But the keepers at Blair Drummond Safari and Adventure Park in Stirling, UK, decided to let Pansy stay with her loved ones until the last so that their response to her death could be observed.
It is hard not to wonder what was going on in the minds of Rosie, Blossom and Chippy before and after Pansy's death.
Is it possible that they felt grief and loss?
Did they ponder their own mortality?
Until recently these questions would have been considered dangerously anthropomorphic and off-limits.
But not any more.
The demise of Pansy is one of many recent observations of chimpanzee deaths, both in captivity and the wild, that are leading to surprising insights about our closest living relatives' relationship with death.
This, in turn, is opening up another, deeper, question: at what point in human evolution did our ancestors develop a modern understanding of death, including awareness of their own mortality?
The answer goes much wider than our attitude to death - it may help us to better understand the origin of our unique way of life.
As far as most animals are concerned, a dead body is just an inanimate object.
Some species have evolved elaborate-looking behaviours to dispose of bodies - mole rats, for example, drag them into one of their burrow's latrines and seal it up - but these are practical acts with no deeper purpose or meaning.
Some non-human animals, though, clearly have a more complex relationship with death.
Elephants are known to be fascinated with the bones of dead elephants, while dolphins have been observed spending long periods of time with corpses.
No animal, though, arouses interest as much as chimps do.
Psychologists James Anderson and Louise Lock from the University of Stirling, who recorded Pansy's death, point out that her companions' responses were "strikingly reminiscent of human responses to peaceful death", including respect, care, testing for signs of life, attempts to revive, vigil, grief and mourning.
Similar things have been seen on the rare occasions when death has been observed among wild chimps.
Primatologists Alexander Piel of the University of California, San Diego, and Fiona Stewart of the University of Cambridge witnessed just such an event in Gombe national park in Tanzania in January 2010.
Early one morning, rangers discovered the body of a female chimp, Malaika, who had apparently fallen out of a tree.
When Piel and Stewart arrived at 9.15 am there was a crowd of chimps around Malaika's body. For the next three and a half hours the pair observed and filmed the scene as a succession of chimps visited the body, while others observed from the trees. Some seemed merely curious, sniffing or grooming the body.
Others shook, dragged and beat it as if in frustration and anger.
Dominant males performed displays of power around it or even with it; the alpha male threw it into a stream bed. Many made distress calls.
When the body was finally removed by rangers, eight of the chimps rushed to where it had lain and intensively - and excitedly - touched and sniffed the ground.
They stayed for 40 minutes, making a chorus of hooting calls before moving off.
The last chimp to visit the spot was Malaika's daughter Mambo.
What are we to make of this? According to Piel, the chimps' behaviour can be classified into three categories: morbidity (intense interest in the body), mourning and "social theatre". And as with Pansy's death, these are very reminiscent of how we behave.
"The danger is to anthropomorphise, but much of this behaviour is still practised by modern humans," says Paul Pettitt, an archaeologist at the University of Sheffield, UK, who studies the origins of human burial.
"We see in chimps very simple behaviours that have become elaborated into more formal expressions of mourning.
It gives us a feel for what we might expect to have been practised by Miocene apes and early protohumans."
We will never know for sure, of course.
But the fossil and archaeological record contains tantalising hints of how this kind of behaviour evolved into modern rituals. And this has become a major question in palaeoanthropology.
Our treatment of the dead clearly falls into the category of "symbolic activity", akin to language, art and the other things that make modern humans unique.
These were all thought to have emerged around 40,000 years ago, but recent discoveries have tentatively pushed this back to 100,000 years or more.
Anything resembling mortuary practices predating 40,000 years ago used to be dismissed as an artefact.
But not any more, says Francesco d'Errico of the University of Bordeaux in France.
"Most archaeologists now accept that modern humans, Neanderthals and possibly other archaic hominins were engaged in mortuary practices well before 40,000 years ago."
Hominids on a hillside
The earliest signs are very old indeed. In 1975, on a steep grassy hillside near Hadar, Ethiopia, palaeontologists discovered 13 specimens of our 3.2 million-year-old ancestor Australopithecus afarensis - nine adults, two juveniles and two infants - all within touching distance of one another and apparently deposited around the same time.
How they got there is a mystery.
There is no evidence of a flash flood or similar catastrophe that could have killed all of them at once.
There is no sign that the bones had been chewed by predators.
They are, as discoverer Donald Johanson later wrote, "just hominids littering a hillside" (see diagram).
Last year, partly in light of chimp research, Pettitt proposed a new explanation: the bodies were left there deliberately in an act of "structured abandonment".
That doesn't mean burial, or anything with symbolic or religious meaning.
"It was probably just the need to get rid of a rotting corpse," says Pettitt.
Even so, it represents a significant cognitive advance over what is seen in chimpanzees, who leave their dead where they fall - perhaps the first stirring of something human.
"It could be recognition that the appropriate place for the corpses is not among the living - a first formal division between the living and the dead," says Pettitt.
Barring new discoveries it will be impossible to confirm that australopithecines deposited their dead in a special place.
But by half a million years ago the evidence is much clearer.
Sima de los Huesos - the pit of bones - was discovered in the 1980s at the bottom of a limestone shaft in a cave in the Atapuerca Mountains of northern Spain.
It contained the remains of at least 28 archaic humans, most likely Homo heidelbergensis, a probable ancestor of both Homo sapiens and Neanderthals.
How did they get there?
An obvious possibility is that they accidentally fell down the shaft, but that seems unlikely from the way the bones fractured. "It doesn't look like a natural accumulation," says Pettitt.
Most of the skeletons are adolescent males or young men, and many show signs of bone disease or deformity.
According to Pettitt the best explanation is that they were deliberately placed at the top of the shaft after death and then gradually slumped in.
If so, this is the earliest evidence of funerary caching, or the designation of a specific place for the dead - perhaps, in this case, for deformed outcasts - a further advancement towards the modern conception of death.
Once you have designated places for the dead you are clearly treating them as if they still have some kind of social agency.
"Once you've reached that point you're on the road to symbolic activity," says Pettitt.
What did these protohumans understand about death?
Did they know that they themselves were mortal?
Did they have a concept of an afterlife?
"We haven't got a clue," says Pettitt.
What we do know is that funerary caching became increasingly common: bodies are found in places that are hard to account for any other way, tucked into fissures and cracks, in hard-to-reach overhangs or at the back of caves.
From funerary caching it is a short conceptual leap to burial - creating artificial niches and fissures to stash the dead.
The earliest evidence we have of this is from two caves in Israel - Skhul and Qafzeh - where the skeletons of 120,000-year-old Homo sapiens were found in what are clearly human-made hollows.
There is also evidence of Neanderthal burials from around the same time.
All this adds to the evidence that humans were on their way to a symbolic culture much earlier than we thought.
"Once you start getting deliberate burials I think it's much more likely that people are thinking in formalised terms, things like life after death," says Pettitt.
Even so, these burials do not represent a point of no return.
Only a handful of such sites are known; compared with the number of people who must have died they are incredibly rare.
It appears that burial was for special occasions; most dead people were probably still cached or abandoned.
It was not until about 14,000 years ago that most people were buried in what we would recognise as cemeteries.
Around the same time people were settling in one place and inventing agriculture and religion - it is probably no coincidence that the world's oldest ceremonial building, Göbekli Tepe in Turkey, was built at that time.
Well before that, however, archaic humans appear to have had a concept of death not unlike ours.
Art, language, elaborate funerary practices - they are just expressions of the same thing, says Pettitt.
"It's part of what distinguishes us not only from other animals but from every other type of human that's gone before."
Graham Lawton is deputy magazine editor of New Scientist
Science- Gasoline from Air ?
Updated: 23 Oct 2012
The big question mark over gasoline from air
22 October 2012
by Paul Marks
In a shipping container on a British industrial park, not far from where George Stephenson launched the world's first steam railway in 1825, another transport revolution might be beginning.
Every day the machinery inside produces half a litre of purified gasoline.
It sounds humdrum until you realise one thing: the only raw material used is air.
Last week, Air Fuel Synthesis (AFS), a company in Stockton, UK, revealed the first successful demonstration of an idea that dates back to the oil crisis of the 1970s: that carbon, hydrogen and oxygen can be plucked from carbon dioxide and water in air to be converted into methanol and then morphed into gasoline.
However, amidst the headlines, some media coverage overlooked the key point: the energy efficiency of the process has yet to be demonstrated. This matters because the technique uses electricity for key stages.
To be viable, the process must not consume more energy than is released by burning the fuel it produces.
The big idea is to capture atmospheric CO2 and turn it into fuel so there's no net increase in CO2 from cars and trucks fuelled by such gasoline.
As long as the process is powered by renewable electricity sources such as solar, wind or tidal, using the gasoline is carbon neutral.
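A back-of-the-envelope calculation shows the scale of the energy question. Gasoline stores roughly 34 MJ per litre, so even a perfectly efficient plant needs at least that much electricity per litre synthesised; any real process needs more. The overall efficiency figure below is purely an assumption for illustration, not a number from AFS.

```python
GASOLINE_MJ_PER_LITRE = 34.2  # approximate energy content of gasoline
KWH_PER_MJ = 1 / 3.6          # unit conversion: 1 kWh = 3.6 MJ

def electricity_needed_kwh(litres, process_efficiency):
    """Minimum electricity (kWh) to synthesise `litres` of gasoline
    at a given overall process efficiency (between 0 and 1)."""
    return litres * GASOLINE_MJ_PER_LITRE * KWH_PER_MJ / process_efficiency

# The demonstrator's half a litre per day at an assumed 30% overall
# efficiency would need roughly 16 kWh of (ideally renewable) electricity.
daily_kwh = electricity_needed_kwh(0.5, 0.30)
```

The calculation makes clear why the sceptics below keep returning to efficiency: the electricity bill scales directly with how lossy each conversion stage is.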
Snagging carbon dioxide
The AFS plant comprises a CO2 capture unit in one shipping container, with a methanol reactor and miniature gasoline refining system in another.
Air is blown into a sodium hydroxide mist, snagging CO2 as sodium carbonate.
A condenser collects water from the same air.
To make methanol – formula CH3OH – hydrogen is generated by electrolysing the water while the carbon and oxygen come from electrolysing the sodium carbonate. The methanol is then converted to gasoline.
Following tests over the last three months, AFS chief executive Peter Harrison says the demonstrator reliably produces half-a-litre of gasoline a day.
Peter Edwards, an inorganic chemist at the University of Oxford whose team is working with a Saudi firm on similar ideas, is impressed: "I take my hat off to Air Fuel Synthesis.
They have taken a concept that has been around for 35 years and gotten the process going."
But Harrison points out the demonstrator, funded with a £1.2 million, two-year investment from private backers, was built to make gasoline, "not to prove its net efficiency or energy balances".
Douglas Stephan, a chemist at the University of Toronto, Canada, also researching fuel production from CO2, describes AFS's demonstrator as "an engineering tour-de-force".
But he too warns efficiency is the key.
"Until a detailed assessment of the energy efficiency is enunciated, I would remain sceptical about this technology," he says.
Andrew Bocarsly, chief science advisor at Liquid Light Inc, a company in Monmouth Junction, New Jersey, aiming to synthesise chemicals like methanol from CO2, points out that many researchers worldwide have so far failed to find cost-effective and efficient ways to split hydrogen from water.
Going to need a bigger plant
"I do wonder about the cost efficiency of their chemical conversion processes," he says, noting energy is required to back convert carbonate to gaseous CO2, to liberate hydrogen from water, to convert the hydrogen and CO2 to methanol and to transform methanol to gasoline.
AFS says demonstrating efficiency will have to wait for a bigger plant, which will fit into three shipping containers that can be dropped anywhere fuel is needed and produce 1200 litres of gasoline a day.
Harrison says motorsport venues, keen to reduce their fossil fuel dependence, and some remote islands have expressed an interest in these £5 million units.
"The demonstrator has given us the confidence that this next level of gasoline plant will be efficient enough," says AFS marketing manager Graham Truscott.
Harrison says the ultimate goal is to build refinery-sized plants that could compete with oil – but he says they could cost £10 billion and need serious government aid.
That in turn would need serious proof of energy efficiency.
Bocarsly adds: "This issue will be the test for commercialisation."
There's one more factor to consider, says Edwards: "The efficiency of this process would also have to be balanced against the cost of alternative measures like burying or dumping CO2 underground."
Science- British Earthquakes
Updated: 17 Oct 2012
Historic British Earthquakes
In a low seismicity country like the UK, even moderate earthquakes are rare.
To get an accurate picture of how often the larger events occur, we need to extend our window of observation back in history.
This means examining all available historical records for reports of ground shaking and determining if this could have been due to an earthquake.
Earthquake location and magnitude are estimated from the intensity of the shaking at different locations.
The following list is an extract of the M5+ earthquakes, given by Musson (1994), A Catalogue of British Earthquakes, British Geological Survey, Technical Report WL/94/04.
Earthquakes around the British Isles in the last 50 days
Last updated: 06:40:04, Wed Oct 17, 2012 (GMT)
17KM SW OF ST HELIER
140KM SSW BRIGHTON
140KM SSW BRIGHTON
30KM NW OF GUERNSEY
3KM NE OF COLERAINE
75KM SE EASTBOURNE
90KM SE EASTBOURNE
Science - Putting the Brakes on Progress but not the search for more Profit
Updated: 09 Oct 2012
Busted! The myth of technological progress
04 October 2012
by Laura Spinney
The history of technology holds some salutary lessons for anyone who blithely believes there is a high-tech fix for all our 21st-century problems
Editorial: "The Singularity is upon us? Not so fast"
WHEN Admiral Zheng He led his fleet out of the eastern Chinese port of Suzhou in 1405, it must have been a sight to behold.
The largest of the several hundred ships under his command were the size of modern aircraft carriers and housed 500 men apiece.
The fleet made seven expeditions in all, to advertise the might of the Ming dynasty around the Indian Ocean, but having returned to port for the last time it was dismantled, vanishing along with the engineering know-how that created it.
For the next few centuries China's seagoing vessel of choice was a much humbler junk.
It seems incredible that such an impressive, and effective, body of knowledge could have disappeared like that, yet history is full of such examples.
When archaeologists began excavating at Pompeii in the 18th century, they uncovered remains of a Roman aqueduct system that was more sophisticated than the one in use at the time.
The Egyptian pyramids still haven't given up all their construction secrets.
And going even further back, finds at Howieson's Poort Shelter in South Africa indicate that people were making highly sophisticated stone tools there until about 60,000 years ago when, for reasons unknown, they reverted to producing much simpler ones.
We tend to think of technological evolution as an exponential curve that starts out more-or-less flat in the early Stone Age and accelerates towards the present.
But the idea that we are becoming ever more inventive may be an illusion. Looked at under the magnifying glass, the apparently smooth curve breaks up into a frenetic series of advances, retreats and new advances - what Peter Richerson, who studies cultural evolution at the University of California, Davis, describes as evolution "noodling about".
In fact, over the whole of human history, we have probably lost more innovations than we now possess, says anthropologist Luke Premo at Washington State University in Pullman.
It is a sobering thought.
Just when we were pinning our hopes on producing hi-tech fixes for today's problems - climate change, overpopulation, emerging infectious diseases and so on - comes the news that we are not advancing inexorably towards technological Nirvana after all.
Nevertheless, a better understanding of how technologies evolve could hold some valuable lessons for the future. In building a more fine-grained picture of human technological history, we may identify clues as to what will work and what will not.
One of the long-standing mysteries of human technological evolution is why our Stone Age ancestors apparently showed so little inventiveness in their toolmaking.
The oldest tools discovered to date are 2.6 million-year-old stone flakes in what is now Ethiopia.
They mark the beginning of a refining process that didn't culminate in really effective stone hand axes until about 2 million years later.
This slow progress, the flat part of the technological evolution curve, has been put down to the limited cognitive abilities of early hominins.
Unable to learn from previous generations, each one had to start again from scratch, which explains why they lacked so-called cumulative culture.
Generally considered to be what separates humans from other primates, cumulative culture rests on two key skills: social learning, which is the transmission of knowledge to new members of a group, and over-imitation - the high-fidelity copying of a behaviour, including irrelevant or incidental elements, which allows the behaviour and its context to be passed along together.
Some researchers have argued that cumulative culture only made its appearance around 100,000 years ago, with Homo sapiens (New Scientist, 24 March, p 34).
But anthropologist and stone-tool expert Dietrich Stout of Emory University in Atlanta, Georgia, has challenged that view.
Innovation tends to happen by the introduction, deliberate or otherwise, of copying errors - the equivalent of genetic mutations in biological evolution - with those that provide an adaptive advantage being more likely to be passed on.
Early humans might well have had what it takes cognitively to learn from their forebears, Stout argues, it is just that with the simple tools at their disposal, there wasn't much room for copying error (Philosophical Transactions of the Royal Society B, vol 366, p 1050).
Put simply, he says, "you can't change much about a hand axe if you still want it to perform all the functions that a hand axe performs".
However, as tool complexity increased, the potential for innovation grew.
Premo suggests another reason why Stone Agers' creativity may have been underestimated.
Throughout those apparently uneventful 2 million years, they were hunter-gatherers who lived in extended, itinerant family groups of between 20 and 40 adults, plus children.
"These small groups could have been exposed to fairly high chances of the whole group going extinct," he says, whether because their best hunter was incapacitated due to illness or injury, or because environmental conditions changed rapidly.
When a local population died out, all its innovations would have died with it, and sometimes that could have meant the loss of generations' worth of know-how.
In 2010, with anthropologist Steven Kuhn of the University of Arizona in Tucson, Premo developed a computer model that recapitulated the behaviour of innovative, tool-using hominin groups in a Stone Age landscape.
It showed that a period of technological innovation followed by the wiping out of the innovators and their kin looks identical, on the broad scale, to a period of technological stasis (PLoS One, vol 5, p e15582).
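The logic of the Premo and Kuhn result can be illustrated with a minimal sketch — not their actual model, and the probabilities below are invented: a group steadily accumulates innovations, but an occasional whole-group extinction wipes out the accumulated know-how, so the long-run record looks nearly flat.

```python
import random

def simulate(generations=2000, innovate_p=0.05, extinct_p=0.02, seed=1):
    """One group's toolkit size over time: innovations accumulate,
    but a whole-group extinction resets the accumulated know-how."""
    random.seed(seed)
    toolkit, history = 0, []
    for _ in range(generations):
        if random.random() < innovate_p:
            toolkit += 1   # a new innovation is added
        if random.random() < extinct_p:
            toolkit = 0    # the group dies out; founders start from scratch
        history.append(toolkit)
    return history

# Without extinctions, toolkit size climbs steadily; with them, it
# stays low on average -- indistinguishable, at a coarse scale, from stasis.
with_extinction = simulate()
no_extinction = simulate(extinct_p=0.0)
```

Even though innovation happens at the same rate in both runs, only the broad-scale average survives in the record — which is the point of the model.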
But if Stone Age toolmakers were innovating, then where is the evidence? The archaeological record is notoriously patchy and the further back you go the sparser it becomes.
Even so, the apparent lack of progress may be partly explained by the timeframe we choose to consider. In soon-to-be-published work, Charles Perreault at the Santa Fe Institute in New Mexico gathered information about 500 archaeological samples - tools, pots and other artefacts dating from the past 10,000 years and coming mainly from North America - and analysed how they changed over time.
He found that the rate of change depended on the period over which he calculated it, appearing to be rapid over short time periods and slower over longer ones.
A key reason for this is that there are many advances and retreats over the shorter term that tend to cancel each other out in the longer timeframe.
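The cancelling effect can be demonstrated with a toy series that advances and retreats at random: measured over short windows, the apparent rate of change is high; over long windows, it is low — even though the generating process never changes. The numbers below are illustrative, not Perreault's data.

```python
import random

def mean_rate(series, window):
    """Average absolute rate of change measured over windows of a given length."""
    rates = [abs(series[i + window] - series[i]) / window
             for i in range(0, len(series) - window, window)]
    return sum(rates) / len(rates)

random.seed(0)
# A trait that advances and retreats: a simple random walk.
walk = [0.0]
for _ in range(10000):
    walk.append(walk[-1] + random.choice([-1.0, 1.0]))

short = mean_rate(walk, 10)    # apparent rate over short windows: high
long_ = mean_rate(walk, 1000)  # apparent rate over long windows: much lower
```

Because steps in opposite directions cancel, net displacement grows far more slowly than the window length, so the measured rate falls as the window widens — the same illusion of perspective Perreault and Gingerich describe.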
There is an intriguing parallel here with biological evolution.
Back in 1983, University of Michigan palaeontologist Philip Gingerich studied how shape and structure changed over millions of years in a wide range of animals.
He, too, found an inverse relationship between rate of change and period of measurement and, like Perreault, concluded that this is simply an illusion of perspective (Science, vol 222, p 159).
The main difference between the two studies is that, by Perreault's calculations, technological change happens approximately 50 times faster than morphological change.
As well as challenging preconceptions about the inventiveness of our Stone Age ancestors, these findings have also fuelled a growing realisation that technological innovations are highly prone to extinction.
Premo and Kuhn's model hinted that there are many reasons why even seemingly clever inventions don't catch on, or die out. In the real world, a classic example can be found on the island of Tasmania.
About 12,000 years ago, as temperatures and sea levels rose at the end of the last ice age, Tasmania was cut off from the Australian mainland and its inhabitants marooned.
Archaeological evidence shows that until the land bridge was severed, Tasmanians possessed a range of complex technologies, including cold-weather clothing, fishing nets, spears and boomerangs.
When Europeans arrived 10 millennia later, almost nothing remained.
They found people whose technology was the simplest of any known contemporary human group.
Low population density and fragile networks for knowledge transfer were the main reasons for this loss, according to Stephen Shennan, director of University College London's Institute of Archaeology.
He notes, though, that in other places and eras different influences have been at play.
For example, market forces and political or social factors can dictate rates of innovation.
A wealthy elite may be essential to sustain a community of craftspeople who need a long training period to learn to make the artefacts the elite desires.
Patents, in the modern sense of the word, were invented in the 15th century, before which craftspeople found other ways to profit from their knowledge for as long as possible - ways that influenced the development of the technologies in question.
Guilds emerged to protect skilled knowledge, for example, keeping the price high but the pool of knowledge transmitters small, and hence vulnerable to extinction if conditions changed.
Factors intrinsic to a technology may also determine its evolution.
An example of this is found in Japanese katana or samurai swords, which remained unchanged for centuries because errors in forging the blades became too costly, discouraging experimentation.
"We tend not to consider cost-benefit ratios," says Shennan, but they can be crucial.
"Something that seems like a thoroughly useful innovation may actually disappear because of the costs associated with it."
Conversely, a technology may spread at the expense of better alternatives because once established it is too expensive to change tack.
An example is the QWERTY keyboard, which is slower to type on than other keyboard layouts, but continues to monopolise the keyboard market in English speaking parts of the world.
Rumour and gossip can shape the trajectory of a technology too.
In the past, using a new tool or medicinal herb might have got you branded as a witch, encouraging people to hide or suppress discoveries.
Religious institutions still have a special kind of power: by attaching moral or spiritual value to an innovation, they can usher it in, by denouncing it they can prevent its spread.
So, what of the future? Are things different now, enabling technological evolution to continue at an ever-faster pace?
Because of the sheer numbers of us on the planet, sparse populations and fragile transmission networks no longer pose a serious threat to innovation.
Besides, since the invention of writing, we have been able to store knowledge outside people's heads and disseminate it widely.
But we may have unwittingly introduced other brakes on progress.
According to Alex Mesoudi, an evolutionary anthropologist at the University of Durham, UK, technological progress - as measured by indicators such as the rate of scientific publication and patents filed - has indeed been accelerating exponentially over the past few centuries, but is now showing signs of slowing.
The trouble, he says, is that we have accumulated so much knowledge, that young people now spend proportionately more time learning from previous generations and less time innovating.
Schoolchildren and students tend to learn a subject in the order that it developed historically. For example, physics undergraduates are tested on their grasp of pre-1900 discoveries. "Only at master's level do they start learning 20th-century stuff," he says. And that lag is having an impact.
In a paper published last year, Mesoudi pointed out that the mean age at which Nobel prizewinners made their prizewinning discovery, or inventors came up with inventions that were considered worthy of entry in prominent technological almanacs, increased from 32 in 1900 to 38 a century later.
It is in this period that he found a decrease in overall rates of innovation (PLoS One, vol 6, p e18239).
"There is some evidence that fields are slowing down," he says.
Something else is happening too.
As technologies become more complex, the associated contextual or causal knowledge is being lost.
People who build cars today do not necessarily understand how a car works, for example, since they may just assemble one part or operate a robot that does it for them.
In Fiji, where houses have to withstand hurricanes, anthropologist Robert Boyd of Arizona State University in Tempe has found that locals have a pretty good grasp of why certain materials stand up to storms better than others, but not why certain structural designs work and others do not.
"Causal understanding is a very powerful and beneficial thing," he says.
"If you are put in a different situation, due to environmental change, say, you can adapt much more quickly if you understand how a technology works than if you have to adapt as a population by trial and error."
It is not yet clear how much of a problem this is, since the information tends to be recorded and the body of people who do understand it, while relatively small, is probably still large enough to ensure preservation.
In publishing his findings, Mesoudi intended to be provocative rather than pessimistic.
He wanted to make people think about how ever-adaptable humans are adapting to the new problems that technological prowess presents.
He suggests, for example, that one way we are overcoming the problem of that long learning period is through the collectivisation of science.
What used to be a predominantly individual activity is now increasingly the occupation of groups who pool their knowledge. And there are potential benefits if it allows us to harness the power of the hive mind.
Mesoudi hopes that by building such adaptations into models of technological evolution, researchers will be able to make more accurate predictions and identify the factors that predispose an innovation to success or failure.
Not all those factors will be under the control of innovators.
Nevertheless, with better insight, they may at least be able to minimise the likelihood of repeating the experience of poor Zheng He, who lost the greatest fleet the world had ever seen.
Laura Spinney is a writer based in Lausanne, Switzerland
Science- Geoengineering- Climate Intervention to moderate global warming
Updated: 25 Sep 2012
From Wikipedia, the free encyclopedia
Oceanic phytoplankton bloom in the South Atlantic Ocean, off the coast of Argentina.
Encouraging such blooms with iron fertilization could lock up carbon on the seabed.
The concept of geoengineering (or climate engineering, climate remediation, and climate intervention) refers to "the deliberate large-scale intervention in the Earth’s climate system, in order to moderate global warming".
The discipline divides broadly into two categories, as described by the Royal Society: "Carbon dioxide removal techniques [which] address the root cause of climate change by removing greenhouse gases from the atmosphere.
Solar radiation management techniques [which] attempt to offset effects of increased greenhouse gas concentrations by causing the Earth to absorb less solar radiation."
The Intergovernmental Panel on Climate Change concluded in 2007 that geoengineering options remained largely unproven. It was judged that reliable cost estimates for geoengineering had not yet been published.
Geoengineering has been proposed as a potential third option for tackling global warming, alongside mitigation and adaptation.
Scientists do not typically suggest geoengineering as an alternative to emissions control, but rather as an accompanying strategy.
Reviews of geoengineering techniques have emphasised that they are not substitutes for emission controls and have identified potentially stronger and weaker schemes.
However, given the long atmospheric lifetime of some greenhouse gases, most notably carbon dioxide, geoengineering represents the only currently known method of reducing Earth's temperature in the short term (years to decades).
To date, no large-scale geoengineering projects have been undertaken and almost all research has consisted of computer modelling or laboratory tests.
Some limited tree planting and cool roof projects are already underway, and ocean iron fertilization is at a beginning stage of research, with small-scale research trials and global modelling having been completed.
Field research into sulfur aerosols has also started.
Various criticisms have been made of geoengineering and some commentators appear fundamentally opposed.
Some have suggested that the concept of geoengineering presents a moral hazard because it could reduce the political and popular pressure for emissions reduction.
Groups such as ETC Group and individuals such as Raymond Pierrehumbert have called for a moratorium on deployment and out-of-doors testing of geoengineering techniques.
The effectiveness of the proposed schemes may fall short of predictions, and the full effects of various geoengineering schemes are not well understood.
Science- The Neolithic Dentist used Beeswax dental fillings and Flint drills
Updated: 25 Sep 2012
Oldest dental filling is found in a Stone Age tooth
19 September 2012
by Colin Barras
You may not want to try this at home.
A simple wax cap that was applied to a broken tooth 6500 years ago is the oldest dental filling on record.
It adds to evidence that Neolithic communities had a surprisingly sophisticated knowledge of dentistry.
The recipient of the treatment was most likely a 24 to 30-year-old man, living in what is now Slovenia.
His fossilised jawbone was found early last century near the village of Lonche.
At the time, the find – one of the oldest human bones ever found in the region – was described, catalogued and filed away in a museum in nearby Trieste, Italy.
"The jawbone remained in the museum for 101 years without anybody noticing anything strange," says Claudio Tuniz at the International Centre for Theoretical Physics in Trieste.
That was until Tuniz and his colleague Federico Bernardini happened to use the specimen to test new X-ray imaging equipment, and spotted some unusual material attached to a canine.
They constructed a high-resolution 3D picture of the tooth, which revealed a long vertical crack, and an area of enamel that had worn away to create a large cavity in which the dentine was exposed.
The unusual material formed a thin cap that perfectly filled the cavity and the upper part of the crack.
Infrared spectroscopy identified the material as beeswax, and radiocarbon dating found both it and the tooth to be around 6500 years old.
This suggests the beeswax may have been used to plug the cracked and worn tooth while its owner was still alive, which would make it the oldest example of a dental filling ever found – predating gold prostheses used in Imperial Rome.
"We knew that we had hit the jackpot," says Tuniz.
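The dating principle behind that 6500-year figure can be sketched in a few lines. This is an illustrative calculation using the conventional Libby mean-life, not the study's actual calibration procedure, and the carbon-14 fraction shown is chosen to match the reported age:

```python
import math

# Conventional radiocarbon age from the surviving carbon-14 fraction,
# using the Libby mean-life of 8033 years. Illustrative sketch only.
def radiocarbon_age(fraction_remaining):
    """Years elapsed for a sample retaining this fraction of its C-14."""
    return 8033 * math.log(1 / fraction_remaining)

# A sample retaining about 44.5% of its original C-14 dates to ~6500 years:
print(round(radiocarbon_age(0.445)))
```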
Flint dentist drills
Although it's difficult to rule out the possibility that the beeswax was added for another purpose – perhaps as part of a funeral ritual – and that the tooth cracked as it dried out in the cave, Tuniz and Bernardini think this is unlikely.
They point out that the placing of the wax suggests it was purposely added to seal the exposed dentine.
Previous finds also suggest that Neolithic humans were competent dentists. In 2001, David Frayer at the University of Kansas in Lawrence and his colleagues found drill holes – probably made by a flint tip – in 11 human molars from a 7500 to 9000-year-old graveyard in Pakistan.
Four of the drilled teeth showed signs of decay, but none carried a dental filling.
"It is always difficult to make sense of manipulations of skeletons or teeth," says Frayer. "But I think [Tuniz and Bernardini's team] have made the best argument possible for the beeswax being used as a dental filling."
"Beeswax would make sense as a filling material for a number of reasons," says Stephen Buckley at the University of York, UK, who was part of a team that recently found evidence, from an analysis of teeth, that Neanderthals practised medicine.
"The melting point of the wax is low, so it's easily melted, yet it solidifies to fit the gap when cooled to room temperature."
He adds that beeswax can contain honey and propolis, both of which have antibacterial and anti-inflammatory properties. "I used beeswax for a major project on Egyptian mummification, and it was very useful – hence its employment by the Egyptian embalmers," he says.
"The more we learn about prehistoric populations the more we appreciate their sophisticated ways," says Frayer. "They did so many interesting things, now being unlocked by careful observation and advanced technology."
Science- Our Ancestors - a melting pot of early humans
Updated: 25 Sep 2012
Genetic study finds complexity in cradle of humanity
21 September 2012
by Douglas Heaven
The melting pot of early human history has just been given a good stir.
The largest study of genetic variation across present-day populations in southern Africa suggests that there is no single place in Africa from which all modern humans emerged.
Instead, our species is the result of mixing between numerous early human populations across a vast area.
Mattias Jakobsson at Uppsala University, Sweden, and his colleagues analysed around 2.3 million single nucleotide polymorphisms (SNPs) – variations in DNA that are useful for comparing regions of the genome between populations – in 220 individuals from 11 southern African populations.
"When we start digging into this data, the most striking result is the deep population structure that we find," says Jakobsson.
This structure suggests that modern humans emerged from a geographically diverse group, in contrast to the "bottleneck" theory in which all humans alive today are descended from a single, relatively homogenous group of people.
"This is important," says Robert Foley at the University of Cambridge, who was not involved in the study.
"One of the big questions has always been where in Africa humans evolved.
Given the size of the continent – three times the size of Europe – saying we evolved in Africa does not really answer the question."
Genomic studies let us investigate the question more fully, says Foley. "Just as today, the earliest modern humans and their descendants lived in populations that did not have clear-cut boundaries, but existed in a world of other African populations," he says.
The study suggests that population structure continued to be complicated even after modern humans had evolved.
It showed that one group that survives to this day – the click-speaking Khoe-San of the Kalahari – was one of the earliest to separate from the rest of humanity, at least 100,000 years ago.
However, the study also found divergence within the Khoe-San themselves, with the Namibian and Angolan groups in the north having separated from those in South Africa as early as 25,000 to 40,000 years ago.
"Most astonishing to me is the deep divergence among the Khoe-San populations," says Brenna Henn at Stanford University, California.
"This really suggests we need to understand the structure of southern African populations in much finer detail."
The team also identified some of the genes that make us look the way we do – such as those affecting eyebrow ridges and the shape of the rib cage.
Since these date back to a time before the Khoe-San split from the rest of humanity, the researchers say that our modern anatomy is at least that old.
"This study is likely to have considerable impact in the field, although perhaps in unexpected ways," says Murray Cox at Massey University in Palmerston North, New Zealand.
"It certainly tells us new things about our history, but more importantly, it highlights the complexity of that history.
In many respects, the study raises a lot of questions about our very earliest ancestors that we haven't thought to ask before."
Science- Animals like Humans are Conscious, but should they both be a welfare concern?
Updated: 25 Sep 2012
Animals are conscious and should be treated as such
24 September 2012
by Marc Bekoff
Now that scientists have belatedly declared that mammals, birds and many other animals are conscious, it is time for society to act
ARE animals conscious?
This question has a long and venerable history.
Charles Darwin asked it when pondering the evolution of consciousness.
His ideas about evolutionary continuity - that differences between species are differences in degree rather than kind - led to a firm conclusion that if we have something, "they" (other animals) have it too.
In July of this year, the question was discussed in detail by a group of scientists gathered at the University of Cambridge for the first annual Francis Crick Memorial Conference.
Crick, co-discoverer of DNA, spent the latter part of his career studying consciousness and in 1994 published a book about it, The Astonishing Hypothesis: The scientific search for the soul.
The upshot of the meeting was the Cambridge Declaration on Consciousness, which was publicly proclaimed by three eminent neuroscientists, David Edelman of the Neurosciences Institute in La Jolla, California, Philip Low of Stanford University and Christof Koch of the California Institute of Technology.
The declaration concludes that "non-human animals have the neuroanatomical, neurochemical, and neurophysiological substrates of conscious states along with the capacity to exhibit intentional behaviors.
Consequently, the weight of evidence indicates that humans are not unique in possessing the neurological substrates that generate consciousness.
Non-human animals, including all mammals and birds, and many other creatures, including octopuses, also possess these neurological substrates."
My first take on the declaration was incredulity.
Did we really need this statement of the obvious? Many renowned researchers reached the same conclusion years ago.
The declaration also contains some omissions.
All but one of the signatories are lab researchers; the declaration would have benefited from perspectives from researchers who have done long-term studies of wild animals, including nonhuman primates, social carnivores, cetaceans, rodents and birds.
I was also disappointed that the declaration did not include fish, because the evidence supporting consciousness in this group of vertebrates is also compelling.
Nevertheless, we should applaud them for doing this.
The declaration is not aimed at scientists: as its author, Low, said prior to the declaration: "We came to a consensus that now was perhaps the time to make a statement for the public...
It might be obvious to everybody in this room that animals have consciousness; it is not obvious to the rest of the world."
The important question now is: will this declaration make a difference?
What are these scientists and others going to do now that they agree that consciousness is widespread in the animal kingdom?
I hope the declaration will be used to protect animals from being treated abusively and inhumanely.
All too often, sound scientific knowledge about animal cognition, emotions and consciousness is not recognised in animal welfare laws.
We know, for example, that mice, rats and chickens display empathy, but this knowledge has not been factored into the US Federal Animal Welfare Act. Around 25 million of these animals, including fish, are used in invasive research each year.
They account for more than 95 per cent of animals used in research in the US. I'm constantly astounded that those who decide on regulations on animal use have ignored these data.
Not all legislation ignores the science. The European Union's Treaty of Lisbon, which came into force on 1 December 2009, recognises that animals are sentient beings and calls on member states to "pay full regard to the welfare requirements of animals" in agriculture, fisheries, transport, research and development and space policies.
There are still scientific sceptics about animal consciousness.
In his book, Crick wrote "it is sentimental to idealize animals" and that for many animals life in captivity is better, longer and less brutal than life in the wild.
Similar views still prevail in some quarters. In her recent book Why Animals Matter: Animal consciousness, animal welfare, and human well-being, Marian Stamp Dawkins at the University of Oxford claims we still don't really know if other animals are conscious and that we should "remain skeptical and agnostic... Militantly agnostic if necessary."
Dawkins inexplicably ignores the data that those at the meeting used to formulate their declaration, and goes so far as to claim that it is actually harmful to animals to base welfare decisions on their being conscious.
I consider this irresponsible. Those who choose to harm animals can easily use Dawkins's position to justify their actions. Perhaps given the conclusions of the Cambridge gathering, what I call "Dawkins's Dangerous Idea" will finally be shelved.
I don't see how anyone who keeps abreast of the literature on animal pain, sentience and consciousness - and has worked closely with any of a wide array of animals - could remain sceptical and agnostic about whether they are conscious.
Let us applaud the Cambridge Declaration on Consciousness and work hard to get animals the protection they deserve. And let us hope that the declaration is not simply a grandstanding gesture but rather something with teeth, something that leads to action.
We should all take this opportunity to stop the abuse of millions upon millions of conscious animals in the name of science, education, food, clothing and entertainment.
We owe it to them to use what we know on their behalf and to factor compassion and empathy into our treatment of them.
Marc Bekoff is an emeritus professor of ecology and evolutionary biology at the University of Colorado, Boulder. He has written many essays and books about animal emotions, animal consciousness and animal protection
Science- We need British built Nationalised Nuclear Energy not foreign company contracts
Updated: 24 Sep 2012
Nuclear power subsidies 'could add £70 to annual household energy bills'
By Emily Gosden |
Subsidies for new nuclear power could add £70 to annual household energy bills, Ian Marchant, the chief executive of SSE, warns.
Ministers should refuse to subsidise EDF Energy’s plans for the first British nuclear reactors in a generation unless the French energy giant agrees to deliver them for a substantially lower price than is widely expected, Mr Marchant argues.
The government is in negotiations with EDF over a long-term guaranteed price for electricity from its proposed plant at Hinkley Point in Somerset.
If the market price for electricity remains below that level, EDF will receive 'top-up’ subsidies paid for through levies on all UK electricity consumers.
EDF has so far refused to say how much the plants will cost or the level of subsidy it will seek, although last month EDF Energy chief executive Vincent de Rivaz told this newspaper the price would be less than £140 per megawatt hour (MWh).
Writing in the Daily Telegraph on Monday, Mr Marchant calculates that the price should be nearer £65/MWh, based on estimates EDF published in 2008, and on independent estimates produced last year for the Government.
“We will all end up bearing the cost of the opaque negotiations going on between Whitehall and Paris,” he warns.
“The difference between paying £65/MWh for new nuclear, versus £140/MWh for new nuclear, assuming we decide to build two reactors, will amount to over £2bn each and every year.
That is around £70 for every household on top of the current electricity bill.”
Mr Marchant adds: “As a country we should walk away at anything over £90/MWh, if not less.”
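Marchant's £2bn and £70-per-household figures can be roughly reproduced. In the sketch below the reactor capacity, load factor and household count are illustrative assumptions (the article does not give them); only the £65/MWh and £140/MWh prices and the two-reactor scenario come from the text:

```python
# Rough check of the subsidy arithmetic (illustrative assumptions).
price_gap = 140 - 65       # £/MWh gap between the two quoted strike prices
reactors = 2               # the two-reactor scenario Marchant describes
capacity_mw = 1600         # assumed net capacity per reactor (MW)
load_factor = 0.9          # assumed availability
households = 27e6          # approximate number of UK households

annual_mwh = reactors * capacity_mw * 8760 * load_factor
extra_cost = annual_mwh * price_gap       # close to £2bn per year
per_household = extra_cost / households   # close to £70

print(f"extra cost: £{extra_cost / 1e9:.1f}bn/yr, "
      f"per household: £{per_household:.0f}")
```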
The huge cost estimates surrounding EDF’s nuclear plans - with some putting the price of Hinkley Point at as much as £14bn - have alarmed even pro-nuclear lobby groups.
The Supporters of Nuclear Energy (SONE) group wrote to the Government this month urging it to block any price deal “unless and until you are absolutely satisfied the nation is getting value for money”.
EDF is seeking new partners to help share the cost of the project and is thought to be in talks with Chinese state nuclear corporations, with a near-30pc stake in the project said to be on offer.
British Gas owner Centrica has an option for a 20pc stake in the new nuclear plants but is thought to be concerned about the risk of cost overruns.
The first two new plants built with the 'EPR' technology to be used at Hinkley Point have both seen major delays and spiralling price tags.
Last week, Alain-Pierre Raynaud, UK chairman of the EPR's manufacturer Areva, insisted the two most recent EPR plants were on time and on budget and that Areva was now "in a position to bring a lot of certainty in the execution of the nuclear programme".
But, despite the apparent confidence in its capabilities, Mr Raynaud said Areva would not necessarily agree a fixed-price contract to shield EDF or consumers from any cost overruns.
“There is no specificity in terms of contract,” he told the Daily Telegraph.
“Depending on the case we have a lot of solutions. It is not yet finalised.”
The contract with Areva is likely to be the biggest single cost component of the nuclear project.
Science- Oil is still king but it is running out!
Updated: 21 Aug 2012
We're still on the slippery slope to peak oil
20 August 2012 by David Strahan
Technology and exploitation of unconventional sources can't defer the long-predicted decline in global oil production
IN 2007 former US energy secretary James Schlesinger claimed the arguments in favour of peak oil - the key theory that global production must peak and then decline - had been won.
With production flat and prices surging towards an all-time high of $147 per barrel, he declared, "we are all peakists now".
Five years on and production has risen by 2.7 million barrels per day to 93 mb/d, prices have recently slumped to around $100 a barrel and those who dismissed the idea that the rate we extract oil from the ground must inevitably decline jeer in delight.
In June a much-touted report by Leonardo Maugeri - an Italian oil executive now at the Geopolitics of Energy Project, based at Harvard University and part-funded by BP - forecast that far from running out of oil, this decade will see the strongest growth in production capacity since the 1980s and a "significant, stable dip of oil prices".
So is that it, panic over, as some commentators who once agreed with the peak view have declared on the basis of Maugeri's report?
Ironically, such shifts come just as some economists - traditionally hostile to peak theory - were coming round to it. Peakonomics, if you will. Unfortunately, any reasonable reading suggests Maugeri is wide of the mark.
The recent hysteria rests heavily on the rise of shale oil in the US, which was unforeseen and is significant.
After four decades of decline, US oil production turned in 2005 and has generated the bulk of the global supply growth since then.
But to brand this a "paradigm-shifter", as Maugeri does, is wrong.
He forecast that this boom will lead to an astonishing 4 mb/d of additional US shale production capacity by 2020. By contrast, the US Department of Energy, usually optimistic, predicts total US shale oil production will peak at just 1.3 mb/d in 2027.
One reason Maugeri's forecast is so high is that he assumes production from existing shale wells will decline by just 15 per cent per year.
Industry consultant Art Berman puts decline rates at around 40 per cent. Analysis by Bob Bracket of US market analysts Bernstein Research shows similarly steep declines, and also that the average shale well takes just six years to become a "stripper well" - producing just 10 to 15 barrels a day.
Such declines are far higher than for conventional wells, effectively meaning the industry must drill furiously just to stand still. It is this factor that will limit future production growth.
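Compounding Berman's 40 per cent annual decline shows why. The initial output figure below is an illustrative assumption, not from the article; only the decline rate and the "stripper well" threshold come from the text:

```python
# Compounding an annual decline rate to see how fast a shale well fades.
initial_bpd = 300      # assumed initial output, barrels per day
decline = 0.40         # Art Berman's estimated annual decline rate

rate = initial_bpd
for year in range(1, 7):
    rate *= 1 - decline
    print(f"year {year}: {rate:.0f} b/d")

# After six years output is around 14 b/d, inside the 10-15 b/d
# "stripper well" range, consistent with Bernstein's six-year figure.
```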
It is distressing that Maugeri's report - which appears to contain glaring mathematical mistakes - got so much attention, but he insists the gist of his report is right. In contrast, an excellent International Monetary Fund working paper in May received much less attention.
The IMF's paper sets out to test the idea that the recent 10-year rise in the oil price - it hit a low of $10 a barrel in the late 1990s - can be explained by geological constraints.
The team took an approach which expresses mathematically the idea that oil becomes harder to produce, the less there remains to be produced - the basis of peak oil theory.
This is clearly right: why would we be scraping out tar sands if there were easy oil left?
When they combined this with the impact of global GDP and oil price, the results were striking.
By testing their model against historical data, they found their production forecasts were more accurate than those of both peak oilers, who are traditionally too pessimistic, and authorities such as the US Energy Information Administration, which is generally far too optimistic.
Their price forecasts were also far more accurate than traditional economic models that take no account of oil depletion, predicting a strong upward trend that closely fits what has happened since 2003.
"When you look at the oil price [over the past decade], the trend is almost entirely explained by the geological view," said Michael Kumhof, one of the authors, when I interviewed him earlier this year.
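The geological premise the IMF team formalised, that oil gets harder to extract as the remaining stock shrinks, is often written as a logistic (Hubbert-style) depletion curve. The sketch below is a toy illustration of that idea with made-up resource and growth numbers, not the IMF working paper's actual model:

```python
# Toy Hubbert-style depletion curve: annual production is proportional
# both to what has been extracted (infrastructure, know-how) and to what
# remains in the ground (geology). Illustrative numbers only.
URR = 2000.0   # assumed ultimately recoverable resource (Gb)
r = 0.05       # assumed growth parameter
Q = 100.0      # assumed cumulative extraction to date (Gb)

peak_year, peak_prod = 0, 0.0
for year in range(200):
    production = r * Q * (1 - Q / URR)   # dQ/dt of the logistic curve
    if production > peak_prod:
        peak_year, peak_prod = year, production
    Q += production

print(f"peak in year {peak_year}, at {peak_prod:.1f} Gb/yr")
```

In this form, output peaks when cumulative extraction reaches half the ultimately recoverable resource, so a peak is built into the mathematics rather than assumed.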
The IMF paper also slays the belief that rising oil prices will liberate vast new supplies and vanquish peak oil.
The team found that production growth has halved since 2005, and forecast that even the lower rate of growth will only be sustained if the oil price soars to $180 by 2020.
"Our prediction of small further increases in world oil production comes at the expense of a near doubling, permanently, of real oil prices over the coming decade," write the authors. In this context, shale oil is not a "game-changer" but a sign of desperation.
"We have to do these really expensive and really environmentally messy things just in order to stand still or grow a little," says Kumhof.
It is true that global oil production has not yet peaked, but that is almost beside the point.
The people who fixate on this need to wake up and smell the fumes we are reduced to running on. The IMF paper shows clearly we are supply-constrained.
The oil price itself ought to be a clue: persistently above $100 per barrel, 10 times higher than it was at the eve of the 21st century.
Price spikes in recent years and recessions are the inevitable outcome of rising competition from fast-growing developing economies for limited supplies.
Domestic consumption among major producers such as Saudi Arabia is also soaring, reducing supply to others.
While global production rose in the five years to 2010, global net exports fell by 3 mb/d, according to independent US geologist Jeff Brown.
How much worse would you like it?
In the film No Country for Old Men, two lawmen find the aftermath of a drug deal gone bad, with corpses strewn about the desert.
The deputy remarks, "It's a mess, ain't it, sheriff?", to which the sheriff replies: "Well, if it ain't, it'll do til the mess gets here."
Likewise, if peak oil has not yet arrived, what I call the last oil shock certainly has.
It'll do til the peak gets here.
David Strahan is an energy writer and author of The Last Oil Shock (John Murray, 2008)
Science- Pye on Power- Nuclear Generated Electricity
Updated: 10 Aug 2012
About Joan Pye Project
Who We Are
The Joan Pye Project has been established to help balance the debate concerning the future of energy in the United Kingdom and to put the case for nuclear generated electricity.
Founded by Joan Pye, MA (Oxon), FINucE(Hon), the Project is an independent network of some 12 distinguished supporters (physicists, chemists and chartered engineers who have spent the major part of their careers in research and in applied areas of the nuclear power industry) and some 250 other keen supporters, alongside known supporters of nuclear power, including those in the industry, numbering over 150,000.
Joan Pye associates are in direct contact with Professor Ian Fells, author of the recent Fells Associates report: “A pragmatic energy policy for the UK.”
Joan is passionate about the contribution nuclear generated electricity can make towards a clean and reliable energy future for the UK, so the Project aims to educate and inform, allay genuinely held fears, and promote better public acceptance of the current need for energy produced by nuclear power.
Joan Pye’s experience as PA to the Head of the Atomic Energy Research Establishment, Harwell where she worked for many years, convinced her that nuclear power is indeed the “Energy for the Next Generation”.
Now in her 90s, she is still the driving force behind the project.
AREA OF EXPERTISE
The Project has a growing band of supporters, direct links with the newly formed Nuclear Institute, and a core team of some of the finest and most experienced brains in the UK today, who can give expert commentary in the following fields:
* Supply of electricity and micro-electricity
* National Grid
* Development of fast reactors and cooling systems
* Uranium supply and distribution
* Vitrification – fixing nuclear waste in glass cylinders
* Safe disposal of highly radioactive waste
* Proposed Severn Barrage - See Fells report for an update
* Long term storage of intermediate nuclear waste
* Design of fuel elements for new nuclear power stations which consume their own waste
What We Do
The Project generates its own funding and is totally independent of Government departments or other political or commercial organizations.
Through media campaigns, papers written by its members, responses to Government Consultations and this website, it strives to demonstrate the enormous contribution which nuclear energy can make to sustainable and efficient “carbon free” energy production and so make a major contribution to combating global warming.
The project hopes to provide assistance and funding to students pursuing careers in the nuclear power industry and is currently working with leading people in Education to produce information which can be used in schools nationwide to educate our children and students on the science behind nuclear energy generation.
Science- Nuclear Power remains popular, Thorium is safer than Uranium
Updated: 10 Aug 2012
UK public confidence in nuclear remains steady despite Fukushima
Benefits of nuclear power outweigh risks, say 41% of the British public, according to poll
The Guardian, Friday 9 September 2011
The accident at Fukushima in Japan in March this year seems to have had little overall impact on the UK public's confidence in nuclear power, according to a poll.
The survey, carried out by Populus last month and commissioned by the British Science Association, found that 41% of respondents agreed the benefits of nuclear power outweighed the risks, up from 38% in 2010 and 32% in 2005.
Those who said that the risks greatly or slightly outweighed the benefits of nuclear power in 2010 numbered 36%, and in 2011 this dropped to 28% of respondents.
The nuclear power plants at Fukushima in northern Japan were damaged during the magnitude 9 earthquake and the resulting tsunami that hit the country in March.
Three of the six nuclear reactors suffered meltdowns in the biggest nuclear accident since Chernobyl in 1986.
There were concerns that the event would irrevocably damage the case for nuclear power around the world – in the months after the Fukushima accident Germany announced the cancellation of its future programme.
And a referendum in Italy in June voted down the government's plan to start a programme of new nuclear reactors.
"It's genuinely surprising to me that views have returned to these early 2010 levels quite so quickly and slightly more positively," said Nick Pidgeon of the University of Cardiff, who discussed the findings of the latest poll at a briefing to mark the launch of the British Science Festival, which starts in Bradford on Saturday.
"There's been a lot of speculation about the impacts of Fukushima on public attitudes – this is the first fully independent study we've had in the UK."
Though overall support was up, there was a striking difference between men and women, with 53% of men in favour of nuclear power but only 21% of women supportive.
"If you dig into the data, you see that men in particular become much more confident about nuclear energy," said Pidgeon.
He also said that blanket media coverage and commentary – something he referred to as the "George Monbiot effect" – may have had a positive effect on public attitudes because, despite the severity of the crisis, no one has so far died.
Populus interviewed 2,050 adults between 26 and 29 August and weighted its results to ensure they were representative of the British population.
Overall, support for nuclear power has been gradually increasing for about 10 years, said Pidgeon, and, in the past five years, the majority of people in Britain have come to support the renewal of the nuclear programme.
Pidgeon said that polls in the direct aftermath of the Fukushima accident had showed a dip in support for nuclear in the UK and elsewhere, though confidence did not collapse.
"There were still more people, even immediately afterwards, in favour of nuclear energy than against in Britain," he said.
The focus of potential concerns has also shifted in the wake of Fukushima.
"If you asked people why they were unhappy about nuclear energy a year ago, they would have brought waste up," said Pidgeon.
"What is clear from other polling is that accidents have gone to the top of what people are now concerned about with nuclear energy, the waste has dropped further down."
Bryony Worthington, a Labour peer and environmental campaigner, said that for the general public the perception of the main cause of the Fukushima problem had not been the design of the reactor but the siting of the power plant.
"Most people said, hang on, why did you put them all on that eastern seaboard, which is a seismically unstable region?"
The withdrawal of support for future nuclear power stations by the German government, she said, was political.
"For Angela Merkel to reverse her decision and phase out nuclear, Fukushima gave her a good opportunity to do it.
She was already under huge political pressure to do that, and Fukushima was just the trigger she found politically expedient."
Thorium reactors are safer
Scientists at the briefing discussed the future of nuclear power, arguing that thorium, rather than uranium, was the safer alternative fuel.
Bob Cywinski, of the University of Huddersfield, said: "One tonne of thorium is equivalent to 200 tonnes of uranium and it doesn't need processing or enriching – 57 kilotonnes of thorium would provide the total energy need of the planet for a year, not just electricity but transport."
Though thorium has been used as a fuel in experimental reactors in the past, it was sidelined in favour of uranium.
"Why did we stop using it? It's the unfortunate fact that civil nuclear power has been so closely linked with the military.
And thorium, unfortunately, does not produce plutonium and is useless as far as proliferation is concerned.
The linking of civil nuclear with military nuclear has probably done a great disservice not just to thorium but to nuclear energy in general."
Kirk Sorensen, president of the Weinberg Foundation, a new NGO launched on Thursday to promote the cause of thorium around the world, said the design of thorium reactors had always focused on safety first.
The intent was to eliminate the root causes of danger in existing nuclear reactors, such as high-pressure coolants and chemically reactive situations.
In addition, thorium reactors only operate as long as there is a source of neutrons being beamed in to split atoms. If this is switched off, the reactor shuts down without any human intervention.
"Reasonable estimates suggest there is no more than 100 years of uranium left, maybe it is time we started turning to thorium," said Cywinski.
"Thorium is four times more abundant than uranium. In principle, there is something like 10,000 years of energy left in our thorium reserves."
Science-The Nuclear Power Debate-The reply letter the Morning Star refused to publish
Updated: 27 Jul 2012
The letter the Morning Star refused to publish -
The Nuclear Power Debate.
Pat Sanchez, Littleborough (MS, 18th July 2012), has made up his mind: nuclear power is wrong and everything else is right, but without nuclear power someone will put his lights out!
Currently 18% of electricity in Britain comes from nuclear power, but by 2030 our needs will increase by 50% and current renewable sources will not meet that demand.
Pat may be happy for Britain to be totally dependent on foreign power, but that is a dangerous route to take.
This Government and the next will support a nuclear power option simply because they have no alternative.
Pat is confused in his argument, because nuclear power will help meet our carbon target, and the Government has only said it is keeping its options open.
Currently there are 150 nuclear stations across the world, and the fact that the Japanese built theirs in the wrong place doesn't make them all a disaster.
We need £110 billion of new investment in this country's energy needs, and nuclear power, replacing old stations, is just what we need to help get the country back on its feet, as long as it is British labour, British design and British money being used.
"We need nuclear, we need renewables, we need clean coal, we need all of those things if we are going to make that transition to cleaner energy," says Ed Miliband, and I would add: to meet our energy needs.
Pat Sanchez would prefer to depend on the likes of solar power, but when we need it most, the lights go out early in Britain.
Science- China to build UK Power Stations ?
Updated: 24 Jul 2012
China in talks to build UK nuclear power plants
British officials talking to Chinese about plan that could see up to five reactors being built at cost of £35bn, sources say
• Terry Macalister and Fiona Harvey
• guardian.co.uk, Friday 20 July 2012 20.00 BST
China is poised to make a dramatic intervention in Britain's energy future by offering to invest billions of pounds in building a series of new nuclear power stations around the country.
Officials from China's nuclear industry have been in high-level talks with the Department of Energy and Climate Change (DECC) this week about a plan that could eventually involve up to five different reactors being built at a total cost of £35bn.
Greenpeace described the move as desperate while others warned of security threats, but the government has been courting China as the UK atomic programme has been slowed by rows over subsidies and worries that EDF – the French company with the most advanced plans to build new reactors here – could be hampered by the change of government in Paris.
China, which has operated its own atomic plants since 1994, is awash with cash from its hugely successful industrial modernisation programme and sees the UK as a potential shop window for exporting its atomic technology and expertise worldwide.
Companies from China have already invested in or taken over other key infrastructure assets in Britain, such as Thames Water, the port of Felixstowe and the Grangemouth oil refinery.
They also own businesses ranging from Weetabix to the Gieves & Hawkes tailoring brand.
The China National Nuclear Power Corporation (CNNPC), which is keen to invest in Britain, has just unveiled plans to raise about £17bn through a domestic share offering.
A team from the Shanghai Nuclear Engineering Research and Design Institute (SNERDI), an arm of the huge China National Nuclear Corporation (CNNC) , met senior DECC officials over the last few days, three different sources confirmed.
The first part of the plan involves CNNC and another state-owned firm, China Guangdong Nuclear Power Corporation, bidding in two separate groups against each other for a stake in the Horizon consortium, which wants to construct new atomic plants at Wylfa in Wales and Oldbury in Gloucestershire.
But sources with close connections to the Chinese say Beijing is also interested in three other locations at Bradwell in Essex, Heysham in Lancashire and Hartlepool in County Durham.
EDF currently has the right of first refusal to operate on these sites but CNNC wants to use an existing technology tie-up with US-based nuclear engineering group Westinghouse to potentially build three more reactors.
The Chinese accept they would need to bring in a UK utility firm to operate the plants and overcome any political or public resistance to their plans.
"The Chinese have the money and the experience," said the well-placed source. "They see setting up in the UK as an opportunity to show they can operate in one of the world's toughest regulatory environments so that they can then move into other markets in Africa and the Middle East."
The DECC was unwilling to comment on whether it had met SNERDI officials this week, saying such meetings would be commercially confidential.
A department spokesman would only say: "The UK is open for business and actively welcomes inward investment to our energy sector, but any potential nuclear operator is, and would be, subject to rigorous scrutiny through the established regulatory process."
Keith Parker, chairman of the Nuclear Industry Association in London, said it was "highly encouraging" that China wanted to invest in the UK.
"They have 14 of their own reactors in operation and 25 under construction and they use both Areva and Westinghouse designs that could be used here. It was clear from my discussions with them that they have international ambitions."
In May, the energy minister Charles Hendry told the Energy and Climate Change select committee he had no objection to Chinese firms being involved in the UK.
"In China, there are different companies who have experience of building dozens of nuclear power stations on time and on budget, and so there is no suggestion that these are companies that do not have expertise in this sector.
They have extremely well-proven expertise in this sector, and in looking at how we take this forward in the United Kingdom I think we should be guided by where that expertise has already been proven."
But Greenpeace said the bid to woo China was a last throw of the dice by the government.
"This is a sign of desperation," said Doug Parr, chief scientist at Greenpeace.
"Chinese nuclear players have state backing, which could help solve the issue of financing colossally expensive new nuclear power stations in the UK.
But this just means that the money from UK taxpayers will flow to the Chinese government, rather than to France."
The potential for political conflict has been highlighted by the former Downing Street energy policy director Nick Butler, who wrote in a recent Financial Times blogpost that Chinese involvement in the UK energy business could be a concern:
"They will be inside the system, with access to the intricate architecture of the UK's National Grid and the processes through which electricity supply is controlled, as well as to the UK's nuclear technology.
"Perhaps that doesn't matter. Perhaps a Chinese wall exists between the Guangdong Holding company and the government in Beijing.
Perhaps we have reached a level of globalisation in which the nationality of ownership is irrelevant.
"But even if all those things are true, it seems regrettable that in return for this investment the Chinese are not being required to halt the cyberattacks and the theft of intellectual property in which they are now the world leaders."
Science- The Nuclear Option in the UK
Updated: 19 Jul 2012
THE NUCLEAR ENERGY OPTION IN THE UK
The government’s recent White Paper on energy policy did not endorse a programme of new nuclear power stations at this time, but declared that “at some point in the future new nuclear build might be necessary if we are to meet our carbon targets.”
Thus, its policy on nuclear energy is “to keep the option open”.
Parliamentary interest in this topic is high.
This briefing analyses some of the issues associated with keeping the option open that the government and industry might need to resolve.
It does not examine whether there is a need to keep the option open, nor indeed the precise means for doing this.
Rather, it focuses on options for new reactors, the economics of nuclear energy, the knowledge base for nuclear technology, and issues related to waste management, licensing and security.
Science-The Case for Nuclear Power-World Energy demand is projected to grow by 50% by 2030
Updated: 19 Jul 2012
Is nuclear energy justified and should it be expanded?
Nuclear power is any nuclear technology designed to extract usable energy from atomic nuclei via controlled nuclear reactions.
The most common method today is through nuclear fission, though other methods include nuclear fusion and radioactive decay.
All current methods involve heating a working fluid such as water, which is then converted into mechanical work for the purpose of generating electricity or propulsion.
Today, more than 15% of the world's electricity comes from nuclear power, over 150 nuclear-powered naval vessels have been built, and a few radioisotope rockets have been produced.
Some countries in the world currently use nuclear power.
However, high construction costs have hindered the development of nuclear power in many countries.
Yet, with rising concerns about global warming and energy prices, nuclear energy has seen renewed attention as an alternative form of energy.
The world energy demand is projected to grow by 50% by 2030.
To meet the short-term demand, the use of coal and other fossil fuels will increase.
The main question in this debate is whether nuclear energy should be included as a major component of 21st-century plans to combat global warming and to help meet the growing energy demand.
Many questions frame this debate: Is nuclear power helpful in reducing greenhouse gas emissions?
Can nuclear power scale to become a serious energy replacement to coal electric power (the main source of electricity globally)?
Does the construction of nuclear plants contribute to global warming in any significant ways?
What about the mining of uranium, and what general environmental risks might this pose?
What concerns surround nuclear waste?
Can these concerns be addressed?
How long can we expect supplies of uranium and nuclear energy to last?
Even if it will run out in the future and is not "renewable", is it still worth pursuing now (particularly in the face of global warming)?
Do nuclear plants pose a risk of "melting down", or have modern nuclear plants eliminated the risk of another Chernobyl or Three Mile Island disaster?
Are there any radiation risks to local communities and to workers at nuclear plants?
What about the threat of terrorist attacks on nuclear plants?
What weapons proliferation risks surround nuclear energy?
Should this prevent the further development of nuclear energy, particularly if it is believed that nuclear energy is part of the solution to the global warming crisis?
Science- Its Nuclear Power or the lights will go out
Updated: 19 Jul 2012
After years of indecision and a drain of expertise, how Britain's nuclear future (and YOUR money) could end up in the hands of the Chinese
By Alex Brummer
PUBLISHED: 22:20, 22 May 2012 | UPDATED: 22:20, 22 May 2012
After years of procrastination, our government has belatedly admitted that unless it urgently addresses Britain’s energy needs, there is a real risk that the lights will go out across the country over the next decade.
It took the departure of Chris Huhne from the Department of Energy and Climate Change for the Coalition to concede that we need a chain of new nuclear power plants - rather than plastering the land with wind turbines.
But the idea that £110 billion of new investment in the country’s future energy needs can be found without a government subsidy, as ministers claim, is a disgraceful piece of government dissembling.
Back on the agenda: It took the departure of Chris Huhne and the appointment of Ed Davey as Energy and Climate Change Secretary for nuclear power to be reconsidered as a solution to Britain's energy shortage
The truth is that the only way that the giant, mainly foreign-owned, global power groups who have been entrusted with Britain’s energy future will be willing to stump up the massive amounts of money to build these nuclear plants is if they are incentivised to do so.
Even then there is no guarantee that the investment will go ahead.
Already we have seen the big German power companies RWE and E.ON renege on promises to build new nuclear plants in Anglesey and Gloucestershire.
This was because of jitters among their shareholders about the scale of outlay required (some £14 billion just to start with).
The Government must persuade French giant EDF to keep open its existing nuclear reactors, such as Sizewell B (pictured) longer than expected
The Coalition must persuade Électricité de France (EDF), the state-controlled French group and its minority partner, Centrica, that they should keep their existing UK nuclear plants operating until a new generation of power stations can take over and produce the necessary 20 per cent of our electricity needs.
In their effort to pretend that there will be no state subsidy (meaning that the costs will, yet again, fall on the poor benighted consumer), ministers have devised a complicated pricing mechanism that is nothing but a piece of shabby camouflage.
Under this scheme, the wholesale price paid for electricity from new nuclear plants will, as a rule, be artificially higher than the market price.
For without this help, foreign companies are most unlikely to invest in Britain’s energy future.
Burden: The Government has said companies building nuclear power plants will not receive a state subsidy, meaning the costs will fall on households
As a result, the extra costs will be added to the already escalating utility bills of British households who can be expected to pay up to £200 a year extra.
And this doesn’t include the extra cost to the National Grid of building new pylons, cables and transmission centres. In fact I’m told that these costs will be up to £200 billion.
And that is not all. Ministers have kept suspiciously quiet about the enormous costs of storing the waste from the new nuclear plants (over a period of thousands of years) or of decommissioning them when they are eventually shut.
Of course none of this is surprising since the history of Britain’s nuclear industry is one of serial mismanagement by successive governments of all parties.
From the 1950s to the 1970s, Britain was a pioneer of nuclear engineering and our skills were sought around the world. But with the availability of cheap oil and gas supplies, nuclear energy was considered too costly.
In addition there were mounting safety concerns, as demonstrated by accidents at Three Mile Island, Pennsylvania in 1979 and Chernobyl in 1986. These made nuclear power a politically toxic issue.
Governments were reluctant to support nuclear power in the wake of crises like the Chernobyl explosion (pictured)
The result was that governments drew back from fresh investment and much of the nation’s expertise moved to places like France and China where there was huge investment in nuclear power.
The final nail in the industry’s coffin was hammered in by Gordon Brown who, as Labour Chancellor, authorised the sale to Japan of UK-owned Westinghouse, one of the few companies capable of building fast-breeder reactors.
With bitter irony, Brown was forced to realise a few years later, when he became prime minister, that Britain faced a dire energy shortage and needed to build several new nuclear power stations very quickly.
As Chancellor of the Exchequer, Gordon Brown allowed the sale of Westinghouse, Britain's only maker of nuclear power plants
Since British Energy, the country’s largest electricity generation company had been taken over by EDF in 2009, British Gas-owned Centrica was brought in as a minority partner in order to maintain some British element in any future nuclear building programme.
But the need is now so great that the Government is willing to allow any foreign company to join the scramble for contracts.
Energy minister Charles Hendry told me this week that the Chinese would be acceptable investors. He said they had built ‘dozens of plants on time and on budget’ and had ‘a very strong commitment to technology and safety.’
The idea, however, that control of some of our nuclear power stations, paid for by taxpayers via their energy bills, could end up in the hands of the Chinese government, with its contempt for human rights and a totally different strategic outlook from the UK's, beggars belief.
In any case, new nuclear power stations take years to build.
From planning permission to construction and then bringing electricity generation on stream takes at least ten years.
In the meantime, we have little choice but to fill the gap by using more gas.
But this, too, will involve costly investment and risks Britain being dependent on Middle Eastern potentates or Russia.
The one lingering hope for the UK is the possible extraction of natural gas and oil from rock formations (so-called ‘fracking’).
This process has proved very successful in America - making U.S. natural gas the cheapest in the world.
With America's vast land mass, fracking there is carried out miles from human populations.
But in parts of Britain where it has been attempted there have been earth tremors in nearby towns.
It is time that the Government faced up to these truths.
Above all, it must stop trying to hide from consumers the fact that they will have to pay the bill for its shameful history of short-termism and its obsession with carbon emission targets and flawed renewable energy programmes.
Read more: http://www.dailymail.co.uk/debate/article-2148370/After-years-indecision-drain-expertise-Britains-nuclear-future-YOUR-money-end-hands-Chinese.html#ixzz210Iw0opx
Science- 18% of Electricity in the UK comes from Nuclear Power
Updated: 19 Jul 2012
Nuclear Power in the United Kingdom
(Updated July 2012)
• The UK has 17 reactors normally generating about 18% of its electricity and all but one of these will be retired by 2023.
• The country has full fuel cycle facilities including major reprocessing plants.
• The UK has implemented a very thorough assessment process for new reactor designs and their siting.
• The first of some 19 GWe of new-generation plants are expected to be on line about 2018.
In the late 1990s, nuclear power plants contributed around 25% of total annual electricity generation in the UK, but this has gradually declined as old plants have been shut down and ageing-related problems affect plant availability.
In 2010, electricity generated in nuclear power plants was 62.14 billion kWh (56.48 TWh net), or 16.4% of total electricity produced from all sources (378 billion kWh).
Gas-fired generation accounted for 46.3% of total (175 billion kWh); coal-fired 28.5% (108 billion kWh); ‘other renewables’ 3.4% (12.8 billion kWh – mainly from biomass); wind 2.7% (10.2 billion kWh), and hydro 0.95% (3.6 billion kWh).
Oil and 'other': 1.7% (6.5 billion kWh).
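As a quick arithmetic check (not part of the original article), the percentage shares quoted above can be recomputed from the billion-kWh figures; a minimal Python sketch, using only numbers stated in the article:

```python
# Recompute the 2010 generation shares from the article's billion-kWh
# figures (total generation from all sources: 378 billion kWh).
total_bkwh = 378.0
generation_bkwh = {
    "nuclear": 62.14,
    "gas": 175.0,
    "coal": 108.0,
    "other renewables": 12.8,
    "wind": 10.2,
    "hydro": 3.6,
    "oil and other": 6.5,
}
# Each share is the source's output as a percentage of total generation.
shares = {name: 100.0 * bkwh / total_bkwh for name, bkwh in generation_bkwh.items()}
for name, pct in shares.items():
    print(f"{name}: {pct:.1f}%")
```

Run as written, this reproduces the quoted figures to within rounding (nuclear ~16.4%, gas ~46.3%), which suggests the percentages and the kWh totals in the paragraph are mutually consistent.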
Net electricity imports from France – mostly nuclear – in 2010 were 2.66 billion kWh, less than 1% of overall supply and similar to 2009, compared with 12.5 billion kWh in 2008, or 3.7% of final electricity consumption.
There is a high-voltage DC connection with France with 2000 MW capacity, and a 1400 MWe link over 700 km with Norway is planned.
Per capita UK electricity consumption was 5220 kWh in 2009.
In 2009, half of British gas was supplied from imports (compared with 32% in 2007), and this is expected to increase to at least 75% by 2015, as domestic reserves are depleted.
This has major implications for electricity generation, with the amount generated from gas expected almost to double from the 170 billion kWh of 2008.
UK generating capacity (2010) is 90 GWe, comprising 35 GWe conventional steam, 34 GWe CCGT (with 5 GWe increase in 2010), 11 GWe nuclear, 2.26 GWe wind (21.7% load factor in 2010), 4.27 GWe hydro including pumped storage, 2.0 GWe other renewables and 1.5 GWe gas turbines and oil. Peak demand is 61 GWe.
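The capacity breakdown above can be cross-checked the same way; a minimal sketch (figures in GWe, all taken from the paragraph, not independent data):

```python
# Sum the quoted 2010 capacity components and compare against the stated
# 90 GWe total and 61 GWe peak demand.
capacity_gwe = {
    "conventional steam": 35.0,
    "CCGT": 34.0,
    "nuclear": 11.0,
    "wind": 2.26,
    "hydro (incl. pumped storage)": 4.27,
    "other renewables": 2.0,
    "gas turbines and oil": 1.5,
}
total_gwe = sum(capacity_gwe.values())  # comes to ~90 GWe, as stated
headroom_gwe = total_gwe - 61.0         # nameplate margin over peak demand
print(f"total: {total_gwe:.2f} GWe, headroom over peak: {headroom_gwe:.2f} GWe")
```

The components sum to roughly the stated 90 GWe, leaving about 29 GWe of nameplate capacity above peak demand; note that nameplate headroom overstates real margin, since wind's 21.7% load factor means its contribution at peak is well below 2.26 GWe.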
The history and development of the UK nuclear industry is covered in Appendix 1 to this paper, Nuclear Development in the United Kingdom.
Currently, there are 16 operating reactors in the UK totalling 10 GWe capacity.
The last operating Magnox reactor - Wylfa 1 - is due to shut down when its fuel runs out, in September 2014.
This will leave seven twin-unit AGR stations and one PWR, all owned and operated by Electricite de France (EdF) subsidiary EdF Energy.
Science-This Country needs new Nuclear Power
Updated: 19 Jul 2012
Country needs new nuclear power, government insists
The Government today insisted the country needed nuclear power as it prepared to unveil plans to fast-track a new generation of nuclear power stations.
Energy and Climate Change Secretary Ed Miliband acknowledged anxieties about nuclear power but said it had a "relatively good" safety record in this country.
Is Britain's future nuclear?
"The basic message here is, we can't say no to all of the nuclear or all of the low carbon fuels that are out there," he told GMTV.
"We need nuclear, we need renewables, we need clean coal, we need all of those things if we are going to make that transition to cleaner energy."
Mr Miliband was speaking as he was due to announce a series of national policy statements which would include a list of sites deemed suitable for new nuclear developments.
Under changes to the planning laws, the Infrastructure Planning Commission (IPC) will be able to speed through proposals for new schemes if it decides they fit in with the policy statements.
But shadow energy secretary Greg Clark said that a simple ministerial statement on the issue was inadequate and called for a Commons vote to give the process "democratic legitimacy".
"It is a national emergency and it's been left far too late - we've known for the last 10 years that most of our nuclear power fleet would come to the end of its planned life," he told BBC Radio 4's Today programme.
Green groups expressed dismay at the prospect of new nuclear power and warned the Government could be open to legal challenge if the statements do not properly consider climate change.
They have also raised concerns that people will not be able to influence decisions on major projects because schemes covered by the statements will not be subject to public inquiry.
But the Government insists firms will have to work closely with local regions and show they have consulted widely to gain approval.
The statements are expected to cite the finite nature of fossil fuels and the pressing demands of climate change while making the case for nuclear power stations.
Mr Miliband will also set out the financial and regulatory framework for driving forward clean coal "carbon capture and storage" technology, but Greenpeace said neither should be part of Britain's future energy mix.
Robin Oakley, head of the group's climate and energy campaign, said: "Nuclear is a dangerous and expensive irrelevance to tackling climate change and providing real energy security.
"We don't need coal or nuclear, because proven green technologies such as wind and combined heat and power stations can secure Britain's energy needs, create green jobs and slash our emissions."
Friends of the Earth executive director Andy Atkins said the battle against climate change should be at the core of all Government decisions to meet commitments on reducing emissions.
And he added: "Building new nuclear reactors is not the answer to the challenges of climate change and energy security.
"Nuclear power leaves a deadly legacy of radioactive waste that remains highly dangerous for tens of thousands of years and costs tens of billions of pounds to manage.
"And building new plants would divert precious resources from developing safe renewable power, while doing little to bring about the urgent emissions reductions that are desperately needed within the next decade."
Read more: http://www.metro.co.uk/news/765010-country-needs-new-nuclear-power-government-insists#ixzz210IdmWot
Science - US - "Fracking wastes could poison drinking water"
Updated: 17 Jul 2012
Study suggests fracking wastes could poison drinking water
by: Blake Deppe
July 10 2012
According to a new scientific study, salty fluids rich with minerals in parts of Pennsylvania's Marcellus Shale are seeping thousands of feet upward, right into drinking water.
This suggests that drilling waste and chemicals from fracking might migrate as well.
Scientists at Duke University and California State Polytechnic University conducted the research, which involved the testing of drinking water wells across Northeastern Pennsylvania. In many instances, the water was mixed with brine that had seeped in from the Marcellus Shale.
The Marcellus Shale is believed to be rich with natural gas, and so fracking, which involves the extraction of that gas, is a top priority for the gas industry.
Fracking has been proven to pose a number of threats both to the environment and peoples' health, and the study, released July 9, concluded that deeply buried rock layers will not necessarily keep harmful fracking chemicals away from drinking water, as previously thought.
The brine that was found had moved thousands of feet to reach the well water, and while its presence had nothing to do with fracking, continuous tampering with the Marcellus Shale could change that - potentially exposing the water to toxicity.
"Everything is not black and white," said Avner Vengosh, a Duke University professor of geochemistry and one of the researchers.
"We're just at the very beginning of understanding what's going on.
The result of this study does not apply to all of Pennsylvania. It needs to be duplicated."
In a separate study conducted by some of the same Duke researchers in 2011, it was also understood that methane gas was much more likely to leak into water supplies in places that were adjacent to drilling.
"In this paper," said the study, "we evaluate the potential impacts associated with gas-well drilling and fracturing on shallow groundwater systems of the Catskill and Lockhaven formations that overlie the Marcellus Shale in Pennsylvania.
Our results show evidence for methane contamination of shallow drinking water systems in at least three areas of the region and suggest important environmental risks accompanying shale gas exploration worldwide."
In a report released by the Penn Environment Research and Policy Center, it's noted that fracking in the Marcellus Shale poses a significant threat to entire populations.
"From Pittsburgh to Scranton, gas companies have already drilled more than 3,000 wells," said the report, "and the state has issued permits for thousands more. Permitted well sites exist within two miles of more than 320 day care facilities, 67 schools, and nine hospitals statewide."
Whether further contamination of Pennsylvania's drinking water from the effects of fracking is imminent is yet to be seen.
"The biggest implication is the presence of connections from deep underground to the surface," said biology professor Robert Jackson, who was part of the study.
"It's a suggestion based on good evidence that there are places that may be more at risk."
Science-Journey to the centre of the Earth ? Drilling through the Earth's crust
Updated: 06 Jul 2012
Mission to the mantle: Drilling through Earth's crust
03 July 2012 by Jheni Osman
It's geology's moonshot.
A bold plan to drill into Earth's interior promises to solve profound mysteries about our planet – and might even find life down there
AN UNLIKELY explorer is floating off the east coast of Japan.
At first glance, the ship resembles a rather strange oil tanker.
It is colossal: perched on deck are a helipad, cranes and a scaffold tower around 30 storeys high.
In the control room, a supervisor monitors the screens, before setting the tall scaffold in motion.
"Confirm the hole position," he says. Inside the tower, machinery whirs as the world's longest drill is lowered towards the ocean floor.
Its ultimate destination, when it gets there, will be uncharted territory.
So goes a typical day on board Chikyu, a Japanese deep-sea drilling vessel.
Today, it is aiming for the fault that caused last year's Tohoku earthquake to reconstruct its causes, but the ship has a much more ambitious goal in sight.
Geologists are planning to use Chikyu to drill all the way through the crust and into the mantle to fetch a cache of rock samples. This feat has never been done before - in fact, no one has even come close.
If the project gets the go-ahead, it will be one of earth science's most spectacular ventures.
Comparable to a moonshot, it could transform our understanding of our planet's evolution, and challenge the fundamental paradigms of earth science.
There is even a chance that we will find something unusual lurking down there, something few would have thought possible until recently.
This is not the first time geologists have yearned to explore the deep Earth.
In 1909, Croatian meteorologist Andrija Mohorovicic discovered that seismic waves, triggered by earthquakes, travelled significantly faster below a depth of 30 kilometres than they did higher up, hinting that these deep rocks had different compositions and physical properties.
With this discovery, Mohorovicic secured his place in the annals of science.
This step change in seismic velocity was named the Mohorovicic discontinuity - aka the Moho - and marks the upper boundary of the mantle.
Geologists now know that the top of the mantle lies 30 to 60 kilometres beneath the surface of thick continental crust, and as little as around 5 km below the seabed at points where the crust is at its thinnest.
What happens at that depth shifts tectonic plates, moulds the land we stand on, and unleashes the fury of earthquakes and volcanoes. It has therefore shaped all life on the planet - including us.
Yet it wasn't until the late 1950s that scientists felt the urge to investigate the mantle.
At the time, the idea of plate tectonics was still hotly debated. Harry Hess, and other proponents of the theory, claimed that hot convective currents from deep within the mantle were driving floating tectonic plates around the planet's surface.
Hess and colleague Walter Munk felt hampered by the lack of physical evidence for the theory, and turned to some of their drinking buddies from the US National Academy of Sciences.
At a wine-fuelled breakfast in California in April 1957, the so-called American Miscellaneous Society hatched a plan to fetch mantle samples. Project Mohole was born.
Numerous challenges had to be met - everything from finding funding to inventing the technology to keep a drilling ship stationary on the high seas.
They couldn't borrow ideas from offshore oil companies, which weren't drilling in deep water at the time, so the Mohole team developed a technology called dynamic positioning, in which cleverly placed propellers and thrusters keep a ship stable and in place.
The first core was drilled to 183 metres off the coast of Guadalupe Island in the Pacific in April 1961.
It was also the last.
Soon after the expedition returned, the leading scientists were side-lined, management changed hands, costs spiralled, and a certain young politician called Donald Rumsfeld stuck his nose in.
In 1966, Project Mohole folded after the US Congress voted to drop its funding.
Despite this, drilling into oceanic crust did continue. Still, we have never got further than about a third of the way to the mantle. The closest a drill has got is a 1507-metre borehole off the coast of Costa Rica.
It's not the deepest hole ever - even in oceanic crust - but the crust there is estimated to be less than 5.5 kilometres thick.
Some boreholes on land extend much further from the surface, but since continental crust is far thicker, their deepest points are tens of kilometres from the mantle.
As far as the geologists behind the 2012 Mohole to Mantle project are concerned, there is a clear scientific rationale to firing up the drill once more.
After all, while the mantle makes up 68 per cent of the Earth's mass, we actually know very little about it.
"There are currently no pristine mantle samples, so we just have hints of what's going on," says Damon Teagle at the UK's National Oceanography Centre in Southampton, who is part of the international team working on the Japanese-led project.
Some samples have reached the surface, but they are all contaminated. For example, rare rocks called mantle nodules have erupted in volcanoes, showing the mantle is made of magnesium-rich, silicon-poor minerals like olivine and pyroxene.
And in some parts of the ocean floor, rocks that were once part of the mantle lie exposed, but contact with seawater has changed their composition dramatically.
Think of these samples as the difference between Martian meteorites and actual rocks picked up from the Red Planet.
Without fresh samples, geologists struggle to confirm even simple facts about our planet, including what exactly the mantle is made of, how it formed and how it works.
Instead, they have had to piece together their theories about the mantle using indirect evidence.
Its broad layering structure is inferred by tracking the speed of seismic waves, as Mohorovicic did.
Further clues to its composition have come from meteorites, which were forged from the same cosmic debris as our rocky planet, or more recently via exotic methods such as looking at the neutrinos produced during the radioactive decay of certain elements.
Many questions remain unanswered, however.
Getting our hands on tracers of mantle convection, such as noble gases and isotopes, would reveal how and when our planet differentiated into the core, mantle and crust, and when plate tectonics started.
Identifying the chemicals and isotopes that make up the upper mantle would show how water, carbon dioxide and energy are transferred to the crust, and how they influence global geochemical cycles.
And finding out how heterogeneous the mantle is would reveal how magma wells up and then erupts onto the sea floor at mid-ocean ridges.
Perhaps the most extraordinary thing we might find in the mantle is life.
While any creatures won't quite live up to the prehistoric monsters envisioned by Jules Verne in A Journey to the Centre of the Earth, they would still be significant.
Recent discoveries suggest such extremophiles might be possible.
Last year, Tullis Onstott at Princeton University uncovered microscopic roundworms, known as nematodes, living an incredible 4 km down in a gold mine in South Africa.
Considering their size, Onstott likened the discovery to finding Moby Dick in Lake Ontario (Nature, vol 474, p 79). He has also found single-celled microbes at even greater depths - up to 5 km down.
Under the sea floor, microbes have turned up 1.6 km down off the east coast of Canada (Science, vol 320, p 1046). The researchers who found them speculate they might be hundreds of millions of years old.
"We showed that the bacteria might be dividing as slowly as, say, once in 100,000 years," says John Parkes of Cardiff University, UK.
Pressure does not seem to be a problem for many extremophiles. In the lab, microbes can tolerate up to 1000 atmospheres, and there are bacteria living happily under 11 km of water in the Mariana Trench in the western Pacific.
In fact, pressure is crucial for survival in searing hot conditions, because it stops water boiling - steam can be a killer.
So temperature could be the deciding factor. Just below the Moho, geologists believe it could be as low as 120 °C.
"This is tantalisingly close to the known upper limit for life: 122 °C," says Parkes. An organism living on hot ocean vents was shown to be capable of growing at this temperature in 2008 (Proceedings of the National Academy of Sciences, vol 105, p 10949).
Still, Matt Schrenk at East Carolina University in Greenville, who studies microbiology in extreme environments, thinks the chances of finding mantle life are slim.
Apart from the heat, he says, fluid circulation will be minimal, so the flow of nutrients would be too.
Despite his doubts, Schrenk supports the Mohole to Mantle project as he thinks it could define the physiological limits of life - and even help the study of climate change since the biosphere down there may influence the circulation of the "deep" carbon cycle.
Deep life could also prove useful in medicine.
"If the organisms are evolutionarily distinct, they could carry out unique activities or possess unique enzymes that could be of use in biotechnology," he says.
Mantle samples could also help us unravel the role of microbial life in the evolution of our planet.
Recent research by geophysicist Norman Sleep at Stanford University in California found that life can be subducted into the crust - and its products, such as ammonium, can be dragged even further down.
Essentially, all the nitrogen in the mantle comes from subducted ammonium in organic matter (Annual Review of Earth and Planetary Sciences, vol 40, p 277).
This raises the possibility that life on the very early Earth changed the composition of the mantle - and useful samples for studying life in this period might still be down there.
At the National Oceanography Centre, Teagle and colleagues have been helping to assemble all of these scientific reasons for the Mohole to Mantle project.
In the labs upstairs, scientists carry out delicate analyses of cores from ocean drill holes.
The chances are, this is where many of the precious mantle samples will be scrutinised.
Teagle says it's not surprising that it has taken decades to pick up where Project Mohole left off. "Technology, time and money were previously the limiting factors to drilling to the mantle," he says.
First, consider the accuracy required to drill 6 km into the crust beneath the ocean floor.
"It will be like lowering a piece of steel string the width of a human hair to the bottom of a 2-metre-deep swimming pool," says Teagle, "and then drilling 3 metres into the foundations."
That means a new extra-long drill will have to be built for Chikyu, which cannot reach such depths at the moment.
New materials will also be required. When drilling a 30 centimetre-wide hole in hard igneous rock at a speed of 1 metre an hour, drill bits only last about 50 hours.
They can also fail catastrophically and be ground into smooth stumps.
The uber-tough materials being developed for the project will need to cope with pressures of 2 kilobars and temperatures of up to 250 °C.
The good news is that an independent review carried out in 2011 by Blade Energy, a deep-water drilling firm, concluded that the project is technically feasible.
"It always used to be that an engineer would invent some gadget and then ask scientists whether they could use it in some way.
More and more, now, the needs of science are driving technology," says Teagle.
In fact, whether the plan succeeds relies less on technology and more on political and scientific will. Teagle reckons the operation of the research vessel alone will cost at least $1 billion.
Fortunately, the Japanese government is committed to covering a significant portion of these costs.
While this is a big investment, it is understandable considering that Chikyu might eventually help with earthquake forecasting.
And it's not only the Japanese who are getting behind the project - others have expressed interest too.
If the Mohole to Mantle team wins an official thumbs up in the next year or so, its members hope to strike mantle gold within a decade.
First, a decision needs to be made on which of the three potential drilling sites to choose.
They are all in the Pacific - one candidate includes the Project Mohole site - and each one is relatively close to mid-ocean ridges, where new crust forms.
Rising magma pushes up the seabed here, making the water shallow enough to reach down with a drill. The rocks at the three sites have also cooled down enough to penetrate safely, and, crucially, the crust formed quickly, so it should be reasonably uniform, which will make drilling easier.
Getting to the mantle is going to be extraordinarily tough, but Teagle sees the project as vital to answering some of the biggest questions challenging geologists today.
It will give us a significantly better understanding of how our planet evolved, he says, as well as defining the limits of life.
"The project will require a space mission-level of planning, but will cost a fraction of going back to the moon or returning rocks from Mars.
Yet a pristine mantle sample would be a geochemical treasure trove, like bringing back the Apollo rocks."
Jheni Osman is a science writer based in Bristol, UK, and author of 100 Ideas that Changed the World (BBC Books, 2011)
Science- We may have to start loving nematode worms
Updated: 06 Jul 2012
Gene switch that turns bacteria into mighty Hulk
19:00 05 July 2012 by Tessa Noonan
It's the kind of biological horror story you couldn't make up.
Tiny nematode worms burrow into moth larvae guts and release a biological weapon: bacteria that switch from feeble, dormant passengers in the worm's intestine to the bacterial version of the Hulk – except they're red.
Todd Ciche and colleagues at Michigan State University in East Lansing have found that just one gene, randomly switching back and forth between two states, transforms tiny, quiescent bacteria into big, red, glowing killers – a process that may reveal how important human infections like antibiotic-resistant staphylococcus, or MRSA, persist.
Some individuals belonging to one species of nematode worm, Heterorhabditis bacteriophora, start life by killing their own mother.
They hatch inside her uterus instead of waiting for her to lay her eggs.
While breaking out – and killing mum – those babies acquire a load of Photorhabdus luminescens bacteria from her body cavity, a gift denied siblings hatched the conventional way.
The tiny, translucent bacteria colonise the baby worms' guts – then go to sleep.
The baby worms then crawl through the soil and into the gut of a moth larva – through its mouth, anus or breathing pores, or just by slashing their way in with a fang.
Once in the larva's gut, the worms vomit up their bacterial passengers.
But these are no tiny dormant gut-clingers: they are seven times larger than 'normal' P. luminescens, glowing red and exuding larva-killing toxins.
Both the bacteria and the worm then feast on the corpse.
The bacteria multiply, process nutrients for the worm and signal it to reproduce – which it does, spawning half a million babies in a two-week infestation.
Wimps to warriors
"The nematode depends on the bacteria to reproduce in addition to killing the insects," says Ciche.
Both then go on to kill again. In fact, you can buy the worms on the internet to kill larval infestations in your lawn.
Biologists studying this ghoulish life cycle knew about the Hulk bacteria but were puzzled by their wimpy alter egos.
Ciche previously found that the Hulk form allows the worm to kill and eat larvae, while the wimps adhere to the mother-worm's anus and burrow into its body cavity to infect the next generation.
The team has now discovered that both kinds are in fact in the worm all the time, and one simple, small stretch of DNA triggers the switch from wimp to Hulk and back.
Locking this madswitch – so-called because it promotes maternal adhesion – stopped the bacteria from switching and infesting new baby worms or parasitising moth larvae.
Surprisingly, the switch flips on and off spontaneously.
"Dogma is that the bacteria would sense a cue or signal and regulate gene expression in response," says Ciche – for instance, turning into Hulks only in the larva's gut. Instead, they use a "bet-hedging strategy" in which they keep both the attacking and colonising forms just in case they get a chance to shine.
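The bet-hedging idea can be sketched as a simple two-state model: each generation, a cell flips between forms with some small, fixed probability, and the population settles into a stable mix of both forms whatever it started as. The switching rates below are illustrative assumptions, not measured values for P. luminescens.

```python
# Minimal sketch of a spontaneous phenotype switch as a two-state
# Markov chain. Rates a and b are illustrative assumptions.

def phenotype_mix(p_hulk, a, b, generations):
    """Fraction of 'Hulk' cells after repeated generations, where a is
    the per-generation probability that a Hulk reverts to the dormant
    form, and b the probability that a dormant cell becomes a Hulk."""
    for _ in range(generations):
        p_hulk = p_hulk * (1 - a) + (1 - p_hulk) * b
    return p_hulk

# Whatever the starting mix, the population converges to b / (a + b):
# both forms persist, so some cells are always ready for either role.
start_all_dormant = phenotype_mix(0.0, a=0.02, b=0.01, generations=500)
start_all_hulk = phenotype_mix(1.0, a=0.02, b=0.01, generations=500)
```

Both runs end at the same equilibrium fraction, which is the essence of bet-hedging: random switching guarantees the population never commits entirely to one form, even with no environmental cue at all.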
Ciche suspects similar Jekyll-and-Hyde transformations occur in human infections.
A stretch of DNA similar to the madswitch regulates gene expression in strains of Escherichia coli that cause fatal disease in humans, such as the strain that struck Germany last year.
Such switches may flip the bacteria into attack mode to counter antibiotics or the human immune system.
But the wimp bacteria may also be useful.
Similar, slow-growing cells have been found in nasty human bacterial infections such as salmonella or MRSA.
As these are in suspended animation, they are less vulnerable to antibiotics, which attack during bacterial growth – the wimp bacteria in the worms were a hundred times less sensitive to antibiotics than the killers.
Ciche suspects some human infections use such mild-mannered forms to dodge attack, perhaps using some of the same genes.
Science- In the beginning was an Atom from which all Matters ?
Updated: 06 Jul 2012
Lisa Grossman, reporter
Yesterday, both Higgs boson-hunting experiments at the Large Hadron Collider separately reported signs of the Higgs boson with 5 sigma of statistical significance.
That means there's only about a 1 in 3.5 million chance that the result is due to background processes - and not a Higgs boson - enough to declare a discovery.
But what if you combined the two results?
Independent physicist Philip Gibbs, not a member of the teams responsible for detectors at the LHC, has done his best to do just that.
The calculation gave him a staggering new confidence level of 7.4 sigma - though it should be treated with caution as the calculation isn't official.
Both the ATLAS and CMS experiments separately combined data collected in 2011 and 2012 to produce the 5 sigma results.
Most of the evidence came from two channels: a Higgs that decays into two photons, and a Higgs that decays into two Z bosons that then decay further into four leptons. These are two of the easiest ways to look for the Higgs at the LHC because they can be measured precisely, unlike some other decay products like neutrinos that escape the detector without being seen.
But the Z boson decay happens only once for every 12,000 Higgs particles, and the photon signal can be drowned out by other processes that also produce photons.
Combining both channels helps overcome each one's weaknesses and gives more confidence in the result.
The same could be said for combining results from the two experiments - each has its own set of possible experimental errors, which the other could cancel out.
But that option "is a controversial one, because once we combine results to get the final observation we can no longer use each experiment as a crosscheck for the other", cautioned Aidan Randle-Conde of ATLAS at the Quantum Diaries blog on Monday, two nights before the result was announced.
The fact that each experiment cleared five sigma on its own makes that less of a concern, so Gibbs, who blogs at viXra, did a rough combination on his own. Combining both channels in both experiments over the past year and a half yielded a striking confidence level of 7.4 sigma.
That means that the chance that both results were produced by background processes in the detector is less than 2 in 10 billion.
Three sigma is considered "observation" and five sigma "discovery", so perhaps 7 sigma could be considered "certainty"?
Still, as the Associated Press reported on Monday, CERN spokesman James Gillies said that he would be "very cautious" about unofficial combinations of ATLAS and CMS data:
"Combining the data from two experiments is a complex task, which is why it takes time, and why no combination will be presented on Wednesday."
So the final count is still to come.
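The arithmetic behind these sigma figures is standard: a z-score maps to a one-sided tail probability of the normal distribution, and one rough way to combine two independent results is Stouffer's method, which adds the z-scores and divides by the square root of their number. Treating the combination this way is a simplifying assumption on my part - Gibbs's actual calculation weighted the individual decay channels - but it lands in the same ballpark.

```python
import math

def sigma_to_p(z):
    """One-sided tail probability of a standard normal beyond z sigma."""
    return 0.5 * math.erfc(z / math.sqrt(2))

# 5 sigma, the particle-physics discovery threshold:
p5 = sigma_to_p(5)  # about 2.9e-7, i.e. roughly 1 in 3.5 million

# Stouffer's method for two independent 5-sigma results (a simplifying
# assumption about how the experiments might be combined):
z_combined = (5 + 5) / math.sqrt(2)  # about 7.07 sigma
```

Two independent 5 sigma results naively combine to just over 7 sigma, which is why an unofficial figure of 7.4 sigma - obtained by weighting the channels rather than treating each experiment as a single number - is plausible even before CERN publishes an official combination.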
Science- Mining or Minding the Moon
Updated: 19 Jun 2012
Who owns asteroids or the moon?
04 June 2012 by Paul Marks
Plans to mine minerals on celestial bodies could violate many aspects of international space law
SHOULD asteroids rich in precious metals be regarded, in legal terms, like the fish in the sea?
That is one approach the United Nations could take as it struggles to come to terms with mining plans announced by Planetary Resources, a start-up company based in Seattle.
In just under two years, Planetary Resources says it will launch the first of a series of space telescopes into low-Earth orbit in a bid to spot nearby asteroids of a size and mineral composition potentially worth mining.
When a strong candidate is found, it plans to dispatch a robotic probe to assess the asteroid's precious metal content, with platinum a priority.
If that is found, yet-to-be developed robots will be dispatched to mine it.
If it is small enough, the asteroid could be brought into an Earth orbit first, to make the process easier.
Planetary Resources's plans seem well advanced and others are not far behind.
And it's not just asteroids in these firms' sights. Moon Express, a start-up based in Las Vegas, is planning to prospect the moon for platinum and other metals deposited on its surface by meteorites.
It all sounds mind-bogglingly expensive and complicated, and it is.
But those planning the operations have more earthly concerns to deal with, too.
Mining asteroids or the moon appears to violate many of the tenets of international space law.
The most important of these is the UN's Outer Space Treaty of 1967, which in rather pompous language states that "the exploration and use of outer space shall be carried out for the benefit of all countries and shall be the province of all mankind".
It also specifically prohibits states from making territorial claims in space.
"States cannot claim rights over an asteroid," says Joanne Wheeler, a lawyer at London legal practice CMS Cameron McKenna and a UK government adviser on the UN's Committee on the Peaceful Uses of Outer Space.
"The Outer Space Treaty says the moon and celestial bodies such as asteroids are not subject to national appropriation.
Whether that means no one owns the asteroids, or we all do under some common heritage, what's clear here is there is no state sovereignty over them."
What applies to sovereign states probably also applies to private companies.
"It is not possible for Planetary Resources to say it owns all of an asteroid even if they are the first there," says Wheeler.
If the ownership of an asteroid is in question, who, then, has legal title to the ores extracted from it and sold back on Earth?
Again, it is not clear, though Wheeler points out that there is already a legitimate market for space rocks in the form of meteorites. This probably puts Planetary Resources in the clear.
Eric Anderson, co-founder of Planetary Resources, doesn't see a problem: "Our analysis shows we have an unequivocal right to mine asteroids. Nothing in the Outer Space Treaty prevents that."
He doesn't agree that asteroids, especially those in the 50 to 500-metre size range, are "celestial bodies". Meteorites are fallen asteroids, he says, and they are not regarded as celestial bodies.
Some even see the treaty as irrelevant to asteroid mining.
"The Outer Space Treaty is a paper tiger with no teeth," says Michael Gold, a lawyer specialising in commercial spaceflight in Washington DC.
"It's unenforceable and any state can pull out of it with a year's notice.
I expect mining capability will trump the law in any situation."
Whichever interpretation you prefer, it is clear that there is no international regime explicitly governing asteroid mining. "Planetary Resources are in a rather grey zone," says Wheeler.
"There is no legal certainty over whether they can do it or not."
She suggests that a future regime could be based on the law of the sea.
"The fish in the high seas are not owned by anyone. You can 'mine' the high seas by taking fish out of them and you can sell them," she says.
"Similarly, asteroids might not be owned by anyone but you might be able to mine the resources and then sell them on."
Mining the moon is also fraught with legal uncertainties. In principle it is governed by an international treaty informally called the Moon Agreement, which seeks to manage our satellite's natural resources.
But the treaty is largely worthless because it has not been ratified by any of the spacefaring nations.
"The Moon Agreement recognises that mining of the moon is about to become feasible," says Wheeler.
"But the US, China and Russia are not signatories, so it lacks teeth."
The UN is encouraging members to sign, but the concern is that a fait accompli by a mining company could render the treaty moot.
Finally, what if space mining operations go wrong?
If miners cause an asteroid that they have nudged nearer to Earth to plummet into the planet, who would be liable?
This is covered by another UN treaty, the Space Liability Convention, which makes the nation that launches a spacecraft liable for damages.
"This concept worked back when it was a clear-cut case of governments launching objects, but with many entrepreneurs now launching spacecraft it's getting much more difficult to apportion blame," says Wheeler.
As a result, the US and Japan are investigating new liability mechanisms, she says.
The chances of Planetary Resources causing impacts are minimal, says Timothy Spahr, director of the asteroid-hunting Minor Planet Center at Harvard University.
Orbital mechanics are well understood, he says, making asteroid trajectory calculations simple.
"Hitting the Earth is a damn hard thing to do."
Like many astronomers, Spahr has an asteroid named after him.
How would he feel about 2975 Spahr being captured and mined?
"That's a tough question," he says.
"You'd have to ask a lawyer."
Somehow, I don't think they will have an answer.
Paul Marks is New Scientist's senior technology correspondent
Science- Criminalising drugs is harming medical research
Updated: 19 Jun 2012
Criminalising drugs is harming medical research
12 June 2012 by Jon White
David Nutt, former adviser to the UK government, says the ban on drugs like ecstasy is hampering neuroscience
How do the drug laws in most countries affect scientific research?
One of the things I find very disturbing about the current approach to drugs, which is simply prohibition without necessarily any full understanding of harms, is that we lose sight of the fact that these drugs may well give us insights into areas of science that need to be explored and may give us new opportunities for treatment.
In what way?
Almost all the drugs of interest in terms of understanding brain phenomena such as consciousness, perception, mood and psychosis are illegal.
And so there is almost no work done in this field.
How bad is the impact?
The effect these laws have had on research is greater than the damage caused by the US government hindering stem cell research.
No one has done an imaging neuroscience study of smoking cannabis.
I can show you 150 papers telling you how the brain reacts to an angry face, but I can't show you a single paper that tells you what cannabis does.
Any examples of missed opportunities?
There were six trials of LSD as a treatment for alcoholism, the last one in 1965.
The evidence is it's as good as anything we've got, maybe better.
But no one is using it for this.
I wonder how many other opportunities have been lost in the past 40 years with important drugs, like MDMA (ecstasy) and its empathetic qualities or cannabis for all its possible uses and insights into conditions like schizophrenia.
All those opportunities have been wasted because it is virtually impossible to work with a drug when it is illegal.
How do you see change coming about?
The scientific bodies in the UK are the ones that should really be challenging the government.
I will try to get the Royal Society and the Academy of Medical Sciences to support my campaign for a more rational approach to the regulation of drugs for research.
You were sacked as a UK government adviser for comparing the risks of horse riding with taking MDMA.
Do you still take this line?
It is still a very important discussion.
It raises the question of what the appropriate comparisons are.
Where do you draw the line on harm?
Should it be drawn equally across all sorts of endeavours and activities that humans engage in?
Should recreational-drug laws be relaxed?
If you are using a drug less dangerous than alcohol, that is a rational choice.
If you are using drugs that are more harmful than alcohol, essentially heroin or other forms of opiates and crystal meth and cocaine, then that's different.
As head of the UK Independent Scientific Committee on Drugs you've written a book, Drugs: Without the hot air.
Who is it for?
Parents and those with no scientific background can read it, children can read it and hopefully the media and politicians will read it.
I hope we can start having more of a discussion about drugs.
David Nutt is neuropsychopharmacology professor at Imperial College London.
He headed the UK Advisory Council on the Misuse of Drugs technical committee for seven years
Science- Harnessing the Sun for renewable energy involves Solar investment
Updated: 13 Jun 2012
Renewable energy's growing pains
18:01 11 June 2012 by Michael Marshall
Renewable energy received record investment in 2011 and expanded massively, but it also struggled with dwindling political support and plummeting prices.
The industry faces several more years of growing pains before it can properly compete with fossil fuels.
By the end of 2011, the global power capacity from renewables was more than 1360 gigawatts, and renewables supplied 20.3 per cent of global electricity, according to the REN21 Renewables 2012 Global Status Report.
Meanwhile, investment in renewables increased by 17 per cent last year to a record $257 billion – six times what it was in 2004, according to a report by the UN Environment Programme.
The UNEP report, Global Trends in Renewable Energy Investment 2012, concludes there were particularly big gains for solar power, which received $147 billion – 52 per cent more than in 2010.
Yet at the same time, solar power companies suffered massive drops in share prices. Six major companies, including Solyndra and Solar Millennium, have sought bankruptcy protection.
The main reason is the steep drop in the cost of solar panels over the last three years, largely due to a switch to large-scale manufacturing. Solar power is now cheaper than diesel in countries such as India. Many governments, including the UK, also cut their financial support for solar, as tumbling prices meant consumers did not need such large subsidies to buy a panel.
It is normal for weaker firms to fall by the wayside as industries ramp up, says Michael Liebreich, chief executive of Bloomberg New Energy Finance.
"The challenge for policy-makers is to reduce support mechanisms at just the right pace," he says. Cutting subsidies too fast will stop renewables in their tracks, but maintaining them for too long will be a waste of money.
Science- Our reasoning is affected by our environment
Updated: 29 May 2012
The argumentative ape:
Why we're wired to persuade
28 May 2012 by Dan Jones
We're all guilty of flawed thinking because our brains evolved to win others round to our point of view – whether or not our reasoning is logical
HAVE you ever, against your better judgement, nurtured a belief in the paranormal?
Or do you believe that gifted rock singers are more likely to die at the age of 27?
Maybe you just have the sneaking suspicion that you are smarter, funnier and more attractive than the next person.
If you buy into any of these beliefs, you are probably suffering from confirmation bias - the mind's tendency to pick and choose information to support our preconceptions, while ignoring a wealth of evidence to the contrary.
Consider the idea that rock stars die at 27 - a fallacy that crops up time and again in the media.
Once you have heard of the "27 club", it is easy to cite a handful of examples that fit the bill - Janis Joplin, Kurt Cobain, Amy Winehouse - while forgetting the countless other musicians who survived their excesses past the age of 30.
The confirmation bias is just one of a truckload of flaws in our thinking that psychologists have steadily documented over the past few decades. Indeed, everything from your choice of cellphone to your political agenda is probably clouded by several kinds of fuzzy logic that sway the way you weigh up evidence and come to a decision.
Why did we evolve such an apparently flawed instrument?
Our irrational nature is very difficult to explain if you maintain that human intelligence evolved to solve complex problems, where clear, logical thought should offer the advantage.
As such, it has remained something of a puzzle.
An elegant explanation may have arrived. Hugo Mercier at the University of Neuchâtel, Switzerland, and Dan Sperber at the Central European University in Budapest, Hungary, believe that human reasoning evolved to help us to argue.
An ability to argue convincingly would have been in our ancestors' interest as they evolved more advanced forms of communication, the researchers propose.
Since the most persuasive lines of reasoning are not always the most logical, our brains' apparent foibles may result from this need to justify our actions and convince others to see our point of view - whether it is right or wrong.
"You end up making decisions that look rational, rather than making genuinely rational decisions," says Mercier.
The flip side, of course, is that we also face the risk of being duped by others, so we developed a healthy scepticism and an ability to see the flaws in others' reasoning.
This ability to argue back and forth may have been crucial to humanity's success - allowing us to come to extraordinary solutions as a group that we could never reach alone.
Mercier and Sperber are by no means the first to suggest that the human mind evolved to help us manage a complex social life.
It has long been recognised that group living is fraught with mental challenges that could drive the evolution of the brain.
Primates living in a large group have to form and maintain alliances, track who owes what to whom, and keep alert to being misled by others in the group.
Sure enough, there is a very clear correlation between the number of individuals in a primate group, and the species' average brain size, providing support for the "social brain" - or "Machiavellian intelligence" - hypothesis (New Scientist, 24 September 2011, p 40).
The evolution of language a few hundred thousand years ago would have changed the rules of the game.
The benefits are clear - by enabling the exchange of ideas, complex communication would have fostered innovation and invention, leading to better tools, new ways to hunt and trap animals, and more comfortable homes.
But the gift of the gab would also have presented a series of challenges. In particular, our ancestors had to discern who to trust.
Signs of expertise and examples of past benevolence would offer reasons to listen to some people, but our ancestors would have also needed to evaluate the ideas of people they may not have known well enough to trust implicitly.
A powerful way to overcome this challenge would have been to judge the quality of their arguments before accepting or rejecting what they had to say, helping the group arrive at the best strategies for hunting and gathering, for instance.
"Providing and evaluating reasons is fundamental to the success of human communication," says Sperber, who has spent years considering the ways an argumentative mind might ease our way through the "bottleneck of distrust", as he calls it.
On the one hand, a healthy scepticism would have been essential, leading us to more critical thought.
Equally beneficial, however, would have been an ability to persuade others and justify our point of view with the most convincing arguments.
It was Mercier who began to wonder whether this need to sway other people's opinions might explain some of our biases - quirks that skew our logic but may nevertheless give us the edge when arguing our case.
So the pair set about reviewing an enormous body of psychological studies of human reasoning.
Consider the confirmation bias.
It is surprisingly pervasive, playing a large part in the way we consider the behaviour of different politicians, for instance, so that we will rack up evidence in favour of our chosen candidate while ignoring their competitor's virtues.
Yet people rarely have any awareness that they are not being objective.
Such a bias looks like a definite bug if we evolved to solve problems: you are not going to get the best solution by considering evidence in such a partisan way.
But if we evolved to be argumentative apes, then the confirmation bias takes on a much more functional role. "You won't waste time searching out evidence that doesn't support your case, and you'll home in on evidence that does," says Mercier.
Mercier and Sperber offer a similar explanation for the "attraction effect" - when faced with a choice between different options, irrelevant alternatives can sway our judgement from the logical choice.
It is perhaps best illustrated by considering a range of smartphone contracts: people who would tend to choose the cheapest option can be persuaded to opt for a slightly up-market model if an even more expensive, supposedly luxury model is added to the mix (see "Decisions, decisions").
According to Mercier and Sperber's argumentative theory, the luxury option might sway our decision by offering an easy justification for our decision to go with the middle option - we can use it to claim that we have landed a bargain. Notably, the attraction effect is strongest when people are told that they will have to defend publicly whatever choice they make.
"In these kinds of situations, reasoning plays its argumentative role and drives you towards decisions that you can easily justify rather than the best decision for you," says Mercier.
The duo found further evidence from the framing effect, first identified 30 years ago by psychologists Daniel Kahneman of Princeton University and Amos Tversky.
In a series of studies, they found that people treat identical options very differently depending on how the options are presented, or framed.
One classic experiment asks people to imagine an outbreak of disease threatening a small town of 600 people.
The subjects are offered two forms of treatment: Plan A, which will definitely save exactly 200 people, and Plan B, which has a 1-in-3 chance of saving everyone and a 2-in-3 chance of saving no one.
Most people choose Plan A. But they tend to change their mind when exactly the same plans are rephrased with a different emphasis.
The subjects are now told that if Plan A is selected, 400 people, but no more, will definitely die.
Plan B stays the same: there's a 1-in-3 chance no one will die, and a 2-in-3 chance that everyone will die.
In this case, most people opt for Plan B - the choice they had previously shunned (Science, vol 211, p 453).
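The two framings describe numerically identical gambles, and a quick expected-value check makes the equivalence explicit. This is a sketch added for illustration (using exact fractions to avoid rounding noise), not part of the original experiment:

```python
from fractions import Fraction

# Expected outcomes for the disease-outbreak problem (600 people at risk).
population = 600
p_success = Fraction(1, 3)   # Plan B: chance that everyone is saved

# "Gain" framing: Plan A saves exactly 200; Plan B saves all or none.
plan_a_saved = 200
plan_b_saved = p_success * population          # expected lives saved

# "Loss" framing: Plan A means exactly 400 die; Plan B kills all or none.
plan_a_died = 400
plan_b_died = (1 - p_success) * population     # expected deaths

print(plan_a_saved, plan_b_saved)                          # 200 200
print(population - plan_a_died, population - plan_b_died)  # 200 200
```

In expectation, each plan saves 200 of the 600 lives under either wording; the preference reversal comes entirely from how the options are framed.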
Kahneman and Tversky explained this inconsistency in terms of "loss aversion": in the second set-up, the loss of life seems especially salient, so people avoid it.
But the argumentative theory offers a new twist, suggesting that participants in these experiments choose the response that will be easiest to justify if challenged.
In the first scenario, there is a direct argument for their choice - it will definitely save 200 lives - whereas in the second scenario, they can instead argue that their decision might save 400 people from certain death.
Once again, experiments have shown that people are more susceptible to the bias when they are told that they will have to defend their decision, just as you would expect if we evolved to convince others of our actions (Journal of Behavioral Decision Making, vol 20, p 125).
The effect may weigh heavily on the way we weigh up the benefits and risks of certain lifestyle choices - it is the reason that "90 per cent fat-free" food sounds healthy, when a product advertised with "10 per cent fat content" would seem less attractive.
Drawing together all the different strands of evidence, Mercier and Sperber published a paper in the journal Behavioral and Brain Sciences last year outlining their theory (vol 34, p 57).
In addition to confirmation bias and the framing and attraction effects, they cited many other seemingly irrational biases that might be explained by our argumentative past, including the sunk-cost fallacy - our reluctance to cut our losses and abandon a project even when it would be more rational to move on - and feature creep, our tendency to buy goods with more features than we would ever actually use.
The paper has caused quite a stir since it was published. Jonathan Haidt, a moral psychologist at the University of Virginia in Charlottesville, believes the theory is so important that "the abstract of their paper should be posted above the photocopy machine in every psychology department".
Mercier and Sperber's ideas dovetail neatly with Haidt's influential view that our moral judgements stem from our gut reactions to moral transgressions, and not from rational reflection. In one example, Haidt and Thalia Wheatley of Dartmouth College in Hanover, New Hampshire, showed that hypnotically inducing the feeling of disgust leads people to make harsher moral judgments, even in cases when no one has done anything wrong - supporting the idea that emotion rather than logical reasoning drives morality (Psychological Science, vol 16, p 780).
We still spend masses of time arguing about the morality of certain situations - whether we are considering a friend's infidelity or debating the "war on terror" - but according to Haidt's research, we are simply trying to justify our gut reactions and persuade others to believe our judgments, rather than attempting to come to the most just conclusion.
"Moral argumentation is not a search for moral truth, but a tool for moral persuasion," says Haidt.
The idea that we evolved to argue and persuade, sometimes at the expense of the truth, may seem to offer a pessimistic view of human reasoning.
But there may also be a very definite benefit to our argumentative minds - one that has proved essential to our species' success. Crucial to Sperber and Mercier's idea is the fact that we are not only good at producing convincing arguments, but we are also adept at puncturing other people's faulty reasoning.
This means that when people get together to debate and argue against each other, they can counterbalance the biased reasoning that each individual brings to the table.
As a result, group thinking can produce some surprisingly smart results, surpassing the efforts of the irrational individuals. In one convincing study, psychologists David Moshman and Molly Geil at the University of Nebraska-Lincoln looked at performance in the Wason selection test - a simple card game based on logical deduction.
When thinking about this task on their own, less than 10 per cent of people got the right answer.
When groups of 5 or 6 people tackled it, however, 75 per cent of the groups eventually succeeded.
Crucially for the argumentative theory, this was not simply down to smart people imposing the correct answer on the rest of the group: even groups whose members had all previously failed the test were able to come to the correct solution by formulating ideas and revising them in light of criticism (Thinking and Reasoning, vol 4, p 231).
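For readers unfamiliar with the task: given cards showing E, K, 4 and 7, and the rule "if a card has a vowel on one side, it has an even number on the other", only E and 7 need to be turned over. A brute-force sketch (my own illustration, not code from the study) finds the answer by asking which cards could be hiding a counterexample:

```python
# Wason selection task: which visible cards must be flipped to test the
# rule "if a card has a vowel on one side, it has an even number on the
# other"? Visible faces: E, K, 4, 7.
def is_vowel(face):
    return face.isalpha() and face in "AEIOU"

def is_even_number(face):
    return face.isdigit() and int(face) % 2 == 0

def could_falsify(visible, hidden_faces):
    """A card must be flipped if some possible hidden face would make the
    rule false: a vowel on one side with an odd number on the other."""
    for hidden in hidden_faces:
        pair = (visible, hidden)
        for vowel_side, other_side in (pair, pair[::-1]):
            if (is_vowel(vowel_side) and other_side.isdigit()
                    and not is_even_number(other_side)):
                return True
    return False

# One representative hidden face of each kind: vowel, consonant, even, odd.
possible_hidden = ["A", "B", "4", "7"]
cards = ["E", "K", "4", "7"]
must_flip = [c for c in cards if could_falsify(c, possible_hidden)]
print(must_flip)   # ['E', '7']
```

The common mistake is flipping the 4 (which can never falsify the rule) while ignoring the 7 (which can), which is why fewer than 10 per cent of individuals succeed.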
There is also good evidence that groups are more creative than individual lone thinkers (see "Genius networks: Link to a more creative social circle").
Given that the skills of the individual members do not seem to predict a group's overall performance, what other factors determine whether it sinks or swims?
Anita Williams Woolley of Carnegie Mellon University in Pittsburgh, Pennsylvania, helped to answer this question with a series of studies designed to measure a group's "collective intelligence", in much the same way an individual's general intelligence can be measured by IQ tests.
The tasks ranged from solving visual puzzles and brainstorming ideas to negotiating how to distribute scarce resources.
She concluded that a group's performance bears little relation to the average or maximum intelligence of the individuals in the group. Instead, collective intelligence is determined by the way the group argues - those who scored best on her tests allowed each person to play a part in the conversations.
The best groups also tended to include members who were more sensitive to the moods and feelings of other people. Groups with more women, in particular, outperformed the others - perhaps because women tend to be more sensitive to social cues (Science, vol 330, p 686).
Such results are exactly what you might expect from a species that evolved not to think individually, but to argue in groups. Mercier and Sperber do not believe this was the primary benefit of our argumentative minds, though.
"We think that argumentation evolved to improve communication between individuals, helping communicators to persuade a reticent audience, and helping listeners to see the merits of information offered by sources they might not trust," says Sperber.
"As a side effect, you get better reasoning in a group context."
Others aren't so sure, believing instead that improved group reasoning drove the evolution of our ability to argue.
"If reasoning works so much better in a group context, then why shouldn't it have evolved for collective reasoning, given that we are a social animal?" asks philosopher Keith Frankish of the University of Crete in Greece, who nevertheless remains undecided on the issue.
That is not to say that group thinking does not backfire occasionally.
"The problem is that in many high-stakes situations, vested interests and emotions run high," says Robert Sternberg, a psychologist at Oklahoma State University in Stillwater.
This is especially true when groups of like-minded individuals focus on emotionally charged topics.
"In these situations, people egg each other on to more extreme positions, while more moderate thinkers are chased out," says Sternberg.
This can all too easily lead to dangerous "groupthink", in which dissent is stifled and alternative courses of action are ignored, often resulting in disastrous decisions.
When Irving Janis developed the idea of groupthink in the 1970s, he used it to explain catastrophic group decisions such as the escalation of the Vietnam war under US president Lyndon Johnson.
Today, the same perils can be seen in the decision to invade Iraq despite the lack of compelling evidence for weapons of mass destruction.
Even though thinking things through in groups can go awry, some researchers believe it is high time to make better use of our argumentative brains for collective reasoning.
For the past decade, Neil Mercer, an educational psychologist at the University of Cambridge, has been leading the "Thinking Together" project, which explores collaborative reasoning and learning in the classroom.
His work shows that when children think together, they engage with tasks more effectively, and use better reasoning as they solve problems.
The results are striking in science and mathematics problems: not only do groups often do better on these tasks, but individuals who participate in group reasoning also end up doing better in their exams in these subjects. Similar improvements can be seen in the kinds of non-verbal reasoning tasks used in IQ tests.
"Kids can learn to see group reasoning as a kind of enlightened self-interest that benefits everyone," says Mercer.
His work suggests a few pointers to get the best results. Group reasoning was most productive when the children were asked to engage in "exploratory talk", he says, where ideas can be openly aired and criticised, and when they entered the task with the clear goal of seeking agreement, even if this goal remained elusive.
Although such collaborative forms of teaching have gained some measure of popularity in recent years, Sternberg believes educational systems are still too focused on developing individual knowledge and analytical reasoning - which, as the research shows, can encourage us to justify our biases and bolster our prejudices.
"We believe that our intelligence makes us wise when it actually makes us more susceptible to foolishness," says Sternberg. Puncture this belief, and we may be able to cash in on our argumentative nature while escaping its pitfalls.
Dan Jones is a writer based in Brighton, UK
Science- Clean Coal Technology: How It Works
Updated: 23 May 2012
Clean coal technology: How it works
When burned, coal is the dirtiest of all fossil fuels, but a range of technologies are being used and developed to reduce the environmental impact of coal-fired power stations.
Collectively, they are known as clean coal technology (CCT).
CARBON CAPTURE AND STORAGE
Despite the improving efficiency of coal-fired power stations, CO2 emissions remain a problem.
Carbon capture and storage (CCS) involves capturing the carbon dioxide, preventing the greenhouse gas entering the atmosphere, and storing it deep underground.
OPTIONS FOR CARBON CAPTURE AND STORAGE
1. CO2 pumped into disused coal fields displaces methane which can be used as fuel
2. CO2 can be pumped into and stored safely in saline aquifers
3. CO2 pumped into oil fields helps maintain pressure, making extraction easier
A range of approaches to CCS have been developed and proved technically feasible. They have yet to be made available on a large-scale commercial basis because of the costs involved.
Coal arriving at a power plant contains mineral content that needs to be removed before it is burnt. A number of processes are available to remove unwanted matter and make the coal burn more efficiently.
Coal washing involves grinding the coal into smaller pieces and passing it through a process called gravity separation.
One technique involves feeding the coal into barrels containing a fluid that has a density which causes the coal to float, while unwanted material sinks and is removed from the fuel mix. The coal is then pulverised and prepared for burning.
Coal gasification plants are favoured by some because they are flexible and have high levels of efficiency. The gas can be used to power electricity generators, or it can be used elsewhere, for example in transport or the chemical industry.
INTEGRATED COAL GASIFICATION COMBINED CYCLE PLANT
1. Coal gasified to produce syngas
2. Syngas burnt in combustor
3. Hot gas drives gas turbines
4. Cooling gas heats water
5. Steam drives steam turbines
In Integrated Gasification Combined Cycle (IGCC) systems, coal is not combusted directly but reacts with oxygen and steam to form a "syngas" (primarily hydrogen). After being cleaned, it is burned in a gas turbine to generate electricity and to produce steam to power a steam turbine.
Coal gasification plants are seen as a primary component of a zero-emissions system. However, the technology remains unproven on a widespread commercial scale.
Burning coal produces a range of pollutants that harm the environment: sulphur dioxide (acid rain), nitrogen oxides (ground-level ozone) and particulates (respiratory problems).
There are a number of options to reduce these emissions:
Sulphur dioxide (SO2)
Flue gas desulphurisation (FGD) systems are used to remove sulphur dioxide. "Wet scrubbers" are the most widespread method and can be up to 99% effective.
A mixture of limestone and water is sprayed over the flue gas and this mixture reacts with the SO2 to form gypsum (a calcium sulphate), which is removed and used in the construction industry.
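As a rough illustration of the scale involved (my own stoichiometric sketch; the molar masses are standard values, but the one-tonne basis is arbitrary), the overall scrubbing reaction CaCO3 + SO2 + 1/2 O2 + 2 H2O -> CaSO4.2H2O + CO2 fixes how much limestone is consumed, and gypsum produced, per tonne of SO2 removed:

```python
# Rough stoichiometric mass balance for wet limestone scrubbing:
#   CaCO3 + SO2 + 1/2 O2 + 2 H2O -> CaSO4.2H2O (gypsum) + CO2
# One mole of limestone removes one mole of SO2 and yields one mole of gypsum.
M_CaCO3 = 100.09   # g/mol, limestone
M_SO2 = 64.07      # g/mol
M_gypsum = 172.17  # g/mol, CaSO4.2H2O

so2_removed_t = 1.0  # tonnes of SO2 captured (arbitrary basis)
limestone_needed_t = so2_removed_t * M_CaCO3 / M_SO2
gypsum_produced_t = so2_removed_t * M_gypsum / M_SO2

print(round(limestone_needed_t, 2))  # ~1.56 t limestone per tonne of SO2
print(round(gypsum_produced_t, 2))   # ~2.69 t gypsum per tonne of SO2
```

So each tonne of SO2 scrubbed consumes roughly one and a half tonnes of limestone and leaves well over two tonnes of gypsum, which is why finding a construction-industry outlet for the by-product matters.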
Nitrogen oxides (NOx)
NOx reduction methods include the use of "low NOx burners". These specially designed burners restrict the amount of oxygen available in the hottest part of the combustion chamber where the coal is burned. This minimises the formation of the gas and requires less post-combustion treatment.
Particulates
Electrostatic precipitators can remove more than 99% of particulates from the flue gas. The system works by creating an electrical field that charges the particles, which are then attracted to collection plates. Other removal methods include fabric filters and wet particulate scrubbers.
Science- Antiviral Drug used to prevent not treat HIV
Updated: 16 May 2012
Antiviral drug backed for use in HIV prevention
17:22 11 May 2012 by Andy Coghlan
We may not yet have a vaccine against HIV, but an antiviral drug trade-named Truvada looks likely to become the next best thing.
Available since 2004 and now one of the world's most widely prescribed antiviral treatments for HIV, Truvada is a combination of two antiviral drugs, tenofovir and emtricitabine, made by the biopharmaceutical company Gilead of Foster City, California.
So far, it has been prescribed exclusively to people already infected with HIV.
Yesterday, in a landmark decision, an expert committee at the US Food and Drug Administration (FDA) recommended for the first time that Truvada be offered daily to uninfected people to prevent them from catching the virus.
The FDA will make a final decision on whether to approve Truvada for "pre-exposure prophylaxis" by 15 June. The FDA usually accepts and ratifies the advice of its expert committees.
Prevention not treatment
The first beneficiaries are likely to be homosexual men and uninfected partners in couples where one of the pair already has the virus.
According to Gilead, experts on the committee voted 19 to 3 in favour of approval for the drug to be offered to uninfected men who have sex with men.
The committee voted 19 to 2 in support of Truvada for the uninfected partner within a couple, and 12 to 8 for its use in "other individuals at risk for acquiring HIV through sexual activity".
If approved by the FDA, the availability of Truvada for prevention rather than treatment would open a new front in the battle to stop HIV spreading, sustaining momentum for prevention as a major tool to combat the epidemic.
"It's potentially a valuable addition to the existing HIV prevention methods, and we welcome it," said a spokeswoman for the World Health Organization (WHO) in Geneva, Switzerland.
"Full FDA approval will encourage countries needing additional prevention methods to undertake their own regulatory approval processes.
The WHO is in the process of producing new guidance to countries on pre-exposure prophylaxis, and plans to release this in the coming two months."
Among other prevention methods, condoms and safe sex have been recommended throughout the 30-year epidemic.
In 2005, male circumcision emerged as a powerful preventive, reducing the risk of infection by 60 per cent.
More recently, the preventive potential of giving Truvada or other antiretroviral drugs, either in gels or as a pill, has come to the fore, with several trials complete or under way.
Results driving the committee's recommendations yesterday include a trial in men who have sex with men published in 2010, which showed that taking Truvada reduced the risk of infection by 44 per cent, and the "Partners-PrEP" trial in Kenya and Uganda to see if Truvada could prevent infection spreading within couples where one but not the other carries the virus.
It found that Truvada given to the uninfected partner reduced their risk of infection by 73 per cent.
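To put those relative risk reductions in concrete terms, consider a hypothetical cohort. This is my own illustration: the baseline incidence below is an assumption chosen for the example, not a figure from either trial.

```python
# What the trials' relative risk reductions would mean for a hypothetical
# cohort. The 30-per-1000 baseline incidence is assumed for illustration.
baseline_per_1000 = 30   # assumed annual infections per 1000 untreated people

def remaining_infections(baseline, relative_risk_reduction):
    """Infections left after applying a relative risk reduction."""
    return baseline * (1 - relative_risk_reduction)

msm_trial = remaining_infections(baseline_per_1000, 0.44)       # 44% reduction (MSM trial)
partners_trial = remaining_infections(baseline_per_1000, 0.73)  # 73% reduction (Partners-PrEP)

print(round(msm_trial, 1), round(partners_trial, 1))  # 16.8 8.1
```

Under this assumed baseline, the 44 per cent reduction prevents about 13 of 30 infections per 1000 people per year, and the 73 per cent reduction prevents about 22 of them.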
Last month, the WHO recommended that infected partners in couples should receive antiretroviral drugs immediately, but the prospect of giving the drugs to uninfected individuals could vastly scale up the battle to stop HIV spreading.
A possible downside of offering Truvada to uninfected individuals is that recipients may already carry the virus without knowing, which could allow the virus to develop resistance.
Accurate testing of potential recipients beforehand is essential.
Also, as with all possible preventive treatments including male circumcision, there is a worry that some people will believe themselves immune to infection and engage in unprotected or unsafe sex.
Science- New Protein that keeps you cool could help you to lose weight
Updated: 16 May 2012
Thermostat protein could help burn off the flab
13:39 11 May 2012 by Andy Coghlan
Turning the body's brown fat into a furnace fuelled by unwanted flab might provide a new way to lose weight.
The main role of brown fat is to burn just enough fuel to keep body temperature constant.
This suggests that working out how the body controls the brown fat thermostat could lead to new drugs that order it to burn more energy than usual, gradually consuming stores of the unwanted white fat that leads to obesity.
Recent research suggests the thermostat might be a protein called bone morphogenetic protein 8B, or bmp8B.
Mice kept at a chilly 5 °C make about 140 times more bmp8B than mice at room temperature.
Now, Andrew Whittle of the University of Cambridge and colleagues have confirmed the hunch, using mice unable to make the protein.
These mice became obese even when fed a normal diet.
They grew even larger when given a high-fat diet.
Whittle's team found that mice make bmp8B in a part of the brain called the ventromedial hypothalamus, and inside brown fat itself.
The protein seems to work by increasing nerve signals to brown fat from the brain, and by making the fat cells more attentive to the signals so they burn more energy than normal.
The researchers discovered that lab-grown brown fat cells could be made to burn more energy than usual by treating them with bmp8B. What's more, mice given extra bmp8B through infusions into the brain lost weight.
"It was priming the fat cells to be stimulated by the nervous system," says Whittle.
The research may ultimately lead to a "slow-burn" drug that very subtly steps up energy consumption by brown fat.
Whittle says that the mice unable to make bmp8B burned only 2 joules of energy per minute less than normal mice, but this constant energy under-burn rapidly led to obesity.
The aim of a drug would be to have an equal but opposite effect, tricking brown fat into burning slightly more than usual.
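A back-of-envelope calculation shows why such a tiny imbalance matters for an animal the size of a mouse. The 2 joules per minute figure comes from the study; the fat energy density and mouse body mass below are rough assumptions of mine, and the sums assume the entire surplus is stored as fat:

```python
# Back-of-envelope: how fast a constant 2 J/min energy under-burn could
# add up for a mouse. Fat energy density and body mass are assumptions.
under_burn_j_per_min = 2.0
j_per_day = under_burn_j_per_min * 60 * 24        # 2880 J/day surplus
fat_energy_j_per_g = 38_000                       # ~38 kJ per gram of fat (assumed)
mouse_mass_g = 25.0                               # typical lab mouse (assumed)

fat_gain_g_per_day = j_per_day / fat_energy_j_per_g
days = 90
gain = fat_gain_g_per_day * days
print(round(gain, 1), "g in", days, "days")                # ~6.8 g
print(round(100 * gain / mouse_mass_g), "% of body mass")  # ~27 %
```

A deficit too small to notice minute by minute still piles on fat worth roughly a quarter of the animal's body mass within three months, which is consistent with the mice becoming obese on a normal diet.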
Journal reference: Cell, DOI 10.1016/j.cell.2012.02.066
Science- Sweating it out and saving on the perfume
Updated: 16 May 2012
Eau de BO: The allure of sweat
15 May 2012 by Mairi Macleod
Is the perfume industry looking for fragrances in the wrong place? The most seductive scents might come from ourselves
IN ELIZABETHAN England, it was common practice for a maiden to peel an apple, place a slice in her armpit to absorb the smell and then present it to a potential suitor as a memento.
Traditional Balkan dancing follows a similar principle. In an activity akin to Morris dancing, but with added odour, men put handkerchiefs in their armpits, work up a sweat by dancing hard and then wave their hankies under the noses of young females.
Throughout history and across cultures, body odour has played a key role in attraction, just as it does with many other animals.
Yet modern societies tend not to appreciate nature's perfume.
Many of us go to considerable lengths to expunge our personal smells and replace them with ones we consider to be more appealing.
Instead of apples in our armpits, we have deodorants and perfumes that are marketed as smelling of innocence, vivacity, sophistication or whatever attributes we believe will make us more alluring.
Is the multibillion-dollar fragrance industry missing a trick?
As we discover which elements of body odour are attractive and to whom, the commercial potential of these chemicals is becoming increasingly apparent.
Most people don't want to smell of sweat, but it can only be a matter of time before some components of our natural perfumes are bottled.
You might think of yourself as a primarily visual animal, relying little on your sense of smell, but in recent years the idea that olfactory communication is not important in humans has been challenged.
In fact, we possess more apocrine sweat glands than other apes.
These are concentrated in the armpits along with springy hair to promote bacterial growth, which helps create body odour, and that has led to humans being labelled the "scented ape".
Our sense of smell is also more sensitive and discerning than it was thought to be, especially when it comes to sniffing out information about other people.
Women's noses tend to be more sensitive than men's, but each sex is particularly adept at decoding the messages contained in the odours of the other.
Blind tests reveal that a person's smell gives an indication of their sex, age, diet and some aspects of their health. It has even been claimed that we can smell fear and anxiety (New Scientist, 17 September 2011, p 44).
More relevant for perfumers is the recent discovery that our natural smells communicate information about our personalities. Agnieszka Sorokowska at the University of Wroclaw in Poland and her colleagues got 60 men and women to complete personality tests and then to wear T-shirts in bed for three nights.
During this time, they were asked to sleep alone and to avoid smoking, using scented products or consuming smelly foods and alcohol.
A different group of 200 men and women then sniffed the T-shirts and rated their wearers on various character traits.
Their assessments were most astute when it came to judging levels of neuroticism, extroversion and dominance (European Journal of Personality, DOI: 10.1002/per.848).
"Neuroticism and extroversion are very emotional traits and might change sweating rates and the composition of bacteria in the armpits, thus changing how a person smells," says Sorokowska.
Dominance, she adds, is associated with higher levels of some hormones with metabolites that could influence body odour.
People seem to have preferences when it comes to these scents.
Women tend to prefer the smell of dominant men and are particularly attracted to the smell of dominance during the most fertile stage of their menstrual cycle, according to research by Craig Roberts of the University of Stirling, UK, and Jan Havlíček of Charles University in Prague, Czech Republic (Biology Letters, vol 1, p 256).
This could be a useful predilection, says Roberts, because a dominant man may provide more resources for his partner and offspring.
Dominance might also be linked with higher levels of testosterone, which is thought to indicate genetic quality, say the researchers.
Women also tend to prefer the smell of men who have more symmetrical bodies (Evolution and Human Behavior, vol 20, p 175).
High body and facial symmetry is thought to indicate an ability to withstand environmental stresses, such as infection or toxins, also a sign of genetic quality.
Essence of dominance
Men, in turn, have preferences when it comes to women's odours. Havlíček and his colleagues found that women tend to smell least appealing to men during menstruation and most attractive when they are ovulating (Ethology, vol 112, p 81).
"The changes in smell are quite subtle and the variance [in odour attractiveness] between individual women is much greater than within individuals," says Havlíček.
Nevertheless, it seems that men can smell women's fertility to some extent, at least subconsciously, and they like it.
Roberts sees potential for perfume manufacturers to cash in here.
For instance, the compounds responsible for making women smell more attractive during their fertile phase could be included in fragrances.
Likewise, the chemicals that make symmetrical men smell good could become ingredients in aftershave.
These compounds indicating fertility and symmetry haven't yet been identified, but we may be close to pinning down the essence of dominance.
In mice, dominant males produce high levels of androstenes, which are breakdown products of androgen steroids, the group of hormones to which testosterone belongs.
Several studies suggest that these compounds can be attractive to women.
In a speed-dating experiment, for example, Roberts and Tamsin Saxton of the University of Abertay Dundee, UK, found that women who had androstene dabbed on their top lip rated a given man more highly than those who received a water control or clove oil, which blocks out the smell of androstenes (Hormones and Behavior, vol 54, p 597).
In Patrick Süskind's 1985 novel Perfume, the main character goes to extreme and murderous lengths to create the ultimate fragrance, one that captures the sublime beauty of the human soul such that its wearer will be loved by all.
Unfortunately, in real life that fantasy falls at the first hurdle. It turns out that the scent of androstenes is not universally appealing.
Not all women fall for the androstene trick: some actually find these chemicals unpleasant and a minority cannot smell them at all.
Preferences for some other aspects of body odour are even more idiosyncratic.
Differences in taste are particularly strong when it comes to the major histocompatibility complex (MHC), molecules involved in immune system functioning.
People prefer the smell of those of the opposite sex whose MHC genes are different from their own (New Scientist, 10 February 2001, p 36).
"In our evolutionary past, humans would have lived in small groups where the risk of inbreeding was high, so a method of distinguishing the most dissimilar mates would have been useful," says Claus Wedekind at the University of Lausanne, Switzerland, who made the discovery (Proceedings of the Royal Society B, vol 260, p 245).
Another possibility, he says, is that children of parents with differing MHC will have immune systems capable of fighting a wider range of pathogens.
The exact mechanism through which MHC affects our body odour remains unclear. Wedekind thinks it could be down to the influence of the skin's MHC on the bacterial community that can thrive there, which in turn affects the smelly substances that are produced.
Another possibility is that the smells come from peptide ligands, the business end of the protein molecules involved in the MHC-mediated immune response.
It is a sign of the commercial potential of this work that there are already two patents covering the use of these peptides in the customisation of scent (WO/2003/090705 and WO/2001/081374).
But even if we can pinpoint the chemicals associated with our MHC-based odours, the perfect bespoke perfume is unlikely to rely on them alone, says August Hämmerli of the Swiss Federal Institute of Technology in Zurich.
"An MHC-based perfume will not necessarily make the wearer more attractive," he says.
It might increase your chances of finding a compatible mate, but at the price of making you smell unappealing to others.
Wedekind suspects that this may be exactly what we are trying to avoid when we wash and use perfumes.
"In one of our experiments, we presented the same six [body] odours to 100 people and each of the odours was excellent to some and very bad to others, so maybe we just don't want to risk smelling bad to some," he says.
But there is another way in which MHC might help perfumers to tailor their products. MHC genotype affects preferences for odours other than body smells.
In 2001, when Wedekind was still at the University of Bern in Switzerland, he and his colleague Manfred Milinski published results from a study in which they asked 137 men and women for their views on 36 perfume ingredients.
They found that people's preferences for scents they would choose for themselves were linked with their MHC genotypes (Behavioral Ecology, vol 12, p 140).
Last year, Hämmerli and his colleagues extended this research. Using the same set of perfume ingredients and genetic markers, they found that MHC-based preferences were stronger when presented in the context of sexual communication than they were in a neutral context.
They suggested that instead of simply targeting perfumes crudely to groups based on factors such as sex, age and income, manufacturers could customise and sell scents according to MHC preferences (International Journal of Cosmetic Science, vol 34, p 161).
"Our work is offering an applied starting point of how an MHC-based perfume could actually be created," says Hämmerli.
Although MHC influences the perfumes we prefer for ourselves, Wedekind and Milinski found that it had no bearing on the scents people chose for their partner.
This suggests that MHC does not directly affect how particular chemicals smell. Instead, it seems, participants in the experiment were subconsciously choosing scents that complemented their natural body odour rather than masking it.
"The message is clear," says Havlíček.
"Don't buy perfume for your lover, let them choose it themselves."
More evidence that we do this comes from research by a group including Havlíček, his Charles University colleague Pavlina Lenochova and Roberts.
They asked people to rate the smell of others first without and then with a perfume they supplied.
"We found that people smell more pleasant and attractive when they use perfume, but it improved the smell of some more than others. Some actually smelled worse wearing perfume," says Havlíček.
This was clearly due to interactions between the perfume and the wearer's own smell, because the perfumes that smellers ranked most pleasant from the bottle were not necessarily the ones they liked best when people wore them.
What's more, subjects were rated as smelling better when wearing their own perfume than an assigned one, even when the latter had previously been judged as being more pleasant, again suggesting that people are able to choose scents to complement their own body odour (PLoS One, vol 7, p e33810).
This may make things more difficult for perfume manufacturers trying to cash in on any insights into body odour and smell preferences.
"It's not as simple as identifying and including particular compounds in perfume," says Havlíček.
"My guess is that there are mixtures of compounds interacting with each other, and that concentrations and ratios are also important."
Perfumers are already aware that the attractiveness of a scent cannot be assessed simply by sniffing the bottle. "Many factors affect the behaviour [of perfume] on human skin," says Matthijs van de Waal, who is based in Geneva, Switzerland, and has been creating scents for 45 years.
"The most relevant, according to my observations, is diet."
Besides, he says, although international perfume houses like to take a global approach with their leading brands, they recognise that there are cultural differences and they already produce fragrances that are targeted to meet different regional preferences.
Customisation and niche marketing look like the future as perfumers incorporate these findings into fragrance production.
It is surely only a matter of time until personalised perfumes become as familiar as personalised medicine.
Ashamed of our own scent?
The inclination to hide our natural body odours is very strong.
One study found that 79 per cent of women and 60 per cent of men reported using a deodorant every day, while 44 per cent of women also used perfume on a daily basis (Review of General Psychology, vol 14, p 318).
These artificial scents can have a profound influence on our behaviour and the way others see us.
In one study, a team led by Takahiro Higuchi of Tokyo Metropolitan University in Japan filmed a woman interviewing a series of other women, half of whom applied perfume in the middle of the interview.
The performances were then rated by another group of women.
Although unaware of the perfume application, they noticed that the interviewees who had used scent were less likely to shift in their seats, fiddle with their hair or display other signs of nervousness.
Overall, they rated them as more confident (International Journal of Psychology, vol 40, p 90).
In another experiment, Craig Roberts at the University of Stirling, UK, and colleagues allocated 35 men either a fully formulated deodorant or an odourless placebo and told them to avoid other fragrances.
"There was no difference between the groups of men in how attractive they were, as rated by an independent panel of women judges from photos," says Roberts.
After three days, however, the confidence of the men using the placebo plummeted and they considered themselves to be less attractive.
Women also thought them less attractive on the basis of only a video of the men talking about their holidays (International Journal of Cosmetic Science, vol 31, p 47).
Mairi Macleod is a freelance journalist based in Edinburgh, UK
Science- 1.3bn people rely on forests to survive
Updated: 16 May 2012
1.3 billion people rely on forests to survive
17:30 15 May 2012 by Michael Marshall
Hands off our forest! The UN has adopted a series of voluntary guidelines to protect indigenous peoples' rights to the land on which they live.
And not before time, as a recent report suggests hundreds of millions could be evicted by modern-day land grabs.
Many indigenous peoples have lived in the same place for centuries, but they do not have legal tenure.
Forest peoples, who live in the rainforests of South America, Africa and south-east Asia, are particularly vulnerable.
Their forests are often sold or leased to companies or foreign countries for farming, logging or mining.
The local people are usually evicted.
The sheer scale of the problem is highlighted in a report from the Forest Peoples Programme in the UK.
The FPP estimates that roughly 1.3 billion people – more than one-seventh of the global population – are directly dependent on forests.
At least 350 million could lose their homes in land grabs, says Sophie Chao of the FPP, because their rights to the land are not recognised under national law.
Many of the forests that people occupy are state property and can be sold or leased without consulting the inhabitants.
The UN Food and Agriculture Organization's guidelines encourage governments to recognise and protect indigenous peoples' rights to their land.
FAO director general José Graziano da Silva calls the agreement a "historic breakthrough".
Science- Human Nature: Six Things we all do
Updated: 09 May 2012
Human nature: Six things we all do
WHAT sort of creature is the human?
The obvious answer is a smart, talkative, upright ape with a penchant for material possessions.
But what about the more subtle concept of human nature?
That is more controversial. Some deny it exists, preferring to believe that we can be anything we want to be.
They cannot be right.
Although we exhibit lots of individual and cultural variations, humans are animals, and like all animals we have idiosyncrasies, quirks and characteristics that distinguish us as a species.
An invading alien would have no trouble categorising us but, being so close to our subject matter, we struggle to pin down the essence of humanness.
Nevertheless, the task may not be beyond us.
Anthropologists have identified many “human universals” – characteristics shared by all people everywhere, which constitute a sort of parts list of our species.
What if we were to use these to examine the human animal in the same way we would study any other?
As the following articles reveal, what emerges is a suite of characteristics that encapsulate our nature – and a rather peculiar one it proves to be.
If you thought you knew what humans were like, then think again.
Human nature: Being playful
Humans are not nature's only funsters. All mammals play, as do some birds and a few other animals.
But no other species pursues such a wide variety of entertainment or spends so much time enjoying themselves.
The list of universals includes such diverse extracurricular pleasures as sports, music, games, joking, hospitality, hairdressing, dancing, art and tickling.
What sets us apart is the fact that we play with objects and with language, says Clive Wynne at the University of Florida.
We can also go beyond the literal.
"What revolutionises human play is imagination," says Francis Steen at the University of California.
"We're a playful species," says primatologist Frans de Waal at Emory University in Atlanta, and we retain our juvenile sense of fun right into adulthood.
The only other primate to do that is the bonobo, perhaps …
Human nature: Being scientific
From earliest infancy, humans are constantly sorting the world into categories, predicting how things work, and testing those predictions.
Such thinking, which is the essence of science, is evident in a range of human universals from time, calendars and cosmology to family names and measuring.
"Science is basically working at understanding the world around us," says Edward Wasserman at the University of Iowa.
And it is not confined to humans - all animals need scientific thinking to survive.
"It's in our job description," he says. Pigeons, for example, can learn to discriminate between cars and chairs (Journal of Experimental Psychology: Animal Behavior Processes, vol 14, p 235).
Dogs can associate the sound of a bell with food, and when chimps try to extract a nut from a tube, they are performing a simple experiment …
Human nature: Being legislative
The question of whether every human society has formal laws is far from settled, but they do all have rules.
This is a peculiarly human trait.
Our closest relatives, the chimps, may stick to simple behavioural rules governing things like territories and dominance hierarchies, but we humans, with our language skills and greater brainpower, have developed much more elaborate systems of rules, taboos and etiquette to codify behaviour.
Though every society has different rules, they always involve regulating activity in three key areas - a sure sign that these are fundamental to human nature.
For a start, we are all obsessed with kinship, which brings rights, in particular to inheritance of goods and status.
"There are always rules about who counts as kin, and what obligations you have to kinfolk," says Robin Fox at Rutgers University New ...Cont....
Human nature: Being epicurean
Compared with other animals, the feeding behaviour of humans is exceedingly odd.
Where they just eat, we make a meal of it.
The main difference is down to one of humanity's greatest inventions: cooking.
People in every culture cook at least some of their food, says Richard Wrangham at Harvard University.
He has made a persuasive case that cooked food, which delivers more calories with much less chewing than raw food, was the key innovation that enabled our ancestors to evolve big energy-hungry brains and become the smart, social creatures we are today (New Scientist, 16 July 2010, p 12).
Chimps spend at least 6 hours a day chewing, he notes; humans spend less than 1.
That leaves a lot of free time for culture.
Culinary culture includes the strange phenomenon of ritualised, familial food-sharing, otherwise known as mealtimes. Chimps …
Human nature: Being clandestine
Nothing reveals an animal's nature quite as well as its sexual practices, and humans certainly have some strange ones - even from a biological point of view.
Women are continually receptive and have concealed ovulation - that is, there is no external sign that they are in a position to conceive.
We are the only monogamous primate to live in large mixed-sex groups - more about these later.
But surely nothing is quite as puzzling as our predilection for clandestine copulation.
Why do humans have sex in private?
This coyness is not just the consequence of particular cultural or moral views.
"It is the rule across all kinds of human societies," says cultural anthropologist Frank Marlowe of the University of Cambridge.
There is the odd case of public ritual sex, such as orgies among the Canela of Brazil. But where there is no alcohol - as would have been the …
Human nature: Being gossipy
Language was once thought to be the defining characteristic of humans.
These days we are more likely to consider it as part of a continuum of animal communication.
Nevertheless, nobody doubts that it has shaped our nature profoundly.
Language is central to human universals ranging from education, folklore and prophesy to medicine, trade and insults. Arguably, our way with words reaches its apogee in gossip.
A compulsion to talk about other people is only human.
And it is not nearly as frivolous as you might think.
Some anthropologists believe we gossip to manipulate the behaviour of others, which may help explain why gossip often takes place within earshot of the person being gossiped about.
Among the Kung Bushmen of Africa, for example, that is the case 70 per cent of the time, says Polly Wiessner of the University of Utah. …
Science- Save the Bees - from toxic chemicals
Updated: 28 Apr 2012
24 hours to save the bees
Friday, 27 April, 2012 6:47
Alice Jay, Avaaz.org
Pesticides are killing bees and threatening our food supply. In 24 hours, shareholders at the biggest chemical producer, Bayer, could vote to stop their toxic production.
Massive public pressure has forced this debate at their Annual General Meeting, now let’s make sure they vote to stop the pesticides and save the bees. Sign the emergency petition:
Quietly, globally, billions of bees are dying, threatening our crops and food. But if Bayer stops selling one group of pesticides, we could save bees from extinction.
Four European countries have begun banning these poisons, and some bee populations are already recovering. But Bayer, the largest producer of neonicotinoids, has lobbied hard to keep them on the market.
Now, massive global pressure from Avaaz and others has forced them to consider the facts -- and in 24 hours, Bayer shareholders will vote on a motion that could stop these toxic chemicals. Let’s all act now and shame the shareholders to stop killing bees.
The pressure is working, and this is our best chance to save the bees. Sign the urgent petition and send this to everyone -- let's reach half a million signers and deliver it directly to shareholders tomorrow in Germany!
Bees don't just make honey, they are vital to life on earth, every year pollinating 90% of plants and crops -- with an estimated $40bn value, over one-third of the food supply in many countries. Without immediate action to save bees, many of our favourite fruits, vegetables, and nuts could vanish from our shelves.
Recent years have seen a steep and disturbing global decline in bee populations -- some bee species are already extinct and some US species are at just 4% of their previous numbers.
Scientists have been scrambling for answers.
Some studies claim the decline may be due to a combination of factors including disease, habitat loss and toxic chemicals.
But increasingly, independent research has produced strong evidence blaming neonicotinoid pesticides. France, Italy, Slovenia and even Germany, where the main manufacturer Bayer is based, have banned one of these bee killers. But, Bayer continues to export its poison across the world.
This issue is now coming to the boil as major new studies have confirmed the scale of this problem. If we can get Bayer shareholders to act, we could shut down once and for all Bayer’s influence on policy-makers and scientists.
The real experts -- the beekeepers and farmers -- want these deadly pesticides prohibited until and unless we have solid, independent studies that show they are safe. Let's support them now. Sign the urgent petition to Bayer shareholders now, then forward this email:
We can no longer leave our delicate food chain in the hands of research run by the chemical companies and the regulators that are in their pockets.
Banning this pesticide will move us closer to a world safe for ourselves and the other species we care about and depend on.
Alice, Antonia, Mia, Luis, Ricken, Stephanie, Pascal, Iain, Ari and the whole Avaaz team
Science- Software to spot fake online reviews
Updated: 17 Apr 2012
Paul Marks, senior technology correspondent
A couple of fake reviews probably won't kill (or unfairly promote) somebody's product/hotel/restaurant - but sustained postings of fake reviews by large groups of colluding opinion spammers might succeed.
To the rescue come software engineers from the University of Illinois in Chicago and Google - who this week reveal a software algorithm that can pinpoint review threads in which groups of fraudsters are trying to wrest control of online sentiment.
In a paper to be delivered at this week's World Wide Web 2012 conference in Lyon, France, Illinois researchers Arjun Mukherjee and Bing Liu, alongside Google's Natalie Glance, found the task easier than they expected thanks to the spammers' mob-handed behaviour.
They hired eight online review experts from e-commerce sites eBay and Rediff and got them to assess the "spamicity" of 2400 English-language reviews, rating each as "spam", "borderline spam" or "non spam".
Unlike previous teams, who failed to make headway on this problem, they did not use Amazon's crowdsourcing marketplace, Mechanical Turk, to have the general public assess the reviews - preferring paid experts with a keen eye for a fake, to improve accuracy.
They then trained software to spot the differences between how spammers and genuine writers write their reviews. Signature giveaways often included timing: spamming groups often file their "reviews" in quick bursts, the researchers say.
And as the spammers are often briefed by a contracting agency working for a rival (for bad reviews) or the product maker/hotel/restaurant (for good reviews) each cod reviewer falls into the trap of using very similar language.
Such behaviour meant the groups who colluded to kill or hype products or businesses stuck out like sore thumbs. "Although labelling individual fake reviews and reviewers is very hard, to our surprise labelling fake reviewer groups is much easier," they write in their paper.
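The two giveaways described above - reviews arriving in a quick burst, and near-identical wording across a group - lend themselves to a simple illustration. The sketch below is a hypothetical toy version, not the researchers' actual algorithm; the function names, thresholds and word-overlap measure are all illustrative assumptions.

```python
# Toy illustration of two group-spam signals: burst timing and
# shared wording. Names and thresholds are invented for this sketch.
from itertools import combinations

def jaccard(a: str, b: str) -> float:
    """Word-set overlap between two reviews (0 = disjoint, 1 = identical)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

def looks_like_spam_group(reviews, max_window_hours=48, min_similarity=0.5):
    """Flag a group whose reviews arrive in one quick burst AND share
    very similar wording. `reviews` is a list of (timestamp_hours, text)."""
    times = [t for t, _ in reviews]
    bursty = max(times) - min(times) <= max_window_hours
    sims = [jaccard(x, y) for (_, x), (_, y) in combinations(reviews, 2)]
    similar = bool(sims) and min(sims) >= min_similarity
    return bursty and similar

# A burst of near-identical five-star reviews trips both signals.
fake = [(0, "amazing hotel great staff great view"),
        (3, "amazing hotel great view friendly staff"),
        (5, "great staff amazing view great hotel")]
print(looks_like_spam_group(fake))  # True
```

In practice the paper models many more behavioural features, but even this crude pair of signals suggests why colluding groups stick out while a single fake review does not: a lone review triggers neither test.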
The work suggests that such fakes can be more easily deleted by moderators (or automatically) in future.
However, they found that opinion spam is growing more slowly on sites where the hosts moderate reviews posted by first-time reviewers - offering another weapon that sites can wield against fraud.
Science- More than Chicken Soup to support your immune system
Updated: 17 Apr 2012
Immune retune: Eat yourself strong
10:15 10 April 2012 by Jessica Hamzelou
From an old-fashioned faith in the healing powers of chicken soup to more modern obsessions with so-called superfoods, we like to think some things we eat can help ward off infections.
The vast majority of these beliefs have little evidence to back them up, but there are dietary interventions that appear to work.
Numerous supplements are sold on the basis of supposed immune-boosting powers, but their health claims usually stem from tests done on cells in the lab.
That is just the first stage of gathering evidence, though; the only way to know for sure if something will work is a randomised, controlled trial done on people, preferably several trials.
By that measure, zinc supplements probably come out best, with evidence they can both prevent colds and shorten their duration if started within 24 hours of the symptoms first appearing (Cochrane Database of Systematic Reviews, DOI: 10.1002/14651858.CD001364.pub3).
Zinc may work by stopping the cold virus from replicating or preventing it from gaining entry to cells lining the airways.
That old favourite vitamin C doesn't seem to prevent colds, although as a treatment it might reduce symptoms slightly. The only other supplement with any credibility is echinacea, an extract of the purple coneflower – although again only as treatment, not prevention, and even then the evidence is mixed.
In theory, vitamin C boosts immune cell activity, so why does it perform so poorly in practice?
It seems that while vitamin supplements help people who are malnourished avoid diseases caused by vitamin deficiency, such as scurvy, there is no extra benefit to exceeding the recommended levels, which most people in the west hit anyway. In fact, popping vitamin pills – including vitamin C – may even be harmful overall (New Scientist, 5 August 2006, p 40).
If you really want to support your immune system the best approach is simply to eat a plentiful supply of fruit and vegetables.
They contain not just vitamins but thousands of other compounds called phytochemicals, which have numerous beneficial effects we are only just starting to understand.
It is also important to focus on the quantity of food, not just its quality.
People who are obese are more likely to get a range of infections, including respiratory, skin and urinary ones (The Lancet Infectious Diseases, vol 6, p 438).
Piling on the pounds not only makes it harder to breathe, which predisposes people to colds and flu, but the excess fat releases chemical signals that interfere with immune functioning.
Think carefully about how you shed the pounds, though, because yo-yo dieting is also harmful.
Frequent cycles of weight loss and regain seem to reduce the performance of natural killer cells, an important branch of the immune system that targets cancerous cells and those infected with viruses (Journal of the American Dietetic Association, vol 104, p 903).
Jessica Hamzelou is a science writer based in London
Science- Why was the Sumatra quake so large ?
Updated: 12 Apr 2012
Why was the Sumatra quake so large?
17:42 11 April 2012 by Catherine Brahic
The most puzzling question about today's 8.6 earthquake off the coast of Indonesia is how it got to be so big.
Most large earthquakes that occur in oceans take place at subduction zones, where one tectonic plate is thrust beneath another.
Today's quake, however, occurred on a vertical crack through the ocean crust 400 kilometres west of Sumatra.
This is some distance from the Sunda trench, the nearest subduction zone (see map). Vertical cracks generally don't generate this much energy.
"It is a remarkable event, scientifically," says geophysicist John McCloskey at the University of Ulster in Coleraine, UK.
McCloskey is an expert in the seismology of the Indian Ocean.
Together with colleagues, he is currently running calculations to understand what led to the unusual event.
The vertical orientation of the crack also explains why the quake did not generate a tsunami.
Vertical cracks like the one that ripped earlier today form at mid-ocean ridges, where oceanic crust is created.
As newly formed crust moves away from the ridge, it develops a series of cracks, or "transform faults", that run perpendicular to the ridge.
These faults typically become inactive in older oceanic crust that is no longer at the ridge.
This morning's quake happened because one of them became active again.
Earthquakes on transform faults tend to be smaller than the great megathrust events generated in subduction zones.
That's because faults in subduction zones extend diagonally through the brittle crust, and so travel greater distances before running into the viscous mantle beneath.
Vertical transform faults take a shorter route to the mantle (see diagram).
As a result, the energies released when vertical faults slip are generally smaller than subduction quakes.
"I would have to check this but I don't believe I've ever heard of an 8.7 vertical quake," says McCloskey. "We are still looking at very preliminary data but if everything stays the way it is looking now, this is an amazingly large event for this mechanism."
Vertical cracks also tend not to generate tsunamis. In subduction zones, earthquakes happen when mounting pressure causes one plate to suddenly pop up vertically.
Last year's Tohoku quake occurred in a subduction zone and lifted the seafloor by 10 to 20 metres.
In 2004, the 26 December Sumatran earthquake lifted trillions of tonnes of water by around 5 metres when the Sunda subduction fault ruptured.
Earthquakes on vertical cracks occur because two chunks of crust grind past each other horizontally, with little to no vertical movement.
They do not tend to lift much water.
Earlier today a buoy in the Bay of Bengal was reported to have sensed a 30 centimetre wave.
That's exactly what you would expect after an earthquake on a vertical crack, says McCloskey.
The crack that ruptured may have been on a slight diagonal, generating a small vertical motion, or it could have shifted some underwater topographic structure, such as a seamount.
That would have generated a small wave, but nothing lethal.
Why so big?
So why was the earthquake so large? McCloskey and his colleagues are working on a few theories.
Although the modelled epicentre of this morning's earthquake is not in a subduction zone, it is close to one: the Sunda trench.
Faults rip over large areas, so it's possible that the Sunda trench megathrust shifted as well during the event.
It is also possible that the vertical fault ruptured backwards, in the opposite direction, into the Indian Ocean over hundreds of kilometres.
The longer the rupture, the larger the potential energy released.
Either way, although the event is of great scientific interest, preliminary data suggest it is unlikely to have significant human repercussions. A backwards rupture would be perfectly safe, says McCloskey.
There are no islands in that direction.
Even if the event did involve the Sunda trench, he says that is not necessarily cause for concern.
The huge quantities of energy released by the 2004 quake and another that followed in 2005 mean the area "would be reasonably relaxed", he says.
McCloskey and his colleagues are currently running calculations to distinguish between these and other possibilities.
Science- Driverless cars- like Tory Policies going the wrong way up a One-way street?
Updated: 03 Apr 2012
Driverless cars ready to hit our roads
02 April 2012 by Paul Marks
Sceptical about autonomous cars?
Too late. They're already here – and they're smarter than ever
LEAN back, let go of the steering wheel, ease your feet off the pedals and relax: your car is now in charge.
The dream of a car that can drive itself has grown over the last decade as the necessary technologies have gradually proved their worth, but the idea has faced major legal hurdles.
Not for much longer.
Politicians are now scrambling to make self-driving cars a reality.
From Hawaii to Florida, and Oxford to Berlin, the race is on to get driverless cars onto our streets.
Promising improved safety, better fuel-efficiency and freedom from the boredom of long drives, autonomy has been coming piecemeal to our cars for some time - and it has always had its critics.
In 1994, on a UK motorway, Jaguar and Lucas Industries demonstrated the safety of adaptive cruise control and automatic lane keeping; both technologies are now commonplace on our roads.
The media were not impressed, describing the idea of cars that drive themselves as "madness".
But concerns about the safety of autonomous cars are misplaced in a world where 1.2 million people die every year in road accidents due to human error, says Paul Newman, a robotics engineer at the University of Oxford, whose team is developing autonomous cars.
"It's crazy to imagine that we are going to keep driving cars like we do now - that in 10 to 20 years we'll still have to sit behind a wheel, concentrating hard, not falling asleep and not running over people," he says.
This notion now has powerful backers - and barriers are beginning to fall.
In an act that came into force on 1 March, the state of Nevada now allows driverless cars to ply the state's road network provided they sport a special red licence plate and the owners pay a $1 million to $3 million insurance bond. Similar legislation is being considered in California, Arizona, Florida, Hawaii and Oklahoma.
The phenomenon is not confined to the US either. In Germany, a driverless car research team led by Tinosch Ganjineh at the Free University of Berlin has permits to use the abandoned Tempelhof airport for autonomous tests. When necessary, team members get special permits to drive on Berlin's streets, and they hope to drive on the autobahn soon. The Oxford team plan to approach the British government for similar permits.
The Berlin team are automating a VW Passat, patriotically named MadeInGermany, while Oxford is turning a BAE Systems WildCat military jeep into a self-driving machine. Nissan has just joined the Oxford project, so the Leaf all-electric car may end up driverless too.
Driverless cars first appeared in a meaningful way in the US Defence Advanced Research Project Agency's "grand challenges". Cars competed to drive fastest around desert courses in 2004 and 2005, and in an urban setting in 2007.
Mike Montemerlo and Sebastian Thrun of Stanford University, California, whose car won the 2005 prize, lead Google's self-driving car research programme. Their cars, based on the Toyota Prius and Audi TT, typify the approaches of the Oxford and Berlin teams.
All the cars have laser rangefinders, radar and optical cameras to sense the vehicle's changing real-time environment with high accuracy. They know where the traffic lights and road signs are, and which moving objects are animals, people, bikes, motorbikes or trucks.
Newman's team are studying how algorithms can make sense of data streaming from a 3D laser rangefinder and quickly decide whether an object is a car or a pedestrian, for example. His team is also looking at how a robotic visual system can build up a picture of its world and adapt to changing conditions, varying light levels or even seasons. The commercial sensors and software to make this happen are still some way off, though.
"The Velodyne - the 64 spinning lasers on top of most driverless cars - give a quickly updated 360-degree, 3D view of the surroundings up to 40 metres away," says Newman. But cars of the future won't have unwieldy spinning lasers on them, he says.
Ganjineh agrees that driverless technology has to be refined. "The size and price of these systems needs to come down. Today, half a trunk of equipment is needed for autonomous driving," he says.
Another challenge, says Newman, is getting the cars to recognise the precursors to risky events - like sudden bright sun reflections on the road, truck spray, which may blind some sensors, or simply a burst tyre.
Google's cars, meanwhile, tell each other about the roads they have travelled, such as exchanging data on how to negotiate awkward junctions, says Vinton Cerf, a Google technology evangelist. Ganjineh wants similar technology to broadcast GPS map changes car-to-car, when there are roadworks ahead, for example.
However, driverless cars will not need to communicate wirelessly with expensive roadside technology as they need to be "independently smart" and aware of all risks around them at all times, says Newman.
"Automation of cars is going to happen," he says. "Computing has caused devastating change and transport is going to be its next target."
If the going's tough, the car gets cover
Driverless cars could reduce insurance costs, says Paul Newman of the University of Oxford, by allowing the car to add to its own insurance as road conditions change.
"On a dark icy night, when it is riskier to drive, the car could go online and bid for extra insurance cover until conditions change," he says. "If that proves too expensive, because conditions are tough for the autonomous system, the owner could take the wheel."
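The scenario Newman sketches boils down to a simple decision rule. The risk scores, quote and threshold below are invented for illustration; no such insurance API exists yet:

```python
# Minimal sketch of the "bid for extra cover" idea Newman describes.
# All numbers and names here are hypothetical, for illustration only.

def decide(base_risk, conditions_risk, quote_per_trip, max_affordable):
    """Return what the car should do for this trip.

    base_risk        -- insurer's risk score for normal conditions (0-1)
    conditions_risk  -- risk given current weather/road state (0-1)
    quote_per_trip   -- price quoted online for topping up the cover
    max_affordable   -- most the owner will pay for one trip's extra cover
    """
    if conditions_risk <= base_risk:
        return "drive autonomously on existing cover"
    if quote_per_trip <= max_affordable:
        return "buy extra cover and drive autonomously"
    return "hand control to the owner"

# Dark icy night: risk is up, but the top-up quote is affordable.
print(decide(0.1, 0.4, quote_per_trip=3.50, max_affordable=5.00))
```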
Meanwhile, clear standards for programmers and developers of such cars need to be drawn up, says Tinosch Ganjineh at the Free University of Berlin, Germany, as accident liability may fall more often on software or sensor-makers.
Science- Elgin's Passing Gas
Updated: 28 Mar 2012
North Sea gas leak venting from newly disturbed source
18:40 27 March 2012 by Andy Coghlan and Michael Marshall
A major methane gas leak is under way at the Elgin wellhead in the North Sea, 240 kilometres off Aberdeen, UK.
The leak started on 25 March, but according to sources at Total, the company operating the well, the gas is not coming from the gas reservoir itself, but from a newly disturbed source in the rock above.
All 238 personnel have been evacuated.
Many questions remain.
Total says that until it works out the capacity of the source and the rate at which methane and gas condensate are leaking into the environment, it is impossible to say either how much gas will be released or how long it will take to block it, despite some reports putting it at six months.
"We've got geologists working on the productivity of the horizon [reservoir] the leak is coming out of," a spokesman for Total told New Scientist. "We must do some modelling to find out the rate."
Although the main reservoir itself at the base of the drill shaft is safely closed off, he says, the gas from the secondary source in chalk above it is escaping by leaking into the shaft containing the drilling tubes that lead down to the main gas reservoir.
Sealing the leak
One option to halt the leak would be to pump heavy mud down the shaft to stop the gas, but that would require access to the platform, which is currently too dangerous because of the risk of fire.
The optimal solution is a self-sealing event, in which the pressure dips as gas vents, so the leak effectively plugs itself. But the likelihood of this will depend on how much gas is in the chalk, and how chambers are connected by fractures and channels. Total has called in well-control specialists from the US.
The platform was drilling for sour gas: natural gas polluted with hydrogen sulphide and carbon dioxide, which 20 years ago would have been too expensive to extract. "It's gas we started using as a last resort," says Simon Boxall of the National Oceanography Centre in Southampton, UK.
The gas is purified on the platform itself, before being taken to the mainland.
The major threat to the local ecosystem is the hydrogen sulphide, which is toxic to virtually all animal life. "You might as well put Agent Orange in the ocean," Boxall says.
Because the leak is below the water's surface, the hydrogen sulphide is bubbling through the sea water.
This is the worst-case scenario, says Boxall, because it could lead to mass animal and plant deaths.
Boxall says Total needs to monitor the water quality to see if this is happening.
Much of the methane in the water will be consumed by microorganisms and converted to carbon dioxide.
This will make the water slightly more acidic, but the effect will be short-lived and localised, and therefore should not cause too much harm to marine life.
The volatile mix of gases means there is a significant risk of an explosion. As a result, Total have shut down the power supply on the rig, which otherwise might throw a spark.
The Total spokesman told New Scientist that the leaking methane posed no safety risks to neighbouring rigs, the nearest of which is the Shearwater rig owned by Shell, about 7 kilometres away.
All three of the escaping gases are greenhouse gases, but unless the leak carries on for weeks or months, the effect on the climate is likely to be small. "The impact on climate change is not going to worry anyone," says Boxall.
"With wind and atmospheric conditions, the gas disperses and the petrol-like condensate on the surface will disperse quite quickly too," the Total spokesman said.
Science- The future is bright for humanity
Updated: 20 Mar 2012
The future is bright for humanity
05 March 2012
The more optimistic we are about the future of our species the better we can focus on today's challenges
Read more: "100,000 AD: Living in the deep future"
WHATEVER happened to the future? Up until a few decades ago, our visions of the future were largely - though by no means uniformly - glowingly positive. Science and technology would cure all the ills of humanity, leading to lives of fulfilment and opportunity for all.
Now utopia has grown unfashionable, as we have gained a deeper appreciation of the range of threats facing us, from asteroid strike to pandemic flu to climate change.
You might even be tempted to assume that humanity has little future to look forward to.
But such gloominess is misplaced.
The fossil record shows that many species have endured for millions of years - so why shouldn't we?
Take a broader look at our species' place in the universe, and it becomes clear that we have an excellent chance of surviving for tens, if not hundreds, of thousands of years (see "100,000 AD: Living in the deep future").
Look up Homo sapiens in the IUCN's "Red List" of threatened species, and you will read: "Listed as Least Concern as the species is very widely distributed, adaptable, currently increasing, and there are no major threats resulting in an overall population decline."
So what does our deep future hold? A growing number of researchers and organisations are now thinking seriously about that question.
For example, the Long Now Foundation, based in San Francisco, has created a forum where thinkers and scientists are invited to project the implications of their ideas over very long timescales.
Its flagship project is a mechanical clock, buried deep inside a mountain in Texas, that is designed to still be marking time thousands of years hence.
Then there are scientists who are giving serious consideration to the idea that we should recognise a new geological era: the Anthropocene.
They, too, are pulling the camera right back and asking what humanity's impact will be on the planet - in the context of stratigraphic time.
Perhaps perversely, it may be easier to think about such lengthy timescales than about the more immediate future.
The potential evolution of today's technology, and its social consequences, is dazzlingly complicated, and it's perhaps best left to science-fiction writers and futurologists to explore the many possibilities we can envisage.
That's one reason why we have launched Arc, a new publication dedicated to the near future.
But take a longer view and there is a surprising amount that we can say with considerable assurance.
As so often, the past holds the key to the future: we have now identified enough of the long-term patterns shaping the history of the planet, and our species, to make evidence-based forecasts about the situations in which our descendants will find themselves.
This long perspective makes the pessimistic view of our prospects seem more likely to be a passing fad.
To be sure, the future is not all rosy: while our species may flourish, a great many individuals may not.
But we are now knowledgeable enough to mitigate many of the risks that threatened the existence of earlier humans, and to improve the lot of those to come.
Thinking about our place in deep time is a good way to focus on the challenges that confront us today, and to make a future worth living in.
Science- If You're Lying? Your face gives you away
Updated: 19 Mar 2012
Your face gives the game away when you lie, says study
You may think you're hiding the truth, but your expression will always betray you
Roger Dobson Sunday 18 March 2012
An angel's face is still, goes the saying, because there are no lies inside.
Now scientists have revealed there is scientific truth behind the notion.
Researchers have established for the first time that five tell-tale facial muscle groups, including those activated by grief, behave differently in people who are lying.
Psychologists examined the facial movements of 52 people who had appeared on television in a number of countries, including the UK, to appeal for the return of someone who was missing.
In half the cases analysed they were lying, and were later convicted of murder.
The researchers found that the stress of dealing with so-called high-stakes lies meant they were unable to control some muscle movements.
The videos of 26 liars and 26 genuine people who made televised pleas for the safe return or information leading to an arrest in the murder of their relative were gathered from news agencies in Australia, Canada, the UK and the US.
The liars had all been convicted eventually on overwhelming physical evidence, including DNA.
The researchers, from the University of British Columbia, analysed more than 20,000 frames of the appearances and found marked differences between the two groups.
They homed in on when the interviewee made a direct appeal to the (supposed) perpetrator to release the missing person; to the missing person to make contact, or to the public for information. Then they focused on muscles associated with sadness, happiness and surprise – the frontalis, corrugator supercilii, orbicularis oculi, zygomatic major and depressor anguli oris.
Their results showed that the "grief" muscles – the corrugator supercilii and depressor anguli oris – were more often contracted in the faces of people who were genuine. The liars were more likely to show subtle contraction of the zygomatic major – masking smiles – and full contraction of the frontalis – frowning – a sign of failed attempts to appear sad.
Cases in the UK include Tracie Andrews, 27, who stabbed her fiancé, Lee Harvey, to death during an argument in their car near Alvechurch, Worcester, in 1996. She subsequently appeared in a televised press conference, with Harvey's mother, in which she claimed he was killed by a man in a road-rage attack. Andrews was sentenced to life imprisonment at Birmingham Crown Court in July 1997.
John Tanner, Paul Dyson, Fadi Nasri and Gordon Wardell all made similar appeals, lying to the cameras after killing their partners or having them killed. They were all subsequently found guilty and jailed.
The study – "Darwin the detective: Observable facial muscle contractions reveal emotional high-stakes lies" – concludes that there is an evolutionary component to lying, and people lie on average twice a day.
"While interpersonal deception often is highly successful, signs of covert emotional states are communicated clearly to the informed observer," the researchers said.
"The present study investigated, for the first time, the action of specific facial muscles speculated to reveal falsified sadness on the faces of individuals deceptively pleading for the return of a missing relative who they recently had murdered.''
In its paper, published last week, the team concluded: "Our findings support the notion that the human face is indelibly stamped with the tale of our humble origin and attempts to mask our emotions are likely to fail when engaging in a consequential act of deception.''
Additional reporting Koos Couvée
Science- CO2 in the Atmosphere is making us fatter
Updated: 16 Mar 2012
'CO2 in the atmosphere is making us all fatter':
Researcher says we are increasing in size as gas levels go up
By Eddie Wrenn
PUBLISHED: 17:16, 14 March 2012 | UPDATED: 07:29, 15 March 2012
Could CO2 emissions be making us fat?
The startling theory has been put forward by a Danish researcher, who says that the increase in obese people in Denmark is roughly equivalent to the increase of carbon dioxide in the atmosphere.
Researcher Lars-Georg Hersoug studied the weight of both fat and thin people over 22 years, and first started looking for explanations after noticing even the thin people were putting on the pounds.
Hersoug, now a post-doc at the Research Centre for Prevention and Health at Glostrup University Hospital, told Science Nordic: 'The normal theory is that fat people get fatter because they don’t move as much as they should.
'But the study showed that thin people also get fatter, and this happened over the whole of the 22-year period of the study.'
When he looked around for other factors, he saw how the CO2 concentration of the atmosphere had also increased in correlation to the weight gain.
He now proposes that orexins – hormones in the brain that stimulate wakefulness and energy expenditure – may be affected by CO2.
The hormones regulate when we go to bed, as well as the stimulation of food intake.
Hersoug also suggests as evidence that obesity increases in the U.S. happened fastest in the period 1986-2010 on the East Coast, which is where CO2 concentrations are highest.
He also cites a 2010 study of 20,000 laboratory animals who all gained weight, despite being in controlled conditions.
Testing his hypothesis, a pilot study at the university placed six men in special climate rooms, where some of them were exposed to increased amounts of CO2.
Seven hours later, the men were allowed to eat as much as they liked, and the men with more exposure to CO2 ate six per cent more food than the control group.
On the lighter side... Slimming down could save the world
The world’s obese people could help stop global warming by going on a diet, scientists claimed today.
Obese and overweight people were said to be contributing to climate change just by breathing.
Researchers calculated that if all the world’s heavyweights dropped 10kg, CO2 emissions would fall by 49.560 metric tonnes a year.
That’s the equivalent of 0.2% of the CO2 emitted globally in 2007.
The experts from Robert Gordon University in Aberdeen published their findings in the International Journal of Obesity.
However, they did not include methane gas emissions from flatulent large people, despite evidence about cows contributing to greenhouse gases.
Researcher Anna Gryka said: 'Due to the fact that CO2 production is proportionate to body mass, heavier individuals produce more.
'Universal moderate weight loss of the overweight and obese would result in an equivocal influence on the world carbon emissions with possible effects on climate disruption.
'Nevertheless, this relatively small amount could help to meet the CO2 emission reduction targets and unarguably would be of great benefit to the human’s health.'
She said the shift from seeing weight loss as beneficial for an individual’s health to also being beneficial for the planet could help change attitudes toward global warming and weight loss.
But she warned: 'It is clear that an omnipresent weight loss of all obese and overweight population is as improbable in the short term as global warming is inevitable if no action is taken.'
Science- All waste should be re-used or recycled
Updated: 08 Mar 2012
No-waste circular economy is good business – ask China
18:16 29 February 2012 by Michael Marshall
Don't throw out that broken toaster: it's key to our prosperity. Redesigning the economy so that all waste is reused or recycled would be good for business, according to two new reports.
For centuries the global economy has been linear. Companies extract resources from the environment, turn them into products and sell them to consumers – who eventually throw them out. As a result we are burning through Earth's natural resources and wasting useful materials.
But it doesn't have to be that way, says Felix Preston of think tank Chatham House in London. Instead, we could have a circular economy in which waste from one product is used in another.
In "A Global Redesign: Shaping the circular economy", Preston argues that reusing resources makes good business sense now that resource prices are high and volatile. He cites a January report by consultants McKinsey & Company which tries to put a value on the circular economy.
"Towards the Circular Economy: Economic and business rationale for an accelerated transition" estimates the circular economy could save the European Union $340 to $630 billion per year in materials costs, about 3 per cent of the EU's GDP.
"The opportunity is enormous," Preston says. "The challenge is how to unlock it."
However, a company wishing to go circular will face considerable upfront costs, and companies that have invested heavily in the existing system will be reluctant to change. Nevertheless some are pushing forward: for instance Renault's Eco2 cars are designed so that 95 per cent of their mass can be recovered and reused.
China is already pushing the circular economy. According to its 12th five-year plan – covering 2011-15 – China will "plan, construct and renovate various kinds of industrial parks according to the requirements of the circular economy".
Science- Coal gives off carbon dioxide and plants take it in - Wind turbines are a blot on all our lives
Updated: 28 Feb 2012
Fossil fuels can't solve problems
Sunday 26 February 2012
Alan Johnson (M Star February 14) needs "to wake up to the fact" that global warming due to the burning of fossil fuels is continuing apace with potentially catastrophic consequences.
And coal is the worst because it generates more carbon dioxide than other fossil fuels.
Therefore, "all this coal underground" can only make a contribution in the future if the carbon dioxide can be locked up somehow without it entering the atmosphere.
Definitely research along those lines should proceed, but success is by no means guaranteed.
Wind turbines, on the other hand, are tried and tested and are particularly suitable for Britain's climate, with the potential to generate as much as 80 per cent of our energy needs.
And, once in place, they cost nothing in terms of fuel.
It is true, as Mr Johnson points out, that since the turbines are not produced here they "do little for employment," but that is due to intransigent government policies which need to be changed.
Similarly, his criticism that much of the money paid out by taxpayers and consumers to energy companies merely adds to profits going offshore could be rectified by taking the companies into public or common ownership, so that the money could be devoted entirely to getting renewable technologies up and running.
In the case of wind, back-ups would be needed for the odd times when no wind is blowing.
Gas turbines are particularly suitable here, because they can be started up quickly - only generating carbon dioxide, obviously, when in use.
Better would be to invest in the development of molten salt nuclear reactors over the next 20 years which, unlike conventional nuclear technologies, are inherently safe and produce little long-life nuclear waste - and no carbon dioxide.
Science- India's Solar Panel Price Crash
Updated: 09 Feb 2012
India's panel price crash could spark solar revolution
02 February 2012 by Michael Marshall
SOLAR power has always had a reputation for being expensive, but not for much longer. In India, electricity from solar is now cheaper than that from diesel generators.
The news - which will boost India's "Solar Mission" to install 20,000 megawatts of solar power by 2022 - could have implications for other developing nations too.
Recent figures from market analysts Bloomberg New Energy Finance (BNEF) show that the price of solar panels fell by almost 50 per cent in 2011.
They are now just one-quarter of what they were in 2008.
That makes them a cost-effective option for many people in developing countries.
A quarter of people in India do not have access to electricity, according to the International Energy Agency's 2011 World Energy Outlook report.
Those who are connected to the national grid experience frequent blackouts.
To cope, many homes and factories install diesel generators.
But this comes at a cost.
Not only does burning diesel produce carbon dioxide, contributing to climate change, the fumes produced have been linked to health problems from respiratory and heart disease to cancer.
Now the generators could be on their way out. In India, electricity from solar supplied to the grid has fallen to just 8.78 rupees per kilowatt-hour compared with 17 rupees for diesel.
The drop has little to do with improvements in the notoriously poor efficiency of solar panels: industrial panels still only convert 15 to 18 per cent of the energy they receive into electricity.
But they are now much cheaper to produce, so inefficiency is no longer a major sticking point.
It is all largely down to economies of scale, says Jenny Chase, head of solar analysis at BNEF. In 2011, enough solar panels were produced worldwide to generate 27 gigawatts, compared with 7.7 GW in 2009.
Chase says solar power is now cheaper than diesel "anywhere as sunny as Spain".
That means vast areas of Latin America, Africa and Asia could start adopting solar power.
"We have been selling to Asia and the Middle East," says Björn Emde, European spokesman for Suntech, the world's largest producer of silicon panels.
Over the next few years he expects to add South Africa and Nigeria to that list.
The one thing stopping households buying a solar panel is the initial cost, says Amit Kumar, director of energy-environment technology development at The Energy and Resources Institute in New Delhi, India.
Buying a solar panel is more expensive than buying a diesel generator, but according to Chase's calculations solar becomes cheaper than diesel after seven years.
The panels last 25 years.
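Those figures can be sanity-checked with a back-of-envelope payback estimate. The per-kWh prices come from the article; the upfront cost premium and daily consumption below are illustrative assumptions:

```python
# Back-of-envelope payback estimate for solar vs diesel, using the article's
# per-kWh prices. The upfront cost premium and daily consumption are
# illustrative assumptions, not figures from the article.

SOLAR_RS_PER_KWH = 8.78
DIESEL_RS_PER_KWH = 17.0

def payback_years(extra_upfront_rs, kwh_per_day):
    """Years until solar's running-cost savings repay its extra upfront cost."""
    saving_per_year = (DIESEL_RS_PER_KWH - SOLAR_RS_PER_KWH) * kwh_per_day * 365
    return extra_upfront_rs / saving_per_year

# Assume solar costs 100,000 rupees more upfront for a small installation
# drawing 5 kWh a day:
years = payback_years(100_000, 5)
print(f"break-even after about {years:.1f} years")
```

With these assumed numbers the break-even lands at roughly 6.7 years, consistent with Chase's seven-year figure, and leaves some 18 years of the panels' lifetime as pure saving.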
Even in India, solar electricity remains twice as expensive as electricity from coal, but that may soon change.
While the price drop in 2011 was exceptional, analysts agree that solar will keep getting cheaper.
Suntech's in-house analysts predict that, by 2015, solar electricity will be as cheap as grid electricity in half of all countries.
When that happens, expect to see solar panels wherever you go.
Science- Russians have discovered a vast Antarctic Lake
Updated: 08 Feb 2012
Water contact may suggest Russians hit Antarctic lake
14:12 07 February 2012 by Gabrielle Walker and Michael Marshall
A Russian drilling team is trying to confirm that they have finally hit Lake Vostok, a vast subglacial body of water hidden 3.5 kilometres beneath the surface of the Antarctic ice sheet.
A spokesperson for the Russian Antarctic Expedition in St Petersburg told New Scientist this morning that the drill made contact with water late last week and then automatically withdrew up the borehole, as planned.
That suggests the lake has been breached, but the team are now checking the level of water in the borehole and readings from pressure sensors to confirm that the water did come from the lake and not a pocket of water in the ice above the lake.
Ice temperatures rise as you go deeper into the ice sheet, and approach melting point just above the lake, so the fact that the team hit liquid water doesn't necessarily mean they've reached the lake.
"For the time being we are waiting for official confirmation," said the spokesperson. An announcement is expected within the next two days.
No more drilling
Drilling stopped on 5 February and most of the team, led by Valerii Lukin, have left the area.
Two team members have remained to monitor the borehole over the Antarctic winter.
Even if Lukin's team have broken through the ice sheet to the lake, they will still need to wait nearly a year to sample its secrets.
To avoid contaminating Vostok with drilling fluid Lukin and his team planned from the start to pierce the roof of the sealed ice cave which encases the lake and then let pressure in the lake force water into the drill hole.
The plan is to leave the lake water to freeze in the borehole and create a plug, preventing contamination.
The team will return to sample it during the following austral summer.
Life, or nothing
Lake Vostok has been isolated from the surface for millions of years, and many hope it contains bizarre new life forms.
At present, however, that seems unlikely.
The drillers have already sampled wedges of accretion ice – lake water that has naturally frozen onto the underside of the ice sheet – and although some researchers claim it contains bacteria, others write this off as contamination.
Moreover, the ice above is loaded with bubbles of trapped air.
That air has accumulated in the lake for millennia, boosting the oxygen concentrations in the water and creating a potentially toxic environment. Some say that as a result, it is likely that the lake is completely sterile.
That could be just as interesting.
If Lake Vostok turns out to be sterile, that will make it the only place on Earth where there is water but no life.
Gabrielle Walker is the author of Antarctica: An intimate portrait of the world's most mysterious continent, to be published by Bloomsbury on 1 March
Science- Solar Storms to hit Earth knocking out power supplies
Updated: 08 Feb 2012
Earth in for bumpy ride as solar storms hit
01 February 2012 by David Shiga
Editorial: "No room for complacency over solar storms"
THE sun is gearing up for a peak in activity at a time when technology makes our planet more vulnerable to solar outbursts than ever before.
Monitoring has improved since the last solar maximum, so what are the big risks this time around?
About once every 11 years, the sun goes ballistic, throwing out more bursts of magnetic activity than normal.
As a large but harmless solar flare signalled last week, the next solar maximum is due in 2013.
In the past, these storms have triggered extra currents in power lines, destroying transformers and leading to blackouts.
This time around, blackouts could be more common.
John Kappenman of Storm Analysis Consultants in Duluth, Minnesota, found that many transformers in the US are ageing and therefore extra fragile.
He also points out that new transformers consume less power, which means relatively small currents from solar storms can overload and damage them.
"If anything, we're making things on the grid more vulnerable," he says.
As well as causing blackouts, solar storms can fry satellite electronics, which we rely on more and more for communication, navigation and weather forecasts.
To assess how vulnerable this leaves us, New Scientist enlisted the help of the Union of Concerned Scientists and Jonathan McDowell of the Harvard-Smithsonian Center for Astrophysics, both based in Cambridge, Massachusetts, who each keep careful records of satellite launches and failures.
They calculated that there are 994 working satellites in orbit today compared with 629 during the sun's last peak. Better storm forecasting should make them less vulnerable (see "And now for the solar forecast"). Ground controllers can command satellites to switch off sensitive parts temporarily in response to a forecast.
However, there is another risk that barely existed 11 years ago.
Many passenger flights between North America and Asia now take shortcuts over the North Pole.
This saves flying time and cuts fuel consumption, but it leaves planes vulnerable to solar storms.
Earth's magnetic defences are weakest at the poles (see diagram), allowing electrons and protons to pour into the atmosphere during solar storms.
This can interfere with planes' communication and navigation signals.
Airlines reroute polar flights when solar storms are predicted, as they did last week, adding hours of flight time and costing tens of thousands of dollars in extra fuel per flight.
Astronauts on the International Space Station, meanwhile, get an extra dose of radiation during a storm: there are six there now compared with three 11 years ago.
We can take some comfort in the knowledge that the looming maximum is supposed to be relatively weak, but we shouldn't be complacent. In 1859, during an otherwise weak cycle, a solar storm made telegraph wires spark, starting fires.
"You've got the opportunity for flares, and they can be big ones," warns David Hathaway of NASA's Goddard Space Flight Center in Greenbelt, Maryland.
And now for the solar forecast
Predicting the weather is tricky and solar storms are no exception.
We've improved in leaps and bounds since the last solar maximum but we still can't say whether an approaching flare will be a perfect storm or just a damp squib.
In 2000, the best early warning tool was NASA's Solar and Heliospheric Observatory.
Potentially damaging plasma clouds would show up on the spacecraft's images of the sun.
However, the images were transmitted to Earth just once a day and since solar outbursts can travel all the way to Earth in less than a day, some clouds were missed.
In 2010, NASA launched the Solar Dynamics Observatory, which streams images of the sun to Earth in real time.
That is one reason why the predicted arrival time for a plasma blast last week - the strongest since 2003 - was "exceptionally good", says David Hathaway of NASA's Goddard Space Flight Center in Greenbelt, Maryland, where SDO is managed.
The forecast was accurate to within 13 minutes.
SDO has limitations too, though.
The most violent storms tend to come from plasma clouds that have a strong magnetic field in opposite alignment to Earth's, but SDO's images cannot reveal a cloud's magnetic properties.
So we don't really know what we're in for until an hour ahead of impact, when the cloud engulfs NASA's Advanced Composition Explorer.
Positioned between the sun and Earth, it can measure the cloud's magnetic field, though even ACE is ageing and needs to be replaced.
Science- London Science Museum -Free fun this half term
Updated: 04 Feb 2012
Free family fun this half-term! London Science Museum
There are plenty of fun, free things to do with the kids this half-term.
Take one of our fascinating tours to discover the history behind some of our amazing objects or uncover a secret world of everyday objects in our new Design a Hero drop-in workshop.
Be amazed by our IMAX 3D Cinema...
Be transported deep into the action as you plunge into the ocean, journey into space or get right up close to baby orangutans and elephants. With a screen taller than four double-decker buses you'll feel like you're actually there!
Book online now
Chilean miners' capsule
This half term the Museum is honoured to display Fénix 2, the capsule used to rescue the 33 miners who were trapped underground at the San José mine in Chile in October 2010. To welcome the capsule to the Museum, we'll be running some special search and rescue events for 3 days only.
Can science save humanity?
Discover Futurecade, an innovative online suite of games that explores how science and technology impact our everyday lives and asks questions about robotics, space, geo-engineering and synthetic biology.
Who am I? Live Science - Me in 3D
How are our faces constructed? How does your face differ from other faces?
Come and get a 3D picture of your face taken by doctors from Great Ormond Street Hospital and your face will be added to a database that could help them improve treatment for future patients.
Science- Newt - "Fly me to the Moon"
Updated: 31 Jan 2012
Newt Gingrich, bizarre space visionary
17:47 30 January 2012 by Lawrence Krauss
Newt Gingrich described himself as a visionary when he unveiled plans to create a mammoth new space programme, including a permanent colony on the moon within the next nine years.
Within eight years, he pledges a new Mars rocket programme – specifically, a "continually operating propulsion system capable of getting to Mars within a remarkably short time".
He also reiterated his plan to declare at least part of the moon as US territory, with colonists capable of petitioning for statehood status.
There is little doubt that Gingrich believes in big ideas. Unfortunately, however, there is a difference between big ideas and good ideas.
After all, being a visionary doesn't mean abandoning practicality altogether but rather harnessing it creatively to make new things happen.
Put aside that Gingrich was speaking in Florida, the state most invested in space exploration and, by happenstance, the next up on the Republican primary schedule. Let's consider cost first.
The Apollo missions to the moon cost in excess of $100 billion in current dollars. In 2005, NASA administrator Michael Griffin estimated the cost of a programme to land four astronauts on the moon by 2018 (as was then planned), at $104 billion.
Who will pay?
Now, four astronauts is not a permanent colony on the moon.
To have a permanent colony, you would have to manufacture housing, most likely underground, or at least under significant shielding, since there is no atmosphere and no magnetic field to shield against the harmful effects of cosmic rays for an extended period.
Not to mention the need to build facilities for waste recycling, plus food storage and preparation.
That is, unless we continually provide food and other provisions for pilgrims from Earth, creating a non-self-sustaining colony.
But Gingrich has already made it quite clear, in his attacks on President Obama, that he would not like to be remembered for championing any such sort of government-sponsored food programme.
So, to truly embark on such an endeavour within a decade, we would have to spend somewhere between a few hundred billion and a trillion dollars.
Whether we could develop the necessary technology for such a task within a decade is an open question, although for a sufficiently large investment, it might not be impossible.
However, Gingrich is vying for leadership of a party whose major rallying cry is an end to big government programmes and make-work projects to stimulate the economy.
Gingrich might argue that we need not rely on government for the investment.
However, without a clear business plan, it is hard to imagine private money investing $1 trillion in a programme with no clear commercial goal.
Yet he did not explain precisely what he wanted to do with such a colony, or what it might achieve, besides potentially populating a new 51st state.
Certainly the goal would not be a scientific one, since there is little scientific gain to be made that would justify the cost. For the same money, one could populate the whole solar system with unmanned spacecraft to explore all the planets and their moons, and send up satellites to map the heavens on unprecedented scales.
So is manufacturing his goal?
But what would we manufacture on the moon that we could not do on Earth for a fraction of the cost?
It is true that there may be significant amounts of terrestrially rare isotopes like helium-3 in the lunar soil, and some have argued that this would be useful for fusion power here on Earth.
But since we don't yet know how to produce fusion power on Earth, it seems a little premature to rush out on a trillion-dollar adventure to gather up potential fuel.
Perhaps we could put mirrors on the moon to beam sunlight to Earth for power.
But given that currently 10,000 times the total energy used by humanity on a daily basis falls on the Earth from the sun, it is not clear that we need to go to the moon to harness more of it.
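That "10,000 times" figure can be sanity-checked with back-of-the-envelope arithmetic: the sunlight intercepted by Earth is the solar constant times the planet's cross-sectional disc area, and average global human power consumption in the early 2010s was on the order of 18 TW (the 18 TW figure is an outside approximation, not from the article):

```python
import math

SOLAR_CONSTANT = 1361.0   # W/m^2, mean irradiance at the top of the atmosphere
EARTH_RADIUS = 6.371e6    # m
HUMAN_POWER = 1.8e13      # W, roughly 18 TW average global consumption (approximate)

# Sunlight intercepted by Earth's cross-sectional disc
solar_power = SOLAR_CONSTANT * math.pi * EARTH_RADIUS ** 2

ratio = solar_power / HUMAN_POWER
print(f"Sunlight hitting Earth: {solar_power:.2e} W")
print(f"Ratio to human consumption: {ratio:.0f}")  # on the order of 10,000
```

The ratio comes out near 10,000, consistent with the article's claim.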
Gingrich also said during this same address that he envisions a vibrant commercial near-Earth space programme for the purposes of science, tourism and manufacturing. Once again, he didn't bother to explore precisely what sort of programme one might envisage here.
It took more than $100 billion to manufacture a white elephant in near-Earth orbit called the International Space Station, a large, smelly metal can that to date has produced no science, no manufacturing and tourism that only billionaires could afford.
Perhaps Gingrich imagines a vibrant Earth-surveying programme that might help monitor climate change?
No, probably not.
Not content to merely colonise the moon in a decade, Gingrich has also promised to develop a viable Mars programme to begin human space exploration of that planet within the next decade.
It is hard to imagine why he didn't also promise an intergalactic starship in this timeframe as well, as long as he was being visionary.
Finally, Gingrich may not be aware that the current US flags on the moon don't mean the US owns it, any more than those on US research stations in Antarctica mean the US owns that continent.
But I suppose if one is willing to suspend reality and imagine conjuring an expensive, and expansive, new space programme from nothing in a mere decade, without raising the taxes to do it, anything is possible.
It certainly seems easier to imagine populating the moon in this way than actually solving the very real problems we face on Earth today.
This article originally appeared in Slate. Lawrence Krauss is foundation professor and director of the Origins Project at Arizona State University in Tempe. His newest book is A Universe from Nothing: Why there is something rather than nothing
Science- New Improved (Industrial) Washing up liquid ?
Updated: 31 Jan 2012
Jacob Aron, technology reporter
A soap that responds to magnetic fields could be used to clean up oil spills without leaving behind detergents that can harm surrounding wildlife. Researchers at the University of Bristol, UK, dissolved iron particles in water containing chlorine and bromine ions, materials commonly found in household products such as mouthwash and fabric cleaner. This created a metallic centre within the soap particles that could be influenced by a nearby magnetic field.

The team tried out their new soap by placing it in a test tube beneath layers of water and an oil-like substance. Using a magnet, they were able to overcome both gravity and surface tension to lift the soap through the layers and out of the tube.

This test shows that magnetic soaps are much easier to remove from mixtures of other liquids, suggesting they could be used in response to environmental disasters such as oil spills, where concerns have been raised about the cleaning substances in use. A magnetic soap could easily be collected after cleaning, reducing the environmental impact.

Magnetic soaps could also have a range of industrial applications, thanks to their ability to change properties such as electrical conductivity or melting point at will with a magnetic on/off switch. These properties are normally altered by adding an electric charge or changing the pH, temperature or pressure of the substance, meaning the changes cannot easily be reversed.
Science- Where did I put those car keys ?
Updated: 31 Jan 2012
Can't find your keys? Your brain's out of sync
30 January 2012 by Jessica Hamzelou
YOU'RE running late for work and you can't find your keys.
What's really annoying is that in your frantic search, you pick up and move them without realising.
This may be because the brain systems involved in the task are working at different speeds, with the system responsible for perception unable to keep pace.
So says Grayden Solman and his colleagues at the University of Waterloo in Ontario, Canada.
To investigate how we search, Solman's team created a simple computer-based task that involved searching through a pile of coloured shapes on a computer screen.
Volunteers were instructed to find a specific shape in a stack as quickly as possible, while the computer monitored their actions.
"Between 10 and 20 per cent of the time, they would miss the object," says Solman, even though they picked it up.
"We thought that was remarkably often."
To find out why, the team developed a number of further experiments.
To check whether volunteers were just forgetting their target, they gave a new group a list of items to memorise before the search task, which they had to recall afterwards.
The idea was to fill each volunteer's "memory load", so that they were unable to hold any other information in their short-term memory.
Although this was expected to have a negative effect on their performance at the search task, the extra load made no difference to the percentage of mistakes volunteers made.
To check that the volunteers were paying enough attention to the items they were moving, Solman's team created another task involving a stack of cards marked with shapes that only became visible while the card was being moved.
Again, they were surprised to see the same level of error, says Solman.
Finally, the team analysed participants' mouse movements as they were carrying out a similar search task.
They discovered that volunteers' movements were slower after they had moved and missed their target (Cognition, DOI: 10.1016/j.cognition.2011.12.006).
Solman's team propose that the system in the brain that deals with movement is running too quickly for the visual system to keep up.
While you are rummaging around a messy house to find your keys, you might not be giving your visual system enough time to work out what each object is.
Since time can be costly, sacrificing accuracy on occasion for speed might be beneficial overall, Solman thinks.
The slowing of mouse movements suggests that, at some level, the volunteers were aware that they had missed their target. This theory is backed up by other studies showing that people tend to slow down their actions after making a mistake, even if they don't consciously realise the mistake.
Solman reckons this reflects the brain's "attempt to slow down the motor system", to allow the visual system to catch up and conscious perception to occur.
"What's really interesting is the notion that the motor and perceptual system are decoupled.
They're both trying to help you find [your keys] but they're not coordinating," says Todd Horowitz, at Harvard University.
"There are implications for social search, such as a doctor looking through an X-ray or [security] looking through luggage."