Global Warming - A Chilling Possibility

Global warming could plunge North America and Western Europe into a deep freeze, possibly within only a few decades.

That's the paradoxical scenario gaining credibility among many climate scientists. The thawing of sea ice covering the Arctic could disturb or even halt large currents in the Atlantic Ocean.

Without the vast heat these ocean currents deliver - comparable to the output of a million nuclear power plants - Europe's average temperature would likely drop 5 to 10°C (9 to 18°F), and parts of eastern North America would be chilled somewhat less. A dip of that size is comparable to the difference between today's climate and global average temperatures toward the end of the last ice age, roughly 20,000 years ago.
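
As a rough order-of-magnitude check on that comparison: the North Atlantic currents carry on the order of a petawatt of heat poleward, and a large nuclear power plant produces on the order of a gigawatt. Both are round figures assumed here rather than numbers from the article, but the ratio lands at about a million, as this sketch shows.

```python
# Rough sanity check of the "million nuclear power plants" comparison.
# Assumed round numbers (not from the article): the North Atlantic currents
# carry on the order of 1 petawatt of heat poleward, and a large nuclear
# power plant generates on the order of 1 gigawatt.
ocean_heat_transport_w = 1e15   # ~1 PW carried north by Atlantic currents
nuclear_plant_w = 1e9           # ~1 GW per large power plant

equivalent_plants = ocean_heat_transport_w / nuclear_plant_w
print(f"Equivalent nuclear plants: {equivalent_plants:,.0f}")  # ~1,000,000
```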

Some scientists believe this shift in ocean currents could come surprisingly soon - within as little as 20 years, according to Robert Gagosian, president and director of the Woods Hole Oceanographic Institution. Others doubt it will happen at all. Even so, the Pentagon is taking notice. Andrew Marshall, a veteran Defense Department planner, recently released an unclassified report detailing how a shift in ocean currents in the near future could compromise national security.

"It's difficult to predict what will happen," cautions Donald Cavalieri, a senior scientist at NASA's Goddard Space Flight Center, "because the Arctic and North Atlantic are very complex systems with many interactions between the land, the sea, and the atmosphere. But the facts do suggest that the changes we're seeing in the Arctic could potentially affect currents that warm Western Europe, and that's gotten a lot of people concerned."

There are several satellites keeping an all-weather watch on ice cover in the Arctic. NASA's Aqua satellite, for instance, carries a Japanese-built sensor called the Advanced Microwave Scanning Radiometer-EOS ("AMSR-E" for short). Using microwaves, rather than visible light, AMSR-E can penetrate clouds and offer uninterrupted surveillance of the ice, even at night, explains Roy Spencer, the instrument's principal investigator at the Global Hydrology and Climate Center in Huntsville, Alabama. Other ice-watching satellites, operated by NASA, NOAA and the Dept. of Defense, use similar technology.

The view from orbit clearly shows a long-term decline in the "perennial" Arctic sea ice (the part that remains frozen during the warm summer months). According to a 2002 paper by Josefino Comiso, a climate scientist at NASA's Goddard Space Flight Center, this year-round ice has been retreating since the beginning of the satellite record in 1978 at an average rate of 9% per decade. Studies looking at more recent data peg the rate at 14% per decade, suggesting that the decline of Arctic sea ice is accelerating.

Retreating Arctic ice, 1979-2003, based on data collected by the Defense Meteorological Satellite Program (DMSP) Special Sensor Microwave Imager (SSMI).

Some scientists worry that melting Arctic sea ice will dump enough freshwater into the North Atlantic to interfere with sea currents. Some freshwater would come from the ice-melt itself, but the main contributor would be increased rain and snow in the region. Retreating ice cover exposes more of the ocean surface, allowing more moisture to evaporate into the atmosphere and leading to more precipitation.
Because saltwater is denser and heavier than freshwater, this "freshening" of the North Atlantic would make the surface layers more buoyant. That's a problem because the surface water needs to sink to drive a primary ocean circulation pattern known as the "Great Ocean Conveyor."

Sunken water flows south along the ocean floor toward the equator, while warm surface waters from tropical latitudes flow north to replace the water that sank, thus keeping the Conveyor slowly chugging along. An increase in freshwater could prevent this sinking of North Atlantic surface waters, slowing or stopping this circulation.
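
To see why this "freshening" matters, here is a minimal sketch using a simplified linear equation of state for seawater. The coefficients and water properties are illustrative textbook-style values, not figures from the article; the point is only that lower salinity means lower density, so freshened surface water is less inclined to sink.

```python
# Simplified linear equation of state for seawater (illustrative only):
# density rises with salinity and falls with temperature.
RHO0 = 1027.0   # kg/m^3, reference density of seawater
ALPHA = 0.2     # kg/m^3 per degree C (thermal expansion, approximate)
BETA = 0.8      # kg/m^3 per unit of salinity (haline contraction, approximate)
T0, S0 = 10.0, 35.0  # reference temperature (C) and salinity (PSU)

def density(temp_c, salinity_psu):
    """Approximate seawater density in kg/m^3."""
    return RHO0 - ALPHA * (temp_c - T0) + BETA * (salinity_psu - S0)

# Cold, salty surface water sinks readily; the same water "freshened"
# by extra rain and meltwater is lighter and tends to stay at the surface.
print(density(5.0, 35.0))   # denser: favours sinking
print(density(5.0, 33.0))   # fresher, so lighter: sinking is suppressed
```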

AMSR-E is collecting new data that will help scientists evaluate this possibility. For one thing, it provides greatly improved ground resolution over previous all-weather sensors. AMSR-E images reveal smaller cracks and fissures in the ice as it breaks up in the spring. This detail allows scientists to better understand the dynamics of ice break-up, says Cavalieri, a member of the AMSR-E team.

"Other important pieces of the puzzle, like rainfall, sea-surface temperatures, and oceanic winds, are also detected by AMSR-E. Looking at those variables together should help scientists assess the likelihood of a change in the Atlantic currents," adds Spencer.

Will it happen again? Researchers are scrambling to find out.
On Feb. 13, an expedition set sail from Great Britain to place current-monitoring sensors in the Atlantic Ocean that will check the Gulf Stream for signs of slowing. The voyage is the latest step in a joint US / UK research project called Rapid Climate Change, which began in 2001. Another international project, called SEARCH (Study of Environmental Arctic CHange), kicked off in 2001 with the goal of more carefully assessing changes in Arctic sea ice thickness.

The RRS Discovery, on a voyage to measure currents in the Atlantic Ocean.

Much depends on how fast the warming of the Arctic occurs, according to computer simulations by Thomas F. Stocker and Andreas Schmittner of the University of Bern. In their models, a faster warming could shut down the major Atlantic current completely, while a slower warming might only slow the current for a few centuries.

And, inevitably, the discussion turns to people. Does human industry play a major role in warming the Arctic? Could we reverse the trend, if we wanted to? Not all scientists agree. Some argue that the changes occurring in the Arctic are consistent with large, slow natural cycles in ocean behavior that are known to science. Others see a greater human component.

"The sea ice thawing is consistent with the warming we've seen in the last century," notes Spencer, but "we don't know how much of that warming is a natural climate fluctuation and what portion is due to manmade greenhouse gases."

If the Great Conveyor Belt suddenly stops, the cause might not matter. Europeans will have other things on their minds - like how to grow crops in the snow. Now is the time to find out, while it's merely a chilling possibility.

Mysteries Of Bermuda Triangle

Over the past century, hundreds of ships and planes have gone missing in a mysterious stretch of water in the Atlantic Ocean called the Bermuda Triangle. Is there a scientific explanation for these disappearances? Miami, Puerto Rico and Bermuda are prime holiday destinations boasting sun, beaches and coral seas. But between these idyllic settings there is a dark side: countless ships and planes have mysteriously gone missing in the one and a half million square miles of ocean separating them. About 60 years ago the area was claiming about five planes every day, and it was nicknamed the Bermuda Triangle by a magazine in 1964. Today about that many planes disappear in the region each year, and there are a number of theories explaining what could be happening.

The Bermuda Triangle, a stretch of water between Puerto Rico, Bermuda and Florida, has been the site of many plane and ship disappearances.

Twins George and David Rothschild are among the first passengers to have experienced bizarre effects in the Bermuda Triangle. In 1952, when they were 19 years old, the two naval men had to make an emergency trip home on a navy light aircraft, north over the Florida Keys, to attend their father's funeral. "We had been flying for probably 20 or 30 minutes when all of a sudden the pilot yelled out that the instruments were dead and he became very frantic," says George Rothschild. He had lost his bearings, and not only did he not know where he was, he also had no idea how much gas was left in the fuel tanks. After what seemed like hours, they landed safely in Norfolk.

Some speculate that it had nothing to do with the location, but rather with the instruments available at the time. Pilot Robert Grant says that back in the 1940s, navigating a plane involved a lot of guesswork, since pilots relied completely on a magnetic compass to guide them. Dead reckoning was used, meaning that pilots would trust their compass and then estimate how the wind would influence their planned flight path in order to remain on track. "No matter what your mind tells you, you must stay on that course," says Grant. "If you don't, and you start turning to wherever you think you should be going, then you're toast."
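
For illustration of the arithmetic behind dead reckoning, the sketch below works through the classic wind-triangle calculation: pick a heading that offsets the wind so the aircraft stays on the intended track. The speeds and directions are invented for the example.

```python
import math

def wind_correction(track_deg, airspeed_kt, wind_from_deg, wind_speed_kt):
    """Classic wind-triangle: heading to fly and resulting ground speed.

    track_deg: desired course over the ground, degrees
    wind_from_deg: direction the wind is blowing FROM, degrees
    """
    # Angle between the wind vector and the desired track.
    wind_angle = math.radians(wind_from_deg - track_deg)
    # Wind correction angle: how far to point into the wind.
    wca = math.asin(wind_speed_kt * math.sin(wind_angle) / airspeed_kt)
    heading = (track_deg + math.degrees(wca)) % 360
    ground_speed = (airspeed_kt * math.cos(wca)
                    - wind_speed_kt * math.cos(wind_angle))
    return heading, ground_speed

# Example: flying a due-north (360 degree) track at 120 knots
# with a 20-knot wind from the north-west.
hdg, gs = wind_correction(360, 120, 315, 20)
print(f"Heading to fly: {hdg:.0f} deg, ground speed: {gs:.0f} kt")
```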

An image of Tropical Storm Harvey, which hit Bermuda in August 2005.

Hurricanes are common in the Bermuda Triangle area. In the Atlantic Ocean, they typically originate off the African coast and thrive on the moisture of the warm, tropical waters. Hurricane records from the past 100 years show that they often head west for the United States but swerve into the waters of the Bermuda Triangle at the last minute. Jim Lushine, a meteorologist at the National Hurricane Center in Miami, Florida, studies the weather in the Bermuda Triangle and says that there are more hurricanes in that particular area than in any other part of the Atlantic basin.

But thunderstorms in the area can be just as dangerous. In 1986, a historic ship called the Pride of Baltimore vanished from radar screens while it was in the Bermuda Triangle, making a trip from the Caribbean to Baltimore. About four and a half days later, the wreckage and eight survivors were found, and they revealed that the ship had been hit by a microburst: 80 mile per hour winds emanating from a freak thunderstorm. It happened so quickly that the crew didn't have time to make a distress call. "The ship was sunk in the downburst, unfortunately with a great loss of life," says Lushine. Similar downbursts are probably responsible for some of the sunken ships in the Bermuda Triangle.

Even more unpredictable than thunderstorms are waterspouts. These can be caused by tornadoes that move out to sea or rotating columns of air that drop from thunderstorms, creating a vortex of spray. When the moisture condenses, it forms a twisting column that connects the sea to the clouds. Jim Edds, an amateur fisherman who chases and films waterspouts for fun, says that if you are out at night and a tornado-like waterspout develops - the really big, strong ones with high velocity - it can flip your vessel over.

Methane bubbling up from seismic activity at the bottom of the ocean could also explain disappearing ships. Scientists have discovered that huge bubbles of methane gas can violently erupt without warning from the ocean floor, and at least one oil rig is thought to have sunk because of this phenomenon. Ralph Richardson, the director of the Bermuda Underwater Exploration Institute, claims that a large pocket of gas could surround a ship, causing it to lose buoyancy and disappear without warning.

At the U.S. Navy's research centre in California, Bruce Denardo, an expert in fluid dynamics, has shown that bubbles from methane gas eruptions could be responsible for vanishing ships in the open ocean. A ship floats because the pressure of the surrounding water pushes up on its hull, and that upward push depends on the density of the water. If methane bubbles are introduced, they lower the density of the water: the bubbles take up space, so the same volume of fluid contains less water, and the buoyant force it can exert decreases. In an experiment with a ball in water, Denardo demonstrates that the ball rides lower and lower as the amount of bubbles increases, until it reaches a critical point and sinks completely. "If a ship were to take on enough water, it would sink to the bottom and stay there," says Denardo.
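
A minimal sketch of the effect Denardo describes, using Archimedes' principle: a ship floats while the water its hull can displace outweighs the ship itself, and bubbles reduce the effective density of that water. The ship's mass, hull volume and void fractions below are invented for illustration.

```python
# Archimedes' principle: a ship floats while the mass of water its hull can
# displace is at least equal to the ship's own mass. Bubbles lower the
# effective density of the water, so the same hull displaces less mass.
SEAWATER_DENSITY = 1025.0   # kg/m^3

def floats(ship_mass_kg, submerged_hull_volume_m3, bubble_fraction):
    """True if the ship still floats when a fraction of the water
    column is replaced by (effectively massless) gas bubbles."""
    effective_density = SEAWATER_DENSITY * (1.0 - bubble_fraction)
    max_displaced_mass = effective_density * submerged_hull_volume_m3
    return max_displaced_mass >= ship_mass_kg

# Invented example: a 9,000-tonne ship whose hull can displace 10,000 m^3.
for frac in (0.0, 0.05, 0.10, 0.15):
    print(frac, floats(9_000_000, 10_000, frac))   # sinks at 15% bubbles
```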

A mysterious time warp?
Others have more far-out explanations for the Bermuda Triangle disappearances. Property developer Bruce Gernon claims that on December 4th 1970, when he flew from the island of Andros in the Bahamas to Florida, he experienced a distortion in space-time. He had made the same trip on many occasions, but he claims that his journey that day was much faster than usual. "I noticed a huge U-shaped opening in the clouds, but as I approached it, the top of the opening closed and it became a horizontal tunnel that appeared to be 10 to 15 miles long," he says. "When the aircraft entered the tunnel, some lines, which I call time lines, appeared, rotating counter-clockwise. It was difficult to keep the plane level and concentrate on the other end of the tunnel, which was aiming directly for Miami."

Gernon claims that when he came out of the tunnel, it closed fast behind him and he was surrounded by a strange fog. His instruments had stopped working and Air Traffic Control had no radar trace of his plane until they realized that it was actually over Miami Beach. Given the time they had been flying, they should still have been about 45 minutes away from Miami. After researching what could have happened, Gernon is now writing a book about his experience. "I have come to the conclusion that we experienced a space-time warp of a hundred miles in thirty minutes," he says.

Is this scientifically feasible? Nearly a century ago, Einstein proposed his general theory of relativity, which predicted that huge spinning objects could distort space and time in their surroundings. Although NASA researchers have now found signs that black holes and neutron stars do appear to warp space-time, this is still a far cry from concepts introduced by science fiction, like wormholes, tunnels in space-time that would give travellers an express route across great distances or between different dimensions.

Explanations for the vanishings in the Bermuda Triangle all remain theories. But especially for people who have witnessed bizarre events in this area, there is a strong desire to find some answers. One author, Gian Quasar, has been investigating every plane and ship disappearance in the Bermuda Triangle and has listed every case in a massive internet database at http://www.bermuda-triangle.org/. With initiatives like this and further research, perhaps the mystery will one day be solved.

What is the Loch Ness Monster?

As far back as the 7th century, people have reported seeing a monster in Loch Ness in Scotland. Can science explain these mysterious sightings?

Loch Ness is a lake in Scotland that holds the largest volume of freshwater in the United Kingdom. But rather than being known for its size, it is famous for the mysterious legend of the Loch Ness monster.

For hundreds of years, people have reported catching a glimpse of a huge creature in the lake while others have shared photos they claim to have taken of this sea creature. The legend is so great that even scientists have been intrigued and many have conducted experiments and come up with theories to try and explain what people could be witnessing.

Painting of plesiosaurs, creatures thought to be most similar to people's descriptions of the Loch Ness monster.

A real creature in Loch Ness?
It has been proposed that Nessie, as the Loch Ness monster is commonly called, could be a prehistoric creature called a plesiosaur, an animal that grew up to ten meters in length and has long been considered extinct. Adrian Shine, the leader of a British team called the Loch Ness Project, has spent over 30 years trying to rationally explain the monster sightings by researching the ecology of the region. If a large creature really were living in the lake, there would have to be evidence of a food chain to sustain it. A creature like the Loch Ness monster would most likely eat fish, which in turn would live off large quantities of microscopic animals called zooplankton. There would have to be enough zooplankton in the lake to support populations of larger animals.

One way of estimating the amount of zooplankton in the lake is to examine the quantity of green algae, the bottom rung of the food chain, that the zooplankton feed on. Green algae need some light to thrive, so by examining how deep into the lake sunlight can penetrate, researchers can estimate the amount of green algae and, from this, the kind of population that could be sustained.

Scientists have calculated that at most 17 to 24 tons of fish live in Loch Ness. For a lake of its size that is a small amount, and it could sustain only about ten creatures weighing 226 kg each. According to Richard Forrest, an expert on plesiosaurs, ten creatures would not be enough to keep a colony going. "Thirty to forty creatures would be the minimum size of a breeding population," he says.
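
A back-of-the-envelope version of that calculation is shown below. It assumes the common ecological rule of thumb that roughly ten per cent of prey biomass can be converted into predator biomass; the ten-per-cent figure is a standard approximation, not something stated in the article.

```python
# Rough predator-biomass estimate from the fish stock quoted above.
# The 10% "trophic efficiency" rule of thumb is a standard ecological
# approximation and is assumed here, not taken from the article.
fish_biomass_kg = 22_600      # roughly the upper end of the 17-24 ton range
trophic_efficiency = 0.10     # fraction of prey biomass sustaining predators
predator_mass_kg = 226        # per-animal mass used in the article

supported_predators = fish_biomass_kg * trophic_efficiency / predator_mass_kg
print(f"Predators supported: about {supported_predators:.0f}")   # ~10
```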

In addition, if creatures similar to plesiosaurs lived in the waters of Loch Ness, they would be seen very frequently, as they would have to surface several times a day to breathe. Eyewitnesses have often mentioned seeing an animal throwing back its long neck from the water, but Forrest claims that plesiosaurs couldn't do that. "The simple fact is that a plesiosaur's neck is too stiff. The bones of the neck interlock and there are tall spines on top of them, so the neck can't go straight out of the water," he says.

SONAR investigations
But it is not impossible for prehistoric creatures to still be around today. In 1938, South African fishermen caught a gigantic fish that turned out to be a coelacanth, a prehistoric fish thought to have been extinct for the past 80 million years. Because the water is murky and filled with peat, it has been hard for divers to properly investigate the depths of Loch Ness and the life that exists there. The advent of SONAR, a technique that sends sound waves into the water and measures distance by timing how long an echo takes to return to the source, has proved useful for probing the mystery, since the waves can detect any objects in their path.
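
The ranging principle itself is simple enough to sketch: distance is half the echo's round-trip time multiplied by the speed of sound in water. The speed used below is a typical round figure assumed for illustration.

```python
# SONAR ranging: distance is half the round-trip travel time of the echo
# multiplied by the speed of sound in the water.
SPEED_OF_SOUND_WATER = 1450.0   # m/s, a typical value for cold fresh water

def echo_distance(round_trip_seconds):
    """Distance in metres to the object that returned the echo."""
    return SPEED_OF_SOUND_WATER * round_trip_seconds / 2.0

# An echo returning after 0.3 seconds puts the target about 220 m away.
print(echo_distance(0.3))
```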

In 1987, Operation Deepscan took place - the biggest SONAR exploration of Loch Ness. Boats equipped with SONAR were deployed across the whole width of the lake and they simultaneously sent out acoustic waves. BBC News reported that the scientists had made sonar contact with a large unidentified object of unusual size and strength. The researchers decided to return to the same spot and re-scan the area.

Analysis of the SONAR images seemed to point to debris at the bottom of the lake, although three of the pictures showed moving debris. Shine speculates that these could be seals that got into the lake, since they would be of about the same magnitude as the objects detected. But no one has been able to confirm their identity.

The Surgeon's Photo: the famous picture that for a long time was considered the most trusted photo of the monster.

Similarly, many people have captured photos of monster-like creatures that have never been explained. Many have been dismissed as forgeries, but the most trusted one was called the Surgeon's Photo, since it was supposed to have been taken by the well-respected surgeon Robert Wilson. For about 50 years, the true story behind the picture was a mystery, but it was finally revealed to be a hoax started by a man called Marmaduke Wetherell. Attempting to prove to the world that Nessie exists, Wetherell had already claimed to have found monster-sized footprints near the Loch, but when the casts he sent to the Natural History Museum in London were analysed, they were found to be hippopotamus tracks! As revenge, he made a model of the monster and photographed it on Loch Ness. He managed to persuade Wilson to pass it off as his own, since he knew that no one would believe him after his hippo prank.

Monster earthquakes?
Although many sightings could be hoaxes, there could also be a geological interpretation: seismic activity in the lake could cause disturbances on its surface that could be mistaken for Nessie. Loch Ness is situated on the Great Glen fault line, which was created by the collision of continents that formed Scotland 400 million years ago. Over 200 years ago, a major earthquake with its epicentre in Lisbon, Portugal, caused water disturbances in the Loch more than 1500 km away. "Reports state that a wave about two or three feet high was seen travelling up and down Loch Ness," says Roger Musson, the principal seismologist at the British Geological Survey. But he claims that there is generally little seismic activity in the area and doesn't think that earthquakes can account for the repeated sightings.

Dr Luigi Piccardi, an Italian specialist in Mediterranean geology, disagrees. Currently studying events depicted in Greek mythology, he says that many of the effects described in the myths can be related to real effects of strong earthquakes. He thinks that the same reasoning applies to Nessie, and claims that there are recurring tremors around the town of Inverness, just 16 km away, that could spread to the Loch. He plans to test his theory by conducting a detailed seismic survey in the area.

But should geological explanations fail, psychology may be able to provide some insight. Helen Ross, a psychologist and expert on illusions, thinks that myth is so powerful that people can convince themselves that an ordinary object floating in the water is a monster. "When something really ambiguous is there, people often don't know what they're seeing and they can see all sorts of strange things," she says. "It's a bit like seeing faces in the fire, or ink blots appearing as all sorts of creatures."

Until physical evidence of the Loch Ness monster is found, such as the creature itself or its skeleton, it may be hard to convince most scientists that it exists. Perhaps the sightings are simply an example of the human fascination with mystery and intrigue, and of the awe that many people have for the natural world.

Robots of the Future

Does the future of robotics hold the promise of a dream come true, lightening the workload on humanity and providing companionship? Or the murder and mayhem of Hollywood movies?
When the Czech playwright Karel Capek sat down in 1920 to write a play about humanoid machines that turn against their creators, he decided to call his imaginary creations 'robots', from the Czech word for 'slave labour'. Ever since then, our thinking about robots, whether fictional or real, has been dominated by the two key ideas in Capek's play. Firstly, robots are supposed to do the boring and difficult jobs that humans can't do or don't want to do. Secondly, robots are potentially dangerous.

These two ideas remain influential, but not everyone accepts them. The first dissenting voice was that of the great Russian-American science-fiction writer Isaac Asimov, who was born the same year that Capek wrote his notorious play. In 1940, barely two decades later, while others were still slavishly reworking Capek's narrative about nasty robots taking over the world, Asimov was already asking what practical steps humanity might take to avoid this fate. And instead of assuming that robots would be confined to boring and dangerous jobs, Asimov imagined a future in which robots care for our children and strike up friendships with us.

From the perspective of the early twenty-first century, it might seem that Capek was right and that Asimov was an idealistic dreamer. After all, most currently-existing robots are confined to doing nasty, boring and dangerous jobs, right? Wrong. According to the 2003 World Robotics Survey produced by the United Nations Economic Commission for Europe, over a third of all the robots in the world are designed not to spray-paint cars or mow the lawn, but simply to entertain humans. And the number is rising fast. It is quite possible, then, that the killer app for robots will turn out to be not the slave labour envisaged by Capek, but the social companionship imagined by Asimov.

AIBO
The most impressive entertainment robot currently on the market is undoubtedly the Aibo, a robotic dog produced by Sony. According to Onrobo.com, a website devoted to home and entertainment robotics, the Aibo is the standard by which all other entertainment robots are measured. Special software allows each Aibo to learn and develop its own unique personality as it interacts with its owner. But at over a thousand pounds a shot, they aren't cheap.
Commercial products like the Aibo still have some way to go before they have the quasi-human capacities of 'Robbie', the child-caring robot envisaged by Asimov in one of his earliest short stories, but the technology is moving fast. Scientists around the world are already beginning to develop the components for more advanced sociable robots, such as emotional recognition systems and emotional expression systems.

Emotions are vital to human interaction, so any robot that has to interact naturally with a human will need to be able to recognise human expressions of emotion and to express its own emotions in ways that humans can recognise. One of the pioneers in this area of research (which is known as 'affective computing') is Cynthia Breazeal, a roboticist at the Massachusetts Institute of Technology who has built an emotionally-expressive humanoid head called Kismet. Kismet has moveable eyelids, eyes and lips which allow him to make a variety of emotional expressions. When left alone, Kismet looks sad, but when he detects a human face he smiles, inviting attention. If the carer moves too fast, a look of fear warns that something is wrong. Human parents who play with Kismet cannot help but respond sympathetically to these simple forms of emotional behaviour.

Another emotionally-expressive robot called WE-4R has been built by Atsuo Takanishi and colleagues at Waseda University in Japan. Whereas Kismet is limited to facial expressions and head movements, WE-4R can also move its torso and wave its arms around to express its emotions.

The gap between science fiction and science fact is closing, and closing fast. In fact, the technology is advancing so quickly that some people are already worried about what will happen when robots become as emotional as we are. Will they turn against their creators, as Capek predicted? In the new Hollywood blockbuster, I, Robot (which is loosely based on an eponymous collection of Asimov's short stories), Will Smith plays a detective investigating the murder of a famous scientist. Despite the fail-safe mechanism built into the robots, which prevents them from harming humans, the detective suspects that the scientist was killed by a robot. His investigation leads him to discover an even more serious threat to the human race.

I, Robot is set in the year 2035, thirty-one years in the future. To get an idea of how advanced robots will be by then, think about how far videogames have come in the last thirty-one years. Back in 1973, the most advanced videogame was Pong, in which a white dot representing a tennis ball was batted back and forth across a black screen. The players moved the bats up and down by turning knobs on the game console. By today's standards, the game was incredibly primitive. That's how today's robots will look to people in the year 2035.

Robots from the film I, Robot.

Will those future people look back at the primitive robots of 2004 and wish they hadn't advanced any further? If we want to avoid the nightmare scenario of a battle between humans and robots, we should start thinking about how to ensure that robots remain safe even as they become more intelligent. Isaac Asimov suggested that we could make sure robots don't become dangerous by programming them to follow his three 'Laws of Robotics':
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
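
Written as code, the laws amount to a strict priority ordering. The sketch below is only a toy encoding: the Action fields, and the idea that "harm to a human" can be known in advance as a simple flag, are invented for illustration, and hiding that difficulty is precisely where the real problems begin.

```python
# Toy encoding of Asimov's Three Laws as a strict priority check.
# Everything here (the Action fields, the notion that "harm" can be
# predicted as a boolean) is invented for illustration; real robots
# cannot reduce "harm to a human" to a flag, which is exactly the problem.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool           # Law 1: would this action injure a human?
    inaction_harms_human: bool  # Law 1: would NOT acting allow harm?
    ordered_by_human: bool      # Law 2: was this ordered by a human?
    endangers_robot: bool       # Law 3: does it threaten the robot itself?

def permitted(action: Action) -> bool:
    if action.harms_human:
        return False                      # First Law always wins
    if action.inaction_harms_human:
        return True                       # must act, despite Laws 2 and 3
    if action.ordered_by_human:
        return True                       # Second Law: obey
    return not action.endangers_robot     # Third Law: self-preservation

print(permitted(Action("fetch coffee", False, False, True, False)))   # True
print(permitted(Action("push human",   True,  False, True, False)))   # False
```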

At first blush, these three laws might seem like a good way to keep robots in their place. But to a roboticist they pose more problems than they solve. Asimov was well aware of this, and many of his short stories revolve around the contradictions and dilemmas implicit in the three laws.
The sobering conclusion that emerges from these stories is that preventing intelligent robots from harming humans will require something much more complex than simply programming them to follow the three laws.

Microscopes: why seeing smaller is not always better

Why are researchers working on a new type of microscope that has a lower resolution than those which already exist?

Antonie van Leeuwenhoek first saw and described cells and bacteria through one of the first microscopes in the 17th century.

Since then we have wanted to know about biology on smaller and smaller scales. The first microscopes consisted of nothing more than a tube with a plate for the object at one end and a magnifying glass at the other. In the 18th century the resolution was improved by the development of more strongly curved lenses, giving greater magnification, and by combining several lenses together. It wasn't until the 20th century that new scientific theories and technologies allowed the creation of entirely different types of microscopes.

New technologies and methods in fluorescence microscopy will make it possible to understand cellular processes on a scale never seen before. Besides broadening our understanding of how life works, they will open endless new possibilities for the development of new treatments in fields such as cancer research, immunology and cardiovascular disease. With these microscopes, resolutions of 20-50 nm should be commonly achievable.

What will fluorescence microscopes enable us to see?
Most microscopes ever invented have been 'optical': that is, they bounce light off an object in order to study it. However, light microscopy suffers from one weakness: limited resolution. Due to the wave nature of light, different waves in a beam of light interfere with each other, i.e. they diffract. Because of this, when a beam of light is focused using a lens, it forms a spot that is about 200 nm wide in the x- and y-directions and 500 nm long in the z-direction, depending on the wavelength of the light and the angle over which the lens can collect light.
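
Those spot sizes follow from the standard diffraction-limit estimate: wavelength divided by twice the numerical aperture of the lens. The wavelength and numerical aperture below are typical values assumed for illustration.

```python
# Abbe diffraction limit for the lateral size of a focused spot of light:
# d = wavelength / (2 * NA). The axial (z) extent is a few times larger
# and depends on the exact definition used.
wavelength_nm = 520   # green light
NA = 1.4              # numerical aperture of an oil-immersion objective

lateral_nm = wavelength_nm / (2 * NA)
print(f"lateral spot size ~ {lateral_nm:.0f} nm")   # ~186 nm, close to the 200 nm quoted above
```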

Since the 1930s various types of electron microscope have been invented and, while remaining expensive, they have come into fairly common usage. The development of the electron microscope, in which a beam of electrons is used instead of a beam of light, greatly increased the resolution thanks to the smaller wavelength of electrons compared to photons, the particles that light is made of.
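
The gain comes from the de Broglie wavelength of fast electrons being vastly shorter than that of visible light. A minimal, non-relativistic sketch, with a typical accelerating voltage assumed for illustration:

```python
import math

# Non-relativistic de Broglie wavelength of electrons accelerated through
# a potential V: lambda = h / sqrt(2 * m_e * e * V). 100 kV is a typical
# accelerating voltage, assumed here for illustration.
h = 6.626e-34      # Planck constant, J*s
m_e = 9.109e-31    # electron mass, kg
e = 1.602e-19      # elementary charge, C
V = 100_000        # accelerating voltage, volts

wavelength_m = h / math.sqrt(2 * m_e * e * V)
print(f"electron wavelength ~ {wavelength_m * 1e12:.1f} pm")   # ~3.9 picometres
# Green light is roughly 500,000 pm (500 nm), i.e. over 100,000 times longer.
```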

While electron microscopes revealed an entirely new world of detail never before observed, they are generally not compatible with biological imaging. Samples need to be held in an airless vacuum in order to be viewed with an electron microscope. Also, techniques for the preparation of samples involve cutting the material to be observed into thin slices, treating it with heavy metals such as uranium or lead, or coating the sample with a variety of conductive metals. In any case, biological material viewed through an electron microscope is no longer alive.

There are many applications in biology and medicine where it would be desirable to have the resolution of an electron microscope without killing the sample. Although human and other animal cells are big enough to be observed with a light microscope, the functioning of the cells is regulated by the synthesis and transportation of proteins that often interact or bind together to perform specific functions. For example, our immunologic reactions are based on the ability of cells to produce proteins that target foreign objects. Also the death of a cell is regulated by proteins; the inability of cells to die in a controlled manner leads to cancer. However, with the typical resolution of a light microscope of about 200 nm it is not possible to tell if and how the proteins interact, how they are transported to specific parts of the cell and why they are needed there. Understanding these mechanisms is essential in medical research and the development of new treatments.

How do fluorescence microscopes work?
Early in the 20th century, the phenomenon of fluorescence was applied to microscopy. Fluorescence is a luminescence phenomenon. Usually we see objects when light is reflected from them - the colour of an object depends on what wavelengths it reflects. With fluorescence, a photon (a light 'particle') of a certain wavelength is absorbed by a molecule and then re-emitted at a longer wavelength.
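
The shift to a longer wavelength is simply energy bookkeeping: the emitted photon carries less energy than the absorbed one, the difference being lost inside the molecule. A small sketch with illustrative excitation and emission wavelengths:

```python
# Photon energy E = h*c / wavelength. In fluorescence the emitted photon
# has less energy, hence a longer wavelength (the Stokes shift).
h = 6.626e-34   # Planck constant, J*s
c = 3.0e8       # speed of light, m/s

def photon_energy_ev(wavelength_nm):
    """Photon energy in electron-volts for a given wavelength in nm."""
    return h * c / (wavelength_nm * 1e-9) / 1.602e-19

# Illustrative wavelengths for a green-emitting dye.
print(f"excitation: {photon_energy_ev(488):.2f} eV")   # ~2.54 eV
print(f"emission:   {photon_energy_ev(520):.2f} eV")   # ~2.39 eV, lower energy
```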

Fluorescence is a very commonly used technique in biological imaging. Biological materials usually scatter a lot of light, making it difficult to see beyond the surface of the cell. With fluorescence, the emitted light always has a longer wavelength than the excitation light, so the light scattered from the cell surface can be separated from the emitted fluorescent light using dichroic mirrors, which reflect the excitation light into the sample but let the fluorescence light through. This makes it possible to see structures inside the cell.

Some biological materials are naturally fluorescent, but there are also many fluorescent dyes and proteins available that can be used to highlight specific parts of a cell, for example the nucleus, or they can be attached to specific proteins in cells so that it is possible to follow their movement inside the cell.

Recently discovered photoswitchable fluorescent dyes and proteins have many applications in fluorescence imaging. These molecules can exist in two states: a bright, fluorescent state, and a dark, nonfluorescent state. The switching between these states is done by irradiating the molecules with two different wavelengths of light.

One application of photoswitchable molecules is protein tracking. If the fluorescent molecules are attached to a specific protein and only a small fraction of them is activated, it is easier to follow where the proteins move than if all the labelled proteins in the cell were emitting light. Also, the exact moment of activation can be controlled.

How can we produce a high-resolution microscope which won't kill our biological samples?
Since the resolution of a light microscope depends on the wavelength, an obvious way to improve resolution is to decrease the wavelength. However, as we move from the visible spectrum towards the ultraviolet (UV) spectrum, the light becomes toxic to living materials. Even the least harmful UVA radiation can break bonds in DNA, causing mutations and stopping the cell from functioning normally.

Resolution improvement in the z-direction (depth, essentially) has been achieved by the use of two opposing objective lenses. Because of the increased angle from which light is collected, resolution of about 100 nm is achievable. However, resolution is improved only in one direction, and this technique suffers from technical difficulties, such as keeping the objectives accurately aligned.

Photoswitchable molecules could make fluorescence imaging possible on the nanometer scale with living samples. If the molecules are switched on in a small spot on the sample, and a second, doughnut-shaped beam is used around it to switch off the molecules, the effective spot from which fluorescence is emitted becomes much smaller. In fact, resolution on the scale of tens of nanometers has been achieved by Stimulated Emission Depletion (STED) microscopy, which is based on similar principles, but so far this has not been shown to be generally compatible with live cell imaging. If photoswitchable proteins or dyes are used, high intensities are no longer needed.
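
The STED resolution gain can be summarised by a modified diffraction-limit formula: the stronger the depletion (doughnut) beam relative to the dye's saturation intensity, the smaller the effective spot. The numbers below are illustrative, not measured values.

```python
import math

# Approximate STED resolution: d = lambda / (2 * NA * sqrt(1 + I / I_sat)),
# where I is the depletion-beam intensity and I_sat the dye's saturation
# intensity. All numbers below are illustrative.
wavelength_nm = 520
NA = 1.4

def sted_resolution_nm(depletion_over_saturation):
    """Effective spot size for a given ratio of depletion to saturation intensity."""
    return wavelength_nm / (2 * NA * math.sqrt(1 + depletion_over_saturation))

for ratio in (0, 10, 100):
    print(ratio, f"{sted_resolution_nm(ratio):.0f} nm")
# ratio 0 recovers the ordinary diffraction limit (~186 nm);
# ratio 100 pushes the effective spot below ~20 nm.
```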

As an alternative to scanning a spot across the sample, a grating pattern can be projected onto the object to squeeze the fluorescence into thin lines. The grating pattern can then be scanned across the sample. Although several images are required to construct the final high-resolution image, this approach still makes data acquisition quicker than scanning a spot across the object.

Yet another approach for nanoscale imaging with photoswitchable dyes and proteins is to first switch off all the molecules in the sample, then adjust the activation intensity so that only a few molecules are switched on. Depending on the brightness of the molecules, their centroid positions can be calculated with an accuracy of a few tens of nanometers. Once imaged, the molecules can be switched off, and the process can be repeated by switching on different molecules. The final image is reconstructed by combining a stack of these images. The drawback of this approach is that, with only a few molecules per image, thousands of images are required for the final high-resolution image, making the technique presently too slow for imaging living samples.
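
The accuracy of that centroid calculation improves with the number of photons collected from each molecule; a commonly used approximation is that the uncertainty shrinks with the square root of the photon count. The spot width and photon counts below are illustrative.

```python
import math

# Single-molecule localisation precision, roughly: sigma ~ s / sqrt(N),
# where s is the width of the diffraction-limited spot and N the number
# of photons detected from the molecule. Numbers below are illustrative.
spot_width_nm = 200   # width of the diffraction-limited spot

def localisation_precision_nm(photons):
    """Approximate uncertainty in the fitted centroid position."""
    return spot_width_nm / math.sqrt(photons)

for photons in (100, 1_000, 10_000):
    print(photons, f"{localisation_precision_nm(photons):.0f} nm")
# 100 photons already gives ~20 nm; 10,000 photons approaches ~2 nm.
```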

Although most of these superresolution techniques are not yet commercially available, this could change within a matter of years.