New strategy to cut heart attack risk is effective in initial test

The first clinical trial of a new kind of drug to cut the risk of cardiovascular disease has been found safe and effective at dropping levels of "bad" low density lipoprotein (LDL) cholesterol by as much as 40 percent. High LDL levels increase the risk for heart attack and stroke.

The drug mimics the action of thyroid hormone, safely accelerating the hormone’s natural ability to rid the body of LDL. It is unrelated in structure or action to statins, the widely used class of cholesterol-lowering drugs, and may offer an alternative for patients who cannot tolerate statins, according to the research team. It might also complement statins to further decrease cholesterol levels, the researchers report in the Proceedings of the National Academy of Sciences (PNAS).

Someone suffers a heart attack about every 30 seconds in the U.S., yet the best drug trials using statins show that the drugs reduce the incidence of new heart attacks and other coronary events by only about 35 percent, highlighting the need for new therapies, the scientists say.

In the clinical trial, the new drug was shown to decrease cholesterol levels in two ways: It lowers LDL levels and promotes the removal of cholesterol through the liver.

Known as KB2115, the drug was developed by Karo Bio AB, a Swedish pharmaceutical company. Scientists there are co-authors of the scientific paper reporting the finding, along with researchers at the Karolinska Institute in Sweden and at the University of California, San Francisco (UCSF). All scientists have a proprietary interest in Karo Bio.

The results are published online in an expedited "early edition" of PNAS. The journal also is scheduled to publish an editorial on the research finding.

The Phase II trial involved 24 moderately overweight people with high LDL levels. It confirms earlier tests in animals. The animal studies also found that the drug stimulated the "good cholesterol" (HDL) pathway, which removes cholesterol from arteries and transports it into the liver, where it is converted into bile and eliminated from the body.

The animal studies also found that the drug countered both obesity and diabetes. The researchers hope to test the drug’s ability to safely treat people with these conditions too.

"In spite of today’s therapies for heart attack and stroke, there are more than a million heart attacks a year in the U.S.," said John Baxter, MD, professor of medicine in the UCSF Diabetes Center, and senior author on the paper. "We need other types of drugs to attack this problem. Using thyroid mimics is an entirely different approach, and I think one with great promise for treating high cholesterol and probably other conditions such as obesity and diabetes."

Baxter is former president of the Endocrine Society, a recipient of its highest honor and a member of the National Academy of Sciences.

Leaders of the study include Anders Berkenstam and Jens Kristensen at Karo Bio AB and Bo Angelin at the Karolinska Institute in Stockholm, along with UCSF’s Baxter.

The beneficial cholesterol-lowering effects of thyroid hormone largely depend on its docking with one form of the thyroid hormone receptor in the cell nucleus, known as the "beta" form. Until now, efforts to attack cholesterol using drugs that mimic thyroid hormone have been thwarted because the drugs stimulated not only the healthy effects of thyroid hormone made possible by the beta receptor, but also the harmful effects – such as increased heart rate – caused by docking with the second, or "alpha" form.

KB2115 binds selectively to the helpful beta receptor and is preferentially taken up by the liver. It is taken up only poorly into the heart, thereby minimizing dangerous over-stimulation. The trial results show that this strategy gains the benefits of excess thyroid hormone without the potentially severe drawbacks, the researchers say.

In the early 1990s at UCSF, Baxter and Thomas Scanlan (now at Oregon Health and Science University) began efforts to develop compounds that elicit the good, but not the unwanted, effects of thyroid hormone. Their work underlies the development of compounds like KB2115.

In the study, 24 people were divided into four groups. One group received a placebo, and each of the other groups received a different dose of KB2115. After two weeks, LDL levels were lowered by an average of 40 percent in the groups that took the highest doses.

Findings showed the drug was well tolerated with no detectable effects on the heart. Further clinical trials are planned.

Additional co-authors of the study are Karin Mellstrom, Bo Carlsson, Johan Malm, Stefan Rehnmark, Neeraj Garg and Carl Magnus Andersson at Karo Bio AB; Mats Rudling at the Karolinska Institute and Folke Sjoberg of the Berzelius Clinical Research Center, AB, Linkoping, Sweden.

UCSF is a leading university dedicated to defining health worldwide through advanced biomedical research, graduate level education in the life sciences and health professions, and excellence in patient care.

Older antibiotic gains new respect as potent treatment for tuberculosis

Rifapentine is already approved for use in humans

It has no current market, not even a prescription price. Its makers stopped commercial production years ago, because demand was so low. But an antibiotic long abandoned as a weak, low-dose treatment for tuberculosis (TB) may have found renewed purpose, this time as a potent, high-dose fighter against the most common and actively contagious form of the lung disease.

"Rifapentine is back," says Johns Hopkins infectious disease specialist Eric Nuermberger, M.D., whose studies in mice, to be published in the Public Library of Science journal PLoS Medicine online Dec. 17, have found it so promising as an initial treatment for active TB that clinical trials are scheduled to begin next year in at least eight countries.

The mouse studies showed that substituting higher and daily doses of rifapentine for another antibiotic, rifampin, cured mice two to three times faster than the much older, standard regimen of drugs that includes rifampin. Researchers say if tests in people confirm the findings in mice, the average time to clear the potentially fatal bacterial infection could be reduced from six months to three or less.

"People infected with TB are desperate for better therapies to combat the infection, therapies that can work more quickly and thus limit its chances to spread," says Nuermberger, who headed the team of researchers from Hopkins and elsewhere. "And having the ability to more effectively treat the most common form of the disease, so-called drug-susceptible TB, is a key step in holding off multidrug-resistant strains from developing, too."

"It’s a huge advantage to have a drug that’s already government approved and an equally great surprise to know that it was there all the while," says Nuermberger, an assistant professor at The Johns Hopkins University School of Medicine. He says that phase II clinical trials will begin as quickly as possible by mid-2008 to gauge the effectiveness of rifapentine as a key component to daily, anti-TB drug regimens.

The nonprofit Global Alliance for TB Drug Development (GATB) estimates that worldwide, more than 9 million people are infected with the highly contagious and active form of TB, caused by Mycobacterium tuberculosis; experts say another 424,000 are infected with the more dangerous, multidrug-resistant form of the disease.

Most of the antibiotics currently used to treat TB, Nuermberger notes, were developed in the 1950s or 1960s, and few new medications have appeared since.

Rifapentine, approved by the U.S. Food and Drug Administration in 1998 for treating widespread, drug-susceptible TB, was initially developed as a less cumbersome, once-weekly tablet. But the drug "was never really considered effective in low doses when compared to the gold standard, daily, high-dose regimens with rifampin," says Nuermberger. Rifampin (sold as Rifadin and Rimactane) was FDA-approved in 1968.

The potential for shortened treatment times follows an advance by the team’s top scientist, Richard Chaisson, M.D., director of Hopkins’ Center for TB Research, in October. Chaisson and his group showed that moxifloxacin (Avelox), another antibiotic, when substituted for ethambutol (Myambutol), may cut treatment times from six months to four.

Nuermberger and his team investigated the high-dose potential of rifapentine because it belongs to the same drug class as rifampin, which is part of the standard antibiotic cocktail of rifampin, pyrazinamide and isoniazid, a triple-drug combination sold as Rifater, sometimes given with moxifloxacin in place of isoniazid.

The standard regimen is given as part of Directly Observed Therapy, Short-Course, or DOTS, so called because the drugs are usually taken in direct view of a caregiver to ensure compliance. It requires several daily doses for six to nine months. DOTS cures 95 percent of those treated, but the lengthy treatment period has proven a problem for patients, who sometimes miss taking their drugs on time, reducing the therapy’s effectiveness.

In the new study, Nuermberger and his team tested seven different combinations of antibiotic drugs in hundreds of mice infected with active TB. Some were treated with the standard DOTS regimen, daily Rifater, while others took rifapentine in place of rifampin. Rifapentine, in daily amounts similar to what an adult human would take (600 milligrams), was also tested separately in combination with moxifloxacin- or isoniazid-based DOTS regimens.

Blood and tissue testing were done over a six-month period to see how quickly each drug combination rid the body of active TB. Treated mice were also tested three months later to check for any potential relapse.

After 10 weeks of drug therapy, mice taking rifapentine and moxifloxacin tested negative for active TB and remained so when retested three months later. Those treated with rifapentine and isoniazid also tested clear of the bacterium by 10 weeks, but were at least 10 percent more likely to relapse unless treatment persisted for another month. Meanwhile, the traditional DOTS regimen mostly took the full six months to work.

Results showed a distinct advantage in using rifapentine over rifampin: rifapentine remained in the blood at concentrations three times higher throughout treatment, indicating the drug's "longer-lasting action," says Nuermberger.

Two clinical trials are scheduled to study high-dose and daily combinations with rifapentine. The first is being spearheaded by Chaisson and will take place in Brazil. The second will be led by Hopkins researcher Susan Dorman, M.D., and scientists from the U.S. Centers for Disease Control and Prevention, and will study those infected in 40 cities in six countries around the world.

Nuermberger and colleagues conducted their research with funding from the U.S. National Institute of Allergy and Infectious Diseases, a member of the National Institutes of Health. Bayer donated supplies of moxifloxacin, and Sanofi Aventis donated supplies of rifapentine for the study, which started in early 2006 and took nearly a year to finish.

Besides Nuermberger and Chaisson, other researchers from Hopkins involved in this study, led by Ian Rosenthal, Ph.D., were Ming Zhang, M.S.; Kathy Williams; Sandeep Tyagi; William Bishai, M.D., Ph.D.; and Jacques Grosset, M.D. Additional assistance was provided by Charles Peloquin in Denver; and Andrew Vernon in Atlanta. Bishai has previously received research support from Bayer and currently has a grant pending with Schering-Plough, which markets moxifloxacin. He has also previously received grants from Hoechst Marion Rousel, now owned by Sanofi Aventis. Chaisson has also received study funding from Bayer and Sanofi Aventis for this and other studies.


Monkeys can perform mental addition

DURHAM, N.C.--Researchers at Duke University have demonstrated that monkeys have the ability to perform mental addition. In fact, monkeys performed about as well as college students given the same test.

The findings shed light on the shared evolutionary origins of arithmetic ability in humans and non-human animals, according to Assistant Professor Elizabeth Brannon, Ph.D. and Jessica Cantlon, Ph.D., of the Duke Center for Cognitive Neuroscience.

Current evidence has shown that both humans and animals have the ability to mentally represent and compare numbers. For instance, animals, infants and adults can discriminate between four objects and eight objects. However, until now it was unclear whether animals could perform mental arithmetic.

"We know that animals can recognize quantities, but there is less evidence for their ability to carry out explicit mathematical tasks, such as addition," said graduate student Jessica Cantlon. "Our study shows that they can."

Cantlon and Brannon set up an experiment in which macaque monkeys were placed in front of a computer touch screen displaying a variable number of dots. Those dots were then removed and a new screen appeared with a different number of dots. A third screen then appeared displaying two boxes: one containing the sum of the first two sets of dots and one containing a different number. The monkeys were rewarded for touching the box containing the correct sum of the sets.

The same test was presented to college students, who were asked to choose the correct sum without counting the individual dots. While the college students were correct 94 percent of the time and the monkeys 76 percent, the average response time for both monkeys and humans was about one second.

Interestingly, both the monkeys' and the college students' performance worsened when the two choice boxes were close in number.

"If the correct sum was 11 and the box with the incorrect number held 12 dots, both monkeys and the college students took longer to answer and had more errors. We call this the ratio effect," explained Cantlon. "What's remarkable is that both species suffered from the ratio effect at virtually the same rate."
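The ratio effect Cantlon describes can be illustrated with a toy simulation (this is a sketch, not the researchers' actual model): assume each quantity is represented internally with noise that grows in proportion to its magnitude, a standard "approximate number system" assumption, and that on each trial the subject picks the box whose noisy estimate best matches a noisy estimate of the true sum. The Weber fraction of 0.15 and the function names below are illustrative choices, not values from the study.

```python
import random

def noisy_estimate(n, rng, weber=0.15):
    """Approximate-number model: the noise in a magnitude
    estimate scales with the magnitude itself (Weber's law)."""
    return rng.gauss(n, weber * n)

def error_rate(correct, distractor, trials=20_000, seed=42):
    """Fraction of trials on which the distractor box is chosen.
    Each trial forms a noisy internal sum plus noisy estimates of
    both boxes, then picks the box closest to the internal sum."""
    rng = random.Random(seed)
    errors = 0
    for _ in range(trials):
        internal = noisy_estimate(correct, rng)
        box_ok = noisy_estimate(correct, rng)
        box_bad = noisy_estimate(distractor, rng)
        if abs(box_bad - internal) < abs(box_ok - internal):
            errors += 1
    return errors / trials

# A close ratio (11 vs. 12) yields far more errors than a
# distant one (11 vs. 22), mirroring the reported ratio effect.
close, far = error_rate(11, 12), error_rate(11, 22)
print(f"close-ratio errors: {close:.2f}, far-ratio errors: {far:.2f}")
```

Because the noise scales with magnitude, what matters in this model is the ratio of the two choices rather than their absolute difference, which is why both species' accuracy degrades as the distractor approaches the correct sum.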

That monkeys and humans share the ability to add suggests that basic arithmetic may be part of our shared evolutionary past.

Humans have added language and writing to their repertoire, which undoubtedly changes the way we represent numbers. "Much of adult humans' mathematical capacity lies in their ability to represent numerical concepts using symbolic language. A monkey can't tell the difference between 2000 and 2001 objects, for instance. However, our work has shown that both humans and monkeys can mentally manipulate representations of number to generate approximate sums of individual objects," says Brannon.

Cat Fleas' Journey Into The Vacuum Is A 'One-Way Trip'

COLUMBUS , Ohio – Homeowners dogged by household fleas need look no farther than the broom closet to solve their problem. Scientists have determined that vacuuming kills fleas in all stages of their lives, with an average of 96 percent success in adult fleas and 100 percent destruction of younger fleas.

In fact, the results were so surprisingly definitive that the lead scientist, an Ohio State University insect specialist, repeated the experiments several times to be sure the findings were correct. The studies were conducted on the cat flea, or Ctenocephalides felis, the most common type of flea plaguing companion animals and humans.

The lead researcher also examined vacuum bags for toxicity and exposed fleas to churning air in separate tests to further explore potential causes of flea death. He and a colleague believed that the damaging effects of the brushes, fans and powerful air currents in vacuum cleaners combine to kill the fleas. The study used a single model of an upright vacuum, but researchers don't think the vacuum design has much bearing on the results.

"No matter what vacuum a flea gets sucked into, it's probably a one-way trip," said Glen Needham, associate professor of entomology at Ohio State and a co-author of the study.

The results are published in a recent issue of the journal Entomologia Experimentalis et Applicata.

Needham theorized that the vacuum brushes wear away the cuticle, a waxy outer layer on fleas and most insects that allows the bugs to stay hydrated. Without the waxy protection, the adult fleas, larvae and pupae probably dry up and die, he said.

"We didn't do a post-mortem, so we don't know for sure. But it appears that the physical abuse they took caused them to perish," he said.

Conventional wisdom has suggested for years that homeowners should vacuum carpeted areas to physically remove fleas, and some recommendations went so far as to say the contents of the bags should be emptied, burned or frozen.

Lead study author W. Fred Hink, professor emeritus of entomology at Ohio State and a longtime researcher in nontoxic controls of fleas on dogs, sought to test the effects of vacuuming on all flea life stages and whether any extra disposal steps or additional chemical controls are necessary.

Fleas have multiple life stages. Adult fleas eat blood meals and mate while living on a host animal. Females lay eggs, which roll off of the animal and onto the floor, furniture or pet bedding. After hatching from the eggs two to 14 days later, the insects go through three larval stages, the last of which spins a cocoon to protect the pupa stage. New adults typically emerge within a week or two.

The study involved vacuuming groups of 100 adult fleas at a time, as well as groups of 50 pupae and 50 larvae, from a tightly woven kitchen-type carpet. Six tests of vacuuming the adult fleas yielded an average of 96 percent of fleas killed; three tests of vacuumed pupae and one test of vacuumed larvae (in their third stage of development) resulted in 100 percent killed.

In comparison, an average of only 5 percent of adult fleas died after being held in paper vacuum bags to test for toxicity, and an average of only 3 percent died when circulated in moving air.

"I did not include eggs in the vacuum study, but I'm sure they would not have survived," Hink said.

Flea survival in general is on the wane these days, Needham noted, because of numerous effective chemical treatments on the market that kill fleas on companion animals.

"For a while, fleas owned us. But now they're on the run," Needham said. "There are all kinds of ways to manage the problem, but how people feel about insecticides and how much money they want to spend factors into what they're going to do for flea control. Vacuuming is a great strategy because it involves no chemicals and physically removes the problem."

He also said the effectiveness of some insecticides is likely to decrease as fleas inevitably develop resistance to the currently available compounds. Because of that, Needham is among researchers seeking other nontoxic ways to kill fleas and other household pests, including studying the use of ultraviolet light.

"We're hoping to find that exposure to UV light could knock the flea population down even further. It appears to be a pretty powerful technology for this purpose," he said.

The vacuum study was partially funded by the Royal Appliance Manufacturing Co.

Recent studies confirm significant underuse of colorectal cancer screening

OAK BROOK, Ill. – December 17, 2007 – Two recently released studies confirm an alarming reality: a majority of Americans who should be getting screened for colorectal cancer are not. Men and women over the age of 50 should be screened for colorectal cancer, but according to a study in the journal Cancer, in an assessment of Medicare beneficiaries between 1998 and 2004, only 25.4 percent were screened, despite Medicare coverage for colorectal cancer screening. According to figures released by the Agency for Healthcare Research and Quality, only half of all Americans age 50 and over have had a screening colonoscopy.

"These numbers are very discouraging and, unfortunately they confirm previous studies that show not enough people are getting screened for colorectal cancer. This disease is preventable and treatable when caught in its early stages, and screening is a covered benefit for those eligible for Medicare," said Grace Elta, MD, president of the American Society for Gastrointestinal Endoscopy (ASGE). "We know that screening works. According to a recent study by leading cancer groups, including the American Cancer Society and the CDC, deaths from colorectal cancer dropped nearly 5 percent between 2002 and 2004. Prevention through screening and the removal of precancerous polyps were among the reasons credited for the decline. The ASGE encourages all people age 50 and older to talk to their doctor about getting screened for colorectal cancer."

Colorectal cancer is the third most commonly diagnosed cancer in men and women and the second leading cause of cancer-related deaths in the United States, killing nearly 56,000 people each year. Many of those deaths could be prevented with earlier detection. The five-year relative survival rate for people whose colorectal cancer is treated in an early stage is greater than 90 percent. Unfortunately, only 39 percent of colorectal cancers are found at that early stage. Once the cancer has spread to nearby organs or lymph nodes, the five-year relative survival rate decreases dramatically.

ASGE screening guidelines recommend that asymptomatic men and women at average risk for developing colorectal cancer begin screening at age 50. People with risk factors, such as a family history of colorectal cancer, should begin at an earlier age. Patients are advised to discuss their risk factors with their physician to determine when to begin routine colorectal cancer screening and how often they should be screened. Colonoscopy, a procedure that examines the entire colon, plays a very important role in colorectal cancer prevention because it is the only screening method that is both diagnostic and therapeutic: it not only views the entire colon but also allows removal of polyps before they turn into cancer.

The Cancer study, published online December 10, looked at 153,469 cancer-free Medicare beneficiaries beginning in 1998, the first year Medicare began coverage for colorectal cancer screening. The beneficiaries included 17,940 patients with one or more risk factors for cancer and 135,529 "average risk" patients. Between 1998 and 2004, only 25.4 percent of patients were screened for colorectal cancer, down from 29.2 percent between 1991 and 1997, before Medicare coverage of colorectal cancer screening began. Researchers identified claims for various colorectal cancer screening methods, including fecal occult blood test (FOBT), flexible sigmoidoscopy, colonoscopy, and barium enema.

Recently released figures from The Agency for Healthcare Research and Quality show that in 2005 only half of all Americans age 50 and over have had a screening colonoscopy. Nearly 67 percent of Hispanics age 50 and older reported never having had a colonoscopy screening, compared to 47.1 percent of Caucasians and 55.8 percent of African Americans. Age was found to be an issue in the study. Among those aged 50 to 64, 57.5 percent reported never having had a screening colonoscopy, compared to 39.4 percent of those aged 65 and older. Among those 65 and older, 41.6 percent of women versus 36.4 percent of men reported never having had a screening colonoscopy.

The ASGE notes that some reasons for low colorectal cancer screening rates include:

* Lack of public awareness about colorectal cancer and the benefits of regular screening

* Inconsistent recommendations for screening by medical care providers

* Uncertainty among healthcare providers and consumers about insurance benefits

* Concern about painful or embarrassing screening tests

* Hesitance to discuss "the disease down there"

The ASGE encourages all people aged 50 or older, and those with risk factors for colorectal cancer, to talk to their physician about getting screened for colorectal cancer.

For more information about colorectal cancer screening or to find a qualified physician, visit ASGE's colorectal cancer awareness Web site.

Z-shaped incision enhances minimally invasive surgery

A novel surgical technique allowing doctors to operate on patients by making a Z-shaped incision inside the stomach could potentially replace certain types of conventional surgery in humans, according to Penn State medical researchers who have successfully demonstrated the procedure in pigs.

If the technique ultimately proves successful in human trials, researchers say it could circumvent the long painful recovery times and medical complications associated with surgery.

The new approach, known as NOTES (natural orifice transluminal endoscopic surgery), involves using a natural opening in the body, in this case the mouth, to advance a flexible video endoscope into the stomach.

Using this tube, and the instruments contained within it, doctors currently make a small straight incision in the stomach to gain access to the abdominal cavity and the organs requiring attention.

"Theoretically, by eliminating body wall wounds with their associated complications and allowing some procedures to be done without general anesthesia, this method could leave a truly minimal surgical footprint, and may even allow certain procedures to be done outside a traditional operating room," said Matthew Moyer, M.D., a gastroenterology fellow at Penn State Milton S. Hershey Medical Center.

But he cautioned that NOTES is still in the developmental phases and even a simple procedure may be fraught with potential complications at this point.

"One of those barriers is the closure of the access site," said Moyer. "In other words, the opening made in the stomach must be reliably and safely sealed off at the end of the procedure."

Moyer and his Hershey Medical Center colleagues Eric M. Pauli, M.D., resident surgeon; Randy S. Haluck, M.D., director of minimally invasive surgery and assistant professor; and Abraham Mathew, M.D., director of endoscopy and assistant professor, all at Penn State College of Medicine, believe their technique elegantly solves the problem.

The key to their approach lies in the way the flexible probe exits the stomach. Instead of cutting straight through the stomach wall, the researchers guide the endoscope so that it first tunnels under the mucous membrane of the stomach wall for a while before exiting near the organ to be operated on. The endoscope essentially charts a Z-shaped path.

This new technique, known as STAT (self-approximating transluminal access technique), has two main advantages according to Moyer. There is significantly less bleeding involved and the Z-shaped tract effectively seals itself due to pressure created on the abdominal wall by normal breathing.

The team published its findings in a recent issue of Gastrointestinal Endoscopy.

The technique has other advantages as well. "Most people operate straight through the gastric wall and then use a bunch of complex maneuvers to get the endoscope where it needs to be," said Pauli. "And it can get difficult to operate because the endoscope is upside down and in a reverse position."

By tunneling through instead, he points out, doctors can maintain a directional sense and guide the endoscope more accurately.

"There are landmarks in the mucous membrane such as specific blood vessels and groupings of blood vessels. We can also see through the wall of the stomach in some areas to guide the endoscope to the organ we want to operate on," Pauli said.

The researchers have so far operated on 17 animals and only one of them has developed a minor complication.

Once they have perfected their tunneling technique, Moyer and colleagues will try to figure out how exactly to remove surgical specimens from an operation.

"The gall bladder, small tumors, even the ovaries are potentially removable through this technique," said Mathew. "We could in theory make the tunnel as big as we want, and take something out into the stomach and cut it into small pieces before extracting it."

If successful in humans, the procedure could translate into significantly shorter recovery times, little or no pain, less anesthesia and no surgical scars. But the researchers acknowledge it may be a while before their surgical technique reaches human trials.

Mathew said he and his colleagues are confident that their technique lets them get the endoscope out of the stomach and back in safely with currently available instruments. "We have to perfect the technique so we can fully understand the risks," he added.

The Penn State researcher envisions minimally invasive surgery being employed to help patients who are critically ill and may not be able to tolerate a traditional surgery or leave the ICU. In such cases, doctors could access the internal organs and perform procedures such as a biopsy to make a better diagnosis or even perform intestinal bypass surgery.

According to Pauli, these findings could accelerate the pace of research in minimally invasive surgery and ease the way for other breakthroughs.

"We are looking at some fundamental questions: can we get the endoscope in safely, can we get it out safely, and can we get it to the organ we want to operate on. Those are the questions nobody has really answered," he said.

Bacteria that cause urinary tract infections invade bladder cells

St. Louis, Dec. 17, 2007 — Scientists at Washington University School of Medicine in St. Louis have found definitive proof that some of the bacteria that plague women with urinary tract infections (UTIs) are entrenched inside human bladder cells.

The finding confirms a controversial revision of scientists' model of how bacteria cause UTIs. Previously, most researchers assumed that the bacteria responsible for infections get into the bladder but do not invade the individual cells that line the interior of the bladder.

"Our animal model of UTIs has allowed us to make a number of predictions about human UTIs, but at the end of the day, we felt it was critical to show this in humans, and now we've done just that," says senior author Scott J. Hultgren, Ph.D., the Helen L. Stoever Professor of Molecular Microbiology at the School of Medicine.

The results appear in the December issue of the Public Library of Science journal PLoS Medicine.

Fully understanding what bacteria do in the bladder is critical to developing better diagnoses and treatments for UTIs, Hultgren says. The bacterium Escherichia coli is thought to be responsible for 80 percent to 90 percent of UTIs, which occur mainly in women and are one of the most common bacterial infections in the United States. Scientists estimate that more than half of all women will experience a UTI in their lifetimes, and recurrent UTIs will affect 20 percent to 40 percent of those patients.

"Recurrence is one of the biggest problems of UTIs," says Hultgren. "Even though we have treatments that eliminate the acute symptoms, the fact that the disease keeps recurring in so many women tells me that we need to develop better treatments."

Prior to the work of Hultgren and his colleagues, most microbiologists and urologists believed for a variety of reasons that E. coli wasn't getting into bladder cells.

"For example, there is a barrier in the bladder that prevents toxins and other things in your urine from leaking back into the body," notes David Rosen, an M.D./Ph.D. student at the School of Medicine and lead author of the paper. "And it was thought that bacteria could not penetrate that barrier."

A biopsy could reveal the presence of bacteria in bladder cells, but taking a tissue sample in an infected bladder incurs an unacceptable risk of allowing bacteria to spread into the bloodstream, a dangerous condition called sepsis.

Scientists also thought that if the bacteria were getting into bladder cells, they would replicate and spread rapidly, sometimes leading to sepsis. But after Hultgren first discovered that bacteria are able to invade bladder cells in 1998, he later found evidence in his animal model that bacteria could establish residence inside those cells. He showed that this process involved several behavioral changes that allow the bacteria to form cooperative communities known as biofilms. By working together, bacteria in biofilms build themselves into structures that are more firmly anchored in infected cells and are more resistant to immune system assaults and antibiotic treatments.

To prove that the model correlates with human infections, Rosen led an analysis of human urine samples sent from a clinic at the University of Washington in Seattle. The 100 patients who gave samples were either suffering from an active, symptomatic infection or had previously suffered infections. Researchers analyzing the specimens were not told which group of patients individual specimens had come from.

Using light and electron microscopy and immunofluorescence, scientists found signs of bladder cell infection in a significant portion of the samples from patients with active UTIs. These included cells enlarged by bacterial infection and shed from the lining of the bladder.

In addition, Hultgren's experiments had previously suggested that some bacteria progress to a filament-like shape when exiting the biofilm. Rosen was able to identify bacteria with this filamentous morphology in 41 percent of samples from patients with symptomatic UTIs.

Neither indicator was detected in urine from women who did not have active infections. This was anticipated: Hultgren's animal model work suggests that when women are between episodes of symptomatic infection, intracellular E. coli may be in dormant phases where there would be little cause for bacteria or the cells they infect to be shed into the urine.

Further research is needed to determine if the infection indicators Rosen detected in urine samples from symptomatic women are signs of increased risk of recurrent infection. But looking for those signs using immunofluorescent staining and a variety of microscopy methods is unlikely to be practical on a widespread clinical basis. So to follow up, Hultgren plans a search for biochemical indicators linked to higher risk of recurrent UTIs and of infection spreading to a patient's kidneys. His lab also continues to be involved in many different efforts to develop new vaccines and treatments.

"What we're learning about how bacteria behave in the bladder may also have application to other chronic, treatment-resistant infections such as sinus infections and ear infections," he says. "We're increasingly starting to realize that biofilm formation is generally an important strategy bacteria use to evade host responses and antibiotic therapies. Attacking biofilms is going to be a really important approach as we enter a new era of fighting infectious diseases."

New property found in ancient mineral lodestone

Latest nanofabrication methods yield new clues about well-studied mineral

Using the latest methods for nanofabrication, a team led by Rice University physicists has discovered a surprising new electronic property in one of the earliest-known and most-studied magnetic minerals on Earth -- lodestone, also known as magnetite.

By changing the voltage in their experiment, researchers were able to get magnetite at temperatures colder than minus 250 degrees Fahrenheit to revert from an insulator to a conductor. The research was published online Dec. 16 and will be included in February's print edition of Nature Materials.

"It's fascinating that we can still find surprises in a material like magnetite that has been studied for thousands of years," said lead researcher Doug Natelson, associate professor of physics and astronomy. "This kind of finding is really a testament to what's possible now that we can fabricate electronic devices to study materials at the nanoscale."

The magnetic properties of lodestone were documented in China more than 2,000 years ago, and Chinese sailors were navigating with lodestone compasses as early as 900 years ago.

Magnetite is an iron oxide mineral. Its atoms are arranged in a crystal structure with four oxygen atoms for every three of iron, and their arrangement gives the mineral its characteristic magnetic and electrical properties. Physicists have known for more than 60 years that the electronic properties of magnetite change radically and quickly at cold temperatures. As the material cools below a critical temperature near minus 250 degrees Fahrenheit, it changes from an electrical conductor to an electrical insulator -- an electrical transformation that's akin to the physical change water undergoes when it freezes into ice.

"When we applied a sufficiently large voltage across our nanostructures we found that we could kick the cooled magnetite out of its insulating phase and cause it to become a conductor again," Natelson said. "The transition is very sharp, and when the voltage is then lowered back below a lower critical value the magnetite snaps back into its insulating phase. We don't know exactly why this switching occurs, but we think further experiments will shed light on this and the nature of the insulating state."

With engineers looking to exploit novel electronic materials for next-generation computers and hard drives, phase transitions between insulating and conducting states have become an increasingly hot research topic in physics and materials science in recent years.

The debate about the causes and specifics of magnetite's temperature-driven phase change has simmered much longer. Natelson said physicists have long sparred about the possible underlying physical and electronic causes of the phase transition. The discovery of this new voltage-driven switching provides new clues, but more research is still needed, he said.

"The effect we discovered probably wasn't noticed in the past because nanotechnology is only now making it possible to prepare the electrodes, nanoparticles, and thin films required for study with the precision necessary to document the effect," he said.

Natelson's team experimented on two kinds of magnetite. One, called nanorust, consists of tiny particles of magnetite developed in the laboratory of Rice chemist Vicki Colvin, director of Rice's Center for Biological and Environmental Nanotechnology. The second kind, thin films of single-crystal magnetite, was produced by Igor Shvets' research group at Trinity College Dublin. These high-quality materials with precise compositions were essential to the study, said Natelson. The research was funded by the Department of Energy.

Color sudoku puzzle demonstrates new vision for computing

Researchers at the University of Warwick’s Department of Computer Science have developed a colour-based Sudoku puzzle that not only helps players solve traditional Sudoku puzzles but also demonstrates the potential benefits of a radical new vision for computing.

The colour Sudoku adds another dimension to solving the puzzle by assigning a colour to each digit. Squares containing a digit are coloured according to that digit's colour. Empty squares are coloured according to which digits are possible for that square, taking account of all current entries in the square's row, column and region. An empty square's colour is the combination of the colours assigned to each possible digit. This gives players major clues, as darker-coloured empty squares imply fewer possible digits.

More usefully, an empty square that has the same colour as a completed square must contain the same digit. If a black square is encountered, a mistake has been made. Players can also gain additional clues by changing the colour assigned to each digit and watching the unfolding changes in the pattern of colours.
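The scheme described above can be sketched in a few lines of code. This is a hypothetical reconstruction, not the Warwick implementation: it assumes an additive colour blend over the candidate digits for each square, so that fewer candidates give a darker square and an empty candidate set (a contradiction) comes out black.

```python
# Sketch of the colour-Sudoku idea: each digit 1-9 gets an RGB colour,
# and an empty square's colour is the blend of the colours of all
# digits still possible there. (Function names and the palette are
# illustrative assumptions, not the original implementation.)

def candidates(grid, r, c):
    """Digits still possible for square (r, c) of a 9x9 grid (0 = empty),
    under the usual row, column and 3x3 region constraints."""
    if grid[r][c] != 0:
        return {grid[r][c]}
    used = set(grid[r]) | {grid[i][c] for i in range(9)}
    br, bc = 3 * (r // 3), 3 * (c // 3)
    used |= {grid[i][j] for i in range(br, br + 3) for j in range(bc, bc + 3)}
    return set(range(1, 10)) - used

def square_colour(grid, r, c, palette):
    """Additive blend of the palette colours of the candidate digits.
    No candidates -> (0, 0, 0): a black square signals a mistake."""
    blend = [0, 0, 0]
    for d in candidates(grid, r, c):
        for i in range(3):  # sum channels, capped at 255
            blend[i] = min(255, blend[i] + palette[d][i])
    return tuple(blend)
```

A square whose row, column and region between them already use all nine digits has no candidates, so `square_colour` returns black, matching the mistake signal described above.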

Sudoku players can test this for themselves at: . (NB page requires Flash 9)

However the colour Sudoku is more than just a game to the University of Warwick Computer Scientists. For doctoral researcher Antony Harfield it is a way of exploring how logic and perception interact using a radical approach to computing called Empirical Modelling. The method can be applied to other creative problems and he is exploring how this experimental modelling technique can be used in educational technology and learning.

The interplay between logic and perception, as it relates to interactions between computers and humans is viewed as key to the building of better software. It is of particular relevance for artificial intelligence, computer graphics, and educational technology. The interaction between the shifting colour squares and the logical deductions of the Sudoku puzzle solver is a good illustration of the unusual quality of this "Empirical Modelling" approach.

Previously the researchers used their principles to analyse a railway accident that occurred in the Clayton Tunnel near Brighton in 1861, shortly after the telegraph was introduced. Reports at the time sought to blame various railway personnel, but by applying Empirical Modelling the researchers have created an environment in which experimenters can replay the roles of the drivers, signalmen and other personnel involved. This has shown that there were systemic problems arising from the introduction of the new technology.

Dr Steve Russ of the Empirical Modelling group at the University of Warwick said:

"Traditional computer programs are best-suited for tasks that are so well-understood they can, without much loss, be expressed in a closed, mechanical form in which all interactions or changes are ‘pre-planned’. Even in something so simple as a Sudoku puzzle humans use a mixture of perception, expectation, experience and logic that is just incompatible with the way a computer program would typically solve the puzzle. For safety-critical systems (such as railway management) it is literally a matter of life and death that we learn to use computers in ways that integrate smoothly with human perception, communication and action. This is our goal with Empirical Modelling."

Giant rat found in 'lost world'

A giant rodent five times the size of a common rat has been discovered in the mountainous jungles of New Guinea.

The 1.4kg Mallomys giant rat is one of two mammal species thought to be new to science that were documented on an expedition to an area described as a "lost world".

Conservationists also found a pygmy possum - one of the world's smallest marsupials - on the trip to the remote north of Papua province, Indonesia.

Both are currently being studied to establish whether they are new species.


Scientists on the trip, organised by Conservation International (CI), also recorded the mating displays of several rare birds for the first time.

"It's comforting to know that there is a place on Earth so isolated that it remains the absolute realm of wild nature," said Bruce Beehler, who led the expedition.

Old friends

The trip was the second time that CI had visited the Foja Mountains, part of the Mamberamo Basin, the largest pristine tropical forest in the Asia Pacific region.


In 2005, the area was dubbed a "lost world" after scientists discovered dozens of new plants and animals in the dense jungle.

During the most recent trip, in June of this year, scientists accompanied by a film crew managed to capture courtship displays of the golden-fronted bowerbird (Amblyornis flavifrons) and of the black sicklebill bird of paradise (Epimachus fastuosus).

They also recorded the wattled smoky honeyeater (Melipotes carolae), documented for the first time on the 2005 expedition and known only from the Foja Mountains.


The bird, with a bright orange patch on its face, was then the first new bird species to be sighted on the island of New Guinea in more than 60 years.

The team also captured an old friend on film - the "lost" Berlepsch's six-wired bird of paradise (Parotia berlepschi).

The iridescent gold-breasted bird was "rediscovered" in 2005 by CI experts after 20 years without a confirmed sighting by a western scientist.

However, the most surprising finds of the trip were the two new species of mammal - the Cercartetus pygmy possum and Mallomys giant rat.

"The giant rat is about five times the size of a typical city rat," said Kristofer Helgen, a scientist with the Smithsonian Institution in Washington, D.C.

The area, known as the "lost world" because of the number of new species discovered there, is also home to colourful birds, such as the Ornate Fruit-Dove.

"With no fear of humans, it apparently came into the camp several times during the trip."

Galaxy fires powerful particle beam at neighbour

Stephen Battersby, news service, 18:15 17 December 2007

A new weapon of intergalactic war has been found. A jet of hot gas and high-energy particles is shooting out from the core of a galaxy called 3C321 and hitting a neighbour, a new study reveals.

Galaxies have been known to ram into each other, but this is the first known example of attack by particle beam.

A team of astronomers noticed that 3C321 and its neighbour, which lie about 1.4 billion light years from Earth, made an unusual pair when they looked at data from the Chandra X-ray Observatory.

Both of these galaxies have active cores, where a giant black hole is feeding on gas and generating all sorts of radiation, including the X-rays picked up by Chandra. It is fairly rare for individual galaxies to be active in this way, so a pair of them was worth investigating further.


A particle jet from a black hole at the centre of a galaxy called 3C321 (lower left) strikes the edge of a companion galaxy (upper right) in this composite image; an illustration (right) depicts the scenario (Image: X-ray: NASA/CXC/CfA/D Evans et al; Optical/UV: NASA/STScI; Radio: NSF/VLA/CfA/D Evans et al/STFC/JBO/MERLIN; illustration: NASA/CXC/M Weiss)

Two radio observatories – the Very Large Array in New Mexico, US, and the Multi-Element Radio Linked Interferometer Network in Britain – produced another surprise. Their combined radio image reveals that a jet of matter squirts out of 3C321, then suddenly turns to one side and flares out.

"We expect a jet to be a pencil beam of emission, but we saw it flaring, and wondered what was going on," says lead author Daniel Evans of Harvard University in Cambridge, Massachusetts, US.

They soon got their answer by looking at old Hubble Space Telescope images. "We saw that it was slamming into the lower half of the other galaxy," Evans told New Scientist.

Radiation blast

Any Earth-like planets that may lie in the path of this beam might be sterilised. If such a jet were aimed at Earth, it would blast the upper atmosphere with gamma radiation. That would destroy the ozone layer in months to years, says Evans, leaving earthlings exposed to carcinogenic ultraviolet rays from the Sun.

Evans hopes that this unique object will tell astrophysicists something about how particle jets interact with other matter. Many jets travel at a substantial fraction of the speed of light, so they carry an immense amount of power.

They need not be entirely destructive, however: the jet fired by 3C321 might actually be creating new stars in its neighbour by stirring up gas clouds there.

It is even possible that a jet might have sparked the activity in the companion galaxy's core a few million years ago, although there is another, more likely explanation of why the two galaxies both have active cores. They are approaching one another closely – today they lie just 20,000 light years or so apart. The resulting gravitational disruption is probably channelling gas towards both of their central black holes.

Journal reference: Astrophysical Journal (forthcoming)

The mother of all civilisations

The ruins were so magnificent and sprawling that some people believed aliens from a faraway galaxy had built the huge pyramids that stood in the desert beyond the Andes.

Some historians believed that the complex society, which existed at that time, was born out of fear and war. They looked for the telltale signs of violence that they believed led to the creation of this civilisation. But they could not find even a hint of any warfare. It was baffling. Even years after Ruth Shady Solis found the ancient city of pyramids at Caral in Peru, it continues to surprise historians around the world. It took Ruth Shady many years and many rounds of carbon dating to prove that the earliest known civilisation in the Americas, dating to 2627 BC, was much older than the Indus Valley towns of Harappa and the pyramids of Egypt.

Solis, an archaeologist at the National University of San Marcos, Lima, was looking for the fabled missing link of archaeology— a ‘mother city’—when she stumbled upon the ancient city of Caral in the Supe Valley of Peru a few years ago. Her findings were stunning.

Her findings showed that a full-fledged urban civilisation existed at the site around 2700 BC. The archaeologist and her team found a huge compound at Caral: 65 hectares in the central zone, encompassing six large pyramids, many smaller pyramids, two circular plazas, temples, amphitheatres and other architectural features, including residential districts spread across the desert, 23 km from the coast.

The discovery of Caral has pushed back the history of the Americas: Caral is more than 1,000 years older than Machu Picchu of the Incas. Its people built huge structures hundreds of years before the famous drainage system of Harappa and the pyramids of Egypt were even designed.

But, it was not easy for Ruth Shady to prove this. It was only in 2001 that the journal Science reported the Peruvian archaeologist’s discovery. And, despite the hard evidence backing her, she is still trying to convince people that Caral was indeed the oldest urban civilisation in the world.

"There were many problems, many of them in my own country," says Ruth Shady, on a visit to India to discuss her discovery with other historians. "The discovery of Caral challenged the accepted beliefs. Some historians were not ready to believe that an urban civilisation existed in Peru even before the pyramids were built in Egypt," she says.

Basically, there were two problems. First, for decades archaeologists have been looking for a ‘mother city’ to find an answer to the question: why did humans become civilised?

The historians had been searching for this answer in Egypt, Mesopotamia (Iraq), India and China. They didn’t expect to find the first signs of city life in a Peruvian desert. Secondly, most historians believed that only the fear of war could motivate people to form complex societies. And since Caral showed no trace of warfare (no battlements, no weapons, no mutilated bodies), they found it hard to accept it as the mother city.

That’s when Ruth Shady stepped in with her discovery. "This place is somewhere between the seat of the gods and the home of man," she says, adding that Caral was a gentle society, built on trade and pleasure. "This great civilisation was based on trade in cotton. Caral made the cotton for the nets, which were sold to the fishermen living near the coast. Caral became a booming trading centre and the trade spread," she says.

Caral was born in trade and not bloodshed. Warfare came much later. This is what this mother city shows: great civilisations are born in peace. Ruth Shady continues to battle for this great truth.

Well

Can a ‘Fertility Diet’ Get You Pregnant?

By TARA PARKER-POPE

Can changing your diet improve your chances of getting pregnant? "The Fertility Diet," a new book by some prominent Harvard Medical School researchers, suggests that it can — that among other things, eating ice cream and cutting back on meat may help raise your fertility.

The problem is that much of the research behind the book doesn’t live up to its hype. "The Fertility Diet" isn’t the first to promote nutritional changes as a way to increase the odds of pregnancy; an online search will turn up any number of titles like "The Infertility Diet," "Fertility Foods" and so on.

Essentially, their recommendations are alike: a heart-healthy diet with more fruit and vegetables, less meat and bad carbs, more healthy fats and few or no trans fats.


While the messages are similar, a big difference is that the newest book comes from Harvard. As a result, it’s had an enviable amount of buzz. Newsweek even devoted its Dec. 10 cover to an excerpt.

The notion that something as simple as better eating might improve fertility is certain to raise the hopes of tens of thousands of couples. But unfortunately, the findings in this book don’t apply to a vast majority of people with infertility problems. Instead, they are based on women with ovulatory infertility, a condition caused by irregular ovulation that affects fewer than a third of infertile women.

And while it’s never a bad idea to improve your nutrition, there is no definitive evidence that many of the diet changes outlined in the book will increase a woman’s odds of getting pregnant.

"It’s marketing," said Dr. Jamie A. Grifo, a respected fertility researcher who is director of the New York University Fertility Center. "There’s a limit to what conclusions you can draw from the way they conducted the study."

The findings on fertility in the Nurses’ Health Study come from more than 18,000 women who were trying to get pregnant over an eight-year period. But while that sounds like a lot, only about 400 of the women were given diagnoses of infertility related to irregular ovulation. So many of the associations between nutrition and fertility outlined in the book are based on a relatively small number of women.

It’s important to note that while the study showed strong associations between certain habits and fertility, it did not prove that the women’s diet was what made the difference. Furthermore, it was the women themselves who reported their eating habits, and only every few years. Critics note that most people can’t remember what they ate last night, let alone over the course of a few years.

Two recommendations in "The Fertility Diet" are backed by relatively solid science. For a woman with irregular ovulation, attaining a healthy weight and taking a multivitamin with folic acid can improve her odds of getting pregnant. Being overweight or underweight has been shown to suppress ovulation, because both conditions throw off a woman’s natural hormone levels.

In a major study of vitamins and folic acid to reduce the incidence of neural-tube defects in babies, the researchers noted a trend they hadn’t expected. The women taking the vitamins were not only more likely to conceive, but also more likely to have twins.

The heart-healthy diet recommendations behind "The Fertility Diet" might influence ovulation because they affect insulin levels. Insulin levels, in turn, can affect sex-hormone-binding globulin, which can affect the amount of free androgen in a woman’s body. Too much free androgen can suppress ovulation.

Importantly, the nurses’ study found associations between fertility and certain eating behaviors, but it didn’t test whether adopting new eating habits would make a difference. Dr. Walter C. Willett, the Harvard nutrition researcher who is a co-author of the book, acknowledges the limits of the data, but adds that he believes it is "highly likely" that the diet will help some women, given what is known about dietary influence on other body functions like blood pressure.

"The underlying principles are compatible with good health and prevention of some of the complications of pregnancy," he said. "This is a good eating strategy anyway. It’s going to be clearly a safer, more modest approach to fertility than just jumping right into heavy medication."

The weakest recommendation in "The Fertility Diet" is the notion that ice cream and whole-fat dairy products will increase fertility. Even the study authors note in the book that it would be an "overstatement" to say there are even "a handful" of studies on the subject.

To their credit, the book’s authors acknowledge early on that the research has limitations and that their diet doesn’t guarantee a pregnancy. Dr. Jorge E. Chavarro, the lead author, said it had been a challenge to balance the limitations of scientific research with the commercial demands of book publishing. Even the simple title of the book, he added, belies the complexity of the findings.

"I would describe it as an apparently fertility-enhancing dietary pattern, but that doesn’t go with the flow of your reading," he said. "This is not a cure for infertility. We have been very careful in explaining what we think these dietary changes can do and what they cannot do."

Findings

Why Nobody Likes a Smart Machine

By JOHN TIERNEY

At a Best Buy store in Midtown Manhattan, Donald Norman was previewing a scene about to be re-enacted in living rooms around the world.

He was playing with one of this year’s hot Christmas gifts, a digital photo frame from Kodak. It had a wondrous list of features — it could display your pictures, send them to a printer, put on a slide show, play your music — and there was probably no consumer on earth better prepared to put it through its paces.

Dr. Norman, a cognitive scientist who is a professor at Northwestern, has been the maestro of gizmos since publishing "The Design of Everyday Things," his 1988 critique of VCRs no one could program, doors that couldn’t be opened without instructions and other technologies that seemed designed to drive humans crazy.

Besides writing scholarly analyses of gadgets, Dr. Norman has also been testing and building them for companies like Apple and Hewlett-Packard. One of his consulting gigs involved an early version of this very technology on the shelf at Best Buy: a digital photo frame developed for a startup company that was later acquired by Kodak.


"This is not the frame I designed," Dr. Norman muttered as he tried to navigate the menu on the screen. "It’s bizarre. You have to look at the front while pushing buttons on the back that you can’t see, but there’s a long row of buttons that all feel the same. Are you expected to memorize them?"

He finally managed to switch the photo in the frame to vertical from horizontal. Then he spent five minutes trying to switch it back.

"I give up," he said with a shrug. "In any design, once you learn how to do something once, you should be able to do it again. This is really horrible."

So the bad news is that despite two decades of lectures from Dr. Norman on the virtue of "user-centered" design and the danger of a disease called "featuritis," people will still be cursing at their gifts this Christmas.

And the worse news is that the gadgets of Christmas future will be even harder to command, because we and our machines are about to go through a rocky transition as the machines get smarter and take over more tasks. As Dr. Norman says in his new book, "The Design of Future Things," what we’ll have here is a failure to communicate.

"It would be fine," he told me, "if we had intelligent devices that would work well without any human intervention. My clothes dryer is a good example: it figures out when the clothes are dry and stops. But we are moving toward intelligent machines that still require human supervision and correction, and that is where the danger lies — machines that fight with us over how to do things."

Can this relationship be saved? Until recently, Dr. Norman believed in the favorite tool of couples therapists: better dialogue. But he has concluded that dialogue isn’t the answer, because we’re too different from the machines.

You can’t explain to your car’s navigation system why you dislike its short, efficient route because the scenery is ugly. Your refrigerator may soon know exactly what food it contains, what you’ve already eaten today and what your calorie limit is, but it won’t be capable of an intelligent dialogue about your need for that piece of cheesecake.

To get along with machines, Dr. Norman suggests we build them using a lesson from Delft, a town in the Netherlands where cyclists whiz through crowds of pedestrians in the town square. If the pedestrians try to avoid an oncoming cyclist, they’re liable to surprise him and collide, but the cyclist can steer around them just fine if they ignore him and keep walking along at the same pace. "Behaving predictably, that’s the key," Dr. Norman said. "If our smart devices were understandable and predictable, we wouldn’t dislike them so much." Instead of trying to anticipate our actions, or debating the best plan, machines should let us know clearly what they’re doing.

Instead of beeping and buzzing mysteriously, or flashing arrays of red and white lights, machines should be more like Dr. Norman’s ideal of clear communication: a tea kettle that burbles as the water heats and lets out a steam whistle when it’s finished. He suggests using natural sounds and vibrations that don’t require explanatory labels or a manual no one will ever read.

But no matter how clearly the machines send their signals, Dr. Norman expects that we’ll have a hard time adjusting to them. He wasn’t surprised when I took him on a tour of the new headquarters of The New York Times and he kept hearing complaints from people about the smart elevators and window shades, or the automatic water faucets that refuse to dispense water. (For Dr. Norman’s analysis of our office building of the future, go to tierneylab.)

As he watched our window shades mysteriously lowering themselves, having detected some change in cloud cover that eluded us, Dr. Norman recalled the fight that he and his colleagues at Northwestern waged against the computerized shades that kept letting sunlight glare on their computer screens.

"It took us a year and a half to get the administration to let us control the shades in our own offices," he said. "Badly designed so-called intelligent technology makes us feel out of control, helpless. No wonder we hate it." (For all our complaining, at The Times we have nicer shades that let us override the computer.)

Even when the bugs have been worked out of a new technology, designers will still turn out junk if they don’t get feedback from users — a common problem when their customer is a large bureaucracy. Engineers have known how to build a simple alarm clock for more than a century, so why can’t you figure out how to set the one in your hotel room? Because, Dr. Norman said, the clock was bought by someone in the hotel’s purchasing department who has never tried to navigate all those buttons at 1 in the morning.

"Our frustrations with machines are not going to be solved with better machines," Dr. Norman said. "Most of our technological difficulties come from the way we interact with our machines and with other people. The technology part of the problem is usually pretty simple. The people part is complicated."

Really?

The Claim: A Little Alcohol Can Help You Beat a Cold

By ANAHAD O’CONNOR

THE FACTS When it comes to quick remedies for colds, many people insist that a glass of brandy or a hot toddy — whiskey with hot water and lemon juice — is just what the doctor ordered.

It’s not difficult to see how mild inebriation might have the potential to relieve cold and flu symptoms, but so far no study has shown that alcohol has the ability to kill germs in the bloodstream or stop a cold in its tracks. And while alcohol may provide temporary relief, it can prolong symptoms by increasing dehydration.

Nonetheless, two large studies have found that although moderate drinking will not cure colds, it can help keep them at bay. One, by researchers at Carnegie Mellon in 1993, looked at 391 adults and found that resistance to colds increased with moderate drinking, except in smokers.


Then, in 2002, researchers in Spain followed 4,300 healthy adults, examining their habits and susceptibility to colds. The study, in The American Journal of Epidemiology, found no relationship between the incidence of colds and consumption of beer, spirits, Vitamin C or zinc. But drinking 8 to 14 glasses of wine per week, particularly red wine, was linked to as much as a 60 percent reduction in the risk of developing a cold. The scientists suspected this had something to do with the antioxidant properties of wine.

THE BOTTOM LINE Alcohol will not help cure a cold, though moderate consumption may reduce susceptibility.

Laws of Nature, Source Unknown

By DENNIS OVERBYE

Correction Appended

"Gravity," goes the slogan on posters and bumper stickers. "It isn’t just a good idea. It’s the law."

And what a law. Unlike, say, traffic or drug laws, you don’t have a choice about obeying gravity or any of the other laws of physics. Jump and you will come back down. Faith or good intentions have nothing to do with it.

Existence didn’t have to be that way, as Einstein reminded us when he said, "The most incomprehensible thing about the universe is that it is comprehensible." Against all the odds, we can send e-mail to Sri Lanka, thread spacecraft through the rings of Saturn, take a pill to chase the inky tendrils of depression, bake a turkey or a soufflé and bury a jump shot from the corner.

Yes, it’s a lawful universe. But what kind of laws are these, anyway, that might be inscribed on a T-shirt but apparently not on any stone tablet that we have ever been able to find?

Are they merely fancy bookkeeping, a way of organizing facts about the world? Do they govern nature or just describe it? And does it matter that we don’t know and that most scientists don’t seem to know or care where they come from?

Apparently it does matter, judging from the reaction to a recent article by Paul Davies, a cosmologist at Arizona State University and author of popular science books, on the Op-Ed page of The New York Times.


Dr. Davies asserted in the article that science, not unlike religion, rested on faith, not in God but in the idea of an orderly universe. Without that presumption a scientist could not function. His argument provoked an avalanche of blog commentary, articles on and letters to The Times, pointing out that the order we perceive in nature has been explored and tested for more than 2,000 years by observation and experimentation. That order is precisely the hypothesis that the scientific enterprise is engaged in testing.

David J. Gross, director of the Kavli Institute for Theoretical Physics in Santa Barbara, Calif., and co-winner of the Nobel Prize in physics, told me in an e-mail message, "I have more confidence in the methods of science, based on the amazing record of science and its ability over the centuries to answer unanswerable questions, than I do in the methods of faith (what are they?)."

Reached by e-mail, Dr. Davies acknowledged that his mailbox was "overflowing with vitriol," but said he had been misunderstood. What he had wanted to challenge, he said, was not the existence of laws, but the conventional thinking about their source.

There is in fact a kind of chicken-and-egg problem with the universe and its laws. Which "came" first — the laws or the universe?

If the laws of physics are to have any sticking power at all, to be real laws, one could argue, they have to be good anywhere and at any time, including the Big Bang, the putative Creation. Which gives them a kind of transcendent status outside of space and time.

On the other hand, many thinkers — all the way back to Augustine — suspect that space and time, being attributes of this existence, came into being along with the universe — in the Big Bang, in modern vernacular. So why not the laws themselves?

Dr. Davies complains that the traditional view of transcendent laws is just 17th-century monotheism without God. "Then God got killed off and the laws just free-floated in a conceptual vacuum but retained their theological properties," he said in his e-mail message.

But the idea of rationality in the cosmos has long existed without monotheism. As far back as the fifth century B.C. the Greek mathematician and philosopher Pythagoras and his followers proclaimed that nature was numbers. Plato envisioned a higher realm of ideal forms, of perfect chairs, circles or galaxies, of which the phenomena of the sensible world were just flawed reflections. Plato set a transcendent tone that has been popular, especially with mathematicians and theoretical physicists, ever since.

Steven Weinberg, a Nobel laureate from the University of Texas, Austin, described himself in an e-mail message as "pretty Platonist," saying he thinks the laws of nature are as real as "the rocks in the field." The laws seem to persist, he wrote, "whatever the circumstance of how I look at them, and they are things about which it is possible to be wrong, as when I stub my toe on a rock I had not noticed."

The ultimate Platonist these days is Max Tegmark, a cosmologist at the Massachusetts Institute of Technology. In talks and papers recently he has speculated that mathematics does not describe the universe — it is the universe.

Dr. Tegmark maintains that we are part of a mathematical structure, albeit one gorgeously more complicated than a hexagon, a multiplication table or even the multidimensional symmetries that describe modern particle physics. Other mathematical structures, he predicts, exist as their own universes in a sort of cosmic Pythagorean democracy, although not all of them would necessarily prove to be as rich as our own.

"Everything in our world is purely mathematical — including you," he wrote in New Scientist.

This would explain why math works so well in describing the cosmos. It also suggests an answer to the question that Stephen Hawking, the English cosmologist, asked in his book, "A Brief History of Time": "What is it that breathes fire into the equations and makes a universe for them to describe?" Mathematics itself is on fire.

Not every physicist pledges allegiance to Plato. Pressed, these scientists will describe the laws more pragmatically as a kind of shorthand for nature’s regularity. Sean Carroll, a cosmologist at the California Institute of Technology, put it this way: "A law of physics is a pattern that nature obeys without exception."

Plato and the whole idea of an independent reality, moreover, took a shot to the mouth in the 1920s with the advent of quantum mechanics. According to that weird theory, which, among other things, explains why our computers turn on every morning, there is an irreducible randomness at the microscopic heart of reality that leaves an elementary particle, an electron, say, in a sort of fog of being everywhere or anywhere, or being a wave or a particle, until some measurement fixes it in place.

In that case, according to the standard interpretation of the subject, physics is not about the world at all, but about only the outcomes of experiments, of our clumsy interactions with that world. But 75 years later, those are still fighting words. Einstein grumbled about God not playing dice.

Steven Weinstein, a philosopher of science at the University of Waterloo, in Ontario, described the phrase "law of nature" as "a kind of honorific" bestowed on principles that seem suitably general, useful and deep. How general and deep the laws really are, he said, is partly up to nature and partly up to us, since we are the ones who have to use them.

But perhaps, as Dr. Davies complains, Plato is really dead and there are no timeless laws or truths. A handful of poet-physicists hankering for more contingent, nonabsolutist laws not engraved in stone have tried to come up with prescriptions for what John Wheeler, a physicist from Princeton and the University of Texas in Austin, called "law without law."

As one example, Lee Smolin, a physicist at the Perimeter Institute for Theoretical Physics, has invented a theory in which the laws of nature change with time. It envisions universes nested like Russian dolls inside black holes, which are spawned with slightly different characteristics each time around. But his theory lacks a meta law that would prescribe how and why the laws change from generation to generation.

Holger Bech Nielsen, a Danish physicist at the Niels Bohr Institute in Copenhagen, and one of the early pioneers of string theory, has for a long time pursued a project he calls Random Dynamics, which tries to show how the laws of physics could evolve naturally from a more general notion he calls "world machinery."

On his Web site, Random Dynamics, he writes, "The ambition of Random Dynamics is to ‘derive’ all the known physical laws as an almost unavoidable consequence of a random fundamental ‘world machinery.’"

Dr. Wheeler has suggested that the laws of nature could emerge "higgledy-piggledy" from primordial chaos, perhaps as a result of quantum uncertainty. It’s a notion known as "it from bit." Following that logic, some physicists have suggested we should be looking not so much for the ultimate law as for the ultimate program.

Anton Zeilinger, a physicist and quantum trickster at the University of Vienna, and a fan of Dr. Wheeler’s idea, has speculated that reality is ultimately composed of information. He said recently that he suspected the universe was fundamentally unpredictable.

I love this idea of intrinsic randomness for much the same reason that I love the idea of natural selection in biology, because it and only it ensures that every possibility will be tried, every circumstance tested, every niche inhabited, every escape hatch explored. It’s a prescription for novelty, and what more could you ask for if you want to hatch a fecund universe?

But too much fecundity can be a problem. Einstein hoped that the universe was unique: given a few deep principles, there would be only one consistent theory. So far Einstein’s dream has not been fulfilled. Cosmologists and physicists have recently found themselves confronted by the idea of the multiverse, with zillions of universes, each with different laws, occupying a vast realm known in the trade as the landscape.

In this case there is a meta law — one law or equation, perhaps printable on a T-shirt — to rule them all. This prospective lord of the laws would be string theory, the alleged theory of everything, which apparently has 10^500 solutions. Call it Einstein’s nightmare.

But it is too soon for any Einsteinian to throw in his or her hand. Since cosmologists don’t know how the universe came into being, or even have a convincing theory, they have no way of addressing the conundrum of where the laws of nature come from or whether those laws are unique and inevitable or flaky as a leaf in the wind.

These kinds of speculation are fun, but they are not science, yet. "Philosophy of science is about as useful to scientists as ornithology is to birds," goes the saying attributed to Richard Feynman, the late Caltech Nobelist, and repeated by Dr. Weinberg.

Maybe both alternatives — Plato’s eternal stone tablet and Dr. Wheeler’s higgledy-piggledy process — will somehow turn out to be true. The dichotomy between forever and emergent might turn out to be as false eventually as the dichotomy between waves and particles as a description of light. Who knows?

The law of no law, of course, is still a law.

When I was young and still had all my brain cells I was a bridge fan, and one hand I once read about in the newspaper bridge column has stuck with me as a good metaphor for the plight of the scientist, or of the citizen cosmologist. The winning bidder had overbid his hand. When the dummy cards were laid, he realized that his only chance of making his contract was if his opponents’ cards were distributed just so.

He could have played defensively, to minimize his losses. Instead he played as if the cards were where they had to be. And he won.

We don’t know, and might never know, if science has overbid its hand. When in doubt, confronted with the complexities of the world, scientists have no choice but to play their cards as if they can win, as if the universe is indeed comprehensible. That is what they have been doing for more than 2,000 years, and they are still winning.

Correction: December 19, 2007

An article in Science Times on Tuesday about the laws of physics and nature misstated the time in which Plato was forming his idea of a higher realm of ideal forms. It was in the fourth century B.C.; it was not "a few hundred years" after the fifth century B.C., when the Greek mathematician and philosopher Pythagoras and his followers proclaimed that nature was numbers.

Women with osteoporosis, previous vertebral fracture have increased long-term risk for new fracture

Over a 15-year period, women with low bone mineral density and a previous vertebral fracture had an increased risk of a new vertebral fracture compared with women with normal bone mineral density and no previous fracture, according to a study in the December 19 issue of JAMA.

Vertebral fractures are the most common osteoporotic fracture, with prevalence estimates of 35 percent to 50 percent among women older than 50 years. About 700,000 vertebral fractures occur each year in the United States, according to background information in the article. Women with low bone mineral density (BMD) and previous vertebral fractures have a greater risk of new vertebral fractures over the short-term, but their risk of vertebral fracture over the long-term is uncertain.

Jane A. Cauley, Dr.P.H., of the University of Pittsburgh, and colleagues examined the absolute risk of new vertebral fractures by spine and hip BMD and previous vertebral fracture status over 15 years of follow-up in a group of 9,704 white women, who were recruited at four U.S. clinical centers and enrolled in the Study of Osteoporotic Fractures. Of these, 2,680 attended a clinic visit an average of 14.9 years after entering the study. The average age of the women was 68.8 years at entry and 83.8 years at follow-up.

The researchers found that of these 2,680 women, 487 (18.2 percent) experienced a new vertebral fracture, including 163 (41.4 percent) of the 394 with a previous vertebral fracture at baseline and 324 (14.2 percent) of the 2,286 without a previous vertebral fracture at baseline. Women who experienced a new fracture also weighed less, were more likely to have a positive fracture history and a previous vertebral fracture at study entry, and less likely to report estrogen use at baseline.
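The proportions quoted above can be checked directly from the counts the article gives. A minimal sketch (the counts are from the article; the odds calculation is our own back-of-envelope arithmetic, not the study's adjusted model):

```python
# Quick check of the fracture proportions reported above
# (counts taken directly from the article; rounding to one decimal).
new_fx, total = 487, 2680
prior_fx = (163, 394)      # new fractures among women with a baseline fracture
no_prior_fx = (324, 2286)  # new fractures among women without one

print(round(100 * new_fx / total, 1))                   # 18.2
print(round(100 * prior_fx[0] / prior_fx[1], 1))        # 41.4
print(round(100 * no_prior_fx[0] / no_prior_fx[1], 1))  # 14.2

# The crude odds ratio implied by these counts:
odds_ratio = (163 / (394 - 163)) / (324 / (2286 - 324))
print(round(odds_ratio, 1))  # 4.3
```

The crude odds ratio of about 4.3 is consistent with the article's "more than four times the odds," though the published estimate would have been adjusted for covariates.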

Women with a previous vertebral fracture at baseline had more than four times the odds of experiencing a new vertebral fracture over follow-up compared with women without a previous vertebral fracture at baseline. The risk was greatest among women with two or more previous fractures at baseline.

Low BMD was a strong predictor of new vertebral fracture. About one-third of women with a low hip BMD measurement had a new vertebral fracture, compared with about 10 percent of women with normal BMD. The absolute risk of vertebral fractures was 56 percent among women with both a previous vertebral fracture and BMD in the osteoporotic range. In contrast, women with normal BMD and no previous fracture had an absolute risk of about 9 percent.

"Our results support the recommendation that older women with a prevalent vertebral fracture should be treated for osteoporosis irrespective of BMD. Treatment of women with prevalent asymptomatic vertebral fractures with bisphosphonates and selective estrogen receptor modulators has been shown to decrease fracture incidence," the authors write. (JAMA. 2007;298(23):2761-2767.)

No need for reduced alcohol consumption in later life

'A little of what you fancy' this Christmas will do no harm

Provided the over-65s stick to the same guidelines on alcohol consumption as younger adults, regular moderate drinking poses them no additional risks, and may even bring health benefits, according to two studies from the Peninsula Medical School in the South West of England.

Researchers assessed the drinking levels of over 13,000 older people in England and the US and looked at the effects on physical disability, mortality, cognitive function, depression, and well-being. They concluded that moderate drinking is fine for the over 65s – and in some cases is better than not drinking at all.

This will be good news for the elderly who want to get into the festive spirit, and who until now have lived by the commonly held belief that they must reduce their alcohol consumption as they get older.

"We are not advocating that elderly people should go out and get ridiculously drunk," said Dr. Iain Lang, lead author of the two studies from the Peninsula Medical School. "What we are saying is that current guidelines on drinking for the elderly are too conservative, and that a couple of drinks a day will do no harm, and will in fact have a more beneficial effect on cognitive and general health than abstinence."

Research showed that 10.8 per cent of US men, 28.6 per cent of UK men, 2.9 per cent of US women and 10.3 per cent of UK women drank more than the US National Institute on Alcohol Abuse and Alcoholism's recommended limit for people aged 65 and over. The research also showed that those drinking on average more than one to two drinks a day achieved health results similar to those drinking on average more than zero to one drink a day. The worst results were seen in those who did not drink at all and in those who were heavy drinkers.

The shape of the relationship between alcohol consumption and the risk of disability was similar in men and women.

Said Dr. Lang: "The upshot of this research is that ‘a little of what you fancy does you good.’ There is no reason why older people should not enjoy a tipple this Christmas, as long as they are sensible about it. Previous research has shown that middle-aged people can benefit from moderate drinking – these findings show the same applies to the over-65s." More information is available at pms.ac.uk.

Dolphin 'therapy' a dangerous fad, Emory researchers warn

People suffering from chronic mental or physical disabilities should not resort to a dolphin "healing" experience, warn two researchers from Emory University. Lori Marino, senior lecturer in the Neuroscience and Behavioral Biology Program, has teamed with Scott Lilienfeld, professor in the Department of Psychology, to launch an educational campaign countering claims made by purveyors of what is known as dolphin-assisted therapy (DAT).

"Dolphin-assisted therapy is not a valid treatment for any disorder," says Marino, a leading dolphin and whale researcher. "We want to get the word out that it's a lose-lose situation – for people and for dolphins."

While swimming with dolphins may be a fun, novel experience, no scientific evidence exists for any long-term benefit from DAT, Marino says. She adds that people who spend thousands of dollars for DAT don't just lose out financially – they put themselves, and the dolphin, at risk of injury or infection. And they are supporting an industry that – outside of the United States – takes dolphins from the wild in a brutal process that often leaves several dolphins dead for every surviving captive.

Marino and Lilienfeld reviewed five studies published during the past eight years and found that the claims for efficacy for DAT were invalid. Their conclusions were published recently in Anthrozoös, the journal of the International Society for Anthrozoology, in a paper entitled "Dolphin-Assisted Therapy: More Flawed Data and More Flawed Conclusions."

"We found that all five studies were methodologically flawed and plagued by several threats to both internal and construct validity," wrote Marino and Lilienfeld, who conducted a similar review in 1998. "We conclude that nearly a decade following our initial review, there remains no compelling evidence that DAT is a legitimate therapy, or that it affords any more than fleeting improvements in mood."

An upcoming issue of the newsletter of the American Psychological Association's Division of Intellectual and Developmental Disabilities will feature another article by Marino and Lilienfeld, entitled "Dolphin-Assisted Therapy for Autism and Other Developmental Disorders: A Dangerous Fad."

"We want to reach psychologists with this message, because DAT is increasingly being applied to children with developmental disabilities, although there is no good evidence that it works," said Lilienfeld, a clinical psychologist. "It's hard to imagine the rationale for a technique that, at best, makes a child feel good in the short run, but could put the child at risk of harm."

The Emory scientists have timed their campaign to coincide with a recent call by two UK-based non-profits – the Whale and Dolphin Conservation Society and Research Autism – to ban the practice of DAT.

While Marino is against taking dolphins from the wild and holding them captive for any purpose, she finds DAT especially egregious, because the people who are being exploited are the most vulnerable – including desperate parents who are willing to try anything to help a child with a disability.

Many people are under the impression that dolphins would never harm a human. "In reality, injury is a very real possibility when you place a child in a tank with a 400-pound wild animal that may be traumatized from being captured," Marino says.

Dolphins are bred in captivity in U.S. marine parks, but in other countries they are often taken from the wild. "If people knew how these animals were captured, I don't think they would want to swim with them in a tank or participate in DAT," Marino says, referring to an annual "dolphin drive" in Japan. "During the dolphin drives hundreds of animals are killed, or panicked and die of heart attacks, in water that's red with their blood, while trainers from facilities around the world pick out young animals for their marine parks. They hoist them out of the water, sometimes by their tail flukes, and take them away."

Each live dolphin can bring a fisherman $50,000 or more, she says. "The marine parks make millions off of dolphins, so that's a drop in the bucket. It's an irony that dolphins are among the most beloved, and the most exploited, animals in the world," Marino says.

Vitamin B12 function may be diminished by excessive folate

BOSTON (December 18, 2007) — In a study of adults aged 20 and over, researchers at Tufts University showed that homocysteine and methylmalonic acid are at much higher levels in individuals who have a combination of vitamin B12 deficiency and high blood folate levels than in individuals who are also vitamin B12 deficient but have normal folate levels.

Homocysteine and methylmalonic acid, compounds used by enzymes that contain vitamin B12, accumulate in the blood in patients who are vitamin B12 deficient. "Finding that the combination of high blood folate levels and low vitamin B12 status is associated with even higher levels of these compounds is a strong indication that the high folate is interfering with the action of these B12-containing enzymes, thus resulting in the exacerbation or worsening of the vitamin B12 deficiency," says corresponding author Jacob Selhub, Ph.D., director of the Vitamin Metabolism Laboratory at the Jean Mayer USDA Human Nutrition Research Center on Aging at Tufts University (USDA HNRCA).

In an earlier study, Selhub and co-authors Martha Savaria Morris, Ph.D., and Paul Jacques, D.Sc., also of the USDA HNRCA, showed that the prevalence of anemia and cognitive impairment among U.S. elderly who are vitamin B12 deficient is much worse if the B12 deficiency is accompanied by high rather than normal blood folate. This indicates that the worsening of the vitamin B12 deficiency, as indicated by higher homocysteine and methylmalonic acid due to high blood folate, is also manifested clinically through a higher prevalence of anemia and cognitive impairment.

Results of the present study are published in the December 11 issue of the Proceedings of the National Academy of Sciences. Selhub and colleagues analyzed data from 10,413 adults who participated in two National Health and Nutrition Examination Surveys (NHANES). Slightly less than half of the participants (4,940) took part in phase 2 of the NHANES III, which was conducted between 1991 and 1994. The remaining 5,473 adults took part in NHANES conducted from 1999 to 2000 and from 2000 to 2002.
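The sample sizes quoted above are internally consistent, which is worth a quick check given how often survey counts get garbled in transcription. A minimal sketch using only the counts the article reports:

```python
# Consistency check on the NHANES sample counts quoted above.
phase2_nhanes3 = 4940  # phase 2 of NHANES III, 1991-1994
later_nhanes = 5473    # NHANES 1999-2000 and 2000-2002

total = phase2_nhanes3 + later_nhanes
print(total)  # 10413, matching the 10,413 adults analyzed

# "Slightly less than half" took part in NHANES III phase 2:
print(round(phase2_nhanes3 / total, 3))  # 0.474
```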

The authors intentionally used one NHANES survey conducted prior to 1998, the year the Food and Drug Administration required that all enriched cereal-grain products be fortified with folic acid, the synthetic form of folate, in order to help prevent birth defects in infants. "It is important to note that these adverse interactions between high folate blood levels and vitamin B12 deficiency were seen only in the study participants from the NHANES conducted between 1999 and 2002, after the fortification of flour and other cereals with folic acid," says Selhub, who is also a professor at the Friedman School of Nutrition Science and Policy at Tufts University.

Folic acid is a synthetic form of the vitamin, which requires specific processing by the body for incorporation into the body’s folate pool. Naturally occurring folates, found in leafy vegetables, legumes and in many other fruits and vegetables, can be readily incorporated into the body’s folate pool and are believed to be beneficial even at higher intakes. "There is no reason to avoid foods with naturally occurring folate, and it is essential to consume B12-containing products such as eggs, meat, milk and poultry, and even supplements if necessary," says Selhub. "The combination of high blood folate and normal vitamin B12 status is actually beneficial to health."

This study was supported by the Agricultural Research Service of the U.S. Department of Agriculture.

Selhub J, Morris MS, Jacques PF. Proceedings of the National Academy of Sciences. 2007 (Dec. 11); 104(50): 19995-20000.

Cancer and arthritis therapy may be promising treatment for diabetes

New Haven, Conn.—An antibody used to treat certain cancers and rheumatoid arthritis appears to greatly delay type 1 diabetes in mice, Yale School of Medicine researchers report in the Journal of Clinical Investigation.

"Even better, the beneficial effects of the antibody continue to be observed long after the antibody is no longer administered," the researchers said.

The antibody, rituximab (anti-CD20), depletes B cells. Experimental evidence in mutant mice indicates that B cells play a role in autoimmune diseases by interacting with T cells of the immune system. It is T cells that destroy insulin-producing cells directly in the pancreas, leading to type 1 diabetes.

"Our paper shows, for the first time, that after successful B cell depletion, regulatory cells emerge that can continue to suppress the inflammatory and autoimmune response even after the B cells return," said Li Wen, senior research scientist in the division of endocrinology. "Even more strikingly, we found that these regulatory cells include both B and T cells."

To determine if B cell depletion would work as a therapy for type 1 diabetes, Wen and her colleague at Yale, Mark Shlomchik, M.D., professor of laboratory medicine and immunobiology, developed a mouse model. They engineered mice that were predisposed to diabetes and had the human version of CD20, the molecule rituximab targets, on the surface of their B cells.

The researchers tested a mouse version of the drug to deplete B cells in mice either before diabetes onset, or within days of diagnosis with diabetes. The drug treatment significantly delayed diabetes onset in pre-diabetic mice. This translated to a 10- to 15-week delay in developing diabetes compared to mice given a "sham" treatment. The equivalent period for humans would be approximately 10 to 15 years. Of the 14 mice that already had diabetes, five stopped needing insulin for two to five months while all the sham-treated mice remained diabetic.

"These studies suggest that B cells can have dual roles in diabetes and possibly other autoimmune diseases. The B cells might promote disease initially, but after being reconstituted following initial depletion with rituximab, they actually block further disease," Shlomchik added. "This means that multiple rounds of medication to deplete the B cells might not be necessary or even advisable." Journal of Clinical Investigation: December 3, 2007

Small asteroids can pack a mighty punch

18 December 2007

Michael Reilly, San Francisco

BEWARE the blast from above: small asteroids that explode before they hit the ground may be more dangerous than we thought.

Asteroids a few tens of metres in diameter rip through the atmosphere at between 40 and 60 times the speed of sound, and many explode before they hit Earth. Extreme friction and heating can cause these asteroids to flatten into pancakes, which increases drag even more and eventually tears them apart. The resultant "airburst" is thought to be behind the 1908 Tunguska explosion in Siberia, which levelled 2000 square kilometres of forest.

Because airbursts spread material over a wide area and there is no impact crater, researchers rely on computer simulations to calculate the size of the asteroids that caused them. Previous calculations for the Tunguska event suggested an asteroid around 50 metres in diameter exploding with a force of between 10 and 20 million tonnes of TNT.

Now a computer simulation by Mark Boslough of Sandia National Laboratories in New Mexico shows that a 30-metre asteroid could have been behind the Tunguska blast. That suggests smaller asteroids can do more damage than previously thought - a worry when one considers that objects smaller than 140 metres across are not currently detected as they zip round the solar system.
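The quoted yields can be reproduced with a back-of-envelope kinetic-energy estimate. The density (stony, roughly 3,000 kg/m^3) and entry speed (roughly 20 km/s, the upper end of the Mach 40-60 range for a sea-level sound speed of about 340 m/s) are our assumptions, not figures from the article, which gives only the diameters and the 10-20 megaton estimate:

```python
import math

MEGATON = 4.184e15  # joules per megaton of TNT


def yield_mt(diameter_m, speed=20e3, density=3000.0):
    """Kinetic energy of a spherical asteroid, in megatons of TNT."""
    radius = diameter_m / 2
    mass = density * (4 / 3) * math.pi * radius**3  # assume a solid sphere
    return 0.5 * mass * speed**2 / MEGATON


print(round(yield_mt(50), 1))  # ~9.4 Mt, near the low end of the 10-20 Mt range
print(round(yield_mt(30), 1))  # ~2.0 Mt: energy scales with the cube of diameter
```

The cube scaling is the key point: halving the diameter cuts the energy by roughly a factor of eight, so concentrating the blast (as Boslough's downward-jet mechanism does) matters enormously for how small an asteroid can still do Tunguska-scale damage.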

Previous simulations overestimated the size of the bodies responsible for airbursts because they treated them much like a nuclear explosion at a fixed point in the atmosphere, says Boslough. As a result, the damage they caused was thought to be related only to the size and temperature of the blast, and its distance away from Earth's surface. "That neglects something significant, though - momentum," says Boslough. His calculations show that the resulting fireball would continue to rocket towards Earth as it exploded. In the case of Tunguska, this jet didn't quite reach the surface - stalling at an altitude of around 5 kilometres - but a heat and shock wave would have carried on to Earth's surface to do much of the damage.

It's becoming clear that previous models aren't right, says Boslough, who presented his results at the annual meeting of the American Geophysical Union in San Francisco this month. "If one of these events hit an area of high population density, it could kill 1 million people."

Family ties that bind: Maternal grandparents are more involved in the lives of their grandchildren

As families gather round for the winter holidays, some faces may be more familiar than others.

A recent study shows that the amount of social interaction between extended family members depends on whether people are related through their mother or father.

Thomas Pollet and colleagues at Newcastle University and the University of Antwerp, Belgium, investigated how far maternal grandparents and paternal grandparents will go to maintain face-to-face contact with their grandchildren. They found that maternal grandparents were willing to travel further in order to sustain frequent (daily or a few times a week) contact with their grandchildren than paternal grandparents.

Mr Pollet says, "As the festive period approaches, we can still see that family get-togethers are integral to the celebrations. Many people will be going the extra mile to ensure they meet up – and we’ve found that’s particularly important if family members are related through mothers."

"Even in families where there has been divorce, we found consistent differences – grandparents on your mother's side make the extra effort. We believe there are psychological mechanisms at play because throughout history, women are always related by maternity whereas men can never be wholly certain they are the biological father to their children."

The authors interpret their findings as support for psychological patterns rooted in our evolutionary history. Family members related through mothers (matrilineal kin) are predicted to matter more than those related through fathers (patrilineal kin). Throughout human evolution, women were always certain of maternity, whereas men could never be wholly certain that they were the biological father. Likewise, maternal grandparents were always more certain than paternal grandparents that a grandchild was related to them. Thus maternal grandparents, especially maternal grandmothers, may go the extra mile to visit their grandchildren.

For grandparents living within 18.6 miles (30 km) of their grandchildren, over 30% of maternal grandmothers had contact daily or a few times a week, as did around 25% of maternal grandfathers. In contrast, only around 15% of paternal grandmothers and little more than 15% of paternal grandfathers had contact that often.

The research, which is published in the latest edition of the journal Evolutionary Psychology and available online, was conducted on a sample of over 800 grandparents from a representative Dutch sample (The Netherlands Kinship Panel Study – nkps.nl). The analyses controlled for other factors such as grandparental and child age, marital status, and number of children.

Evolution tied to Earth movement

Geologists say 'Wall of Africa' allowed humanity to emerge

Scientists long have focused on how climate and vegetation allowed human ancestors to evolve in Africa. Now, University of Utah geologists are calling renewed attention to the idea that ground movements formed mountains and valleys, creating environments that favored the emergence of humanity.

"Tectonics [movement of Earth’s crust] was ultimately responsible for the evolution of humankind," Royhan and Nahid Gani of the university’s Energy and Geoscience Institute write in the January, 2008, issue of Geotimes, published by the American Geological Institute.

They argue that the accelerated uplift of mountains and highlands stretching from Ethiopia to South Africa blocked much ocean moisture, converting lush tropical forests into an arid patchwork of woodlands and savannah grasslands that gradually favored human ancestors who came down from the trees and started walking on two feet – an energy-efficient way to search larger areas for food in an arid environment.

In their Geotimes article, the Ganis – a husband-and-wife research team who met in college in their native Bangladesh – describe this 3,700-mile-long stretch of highlands and mountains as "the Wall of Africa." It parallels the famed East African Rift valley, where many fossils of human ancestors were found.

"Because of the crustal movement or tectonism in East Africa, the landscape drastically changed over the last 7 million years," says Royhan Gani (pronounced rye-hawn Go-knee), a research assistant professor of civil and environmental engineering. "That landscape controlled climate on a local to regional scale. That climate change spurred human ancestors to evolve from apes."

Hominins – the new scientific word for humans (Homo) and their ancestors (including Ardipithecus, Paranthropus and Australopithecus) – split from apes on the evolutionary tree roughly 7 million to 4 million years ago. Royhan Gani says the earliest undisputed hominin was Ardipithecus ramidus 4.4 million years ago. The earliest Homo arose 2.5 million years ago, and our species, Homo sapiens, almost 200,000 years ago.

Tectonics – movements of Earth’s crust, including its ever-shifting tectonic plates and the creation of mountains, valleys and ocean basins – has been discussed since at least 1983 as an influence on human evolution.

But Royhan Gani says much previous discussion of how climate affected human evolution involves global climate changes, such as those caused by cyclic changes in Earth’s orbit around the sun, and not local and regional climate changes caused by East Africa’s rising landscape.

A Force from within the Earth

The geological or tectonic forces shaping Africa begin deep in the Earth, where a "superplume" of hot and molten rock has swelled upward for at least the past 45 million years. This superplume and its branching smaller plumes help push apart the African and Arabian tectonic plates of Earth’s crust, forming the Red Sea, Gulf of Aden and the Great Rift Valley that stretches from Syria to southern Africa.

As part of this process, Africa is being split apart along the East African Rift, a valley bounded by elevated "shoulders" a few tens of miles wide, sitting atop "domes" a few hundred miles wide that are caused by upward bulging of the plume.

The East African Rift runs about 3,700 miles from the Ethiopian Plateau south-southwest to South Africa’s Karoo Plateau. It is up to 370 miles wide and includes mountains reaching a maximum elevation of about 19,340 feet at Mount Kilimanjaro.

The rift "is characterized by volcanic peaks, plateaus, valleys and large basins and freshwater lakes," including sites where many fossils of early humans and their ancestors have been found, says Nahid Gani (pronounced nah-heed go-knee), a research scientist. There was some uplift in East Africa as early as 40 million years ago, but "most of these topographic features developed between 7 million and 2 million years ago."

A Wall Rises and New Species Evolve

"Although the Wall of Africa started to form around 30 million years ago, recent studies show most of the uplift occurred between 7 million and 2 million years ago, just about when hominins split off from African apes, developed bipedalism and evolved bigger brains," the Ganis write.

"Nature built this wall, and then humans could evolve, walk tall and think big," says Royhan Gani. "Is there any characteristic feature of the wall that drove human evolution?"

The answer, he believes, is the variable landscape and vegetation resulting from uplift of the Wall of Africa, which created "a topographic barrier to moisture, mostly from the Indian Ocean" and dried the climate. He says that contrary to those who cite global climate cycles, the climate changes in East Africa were local and resulted from the uplift of different parts of the wall at different times.

Royhan Gani says the change from forests to a patchwork of woodland and open savannah did not happen everywhere in East Africa at the same time, and the changes also happened in East Africa later than elsewhere in the world.

This map shows the chain of highlands and mountain ranges that University of Utah geologists Royhan and Nahid Gani dub "the Wall of Africa." Higher elevations are shown in reddish tones and lower elevations in green and blue. The Ganis say most of this "wall" was uplifted during the past 7 million years, when humans and their ancestors evolved in Africa. Credit: Nahid Gani

The Ganis studied the roughly 300-mile-by-300-mile Ethiopian Plateau – the most prominent part of the Wall of Africa. Previous research indicated the plateau reached its present average elevation of 8,200 feet 25 million years ago. The Ganis analyzed rates at which the Blue Nile River cut down into the Ethiopian Plateau, creating a canyon that rivals North America’s Grand Canyon. They released those findings in the September 2007 issue of GSA Today, published by the Geological Society of America.

The conclusion: There were periods of low-to-moderate incision and uplift between 29 million and 10 million years ago, and again between 10 million and 6 million years ago, but the most rapid uplift of the Ethiopian Plateau (by some 3,200 vertical feet) happened 6 million to 3 million years ago.

The Geotimes paper says other research has shown the Kenyan part of the wall rose mostly between 7 million and 2 million years ago, mountains in Tanganyika and Malawi were uplifted mainly between 5 million and 2 million years ago, and the wall’s southernmost end gained most of its elevation during the past 5 million years.

"Clearly, the Wall of Africa grew to be a prominent elevated feature over the last 7 million years, thereby playing a prominent role in East African aridification by wringing moisture out of monsoonal air moving across the region," the Ganis write. That period coincides with evolution of human ancestors in the area.

Royhan Gani says the earliest undisputed evidence of true bipedalism (as opposed to knuckle-dragging by apes) is 4.1 million years ago in Australopithecus anamensis, but some believe the trait existed as early as 6 million to 7 million years ago.

The Ganis speculate that the shaping of varied landscapes by tectonic forces – lake basins, valleys, mountains, grasslands, woodlands – "could also be responsible, at a later stage, for hominins developing a bigger brain as a way to cope with these extremely variable and changing landscapes" in which they had to find food and survive predators.

For now, Royhan Gani acknowledges the lack of more precise timeframes makes it difficult to link specific tectonic events to the development of upright walking, bigger brains and other key steps in human evolution.

"But it all happened within the right time period," he says. "Now we need to nail it down."

Simple strategy could prevent half of deadly tuberculosis infections

By using a combination of inexpensive infection control measures, hospitals around the world could prevent half the new cases of extensively drug resistant tuberculosis (XDR TB), according to a new study in The Lancet by researchers at Yale School of Medicine.

Dubbed "Ebola With Wings" for its ability to spread and kill rapidly, XDR TB has been reported in 37 countries and has been identified in all regions of the world, including the United States. The disease has become an epidemic among hospitalized patients in South Africa, according to researchers on the Yale study. Cases of XDR TB have been diagnosed in every province of South Africa, and are particularly concentrated in the area surrounding Tugela Ferry.

To assess the spread of XDR TB, Sanjay Basu, an M.D./Ph.D. student at Yale School of Medicine, and the research team developed a computer model of a virtual world that incorporated over two years of data from Tugela Ferry. The model was 95 percent accurate at predicting the trends in XDR and other forms of TB in the region. The Yale study provides the first estimates of the XDR TB burden in South Africa. According to the model, over 1,300 cases of XDR TB could arise in the Tugela Ferry region by the end of 2012.

"It is critically important to take steps now to prevent further spread of XDR TB," said Basu. "If we wait to act, this form of TB will spread further in the community and beyond borders. When a drug resistant strain hit New York in the 1990s, it cost over $1 billion to bring under control."

Tuberculosis is caused by bacteria that target the lungs and is spread through the air when an infected person coughs or sneezes. HIV-positive people constitute the vast majority of XDR TB cases, given their greater risk of infection.

The authors write that the best way to address this type of TB effectively is to change the healthcare environment. Use of masks alone would prevent fewer than 10 percent of cases in the general epidemic, though they would help many healthcare workers, say the researchers. Reducing time spent in the hospital and shifting to outpatient therapy could prevent nearly one-third of cases, they note. About half of XDR TB cases could be prevented by addressing hospital overcrowding, improving ventilation, enhancing access to HIV treatment, and providing faster diagnostic tests, say the study authors.

Basu said that the problem is compounded in South Africa where there are long waiting lists of up to 70 patients hoping to gain admission to hospitals, and crowded wards with as many as 40 people packed into one room. Some of these patients have to sleep on the floor, and many travel for days to reach the hospital.

"We can do a lot to change what is going on," said senior author Gerald Friedland, M.D., a professor of medicine at Yale. "This is a train crash between the two epidemics of HIV and TB, and we have to address both problems together to fix this situation."

Other authors on the study included Jason R. Andrews, Eric M. Poolman, Neel R. Gandhi, N. Sarita Shah, Anthony Moll, Preshnie Moodley and Alison P. Galvani.

Citation: The Lancet, Early online edition (October 27, 2007)

Intensive training post-spinal cord injury can stimulate repair in brain and spinal cord

Intensive rehabilitation training for patients with spinal cord injuries can stimulate new branches growing from severed nerve fibers, alongside compensatory changes in the brain, say Canadian researchers. Most importantly, it could lead to restoring hand function and the ability to walk.

A study recently published in the journal Brain highlights the remarkable benefits of rehabilitation training after a cervical spinal cord injury—something that has been overshadowed in recent years by the promise of cutting-edge stem cell research.

"It may be that it is neglected because it seems so simple," says the study’s senior author Karim Fouad of the University of Alberta in Edmonton.

"Some people take very desperate steps when they are paraplegic. They go to other countries to receive treatments like stem cell transplantations, and most of these approaches are not really controlled trials. They undergo a lot of risk and spend a lot of money, when in fact they could see more benefits with fewer risks from sustained, intensive rehab training."

The study led by Fouad shows that when animal models with incomplete spinal cord injuries received intensive training over many weeks on a reaching task which they were able to do before their injuries, they performed significantly better than their untrained counterparts. In fact, the animals trained post-injury nearly doubled the success rate achieved by the untrained animals.

"Research has found that after incomplete spinal cord injury, there is a moderate amount of recovery based on a rewiring process, a response of the nervous system to the injury," says Fouad. "This is a naturally occurring process. What we found is that intensive rehabilitation training actually promotes this naturally occurring process. It actually enables changes in the brain and spinal cord similar to a repair process."

"The way the animals succeeded in the grasping task post-injury was not the way they did it before. They compensated. They adapted. They developed a new way to do it. What people with these injuries can take from this is that you don’t have to do things the way you used to do them before— what matters is that you attempt, practice hard and find your own adaptive strategy."

The study appears in the November 2007 issue of Brain.

Move over, silicon: Advances pave way for powerful carbon-based electronics

Practical technique shows promise of carbon material called graphene

Bypassing decades-old conventions in making computer chips, Princeton engineers developed a novel way to replace silicon with carbon on large surfaces, clearing the way for new generations of faster, more powerful cell phones, computers and other electronics.

The electronics industry has pushed the capabilities of silicon -- the material at the heart of all computer chips -- to its limit, and one intriguing replacement has been carbon, said Stephen Chou, professor of electrical engineering. A material called graphene -- a single layer of carbon atoms arranged in a honeycomb lattice -- could allow electronics to process information and produce radio transmissions 10 times better than silicon-based devices.

Until now, however, switching from silicon to carbon has not been possible because technologists believed they needed graphene material in the same form as the silicon used to make chips: a single crystal of material eight or 12 inches wide. The largest single-crystal graphene sheets made to date have been no wider than a couple of millimeters, not big enough for a single chip. Chou and researchers in his lab realized that a big graphene wafer is not necessary, as long as they could place small crystals of graphene only in the active areas of the chip. They developed a novel method to achieve this goal and demonstrated it by making high-performance working graphene transistors.

"Our approach is to completely abandon the classical methods that industry has been using for silicon integrated circuits," Chou said.

Chou, along with graduate student Xiaogan Liang and materials engineer Zengli Fu, published their findings in the December 2007 issue of Nano Letters, a leading journal in the field. The research was funded in part by the Office of Naval Research.

In their new method, the researchers make a special stamp consisting of an array of tiny flat-topped pillars, each one-tenth of a millimeter wide. They press the pillars against a block of graphite (pure carbon), cutting thin carbon sheets, which stick to the pillars. The stamp is then removed, peeling away a few atomic layers of graphene. Finally, the stamp is aligned with and pressed against a larger wafer, leaving the patches of graphene precisely where transistors will be built.

The technique is like printing, Chou said. By repeating the process and using variously shaped stamps (the researchers also made strips instead of round pillars), all the active areas for transistors are covered with single crystals of graphene.

"Previously, scientists have been able to peel graphene sheets from graphite blocks, but they had no control over the size and location of the pieces when placing them on a surface," Chou said.

One innovation that made the technique possible was to coat the stamp with a special material that sticks to carbon when it is cold and releases when it is warm, allowing the same stamp to pick up and release the graphene.

Chou’s lab took the next step and built transistors -- tiny on-off switches -- on their printed graphene crystals. Their transistors displayed high performance; they were more than 10 times faster than silicon transistors in moving "electronic holes" -- a key measure of speed.

The new technology could find almost immediate use in radio electronics, such as cell phones and other wireless devices that require high power output, Chou said. Depending on the level of interest from industry, the technique could be applied to wireless communication devices within a few years, Chou predicted.

"What we have done is shown that this approach is possible; the next step is to scale it up," Chou said.

Carbon electrodes could slash cost of solar panels

* 12:43 19 December 2007

* news service

* Tom Simonite

Transparent electrodes created from atom-thick carbon sheets could make solar cells and LCDs without depleting precious mineral resources, say researchers in Germany.

Solar cells, LCDs and some other devices must have transparent electrodes in parts of their designs to let light in or out. These electrodes are usually made from indium tin oxide (ITO), but experts calculate that there is only 10 years' worth of indium left on the planet, with LCD panels consuming the majority of existing stocks.

"There is not enough indium on earth for the future development of devices using it," says Linjie Zhi of the Max Planck Institute for Polymer Research in Mainz, Germany. "It is also not very stable, so you have to be careful during the fabrication process."

Although experimental alternatives to ITO exist, these are also unstable and of unproven efficiency. Zhi and colleagues Xuan Wang and Klaus Müllen believe they have a cheaper, more stable alternative.

They are testing solar cells with transparent electrodes made from graphene – flat sheets of carbon atoms arranged in a hexagonal structure. When rolled up, this material makes carbon nanotubes.

Excited electrons

The solar panels they created were dye-sensitised solar cells, first invented in 1991 and predicted by some to be the most likely successor to silicon-based solar cells.

Dye-sensitised solar cells use sunlight, a mixture of different dye pigments, and titanium dioxide – the main ingredient in white paint – to excite electrons. This is a process that, in some ways, mimics photosynthesis. It could make solar cells cheaper to manufacture and more efficient, in terms of power collection, than silicon-based ones.

The Max Planck team first coated their cells with a solution of flat graphite oxide flakes, each 10 to 100 nanometres across, leaving a coating on the surface. Heat treatment was then used to remove the oxygen from the layer. This causes the flakes to merge, leaving behind sheets of graphene.

"It is very stable in the face of heat and acidic conditions," says Zhi, "which makes fabrication much easier."

The group has managed to produce electrodes just 10 graphene layers thick, or roughly five nanometres. These have a transparency of about 80%, which is comparable to the indium-based electrodes normally used for dye-sensitised cells. But, unlike those electrodes, graphene ones are completely transparent to infrared light, which could allow solar cells to collect more of the Sun's energy.
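The quoted transparency squares with simple arithmetic: a single graphene layer is known to absorb roughly 2.3% of visible light, so a stack of n layers transmits about (1 − 0.023)^n of it. A quick sketch (the 2.3% per-layer figure is the standard single-layer value, not a number from this study):

```python
# Each graphene layer absorbs ~2.3% of incident visible light,
# so an n-layer stack transmits roughly (1 - 0.023) ** n of it.
ABSORPTION_PER_LAYER = 0.023

def transmittance(n_layers):
    """Approximate optical transmittance of a stack of graphene layers."""
    return (1 - ABSORPTION_PER_LAYER) ** n_layers

# A 10-layer electrode, as described above, passes about 79% of the
# light -- consistent with the reported transparency of roughly 80%.
print(round(transmittance(10), 2))
```

A single perfect layer, by the same estimate, would transmit about 98% of the light, which is why the team is working to reduce the layer count.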

Crucial electrodes

The team is now working on reducing the number of layers to increase transparency, and on "ironing out" the creases that can appear in the sheets. In theory, a single perfect layer of graphene would work well enough to replace ITO.

"Replacing the ITO with graphene is a real great step forward as the transparent current collector is a critical and the most expensive part of our cell," says Michael Grätzel of the Swiss Federal Institute of Technology in Lausanne, Switzerland, who invented dye sensitised solar cells in 1991.

He says the search for improved transparent electrodes will be crucial for the success of these solar cells.

"Everything which you do with ITO should be possible [with graphene]," adds Müllen, who led the graphene research. That means LCD panels could be made with the graphene-based electrodes, he says, although his team has yet to test this idea. Journal reference: Nanoletters (DOI: 10.1021/nl072838r)

ET too bored by Earth transmissions to respond

* 16:35 18 December 2007

* news service

* Tom Simonite

Messages sent into space directed at extraterrestrials may have been too boring to earn a reply, say two astrophysicists trying to improve on their previous alien chat lines.

Humans have so far sent four messages into space intended for alien listeners. But they have largely been made up of mathematically coded descriptions of some physics and chemistry, with some basic biology and descriptions of humans thrown in.

Those topics will not prove gripping reading to other civilisations, says Canadian astrophysicist Yvan Dutil. If a civilisation is advanced enough to understand the message, they will already know most of its contents, he says: "After reading it, they will be none the wiser about us humans and our achievements. In some ways, we may have been wasting our telescope time."

In 1999 and 2003, Dutil and fellow researcher Stephane Dumas beamed messages in a language of their own design into space. Now, they are working to compose more interesting messages.

"The question is, what is interesting to an extraterrestrial?" Dutil told New Scientist. "We think the answer is using some common ground to communicate things about humanity that will be new or different to them – like social features of our society." Fortunately those subjects are already being described mathematically by economists, physicists and sociologists, he adds.

Vexing problems

One topic the two researchers are already composing messages about is the so-called 'cake-cutting problem'. "How do you share out resources is a classical problem for all civilisations," he says.
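The best-known answer to two-party resource sharing is the divide-and-choose protocol, which is precise enough to state as an algorithm. A minimal sketch of the textbook version (the discrete "cake" and both valuation lists are invented for illustration; this is not the researchers' actual encoding):

```python
def divide_and_choose(value_a, value_b):
    """Textbook two-player fair division over a 'cake' of discrete slices.

    value_a, value_b: each player's valuation of every slice.
    Player A (the divider) cuts where the two pieces look most equal
    to A; player B (the chooser) takes whichever piece B values more,
    so neither player envies the other's share.
    Returns each player's valuation of their own piece.
    """
    total_a = sum(value_a)
    # A picks the cut point that best halves A's own valuation.
    cut = min(range(1, len(value_a)),
              key=lambda i: abs(sum(value_a[:i]) - total_a / 2))
    b_takes_left = sum(value_b[:cut]) >= sum(value_b[cut:])
    a_piece = value_a[cut:] if b_takes_left else value_a[:cut]
    b_piece = value_b[:cut] if b_takes_left else value_b[cut:]
    return sum(a_piece), sum(b_piece)

# A prizes the end slices, B the middle ones: each walks away with
# at least half of the cake as they personally value it.
print(divide_and_choose([3, 1, 1, 3], [1, 3, 3, 1]))  # prints (4, 4)
```

Because the protocol is defined purely by comparisons of valuations, it could in principle be expressed in a mathematical message without sharing any human context.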

Democracy is also a potentially eye- or antenna-catching subject. "The maths shows that with more than two choices, there is no perfect electoral procedure," says Dutil. He has started work on encoding this into a message in which "we can explain our methods and ask, 'What do you use on your planet?'"
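The mathematics Dutil refers to traces back to results like Condorcet's paradox and, more generally, Arrow's impossibility theorem: with three or more options, even simple majority rule can fail to produce a winner. A sketch using an invented three-voter profile:

```python
# The classic Condorcet profile: three voters rank candidates A, B, C
# (best to worst) in a rotating order.
ballots = [("A", "B", "C"), ("B", "C", "A"), ("C", "A", "B")]

def majority_prefers(x, y):
    """True if a strict majority of ballots rank x above y."""
    wins = sum(b.index(x) < b.index(y) for b in ballots)
    return wins > len(ballots) / 2

# Pairwise majority voting produces a cycle: A beats B, B beats C,
# yet C beats A, so "the majority's choice" is not even well defined.
print(majority_prefers("A", "B"),
      majority_prefers("B", "C"),
      majority_prefers("C", "A"))  # prints True True True
```

A cycle like this is exactly the kind of purely mathematical fact about elections that could be stated in a message without any cultural background.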

Social physics – the application of mathematical techniques to societies – also provides material potentially interesting to aliens. "We know that every human social network behaves as a gas; what we don't know is how universal that is beyond Earth." Aliens may be asking themselves similar questions, he adds.

Another fundamental challenge for very old civilisations is using resources sustainably to avoid dying out, says Dutil. "Any good examples out there could help a lot on Earth."

Human nature

Dumas has designed software that is like a word processor for composing messages in the pair's symbolic language. There is also a separate automatic decoder, which should help avoid slip-ups like the missing factor of 10 in the duo's 1999 message.

Douglas Vakoch, director of interstellar message composition at the SETI Institute (the Search for Extraterrestrial Intelligence) in Mountain View, California, US, agrees that we humans need to make our interstellar chat more compelling. "If we only communicate something the receiver already knows, it is not going to be very interesting."

Vakoch has recently been holding workshops at sociology and anthropology conferences to try and widen participation in messaging extraterrestrials beyond astrophysicists. "I think perhaps the most important question is: how do we represent what being a human is? And those disciplines can really help," says Vakoch.

'We'll get back to you'

But Vakoch points out that email-like messages may not be the best approach. One alternative is to send software code for an avatar that could answer basic alien questions. That would get around the problem of the delays produced by large distances across space.

"If someone replies to your message saying, 'I don't understand. Can you repeat that?' it will take decades, centuries or millennia to know," says Vakoch.

"Another approach is to send a lot of stuff and hope there is enough redundancy for them to spot patterns," he adds. "We could just send the encyclopaedia."

Dutil agrees other options are worth exploring, but points out that sometimes only a message will do. "It would make sense to have an 'answer phone' message ready in case we are contacted," he explains, "just to say, 'we'll get back to you,' while we figure out what to do."

Walking and moderate exercise help prevent dementia

ST. PAUL, Minn. – People age 65 and older who regularly walk and get other forms of moderate exercise appear to significantly lower their risk of developing vascular dementia, the second most common form of dementia after Alzheimer’s disease, according to a study published in the December 19, 2007, online issue of Neurology®, the medical journal of the American Academy of Neurology.

The four-year study involved 749 men and women in Italy who were over age 65 and did not have memory problems at the beginning of the study. Researchers measured the amount of energy exerted in the participants’ weekly physical activities, including walking, climbing stairs, and moderate activities, such as house and yard work, gardening, and light carpentry. By the end of the study, 54 people developed Alzheimer’s disease and 27 developed vascular dementia.

The study found the top one-third of participants who exerted the most energy walking were 27 percent less likely to develop vascular dementia than those people in the bottom one-third of the group.

Participants who scored in the top one-third for the most energy exerted in moderate activities lowered their risk of vascular dementia by 29 percent and people who scored in the top one-third for total physical activity lowered their risk by 24 percent compared to those in the bottom one-third.

"Our findings show moderate physical activity, such as walking, and all physical activities combined lowered the risk of vascular dementia in the elderly independent of several sociodemographic, genetic and medical factors," said study author Giovanni Ravaglia, MD, with University Hospital S. Orsola Malpighi, in Bologna, Italy. "It’s important to note that an easy-to-perform moderate activity like walking provided the same cognitive benefits as other, more demanding activities."

Ravaglia says it’s possible that physical activity may improve cerebral blood flow and lower the risk of cerebrovascular disease, which is a risk factor for vascular dementia, but further research is needed about the mechanisms operating between physical activity and a person’s memory.

Contrary to some reports, the study found that physical activity was not associated with a reduced risk of Alzheimer’s disease, but Ravaglia says more research is needed before concluding that Alzheimer’s disease is not preventable through exercise.

MIT corrects inherited retardation, autism in mice

Research points to potential drug treatment

CAMBRIDGE, Mass.- Researchers at MIT's Picower Institute for Learning and Memory have corrected key symptoms of mental retardation and autism in mice.

The work, which will be reported in the Dec. 20 issue of Neuron, also indicates that a certain class of drugs could have the same effect. These drugs are not yet approved by the FDA, but will soon enter human clinical trials.

Fragile X syndrome (FXS), affecting 100,000 Americans, is the most common inherited cause of mental retardation and autism. The MIT researchers corrected FXS in mice modeling the disease. "These findings have major therapeutic implications for fragile X syndrome and autism," said study lead author Mark F. Bear, director of the Picower Institute and Picower Professor of Neuroscience at MIT.

The findings support the theory that many of FXS's psychiatric and neurological symptoms – learning disabilities, autistic behavior, childhood epilepsy – stem from too much activation of one of the brain's chief network managers, the metabotropic glutamate receptor mGluR5.

"Fragile X is a disorder of excess-excess synaptic connectivity, protein synthesis, memory extinction, body growth, excitability-and remarkably, all these excesses can be reduced by reducing mGluR5," said Bear, a Howard Hughes Medical Institute investigator.

Individuals with FXS have mutations in the X chromosome's FMR1 gene, which encodes the fragile X mental retardation protein, FMRP. The MIT study found that FMRP and mGluR5 are at opposite ends of a kind of molecular seesaw. They keep each other in check, and without FMRP, mGluR5 signals run rampant.

Bear and colleagues study how genes and environment interact to refine connections in the brain. Synapses are the brain's connectors, and their modifications are the basis for all learning and memory. There's a growing consensus among researchers that developmental brain disorders such as FXS, autism and schizophrenia should be considered "synapsopathies" – diseases of synaptic development and plasticity (the ability to change in response to experience).

Dendritic spines – little nubs on neurons' branchlike projections – receive many of the synaptic inputs from other neurons. Abnormal spines have long been associated with various forms of human mental retardation. In FXS, spines are more numerous, longer and more spindly than they should be. Thin spines tend to form weak connections.

The research team found that a 50 percent reduction in mGluR5 fixed multiple defects in the fragile X mice. In addition to correcting dendritic spines, reduced mGluR5 improved altered brain development and memory, restored normal body growth, and reduced seizures, addressing many of the symptoms experienced by humans with FXS.

The researchers used genetic engineering to reduce mGluR5, but the same effect could be achieved with a drug. Although not yet approved by the FDA, mGluR5 blockers are entering human clinical trials. "Insights gained by this study suggest novel therapeutic approaches, not only for fragile X but also for autism and mental retardation of unknown origin," Bear said.

Earlier this year, MIT Picower Institute researcher Susumu Tonegawa and colleagues reported positive results using a different approach to reversing FXS symptoms. Tonegawa and colleagues identified a key enzyme called p21-activated kinase, or PAK, that affects the number, size and shape of connections between neurons.

In addition to Bear, authors include Brown University graduate student Gul Dolen; Picower Institute postdoctoral fellow Emily Osterweil; B.S. Shankaranarayana Rao of the National Institute of Mental Health and Neuroscience in India; MIT graduate students Gordon B. Smith and Benjamin D. Auerbach; and Sumantra Chattarji of the National Center for Biological Sciences and Tata Institute of Fundamental Research in India.

This work is supported by the National Institute of Mental Health; the National Institute of Child Health and Human Development; the National Fragile X Foundation; FRAXA, a Fragile X research foundation; and the Simons Foundation.

Simple push filling wins crown in battle against tooth decay

The Hall Technique, which uses preformed metal crowns pushed onto teeth with no dental injections or drilling, is favoured over traditional "drill and fill" methods by the majority of children who received it, reveals research published in the online open access journal BMC Oral Health. Tooth decay can be slowed, or even stopped, when it is sealed into the tooth by the crown.

Dr Nicola Innes, who led the Scottish research team at Dundee Dental Hospital and School, explained, "There has been a lot of debate in the UK over the best method to tackle tooth decay in children’s molars. Preformed metal crowns are not widely used in Scotland as they’re not viewed as a realistic option by dentists. We found, however, that almost all the patients, parents and dentists in our study preferred the Hall Technique crowns and also children benefited from them."

Traditionally, dentists "freeze" a decayed tooth with an injection in the child’s gum, and then drill away the decay, and fill the cavity with a metal filling. This method can be uncomfortable for the child. The Hall Technique, however, is simple. The decay is sealed into the tooth by the crown and, as sugars in the diet are unable to reach it, the decay slows or even stops. 132 children in Tayside, Scotland, had one decayed tooth filled traditionally, and another decayed tooth managed with the Hall Technique. 77% of the children, 83% of carers and 81% of dentists preferred the Hall Technique to traditional "drill and fill" methods. Dentists reported that 89% of the children showed no significant signs of discomfort with the Hall Technique, compared with 78% for the traditional fillings.

Around one in two children in Scotland has visible tooth decay at the age of 5, and most have to accept toothache as part of normal everyday life. Two years after receiving Hall Technique crowns, however, the children’s dental health had significantly improved, with less pain and fewer abscesses and failed fillings than with the traditional "drill and fill" methods.

Dr Innes concluded "Children, parents and dentists prefer the Hall Technique. It allows dentists to achieve a filling with a high quality seal, which means we can safely leave decay in baby teeth, and not be forced to drill it away. Hall crowns will not suit every child, or every decayed tooth in a child’s mouth, but dentists may find it a useful treatment option for managing decay in children’s back teeth."

Notes to editors:

1. The Hall Technique uses preformed metal crowns (PMCs) filled with glass ionomer cement, which are simply pushed onto the tooth with no caries removal:

* The PMC is cemented into place without tooth preparation or local anaesthetic injection.

* Decayed dental tissue is not removed but is sealed into the tooth by the PMC and cement, isolating it from sugars in the diet.

2. Dentists ranked the degree of discomfort their patients experienced, and the children, their parents/carers and dentists stated if they preferred the Hall Technique or traditional methods of treatment.

3. A copy of the Hall Technique instruction manual can be found at:

4. Images available upon request at press@

5. The Hall Technique: A Randomized Controlled Clinical Trial of a Novel Method of Managing Carious Primary Molars in General Dental Practice; Acceptability of the Technique and Outcomes at 23 Months. BMC Oral Health (in press).

During embargo, article available at:

After the embargo, article available at journal website:

Traffic jam mystery solved by mathematicians

Mathematicians from the University of Exeter have solved the mystery of traffic jams by developing a model to show how major delays occur on our roads with no apparent cause. Many traffic jams leave drivers baffled as they finally reach the end of a tail-back to find no visible cause for their delay. Now a team of mathematicians from the Universities of Exeter, Bristol and Budapest has found the answer, publishing its findings in the leading academic journal Proceedings of the Royal Society.

The team developed a mathematical model to show the impact of unexpected events, such as a lorry pulling out of its lane on a dual carriageway. The model revealed that when a driver reacting to such an event slows below a critical speed, the car behind is forced to slow down further, and the next car back to reduce its speed further still. The result is that several miles back, cars finally grind to a halt, with drivers oblivious to the reason for their delay. The model predicts that this is a very typical scenario on a busy highway (above 15 vehicles per km). The jam moves backwards through the traffic, creating a so-called ‘backward travelling wave’ that drivers may encounter many miles upstream, several minutes after it was triggered.

Dr Gábor Orosz of the University of Exeter said: "As many of us prepare to travel long distances to see family and friends over Christmas, we’re likely to experience the frustration of getting stuck in a traffic jam that seems to have no cause. Our model shows that overreaction of a single driver can have enormous impact on the rest of the traffic, leading to massive delays."

Drivers and policy-makers have not previously known why jams like this occur, though many have put it down to the sheer volume of traffic. While volume clearly plays a part, this new theory suggests the main issue is the smoothness of traffic flow: according to the model, heavy traffic will not automatically lead to congestion but can remain smooth-flowing. The model takes into account the time-delay in drivers’ reactions, which leads drivers to brake more heavily than would have been necessary had they identified and reacted to a problem ahead a second earlier.

Dr Orosz continued: "When you tap your brake, the traffic may come to a full stand-still several miles behind you. It really matters how hard you brake - a slight braking from a driver who has identified a problem early will allow the traffic flow to remain smooth. Heavier braking, usually caused by a driver reacting late to a problem, can affect traffic flow for many miles."
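
The delayed-reaction mechanism described above can be sketched as a toy car-following simulation. This is a minimal illustration under assumed parameters, not the team's published model: the speed-gap rule and all numbers here are invented for the sketch.

```python
# Toy delayed car-following simulation (illustrative only): each follower
# adopts a speed set by its gap to the car ahead as it stood one reaction
# time earlier. A brief braking manoeuvre by the lead car then propagates
# backwards through the platoon.

N = 20            # cars; car 0 is the leader
T = 400           # simulation steps
DT = 0.1          # seconds per step
DELAY = 10        # reaction delay in steps (1 second)
V0 = 20.0         # free-flow speed, m/s
JAM_GAP = 5.0     # gap at which a driver stops completely, m

def desired_speed(gap):
    """Speed a driver aims for, given the gap to the car ahead."""
    return max(0.0, min(V0, gap - JAM_GAP))

# start in equilibrium: 25 m spacing, everyone at free-flow speed
pos = [[-25.0 * i for i in range(N)]]
vel = [[V0] * N]

for t in range(1, T):
    ref = pos[max(0, t - DELAY)]           # positions as seen DELAY steps ago
    v = [5.0 if 50 <= t < 80 else V0]      # leader brakes for 3 s, then resumes
    for i in range(1, N):
        v.append(desired_speed(ref[i - 1] - ref[i]))
    pos.append([pos[-1][i] + v[i] * DT for i in range(N)])
    vel.append(v)

def first_slow(car, threshold=19.0):
    """First step at which a given car drops below the threshold speed."""
    return next(t for t, row in enumerate(vel) if row[car] < threshold)

# cars further back slow down progressively later: a backward travelling wave
print(first_slow(1), first_slow(5), first_slow(10))
```

Because each driver reacts to stale information, the slowdown each car applies is deeper than the one the car ahead needed, so the disturbance grows as it travels backwards through the queue, which is the stop-and-go behaviour the article describes.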

The research team now plans to develop a model for cars equipped with new electronic devices, which could cut down on over-braking as a result of slow reactions.

If you don't want to fall ill this Christmas, then share a festive kiss but don't shake hands

The fight against all types of infections, from colds and flu to stomach bugs and MRSA, begins at home, with good hand hygiene, says first review of hand hygiene in the community

We've all heard people say 'I won't kiss you, I've got a cold'. But a report just published warns that we may be far more at risk of passing on an infection by shaking someone's hand than by sharing a kiss.

A group of hygiene experts from the United States and the UK have published the first detailed report on hand hygiene in the home and community, rather than in hospital and healthcare settings. Their findings are published in the American Journal of Infection Control. They say that, if we want to avoid catching flu or tummy bugs, or protect ourselves and others from organisms such as MRSA, salmonella or C. difficile, then we have to start in our own homes, by paying greater attention to good hand hygiene. They also warn that, in the event of a flu pandemic, good hand hygiene will be the first line of defence during the early critical period before mass vaccination becomes available. This new report follows on from a study published last month in the British Medical Journal which indicated that physical barriers, such as regular handwashing and wearing masks, gloves and gowns may be more effective than drugs in preventing the spread of respiratory viruses such as influenza and SARS.

Good hygiene at home prevents organisms spreading from one family member to another. By reducing the number of carriers in the community, the likelihood of infections being carried into health care facilities by new patients and visitors is reduced. Good hygiene at home also means fewer infections, which means fewer patients demanding antibiotics from the GP, and fewer resistant strains developing and circulating in the community.

Cold and flu viruses can be spread via the hands, so that family members become infected when they rub their nose or eyes. The report details how germs that cause stomach infections, such as salmonella, campylobacter and norovirus, can also circulate directly from person to person via our hands. If we put our fingers in our mouths, which we do quite frequently without being aware of it, or forget to wash our hands before preparing food, then stomach germs can also be passed on via this route. Some of us also carry MRSA or C. difficile without even knowing; these organisms can be passed on via hands and other surfaces to family members who, if they are vulnerable to infection, may go on to become ill.

Professor Sally Bloomfield, one of the report’s authors, is the Chairman of the International Scientific Forum for Home Hygiene, the international organisation which produced the report. She is also a member of the London School of Hygiene & Tropical Medicine’s Hygiene Centre. She comments: 'With the colds and flu season approaching, it's important to know that good hand hygiene can really reduce the risks. What is important is not just knowing that we need to wash our hands but knowing when to wash them. Preventing the spread of colds and flu means good respiratory hygiene, which is quite different from good food hygiene. That's why the new respiratory hygiene campaign from the Department of Health in the UK, which advises people to "catch it, bin it, kill it", is spot on'.

The authors say that breaking the chain of infection from one person to another all depends on how well we wash our hands. If we don't do it properly, washing with soap and rinsing under running water, then we might as well not do it at all. They recommend also using an alcohol handrub in situations where there is high risk, such as after handling raw meat or poultry, or when there is an outbreak of colds or stomach bugs in the family home or workplace, or if someone in the family is more vulnerable to infection. They suggest carrying an alcohol rub or sanitiser at all times so that good hand hygiene can still be observed away from home in situations where there is no soap and water available.

Carol O'Boyle, of the School of Nursing, University of Minnesota, and a co-author of the report, says: 'Hand hygiene is just as important when we are outside the home - on public transport, in the office, in the supermarket, or in a restaurant. Quite often it's not possible to wash our hands in these situations, but carrying an alcohol-based hand sanitizer means we can make our hands hygienic whenever the need arises'.

The report warns that good hygiene is about more than just washing our hands. Although the hands are the main superhighway for the spread of germs, and thus the 'last line of defence', the surfaces from which hands become contaminated, such as food contact surfaces, door handles, tap handles, toilet seats and cleaning cloths, also need regular hygienic cleaning. Clothing and linens, baths, basins and toilet surfaces can also play a part in spreading germs between family members in the home.

Professor Elaine Larson, of the Mailman School of Public Health in New York and another co-author, says: ‘Because so much attention has been paid to getting people to wash their hands, there is a danger that people can come to believe this is all they need to do to avoid getting sick’.

Professor Bloomfield concurs. 'We hear a lot of discussion about whether being "too clean" is harming our immune systems, but we believe that this targeted approach to home hygiene, which focuses on the key routes for the spread of harmful organisms, is the best way to protect the family from becoming ill whilst leaving the other microbes which make up our environment unharmed'.

Dr. Val Curtis, Head of the London School of Hygiene & Tropical Medicine's Hygiene Centre concludes: 'Handwashing with soap is probably the single most important thing you can do to protect yourselves and your loved ones from infection this Christmas'.

Earliest Stage of Planet Formation Dated

UC Davis researchers have dated the earliest step in the formation of the solar system -- when microscopic interstellar dust coalesced into mountain-sized chunks of rock -- to 4,568 million years ago, within a range of about 2,080,000 years.

UC Davis postdoctoral researcher Frederic Moynier, Qing-zhu Yin, assistant professor of geology, and graduate student Benjamin Jacobsen established the dates by analyzing a particular type of meteorite, called a carbonaceous chondrite, which represents the oldest material left over from the formation of the solar system.

The physics and timing of this first stage of planet formation are not well understood, Yin said. So, putting time constraints on the process should help guide the physical models that could be used to explain it.

In the second stage, mountain-sized masses grew quickly into about 20 Mars-sized planets and, in the third and final stage, these small planets smashed into each other in a series of giant collisions that left the planets we know today. The dates of those stages are well established.

Carbonaceous chondrites are made up of globules of silica and grains of metals embedded in a black, organic-rich matrix of interstellar dust. The matrix is relatively rich in the element manganese, and the globules are rich in chromium. Looking at a number of different meteorites collected on Earth, the researchers found a straight-line relationship between the manganese-to-chromium ratio, the amount of matrix in the meteorites, and the amount of chromium-53.

These meteorites never became large enough to heat up from radioactive decay, so they have never been melted, Yin said. They are "cosmic sediments," he said.

By measuring the amount of chromium-53, Yin said, they could work out how much of the radioactive isotope manganese-53 had initially been present, giving an indication of age. They then compared the amount of manganese-53 to slightly younger igneous (molten) meteorites of known age, called angrites.
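
The underlying clock is ordinary exponential-decay arithmetic. As a back-of-the-envelope sketch, where the manganese-53 half-life is an approximate literature value and the initial isotope ratios are invented for illustration rather than taken from the team's measurements, the age gap between two samples follows from how much live manganese-53 each locked in:

```python
import math

# Extinct-radionuclide dating, sketched with invented numbers: a sample that
# formed earlier trapped more live 53Mn relative to stable 55Mn, and the
# time between two samples follows from the 53Mn half-life.

HALF_LIFE_MN53 = 3.7e6                  # years, approximate literature value
LAMBDA = math.log(2) / HALF_LIFE_MN53   # decay constant, per year

ratio_older = 6.5e-6     # assumed initial 53Mn/55Mn in the older sample
ratio_younger = 1.6e-6   # assumed initial 53Mn/55Mn in the younger anchor

# N(t) = N0 * exp(-LAMBDA * t), so the interval between the two samples is:
delta_t = math.log(ratio_older / ratio_younger) / LAMBDA
print(f"older sample predates the anchor by {delta_t / 1e6:.2f} million years")
```

Anchoring such an interval to an absolutely dated sample, as the researchers did with the angrites, converts the relative age into an absolute one.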

The UC Davis researchers estimate the timing of the formation of the carbonaceous chondrites at 4,568 million years ago, ranging from 910,000 years before that date to 1,170,000 years later.

"We've captured a moment in history when this material got packed together," Yin said.

The work is published in the Dec. 20 issue of Astrophysical Journal Letters, and was funded by grants from NASA.

Squirrels Use Snake Scent

Squirrels use shed snake skins to mask their scent from predators, a UC Davis researcher has found. (Barbara Clucas/UC Davis photo)

California ground squirrels and rock squirrels chew up rattlesnake skin and smear it on their fur to mask their scent from predators, according to a new study by researchers at UC Davis.

Barbara Clucas, a graduate student in animal behavior at UC Davis, observed ground squirrels (Spermophilus beecheyi) and rock squirrels (Spermophilus variegatus) applying snake scent to themselves by picking up pieces of shed snakeskin, chewing it and then licking their fur.

Adult female squirrels and juveniles apply snake scent more often than adult males, which are less vulnerable to predation by snakes, Clucas said. The scent probably helps to mask the squirrel's own scent, especially when the animals are asleep in their burrows at night, or to persuade a snake that another snake is in the burrow.

Squirrel chewing on snake skin

The squirrels are not limited to the use of shed snake skins, said Donald Owings, a professor of psychology at UC Davis who is Clucas' adviser and an author on the paper. They also pick up snake odor from soil and other surfaces on which snakes have been resting, and use that to scent themselves. Similar behavior has been observed in other rodents.

Snake-scent application is one of a remarkable package of defenses that squirrels use against rattlesnakes, Owings said. In earlier work, Owings' lab has found that squirrels can: heat up their tails to send a warning signal to rattlesnakes, which can "see" in the infrared; assess how dangerous a particular snake is, based on the sound of its rattle; and display assertive behavior against snakes to deter attacks. In addition, work by Owings' colleague, psychology professor Richard Coss, has demonstrated that these squirrels have evolved resistance to snake venom.

"It's a nice example of the opportunism of animals," Owings said. "They're turning the tables on the snake."

The other authors on the paper, which was published Nov. 28 in the journal Animal Behaviour, are Matthew Rowe of Sam Houston State University, Texas, and Patricia Arrowood of New Mexico State University. The work was funded by the National Science Foundation and the Animal Behavior Society.

Deer-like fossil is a missing link in whale evolution

* 18:00 19 December 2007

* news service

* Jason Palmer

A racoon-sized mammal that lived in India about 48 million years ago may represent one of the missing links in whale evolution, suggests a new fossil study.

The research also challenges the idea that cetaceans – the order that includes whales, dolphins, and porpoises – split from their land-dwelling forebears and returned to the water to hunt aquatic prey.

Researchers studying 48-million-year-old fossils of Indohyus – an extinct animal which may have looked like a small deer – from ancient riverbeds in Kashmir suggest that the fossils represent a likely ancestor of the cetaceans.

Evidence shows that Indohyus was at least in part an eater of vegetation and did not return to a watery life to hunt (Image: Carl Buell)

Indohyus belongs to a family known as raoellids and would have lived around the same time as early cetaceans, both having descended from a common ancestor, they suggest.

Back in 2001, Hans Thewissen at the Northeastern Ohio Universities College of Medicine, US, and his colleagues showed that cetaceans are descended from artiodactyls (even-toed ungulates), a group that includes pigs, sheep, and hippos.

But a number of physical features of cetaceans had not been seen in this group, leaving a hole in the evolutionary chain.

Now Thewissen’s study of fossils of the artiodactyl Indohyus may have filled this evolutionary hole.

Aquatic evidence

The fossils showed an asymmetry in a structure that surrounds the middle and inner ear, called the auditory bulla, an asymmetry that until now had been thought unique to cetaceans. The shape of the fossils’ teeth and the position of Indohyus’s eyes, which sat nearer the top of the head than in typical artiodactyls, were also more cetacean-like.

What is more, the team found that the ratios of different isotopes of oxygen in the Indohyus fossils indicated an aquatic environment, and the fossils’ bones were particularly dense – possibly adapted for a life wading in water.

The real surprise came when the team looked at the ratios of different isotopes of carbon in Indohyus's teeth, an indication of their diet. The results showed Indohyus had a diet that consisted at least in part of vegetation. This suggests that Indohyus was a shallow water wader already, and had not returned to the water simply to hunt live prey.

Food for thought

"It really adds a step we didn’t have before," Thewissen told New Scientist. "What was shocking was that we always thought early whales were chasing live prey. This study tells us that Indohyus was already aquatic, but the teeth tell us it is not a carnivore."

Thewissen suggests that the common ancestor of the two groups ate on land, but took to water in times of danger (like the African mousedeer, a modern artiodactyl).

The racoon-sized mammal is distantly related to pigs, sheep and hippos, but has distinctive features in common with cetaceans (Image: Jacqueline Dillard)

But the return to the seas came about before cetaceans developed their exclusive taste for live prey, and Thewissen suggests that it was this change that defined the cetacean order rather than the change in diet driving them back to water.

"It’s really very important, because it shows evidence of one of the major shifts in the evolution of mammals, the shift toward being carnivorous," says Christian de Muizon of the National Museum of Natural History in Paris, France, of the differing diets of the two groups.

Journal reference: Nature (DOI: 10.1038/nature06343)

Moon is younger and more Earth-like than thought

* 20:13 19 December 2007

* news service

* Maggie McKee

It's a good thing the Moon doesn't have any feelings to hurt. New research suggests it is actually 30 million years younger than anyone had thought, and that it is merely a 'chip off the old block' of Earth rather than being made up of the remnants of a Mars-sized body that slammed into Earth billions of years ago.

That violent impact was thought to have taken place 30 million years after the solar system began to condense from a disc of gas and dust 4.567 billion years ago. The event was thought to have melted the Earth, generating a magma ocean that covered the planet and allowed iron and other metals to sink to its centre, forming a core.

At the same time, the Moon was thought to have coalesced from a disc of molten debris blasted off the Earth and the Mars-sized interloper.

But new research led by Mathieu Touboul of the Swiss Federal Institute of Technology in Zurich suggests that picture is not so simple. The researchers base their analysis on studies of an isotope of the metal tungsten in lunar rocks.

That isotope, tungsten-182, is produced by the decay of two other isotopes: hafnium-182, which has a half-life of 9 million years, and tantalum-182. Tantalum-182, however, is not an intrinsic component of the Moon: it forms when energetic charged particles from space, called cosmic rays, slam into the lunar surface.

Previous estimates of the Moon's age were based on tungsten measurements that did not subtract the effect of the decay of tantalum. "It is crucial to remove all the tungsten-182 coming from the cosmic-ray production," Touboul told New Scientist. "Otherwise, the age one gets is too old."
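
The nine-million-year half-life is what makes this system a clock for only the earliest events. A quick illustration of the arithmetic (the half-life is the figure quoted above; everything else is for the sketch only):

```python
# With a 9-million-year half-life, hafnium-182 is effectively extinct within
# a few tens of millions of years, so a body's tungsten-182 record freezes
# very early in solar-system history. Illustrative arithmetic only.

HALF_LIFE_HF182 = 9.0e6   # years, as quoted in the article

def fraction_remaining(t_years):
    """Fraction of the original 182Hf still undecayed after t_years."""
    return 0.5 ** (t_years / HALF_LIFE_HF182)

for t in (9e6, 30e6, 60e6):
    print(f"after {t / 1e6:.0f} Myr: {fraction_remaining(t):.4f} of 182Hf remains")
```

Less than one percent of the original hafnium-182 survives past about 60 million years, which is why the tungsten-182 it leaves behind records only the solar system's opening chapters.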

Lengthy formation

When Touboul's team accounted for tantalum, they found that the giant impact had to have occurred at least 50 million years after the solar system began to form, and that the Moon had completed its formation within the next 10 million years – about 30 million years later than thought.

The revised timing of the impact implies the terrestrial planets, such as the Earth and Mars, took longer to build up from the collision of smaller 'planetesimals' than previously thought. "The age of the Moon is also the age of Earth because the Moon-forming giant impact was the last major event in Earth's formation," says Touboul.

Alan Brandon, a scientist at NASA's Johnson Space Center in Houston, Texas, US, agrees. "It may mean that Earth and Mars took at least 50 million years, and possibly hundreds of millions of years, to reach their final mass," he comments.

The researchers also found that the composition of the Moon appears identical to that of the Earth's rocky mantle, "such that a major portion of the Moon must have been from proto-Earth", Brandon told New Scientist.

Similar makeup

He says this runs counter to some computer models showing that at least 80% of the Moon is made up of material from the Mars-sized world, which is expected to have a different makeup from the Earth. "I think the Moon-forming impact models will have to be redone to try to get an explanation for why the Earth and Moon are so compositionally similar," he says.

Intriguingly, the new work suggests the Moon formed at least 16 million years after the Earth's core formed. That raises questions about how the planet's iron-rich core could have coalesced in the absence of global magma oceans produced by the Moon-forming impact.

"It could be that there were several generations of magma oceans in Earth," Brandon says. "My guess is … that the Earth probably had a magma ocean at the time the Earth's core formed," he says, adding that the giant impact may have re-melted the material millions of years later.

Journal reference: Nature (vol 450, p 1169 and 1206)

'Active glacier found' on Mars

By Paul Rincon Science reporter, BBC News

A probable active glacier has been identified for the first time on Mars.

The icy feature has been spotted in images from the European Space Agency's (Esa) Mars Express spacecraft.

Ancient glaciers, many millions of years old, have been seen before on the Red Planet, but this one may only be several thousand years old.

The young glacier appears in the Deuteronilus Mensae region between Mars' rugged southern highlands and the flat northern lowlands.

"If it was an image of Earth, I would say 'glacier' right away," Dr Gerhard Neukum, chief scientist on the spacecraft's High Resolution Stereo Camera (HRSC), told BBC News.

"We have not yet been able to see the spectral signature of water. But we will fly over it in the coming months and take measurements. On the glacial ridges, we can see white tips, which can only be freshly exposed ice."

Exposed ice is found in very few places on the Red Planet because, as soon as ice is exposed to the Martian environment, it sublimates (turns from a solid state directly into gas).

Flooding event

In Deuteronilus Mensae, Dr Neukum estimates that water came up from underground in the last 10,000 to 100,000 years.

"That means it is an active glacier now. This is unique, and there are probably more," said Dr Neukum.

The water subsequently froze over and glaciers developed, the researcher from the Free University in Berlin, Germany, explained.

This image from the High Resolution Stereo Camera aboard Esa's Mars Express spacecraft shows a perspective view of the glacial feature located in Deuteronilus Mensae.

Not all researchers share his view of events. Some believe that snowfall causes glaciers to develop on Mars, as it does on Earth. But Gerhard Neukum thinks there is too little precipitation on the Red Planet for this to be the case.

Glacial features have been seen before on the Olympus Mons volcano. But these are thought to be about four million years old.

Dr Neukum said glacial features would be prime locations for robotic rovers to look for evidence of life on Mars.

If microbes survive deep below Mars, they could be transported to the surface by water gushing up from deep underground.

Last month, Esa celebrated Mars Express's five thousandth orbit of the Red Planet. The unmanned probe arrived at Mars on 25 December 2003.

More evidence for new species hidden in plain sight

Two articles published today in the online open access journals BMC Evolutionary Biology and BMC Biology provide further evidence that we have hugely underestimated the number of species with which we share our planet. Today, sophisticated genetic techniques mean that superficially identical animals previously classed as members of a single species, including the frogs and giraffes in these studies, could in fact belong to several distinct ‘cryptic’ species.

In the Upper Amazon, Kathryn Elmer and Stephen Lougheed, working at Queen’s University, Kingston, Canada, teamed up with José Dávila of the Instituto de Investigación en Recursos Cinegéticos, Ciudad Real, Spain, to investigate the terrestrial leaflitter frog (Eleutherodactylus ockendeni) at 13 locations across Ecuador.

Looking at the frogs’ mitochondrial and nuclear DNA, the researchers found three distinct species, which look very much alike. These species have distinct geographic distributions, but these don't correspond to modern landscape barriers. Coupled with phylogenetic analyses, this suggests they diverged before the Ecuadorean Andes arose, in the Miocene period over 5.3 million years ago.

"Our research coupled with other studies suggests that species richness in the upper Amazon is drastically underestimated by current inventories based on morphospecies," say the authors.

And in Africa, an interdisciplinary team from the University of California, Los Angeles, Omaha’s Henry Doorly Zoo, and the Mpala Research Centre in Kenya has found that there may be more to the giraffe than meets the eye, too.

Their analysis of nuclear and mitochondrial DNA shows at least six genealogically distinct lineages of giraffe in Africa, with little evidence of interbreeding between them. Further divisions within these groups mean that in total the researchers have spotted 11 genetically distinct populations.

"Such extreme genetic subdivision within a large vertebrate with high dispersal capabilities is unprecedented and exceeds that of any other large African mammal," says graduate student David Brown, first author of the study. The researchers estimate that the giraffe populations they surveyed have been genetically distinct for between 0.13 and 1.62 million years. The findings have serious implications for giraffe conservation, because some of these subgroups have as few as 100 members, making them highly endangered, if not yet officially recognised, species.

Sea cucumber protein used to inhibit development of malaria parasite

Scientists have genetically engineered a mosquito to release a sea-cucumber protein into its gut, impairing the development of malaria parasites, according to research out today (21 December) in PLoS Pathogens. Researchers say this development is a step towards developing future methods of preventing the transmission of malaria.

Malaria is caused by parasites whose lives begin in the bodies of mosquitoes. When mosquitoes feed on the blood of an infected human, the malaria parasites undergo complex development in the insect’s gut. The new study has focused on disrupting this growth and development with a lethal protein, CEL-III, found in sea cucumbers, to prevent the mosquito from passing on the parasite.

Human blood infected with malaria contains parasitic gametocytes – cells which can create parasite sperm and eggs in the gut of the insect. These then fertilise, kick-starting the parasite reproductive process and life cycle by producing invasive offspring called ookinetes.

These ookinetes then migrate through the mosquito’s stomach wall and produce thousands of ‘daughter’ cells known as sporozoites. After 10-20 days these are ready in the salivary glands to infect another human when the mosquito takes a subsequent blood meal.

The international team fused part of the sea cucumber lectin gene with part of a mosquito gene so that the mosquito would release lectin into its gut during feeding. The released lectin is toxic to the ookinete and therefore kills the parasite in the mosquito’s stomach.

In laboratory tests the research team showed that introducing lectin to the mosquito’s gut in this way significantly impaired the development of malaria parasites inside the mosquito, potentially preventing transmission to other people. Early indications suggest that this sea cucumber protein could be effective on more than one of the four different parasites that can cause malaria in humans.

Professor Bob Sinden from Imperial College London’s Department of Life Sciences, one of the authors on the paper said: "These results are very promising and show that genetically engineering mosquitoes in this way has a clear impact on the parasites’ ability to multiply inside the mosquito host."

However, Professor Sinden explains that there is still a lot of work to do before such techniques can be used to combat the spread of malaria in real-world scenarios. Although the sea cucumber protein significantly reduces the number of parasites in mosquitoes, it does not remove all parasites from all mosquitoes, and at this stage of development it would not be effective enough to prevent transmission of malaria to humans.

Professor Sinden says he hopes studies such as this one, which improve scientists’ understanding of the complex process by which malaria parasites are transmitted, will lead to new advances in the quest to prevent malaria.

"Ultimately, one aim of our field is to find a way of genetically engineering mosquitoes so that the malaria parasite cannot develop inside them. This study is one more step along the road towards achieving that goal, not least because it has been shown that more than one species of malaria can be killed in this way."

About 40% of the world’s population are at risk of malaria. Of these 2.5 billion people at risk, more than 500 million become severely ill with malaria every year and more than 1 million die from the effects of the disease. Malaria is an especially serious problem in Africa, where one in every five childhood deaths is due to the effects of the disease. An African child has on average between 1.6 and 5.4 episodes of malaria fever each year.

A link between greenhouse gases and the evolution of C4 grasses

How a changing climate can affect ecosystems is an important and timely question, especially considering the recent global rise in greenhouse gases. Now, in an article published online on December 20th in the journal Current Biology, evolutionary biologists provide strong evidence that changes in global carbon dioxide levels probably had an important influence on the emergence of a specific group of plants, termed C4 grasses, which includes major cereal crops, plants used for biofuels, and species that represent important components of grasslands across the world.

C4 plants are specially equipped to combat an energetically costly process, known as photorespiration, that can occur under conditions of high temperature, drought, high salinity, and—with relevance to these latest findings—low carbon dioxide levels. Although a combination of any of these factors might have provided the impetus behind the evolution of the various C4 lineages, it had been widely speculated that a drop in global carbon dioxide levels, occurring approximately 30 million years ago during the Oligocene epoch, may have been the major driving force. Establishing the link between the two, however, has proven difficult partly because there are no known fossils of C4 plants from this period. Enter Pascal-Antoine Christin and colleagues from the University of Lausanne, Switzerland, who decided to take an alternative approach to date a large group of grasses. By using a "molecular clock" technique, the authors were able to determine that the Chloridoideae subfamily of grasses emerged approximately 30 million years ago, right around the time global carbon dioxide levels were dropping. Furthermore, a model of the evolution of these grasses suggests that this correlation is not a trivial coincidence and instead reflects a causal relationship.

As the authors noted in their study, many of the C4 grasses evolved after the drop in global carbon dioxide levels 30 million years ago. How to explain this? The authors speculate that while an atmosphere low in carbon dioxide established the basic conditions necessary for C4 evolution, other ecological factors might also have been at work. In light of this, the authors hope to apply the same approaches used in the paper described here to investigate the role of other variables, such as drought, salinity, and flooding, in the evolution of C4 plants. In addition to improving our understanding of how climate changes influenced ecosystems in the past, such studies may allow predictions of how human activities could affect the planet in the future. Indeed, with regard to global carbon dioxide levels, Christin and colleagues write, "Besides its influence on climatic variables, increased CO2 concentration could trigger important ecological changes in major terrestrial ecosystems by affecting the distribution of C4-dominated biomes and the affiliated flora and fauna." This implies that a reversal of the conditions that favored C4 plants could potentially lead to their demise—a startling prospect if one considers the human race’s reliance on C4 crops like corn, sugarcane, sorghum, and millets.

Surgery without stitches

A thin polymer bio-film that seals surgical wounds could make sutures a relic of medical history.

Measuring just 50 microns thick, the film is placed on a surgical wound and exposed to an infrared laser, which heats the film just enough to meld it and the tissue, thus perfectly sealing the wound.

Known as Surgilux, the device’s raw material is extracted from crab shells and has Food and Drug Administration approval in the US.

Early test results indicate that it has strongest potential for use in brain and nerve surgery because it avoids the numerous disadvantages of invasive sutures, which can fail to seal and can act as a source of infection.

Up to 11% of brain surgery patients have to return for repeat surgery due to leakage of cerebro-spinal fluid (CSF) and other complications arising from sutures.

Surgical sutures date back some 4,000 years, so a new approach is long overdue, according to one of the device’s inventors and leader of the Bio/polymer Research Group, UNSW scientist John Foster.

"Others have tried surgical glues, but these are mainly gel-like, so bonding to the tissue is uneven, often resulting in leakages, and they’re not easy to use. The strongest surgical glue is so toxic that it’s limited to external applications," says Dr Foster. "Other devices use ultra-violet light to effect rather poor sealing, but UV rays are damaging to living cells.

"The beauty of this is that infra-red laser doesn’t cause any tissue damage. Better still, Surgilux has anti-microbial properties, which deters post-operative infections."

Foster and his team are working with micro-surgeon Marcus Stoodley, who specialises in nerve repair. Based at the Prince of Wales Hospital, Stoodley is excited about early test results.

"Surgilux is well suited to repairing damaged nerves because the gold standard -- sutures – inevitably cause damage to nerves and there is always some permanent loss of function.

"Our test results with rats have shown some degree of permanent nerve recovery within six weeks of operating."

The researchers – who are looking for commercial backing to initiate clinical trials – are planning a second generation version of Surgilux that incorporates growth factors and perhaps stem cells to regenerate nerves.

Most breast cancer surgeons don’t talk to patients about reconstruction options, U-M study finds

Women more likely to choose mastectomy after discussing reconstruction

ANN ARBOR, MI – Only a third of patients with breast cancer discussed breast reconstruction options with their surgeon before their initial surgery, according to a new study from the University of Michigan Comprehensive Cancer Center.

What’s more, women who did discuss reconstruction up front were four times more likely to have a mastectomy compared to those women who did not discuss reconstruction.

"The surgical decision making for breast cancer is really centered on patient preference. Long-term outcomes are the same regardless of whether a woman is treated with a lumpectomy or a mastectomy. But that choice could have significant impact on a woman’s quality of life, sexuality and body image. It’s important for women to understand all of their surgical options – including breast reconstruction – so they can make the best choice for themselves," says study author Amy Alderman, M.D., M.P.H., assistant professor of plastic surgery at the U-M Medical School.

The study appears Dec. 21 in the online version of the journal Cancer, and will appear in the Feb. 1 print edition.

The study looked at 1,178 women from the Detroit and Los Angeles metropolitan areas who had undergone surgery for breast cancer. Patients were contacted about three months after diagnosis and were asked whether they had discussed breast reconstruction with their surgeon before their surgery. Patients were also asked whether knowing about reconstruction options affected their decision to receive a mastectomy.

The researchers found that younger and more educated women were more likely to discuss reconstruction with their surgeon. They also found that this discussion significantly affected a woman’s treatment decision, with women who knew about reconstruction options four times more likely to choose a mastectomy.

Breast reconstruction can be performed immediately after a mastectomy, which removes the entire breast. This type of reconstruction leads to better aesthetic outcomes and psychological benefits for the patient, compared to delayed reconstruction, previous studies have shown.

"To many women, breast reconstruction is a symbol of hope that they can get past this cancer diagnosis. Reconstruction is not necessarily the right option for every woman and not everyone is going to choose reconstruction, but I think it’s important that every woman is informed of what the benefits of reconstruction can be for their physical and emotional well-being," Alderman says.

The researchers urge general surgeons to include discussion of all surgical options – lumpectomy, mastectomy and mastectomy with reconstruction – at a point when a patient is considering her choices. General surgeons could refer patients to plastic surgeons to discuss options before the initial surgery. Decision aids should also incorporate information about reconstruction, the researchers write.

"Patients need to be educated consumers of their health care. If a physician does not bring up an option, the patient needs to ask. She needs to either ask the physician to provide the information or ask for a referral to a specialist who can provide the information. Women need to be proactive about their health care," Alderman says.

Some 180,000 Americans will be diagnosed with breast cancer this year. For information about treatment options, visit or call the U-M Cancer AnswerLine at 800-865-1125.

In addition to Alderman, study authors were Sarah T. Hawley, Ph.D., U-M Medical School and Ann Arbor VA Health Care System; Jennifer Waljee, M.D., U-M Medical School; Mahasin Mujahid, Ph.D., U-M School of Public Health; Monica Morrow, M.D., Fox Chase Cancer Center; and Steven J. Katz, M.D., M.P.H., U-M Medical School and Ann Arbor VA Health Care System.

Funding for the study was from the National Cancer Institute.

Reference: Cancer, published online Dec. 21, 2007; print issue date: Feb. 1, 2008.

Number of conflicts in the world no longer declining

The trend toward fewer conflicts reported by peace researchers since the early 1990s now seems to have been broken. This is shown in the latest annual report "States in Armed Conflict," from the Uppsala Conflict Data Program at the Uppsala University Department of Peace and Conflict Research. The findings worry the researchers. The Middle East is the region where peace initiatives are most conspicuous in their absence.

Since the most conflict-ridden years of the early 1990s, a continuous decline was registered up to 2002. Since then the number has held steady at around 30 active armed conflicts per year. This is probably also the case for 2007.

"This is of course a cause of concern. Today’s ongoing conflicts are extremely protracted," comment researchers Professor Peter Wallensteen and Lotta Harbom. "This indicates that the successful negotiation efforts of the 1990s are no longer being carried out with the same force or effectiveness."

Today’s conflicts appear to be intractable and drawn-out, and the researchers believe that the 1990s peace strategies need to be improved in order to achieve results. At the same time, there are encouraging trends. Conflicts between different groups and peoples, with no involvement of the state, are decreasing in the number of both conflicts and fatalities.

"This type of conflict often arises in the wake of civil war, but they seem to be easier to bring to an end," says Joakim Kreutz at the Uppsala Conflict Data Program.

One event that received a great deal of attention during 2007 was the violence perpetrated against demonstrating monks in Burma, but this type of violence against civilians is becoming less common. Even though there still are armed attacks on civilians in many countries, there is a great difference compared with the situation in the 1990s, when the genocide in Rwanda, for example, claimed hundreds of thousands of victims.

There are also points of light when it comes to conventional conflicts. Peace negotiations are underway in a number of conflicts, and they are also leading to peace treaties. The agreements in Nepal (from 2006) and Aceh in Indonesia (from 2005) are now being implemented with some degree of determination. Also, peace-making measures in a number of West African countries, like Sierra Leone, Liberia, and Ivory Coast, continue to be fruitful.

The Middle East is the region in which peace initiatives are most clearly conspicuous in their absence. The central importance of the region for the world’s oil supply and for world religions makes this serious. The conference in Annapolis in late November 2007 was the first attempt since 2001 to bring the parties together. They even found it difficult to agree on the declaration that started the negotiations, notes Peter Wallensteen.

"This is a worrisome sign. At the same time, we have to welcome all attempts to bring peace to this area. It has been more than 60 years since the UN General Assembly adopted a plan for Palestine. It must be adapted to today’s reality and implemented."

During the year, other regional conflict complexes emerged or worsened. The crisis in the Sudanese region of Darfur is now spreading to surrounding countries such as Chad and the Central African Republic.

"These developments have prompted neighboring countries to take certain peace initiatives," states Lotta Harbom. "The international mediators in the Darfur conflict, including Jan Eliasson, who is also a visiting professor at Uppsala University, are working to arrange negotiations among the parties. But thus far they have had no success."

The situation in Africa’s Horn continues to be troublesome. The region’s own conflict dynamics have come to be more and more intertwined with the US-headed war on terror. This has led to new conflict issues being added to the unresolved disputes between Ethiopia and Eritrea. Somalia has once again become a seat of conflict.

The conflicts in Iraq and Afghanistan have created more uncertainty for the neighboring states. Turkey, Iran, and Syria have shown their extreme displeasure with the activities of the Turkish-Kurdish guerilla PKK from their base in the Kurdish provinces of Iraq. Turkey’s attack in early December was predictable. The Taliban’s increased military activity in Afghanistan and Al-Qaida’s operations have influenced developments in Pakistan. This can affect the stability and chances for democracy in this nuclear-weapons state and have a negative impact on the otherwise promising drop in violence in Kashmir.

More information: Harbom, Lotta (ed.) States in Armed Conflict 2006, Uppsala University: Department of Peace and Conflict Research.

Light powered platinum more targeted & 80 times more powerful than similar cancer treatments

Researchers from the Universities of Warwick, Edinburgh, Dundee and the Czech Republic’s Institute of Biophysics have discovered a new light-activated platinum-based compound that is up to 80 times more powerful than other platinum-based anti-cancer drugs and which can use "light activation" to kill cancer cells in a much more targeted way than similar treatments.

The platinum-based compound, known as "trans, trans, trans-[Pt(N3)2(OH)2(NH3)(py)]", or a light-activated PtIV complex, is highly stable and non-toxic if kept in the dark, but if light falls upon it, it becomes much less stable and highly toxic to cancer cells. In fact it is between 13 and 80 times more toxic (depending on how and on which cells it is used) to cancer cells than the current platinum-based anti-cancer drug cisplatin. Moreover, it kills the cells by a different mechanism of action, so it can also kill cisplatin-resistant cells.

Professor Peter Sadler, Chairman of the Chemistry Department of the University of Warwick, who led the research project said:

"Light activation provides its massive toxic power and also allows treatment to be targeted much more accurately against cancer cells."

The compound could be used in particular to treat surface cancers. Patients could be treated in a darkened environment with light directed specifically at cancer cells containing the compound activating the compound’s toxicity and killing those cells. Normal cells exposed to the compound would be protected by keeping the patient in darkness until the compound has passed through and out of the patient.

The new light activated PtIV complex is also more efficient in its toxic action on cancer cells in that, unlike other compounds currently used in photodynamic therapy, it does not require the presence of significant amounts of oxygen within a cancer cell to become toxic. Cancer cells tend to have less oxygen present than normal cells.

Although this work is in its early stages, the researchers are hopeful that, in a few years' time, the new platinum compound could be used in a new type of photoactivated chemotherapy for cancer.

Note for editors: The research has just been published in PNAS (Proceedings of the National Academy of Sciences) under the title "A potent cytotoxic photoactivated platinum complex". The authors are: project leader Professor Peter Sadler (University of Warwick) and Ana M. Pizarro (University of Warwick); Fiona S. Mackay, Stephen A. Moggach and Simon Parsons (University of Edinburgh); Julie A. Woods (University of Dundee); and Pavla Heringová, Jana Kašpárková and Viktor Brabec (Institute of Biophysics, Academy of Sciences of the Czech Republic).

To curious aliens, Earth would stand out as living planet

GAINESVILLE, Fla. — With powerful instruments scouring the heavens, astronomers have found more than 240 planets in the past two decades, none likely to support Earth-like life.

But what if aliens were hunting life outside their own planet? Armed with telescopes only a bit bigger and more powerful than our own, could they peer through the vastness of space and lock in onto Earth as a likely home to life?

That’s the question at the heart of a paper co-authored by a University of Florida astronomer that appeared this week in the online edition of the Astrophysical Journal. The answer, the authors say, is a qualified "yes." With a space telescope larger than the Hubble Space Telescope pointed directly at our sun, they say, "hypothetical observers" could measure Earth’s 24-hour rotation period, leading to observations of oceans and the chance of life.

"They would only be able to see Earth as a single pixel, rather than resolving it to take a picture," said Eric Ford, a UF assistant professor of astronomy and one of five authors of the paper. "But that could be enough for them to identify our planet as one that likely contains clouds and oceans of liquid water."

This research may sound whimsical, but it has a serious goal: to provide a road map for Earth-bound astronomers trying to study Earth-like planets — a task expected to become possible in coming decades as more powerful telescopes come on line, said Enric Palle, the lead author of the paper and an astronomer with the Instituto de Astrofisica de Canarias.

For humans or curious aliens, observing planets is challenging for a number of reasons – habitable planets all the more so. The planet can’t be too close or too far away from its star, or its surface would scald or freeze. And, it must have a protective atmosphere like Earth’s.

Most planets found so far are much larger than Earth, which means they are likely hot gas planets similar to Jupiter, a profoundly uninhabitable place with no solid surface and atmosphere composed largely of hydrogen and helium.

But astronomers are beginning to plan how future space telescopes could directly detect planets much closer to Earth’s size and proximity to the sun. One challenge: To figure out how to use a planet’s light to recognize if its surface and atmosphere are Earth-like.

For Ford and his colleagues, the answer lies in probing how the Earth would appear to outside or alien observers.

Astronomers have long recognized that even a large telescope would need to observe Earth for several weeks to collect enough light to identify chemicals in the planet’s atmosphere. During these observations, the brightness of the Earth would change, primarily because of clouds rotating into and out of view. If astronomers could measure Earth’s rotation period, then they would know when a given part of the planet was in view. The hitch was that astronomers were unsure whether Earth’s seemingly chaotically changing cloud patterns would make it impossible for alien observers to determine this rotation rate.

Based on data retrieved from satellite observations of Earth, Ford and his colleagues created a computer model for the brightness of the Earth, revealing that on the global scale Earth’s cloud cover is remarkably consistent — with rain forests usually turning up cloudy, arid regions clear, and so on. As a result, extraterrestrial astronomers who watched Earth for a period of several months would notice repeating patterns – a bit like watching the spots on a spinning ball come into view and then disappear. From those repeating patterns, they could then deduce Earth’s 24-hour rotation period, Ford said.
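The core idea here — recovering a planet's rotation period from the repeating pattern in its brightness — can be sketched with a simple autocorrelation on synthetic data. This is only an illustration of the principle, not the authors' actual model; all numbers below are invented for the demonstration.

```python
import numpy as np

# Illustrative sketch: infer a planet's rotation period from a noisy,
# repeating light curve by looking for the first strong autocorrelation peak.
rng = np.random.default_rng(0)
period_hours = 24.0
t = np.arange(0, 24 * 60, 1.0)                   # hourly samples over 60 days
signal = np.sin(2 * np.pi * t / period_hours)    # repeating surface/cloud pattern
flux = signal + 0.3 * rng.standard_normal(t.size)  # add observational noise

# Autocorrelation of the mean-subtracted light curve, normalised at lag 0
f = flux - flux.mean()
acf = np.correlate(f, f, mode="full")[f.size - 1:]
acf /= acf[0]

# The strongest peak after lag 0 estimates the rotation period;
# here we search between 12 and 48 hours.
lag = int(np.argmax(acf[12:48])) + 12
print(f"estimated rotation period: {lag} hours")
```

Real cloud patterns only repeat approximately, which is why the paper needed satellite data to show that Earth's global cloud cover is consistent enough for the peak to survive the noise.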

That done, the "E.T." astronomers could infer that anomalies in the pattern were caused by changing weather patterns, most prominently clouds, he said. Although some uninhabitable planets are extremely cloudy, the repeated presence and absence of clouds indicates active weather. On Earth, this variability results from water cycling between liquid and vapor, so finding similar variability on another planet would be a reasonable indication of liquid water.

"Venus is always covered in clouds. The brightness never changes," Ford said. "Mars has virtually no clouds. Earth, on the other hand, has a lot of variation."

Not only that, but observers could likely also infer the presence of continents and oceans from Earth’s changing light pattern.

The research will be useful to astronomers designing the next generation of space telescopes because it provides an outline of the capabilities required for studying the surfaces of Earth-like planets, Ford said. He said it appears that zeroing in on Earth-like planets orbiting the nearest stars would require a telescope at least twice the size of the Hubble Space Telescope. Ford said he hopes that his research will help to motivate an ever larger space telescope that could search for Earth-like planets around many stars.

The other authors of the paper are P. Montañés-Rodríguez and M. Vazquez, both of the Instituto de Astrofisica de Canarias in Spain, and Sara Seager, of the Massachusetts Institute of Technology. The IAC and UF are partners in the construction of the Gran Telescopio Canarias, a 10-meter telescope in the Canary Islands, which will start operations in 2008.

The research was funded in part by a Ramon y Cajal fellowship for Palle, by a Hubble fellowship and UF for Ford, and by a NASA grant for Seager.

Mars rovers find new evidence of 'habitable niche'; perilous third winter approaches

By Lauren Gold

Inch by power-conserving inch, drivers on Earth have moved the Mars rover Spirit to a spot where it has its best chance at surviving a third Martian winter -- and where it will celebrate its fourth anniversary (in Earth years) since bouncing down on Mars for a projected 90-day mission in January 2004.

Meanwhile, researchers are considering the implications of what Cornell's Steve Squyres, principal investigator for NASA's Mars Exploration Rover mission, calls "one of the most significant" mission discoveries to date: silica-rich deposits uncovered in May by Spirit's lame front wheel that provide new evidence for a once-habitable environment in Gusev Crater.

Squyres and colleagues reported the silica deposits at the annual meeting of the American Geophysical Union in early December in San Francisco.

On the other side of Mars, Spirit's still-healthy twin Opportunity is creeping slowly down the inside of Victoria Crater, where layers of exposed rock are confirming findings made at the much smaller Eagle and Endurance craters -- and where deeper layers could offer new insight into the planet's history.

Spirit, which has been driving backward since its right front wheel stopped turning in March 2006, was exploring near a plateau in the Gusev Crater known as Home Plate when scientists noticed that upturned soil in the wake of its dragging wheel appeared unusually bright.

Top image: The deck of NASA's Mars Exploration Rover Spirit is so dusty that the rover almost blends into the background in this image assembled from frames taken by the panoramic camera (Pancam) during the period from Spirit's Sol (Martian day) 1,355 through Sol 1,358 (Oct. 26-29, 2007). The bottom image of Spirit -- taken on Sol 586 (Aug. 27, 2005) -- offers a striking comparison of the solar panels. NASA/JPL-Caltech/Cornell

Measurements by the rover's alpha particle X-ray spectrometer and mini-thermal emission spectrometer showed the soil to be about 90 percent amorphous silica -- a substance associated with life-supporting environments on Earth.

"This is one of the most powerful pieces of evidence for formerly habitable conditions that we have found," said Squyres, Cornell's Goldwin Smith Professor of Planetary Science, in a Dec. 11 interview with the BBC.

On Earth, silica deposits are found at hot springs, where hot water dissolves silica in rock below the surface, then rises and cools, causing the silica to precipitate out near the surface; and at fumaroles, where hot acidic water or vapors seep through rock, dissolving away other elements but leaving silica behind.

"Either place on Earth is teeming with microbial life," said Squyres. "So this is, either way, a representation of what in the past was a local habitable environment -- a little habitable niche on the surface of Mars."

Victoria Crater, about 800 meters (one-half mile) in diameter, has been home ground for NASA's Mars Exploration Rover Opportunity for more than 14 of the rover's first 46 months on Mars. This view shows the rover's path overlaid on an image of the crater taken by the High Resolution Imaging Science Experiment on NASA's Mars Reconnaissance Orbiter. NASA/JPL-Caltech/University of Arizona/Cornell/Ohio State University

The discovery was reminiscent of Spirit's journey to winter safety last year, when it uncovered (and briefly got mired in) patches of bright soil that contained high levels of sulfur -- another possible indicator of past hydrothermal activity.

Unlike last year, though, Spirit enters this Martian winter handicapped by dusty solar panels -- the result of giant dust storms in June and July. So the rover's power levels, which currently range between approximately 250 and 290 watt-hours (100 watt-hours is the amount of energy needed to light a 100-watt bulb for one hour; full power for the rovers is 800-900 watt-hours), could drop to dangerous levels in the dwindling winter sunlight.

Spirit's perch is currently at a 15-degree tilt on the north-facing slope of the Home Plate plateau, said Jim Bell, Cornell associate professor of astronomy and leader of the mission's Pancam color camera team. As the sun moves lower in the Martian sky, drivers will nudge the rover to a steeper angle.

"The fact that we've gotten to a good tilt, and we're going to get to a better tilt, is a good sign," said Bell. Still, he added, any work the rover does over the winter -- collecting Pancam images of its surroundings, for example -- will be strictly low-exertion.

"Most of 2008 is going to be a quiet time for Spirit," he said. "It's really about survival."

Where and why humans made skates out of animal bones

Archaeological evidence shows that bone skates (skates made of animal bones) are the oldest human-powered means of transport, dating back to 3000 BC. Why and where people started skating on ice is not as clear, since ancient remains have been found in several locations spread across Central and Northern Europe.

In a recent paper, published in the Biological Journal of the Linnean Society of London, Dr Formenti and Professor Minetti show substantial evidence supporting the hypothesis that the birth of ice skating took place in Southern Finland, where the number of lakes within 100 square kilometres is the highest in the world.

"In Central and Northern Europe, five thousand years ago people struggled to survive the severe winter conditions and it seems unlikely that ice skating developed as a hobby" says Dr Formenti. "As happened later for skis and bicycles, I am convinced that we first made ice skates in order to limit the energy required for our daily journeys".

Formenti and Minetti did their experiments on an ice rink in the Alps, where they measured the energy consumption of people skating on bones. Through mathematical models and computer simulations of 240 ten-kilometre journeys, their study shows that in winter the use of bone skates would have reduced the energy requirements of Finnish people by 10%. By contrast, the advantage given by the use of skates in other North European countries would be only about 1%.
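The logic behind those percentages can be sketched with a back-of-the-envelope model: if a fixed daily journey partly crosses frozen lakes, skating the icy stretches cuts the total metabolic cost, and the saving scales with how much of the route is ice. The cost figures and ice fractions below are illustrative assumptions chosen to reproduce the reported 10% and 1% savings, not the authors' measured values.

```python
# Assumed metabolic costs (kJ per km); illustrative only, not measured values.
WALK_KJ_PER_KM = 250.0   # winter walking
SKATE_KJ_PER_KM = 150.0  # skating on bone skates

def journey_energy(total_km, ice_fraction):
    """Energy (kJ) for a journey where ice_fraction of the route is skated."""
    ice_km = total_km * ice_fraction
    return (total_km - ice_km) * WALK_KJ_PER_KM + ice_km * SKATE_KJ_PER_KM

baseline = journey_energy(10, 0.0)      # no skates: walk the whole 10 km
finland = journey_energy(10, 0.25)      # lake-rich terrain: 25% of route on ice
elsewhere = journey_energy(10, 0.025)   # few lakes: 2.5% of route on ice

print(f"saving in lake-rich terrain: {1 - finland / baseline:.0%}")   # 10%
print(f"saving elsewhere:            {1 - elsewhere / baseline:.0%}") # 1%
```

With these assumed numbers, the saving is simply the ice fraction times the relative cost difference between walking and skating, which is why lake density dominates the result.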

Subsequent studies by Formenti and Minetti have shown how fast and how far people could skate in past epochs, from 3000 BC to the present.

Asteroid may hit Mars in January

* 19:48 21 December 2007

* news service

* David Shiga

A newly discovered asteroid has a chance of hitting Mars on 30 January, according to preliminary calculations of its orbit. If it does hit, it will offer scientists an unprecedented opportunity to observe a brand new crater from orbit, and possibly even the impact itself.

The asteroid, called 2007 WD5, was discovered on 20 November by a 1.5 metre telescope near Tucson, Arizona, US, that combs the skies as part of NASA's efforts to detect asteroids with a chance of hitting Earth.

It is an estimated 50 metres across, putting it in the same class as the Tunguska object that exploded over Siberia in 1908, flattening trees in an area extending many kilometres from the explosion.

Orbital calculations currently give the asteroid a 1 in 75 chance of hitting Mars on 30 January 2008. That risk is likely to decrease or go away altogether when further observations refine the object's orbit.

The asteroid cannot be observed at the moment because it is too close in the sky to the Moon and is lost in its glare. The earliest opportunity to start tracking it again is likely to be the end of December or the first week of January, says Steven Chesley at NASA's Jet Propulsion Laboratory in Pasadena, California, US. "The chances are quite good that we'll be able to rule out an impact," he told New Scientist.

No object has ever been found with such a high chance of hitting Mars or any other planet, he says, with the exception of the comet that smacked into Jupiter in 1994 and the asteroid Apophis. Early calculations gave Apophis a 1 in 37 chance of hitting Earth in 2036, although further observations later dropped that to 1 in 45,000.

If 2007 WD5 did hit Mars, it would land not far from its equator. The exact location is not known because of the uncertainty of the object's path, but both of NASA's robotic rovers are outside the potential impact zone, with Opportunity being the closer of the two.

Meteor Crater

An impact would be an exciting opportunity for scientists, says Ray Arvidson of Washington University in St Louis, Missouri, US, a member of the rovers' science team. He estimates that a space rock of this size would blast a crater about 1 kilometre across, about the size of Meteor Crater in Arizona, US.

The rovers are too far away to be in direct danger from the impact. But could it throw up enough dust to threaten their supply of solar power?

"I don't think so, because the planet is so dusty to begin with," Arvidson told New Scientist. Dust devils and winds are kicking up dust all the time on Mars, he says. "This would be a relatively small addition in terms of the dust and most of it is going to be coarse enough to settle out near the crater," he says.

If an impact could not be ruled out ahead of time, there would probably be a concerted effort using robotic probes currently orbiting Mars to try to observe the impact, or at least its aftermath, he says. This could include NASA's Mars Reconnaissance Orbiter (MRO) and Mars Odyssey spacecraft, as well as the European Space Agency's Mars Express spacecraft, he says.

Blazing fireball

If the timing and geometry were just right, there is a chance that one of the probes would be in the right place to see the asteroid enter the atmosphere and slam into the ground.

"This is a long shot, but if everything worked out … you could see the trail as the thing comes blazing through the atmosphere and there would be a fireball extending down the trajectory of the thing, and then there's the impact, which is an explosive process," he says.

It would be a unique opportunity to observe a crater immediately after an impact, he says. The vast majority of known craters are ancient and are therefore coated with dust, although a few small ones appear to have formed in the past several years.

If an impact is not ruled out, the rovers will keep an eye out for it, says rover chief scientist Steven Squyres of Cornell University in Ithaca, New York, US. But he thinks it is unlikely that they would be lucky enough to see it entering the atmosphere. "If it were to happen, I think the thing we'd be most likely to see would be a transient increase in the amount of dust in the atmosphere," he told New Scientist.

The Doh! of technology

* 23 December 2007

* news service

* Tom Simonite

* Hazel Muir

Every researcher knows the best plans can go horribly pear-shaped. Just think of the ill-fated Beagle 2 spacecraft, which went missing on its way down to the surface of Mars just four years ago. What exactly went wrong is still unclear, but a mechanical fault with the landing parachute is the chief suspect. And remember the Mars Climate Orbiter, which was destroyed at the Red Planet when it was meant to settle into an orbit 140 kilometres above the surface? It turned out that the main contractor, Lockheed Martin, had used imperial units rather than the metric units specified by NASA in its navigation system. Not all such accidents make headlines, however, so we've rounded up five of the most shocking, surprising or downright silly that may have slipped under your radar.
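The Mars Climate Orbiter loss is the textbook argument for carrying units in the code itself rather than in programmers' heads. The sketch below is purely illustrative, not NASA's actual software: it shows how an impulse computed in pound-force seconds but read as newton seconds silently skews a calculation by a factor of about 4.45.

```python
# Illustrative only: the kind of unit mix-up that doomed Mars Climate Orbiter.
# One side emits thruster impulse in pound-force seconds (lbf*s); the other
# assumes newton seconds (N*s). A bare float carries no unit to warn anyone.

LBF_TO_N = 4.44822  # newtons per pound-force

def impulse_lbf_s(thrust_lbf: float, burn_s: float) -> float:
    """Impulse as the 'ground' side reports it, in lbf*s."""
    return thrust_lbf * burn_s

def trajectory_update(impulse_n_s: float) -> float:
    """'Flight' side: expects N*s and uses the number as-is."""
    return impulse_n_s

raw = impulse_lbf_s(thrust_lbf=10.0, burn_s=30.0)   # 300.0 lbf*s
wrong = trajectory_update(raw)                       # misread as 300.0 N*s
right = trajectory_update(raw * LBF_TO_N)            # ~1334.5 N*s

# Every burn is understated by ~4.45x, and the errors accumulate silently.
```

A wrapper type that stores a value together with its unit, or a units library, turns this class of bug into an immediate error instead of a slow drift off course.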

Memory scramble

THEY say ballooning is the least stressful way to fly. Indeed, a balloon seemed the perfect platform from which the $10 million BLAST telescope, funded by NASA, the Canadian Space Agency and the UK's Particle Physics and Astronomy Research Council, could take far-infrared snaps of star formations.

For 12 days running up to 2 January 2007, it collected valuable data as it floated 40 kilometres above Antarctica. As it descended, the gondola released the huge balloon and deployed its landing chutes as planned. But the electronics that should have released the parachutes on touchdown failed. Antarctic winds inflated them like giant spinnakers, turning the cargo into a wind-powered sled.

"It was moving as fast as I could run," recalls project leader Mark Devlin, who was following the fate of the 180-kilogram telescope and its support computers from his home in the US. "It was absolutely sickening." The support plane could only watch as the gondola bounced across the ice, strewing pieces of equipment as it went. It finally came to rest 24 hours later, when it wedged in a crevasse 200 kilometres from its landing site.

Devlin was emailed a picture of the scene, which he scrutinised for any sign of the hard drives bearing the only copy of the mission's data. "NASA paints everything white," he says, so his search was initially in vain. Fortunately, a pilot tracking the furrow gouged by the gondola spotted the package. The damaged drives eventually yielded their irreplaceable data, but the telescope was a write-off. Devlin is now fund-raising for a similar mission, with tougher electronics and one other change: "I'm thinking fluorescent orange," he muses.

The trouble with rockets

ROCKET science has a deserved reputation for being tough, so it should be no surprise that things can get a little bumpy when designing and testing a launcher. Just ask Elon Musk, the millionaire founder of PayPal and rocket company SpaceX.

His first rocket, Falcon 1, was scheduled to lift off from the Pacific atoll of Kwajalein in November 2005, at which point the problems began. First, unplanned engine tests used up more liquid oxygen (LOX) than expected, then the LOX generator broke down. A fresh supply was ordered from Hawaii, but the tanker sprang a leak and arrived only one-fifth full. There was just enough for a launch, but a valve left open during the final preparations let still more LOX boil away, so the launch had to be cancelled.

A fresh shipment of LOX arrived a month later but disaster struck again. High winds hit the atoll, and to be on the safe side, engineers decided to drain the fuel from the rocket. A faulty pressure valve caused a vacuum to form inside the main fuel tank, sucking in its soft sides like a crushed beer can.

After three months of repairs, by March 2006 Falcon 1 was ready to go. Seconds after launch, however, the main engine sprang a fuel leak, leaving a trail of flame in the rocket's wake as it spiralled off course and crashed within sight of the disappointed engineers. An investigation pinpointed the cause: a corroded fuel-line nut was to blame.

A year later, with the rocket rebuilt from scratch, Falcon 1 finally took off without a hitch. It managed 5 minutes of smooth flight, but a bump as the first and second stages separated confused a control system, sending the second stage into an uncontrolled roll that triggered a premature engine shutdown. Falcon 1 did reach space, but not with the velocity needed to reach orbit.

Its next outing is scheduled for January 2008, when it will carry the cremated remains of 125 people, including actor James Doohan - Star Trek's Scotty. Let's hope he gets a fitting send-off.

Goodbye yellow submarine

THE submarine Autosub 2's mission might sound simple enough: film the rich diversity of species living under the permanent ice of the Antarctic. Yet exploring 15 kilometres under the Fimbulisen ice shelf had to be done autonomously, so the 7-metre sub's creators at Southampton University, UK, installed an artificially intelligent guidance system.

In 2003, Autosub 2 was sent on its maiden voyage under sea ice and completed the test mission successfully. Repeat missions in 2004 boosted the researchers' confidence and so they prepared for the big one: to explore under the permanent ice. The next year they went for it.

All looked good until five hours after launch. The £1.5 million yellow submarine called home using an acoustic distress beacon. Somehow it had lost its way, and had become wedged deep under the permanent ice.

Researchers on the support ship were stunned - there was nothing they could do to help it. "I remember it taking a while to believe the sub was gone," says Miles Pebody, a robotics engineer who worked on Autosub 2. "It would have been great to send another vehicle under to find it - but it was too dangerous."

The research ship returned for a final farewell five days later. Autosub 2 had not budged. It kept sending its distress call until its 5500 D-cell batteries went flat.

Recovering the sub could reveal what went wrong, but its icy tomb makes that practically impossible. A report last year guessed that a hardware fault had probably cut the power, triggering a decision to surface prematurely, while the sub was still under the ice.

Back to the future

IN FEBRUARY 2007, 12 F-22 Raptors, the US air force's new stealth fighters, left Hickam Air Force Base in Hawaii, bound for Okinawa, Japan, on the high-tech planes' first overseas outing. Things went smoothly until they reached the 180th meridian - otherwise known as the International Date Line.

Some of the pilots suddenly found themselves without any navigation aids. With nothing to tell them their compass heading or even whether they were level or not, it was as if the pilots had been instantaneously transported from the cockpit of the world's most advanced aircraft into one dating from the first world war.

Fortunately the skies were clear, so the squadron did an about-face and was able to follow its in-flight refuelling tankers back to Hickam.

The error was diagnosed as a problem with a "partial line of code" that had pitched the planes' computers into an infinite loop of trying and failing to calculate their position while dealing with an unexpected date. A fix was issued, and three weeks later the planes made their trip to Japan without a hitch.
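The F-22's actual code has never been published, but the failure mode — navigation software that cannot cope with longitude jumping from +180° to −180° — is easy to reproduce and to fix. A hypothetical sketch of the standard remedy: wrap longitude into a single continuous range before any navigation arithmetic, so the discontinuity at the date line never reaches the rest of the code.

```python
# Hypothetical sketch, not the F-22's software: handling the date line by
# normalising longitude, so +180 and -180 degrees are the same point.

def normalize_lon(lon_deg: float) -> float:
    """Wrap any longitude into the range (-180, 180]."""
    wrapped = lon_deg % 360.0                     # -> [0, 360)
    return wrapped - 360.0 if wrapped > 180.0 else wrapped

def lon_step(lon_deg: float, delta_deg: float) -> float:
    """Advance a track by delta degrees, staying in (-180, 180]."""
    return normalize_lon(lon_deg + delta_deg)

# Flying east across the date line: +179 deg plus 2 deg comes out as
# -179 deg, not the out-of-range +181 deg that naive addition yields.
# Code that instead assumes longitude only ever moves smoothly through
# (-180, 180) can end up retrying a calculation that never converges.
```

Python's `%` operator already returns a non-negative result for negative operands, which keeps `normalize_lon` a one-liner; in languages where the remainder can be negative, an extra adjustment step is needed.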

"Reliance on electronics has changed the flight-test process," says Donald Shepperd, once head of the US Air National Guard. "It used to be tails falling off, now it's typos that ground a fighter."

A matter of perspective

IT'S REASSURING to know that an engineering screw-up doesn't always get you into trouble. It can sometimes even dig you out of it, as astronomers found out at the Canada-France-Hawaii telescope on Mauna Kea, Hawaii.

In early 2003, researchers began observing the skies using the telescope, equipped with a new digital camera - the biggest in the world at the time. The CFHT was fitted with four precision lenses so the camera could capture crisp images of vast areas of sky.

The results were disappointing. The images were sharp in the centre, but far more blurred than expected at the sides. Various tests failed to find a problem, much less a solution, so astronomers pressed ahead with a five-year survey using the camera, until May 2004, when a steering committee said the image quality was jeopardising the project.

A laborious investigation followed, with engineers dismantling the optics and reassembling them daily, but finding no answer. Then one day, an engineer mistakenly reinstalled one of the four lenses back-to-front. The images improved spectacularly.

"The next observations were just 'Wow!'," says Christian Veillet, the observatory's director. "The image quality was just what it should have been."

To this day, no one understands why the back-to-front lens works so well, or why it didn't work when it was oriented as planned. "That has been frustrating, but it would be a waste of resources to investigate, so we decided to just forget about it," says Veillet. "Now the science that is coming out is exquisite."
