This month’s stories include the launch of the National Institutes of Health’s (NIH) new genomic data sharing policy, experts’ condemnation of clinical trials for alternative medicine, and reports of over 1,100 accidental releases of dangerous pathogens from wet-labs. In the education sector, the NIH investigates racial disparity in grant applications and US universities see the influx of Chinese students level off.

NIH launches genomic data sharing policy

Our DNA holds the answer to everything – from our disease predispositions to what makes some of us musicians. Since the sequencing of the human genome in 2003, scientists have used genome-wide association studies (GWAS) to pinpoint “hot spots” of variation in our DNA, dubbed SNPs, which put some of us at a higher risk of certain diseases. GWAS performed on patient samples offer valuable clues to the treatment of genetic disorders. In view of this, in 2007 the NIH introduced the GWAS policy to make patient genomic data available to researchers by consolidating it in the database of Genotypes and Phenotypes (dbGaP). Over the past six years, dbGaP has been used by over 2,200 investigators and has yielded some 900 publications, marking significant scientific progress. With its help, scientists have gleaned novel insights into the genetics of cancer, psychiatric illnesses and alcoholism.

Last month, the NIH issued a genomic data sharing (GDS) policy as an extension of the GWAS policy, to take effect in 2015. Until now, investigators de-identified patient information before distributing it to research entities, but the new GDS policy requires obtaining the explicit, informed consent of participants before using their genomic data for research. Like GWAS, GDS will employ a data-access committee to scrutinise researchers’ data-access requests and remain mindful of ethical issues concerning genomic data usage. Additionally, under this policy, the NIH aims to add other genomic data types from both human and non-human studies and make them available to scientists and the public alike.

Unscientific medicine does not deserve clinical trials, experts say

Many of us suffering from a chronic health problem turn to alternative medicine, such as homeopathy or touch therapy. But have we really benefited from it? David Gorski at Wayne State University School of Medicine in Michigan and Steven Novella at Yale University in Connecticut condemn clinical trials of such dubious forms of medicine.

In the era of evidence-based medicine (EBM), preclinical research must show clear evidence of a medicine’s efficacy before it is tested on human subjects. Alternative medicines like homeopathy and Reiki, by contrast, rest on little to no experimental proof and have abysmally low chances of success. They do not deserve to be tried on valuable human subjects; they simply add to the time and cost of randomized clinical trials (RCTs). Homeopathy, for instance, defies the laws of physics, as it claims to heal by diluting chemical compounds virtually out of existence. Most RCTs of homeopathy have failed; in one case, the use of the diluted toxins mercury and arsenic to treat diarrhea did more harm than good. Similarly, Reiki, a form of touch therapy, claims that energy can be transferred from a universal source to the patient to facilitate healing. It has a philosophical but no scientific basis; whether Reiki has even a placebo effect is questionable. The authors suggest that it is time to switch from EBM to science-based medicine, in which only drugs whose mechanism of action is clearly known make it to RCTs, leaving no room for medicine derived from pseudoscience.

Carelessness in wet-labs: the burgeoning rise in bioweapon releases

Accidents happen – mainly due to oversight. But when dealing with dangerous germs that have the capacity to trigger an epidemic, can researchers afford to be careless? Figures show that at least 1,100 lab accidents involving bacteria, viruses and toxins were reported to federal agencies between 2008 and 2012.

Researchers working with toxic agents must follow strict biosafety protocols for their handling, storage and disposal, as laid down by the Centers for Disease Control and Prevention (CDC). But this summer there has been a spurt in releases of deadly pathogens from several US labs, the result of shameful negligence on the part of investigators. Lab accidents exposing staff, lab animals and even neighborhood livestock to anthrax, hog cholera or brucellosis are alarming. The instance of an avian flu virus mix-up is probably the most frightening of all: an overworked, rushed scientist contaminated vials of a benign H9N2 strain with the highly virulent variant H5N1 because he failed to clean up properly when switching strains. It is shocking to think of the repercussions of such mistakes – a lab accident, for instance, is thought to have set off the re-emergence of the H1N1 flu in 1977.

The CDC has been directed to ramp up security measures, re-assess biosafety levels and reduce both the number of select agents in labs and the workforce handling them. The incidents have come as ‘wake-up calls’ to biosafety regulators.

NIH investigates racism in grant awards

Statistics show that minority scientists, particularly African Americans, applying for NIH grants are only half as successful as their white counterparts. According to officials, this racial disparity does not reflect any real difference in the competence of minority applicants. Rather, it may be a sign of a skewed system of evaluation.

To bring fairness into the system, the NIH will recruit a team to probe the grant evaluation process starting in September. First, the team will anonymize applications, stripping applicant identifiers such as name and race off the applications before they are sent to reviewers. The drawback here is that some information is hard to conceal, such as the names on publications, which are crucial to the selection process. Second, they will scrutinize reviewers’ comments on successful applications for the R01, a major independent research grant, for any trends in their language. One goal here is to get hold of reviewers’ unedited critiques, which may offer clues to any unintentional bias during the review process.

Even if the review process turns out to be unbiased after all, the NIH believes these investigations can shed light on racial disparity in the grant-writing process: minority scientists may simply need more grant-writing practice to make their applications attractive to reviewers. Either way, the outcome seems worthwhile.

Chinese applications to US universities dwindle

When it comes to US graduate school admissions, a mighty one-third of foreign graduate students have been Chinese, making China the biggest single source of foreign applications. But in recent years, US universities have feared that this may change owing to China’s increasing investment in its own academic infrastructure. The latest Council of Graduate Schools (CGS) survey suggests their fears may be coming true: for two years in a row, Chinese application and acceptance numbers have remained flat.

This flattening trend may be the result of access to world-class training at home, given the Chinese government’s efforts to maximize educational opportunities. According to another theory, it may stem from a slump in the number of degree holders since China raised the bar for college admissions to counter rising unemployment.

In contrast, Indian applications have boomed by 33% this year after a 22% rise in 2013. Another blip in the survey is the huge rise in Brazilian applications, which have shot up by 61% since 2013. All in all, the dynamics of foreign applications seem a little unpredictable at the moment.

Originally published at LabTimes

Photo credits: Miki Yoshihito, Matt, Greg Jordan via Creative Commons



We humans assume we are the smartest of all creation. In a world with over 8.7 million species, only we have the ability to understand the inner workings of our bodies while also unraveling the mysteries of the universe. We are the geniuses, the philosophers, the artists, the poets and savants. We are amused at a dog playing ball, a dolphin jumping through rings, or a monkey imitating man, because we think of these as remarkable acts for animals that, we presume, aren’t as smart as us. But what is smart? Is it just about having ideas, or being good at language and math?

Scientists have shown, time and again, that many animals have an extraordinary intellect. Unlike the average human brain, which can barely recall a vivid scene from the last hour, chimps have a photographic memory and can memorize patterns they see in the blink of an eye. Sea lions and elephants can remember faces from decades ago. Animals also have unique sense perception: sniffer dogs can detect the first signs of colon cancer from the scents of patients, while doctors flounder at early diagnosis. So the point is that animals are smart too. But that’s not the upsetting realization. What happens when, for just once, a chimp or a dog challenges man at one of his own feats? Well, for one, a precarious face-off – like the one Matt Reeves conceived in Dawn of the Planet of the Apes – would seem a tad less unlikely than we thought.

In a recent study by psychologists Colin Camerer and Tetsuro Matsuzawa, chimps and humans played a strategy game – and unexpectedly, the chimps outplayed the humans.

Chimps are a scientist’s favorite model for understanding human brain and behavior. Chimp and human DNA overlap by a whopping 99 percent, which makes us closer to chimps than horses are to zebras. Yet at some point we evolved differently. Our behavior and personalities, molded to some extent by our distinct societies, are strikingly different from those of our fellow primates. Chimps are aggressive and status-hungry within their hierarchical societies, knit around a dominant alpha male. We are, perhaps, a little less so. So the question arises whether competitive behavior is hard-wired in them.

In the present study, chimp pairs or human pairs contested a two-player video game. Each player simply had to choose between left and right squares on a touch-screen panel, while being blind to the rival’s choice. Player A, for instance, won each time their choices matched, and player B won when they did not. The opponent’s choice was displayed after every selection, and payoffs in the form of apple cubes or money were dispensed to the winner.

In competitive games such as this, as in chess or poker, players learn to guess their opponent’s moves from the opponent’s past choices, and adjust their own strategy at every step in order to win. An ideal game eventually settles into a certain pattern. Using a set of equations from game theory, it is easy to predict this pattern on paper: when both players are making the most strategic choices possible, the game hovers around what is called an ‘equilibrium’ state.
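The equilibrium logic can be sketched with a few lines of algebra. The snippet below is purely illustrative – the payoff values and the function name `matcher_equilibrium` are assumptions for this toy example, not the actual payoffs or code used in the study. It computes the mixed-strategy equilibrium of an asymmetric matching-pennies game: each player randomizes just enough to leave the opponent indifferent between Left and Right.

```python
from fractions import Fraction

def matcher_equilibrium(a, b):
    """Mixed-strategy equilibrium of an asymmetric matching-pennies game.

    Player A (the 'matcher') earns payoff `a` when both pick Left and
    `b` when both pick Right; player B earns 1 on any mismatch.
    (These payoffs are illustrative, not those used in the chimp study.)
    """
    # B is indifferent when A's Left-probability p satisfies:
    #   B's payoff from Left  = 1 - p   (B wins on a mismatch)
    #   B's payoff from Right = p
    # so p = 1/2, regardless of A's own payoffs.
    p_A_left = Fraction(1, 2)
    # A is indifferent when B's Left-probability q satisfies:
    #   a * q = b * (1 - q)   ->   q = b / (a + b)
    q_B_left = Fraction(b, a + b)
    return p_A_left, q_B_left

# Symmetric payoffs: both mix 50/50.
print(matcher_equilibrium(1, 1))  # (Fraction(1, 2), Fraction(1, 2))
# Skewed payoffs (matcher earns 3 on Left-Left): only B's mix shifts.
print(matcher_equilibrium(3, 1))  # (Fraction(1, 2), Fraction(1, 4))
```

A counterintuitive hallmark of such equilibria, visible in the sketch, is that skewing one player’s payoffs changes the *opponent’s* mixing rate, not their own – which is exactly why tracking the rival’s choice history matters so much in this game.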

In Camerer’s experiment, it turned out that the chimps played a near-ideal game: their choices leaned close to the game-theoretic equilibrium. When humans played, by contrast, their choices drifted farther from the theoretical predictions. Since the game tests how much players recall of their opponent’s choice history, and how cleverly they maneuver by following choice patterns, the results suggest that chimps may have superior memory and strategy, which help them outperform humans in competition. In other words, chimps seem to have a knack for fighting peers in a face-off.

Exceptional working memory may be a key factor behind chimps’ strategic skills. A movie clip, part of a study from 2007, impressively captures the eidetic memory of a 2-year-old chimp as he played a memory masking game. It makes jaws drop to see him memorize random numerical patterns within 200 milliseconds, about half the time it takes the human eye to blink. Memory of such incredible precision is rare in human babies and close to absent in adults, save for fictitious characters like Sheldon Cooper.

It may seem dispiriting to have chimps make chumps of us. But such human-chimp comparisons point to how the two species have evolved along different trajectories. The human brain is three times larger and has about 20 billion neurons in the cortex, the seat of cognition, compared to 6 billion in chimps. This means our brain is capable of highly specialized functions that a chimp’s brain isn’t; we can, for example, build and use language in a myriad of ways, unlike chimps. But to get such an advanced brain, psychologists believe, humans may have had to “trade off” the fine working memory and strategic thinking of the apes. Chimps use their strategic minds to gain a competitive edge over their peers and climb their way up to alpha male. The human brain, by contrast, with its unique language-related and collaborative skills, gives us a survival advantage in an egalitarian society. It’s a case of use it or lose it, where the environment has a major say.

In sum, what we garner from these studies is that every species has its own idiosyncrasies. Evolution is not just about adding on to existing prototypes; it is about fine-tuning them, eliminating the non-essential, to create newer species that are, on the whole, better adapted to their surroundings — even if, in some particular ways, they are inferior.

Original story on Scientific American-Mind Matters

Photo credit: ucumari via Creative Commons


Open access publishing has broken down barricades in science, allowing better access to inventions and discoveries. The new journal The Winnower adds a whole new dimension to transparency in academic publishing – all the way from submission to reviewing, rejection and retraction.

The term “open access” was coined relatively recently, with the dawn of the web era. The main motive of the open access initiative has been to enhance research impact and citation. Open access publishing has broadened scientific reach and brought new audiences to innovations and discoveries. Scientists, however, continue to be daunted by the onerous process of publication, which sometimes includes painful rejections and retractions. One reason for this has been anonymous and inappropriate peer review. How can publishing be made more objective?

Joshua Nicholson, a PhD student at Virginia Tech, USA, hit on the idea of starting an open access journal very early in his career. The Winnower was launched earlier this year and aims to make the entire process of scientific publishing – all the way from submission to reviewing, rejection and publication – transparent. In an interview with Philip Young of Open@VT, Joshua explained that “the main objective of The Winnower”, as the name suggests, “is to identify good pieces of research from flawed pieces based on open post-publication review.”

Unlike other peer-reviewed journals, the online journal The Winnower allows authors to instantly publish their research for a small fee as a “pre-print”. The site encourages authors to invite reviewers, which might seem to bias the reviews, but this barely interferes with the credibility of the research, given that all correspondence is readily accessible to the public. Greater transparency also weeds out personal or inappropriate reviews from anonymous reviewers. Once received and reviewed by visitors to the site, articles change status to ‘reviewed’. “Reviews are open and available for variable amounts of time allowing authors to make necessary edits,” states Joshua. Moreover, reviews can be collected over the entire lifetime of an article and serve as endorsements from visitors; naturally, more reviews indicate more visits.

On what basis an article can be deemed reviewed, and how reviews can be ‘measured’, are some of the questions the founders are confronted with. “We want to change the conversation from ‘passing’ peer review to what is the percent confidence scientists have in this paper. To accomplish this, we will be implementing semi-structured reviews (to make them quantitative),” Joshua remarks. PLoS is developing a system to numerically score reviews against select criteria, and in his interview Joshua mentions a likely collaboration with PLoS to adopt it.

The site also employs article-level metrics for the quantitative assessment of research articles. As Joshua points out, “We want to shift the focus from the journal to the article itself and we think employing various article-level metrics viz. altmetrics is the best way to do this.” The catch here is The Winnower’s lack of an impact factor (IF), indexing and the like, which currently form the basis of where papers are submitted and reviewed. According to Joshua, this is a more generic obstacle for the entire scientific community; he believes that newer ways of evaluating researchers, beyond IF alone, may change this scenario.

Submissions to The Winnower currently fall into eight fields, including mathematics, basic sciences and social sciences. Another interesting feature of the site is the Grain and the Chaff section, where authors can submit 1,000-word essays on their experiences of the publication process. The Grain carries short essays on how a piece of work was received and reviewed, rejected or accepted; authors in this section are chosen based on the altmetric score or citations of their research papers. The Chaff is a place for authors of retracted papers to discuss the reasons behind the retraction and the fallacies or inaccuracies of their work. “We want to position papers published in The Chaff in a non-accusatory manner so that we may learn from these papers. The Chaff will not be a forum to castigate authors of retracted papers,” says Joshua.

In sum, The Winnower has the potential to revamp scientific publishing by switching from a closed, anonymous system to an open one. It also allows research to be conveyed to the public with all its imperfections and exceptions – as science truly is – without data being embellished in order to be accepted.

Published online LabTimes Aug 2014


Most global university rankings are based on reputational data. Such a system not only fails to capture the diversity of institutional profiles, it also tends to favor American universities. The European Commission has therefore come up with a more personalized approach.

Choosing your university is an arduous task. The numbers in university rankings make comparisons easy, but our distinct priorities influence our decision for or against an institution. While an undergraduate may want to look at a university’s teaching standards or its percentage of graduate students, postdocs may be interested in sources of research funding, and foreign applicants may want to check out the university’s international reach. However, some of the major global ranking systems, including Shanghai Jiao Tong University’s Academic Ranking of World Universities and the Times Higher Education World University Rankings, compile their scores solely from international reputation or bibliometric data such as the number of research citations and academic laurels. Such “inadequate indicators” fail to accommodate the diversity of institutional profiles.

The European Commission has recently launched U-Multirank, an online tool for comparing universities worldwide. U-Multirank is a joint effort of the Centre for Higher Education Policy Studies in the Netherlands and the Centre for Higher Education in Germany. It has been widely touted as a “multidimensional” and “user-driven” ranking portal that compares universities across five dimensions: teaching and learning, research, knowledge transfer, international orientation and local engagement. While still reporting bibliometric data like other rankings, U-Multirank additionally includes “self-reported” data collected from about 500 participating universities, student surveys, and data on interdisciplinary publications and collaborations with industry. What’s nice about U-Multirank is that it compares “like with like”, preventing dissimilar institutional profiles from obscuring the rankings. For instance, as project leader Frank Ziegele told Science, “There might be a university that has no ‘A’ (grade) for internationalization because it serves primarily a local or national audience. This is perfectly fine. This university fulfills an important function for society.” Besides whole-university comparisons, U-Multirank also provides field-wise rankings for physics, business, and electrical and mechanical engineering, with more subject areas to be added over the coming years.

This system uncovers the assets of European universities, which have long been overshadowed by American institutions. The latter, for example, “are absolutely on top in terms of citation rates and other classical criteria” but, as per U-Multirank, “less renowned institutions, often from Europe, emerge at the top of the list in other categories, such as international publications and publications with industry”, remarks Ziegele. Despite its aim of bringing fairness to the system, U-Multirank still faces plenty of criticism. Firstly, the participating universities are mostly European. Secondly, bringing more universities into the system risks inconsistency, not to mention unreliability, in the self-reported data that U-Multirank relies on. For similar reasons, the project has failed to win the support of the British higher education establishment. Moreover, some critics believe that the simplicity of traditional “league table” rankings, no matter how inaccurate, will continue to appeal to consumers.

It does seem an ambitious call by the leaders of U-Multirank to try to collect the finest details of institutional profiles and consolidate the ratings in a standardised fashion, especially given the roughly 20,000 higher education institutions in the world. Nevertheless, the idea of a system that captures “nuanced areas of performance” is welcome among educationists.

Published online LabTimes June, 2014

Photo: Outnumbered by Roger Smith used via Creative Commons License



Collaboration is key to success in science. However, when many groups work together, the list of authors on the resulting paper is sometimes longer than the paper itself. A new author taxonomy system wants to give every author the credit he or she deserves.

A grad student always fancies being part of a research paper, no matter how far behind the first author his or her name appears. It’s a gratifying feeling, at least in the earliest stages of your research career. But over time you realize how your contribution becomes oversimplified, almost meaningless, when in every citation of your co-authored paper you make a barely visible appearance within the italicized words “et al.”.

Research today, especially in the life sciences, has become increasingly collaborative. Technological advances as well as national research-assessment exercises have raised the bar for research quality. As a result, top scientific journals seek to publish research of a high standard that is almost always collaborative. The outcome is a burgeoning rise in multi-author manuscripts, which give little information on individual author contributions. As the author list expands, the contribution of the “8th author on a 15-author paper” is neglected. How fair is that, when every author of the study has slogged over his or her own part?

Thus, in 2012, a small group of journal editors teamed up with Harvard University, USA, and the Wellcome Trust, UK, to develop a taxonomy of author contributions in an effort to enhance the visibility of “who did what” in scientific publications. Their taxonomy-based crediting system was recently tested online among corresponding authors and met with a fair amount of success. “The author number inflation on research papers makes it difficult to decipher the contribution of individual authors based on the list of author names. In a multi-author paper, usually the first and last authors are assumed to be the leads. This ambiguity means that research culture and potentially politics, rather than actual contribution, often determine author order,” says Liz Allen, a co-founder of the initiative at the Wellcome Trust, justifying the need for a systematic classification of the roles of all authors in a collaboration.

For researchers, either at the beginning of a scientific career seeking academic positions or at an advanced level competing for grants and funding opportunities, publications help gauge their scientific expertise. According to Amy Brand, a co-contributor at Digital Science in Cambridge, USA, credit for research and discovery has a “huge impact on their (researchers’) career advancement and tenure, as well as on the transparency and integrity of the scientific record”. Besides shaping career prospects, clarified roles will enable researchers to find the right experts for a methodological innovation or for future collaborations, or even journals to find the most appropriate peer reviewers.

In the online survey conducted in late 2013, the “14-role taxonomy” – comprising categories ranging from study conception, methodology and analysis to supervision, project administration and funding acquisition – was circulated to 1,200 corresponding authors of publications in PLoS, Nature and Elsevier journals, Science and eLife. The authors were asked to classify each author’s contribution using the categories provided. The survey also invited feedback on the exhaustiveness of the categories and the ease of use of the system. The taxonomy fared well among the 230 authors who gave feedback, a good number of whom found it an unprecedented, well-structured classification system.

“The goal of the taxonomy is to provide a machine-readable standard for assigning contribution role tags to listed authors,” states Liz, “while also not adding to the researcher’s burden when preparing and submitting manuscripts.” Though the pilot experiment suffers from a small sample size, the feedback provides a starting point for exploring ways to implement such a system. In the coming months, the team will work closely with the National Information Standards Organization to evolve the taxonomy further and to take in the ideas and opinions of a broader cross-section of the research community, including experts from other scientific fields, extending the reach of the taxonomy to areas outside the life sciences.

For researchers like you and me, it seems the day is not too far off when we will get the credit we deserve for toiling in the lab, developing a technique or designing an algorithm as part of a bigger collaborative project.

Published online LabTimes May, 2014

Photo: Venter et al, Science (2001)


Glacier foreland

The Hardangerjøkulen glacier retreated 34 meters in the summer of 2010

Long before man sets foot on new lands, even the most inhospitable terrain is already taken over by other life forms. Ecologists Mikael Ohlson and Sigmund Hågvar talk of their icy escapades in search of the first inhabitants of receding glaciers. What are the first creatures to colonize land that has been covered by ice for centuries?

Read more at LabTimes 02/2014



Data sharing is a piece of cake in the cyber-world. Search engines display thousands of hits, leaving you baffled by a maze of articles and databases. The German initiative re3data.org puts an end to this confusion by standardizing data repositories.

As researchers, we all recognize the importance of keeping abreast of the latest findings in our field. It’s not just for fear of being scooped by a competitor, but also for the need to constantly reframe hypotheses, identify gaps in our understanding or adopt a novel technique, that we visit PubMed and other online databases on a daily basis.

The increasing acceptance of the open access attitude has produced an unprecedented rise in the number of such databases, allowing quick and inexpensive access to myriad research data. But with all that access, how does a researcher choose where to deposit data, or where to look for the right kind of data? This is where the “Registry of Research Data Repositories”, or re3data.org for short, comes in. re3data.org was founded in 2012 by a team led by Heinz Pampel from the Open Access Coordination Office at the German Research Centre for Geosciences in Potsdam, Germany. Its main goal is to systematically categorise existing repositories for better visibility of reliable datasets. The registry is funded by the German Research Foundation (DFG) and is jointly led by the GFZ German Research Centre for Geosciences, the Humboldt University of Berlin and the Karlsruhe Institute of Technology. re3data.org has been built on the recognition that data repositories – tailored to serve different disciplines, with distinct styles reflecting their founding institutions – are very heterogeneous by nature. Launched as a single global portal to consolidate such varied datasets under a unified format, the project is set to benefit researchers, publishers and institutions alike when it comes to data search and storage. Recently, a new collaboration between re3data.org, the European open access infrastructure OpenAIRE and BioSharing has been announced; BioSharing is an initiative to standardise repositories and manage data sharing.

At present, the growing registry lists about 600 data repositories spanning 140 disciplines, including, for instance, ArrayExpress, which provides functional genomics data; TAIR, the Arabidopsis Information Resource; and ORGIDS, the Rotterdam Glaucoma Imaging Data Sets. For database curators interested in joining, re3data.org sets out a detailed “list of metadata properties” defining the standards any data repository must meet to be eligible for entry into the registry. This “vocabulary” describes a database’s general scope, content, infrastructure and compliance with technical, metadata and quality standards. Guided by it, moderators of data repositories can request that their infrastructures be added by filling out a URL suggest form. Inspected and approved databases are “identified by a green check mark”.

Besides a brief description of each database, re3data.org also provides information on the supporting institutions, submission and access policies, licenses and quality standards such as federal endorsements. The last feature lets the user evaluate and compare the reliability of different database resources.

The portal is very user-friendly, allowing results to be filtered by subject, content type or country. Subjects vary from arts, humanities and law to construction engineering and the natural sciences. Clicking on the field “Neuroscience”, for example, lists 12 repositories; a click on “Plant Sciences” gives 27 results. “Content type” is a particularly useful filter when a user is interested solely in, say, images, audiovisual data or source code. Each repository carries a set of icons against its name that identify features such as its access, certification and licensing policies, and even its openness to submissions, without the user having to dig into the details.

“In the upcoming project phase the focus will be on improving usability and implementing new features. Among other things, the dialog with repositories’ operators will be supported by a workflow system,” writes Heinz Pampel in a guest post on the PLOS Tech Blog. And with funding secured until 2014, re3data.org will certainly do its share to promote the standardization of data repositories and, through it, “a culture of sharing, increased access and better visibility of research data”.

Published online LabTimes Dec, 2013

Photo: Bookshelves by Germán Poo-Caamaño via Creative Commons License



Elimination of undesirable pluripotent cells in induced stem cell preparations: following exposure to PluriSIn, residual undifferentiated cells are killed (red dots) while differentiated endodermal cells are spared (green).

Stem cell therapy has all the potential it takes to revolutionize regenerative medicine. But despite all it has to offer, the technology is not yet fail-safe enough for therapy. In an attempt to minimize safety concerns around the use of stem cells in transplantation, scientists in Israel led by Nissim Benvenisty have made a major stride.

Read more at LabTimes 05/2013