Today the Washington Post reported that “at least nine of the [eleven] customized stem cell colonies that Hwang had claimed to have made earlier this year were fakes” and that the two that do exist were not genetic matches harvested from patients, but were taken from a fertility clinic’s embryos. What’s more, several money transfers to Korean researchers appear to be part of an attempt to cover up the deception. Additional investigations, both into Hwang’s Snuppy cloning (published August 2005) and his cloning of human stem cells (published March 2004), are being conducted. Read the story here.
What is life? Answering this question is one of the ultimate goals of biologists. Since Mendel, Schrödinger, Watson, Crick, Jacob, Monod and so many others, the view has emerged that life is programmed by the DNA molecule. This view culminated during the last century through the completion of the “Human Genome Project,” the sequencing of a human genome. Simultaneously, major technical advances for counting RNAs and protein species opened the maw of the OMICS world (transcriptomics, proteomics, you-name-it-omics).
This is essentially when the smoke from our pipedream of a molecular Grail was blown away by the harsh winds of reality. If DNA is the book of life, no one has the slightest idea of how to read it. One major reason for this is that making sense of the huge amount of data is too complex: when one focuses on the small details, the complete picture becomes blurred.
The human body is made up of molecules that fit generally into the categories of DNA, RNA, proteins, sugars, salts, fats and water. These last four groups are generally governed by the proteins in the human body and become imbalanced as a result of protein dysfunction. Technological strategies, developed as a result of the Human Genome Project, can rapidly and almost comprehensively scan through the DNA, RNA and protein molecules of the human body in order to identify differences between individuals with a disorder and those without. These strategies are collectively known as the “-omics.”
Transcriptomics refers to the comprehensive scanning of the nearly fifty thousand currently known genes that are transcribed into RNA molecules from the three-billion-letter human genome. Each cell utilizes (expresses) different genes at different times in its development and under different physiological conditions. In general, tissues express similar sets of genes that can be used to identify those tissues in the absence of any other information. For example, the brain expresses about thirty percent of all of the known genes; those specific transcripts are different from the transcribed genome in the heart. We can therefore define molecular signatures based on expression profiles, and these profiles can then be used to automatically separate normal cells or tissues into their correct category.
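The idea that expression profiles can serve as molecular signatures for sorting tissues into categories can be sketched as a toy nearest-centroid classifier. Everything below is invented for illustration: the gene names are real gene symbols, but the expression values and "signatures" are made-up numbers, not measured data.

```python
# Toy sketch of signature-based tissue classification:
# match a sample's expression profile to the most similar
# reference signature by cosine similarity.
# All numbers here are invented for illustration.
import math

# Hypothetical reference signatures: mean expression of a few
# marker genes per tissue (arbitrary units).
signatures = {
    "brain": {"GFAP": 9.1, "MYH7": 0.2, "ALB": 0.1},
    "heart": {"GFAP": 0.3, "MYH7": 8.7, "ALB": 0.2},
    "liver": {"GFAP": 0.1, "MYH7": 0.1, "ALB": 9.5},
}

def cosine(a, b):
    """Cosine similarity between two gene -> expression mappings."""
    genes = sorted(set(a) | set(b))
    va = [a.get(g, 0.0) for g in genes]
    vb = [b.get(g, 0.0) for g in genes]
    dot = sum(x * y for x, y in zip(va, vb))
    na = math.sqrt(sum(x * x for x in va))
    nb = math.sqrt(sum(x * x for x in vb))
    return dot / (na * nb) if na and nb else 0.0

def classify(profile):
    """Assign a sample to the tissue whose signature it most resembles."""
    return max(signatures, key=lambda t: cosine(profile, signatures[t]))

sample = {"GFAP": 8.4, "MYH7": 0.5, "ALB": 0.3}  # a brain-like profile
print(classify(sample))  # prints "brain"
```

Real transcriptomic classifiers work on tens of thousands of genes and use more sophisticated statistics, but the principle is the same: a tissue's expression profile is distinctive enough to identify it with no other information.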
The joint project between the National Cancer Institute and the National Human Genome Research Institute, announced yesterday at a press conference, is “the first attempt to leverage the mapping of the genome.” The goal of the three-year pilot program is “to speed up effective target treatments for cancer.”
Read the Washington Post’s article about the project here and the AP report here.
In the late 1800s, physicists thought that the problems of physics had been mostly solved. After all, Newton’s laws described the motion of ordinary objects, Maxwell’s equations explained electricity and magnetism, and thermodynamics detailed the relationships among forms of energy. But that view of the world soon changed: special and general relativity altered our views of space, time, and gravitation; statistical mechanics provided a stochastic basis for understanding the bulk properties of matter; and quantum mechanics blurred the lines between particles and waves, matter and energy. The biological sciences are entering a similar transition between what was and what will be our view of the world and the way it operates. The Human Genome Project has long been heralded as the means to understanding how we as beings carry on the biological processes we need to survive. Sure, if you read the papers you know that the genome sequencing has been declared finished, but we have a long way to go before the promise of the genome project is fulfilled.
Over half a century ago, the renowned (and eccentric) mathematician Norbert Wiener suggested that living organisms be viewed as systems governed by feedback control. Wiener attempted to found a new discipline, “cybernetics,” for the study of such systems. In spite of Wiener’s impassioned proselytizing on behalf of the new discipline, cybernetics didn’t amount to much. It generated some excitement in the social sciences in the 1950s and then fizzled out. Engineers occasionally referred to cybernetic concepts (especially feedback), but that’s about it. In biology, especially in the emerging field of molecular biology, cybernetics proved to be a disaster. Strangely, at the beginning of the twenty-first century, Wiener’s vision has returned with a vengeance.
The central achievement of the genomics revolution in biology arguably lies in the mapping and sequencing of the human genome and the generation of the fine haplotype maps now being used to study human diversity. While this is an amazing accomplishment that will likely pay dividends for years, the genome’s sequence itself has taught us little that has immediate applicability in human health. This was clearly anticipated by some, who reflected that knowing the sequence of a 10 kb virus did little to immediately curb the AIDS epidemic, nor did identification of the gene for Huntington’s disease lead immediately to a cure for that disease. In both cases, however, great strides have since been made, and they have followed from the integration of genomics data with data from other areas, including studies of gene expression (mRNA and protein) and general biochemistry and cell biology.
We are living in the post-genomic era of “data dumps.” How does one sort through this massive amount of scientific information, extract what is useful, and put it to work to benefit our health, our environment and the food we eat to keep us healthy?
Imagine all this data as construction material for building a bridge to the answers to the many questions we have been asking about physiology and metabolism for decades. The data is varied and diverse, and in many instances written in a code that we have yet to unravel. There are no established metrics for many of the tools in the post-genomic toolbox; how do we keep up with ever-accelerating, state-of-the-art technology when, within two years, it might already be primitive or have been shown to be unreliable?