We are living in an era of 'data dumps' in this post-genomic age. How does one go about sorting through the massive amount of scientific information, extracting what is useful and putting it to work to benefit our health, our environment and the food we eat to keep us healthy?
Imagine all these data as construction material for building a bridge to the answers to many of the questions we have been asking about physiology and metabolism for decades. The data are varied and diverse and, in many instances, in a code we have yet to unravel. There are no established metrics for many of the tools in the post-genomic toolbox; how do we keep up with ever-accelerating technology hailed as state-of-the-art when, within two years, that technology may already be primitive or have been shown to be unreliable?
Modern tools such as microarrays inundate us with huge amounts of genome-wide data on gene-expression patterns, and we have sophisticated computational tools to perform the analysis. To maximize the benefits of these technologies, perhaps we need to return to basics: asking the right questions from the view of the microbe, plant or animal cell, or even their interactions with one another, rather than from the perception of point A leading to point B. After all, it is the regulation and expression of their physiology or metabolism under a variety of conditions that we are trying to determine, what I refer to as ecophysiology. These forms of life are not only complex but highly adaptive and, to some extent, the sum of their activities is greater than that of their individual genes and pathways. Laboratory-simulated experiments alone may simply not be enough to fully understand the structure and function of life forms. New tools are still needed that can probe without perturbing cell functions, and this is where the promise of nanobiotechnology may help validate what the molecular era has uncovered.
We also need to better understand the computational tools we employ. I am not sure we are giving enough attention to the management and analysis of the enormous numbers of data points obtained. Every good scientist knows that the methods used to analyze data will affect how the results are interpreted. Standardization and validation of many of these tools have simply not been done. If we are to benefit from now being able to see the un-sprouted seedlings in the forest, and thereby better understand the forest, then it may be time for agencies such as NIST and other groups to start validating the tools we already have in place.
There is no doubt in my mind that curiosity-driven science is benefiting from the post-genomic era, but I would still encourage many to apply hypothesis-driven science to better understand the information we have now accumulated. In this way, knowledge is acquired and translated into much-needed solutions for the problems that we humans have created, or evolved into, through our lifestyle on planet Earth. Nature is about balance, and our science should reflect that same balance.