Public health, social science, and the scientific method. Part II


[Ed: After testifying to the House Appropriations Committee in 1996, Dr. Faria was tapped to serve at the CDC on the NCIPC's grant review committee during the George W. Bush administration. This two-part series (Part I here), republished with permission, describes his tenure there. Originally published in World of Neurosurgery in March 2007.]

In Part I, we discussed in general terms some of the shortcomings I encountered in many of the grant proposals submitted during my stint as a grant reviewer for the Centers for Disease Control and Prevention’s (CDC) National Center for Injury Prevention and Control (NCIPC) in the years 2002 to 2004 [6].

There is no reason to believe that these epidemiologic and scientific shortcomings have been addressed and corrected in subsequent years. And let me state from the outset that the problems do not lie with the methodology of the peer review process, but rather with the misuse of statistics and the lack of science in many, if not most, of the grant proposals. The methodology of grant review calls for the evaluation of research aims and long-term objectives, significance and originality, as well as research design. These are appropriate criteria, but perhaps one more should be added: Show me the science!

In Part I, we also stressed the fact that statistics are not science and cannot prove cause-and-effect relationships [6]. Yet statistics are a very useful tool of science that, when properly applied, establishes correlations in disease processes. We were highly critical that such simple statistical tools as P values were frequently missing from grant proposals submitted for funding, even though P values are important in establishing whether scientific findings reach statistical significance or are due to chance. We likewise discussed relative risks (RRs) and confidence intervals (CIs) as essential components of epidemiologic research, and we described the shortcomings of strategic long-term proposals in an agenda-driven public (social) health research program.
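To make these three tools concrete, here is a minimal sketch in Python for a simple exposed/unexposed 2×2 table. The counts are purely hypothetical, chosen only for illustration, and the SciPy dependency is an assumption of this sketch rather than anything drawn from the proposals under discussion:

```python
import math
from scipy import stats  # assumed available; used only for the P value

# Hypothetical 2x2 table (illustrative counts, not real data):
#                 disease   no disease
#   exposed        a = 30      b = 70
#   unexposed      c = 15      d = 85
a, b, c, d = 30, 70, 15, 85

# Relative risk: incidence among the exposed divided by
# incidence among the unexposed.
rr = (a / (a + b)) / (c / (c + d))

# 95% confidence interval from the standard error of log(RR).
se_log_rr = math.sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))
ci_low = math.exp(math.log(rr) - 1.96 * se_log_rr)
ci_high = math.exp(math.log(rr) + 1.96 * se_log_rr)

# Two-sided P value from Fisher's exact test on the same table.
_, p_value = stats.fisher_exact([[a, b], [c, d]])

print(f"RR = {rr:.2f}, 95% CI = ({ci_low:.2f}, {ci_high:.2f}), "
      f"P = {p_value:.4f}")
```

With these illustrative counts the interval excludes 1.0 and P < 0.05, so the association would reach conventional statistical significance; as stressed above, however, that establishes a correlation, not a cause-and-effect relationship.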

Many of the proposals submitted in response to Healthy People 2010, the public health establishment's call for research papers, frequently have more to do with social justice, expanding socialism, and perpetuating the welfare state than with making genuine advances in medicine and improving the health of humanity. In some cases, these grant proposals use or misuse statistics, collect data of dubious validity, and even employ legal strategies to reach social goals and formulate health care policies that the public health researchers believe may achieve social justice. Many of these studies aim at nothing less than the gradual radicalization of society using "health" as a vehicle [2,3,7-9]. Healthy People 2010, in short, is a veritable bureaucrat's dream, an overflowing cornucopia of public (social) health goals geared toward the social and economic reconstruction of American society along socialistic lines.

As I also mentioned in Part I of this article, the reader may be surprised to learn that I found probably as many lawyers and social workers as practicing physicians applying for public health scientific research grants! No wonder the science is lacking in many of these proposals; frankly, the peer review process has been too lenient.

Before proceeding, let us once again recall, as we did in Part I, the words of Bruce G. Charlton, MD, of the University of Newcastle-upon-Tyne, from his magnificent article "Statistical malpractice." His words, although excerpted, are worth repeating:

Science versus Statistics

There is a worrying trend in academic medicine which equates statistics with science and sophistication in quantitative procedures with research excellence. The corollary of this trend is a tendency to look for answers to medical problems from people with expertise in mathematical manipulation and information technology, rather than from people with an understanding of disease and its causes. Epidemiology [is a] main culprit, because statistical malpractice typically occurs when complex analytical techniques are combined with large data sets.