Public health, social science, and the scientific method. Part II

(from cdc.gov)

[Ed: After testifying to the House Appropriations Committee in 1996, Dr. Faria was tapped to serve at the CDC on the NCIPC’s grant review committee during the George W. Bush administration. This two-part series (Part I here), republished with permission, describes his tenure there. Originally published by Surgical Neurology in March 2007]

In Part I, we discussed in general terms some of the shortcomings I encountered in many of the grant proposals submitted during my stint as a grant reviewer for the Centers for Disease Control and Prevention’s (CDC) National Center for Injury Prevention and Control (NCIPC) in the years 2002 to 2004 [6].

There is no reason to believe that these epidemiologic and scientific shortcomings have been addressed and corrected in subsequent years. And from the outset, let me state that the problems do not lie with the methodology of the peer review process, but rather with the misuse of statistics and the lack of science in many, if not most, of the grant proposals. The methodology of grant review calls for the evaluation of research aims and long-term objectives, significance and originality, as well as research design. These are appropriate criteria, but perhaps, for improvement, an additional criterion should be added: Show me the science!

In Part I, we also stressed the fact that statistics are not science and cannot prove cause and effect relationships [6]. Yet, statistics are a very useful tool of science that when properly applied establish correlations as to disease processes. And we were highly critical that such simple statistical tools as P values were frequently missing in scientific grant proposals submitted for funding, although P values are important in establishing whether scientific findings reach statistical significance or are due to chance. We also discussed relative risks (RRs) and confidence interval (CI) tests as essential components of epidemiologic research. We also described the shortcomings in strategic long-term proposals in an agenda-driven public (social) health research.

Many of the proposals submitted in response to Healthy People 2010, the public health establishment’s call for research papers, frequently have more to do with social justice, expanding socialism, and perpetuating the welfare state than with making genuine advances in medicine and improving the health of humanity. In some cases, these grant proposals use or misuse statistics, collect data of dubious validity, and even employ legal strategies to reach social goals and formulate health care policies that the public health researchers believe may achieve social justice. Many of these studies aim at nothing less than the gradual radicalization of society using “health” as a vehicle [[2], [3], [7], [8], [9]]. Healthy People 2010, in short, is a veritable bureaucrat’s dream, an overflowing cornucopia of public (social) health goals geared toward the social and economic reconstruction of American society along socialistic lines.

I also mentioned in Part I of this article that the reader will be surprised to learn that I found probably as many lawyers and social workers as practicing physicians applying for public health scientific research grants! No wonder the science is lacking in many of these proposals, and frankly, the peer review process has been too lenient.

Before proceeding, let us once again recall the words, as we did in Part I, of Bruce G. Charlton, MD, of the University of Newcastle-upon-Tyne in his magnificent article “Statistical malpractice.” His words, although excerpted, are worth repeating:

Science versus Statistics

There is a worrying trend in academic medicine which equates statistics with science and sophistication in quantitative procedures with research excellence. The corollary of this trend is a tendency to look for answers to medical problems from people with expertise in mathematical manipulation and information technology, rather than from people with an understanding of disease and its causes. Epidemiology [is a] main culprit, because statistical malpractice typically occurs when complex analytical techniques are combined with large data sets…. Indeed, the better the science, the less the need for complex analysis, and big databases are a sign not of rigor but of poor control. Basic scientists often quip that if statistics are needed, you should go back and do a better experiment…. Science is concerned with causes but statistics is concerned with correlations…. Minimization of bias is a matter of controlling and excluding extraneous causes from observation and experiment so that the cause of interest may be isolated and characterized…. Maximization of precision, on the other hand, is the attempt to reveal the true magnitude of a variable which is being obscured by the “noise” [from the] limitations of measurement.

Science by sleight-of-hand

The root of most instances of statistical malpractice is the breaking of mathematical neutrality and the introduction of causal assumptions into the analysis without scientific grounds…

Practicing science without a license

…Medicine has been deluged with more or less uninterpretable “answers” generated by heavyweight statistics operating on big databases of dubious validity. Such numerical manipulations cannot, in principle, do the work of hypothesis testing. Statistical analysis has expanded beyond its legitimate realm of activity…. From the standpoint of medicine, this is a mistake: statistics must be subordinate to the requirements of science, because human life and disease are consequences of a historical process of evolution which is not accessible to abstract mathematical analysis [4].

To reiterate, RRs can be useful when the value is well above 2.0 (or well below 0.5), but, as previously stated, RRs between 0.5 and 2.0 should not be considered significant in statistical studies because this range strongly suggests that there is no difference in the rate of injuries or interventions between the two study populations. Yet, this basic rule is routinely ignored in pilot studies and citations of previous research.

As with strong P values, I have found that in the rare event that a RR is significant, public health researchers invariably mention it. If it is not significant (ie, the value falls between 0.5 and 2.0), it goes unmentioned, whether in the present research, in previous research, or in the pilot studies they cite to bolster their grant proposals, ongoing proposals that they use as stepping stones to apply for more grant money and further investigation.

Again, the P value indicates whether a statistical difference found between the two populations under study is due to chance. It is becoming more and more common for public health researchers to omit P values from their grant proposals, particularly when the P value is greater than .05, that is, when the findings fail to reach the 95% level of confidence. Approval, nevertheless, is frequently achieved, further projects are funded, and the money keeps rolling in.
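To make the arithmetic concrete, the following sketch (Python, using entirely hypothetical injury counts chosen for illustration) computes a two-sided P value with the pooled two-proportion z-test, one standard way such a figure is obtained:

```python
import math

def two_proportion_p_value(x1, n1, x2, n2):
    """Two-sided P value for the difference between two proportions,
    using the pooled two-proportion z-test."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = abs(p1 - p2) / se
    # erfc(z / sqrt(2)) is the normal probability mass in both tails beyond |z|
    return math.erfc(z / math.sqrt(2))

# Hypothetical data: 40 injuries among 1000 exposed vs 25 among 1000 unexposed
p = two_proportion_p_value(40, 1000, 25, 1000)
print(f"P = {p:.3f}")  # P ≈ 0.059: above .05, so not statistically significant
```

Note that the injury rate in the exposed group looks 60% higher, yet the result still fails the .05 threshold; this is exactly the kind of finding that, as argued here, tends to go undisclosed.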

To further increase the chances of approval in the peer review process, public health investigators now routinely ignore the basic traditional rules of epidemiology, such as the RR, the P value, and the CI test. These are tough tests that would disqualify many low-caliber research proposals. So it is no wonder that, in the competitive world of grant funding, many epidemiologists claim they are not needed. If high P values (P > .05), RRs between 0.5 and 2.0, and CIs too wide for comfort were disclosed, funding for health (social) programs might not be granted. Instead, epidemiologists and other public health researchers parade complex numerical computer manipulations (eg, regression models and stratified multivariate analysis) designed to eliminate confounding variables. Nevertheless, confounding variables persist, and junk science is the result [12].

Here again is Dr Charlton writing on this subject of statistical elimination for confounding variables: “[Science by sleight of hand] commonly happens when statistical adjustments … are performed to remove the effects of confounding variables. These are manoeuvers by which data sets are recalculated (eg, by stratified or multivariate analysis) in an attempt to eliminate the consequences of uncontrolled ‘interfering’ variables which distort the causal relationship under study …. There are, however, no statistical rules by which confounders can be identified, and the process of adjustment involves making quantitative causal assumptions based upon secondary analysis of the data base … but is illegitimate as part of a scientific enquiry because it amounts to a tacit attempt to test two hypotheses using only a simple set of observations [4].”
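The sleight of hand Charlton describes can be demonstrated with a toy simulation (Python, hypothetical variables): a lurking confounder that drives both the “exposure” and the “outcome” produces a strong correlation even though neither variable causes the other, and nothing in the data set itself identifies the confounder.

```python
import random

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

random.seed(42)
# Hypothetical confounder z (say, an unmeasured social factor) drives both
# the "exposure" x and the "outcome" y; x has no causal effect on y at all.
z = [random.gauss(0, 1) for _ in range(2000)]
x = [zi + random.gauss(0, 0.5) for zi in z]
y = [zi + random.gauss(0, 0.5) for zi in z]
print(f"r = {pearson(x, y):.2f}")  # a strong correlation, with zero causation
```

An observational analysis of x and y alone would report a robust association; only knowledge of the underlying causes, not any statistical rule, reveals that z is doing all the work.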

The scientific process, as exemplified by Koch’s Postulates of Pathogenicity, calls for a much simpler methodology: (1) Observe a natural phenomenon. (2) Develop a hypothesis as a tentative explanation of what is occurring. (3) Test the validity of the hypothesis by performing a laboratory study and collecting pertinent data; experimental trials (with randomization, control groups, etc) are best. (4) Refine or reject the hypothesis based on the data collected in the experiment or trial. (5) Retest the refined hypothesis until it explains the phenomenon. A hypothesis that becomes generally accepted as explaining the phenomenon becomes a theory [12].

Let us return to the nuts and bolts of statistics and remember the following: for a RR greater than 1.0 to be valid at the 95% level of confidence, the lower bound of the CI must also be greater than 1.0. Inclusion of the value 1.0 (which denotes no difference at all between the two populations) within the interval invalidates the 95% level of confidence.

Likewise, for a RR less than 1.0 to satisfy the 95% level of confidence, the entire interval, including its upper bound, must be less than 1.0. Again, inclusion of the value 1.0 or higher invalidates the 95% statistical confidence in the study. (The next time you read a scientific study, check the statistics and make sure that the P values, RRs, and CIs are disclosed in the scientific discussion.) [12]
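These rules are easy to verify in practice. The following sketch (Python, with hypothetical injury counts; the Katz log method used for the interval is one standard construction, not one prescribed by the text) computes a RR and its 95% CI and shows why an impressive-looking RR can still fail the test:

```python
import math

def relative_risk_ci(a, n1, c, n2, z=1.96):
    """Relative risk for exposed (a of n1) vs unexposed (c of n2) groups,
    with a 95% CI computed by the Katz log method."""
    rr = (a / n1) / (c / n2)
    se = math.sqrt(1 / a - 1 / n1 + 1 / c - 1 / n2)  # SE of ln(RR)
    lower = math.exp(math.log(rr) - z * se)
    upper = math.exp(math.log(rr) + z * se)
    return rr, lower, upper

# Hypothetical data: 40 injuries among 1000 exposed vs 25 among 1000 unexposed
rr, lower, upper = relative_risk_ci(40, 1000, 25, 1000)
print(f"RR = {rr:.2f}, 95% CI = ({lower:.2f}, {upper:.2f})")
# RR = 1.60, 95% CI = (0.98, 2.62): the interval includes 1.0, so the study
# cannot claim a real difference at the 95% level of confidence, and the RR
# itself sits inside the 0.5-2.0 range discussed above.
```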

Unfortunately, the public health establishment has gone along with relaxing these inconvenient basic rules of statistics in order to continue justifying studies of little scientific validity but great social engineering potential. After all, the money keeps rolling in, year after year, for further research.

It is not surprising that even though the Injury Research Grant Review Committee (now the Initial Review Group [IRG]) members are asked to review the proposals for scientific merit, little attention is paid in these proposals to the scientific method, from the standard methodologies at the outset (reliance largely on population-based epidemiology, expansion of databases of dubious validity, complex statistical analyses subject to random errors and biases, and complex numerical manipulations that are not mathematically neutral) to the “health” goals at the end. More attention, in fact, is paid to social results and preordained, result-oriented, or feel-good research than to real hard science. Science becomes a casualty in the politicized research war.

ESTABLISHING PUBLIC HEALTH CONSENSUS

As of 2004, although the standing IRG is composed of 21 members, there are so many grant proposals, and so many members who apparently do not attend the meetings, that the CDC contracts for its own additional reviewers, ad hoc IRG consultant members, to assist with the grant review process. Many of these consultants are former IRG members who revolve back to the CDC and tend to be sympathetic to the methodology, as well as the social and political goals, of the entrenched public health establishment. Thus, there is an underlying conflict of interest intrinsic to this process. Moreover, many of these ad hoc members are statisticians, bureaucrats, epidemiologists, or public health personnel themselves who have a direct or an indirect vested interest in approving the statistical studies of their colleagues rather than promoting medical science. Many of them are epidemiologists who believe that statistics are science and can prove cause and effect relationships without the corroborating findings of clinical medicine. In other words, Koch’s Postulates of Pathogenicity, which establish whether an organism causes a disease, have been thrown out the window, and with them, of course, the required steps of the scientific method.

Many public health researchers in this milieu have come to believe erroneously that pure statistics are medical science, which they are not. Statistics are a helpful tool of science, but they do not prove cause and effect relationships. Public health, as many workers have come to accept, has become a confusing mesh of social programs bridging science and politics. Science establishes facts; statistics establish correlations. It is no wonder, then, that these public health proposals are fraught with methodological errors and subject to confounding variables that resist elimination despite complex numerical computer manipulations. Moreover, biases enter the computations that cannot be corrected for because of poor data collection (data dredging), despite so-called mathematically neutral statistical corrections. In this milieu, someone had to stand in front of the tanks and proclaim that the emperor has no clothes: science is lacking in many of these public (social) health proposals.

Many IRG members and consultants have become entrenched bureaucrats in their ivory towers who would rather go along to get along with their public health paperwork than work as clinicians in the trenches of medical care delivery.

The Department of Health and Human Services (DHHS) should appoint to the IRG more clinicians, particularly practicing physicians, and more hard biologic scientists (microbiologists, biochemists, physiologists, pathologists, anatomists), and fewer social “scientists,” administrators, and bureaucrats. Although I met a few physicians, I did not meet a basic scientist in any of the above specialties, not one scientist involved in genuine research in the basic biologic sciences! This is a policy that must be established from above, from DHHS, and implemented by the CDC as soon as possible.

Of course there are exceptions. I met many dedicated academicians and other fellow reviewers with whom I had the pleasure of working, both IRG members and consultants. Take, for instance, Dr Daniel O’Leary of the State University of New York, Buffalo, New York; Dr James F. Malec of the Mayo Clinic, Rochester, Minnesota; and Dr Patricia Findley of Rutgers University, New Brunswick, New Jersey, who consistently placed science above politics in the panels’ scientific discussions and conferences. There were others. But in the end, most reviewers readjust their views if they wish to be reappointed as consultants. They must ultimately conform to the basic work milieu of the NCIPC and work to establish the much-desired public health consensus in the approval process.

Thus, the CDC staff should have less influence in the review process, which could be accomplished by reducing or eliminating the need for ad hoc members, who are appointed at the discretion of that staff. These members are not only largely public health and social scientists, rather than basic biologic scientists or clinicians, but are also contracted or appointed by the CDC, and they outnumber the standing IRG members at least 3 or 4 to 1! This proposed reform would give DHHS the opportunity to bring much-needed new blood and fresh ideas into the program.

Furthermore, there is, in the public health milieu, a vested interest in promoting unreliable population-based statistical proposals because they lend themselves to the social study of spousal abuse, domestic violence, and adolescent crime that the NCIPC of the CDC is so fond of funding. And so, unfortunately, in the public health (social) research arena, there is an overwhelming and disproportionate number of observational studies rather than experimental investigations. Clinical trials (ie, controlled, randomized, prospective trials), the most reliable of scientific research investigations, are few and far between. Most of the proposals are observational studies, which include, in decreasing order of reliability, cohort studies (ie, largely prospective but uncontrolled), case-control studies (ie, retrospective and uncontrolled), and ecological studies (ie, population-based). Ecological studies are so prone to error and so utterly unreliable as to have given rise to the epidemiologic term ecological fallacy, a fallacy to which, in fact, all population-based studies are subject [[2], [12]].

There is also subliminal peer pressure to be lenient in the grant review process and accept these proposals from colleagues in the field because, although specific conflict-of-interest forms are signed and re-signed, many of the reviewers themselves receive federal money, and their own turn for review and approval will come sooner or later. To sum it up, CDC grant review committees, whether composed of ad hoc or standing members, should include more clinicians and more hard basic biologic scientists who are not receiving federal money.

Another problem is the intrusive role played by a few CDC staffers and liaison officers working for the CDC in conjunction with the various injury control centers and schools of public health supported by the CDC. One liaison officer I worked with exerted considerable pressure on committee members to approve a center with which he had a liaison. This happened specifically at a major center that I personally inspected with a team of reviewers. When the remote possibility of the center losing its funding came up, he stated that “this is an excellent IRC (Injury Research Center)” and that “a score of 1.5 was necessary to assure funding of that center.” This statement may be one of the reasons why that center received one of the highest scores of all the centers reviewed, despite the fact that the entire panel initially considered all of the large and small proposals (except for one) mediocre in methodology and lacking in innovation and originality. The CDC staffer was supposed to be an observer, not to discuss merit or budgeting (funding) with us. His job was only to make sure that we, the actual reviewers, considered and discussed the scientific merits of the program for referral to the entire committee.

At the same time, I want to single out for praise, among the CDC staff, Gwendolyn Haile Cattledge, PhD, Deputy Associate Director for Science/Scientific Review Administrator at the CDC in 2004, who was the embodiment of professionalism and competence throughout the time I worked with her. Likewise, the new Director of the NCIPC, Ileana Arias, PhD, whom I met only briefly before her appointment, should hold promise for the future.

Next, I would like to make the following observation: congressional prohibition of the CDC’s use of funds for political lobbying and for gun (control) research has been effective in reducing politicized, result-oriented gun research in the area of violence prevention. Yet, the temptation to resume this area of pseudoscientific, politically oriented research is simmering just under the surface. Therefore, DHHS should remain vigilant in this area, making sure that this prohibition, wisely ordained by Congress in 1996, is obeyed and followed so as to preserve the integrity of scientific research [[2], [3], [7], [8], [9]]. This vigilance on the part of DHHS is important. When I specifically asked the director of a prominent Injury Research Center whether her center planned to do gun (control) research in the future, she stated that “no decision had been made but that they considered it a legitimate area of research and could very well resume doing so in the future.”

On the other hand, I have no direct criticisms of the largely good work the CDC has done in the proper application of epidemiology to medical and scientific research in the area of prevention and control of infectious and contagious diseases. That work should continue, particularly with the potential threat of bioterrorism looming in the foreseeable future [1].

To this day, Congress has expanded the budget of the NCIPC of the CDC and continues to appropriate increasing amounts of taxpayers’ money for more injury and “violence” research, a large part of which is of dubious validity and merely supports a burgeoning bureaucracy expanding and duplicating itself into other areas of research, health policy, and politics. With all this available funding, it is no wonder that more and more young people are going into the paperwork fields of epidemiology and the social sciences, rather than into true biologic scientific research (ie, the basic sciences: microbiology, biochemistry, physiology, etc) and direct clinical patient care, where they will more certainly be needed for an increasingly growing and aging population.

If this trend toward the public (social) health field continues, we will have more young people attending law schools, social science programs, and schools of public health than nursing and medical schools. They will not be doing basic research or clinical medicine but working on purely epidemiologic studies, carrying out armchair multivariate analyses with complex numerical manipulations and regression models, rather than preparing themselves for the challenges of basic science (biologic) research or attending medical and nursing schools, where there has been a perpetual need for young applicants for many years. Yes, biologic scientists, nurses, and physicians will be desperately needed for the rapidly aging population of Americans, the baby boomers, who are already retiring and soon will swell the ranks of septuagenarians and octogenarians. We need more nurses, clinicians, physicians at the bedside, and public health workers in the arena of medical and health care delivery rather than solely at the computer terminals!

Frankly, money is being squandered on politicized, preconceived public health research directed toward collectivist agendas, while the government (with the insurance companies following suit) keeps cutting reimbursement for the physicians and nurses who are actually ministering to patients with real individual medical problems. It is not only a question of squandering money and misallocating finite health care resources but also, in the end, a question of population-based ethics vs the reinstitution of the individual-based ethics of Hippocrates [[5], [10], [11]].

Again, a contributing factor to the growing problem of pseudoscientific research is, frankly, too much money allocated to a narrow area of research (injury prevention and promotion of the Healthy People 2010 agenda). I believe that cost-ineffective research has been approved for the maintenance and expansion of budgets and, in some cases, for the propagation of social welfare–type programs supported by junk science.

Bluntly speaking, if I may be so bold, radical surgery is needed from the top to end politicized public health “injury research” (including the wealth redistribution, socialistic goals of Healthy People 2010) and return to the field of science and genuine scientific investigation. Taxpayers’ money should be transferred from the injury prevention and the social sciences masquerading as public health to the genuine good work that the CDC is doing in the field of infectious disease control and prevention as originally intended. There are frankly too many social scientists and researchers milking the government cow at the expense of the overburdened taxpaying public.

 


—  Miguel A. Faria, Jr., M.D. is a retired Clinical Professor of Neurosurgery and Adjunct Professor of Medical History at Mercer University School of Medicine. He is Associate Editor in Chief and World Affairs Editor of Surgical Neurology International. He served on the CDC’s Injury Research Grant Review Committee.

All DRGO articles by Miguel A. Faria, Jr., M.D.

 

References

  1. Alibek, K. and Handelman, S. in: Biohazard. A Delta Book (a division of Random House), New York (NY); 2000: 270–292
  2. Arnett, J.C. Book review: junk science judo by Steven J. Milloy. Medical Sentinel. 2002; 7: 134–135
  3. Bennett, J.T. and DiLorenzo, T.J. in: From pathology to politics—public health in America. Transaction Publishers, New Brunswick (NJ); 2000: 21–115
  4. Charlton, B.G. Statistical malpractice. J R Coll Physicians. 1996; : 112–114
  5. Faria, M.A. Managed care—corporate socialized medicine. Medical Sentinel. 1998; 3: 45–46 (www.haciendapub.com)
  6. Faria, M.A. Part 1: Public health, social science, and the scientific method. (Details pending publication of Part 1 in Surgical Neurology.)
  7. Faria, M.A. Public health—from science to politics. Medical Sentinel. 2001; 6: 46–49 (www.haciendapub.com)
  8. Faria, M.A. The perversion of science and medicine (Parts I–II). Medical Sentinel. 1997; 2: 46–53 (www.haciendapub.com)
  9. Faria, M.A. The perversion of science and medicine (Parts III–IV). Medical Sentinel. 1997; 2: 81–86 (www.haciendapub.com)
  10. Faria, M.A. The transformation of medical ethics through time (Part I): medical oaths and statist controls. Medical Sentinel. 1998; 3: 19–24 (www.haciendapub.com)
  11. Faria, M.A. The transformation of medical ethics through time (Part II): medical ethics and organized medicine. Medical Sentinel. 1998; 3: 53–56 (www.haciendapub.com)
  12. Milloy, S.J. Junk science judo—self-defense against health scares and scams. Cato Institute, Washington (DC); 2001: 41–114 (http://www.cato.org)