Refereeing Clinical Research Papers for Statistical Content
AMERICAN JOURNAL OF OPHTHALMOLOGY
Vol. 100, No. 5, November 1985
Frank W. Newell, Publisher and Editor-in-Chief
Published monthly by the Ophthalmic Publishing Company, Tribune Tower, Suite 1415, 435 North Michigan Avenue, Chicago, Illinois 60611

EDITORIAL

Fred Ederer

From time to time clinical investigators have approached me, a biostatistician, holding in their hands freshly collected data, to ask whether I could help them with the analysis. All too often I need to tell these investigators that it would have been better to come for help before the data were collected.
Clinical investigations need to be properly designed and properly executed for the results to be properly analyzable.

There is a common misconception about the role of biostatisticians in clinical research: namely, that their expertise lies solely in data analysis and that problems in study design and conduct are not in their domain. A similar misconception appears to prevail about the statistician's role as a referee of clinical research papers: that this role is limited to checking the method of analysis, for example, to be sure that the t values and P values are right. I would like to correct that impression.

To some extent the role of the statistical referee is the same as that of any referee. Like any referee, the statistical referee should ask:

Is the subject matter important?
Is the topic suitable for the readership of the journal?
Did the investigators use sound scientific methods?
Do the authors use a scholarly approach in presenting and discussing their material?
Is the paper well organized and does it make enjoyable reading?

Beyond these and similar general questions, the statistical referee is concerned about the following issues.

Is there adequate description of the source of the patients studied and how they were selected?

One recent article from a named eye clinic did provide an adequate description by stating: "We reviewed the records of 82 consecutive patients (83 eyes) with retinal detachment complicated by proliferative vitreoretinopathy treated by one of us ... over a four-year period from Jan. 1, 1980, to Dec. 30, 1983." The operative word regarding patient selection is "consecutive."¹ Lincoln Moses,² writing in the New England Journal of Medicine, asked, "What is a series?" Moses requires that "all eligible patients in some setting, over a stated period, are described."
He holds that "conclusions based on 'selected cases' are notoriously treacherous, because selection can grossly affect the (outcome) data" and that "at a minimum, the reader needs to know what criteria were used to determine inclusion and exclusion of subjects, how many subjects were included ... " Clinical research reports often fail to specify the source of the patients, the method of selection, or the criteria of eligibility and exclusion, leaving the reader uncertain about the characteristics of the patients to whom the results apply.

Is the disease defined and are details of the examination procedure and diagnostic criteria provided?

Replication is the hallmark of science. Ideally, a scientific investigation should be described in great enough detail so that another investigator can attempt to verify the results by replicating the study in another laboratory. It is true that clinical research is not an exact science. For example, diagnoses, particularly in ophthalmology, often depend on visual conceptions of recognizable patterns that may be difficult to define. Nevertheless, the more detail and the more specific the criteria reported, the better chance the reader has to understand the implications of the research and the greater chance other investigators have to replicate the study.

Are all patients accounted for in the data analysis?

Occasionally I read reports of studies of, for example, 100 patients, but the first table presents data not on 100 patients but on only 88 patients and the next table on only 83. The text, unfortunately, provides no explanation of why 12 patients are missing from the first table and 17 from the second. Did patients die during the study? Were any lost to follow-up? Did some not return for follow-up examinations? Did media opacities prevent viewing the retina? Missing information is a potential source of bias.
It is treacherous to assume that the course of the disease is the same in patients for whom data are missing as in those for whom data are complete. There should be a complete accounting for all patients, so that the reader has a basis for judging whether and to what extent the missing information could constitute a source of bias. When the amount of missing information is more than trivial, the authors have an obligation to discuss the potential effect of the missing information on the conclusions they are drawing.

Variable length of follow-up

In a clinical follow-up study the patients do not generally enter the study at the same time. Rather, they enter one at a time, so that when it comes time to analyze the results, length of follow-up varies from patient to patient. Clinicians, unaware of the life table method, tend to use an unsatisfactory approach to the analysis of follow-up data, as illustrated in the following example: "The rate of complications was 19%. Follow-up averaged 24 months, ranging from 6 to 52 months." The usefulness of this information is severely limited because it is not presented in a way that takes into account the time dependency of the percentage of complications.³,⁴ The life table method⁵,⁶ should be used: it permits (1) the use of all follow-up information for all patients and (2) the presentation of the risk of complications as a time-specific variable (for example, one-year, two-year, and three-year complication rates). The life table method is easily applied with help from a biostatistician.

Confounding variables

In clinical investigations we frequently make comparisons between groups of patients. For example, we compare disease outcomes in groups treated by different methods. We compare outcomes in men and women or in blacks and whites. In these comparisons it is important to determine whether the groups being compared are similarly constituted according to prognostic factors such as age or severity of disease.
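To make the confounding concern concrete, here is a minimal sketch in Python of a crude versus an age-adjusted (directly standardized) comparison of complication rates; the treatment arms, age strata, and counts are all invented for illustration and are not data from any study discussed here:

```python
# Hypothetical example: complication counts by treatment arm and age
# stratum. All names and numbers are invented for illustration.
strata = {
    "under 60":    {"A": (4, 80),  "B": (6, 40)},    # (complications, patients)
    "60 and over": {"A": (12, 40), "B": (18, 80)},
}

def crude_rate(arm):
    """Overall complication rate, ignoring age."""
    events = sum(strata[s][arm][0] for s in strata)
    n = sum(strata[s][arm][1] for s in strata)
    return events / n

def adjusted_rate(arm):
    """Directly standardized rate: apply the arm's stratum-specific
    rates to the combined (both-arm) age distribution."""
    total = sum(strata[s]["A"][1] + strata[s]["B"][1] for s in strata)
    rate = 0.0
    for s in strata:
        events, n = strata[s][arm]
        weight = (strata[s]["A"][1] + strata[s]["B"][1]) / total
        rate += weight * (events / n)
    return rate

# The crude rates suggest a large A-B difference (0.133 vs 0.200), but
# the age-adjusted rates are nearly equal (0.175 vs 0.1875): most of
# the apparent benefit of A is confounding by age, because arm A
# happens to contain more of the younger, lower-risk patients.
for arm in ("A", "B"):
    print(arm, round(crude_rate(arm), 4), round(adjusted_rate(arm), 4))
```

Here the younger stratum has a low complication rate in both arms, and arm A simply contains more young patients; only the standardized comparison makes the two arms comparable.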
When the groups differ in composition with respect to variables that influence prognosis ("confounding variables"), the results must be adjusted for the differences, or the comparison will be biased. Most clinical investigators will need professional biostatistical assistance to accomplish the adjustment.

Eyes vs patients

A potential pitfall in the statistical analysis of ocular data is related to the fact that the eye is a bilateral organ. "We examined 73 eyes in 39 patients." The ordinary statistical tests apply to independent observations only. Observations on paired eyes are generally positively correlated and not independent, facts that must be taken into account in statistical testing.⁷⁻⁹

Absence of statistical significance

Most clinical investigators know what a statistically significant difference is: it is a difference that is unlikely to have occurred by chance alone. By analogy, some investigators interpret the failure to find a significant difference to mean that a difference is unlikely to exist. That, however, may be a wrong interpretation. A possible alternative explanation is that the sample is too small for a difference to be detected. Absence of significance is not synonymous with significance of absence. To prevent this ambiguity in interpretation, many statisticians prefer confidence intervals to significance tests in assessing sampling variability because they are more informative.¹⁰ The confidence interval for an observed difference tells us not only whether the interval excludes or includes zero (that is, whether that difference is significant or not), but also how large the difference is likely to be.

Interactions

A clinical trial is performed to determine whether treatment A is more beneficial than treatment B. Although no hypotheses about subgroups were specified in advance, the investigators scour the data and discover a larger A-B difference in response to treatment for subgroup 1 than for subgroup 2.
Statistical tests find the difference for subgroup 1, but not that for subgroup 2, to be significant. The investigators conclude that treatment A is more beneficial than treatment B for subgroup 1 but not for subgroup 2. The conclusion, however, is not warranted, because they have not performed the correct test. They need to test whether the A-B difference for subgroup 1 is significantly different from that for subgroup 2. Statisticians call this a test for an interaction. Because the interaction hypothesis was one that was suggested by the data rather than one that the investigators set out to test, the results of the interaction test need to be interpreted with particular caution. (We note in passing that the investigators' conclusion about subgroup 2 is an example of the fallacious interpretation of absence of statistical significance.)

Design and conduct

The issues that a statistical referee of a clinical journal faces often concern the design and conduct of the study rather than the statistical analysis. Clinical investigators will generally benefit from collaborating with a biostatistician in planning and designing a study. Great benefits can be derived from a properly planned and designed study. The statistician can be helpful in assuring adequate sample size and in avoiding potential bias by such techniques as randomization, masking, collection of information on potential confounding variables, and prevention of losses to follow-up. And, of course, the statistician can be helpful in the collection and analysis of the data.

Reprint requests to Fred Ederer, Biometry and Epidemiology Program, National Eye Institute, National Institutes of Health, Bldg. 31, Rm. 6A16, Bethesda, MD 20892.

References

1. de Bustros, S., and Michels, R. G.: Surgical treatment of retinal detachments complicated by proliferative vitreoretinopathy. Am. J. Ophthalmol. 98:694, 1984.
2. Moses, L. E.: The series of consecutive cases as a device for assessing outcomes of intervention. N. Engl. J. Med. 311:705, 1984.
3.
Seigel, D.: Analysis of follow-up data. Arch. Ophthalmol. 103:647, 1985.
4. Hillis, A.: Improving reporting of follow-up data. Am. J. Ophthalmol. 93:250, 1982.
5. Cutler, S. J., and Ederer, F.: Maximum utilization of the life table method in analyzing survival. J. Chronic Dis. 8:699, 1958.
6. Kaplan, E., and Meier, P.: Nonparametric estimation from incomplete observations. J. Am. Stat. Assoc. 53:457, 1958.
7. Ederer, F.: Shall we count numbers of eyes or numbers of subjects? Arch. Ophthalmol. 89:1, 1973.
8. Rosner, B.: Statistical methods in ophthalmology. An adjustment for the intraclass correlation between eyes. Biometrics 38:105, 1982.
9. Ray, W. A., and O'Day, D. M.: Statistical analysis of multi-eye data in ophthalmic research. Invest. Ophthalmol. Vis. Sci. 26:1186, 1985.
10. Rothman, K. J.: A show of confidence. N. Engl. J. Med. 299:1362, 1978.
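The test for an interaction discussed under "Interactions," and the greater informativeness of confidence intervals noted under "Absence of statistical significance," can both be sketched numerically. The following Python sketch uses the usual large-sample normal approximation for a difference of two proportions; every count is invented for illustration:

```python
from math import sqrt

def diff_and_se(xa, na, xb, nb):
    """A-B difference of two proportions and its large-sample
    standard error (normal approximation)."""
    pa, pb = xa / na, xb / nb
    se = sqrt(pa * (1 - pa) / na + pb * (1 - pb) / nb)
    return pa - pb, se

# Invented counts: (successes, n) for treatments A and B in each subgroup.
d1, se1 = diff_and_se(30, 50, 20, 50)   # subgroup 1: A-B difference 0.20
d2, se2 = diff_and_se(26, 50, 22, 50)   # subgroup 2: A-B difference 0.08

# Within subgroup 1 alone, the A-B difference looks "significant" ...
z1 = d1 / se1                           # about 2.04, beyond 1.96

# ... but the correct question is whether the two subgroup differences
# differ from each other: the test for an interaction.
se_int = sqrt(se1 ** 2 + se2 ** 2)
z_int = (d1 - d2) / se_int              # about 0.86, not significant

# A 95% confidence interval for the interaction is more informative
# than the bare test: it shows how large the effect could plausibly be.
ci = ((d1 - d2) - 1.96 * se_int, (d1 - d2) + 1.96 * se_int)
print(round(z1, 2), round(z_int, 2), ci)
```

With these invented numbers the subgroup 1 test alone is significant while the interaction test is not, which is exactly the trap the editorial describes: a significant result in one subgroup and a nonsignificant one in the other does not establish that the treatment effect differs between the subgroups.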