How do studies using the experimental research design differ from other types of research?

J Athl Train. 2010 Jan-Feb; 45(1): 98–100.

Abstract

Context:

The purpose of study, experimental, or research design in scientific manuscripts has changed significantly over the years. It has evolved from an explanation of the design of the experiment (ie, data gathering or acquisition) to an explanation of the statistical analysis. This practice makes “Methods” sections hard to read and understand.

Objective:

To clarify the difference between study design and statistical analysis, to show the advantages of a properly written study design on article comprehension, and to encourage authors to correctly describe study designs.

Description:

The role of study design is explored from the introduction of the concept by Fisher through modern-day scientists and the AMA Manual of Style. At one time, when experiments were simpler, the study design and statistical design were identical or very similar. With the complex research that is common today, which often includes manipulating variables to create new variables and the multiple (and different) analyses of a single data set, data collection is very different than statistical design. Thus, both a study design and a statistical design are necessary.

Advantages:

Scientific manuscripts will be much easier to read and comprehend. A proper experimental design serves as a road map to the study methods, helping readers to understand more clearly how the data were obtained and, therefore, assisting them in properly analyzing the results.

Keywords: scientific writing, scholarly communication

Study, experimental, or research design is the backbone of good research. It directs the experiment by orchestrating data collection, defines the statistical analysis of the resultant data, and guides the interpretation of the results. When properly described in the written report of the experiment, it serves as a road map to readers,1 helping them negotiate the “Methods” section, and, thus, it improves the clarity of communication between authors and readers.

A growing trend is to equate study design with only the statistical analysis of the data. The design statement typically is placed at the end of the “Methods” section as a subsection called “Experimental Design” or as part of a subsection called “Data Analysis.” This placement, however, equates experimental design and statistical analysis, minimizing the effect of experimental design on the planning and reporting of an experiment. This linkage is inappropriate, because some of the elements of the study design that should be described at the beginning of the “Methods” section are instead placed in the “Statistical Analysis” section or, worse, are absent from the manuscript entirely.

Have you ever interrupted your reading of the “Methods” to sketch out the variables in the margins of the paper as you attempt to understand how they all fit together? Or have you jumped back and forth from the early paragraphs of the “Methods” section to the “Statistics” section to try to understand which variables were collected and when? These efforts would be unnecessary if a road map at the beginning of the “Methods” section outlined how the independent variables were related, which dependent variables were measured, and when they were measured. When they were measured is especially important if the variables used in the statistical analysis were a subset of the measured variables or were computed from measured variables (such as change scores).

The purpose of this Communications article is to clarify the purpose and placement of study design elements in an experimental manuscript. Adopting these ideas may improve your science and surely will enhance the communication of that science. These ideas will make experimental manuscripts easier to read and understand and, therefore, will allow them to become part of readers' clinical decision making.

WHAT IS A STUDY (OR EXPERIMENTAL OR RESEARCH) DESIGN?

The terms study design, experimental design, and research design are often thought to be synonymous and are sometimes used interchangeably in a single paper. Avoid doing so. Use the term that is preferred by the style manual of the journal for which you are writing. Study design is the preferred term in the AMA Manual of Style,2 so I will use it here.

A study design is the architecture of an experimental study3 and a description of how the study was conducted,4 including all elements of how the data were obtained.5 The study design should be the first subsection of the “Methods” section in an experimental manuscript (see the Table). “Statistical Design” or, preferably, “Statistical Analysis” or “Data Analysis” should be the last subsection of the “Methods” section.

Table. Elements of a “Methods” Section

The “Study Design” subsection describes how the variables and participants interacted. It begins with a general statement of how the study was conducted (eg, a crossover trial, a parallel-group trial, or an observational study).2 The second element, which usually begins with the second sentence, details the number of independent variables or factors, the levels of each variable, and their names. A shorthand way of doing so is with a statement such as “A 2 × 4 × 8 factorial guided data collection.” This tells us that there were 3 independent variables (factors), with 2 levels of the first factor, 4 levels of the second factor, and 8 levels of the third factor. A sentence that names the levels of each factor follows: for example, “The independent variables were sex (male or female), training program (eg, walking, running, weight lifting, or plyometrics), and time (2, 4, 6, 8, 10, 15, 20, or 30 weeks).” Such an approach clearly outlines for readers how the various procedures fit into the overall structure and, therefore, enhances their understanding of how the data were collected. Thus, the design statement is a road map of the methods.
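A factorial design statement like this can be checked by enumerating its cells. The following Python sketch (the factor names and levels are taken from the example statement above; the code itself is illustrative, not part of the original article) lists every combination of levels:

```python
from itertools import product

# Factors and levels from the example design statement:
# a 2 x 4 x 8 factorial (sex x training program x time).
factors = {
    "sex": ["male", "female"],
    "training": ["walking", "running", "weight lifting", "plyometrics"],
    "time_weeks": [2, 4, 6, 8, 10, 15, 20, 30],
}

# Each combination of factor levels is one cell of the design.
cells = list(product(*factors.values()))

print(len(cells))  # 2 * 4 * 8 = 64 cells
print(cells[0])    # ('male', 'walking', 2)
```

Counting the cells this way is a quick sanity check that the shorthand (2 × 4 × 8) matches the named factors and levels.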

The dependent (or measurement or outcome) variables are then named. Details of how they were measured are not given at this point in the manuscript but are explained later in the “Instruments” and “Procedures” subsections.

Next is a paragraph detailing who the participants were and how they were selected, placed into groups, and assigned to a particular treatment order, if the experiment was a repeated-measures design. And although not a part of the design per se, a statement about obtaining written informed consent from participants and institutional review board approval is usually included in this subsection.

The nuts and bolts of the “Methods” section follow, including such things as equipment, materials, protocols, etc. These are beyond the scope of this commentary, however, and so will not be discussed.

The last part of the “Methods” section and last part of the “Study Design” section is the “Data Analysis” subsection. It begins with an explanation of any data manipulation, such as how data were combined or how new variables (eg, ratios or differences between collected variables) were calculated. Next, readers are told of the statistical measures used to analyze the data, such as a mixed 2 × 4 × 8 analysis of variance (ANOVA) with 2 between-groups factors (sex and training program) and 1 within-groups factor (time of measurement). Researchers should state and reference the statistical package and procedure(s) within the package used to compute the statistics. (Various statistical packages perform analyses slightly differently, so it is important to know the package and specific procedure used.) This detail allows readers to judge the appropriateness of the statistical measures and the conclusions drawn from the data.
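The data-manipulation step described here often amounts to reshaping the collected data into the form the statistical procedure expects. A minimal Python sketch (the participant records and column names are hypothetical, invented for illustration) converts wide-format collection data, one row per participant, into the long format that a repeated-measures analysis with a within-groups time factor typically requires:

```python
# Hypothetical wide-format records: one row per participant, one
# column per measurement time (the form in which data are often collected).
wide = [
    {"id": 1, "sex": "male", "program": "running", "wk2": 10.0, "wk4": 12.0},
    {"id": 2, "sex": "female", "program": "walking", "wk2": 9.0, "wk4": 9.5},
]

# Data-manipulation step: reshape to long format, one row per
# participant per time point, as most ANOVA procedures expect.
long = [
    {"id": r["id"], "sex": r["sex"], "program": r["program"],
     "week": int(col[2:]), "strength": r[col]}
    for r in wide
    for col in ("wk2", "wk4")
]

print(len(long))  # 2 participants x 2 times = 4 rows
```

Describing this reshaping explicitly in the “Data Analysis” subsection tells readers exactly which variables entered the statistical procedure and in what form.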

STATISTICAL DESIGN VERSUS STATISTICAL ANALYSIS

Avoid using the term statistical design. Statistical methods are only part of the overall design. The term gives too much emphasis to the statistics, which are important, but only one of many tools used in interpreting data and only part of the study design:

The most important issues in biostatistics are not expressed with statistical procedures. The issues are inherently scientific, rather than purely statistical, and relate to the architectural design of the research, not the numbers with which the data are cited and interpreted.6

Stated another way, “The justification for the analysis lies not in the data collected but in the manner in which the data were collected.”3 “Without the solid foundation of a good design, the edifice of statistical analysis is unsafe.”7(pp4–5)

The intertwining of study design and statistical analysis may have been caused (unintentionally) by R.A. Fisher, “… a genius who almost single-handedly created the foundations for modern statistical science.”8 Most research did not involve statistics until Fisher invented the concepts and procedures of ANOVA (in 1921)9,10 and experimental design (in 1935).11 His books became standard references for scientists in many disciplines. As a result, many ANOVA books were titled Experimental Design (see, for example, Edwards12), and ANOVA courses taught in psychology and education departments included the words experimental design in their course titles.

Before the widespread use of computers to analyze data, designs were much simpler, and often there was little difference between study design and statistical analysis. So combining the 2 elements did not cause serious problems. This is no longer true, however, for 3 reasons: (1) Research studies are becoming more complex, with multiple independent and dependent variables. The procedures sections of these complex studies can be difficult to understand if your only reference point is the statistical analysis and design. (2) Dependent variables are frequently measured at different times. (3) How the data were collected is often not directly correlated with the statistical design.

For example, assume the goal is to determine the strength gain in novice and experienced athletes as a result of 3 strength training programs. Rate of change in strength is not a measurable variable; rather, it is calculated from strength measurements taken at various time intervals during the training. So the study design would be a 2 × 2 × 3 factorial with independent variables of time (pretest or posttest), experience (novice or advanced), and training (isokinetic, isotonic, or isometric) and a dependent variable of strength. The statistical design, however, would be a 2 × 3 factorial with independent variables of experience (novice or advanced) and training (isokinetic, isotonic, or isometric) and a dependent variable of strength gain. Note that data were collected according to a 3-factor design but were analyzed according to a 2-factor design and that the dependent variables were different. So a single design statement, usually a statistical design statement, would not communicate which data were collected or how. Readers would be left to figure out on their own how the data were collected.
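The strength-gain example can be sketched directly: data are collected under the 3-factor study design, and a derived variable then collapses the time factor, leaving the 2-factor statistical design. The values below are hypothetical, invented for the sketch:

```python
# Hypothetical strength data collected under the 2 x 2 x 3 study design:
# time (pre, post) x experience (novice, advanced) x training program.
collected = [
    {"id": 1, "experience": "novice", "program": "isokinetic",
     "pre": 100.0, "post": 112.0},
    {"id": 2, "experience": "advanced", "program": "isotonic",
     "pre": 140.0, "post": 147.0},
]

# Data manipulation: collapse the time factor into a derived variable,
# strength gain, leaving the 2 x 3 statistical design (experience x training).
analysis = [
    {"experience": r["experience"], "program": r["program"],
     "gain": r["post"] - r["pre"]}
    for r in collected
]

print(analysis[0]["gain"])  # 12.0
```

Note that the time factor and the raw strength scores appear only in the collection step; the analysis step sees two factors and a different dependent variable, which is exactly why a single design statement cannot describe both.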

MULTIVARIATE RESEARCH AND THE NEED FOR STUDY DESIGNS

With the advent of electronic data gathering and computerized data handling and analysis, research projects have increased in complexity. Many projects involve multiple dependent variables measured at different times, and, therefore, multiple design statements may be needed for both data collection and statistical analysis. Consider, for example, a study of the effects of heat and cold on neural inhibition. The variables of Hmax and Mmax are measured 3 times each: before, immediately after, and 30 minutes after a 20-minute treatment with heat or cold. Muscle temperature might be measured each minute before, during, and after the treatment. Although the minute-by-minute data are important for graphing temperature fluctuations during the procedure, only 3 temperatures (time 0, time 20, and time 50) are used for statistical analysis. A single dependent variable Hmax:Mmax ratio is computed to illustrate neural inhibition. Again, a single statistical design statement would tell little about how the data were obtained. And in this example, separate design statements would be needed for temperature measurement and Hmax:Mmax measurements.
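The data manipulation in this example can be sketched as follows (all measurement values are hypothetical, chosen only to make the code run): the Hmax:Mmax ratio is computed as the analysis variable, and only 3 of the 51 recorded temperatures are retained for statistics:

```python
# Hypothetical neural-inhibition data: Hmax and Mmax measured before,
# immediately after, and 30 min after a 20-minute treatment.
hmax = {"pre": 5.0, "post": 3.0, "post30": 4.0}
mmax = {"pre": 10.0, "post": 10.0, "post30": 10.0}

# Muscle temperature recorded every minute for 50 minutes
# (a made-up linear cooling curve stands in for real data).
temperature = {minute: 36.0 - 0.1 * minute for minute in range(51)}

# Derived dependent variable: the Hmax:Mmax ratio at each time point.
ratio = {t: hmax[t] / mmax[t] for t in hmax}

# Only 3 of the 51 temperatures enter the statistical analysis.
analyzed_temps = {t: temperature[t] for t in (0, 20, 50)}

print(ratio["post"], len(analyzed_temps))  # 0.3 3
```

Separate design statements for the temperature data and the Hmax:Mmax data would make both the full measurement schedule and the reduced analysis set explicit to readers.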

As stated earlier, drawing conclusions from the data depends more on how the data were measured than on how they were analyzed.3,6,7,13 So a single study design statement (or multiple such statements) at the beginning of the “Methods” section acts as a road map to the study and, thus, increases scientists' and readers' comprehension of how the experiment was conducted (ie, how the data were collected). Appropriate study design statements also increase the accuracy of conclusions drawn from the study.

CONCLUSIONS

The goal of scientific writing, or any writing, for that matter, is to communicate information. Including 2 design statements or subsections in scientific papers—one to explain how the data were collected and another to explain how they were statistically analyzed—will improve the clarity of communication and bring praise from readers. To summarize:

  1. Purge from your thoughts and vocabulary the idea that experimental design and statistical design are synonymous.

  2. Study or experimental design plays a much broader role than simply defining and directing the statistical analysis of an experiment.

  3. A properly written study design serves as a road map to the “Methods” section of an experiment and, therefore, improves communication with the reader.

  4. Study design should include a description of the type of design used, each factor (and each level) involved in the experiment, and the time at which each measurement was made.

  5. Clarify when the variables involved in data collection and data analysis are different, such as when data analysis involves only a subset of a collected variable or a resultant variable from the mathematical manipulation of 2 or more collected variables.

Acknowledgments

Thanks to Thomas A. Cappaert, PhD, ATC, CSCS, CSE, for suggesting the link between R.A. Fisher and the melding of the concepts of research design and statistics.

REFERENCES

1. Knight KL, Ingersoll CD. Structure of a scholarly manuscript: 66 tips for what goes where. J Athl Train. 1996;31(3):201–206.

2. Iverson C, Christiansen S, Flanagin A, et al. AMA Manual of Style: A Guide for Authors and Editors. 10th ed. New York, NY: Oxford University Press; 2007.

3. Altman DG. Practical Statistics for Medical Research. New York, NY: Chapman & Hall; 1991:4–5.

4. Thomas JR, Nelson JK, Silverman SJ. Research Methods in Physical Activity. 5th ed. Champaign, IL: Human Kinetics; 2005.

5. Leedy PD. Practical Research, Planning and Design. 3rd ed. New York, NY: Macmillan Publishing; 1985:96–99.

6. Feinstein AR. Clinical biostatistics XXV: a survey of the statistical procedures in general medical journals. Clin Pharmacol Ther. 1974;15(1):97–107.

7. Schoolman HM, Becktel JM, Best WR, Johnson AF. Statistics in medical research: principles versus practices. J Lab Clin Med. 1968;71(3):357–367.

8. Hald A. A History of Mathematical Statistics. New York, NY: Wiley; 1998.

10. Fisher RA. Statistical Methods for Research Workers. Edinburgh, Scotland: Oliver and Boyd; 1925. Cited by: Fisher RA, Bennett JH, eds. Statistical Methods, Experimental Design, and Scientific Inference: A Reissue of Statistical Methods for Research Workers, The Design of Experiments, and Statistical Methods and Scientific Inference. New York, NY: Oxford University Press; 1993.

11. Fisher RA. The Design of Experiments. Edinburgh, Scotland: Oliver and Boyd; 1935. Cited by: Fisher RA, Bennett JH, eds. Statistical Methods, Experimental Design, and Scientific Inference: A Reissue of Statistical Methods for Research Workers, The Design of Experiments, and Statistical Methods and Scientific Inference. New York, NY: Oxford University Press; 1993.

12. Edwards AL. Experimental Design in Psychological Research. New York, NY: Rinehart and Co; 1942.

13. Lang TA, Secic M. How to Report Statistics in Medicine. 2nd ed. Philadelphia, PA: American College of Physicians; 2006:175.


What is the difference between descriptive research and experimental research?

Descriptive research describes a phenomenon or a group under study. Experimental research manipulates variables so that the researcher can reach a conclusion or finding.

How does experimental design differ from descriptive research?

The main difference between the two is that descriptive research is a qualitative or quantitative approach dedicated to observing variables in their natural setting, whereas experimental research takes a quantitative, scientific approach to testing hypotheses and theories by controlling variables.

Why is experimental research design considered the best?

Through accurate, precise empirical measurement and control, an experimental design increases a researcher's ability to determine causal relationships and to state causal conclusions. As a subset of scientific investigation, experimental design is a popular and widely used research approach.

How are descriptive and experimental research designs similar, and how do they differ?

Descriptive research describes the characteristics of a study group or of a particular occurrence, while experimental research manipulates variables to reach a conclusion. This is the main difference between descriptive and experimental research.