SAGE Research Methods

An Applied Guide to Research Designs: Quantitative, Qualitative, and Mixed Methods

Author: W. Alex Edmonds, Thomas D. Kennedy

Pub. Date: 2019

Product: SAGE Research Methods

DOI: https://dx.doi.org/10.4135/9781071802779

Methods: Research questions, Experimental design, Mixed methods

Disciplines: Anthropology, Education, Geography, Health, Political Science and International Relations, Psychology, Social Policy and Public Policy, Social Work, Sociology

Access Date: January 11, 2023

Publishing Company: SAGE Publications, Inc

City: Thousand Oaks

Online ISBN: 9781071802779

© 2019 SAGE Publications, Inc All Rights Reserved.

Quantitative Methods for Experimental and Quasi-Experimental Research

Part I includes four popular approaches to the quantitative method (experimental and quasi-experimental only), followed by some of the associated basic designs (accompanied by brief descriptions of published studies that used the design). Visit the companion website at study.sagepub.com/edmonds2e to access valuable instructor and student resources. These resources include PowerPoint slides, discussion questions, class activities, SAGE journal articles, web resources, and online data sets.

Figure I.1 Quantitative Method Flowchart

Note: Quantitative methods for experimental and quasi-experimental research are shown here, followed by the approach and then the design.

Research in quantitative methods essentially refers to the application of the systematic steps of the scientific method, using quantitative properties (i.e., numerical systems) to research the relationships or effects of specific variables. Measurement is the critical component of the quantitative method; it reveals and illustrates the relationship between quantitatively derived variables. Variables within quantitative methods must first be conceptually defined (i.e., given a scientific definition) and then operationalized (i.e., an appropriate measurement tool is determined based on the conceptual definition). Research in quantitative methods is typically described as a deductive and iterative process. That is, based on the findings, a theory is supported (or not), expanded, or refined and further tested.

Researchers must employ the following steps when determining the appropriate quantitative research design.


1. Formulate a measurable or testable research question (or hypothesis). The question must maintain the following qualities: (a) precision, (b) viability, and (c) relevance. The question must be precise and well formulated; the more precise it is, the easier it is to appropriately operationalize the variables of interest. The question must be viable in that it is logistically feasible or plausible to collect data on the variable(s) of interest. The question must also be relevant so that the findings will maintain an appropriate level of practical and scientific meaning.
2. Choose the appropriate design based on the primary research question, the variables of interest, and logistical considerations. The researcher must also determine whether randomization to conditions is possible or plausible. In addition, decisions must be made about how and where the data will be collected; the design will assist in determining when the data will be collected. The unit of analysis (i.e., individual, group, or program level), population, sample, and sampling procedures should be identified in this step.
3. Operationalize the variables, and then collect the data following the format of the framework provided by the research design of choice.
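These planning decisions can be recorded in a simple checklist before any data are collected. The sketch below is illustrative only; the field names and example values are hypothetical and are not part of the authors' framework.

from dataclasses import dataclass

@dataclass
class StudyPlan:
    """Illustrative record of the decisions described in steps 1-3."""
    research_question: str      # precise, viable, and relevant
    variables: dict             # conceptual definition -> operational measure
    design: str                 # chosen design
    randomized: bool            # is random assignment to conditions possible?
    unit_of_analysis: str       # "individual", "group", or "program"
    sampling_procedure: str     # how the sample is drawn from the population

plan = StudyPlan(
    research_question="Does weekly tutoring improve algebra posttest scores?",
    variables={"algebra achievement": "score on a standardized algebra test"},
    design="pretest-posttest control group",
    randomized=True,
    unit_of_analysis="individual",
    sampling_procedure="random selection from the district enrollment roster",
)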

Experimental Research

Experimental research (sometimes referred to as randomized experiments) is considered to be the most powerful type of research in determining causation among variables. Cook and Campbell (1979) presented three conditions that must be met in order to establish cause and effect:

Covariation (the change in the cause must be related to the effect)
Temporal precedence (the cause must precede the effect)
No plausible alternative explanations (the cause must be the only explanation for the effect)

The essential feature of experimental research is the sound application of the elements of control: (a) manipulation, (b) elimination, (c) inclusion, (d) group or condition assignment, or (e) statistical procedures. Random assignment (not to be confused with random selection) of participants to conditions (or random assignment of conditions to participants [counterbalancing], as seen in repeated-measures approaches) is a critical step, which allows for increased control (improved internal validity) and limits the confounding effects of variables that are not being studied.
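To make the random assignment step concrete, here is a minimal Python sketch that shuffles a participant list into two conditions. The participant IDs are hypothetical, and a real study would typically use dedicated randomization software or blocked randomization.

import random

participants = [f"P{i:02d}" for i in range(1, 21)]  # 20 hypothetical participant IDs

random.seed(42)               # fixed seed so the allocation is reproducible
random.shuffle(participants)  # random order removes systematic selection into groups

midpoint = len(participants) // 2
treatment_group = participants[:midpoint]  # randomly assigned to the treatment condition
control_group = participants[midpoint:]    # randomly assigned to the control condition

print("Treatment:", treatment_group)
print("Control:", control_group)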

The random assignment to each group (condition) theoretically ensures that the groups are “probabilistically” equivalent (controlling for selection bias), and any differences observed in the pretests (if collected) are considered due to chance. Therefore, if all threats to internal, external, construct, and statistical conclusion validity were secured at “adequate” levels (i.e., all plausible alternative explanations are accounted for), the differences observed in the posttest measures can be attributed fully to the experimental treatment (i.e., cause and effect can be established). Conceptually, a causal effect is defined as a comparison of outcomes derived from treatment and control conditions on a common set of units (e.g., school, person).
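This comparison can be written compactly in potential-outcomes notation. The formalization below is a standard one added here for illustration; it is not taken from the text itself.

\[
\tau_i = Y_i(1) - Y_i(0), \qquad \text{ATE} = E\big[Y(1) - Y(0)\big] = E\big[Y(1)\big] - E\big[Y(0)\big]
\]

Here Y_i(1) and Y_i(0) are the outcomes unit i would show under the treatment and control conditions, respectively. Random assignment makes the two groups probabilistically equivalent, so the observed difference in group means estimates the average treatment effect (ATE).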

The strength of experimental research rests in the reduction of threats to internal validity. Many threats are controlled for through the application of random assignment of participants to conditions. Random selection, on the other hand, is related to sampling procedures and is a major factor in establishing external validity (i.e., generalizability of results). A sample is randomly selected from a population so that the sample better represents that population. However, Lee and Rubin (2015) presented a statistical approach that allows researchers to draw data from existing data sets from experimental research and examine subgroups (post hoc subgroup analysis). Nonetheless, random assignment is related to design, and random selection is related to sampling procedures.

Shadish, Cook, and Campbell (2002) introduced the term generalized causal inference. They posit that if a researcher follows the appropriate tenets of experimental design logic (e.g., includes the appropriate number of subjects, uses random selection and random assignment) and controls for threats to all types of validity (including test validity), then valid causal inferences can be determined, along with the ability to generalize the causal link. This is truly realized once multiple replications of the experiment are conducted and comparable results are observed over time (replication being the operative word). Recently, though, there have been concerns related to the reproducibility of experimental studies published in the field of psychology, for example (see Baker, 2015; Bohannon, 2015).
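To keep the distinction concrete, the following minimal sketch shows random selection: drawing a simple random sample from a hypothetical sampling frame. It complements the random assignment sketch above, and the identifiers are made up.

import random

# Hypothetical sampling frame: the full population of interest (e.g., a student roster).
population = [f"S{i:04d}" for i in range(1, 1001)]  # 1,000 population members

random.seed(7)                          # fixed seed for a reproducible draw
sample = random.sample(population, 50)  # simple random sample of 50, without replacement

print(sample[:10])  # first ten sampled IDs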

Reproducibility could be enhanced if the proper tenets of the scientific method are followed and the relevant aspects of validity are addressed (i.e., internal and construct). Researchers tend to gloss over these constructs and rarely report how they ensured that the data were valid, often assuming that a statistical analysis can be used to “fix” or overshadow the inherent problems of the data. Bad data are clearly the issue, which calls to mind the computer science saying “garbage in, garbage out.” More specifically, when appropriate measures are taken to ensure design and test validity, the data will be cleaner, which results in fewer reporting errors in the statistical results. Although probability sampling (e.g., random selection) adds another logistical obstacle to experimental research, it should be emphasized alongside proper random assignment techniques.

Although this book is dedicated primarily to the application of research designs in the social and behavioral sciences, it is important to note the distinction between research designs in the health sciences and those in the social sciences. Experimental research in the health or medical sciences shares the same designs, although the terminology differs slightly, and the guidelines for reporting the data can be more stringent (e.g., see Schulz, Altman, & Moher, 2010, and Appendix H for guidelines and a checklist). These guidelines are designed to enhance the quality of the application of the design, which in turn leads to enhanced reproducibility. The most common term used to express experimental research in the field of medicine is randomized controlled trial (RCT). The term RCT simply indicates that subjects are randomly assigned to conditions. The most common of the RCT designs is the parallel-group approach, which is another term for the between-subjects approach and is discussed in more detail in the following sections. RCTs can also use crossover and factorial designs, which are designated under the within-subjects approach (repeated measures).

Quasi-Experimental Research

Quasi-experimental research uses nonrandom assignment of participants to each condition, which allows for convenience when it is logistically not possible to use random assignment. Quasi-experimental research designs are also referred to as field research (i.e., the research is conducted with an intact group in the field as opposed to the lab), and they are also known as nonequivalent designs (i.e., participants are not randomly assigned to each condition; therefore, the groups are assumed to be nonequivalent). Hence, the major difference between experimental and quasi-experimental research designs is the level of control and the assignment to conditions. The actual designs are structurally the same, but the analyses of the data are not. However, some of the basic pretest and posttest designs can be modified (e.g., by adding multiple observations or including comparison groups) in an attempt to compensate for the lack of group equivalency. In the design structure, a dashed line (- - -) between groups indicates that the participants were not randomly assigned to conditions. Review Appendix A for more examples of “quasi-experimental” research designs (see also the example of a diagram in Figure I.2).

Because there is no random assignment in quasi-experimental research, there may be confounding variables influencing the outcome that are not fully attributable to the treatment (i.e., causal inferences drawn from quasi-experiments must be made with extreme caution). The pretest measure in quasi-experimental research allows the researcher to evaluate the lack of group equivalency and selection bias, which is what alters the statistical analysis between experimental and quasi-experimental research for the exact same design (see Cribbie, Arpin-Cribbie, & Gruman, 2010, for a discussion of tests of equivalence for independent group designs with more than two groups).
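Because intact groups may differ before the intervention, a common first step is to compare the groups on the pretest. The sketch below is a minimal illustration, assuming pretest scores are already available as two arrays of hypothetical values; it is not the equivalence-testing procedure described by Cribbie et al. (2010), which addresses designs with more than two groups.

import numpy as np
from scipy import stats

# Hypothetical pretest scores for two intact (nonrandomized) groups.
pretest_treatment = np.array([72, 68, 75, 70, 66, 74, 71, 69, 73, 67])
pretest_comparison = np.array([65, 70, 63, 68, 66, 64, 69, 62, 67, 71])

# Independent-samples t test on the pretest as a rough check of baseline equivalence.
t_stat, p_value = stats.ttest_ind(pretest_treatment, pretest_comparison)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# A large baseline difference signals selection bias that the analysis must address
# (e.g., by covarying the pretest rather than ignoring it).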


Figure I.2 Double Pretest Design for Quasi-Experimental Research

Note: This is an example of a between-subjects approach with a double pretest design. The double pretest allows the researcher to compare the “treatment effects” from O1 to O2 and then from O2 to O3. A major threat to internal validity with this design is testing, but the design controls for selection bias and maturation. The two pretests are not necessary if random assignment is used.
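The logic of the double pretest can be shown with a small calculation: the O1-to-O2 change indicates how much each group drifts before any treatment, and the O2-to-O3 change includes the treatment for one group. The group means below are hypothetical.

# Hypothetical group means at the three observation points (O1 and O2 pretests, O3 posttest).
treatment = {"O1": 50.0, "O2": 51.0, "O3": 58.0}   # treatment delivered between O2 and O3
comparison = {"O1": 49.5, "O2": 50.5, "O3": 51.5}  # no treatment

for label, g in (("treatment", treatment), ("comparison", comparison)):
    drift = g["O2"] - g["O1"]   # pre-intervention change (maturation/testing)
    change = g["O3"] - g["O2"]  # change over the treatment interval
    print(f"{label}: O1->O2 = {drift:+.1f}, O2->O3 = {change:+.1f}")

# If the treatment group's O2->O3 change clearly exceeds both its own O1->O2 drift and the
# comparison group's O2->O3 change, the treatment becomes a more plausible explanation.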

It is not recommended to use posttest-only designs for quasi-experimental research. However, if theoretically or logistically it does not make sense to use a pretest measure, then additional controls should be implemented, such as using historical control groups, proxy pretest variables (see Appendix A), or the matching technique to assign participants to conditions.

The reader is referred to Shadish, Clark, and Steiner (2008) for an in-depth discussion of how linear regression and propensity scores can be used to bring the findings of quasi-experimental research closer to those of experimental research. They discuss this in the greater context of the potential weaknesses and strengths of quasi-experimental research in determining causation.
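As a rough illustration of the propensity score idea, the sketch below estimates each participant's probability of being in the treated group from observed covariates and uses inverse-probability weights to compare outcomes. It is a generic sketch under simplified assumptions, not the procedure of Shadish, Clark, and Steiner (2008); the covariates, sample size, and effect are hypothetical.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical data: two covariates, nonrandom treatment uptake, and an outcome.
n = 500
X = rng.normal(size=(n, 2))                               # observed covariates
treated = (X[:, 0] + rng.normal(size=n) > 0).astype(int)  # selection depends on X
outcome = 2.0 * treated + X[:, 0] + rng.normal(size=n)    # true effect = 2.0

# Step 1: estimate propensity scores (probability of treatment given covariates).
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# Step 2: inverse-probability weighting to balance the groups on the covariates.
w = np.where(treated == 1, 1 / ps, 1 / (1 - ps))
effect = (np.average(outcome[treated == 1], weights=w[treated == 1])
          - np.average(outcome[treated == 0], weights=w[treated == 0]))

naive = outcome[treated == 1].mean() - outcome[treated == 0].mean()
print(f"naive difference: {naive:.2f}, weighted estimate: {effect:.2f}")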

