Technical, practical and analytic innovations in single case designs for contextual behavioral scientists

Author(s):

Roger Vilardaga

Introduction

Methods to observe, analyze, and generate knowledge geared at fostering individual behavior change have been present in the field since the 1930s. B.F. Skinner and R.A. Fisher were pioneers in the analysis of individual data, both analytically and statistically. Fisher, among many other innovations, introduced the idea of randomization tests in a brief but consequential way (Fisher, 1935), and his work inspired a lineage of statisticians who persisted in developing analytic methods that do not depend on population assumptions. Skinner, on the other hand, created the field of behavior analysis (Skinner, 1938) and inspired several generations of behavioral scientists who in turn developed a variety of single case designs (SCDs) focused on the analysis of individuals' behavior over time.

Despite these initial efforts, mainstream psychology has placed little emphasis on the development and enhancement of SCDs. Almost "by default," most researchers conceive and plan their studies in terms of summarizing responses from groups of individuals. Group designs, whether small, medium, or large, have become the standard, sometimes because these methods are perceived as the only ones capable of producing experimental data, and other times because they are arguably the only methods that generate results generalizable to a larger population.

Among group designs, randomized controlled trials (the "gold standard") are critically important, as they have generated a host of knowledge relevant to individuals and society. These trials are critical for science and can inform the population-level impact of certain interventions, which is difficult to address with other methods. However, they can also become a "giant with feet of clay." Millions of dollars are spent on a single rigorously designed randomized controlled trial. If it fails, millions of dollars go to waste, and little can be learned. Further, even when these trials are grounded in solid basic behavioral science research, interventions tested in the laboratory do not always translate into individuals' natural environments. Thus the question is not how to discourage researchers from conducting randomized controlled group trials, but instead how to equip large group trials with durable and solid "feet."

This special issue hopefully provides an answer. Recent innovations in data analysis and technology have opened up the field to unforeseen opportunities for research and practice. These innovations have the potential to enhance SCDs and provide behavioral scientists and practitioners with the ability to (1) test their hypotheses experimentally, (2) examine the impact of new interventions in individuals' natural environments, and (3) enhance evidence-based practices. In other words, these methods provide researchers and practitioners with solid ground to refine their research hypotheses and theory in a more agile manner (Riley, Glasgow, Etheredge, & Abernethy, 2013). Further, the importance of these low-cost, high-speed methods comes at a time when both the public and research agencies demand rigorous data about the utility of a variety of emerging interventions (e.g., mHealth; Kumar et al., 2013).
