
How to Do ACT Laboratory-Based Component Studies

This is a modified list of features that Dermot Barnes-Holmes presented at the first ACT Summer Institute in Reno in 2003. I (SCH) have added a few things as well.

A List of Features that ACT Laboratory-Based Component Studies and Experimental Analogs of ACT Processes Should Contain

Here is a list of features that we consider essential for conducting top-quality experimental research designed to model and test ACT processes in laboratory component research. There are almost certainly others, and the relative emphasis placed on each one will vary as a function of the research question and the overall design of the study.

In general, remember that the purpose of laboratory-based component research is primarily theoretical, so be very clear about the ideas you are testing. If you want to see whether these ideas make a practical, clinical difference, that requires clinical research. But it is better to test the clinical implications of theoretical ideas that work, so laboratory-based component research is very important as part of a broader research strategy.

Here are the design features to consider:

1. The experimenter should be blind to the intervention applied to each participant (or the procedure automated; see below).

2. The experimental conditions should be balanced as much as possible for all relevant attribute variables (e.g., gender, psychopathology), unless the attribute is itself the target of the analysis.
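One common way to implement the balancing in point 2 is blocked (stratified) randomization: shuffle participants within each stratum of the attribute, then cycle through conditions so cell counts stay equal. The sketch below is illustrative only; the function name, condition labels, and data are assumptions, not part of the original text.

```python
# Hypothetical sketch: stratified (blocked) randomization so that an
# attribute variable (here, gender) stays balanced across two conditions.
# All names and data are illustrative.
import random

def stratified_assign(participants, strata_key, conditions, seed=0):
    """Assign participants to conditions in shuffled blocks within each stratum."""
    rng = random.Random(seed)
    strata = {}
    for p in participants:
        strata.setdefault(p[strata_key], []).append(p)
    assignment = {}
    for members in strata.values():
        rng.shuffle(members)  # random order within the stratum
        for i, p in enumerate(members):
            # Cycle through conditions so each stratum contributes
            # (near-)equal numbers to every condition.
            assignment[p["id"]] = conditions[i % len(conditions)]
    return assignment

participants = [{"id": n, "gender": g} for n, g in
                enumerate(["f", "f", "f", "m", "m", "m", "f", "m"])]
assignment = stratified_assign(participants, "gender", ["defusion", "control"])
```

With four participants per stratum and two conditions, each condition receives exactly two participants of each gender, while the particular individuals assigned remain random.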

3. The experimenter should not be personally familiar with the participants; if they are, familiarity should be balanced across conditions.

4. The different interventions should be balanced in all possible ways except for the critical difference you are seeking to manipulate (e.g., they should be the same length; they should require similar levels of engagement with the material; if exercises are used that are appropriate for both conditions, they should be used in both; wording should be matched where possible; method of delivery should be identical; etc.).

5. The interventions should connect directly to the experimental challenge. In a pain tolerance study, for example, each of the interventions should focus on pain, not anxiety or anger, etc. (unless different foci are the target of the study).

6. Points 4 and 5 should be checked and supported by independent raters.

7. Where possible and appropriate, the procedure should involve requiring participants to articulate in their own words the intervention strategy that is being provided. Ideally this should be done at regular points throughout the intervention.

8. The verbal material produced under point 7 should be checked by independent raters to determine that participant “understanding” did not differ significantly across conditions, and to ensure that the manipulation successfully altered the intended behavioral process.
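The independent-rater checks in points 6 and 8 are commonly summarized with a chance-corrected agreement statistic such as Cohen's kappa. A minimal sketch, with hypothetical rating codes ("on"/"off" strategy) chosen purely for illustration:

```python
# Illustrative sketch: Cohen's kappa for two independent raters coding the
# same participant summaries. Codes and data are assumptions for demonstration.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters over the same items."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed proportion of items on which the raters agree
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement by chance, from each rater's marginal frequencies
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    categories = set(freq_a) | set(freq_b)
    expected = sum((freq_a[c] / n) * (freq_b[c] / n) for c in categories)
    return (observed - expected) / (1 - expected)

a = ["on", "on", "off", "on", "off", "on", "on", "off"]
b = ["on", "on", "off", "off", "off", "on", "on", "off"]
kappa = cohens_kappa(a, b)  # raters agree on 7 of 8 items
```

Reporting kappa (rather than raw percent agreement) guards against inflated agreement when one code dominates the ratings.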

9. Participants should be reminded briefly of the relevant intervention strategy before the presentation of each physical or psychological challenge (e.g., CO2 inhalation, electric shock delivery, emotionally aversive pictures or video clips, spider BAT, etc).

10. Ideally, the entire procedure, including pre-intervention baseline, intervention, and post-intervention tasks, should be automated. For example, the intervention could be presented via audio or video clips, and these can then be checked by independent raters. Moreover, others can then take your automated procedure and attempt to replicate it in a different lab. If automation is not possible, then every session should be videotaped to check for fidelity. If only some sessions are videotaped, then the experimenter should not know which ones are being taped.

11. All participants should be asked to summarize at the very end of the experiment the strategy they employed during the study so that these can be checked by independent raters.

12. Other questions of relevance should also be asked that might alter the interpretation of results. For example, the participant might be asked to rate the likability or believability of the experimenter (including any video- or audio-based material), expectations for performance on the task, relevance of the intervention to "real life", etc.

13. Ideally, some form of standardized self-report or other instrument should be developed to measure the extent to which participants understand and apply specific strategies.

14. For ACT / RFT studies the design of the protocols should be tied clearly to RFT concepts. Studies should not just grab a metaphor or exercise without working through how the metaphor/exercise is predicted, theoretically, to influence the participants’ responses in your study.

15. If the study is a group design, it should be adequately powered to test the key hypotheses, especially if null results are to be meaningful. For example, if an interaction is possible, each cell must have a large enough N to test that interaction with adequate power (say, .80, assuming a sensible effect size).

16. If mediational analyses are important, the study must be powered to test these analyses.

17. Especially if null results are predicted, make sure the actual measurement characteristics, outliers, and similar issues do not undermine the calculated power.
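The power considerations in points 15 through 17 can be sketched with a standard normal-approximation sample-size estimate for a two-group comparison. The effect size and targets below are illustrative assumptions, not values from the original text:

```python
# Hedged sketch: normal-approximation N-per-cell estimate for a two-sided,
# two-sample comparison. Effect size (Cohen's d) is an illustrative choice.
import math
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate N per cell for a two-sided two-sample comparison."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value for the two-sided test
    z_power = z.inv_cdf(power)          # quantile for the desired power
    n = 2 * ((z_alpha + z_power) / effect_size) ** 2
    return math.ceil(n)

# A medium effect (d = 0.5) at power .80 and alpha .05 needs roughly 63
# participants per cell under this approximation; a 2x2 interaction means
# four such cells, which is why underpowered interaction tests are common.
n = n_per_group(0.5)
```

Using the t distribution instead of the normal approximation (as dedicated power software does) gives a slightly larger N, so treat this as a lower-bound sanity check rather than a final sample-size decision.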

18. Meta-analyses of ACT micro-component studies show that in general, rationale-alone interventions are weak (and without the controls specified above they are often difficult to interpret because it is not known what participants actually did in response to the rationale). If the purpose is to examine ACT components, consider including more active and experiential elements.

19. If testing multiple ACT components, consider how to assess for changes in multiple ACT processes and whether comparison conditions should tease apart the impact of individual components.
