Perkins School for the Blind Transition Center

Finding and Evaluating Empirically-Based Interventions

Parents are often overwhelmed by a mountain of information regarding treatments for the various symptoms of autism spectrum disorders or their comorbid conditions. Some publications claim to reverse or even cure autism. Many of these publications and advertisements are well written, logical, grounded in theory, and intuitively appealing. In desperation, parents will try interventions that offer a glimmer of hope. At best, they have wasted their time, money, and effort. At worst, the interventions could be harmful and, in extreme cases, fatal. How can parents, and even treatment providers, determine whether an intervention is effective?

Empirically-based interventions are interventions grounded in research. Empirical simply means observable. The empirical practice movement that is currently shaping thinking in the social sciences and in education has its roots in medicine. The phrase “empirical practice” can be used interchangeably with “evidence-based practice.” A practitioner uses an intervention with a client based upon evidence that the intervention is effective for a particular problem facing that population. Physicians will either conduct research to determine if a particular drug regimen is effective in ameliorating certain symptoms or will read the professional literature to see if the drug has worked to treat the symptom presented by the patient.

Physicians who turn to the professional literature read peer reviewed journals. Peer reviewed journals are those that have an editorial board that delegates the reading of manuscripts to a panel of experts who review each manuscript without knowing who wrote it. The panel of experts reviews the methodology of the research, the analysis of the data, and the conclusions. Because the manuscript undergoes a “blind review” (i.e., without the reviewers knowing the identity of the author), the article is accepted on the merits of the research alone. The reviewers are not biased by the reputation (or lack thereof) of the author. In the social sciences and education there are hundreds, if not thousands, of peer reviewed journals available at the local library or online. In the field of autism, one example of a peer reviewed journal with a double blind review process (i.e., the reviewers and the authors do not learn each other’s identities) is the Journal of Autism and Developmental Disorders.

By reading the original research in a peer reviewed journal, parents and treatment providers go to the original source material. This is a crucial step. The popular press (e.g., newspapers and magazines) will pick up recently released research that is newsworthy. However, these publications attempt to translate the research into easily understandable language for a mass audience. Unfortunately, the interpretation of the results can sometimes be misleading or overly simplified. The classic example is a publication stating that the latest research “proves” an intervention is effective, or that a treatment is “clinically proven effective.” Research scientists generally do not speak or write in such sweeping terms. The language they use is more circumscribed and is couched in probabilistic terms. They write and speak of the odds, or the percentage of individuals who will experience reduced symptoms when using an intervention. Rarely will they claim that a treatment or intervention is 100% effective with 100% of the population.

Popular press publications also accept money from sponsors who pay for advertisements. This can bias which types of research findings get published. A finding that might upset a major sponsor can result in a manuscript never seeing the light of day. Peer reviewed journals do not accept advertisements. The manuscript must be rigorous and survive on its own merit. Under this approach, the research findings are subject to debate and public scrutiny. Other researchers in the field will attempt to replicate the findings, lending further support to the conclusion that a particular intervention technique is truly effective and worth trying with an individual on the spectrum.

Once a parent or treatment professional locates peer reviewed research on an intervention technique, he or she should attempt to find as many studies on the technique as possible. By reading the abstract of each article, the reader can glean a sense of how well the technique is withstanding repeated empirical testing. Researchers will generally come to a consensus as to how effective a technique is with a specific population.

The testing of interventions or drug regimens generally utilizes quantitative research techniques. Quantitative techniques test hypotheses by collecting data that are transformed into numbers. Researchers use statistical methods to analyze the data and look for relationships between variables. Quantitative research designs fall along a continuum of rigor. The least rigorous design is the case study, in which only one subject is observed and the results are based upon that single patient. Next along the continuum are the Single-System Designs, also known as Single-Subject Designs (SSDs). These designs are considered more rigorous than a case study because more data points are collected. The subject is observed until a steady baseline of the target behavior is achieved. Next, an intervention is introduced for a period of time and its effect upon the target behavior is recorded. The final phase in this design is the withdrawal of the intervention, during which the target behavior is again observed and data are collected. The data are plotted on a graph so that the impact of introducing and withdrawing the intervention can be seen visually. This technique offers little control for extraneous factors causing the change in the target behavior, so it is difficult to tell if the intervention technique really worked. It is also difficult to generalize the results, i.e., to say with any certainty that the technique works with other people.
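The logic of an A-B-A (withdrawal) single-subject design can be sketched with a few lines of code. The session counts below are made-up illustrative numbers, not data from any study: if the target behavior drops when the intervention is introduced and rebounds when it is withdrawn, the pattern suggests (but does not prove) that the intervention is responsible.

```python
# Hypothetical frequency counts of a target behavior per session
# in an A-B-A withdrawal design (all numbers are invented).
baseline = [9, 8, 9, 10, 9]      # Phase A: steady baseline
intervention = [6, 5, 4, 4, 3]   # Phase B: intervention in place
withdrawal = [7, 8, 8, 9, 9]     # Phase A: intervention withdrawn

def phase_mean(sessions):
    """Average occurrences of the target behavior per session."""
    return sum(sessions) / len(sessions)

means = {
    "baseline": phase_mean(baseline),         # 9.0
    "intervention": phase_mean(intervention), # 4.4
    "withdrawal": phase_mean(withdrawal),     # 8.2
}

# Behavior falls during the intervention and rebounds on withdrawal:
# the visual/numeric pattern a researcher would look for on the graph.
print(means)
```

In practice, researchers judge such data visually on a phase-by-phase graph rather than by comparing means alone, which is one reason SSDs remain less rigorous than group designs.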

Toward the more rigorous end of the continuum are a group of research designs known as “Quasi-Experimental Designs.” Quasi-Experimental Designs are more rigorous than SSDs and involve enough subjects and data to allow statistical analyses. The difference between Quasi-Experimental Designs and Experimental Designs lies in how subjects are assigned to the treatment groups. In a true experimental design, subjects are assigned to the treatment or control groups randomly, by chance (e.g., by a coin toss). Control or comparison groups do not receive the treatment or intervention being tested. In a quasi-experimental design, subjects are not assigned to the treatment and comparison groups by chance. There is a reason why people are assigned to the various groups, and that reason can introduce bias. This bias may obscure whether or not an intervention is effective. The gold standard for quantitative research studies is the “double blind research study,” in which both the subjects and the researchers are “blind” as to whether an individual is assigned to the intervention/treatment group or the control group. This type of design is thought to be the best at reducing bias in the results.
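The random assignment that distinguishes a true experiment from a quasi-experiment is simple to illustrate. The sketch below uses a hypothetical roster of twenty subjects; shuffling the list gives every subject an equal chance of landing in either group, so no systematic reason determines who receives the intervention.

```python
import random

# Hypothetical roster of subjects (S1..S20) -- purely illustrative.
subjects = [f"S{i}" for i in range(1, 21)]

random.seed(42)  # fixed seed only so this sketch is reproducible
random.shuffle(subjects)  # chance, not a human decision, orders the list

# First half receives the intervention; second half is the control group.
treatment, control = subjects[:10], subjects[10:]
print(len(treatment), "treatment;", len(control), "control")
```

In a quasi-experiment, by contrast, the split might follow pre-existing groupings (e.g., which clinic a family already attends), and whatever caused that grouping can bias the comparison.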

Being conscious of the fact that some research studies are more rigorous and better designed than others can help parents of children with autism decide whether they should attempt an intervention with a problematic behavior. The larger the number of subjects in the study, the more rigorous the design, and the more times other studies have found similar results regarding the effectiveness of an intervention technique, the more comfortable a parent or treatment provider can feel about trying a new intervention. Once you do find a study that appears to have been conducted in a rigorously scientific manner, also search for discussions or critiques of that article that may have been published in subsequent issues of the same journal. Beware of the publication that touts a cure based on case studies or small sample sizes. Be skeptical of the article that cites a great deal of theory but does not provide sufficient data to demonstrate the effectiveness of a technique. Likewise, be wary of the article that offers testimonials. Individuals offering testimonials may be invested (sometimes quite literally) in ensuring that the intervention technique appears effective, or may be emotionally invested in seeing their loved one improve.

Another category of research that a careful consumer of research findings will come across is the qualitative research study. Generally, these research designs are not used to test whether an intervention technique is effective. Qualitative techniques are used to gain a deeper understanding of a phenomenon. These techniques use small sample sizes and attempt to provide richer understandings of complex social phenomena (e.g., how parents cope with raising a child on the autism spectrum). They are not used to generalize findings to large groups of people. Parents should not use qualitative research findings to determine the effectiveness of an intervention technique.

A final source of information regarding the effectiveness of intervention techniques for individuals on the autism spectrum is the National Autism Center’s 2009 National Standards Report. This report divides interventions into three broad categories: Established, Emerging, and Unestablished Treatments. Of the eleven Established treatments, the overwhelming majority are behaviorally-based (i.e., based upon Applied Behavior Analysis). Another 22 interventions show some promise of effectiveness and are classified as Emerging. However, a number of more recent interventions (e.g., dietary changes) were deemed Unestablished. Parents and treatment providers need to be critical consumers of research findings before embarking upon an intervention targeting a problem behavior involving an individual on the autism spectrum.

Ernst O. VanBergeijk, PhD, MSW is the Associate Dean & Executive Director, and Paul K. Cavanagh, PhD, MSW, is the Director of Academics and Evaluation, at New York Institute of Technology Vocational Independence Program (VIP). VIP is a U.S. Department of Education approved Comprehensive Transition and Postsecondary (CTP) program.

References

National Autism Center. (2009). National standards report: The national standards project: Addressing the need for evidence-based practice guidelines for autism spectrum disorders. Randolph, MA: National Autism Center. Retrieved from www.nationalautismcenter.org
