Thursday, November 15, 2018

Validity of Intervention Study Design


The purpose of an intervention study is to investigate a cause-and-effect relationship (or lack thereof) between an intervention or treatment (independent variable) and an observed response (dependent variable).  The validity of an intervention study depends on the degree to which researchers can control and manipulate these variables as well as control for any confounding variables.

In practice, researchers can rarely (if ever) completely control for confounding variables, especially in human studies.  A multitude of studies have been conducted that provide important data for drawing conclusions about the relationship between independent and dependent variables.  However, every study has limitations because no study can be perfectly designed.

Manipulation of variables is the intentional control of variables by a researcher.  For example, a researcher may assign some study participants to receive an experimental intervention and some participants to receive a comparison intervention.  In such an example, the researcher is controlling the intervention (independent variable) and measuring the effect of the experimental intervention (dependent variable).  The manipulation of independent and dependent variables may seem relatively simple.  But, in actuality, manipulation of variables (independent, dependent, confounding) can be challenging.  At this point, we will begin discussing methods for manipulating variables, procedures for appropriate data analyses, and ways to improve the validity of a study design.

In another post, I have discussed the importance of random sampling.  In a prospective intervention study where two or more groups are being compared, study participants who are sampled from the target population should be allocated to a group using random assignment to improve study validity.  Random assignment increases study validity by providing confidence that no bias exists in regard to differences between study participants (inter-subject variability) that may impact the measured variable (dependent variable).  In theory, random assignment should result in a balance of inter-subject variability between groups, thus minimizing the influence of inter-subject variability on the dependent variable.

For random assignment to improve study validity, participant characteristics should be equivalent between groups.  Consider a study where patients are randomly sampled from a target population and then randomly assigned to one of two groups.  By chance, some participants may have higher scores for a dependent variable while others have lower scores.  Random assignment is likely to result in a balance of high and low scores between the groups.  However, this balance does not always occur.  I will discuss some ways to address this problem later in this post.
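To illustrate the idea of checking balance, below is a minimal Python sketch (not from any particular study) that compares hypothetical baseline scores for two randomly assigned groups using the standardized mean difference, one common yardstick for balance; the values, group sizes, and random seed are all invented for illustration.

# Check baseline balance between two randomly assigned groups (hypothetical data).
import numpy as np

rng = np.random.default_rng(42)
group_a = rng.normal(loc=50, scale=10, size=30)  # baseline scores, group A
group_b = rng.normal(loc=50, scale=10, size=30)  # baseline scores, group B

# Standardized mean difference (SMD): values near 0 suggest good balance.
pooled_sd = np.sqrt((group_a.var(ddof=1) + group_b.var(ddof=1)) / 2)
smd = (group_a.mean() - group_b.mean()) / pooled_sd
print(f"Baseline standardized mean difference: {smd:.2f}")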

So, how can individuals be randomly assigned to groups?  In my opinion, use of a computer software program may be the most effective and efficient method for performing random assignment.  Various software programs are available for purchase or can be downloaded at no financial cost.  Microsoft Excel is a software program that can also be used for random assignment.
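As a concrete example, here is a minimal Python sketch of simple random assignment; the participant IDs, group labels, and fixed seed are illustrative choices on my part, not part of any particular protocol.

# Simple random assignment of hypothetical participants to two groups.
import random

participants = [f"P{i:02d}" for i in range(1, 21)]  # 20 hypothetical participant IDs
random.seed(2018)             # fix the seed so the allocation can be reproduced
random.shuffle(participants)  # put the participants in random order

half = len(participants) // 2
assignment = {
    "experimental": participants[:half],
    "control": participants[half:],
}
print(assignment)

In Microsoft Excel, a similar allocation can be produced by placing a column of RAND() values next to the participant list, sorting on that column, and splitting the sorted list into groups.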

Possibly the most effective method for controlling the influence of confounding variables on a dependent variable is the use of a control group.  The change in the dependent variable in the experimental group can be compared to the change in the control group.  If there are no significant differences between the groups before the intervention (baseline), then any difference in the change of the dependent variable between the groups can be inferred as a treatment effect.  A control group that receives no treatment may be the optimal means of measuring the effect in the experimental group.  However, for various reasons (lack of feasibility, the ethics of withholding treatment, etc.), a comparison group is often used instead of a control group.  A comparison group may receive a "standard" treatment to determine whether the experimental treatment is better than standard care.  A comparison group may also be used when researchers would like to know which of two treatments is superior.
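As a simple sketch of the change-from-baseline comparison described above, the following Python example compares the change in an experimental group to the change in a control group with an independent-samples t-test; the scores are invented for illustration.

# Compare change from baseline between experimental and control groups (hypothetical data).
import numpy as np
from scipy import stats

exp_baseline = np.array([52, 48, 55, 50, 47, 53])
exp_followup = np.array([60, 55, 63, 58, 54, 61])
ctl_baseline = np.array([51, 49, 54, 50, 48, 52])
ctl_followup = np.array([53, 50, 55, 51, 49, 54])

exp_change = exp_followup - exp_baseline
ctl_change = ctl_followup - ctl_baseline

t_stat, p_value = stats.ttest_ind(exp_change, ctl_change)
print(f"Mean change (experimental): {exp_change.mean():.1f}")
print(f"Mean change (control):      {ctl_change.mean():.1f}")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")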

Certainly, creating and following a research study protocol are very important to the validity of an intervention study (or any other study).  A research study protocol also provides the opportunity for clinicians and practitioners to replicate the study methodology in another environment (for example, a patient care setting).

Although it is not possible to have absolute control of all variables and ensure that every study participant has the same experience, a reasonable degree of control is often possible.  Research study protocols are frequently very detailed and exhaustive.  Click on the following link for more information about the process of writing a research study protocol.

Another issue related to the validity of an intervention study is appropriate data analysis when some data are incomplete.  Incomplete data can be due to events such as participants withdrawing from a study or not adhering to the study protocol.  Incomplete data can compromise the beneficial effect of random assignment and decrease study statistical power (I discuss statistical power in another post).  One may think that it is logical to analyze data for only those study participants who completed the study according to protocol (referred to as an on-protocol, on-treatment, per-protocol, or completer analysis).  In general, an on-protocol analysis will bias the study results in favor of the treatment, resulting in an inflated treatment effect.  Consider a study in which some participants experienced adverse side effects that caused them to withdraw from the study.  If an on-protocol analysis is conducted, the estimated effect of the treatment will reflect only those who experienced benefits and not the adverse side effects.

A more conservative approach is the intention-to-treat (ITT) analysis.  With the ITT analysis, all data are analyzed according to the original random assignment.  The phrase "intention-to-treat" means that the data of study participants are analyzed based on the principle that the intention is to treat all participants.  One could also argue that this approach is more reflective of clinical practice, where some patients will not complete an intervention for various reasons (such as non-adherence).
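The toy Python example below contrasts the two approaches; the group labels, completion flags, and outcome scores are entirely made up, and a real trial would also need a strategy for missing outcomes, but it shows how restricting the analysis to completers can inflate the apparent treatment effect.

# Intention-to-treat (ITT) versus per-protocol analysis on made-up data.
import numpy as np

records = [
    # (assigned_group, completed_protocol, outcome)
    ("treatment", True,  12), ("treatment", True,  10),
    ("treatment", False,  2), ("treatment", False,  1),  # early withdrawals
    ("control",   True,   3), ("control",   True,   4),
    ("control",   True,   2), ("control",   False,  3),
]

def mean_outcome(group, per_protocol=False):
    # ITT keeps everyone as randomized; per-protocol keeps only completers.
    scores = [o for g, done, o in records
              if g == group and (done or not per_protocol)]
    return np.mean(scores)

print("ITT effect:         ",
      mean_outcome("treatment") - mean_outcome("control"))
print("Per-protocol effect:",
      mean_outcome("treatment", per_protocol=True)
      - mean_outcome("control", per_protocol=True))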

The concept and application of the ITT analysis go deeper than what I have described here.  The Annals of Internal Medicine provides investigators with additional information about the ITT analysis and suggestions for the analysis of missing data.

Blinding is a method of preventing any potential bias by investigators, study participants, or both.  Blinding can be important for intervention and non-intervention studies.  The British Medical Journal has published a very brief, but informative, article on the topic of blinding.

Earlier in this post, I discussed the issue of inter-subject variability and how methods such as random assignment can reduce its negative impact on intervention study design.  For intervention studies that include only one group of participants, participants may be used as their own control.  One-group intervention studies investigate the response (effect) of a treatment in a single group of individuals (also referred to as a repeated measures design).  The repeated measures design is efficient for controlling inter-subject differences because participants are matched with themselves.  In contrast, multiple-group studies compare responses between groups of different individuals, which can result in greater inter-subject variability.  Yet, issues related to the design of repeated measures studies exist and will be discussed in a future post.
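As a brief sketch of the one-group (repeated measures) idea, the Python example below uses a paired t-test on hypothetical pre- and post-intervention scores, so each participant is compared with himself or herself; the data and test choice are illustrative only.

# Repeated measures (one-group) comparison: each participant serves as his or her own control.
import numpy as np
from scipy import stats

pre  = np.array([20, 23, 19, 25, 22, 21])   # hypothetical pre-intervention scores
post = np.array([24, 26, 21, 29, 25, 24])   # hypothetical post-intervention scores

t_stat, p_value = stats.ttest_rel(pre, post)  # paired (related-samples) t-test
print(f"Mean within-subject change: {(post - pre).mean():.1f}")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")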

The analysis of covariance (ANCOVA) is a statistical method of controlling for confounding variables.  In short, ANCOVA allows the researcher to select potential confounding variables (covariates) and then statistically adjusts response scores to control for the selected covariates.  I will discuss the ANCOVA in more detail in a future post.
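For readers who want to see what this looks like in practice, here is a minimal ANCOVA sketch using the Python statsmodels library (my choice of software; the post does not name one), in which the outcome is adjusted for a baseline covariate while testing for a group effect; all data are invented.

# ANCOVA: test for a group effect on the outcome while adjusting for a baseline covariate.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

data = pd.DataFrame({
    "group":    ["exp"] * 6 + ["ctl"] * 6,
    "baseline": [50, 48, 55, 47, 52, 49, 51, 46, 54, 48, 53, 50],
    "outcome":  [60, 55, 66, 54, 62, 57, 53, 47, 57, 50, 55, 52],
})

model = smf.ols("outcome ~ C(group) + baseline", data=data).fit()
print(anova_lm(model, typ=2))  # ANOVA table with the baseline-adjusted group effect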

In theory, the most robust method of controlling for differences between study participants is careful planning and use of study inclusion and exclusion criteria.  The purpose is to choose study participants who are homogeneous in their characteristics.  If study participants are homogeneous, then confounding variables related to inter-subject variability do not exist.  A major disadvantage of this method is that the study findings apply only to individuals with the same characteristics as the study participants, which limits the application of the study results.
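As a toy illustration, the snippet below screens hypothetical candidates against made-up inclusion and exclusion criteria (an age range and the absence of a particular comorbidity); the fields and cutoffs are invented, not drawn from any real study.

# Screen hypothetical candidates against example inclusion/exclusion criteria.
candidates = [
    {"id": "P01", "age": 45, "has_comorbidity": False},
    {"id": "P02", "age": 72, "has_comorbidity": False},  # excluded: outside age range
    {"id": "P03", "age": 50, "has_comorbidity": True},   # excluded: comorbidity present
    {"id": "P04", "age": 38, "has_comorbidity": False},
]

def meets_criteria(person):
    # Inclusion: age 18-65; exclusion: presence of the comorbidity.
    return 18 <= person["age"] <= 65 and not person["has_comorbidity"]

eligible = [p["id"] for p in candidates if meets_criteria(p)]
print(eligible)  # ['P01', 'P04']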
