Clinical trial design
In many situations, more than one efficacy endpoint is used to address the primary objective.
This creates a multiplicity issue, since multiple tests will be conducted. Decisions regarding how the statistical error rates (e.g., the overall Type I error) will be controlled must be made during trial design. Endpoints can be classified as objective or subjective. Objective endpoints are those that can be measured without prejudice or favor. Death is an objective endpoint in trials of stroke. Subjective endpoints are more susceptible to individual interpretation.
For example, neuropathy trials employ pain as a subjective endpoint. Other examples of subjective endpoints include depression, anxiety, or sleep quality.
Objective endpoints are generally preferred to subjective endpoints since they are less subject to bias. An intervention can have effects on several important endpoints. Composite endpoints combine a number of endpoints into a single measure. The advantages of composite endpoints are that they may result in a more complete characterization of intervention effects, as there may be interest in a variety of outcomes. Composite endpoints may also result in higher power, and thus smaller sample sizes, in event-driven trials, since more events will be observed (assuming that the effect size is unchanged).
Composite endpoints may also reduce the bias due to competing risks and informative censoring. This is because one event can censor other events and if data were only analyzed on a single component then informative censoring can occur. Composite endpoints may also help avoid the multiplicity issue of evaluating many endpoints individually. Composite endpoints have several limitations. Firstly, significance of the composite does not necessarily imply significance of the components nor does significance of the components necessarily imply significance of the composite.
For example one intervention could be better on one component but worse on another and thus result in a non-significant composite. Another concern with composite endpoints is that the interpretation can be challenging particularly when the relative importance of the components differs and the intervention effects on the components also differ.
For example, how do we interpret a study in which the overall event rate in one arm is lower but the types of events occurring in that arm are more serious? Higher event rates and larger effects for less important components could lead to a misinterpretation of intervention impact.
It is also possible that intervention effects for different components can go in different directions. Power can be reduced if there is little intervention effect on some of the components (i.e., the overall effect is diluted).
When designing trials with composite endpoints, it is advisable to consider including events that are more severe (e.g., death). It is also advisable to collect data on, and evaluate, each of the components as secondary analyses. This means that study participants should continue to be followed for other components after experiencing a component event. When utilizing a composite endpoint, there are several considerations, including: (i) whether the components are of similar importance, (ii) whether the components occur with similar frequency, and (iii) whether the treatment effect is similar across the components.
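The power advantage noted above can be made concrete with a small simulation. This is an illustrative sketch with made-up event rates: two independent component events, each reduced by the intervention, are tested singly and as an "either event" composite, and the composite test rejects the null far more often.

```python
# Illustrative sketch (assumed, made-up rates): power of a composite
# endpoint versus one of its components in a two-arm trial.
import random
from math import sqrt

def two_arm_reject(p_ctrl, p_trt, n, rng):
    """Simulate one trial and z-test the difference in event proportions."""
    x_c = sum(rng.random() < p_ctrl for _ in range(n))
    x_t = sum(rng.random() < p_trt for _ in range(n))
    pooled = (x_c + x_t) / (2 * n)
    se = sqrt(2 * pooled * (1 - pooled) / n)
    return se > 0 and abs(x_c / n - x_t / n) / se > 1.96

rng = random.Random(7)
n, n_sim = 400, 800
# Component event rates (control vs intervention), assumed independent.
pA_c, pA_t, pB_c, pB_t = 0.10, 0.06, 0.10, 0.06
comp_c = 1 - (1 - pA_c) * (1 - pB_c)   # composite ("either event") rate, control
comp_t = 1 - (1 - pA_t) * (1 - pB_t)   # composite rate, intervention

power_single = sum(two_arm_reject(pA_c, pA_t, n, rng) for _ in range(n_sim)) / n_sim
power_composite = sum(two_arm_reject(comp_c, comp_t, n, rng) for _ in range(n_sim)) / n_sim
print(power_single, power_composite)   # composite power is clearly higher
```

The composite gains power here only because both components move in the same direction; as the text cautions, a component with no treatment effect would dilute the composite instead.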
In the treatment of some diseases, it may take a very long time to observe the definitive endpoint (e.g., survival). A surrogate endpoint is a measure that is predictive of the clinical event but takes a shorter time to observe. The definitive endpoint often measures clinical benefit, whereas the surrogate endpoint tracks the progress or extent of disease.
Surrogate endpoints could also be used when the clinical endpoint is too expensive or difficult to measure, or not ethical to measure. Surrogate markers must be validated. Ideally, evaluation of the surrogate endpoint would result in the same conclusions as if the definitive endpoint had been used. The criteria for a surrogate marker are: (1) the marker is predictive of the clinical event, and (2) the intervention effect on the clinical outcome manifests itself entirely through its effect on the marker.
It is important to note that significant correlation does not necessarily imply that a marker will be an acceptable surrogate. Missing data is one of the biggest threats to the integrity of a clinical trial. Missing data can create biased estimates of treatment effects.
Thus it is important when designing a trial to consider methods that can prevent missing data. Researchers can help prevent missing data by keeping clinical trials as simple as possible. Similarly, it is important to consider likely adherence to protocol during design. Envision a trial comparing two treatments in which the trial participants in both groups do not adhere to the assigned intervention. Then, when evaluating the trial endpoints, the two interventions will appear to have similar effects regardless of any differences in the biological effects of the two interventions.
Note, however, that the fact that trial participants in neither intervention arm adhere to therapy may indicate that the two interventions do not differ with respect to the strategy of applying the intervention (i.e., as the intervention would be used in practice). Researchers need to be careful about influencing participant adherence, since the goal of the trial may be to evaluate the strategy of how the interventions will work in practice, which may not include incentives to motivate patients similar to those used in the trial.
Sample size is an important element of trial design because too large a sample size is wasteful of resources, while too small a sample size could result in inconclusive results. Calculation of the sample size requires a clearly defined objective. The analyses to address the objective must then be envisioned via a hypothesis to be tested or a quantity to be estimated. The sample size is then based on the planned analyses. A typical conceptual strategy based on hypothesis testing is as follows:
Formulate null and alternative hypotheses. Select the Type I error rate. Type I error is the probability of incorrectly rejecting the null hypothesis when the null hypothesis is true. In the example above, a Type I error often implies that you incorrectly conclude that an intervention is effective, since the alternative hypothesis is that the response rate in the intervention arm is greater than in the placebo arm. For example, when evaluating a new intervention, an investigator may consider using a smaller Type I error (e.g., 0.01).
Alternatively, a larger Type I error (e.g., 0.10) may be acceptable. Select the Type II error rate. Type II error is the probability of incorrectly failing to reject the null hypothesis when the null hypothesis should be rejected. The implication of a Type II error in the example above is that an effective intervention is not identified as effective.
Type II error and power are not generally regulated and thus investigators can evaluate the Type II error that is acceptable.
For example, when evaluating a new intervention for a serious disease that has no effective treatment, the investigator may opt for a lower Type II error (e.g., 0.10). Obtain estimates of quantities that may be needed (e.g., response rates or variability). This may require searching the literature for prior data or running pilot studies. Select the minimum sample size such that two conditions hold: (1) if the null hypothesis is true, then the probability of incorrectly rejecting it is no more than the selected Type I error rate; and (2) if the alternative hypothesis is true, then the probability of incorrectly failing to reject is no more than the selected Type II error, or equivalently, the probability of correctly rejecting the null hypothesis is at least the selected power.
Since assumptions are made when sizing the trial (e.g., about response rates and variability), interim analyses can be used to evaluate the accuracy of these assumptions and potentially make sample size adjustments should the assumptions not hold. Sample size calculations may also need to be adjusted for the possibility of a lack of adherence or participant drop-out. In general, the following increase the required sample size: a lower Type I error, a lower Type II error, larger variation, and the desire to detect a smaller effect size or to have greater precision.
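The hypothesis-testing strategy above can be sketched as the standard two-proportion sample size calculation (normal approximation with a pooled variance under the null). The response rates, alpha, and power below are illustrative assumptions, not values from the text.

```python
# Sketch: minimum per-arm sample size for comparing two proportions.
# Inputs (assumed for illustration): control rate p1, intervention rate p2,
# two-sided Type I error alpha, and power (1 - Type II error).
from statistics import NormalDist
from math import sqrt, ceil

def n_per_arm(p1, p2, alpha=0.05, power=0.80):
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # critical value for alpha
    z_b = NormalDist().inv_cdf(power)           # critical value for power
    p_bar = (p1 + p2) / 2                       # pooled rate under the null
    num = (z_a * sqrt(2 * p_bar * (1 - p_bar))
           + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(num / (p2 - p1) ** 2)

# Detecting a rise from a 30% to a 45% response rate at alpha = 0.05
# and 80% power requires about 163 participants per arm.
print(n_per_arm(0.30, 0.45))
```

Consistent with the text, tightening either error rate or shrinking the detectable difference drives the required size up.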
An alternative method for calculating the sample size is to identify a primary quantity to be estimated and then estimate it with acceptable precision. For example, the quantity to be estimated may be the between-group difference in the mean response. A sample size is then calculated to ensure that there is a high probability that this quantity is estimated with acceptable precision, as measured by, say, the width of the confidence interval for the between-group difference in means.
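The precision-based approach can be sketched the same way: choose the per-arm size so that the confidence interval for the difference in means has at most a target half-width. The standard deviation and half-width below are made-up inputs.

```python
# Sketch: per-arm sample size so the (large-sample) confidence interval
# for a between-group difference in means has at most a given half-width.
from statistics import NormalDist
from math import ceil

def n_per_arm_precision(sigma, half_width, conf=0.95):
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)
    # CI half-width = z * sqrt(2 * sigma^2 / n), solved for n.
    return ceil(2 * (z * sigma / half_width) ** 2)

# With an assumed common SD of 10 and a target half-width of 2 units,
# roughly 193 participants per arm are needed.
print(n_per_arm_precision(sigma=10, half_width=2))
```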
Interim analyses should be considered during trial design since they can affect the sample size and planning of the trial. When trials are very large or long in duration, when the interventions have associated serious safety concerns, or when the disease being studied is very serious, interim data monitoring should be considered. Typically, a group of independent experts (i.e., a Data and Safety Monitoring Board, DSMB) is convened to review the accumulating data.
The DSMB meets regularly to review data from the trial: to monitor participant safety and efficacy, to evaluate whether trial objectives can be met, to assess trial design assumptions, and to assess the overall risk-benefit of the intervention.
The project team typically remains blinded to these data if applicable. The DSMB then makes recommendations to the trial sponsor regarding whether the trial should continue as planned or whether modifications to the trial design are needed.
Careful planning of interim analyses is prudent in trial design. Care must be taken to avoid inflation of statistical error rates associated with multiple testing, to avoid other biases that can arise from examining data prior to trial completion, and to maintain the trial blind.
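The error-rate inflation mentioned above is easy to demonstrate by simulation. This sketch generates data under the null hypothesis (no effect), applies a naive fixed 1.96 boundary at every interim look, and the overall Type I error climbs well above the nominal 5%. The look counts and sizes are arbitrary choices for illustration.

```python
# Monte Carlo sketch: repeated interim looks with an unadjusted 5%
# boundary inflate the overall Type I error.
import random
from math import sqrt

def rejects_with_looks(n_looks, n_per_look, rng):
    total, count = 0.0, 0
    for _ in range(n_looks):
        for _ in range(n_per_look):
            total += rng.gauss(0.0, 1.0)   # null data: mean 0, sd 1
            count += 1
        z = total / sqrt(count)            # one-sample z statistic at this look
        if abs(z) > 1.96:                  # naive fixed 5% boundary
            return True                    # trial "stops for efficacy"
    return False

rng = random.Random(42)
n_sim = 4000
inflated = sum(rejects_with_looks(5, 40, rng) for _ in range(n_sim)) / n_sim
print(round(inflated, 3))  # noticeably above the nominal 0.05
```

Group sequential methods (e.g., alpha-spending boundaries) exist precisely to keep this overall rate at the nominal level.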
Many structural designs can be considered when planning a clinical trial. Common clinical trial designs include single-arm trials, placebo-controlled trials, crossover trials, factorial trials, noninferiority trials, and designs for validating a diagnostic device. The choice of the structural design depends on the specific research questions of interest, characteristics of the disease and therapy, the endpoints, the availability of a control group, and on the availability of funding.
Structural designs are discussed in an accompanying article in this special issue. This manuscript summarizes and discusses fundamental issues in clinical trial design. A clear understanding of the research question is a most important first step in designing a clinical trial.
Minimizing variation in trial design will help to elucidate treatment effects. Randomization helps to eliminate bias associated with treatment selection. Stratified randomization can be used to help ensure that treatment groups are balanced with respect to potentially confounding variables.
Blinding participants and trial investigators helps to prevent and reduce bias. Placebos are utilized so that blinding can be accomplished.
Control groups help to discriminate between intervention effects and natural history. The selection of a control group depends on the research question, ethical constraints, the feasibility of blinding, the availability of quality data, and the ability to recruit participants. The selection of entry criteria is guided by the desire to generalize the results, concerns for participant safety, and minimizing bias associated with confounding conditions.
Endpoints are selected to address the objectives of the trial and should be clinically relevant, interpretable, sensitive to the effects of an intervention, practical and affordable to obtain, and measured in an unbiased manner. Composite endpoints combine a number of component endpoints into a single measure. Surrogate endpoints are measures that are predictive of a clinical event but take a shorter time to observe than the clinical endpoint of interest. Interim analyses should be considered for larger trials of long duration or trials of serious disease or trials that evaluate potentially harmful interventions.
Sample size should be considered carefully so as not to be wasteful of resources and to ensure that a trial reaches conclusive results. There are many issues to consider during the design of a clinical trial, and researchers should understand these issues when designing trials. The author would like to thank Dr. Justin McArthur and Dr. The author thanks the students and faculty in the course for their helpful feedback.

Response adaptive — This design reduces patient recruitment to ineffective intervention arms.
It requires rapidly available, measurable responses, and it is infeasible for diseases and therapies with a prolonged time to outcome. This design can compromise allocation concealment and result in selection bias as the trial progresses. It can also be affected by changes in patient or treatment characteristics over time (temporal drift) that result from the inherently prolonged recruitment schedule.
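A classic response-adaptive scheme is the randomized play-the-winner urn. The sketch below (with made-up response rates) shows how allocation drifts toward the better-performing arm as responses accumulate, which is exactly why the design needs rapidly available outcomes.

```python
# Sketch of a randomized play-the-winner urn (illustrative rates).
import random

def play_the_winner(p_success, n_patients, rng):
    """Draw an arm in proportion to its balls in the urn; a success adds a
    ball for that arm, a failure adds a ball for the other arm.
    Returns per-arm allocation counts."""
    balls = [1, 1]                      # urn starts balanced
    assigned = [0, 0]
    for _ in range(n_patients):
        arm = 0 if rng.random() < balls[0] / sum(balls) else 1
        assigned[arm] += 1
        if rng.random() < p_success[arm]:
            balls[arm] += 1             # reward the responding arm
        else:
            balls[1 - arm] += 1         # shift weight to the other arm
    return assigned

rng = random.Random(1)
# Hypothetical response rates: arm 1 (60%) is clearly better than arm 0 (20%).
alloc = play_the_winner([0.2, 0.6], 300, rng)
print(alloc)  # allocation typically drifts toward the better arm
```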
Ranking and selection — The first phase of this adaptive design has subjects randomized to many interventions and placebo. The best therapy from Phase I is then compared with placebo, in a randomized parallel or adaptive design, in Phase II. The final comparison is between all subjects receiving the selected intervention and all subjects receiving placebo across both phases combined.
It is best suited for comparing multiple interventions when sample sizes are small. However, there is a chance that an incorrect selection of the most efficacious therapy in Phase I will vitiate the trial results.
Sequential adaptive design — This design allows repeated interim analyses and stoppage once an endpoint of efficacy, safety, or futility is achieved. In contrast to traditional trials, the final number of participants needed for a sequential trial is unknown at initiation. The trial ends at the first interim analysis that meets pre-set stopping criteria, thereby potentially limiting the number of subjects exposed to an inferior, unsafe, or futile treatment, or to one that is already proven efficacious.
Analysis can be performed after each patient (continuous sequential) or after a fixed or variable number of patients (group sequential). This design is effective only when study enrolment is expected to be prolonged and treatment outcomes occur relatively soon after recruitment, so that outcomes can be measured before the next patient or group of patients is likely to be recruited.
Challenges include the complexity of analyzing multiple treatments, power calculation complexities, and the appropriate selection of the timing and number of interim analyses.

Seamless Phase II/III design — In the initial learning stage, subjects are randomized to several active arms and a control arm. An interim analysis is then performed to determine which active arms should be dropped. In the confirmatory stage (the Phase III study), the treatment groups with the remaining effective active arm and the control arm are investigated.
In the inference seamless approach, subjects carry their treatment arm from the learning phase to the confirmatory phase, and the data from both phases are analyzed together. In the operational seamless variant, the data from the two phases are analyzed separately.

Internal pilot design — In a conventional pilot study, participants are often ineligible for analysis along with cases in future definitive studies, due to concerns about selection bias, carry-over, and training effects.
Where patients are few in number, as in the case of rare diseases, allocating them to a pilot study rather than the definitive study could be seen as wasteful. Advances in statistical software and computing power continue to allow increasingly complex study designs and analytical techniques, and researchers should take best advantage of these advances.
There is a concern regarding the acceptability of evidence generated by alternative trial designs to regulatory authorities and peer reviewers. It is imperative to understand that the same research question may be tackled through alternative designs, and that there is no single definitive trial design for every research question. The time frame, the logistics involved, and the availability of study subjects are key to selection.
Factors such as the objective of the trial, the number of patients needed, the length of the trial, and how variability is handled could be important in choosing the most suitable trial design. Readers are reminded that no trial design is perfect, and no design provides the optimum answer to all research questions. In this imperfect milieu, all the above-mentioned contingencies must guide researchers to select the most suitable design from among the options, and they must involve a biostatistician in initial trial design and post-trial analysis.
Indian Dermatol Online J. Brijesh Nair.
This is an open access journal, and articles are distributed under the terms of the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 License. The delivery of any intervention, whether a drug, a dietary change, a lifestyle change, or a psychological therapy session, counts as an intervention and hence must be studied as a clinical trial [Figure 1].
Figure 1.

Uncontrolled Trials
This design incorporates no control arm.

Figure 2.

Illustrative example: In immunotherapy for warts, it is imperative to avoid an uncontrolled study.

Control Arm Options in Controlled Trials
Controlled trials allow discrimination of the patient outcome from an outcome caused by other factors, such as natural history or observer or patient expectation. The controls which can be used are:

Placebo concurrent control — A placebo is an inert substance, or an intervention designed to simulate medical therapy, without specificity for the condition being treated.
The downsides are the potential for observer bias and the difficulty of blinding in this design.

Active treatment concurrent control — This design involves comparison of a new drug to a standard drug, or of a combination of new and standard therapies against standard therapy alone. The Declaration of Helsinki mandates the use of standard treatment as controls.

Dose-comparison concurrent control — Different doses or regimens of the same treatment are used as the active arm and the control arm in this design.
This design may be inefficient if the therapeutic range of the drug is not known.

Historical control (external and non-concurrent) — The source of controls is external to the present study: patients treated at an earlier time (an earlier therapeutic gold standard) or in a different setting.

Variants of Placebo-Controlled Trial Designs

Add-on design — This design denotes a placebo-controlled comparison on top of a standard treatment given to all patients.
If the improvement achievable in addition to that obtained from the standard treatment is small, the size of such a trial may need to be very large.

Early escape design — The early escape design using a placebo control allows a patient to be withdrawn from the study as soon as a predefined negative efficacy criterion has been attained.
If the drug has a slow, gradual effect with long-term use, that effect might be missed in this design.

Unbalanced assignment of patients to placebo and test treatment.

Figure 3.

Figure 4.

Randomized Clinical Trials (RCT)
In randomized controlled trials, trial participants are randomly assigned to either the treatment or the control arm.
Randomization Schemes in Randomized Controlled Trials to Eliminate Confounding Factors

Stratified randomization — This refers to the situation in which strata are constructed based on values of prognostic variables and a randomization scheme is implemented separately within each stratum.
After all the subjects have been identified and assigned to strata, simple randomization is performed within each stratum to assign subjects to either the treatment or the control group.

Block randomization — Blocking is the arranging of experimental units in groups (blocks) that are similar to one another.
Blocks are small and balanced, with predetermined group assignments, which keeps the numbers of subjects in each group similar at all times.

Randomization by body halves or paired organs (split-body trials) — This is a scenario, most often used in dermatology and ophthalmic practice, where one intervention is administered to one half of the body and the comparator intervention is assigned to the other half.
Paired-data statistical tests need to be used in this scenario.

Cluster randomization — Study patients and treating interventionists do not exist in isolation.
Each cluster forms a unit of the trial, and either the active or the comparator intervention is administered to each cluster.

Allocation by randomized consent (Zelen trials) — Eligible patients are allocated to one of the two trial arms prior to informed consent. However, this design raises serious ethical uncertainties; it should be used only in severely flagging trials (in terms of insufficient sample size) of great public health importance, and is not recommended in routine clinical trial design.

Minimization — Stratification based on multiple covariates (age, sex, baseline severity of disease, personal habits, co-morbidities, treatment naivety, etc.) quickly becomes impractical as the number of strata grows; minimization instead assigns each new patient so as to minimize the overall covariate imbalance between groups.
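The stratified and block randomization schemes described above can be combined into a single sketch: within each stratum, assignments are generated in shuffled permuted blocks so that arm sizes stay balanced within every stratum at all times. The stratum labels and block size below are illustrative assumptions.

```python
# Sketch of stratified, permuted-block randomization (two arms,
# blocks of 4 with 2 assignments per arm; labels are made up).
import random

def blocked_list(n_blocks, block=("A", "A", "B", "B"), rng=None):
    rng = rng or random.Random()
    out = []
    for _ in range(n_blocks):
        b = list(block)
        rng.shuffle(b)        # random order of assignments within the block
        out.extend(b)
    return out

rng = random.Random(0)
strata = ["age<65/mild", "age<65/severe", "age>=65/mild", "age>=65/severe"]
schedule = {s: blocked_list(5, rng=rng) for s in strata}  # 20 slots per stratum
for s in strata:
    print(s, "".join(schedule[s]))
```

By construction, after every fourth assignment each stratum has exactly equal numbers on the two arms, which is the balance property the text describes.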
RCT Designs

a. Parallel group trial design
Parallel arm design is the most commonly used study design.

Figure 5.

Cross over design
Another advantage is the requirement of a smaller sample size [Figure 6].

Figure 6.

Table 1: Points to be factored in during crossover design
- Effects of the intervention during the first period should not carry over into the second period.
- If carry-over effects are suspected, more complex sequences are needed, which increase study duration and thus the chance of dropouts.
- The treatment effect should be relatively rapid in onset, with rapid reversibility of effect.
- The disease has to be chronic, stable, and non-self-resolving. This design is usually avoided in vaccine trials because the immune system is permanently affected.
- The internal and external trial environment must remain constant over time.
- An accepted convention for the washout period is five half-lives of the drug involved.
- Each treatment period must give adequate time for the intervention to act meaningfully.
- Trial power is sensitive to dropouts because of the longer anticipated duration of the trial.
- Identification of the culprit drug for delayed adverse events during later periods of the study becomes difficult.
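The five-half-lives washout convention in the list above amounts to a one-line calculation: after five half-lives, roughly 3% of the drug remains. The 2-day half-life below is a made-up example.

```python
# Sketch of the washout convention: duration and residual drug fraction.
def washout(half_life_days, n_half_lives=5):
    """Return (washout duration in days, fraction of drug remaining)."""
    return half_life_days * n_half_lives, 0.5 ** n_half_lives

days, remaining = washout(2.0)
print(days, remaining)  # 10.0 0.03125
```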
Figure 7.

Randomized withdrawal design [enrichment enrolment randomized withdrawal (EERW)]
In this design, after an initial open-label period (the enrichment period) during which all subjects are assigned to receive the intervention, the non-responders are dropped from the trial and the responders (the enriched population) are randomized to receive intervention or placebo in the second phase of the trial.

Figure 8.

Newer Study Designs

a. Adaptive randomization methods (play the winner, drop the loser designs)
This paradigm is useful only for studies with binary outcomes and is most useful when the anticipated effect size being evaluated is large.
Table 3: Rarer designs used in clinical trials

Matched pairs design — Patients with similar characteristics who are expected to respond similarly are grouped into matched pairs, and then the members of each pair are randomized to receive either drug or control. The confounding factors are eliminated and intra-group variability is reduced.

Delayed start design (DS) — This design has an initial placebo-controlled phase (patients randomized to treatment or placebo) followed by an active control phase (all patients receive treatment).
Those in the initial placebo group have a delayed start on the active drug. Disease progression as well as disease relapses can be studied, and the design evaluates the effect of the treatment on the symptoms and on the evolution of the disease. There must be a sufficient number of follow-up visits to measure the treatment effect. The upside of the design is that every subject is exposed to active therapy. The downsides are evaluation bias and carry-over effect.
Randomized placebo-phase design (RPPD) — Subjects are first randomly allocated to either an experimental or a control group. However, after a short, fixed time period (the placebo phase), all patients in the control group are switched to the experimental treatment. All patients receive the tested treatment in the end, but have varying lengths of time on placebo. This design assumes that if the treatment is effective, then those administered the drug earlier will respond sooner. At the end of the trial, average time-to-response is compared between the two groups, most often using survival analysis methods. The upside is that all patients receive active drug by the end of the trial. There is a need to establish an effective placebo-phase duration, i.e., one long enough for a treatment response to emerge. A longer placebo-phase duration will decrease the required sample size but increase the chance of dropouts.
Stepped wedge design (SWS) — An intervention allocated sequentially to participants, either as individuals or as clusters of individuals, is called a stepped wedge design. In the first step, all patients are initiated on the control intervention; subsequently, over 4 time periods, individuals or clusters of individuals are randomized to the treatment arm, with all patients receiving treatment by the end of the study period. Its utility rests in testing interventions that are anticipated to do more good than harm.
It can be used when therapy cannot be initiated for all subjects simultaneously.

Three-stage design (3S) — Constitutes an initial randomized placebo-controlled phase, a randomized withdrawal stage for responders from Stage 1, and a third phase in which all placebo non-responders from Stage 1 are first prescribed active treatment and then the responders from Stage 3 are randomly assigned to placebo or treatment. It is an extension of the randomized withdrawal design.
This design suits only chronic conditions in which both response to therapy and withdrawal of therapy can be assessed. The withdrawal phase has to be sufficiently long that the drug can be completely washed out and the clinical effects of therapy reversed. It requires a smaller sample size than a parallel arm design. It helps to evaluate the efficacy of a therapeutic agent in a particular patient subpopulation when efficacy in the general patient population has already been established.
Internal pilot design — In a conventional pilot study, participants are often ineligible for analysis along with cases in future definitive studies, due to concerns about selection bias, carry-over, and training effects.

Table 2: Factors to consider in design selection
- Number and characteristics of treatments to be compared
- Characteristics of the disease under study
- Study objectives
- Timeframe
- Treatment course and duration
- Carry-over effects
- Duration of the study, which is linked with drop-out rates
- Cost and logistics
- Patient convenience
- Ethical considerations
- Statistical considerations
- Study subject availability (disease rarity)
- Inter- and intra-subject variability
Table 4: Algorithm for choice of study design.

Financial support and sponsorship: Nil.
Conflicts of interest
There are no conflicts of interest.

Suggested Reading
1. Spilker B. Guide to Clinical Trials.