382 Cards in this Set
definition of clinical research
|
patient oriented
|
|
5 things studied in clinical research
|
drugs
surgery
psychological interventions
service delivery (ex. do we give people a 30 day supply or a 100 day supply?)
other interventions |
|
who conducts clinical research
|
institutions
groups of institutions
pharmaceutical companies |
|
2 examples of institutions
|
hospitals
universities |
|
research vs. practice
|
research - directed to populations
practice - directed to individuals |
|
evidence based medicine
|
use of best evidence in making decisions about the care of individual patients
|
|
5 As for providing EBM
|
(evidence based medicine)
1. Ask: develop clinical Q
2. Acquire: find best evidence
3. Appraise: evaluate the evidence for validity and usefulness
4. Apply: use the results
5. Assess: evaluate performance |
|
3 approaches to clinical research
|
1. quantitative
2. qualitative
3. mixed |
|
qualitative research
|
answers how or why?
|
|
experimental vs. observational
|
experimental = researcher controls interventions AKA exposures
|
|
non randomized controlled trial
|
experimental study (researcher assigned exposures)
no random allocation |
|
descriptive study
|
observational (researcher did not assign exposures)
no comparison group |
|
experimental study
|
researcher assigned exposures/interventions
|
|
observational study
|
researcher does not assign exposures/interventions
|
|
analytical study
|
observational study
with comparison groups
divided into cohort, case control, cross-sectional |
|
analytical and descriptive studies are types of
|
observational study
|
|
3 types of analytical study
|
cohort
case control
cross sectional |
|
example of descriptive study
|
patient registries
for rare diseases/conditions (ex. transplant registry)
describe treatments and outcomes for the patients |
|
cohort study
|
collect the people at the time of intervention/exposure
and then follow them and see if they have outcome |
|
case control study
|
collect people with a certain outcome (cases) or without it (controls)
then look back to see whether they had a certain exposure |
|
example of case control
|
look at people who had heart attack
assemble a similar group with no heart attack
look back and see whether they had taken Ca in the last 10 yrs |
|
what is a case, what is a control
|
case = people with the outcome
control = similar people without the outcome |
|
3 types of prospective studies
|
experimental: RCT
analytical: cohort with comparison group
descriptive: cohort, no comparison group |
|
3 types of retrospective studies
|
analytical: cohort with comparison
descriptive: cohort, no comparison
case control |
|
cross sectional study
|
grab a bunch of data about people and analyze it
|
|
example of prospective cohort study
|
framingham heart study
researchers assembled cohort and looked at heart disease outcome |
|
retrospective cohort
|
look back in time to find a group of people with an exposure
then see if they got the outcome |
|
example of retrospective cohort
|
chart review
look at patient charts
find the exposure, then find the outcome |
|
type of study that can be prospective or retrospective
|
cohort
|
|
type of study that is only retrospective
|
case control
once you find the outcome, the only way to look to find exposure is back |
|
type of study that is neither prospective nor retrospective
|
cross sectional
|
|
5 factors that determine what type of study we use
|
state of current knowledge
availability of comparison group
ethics or feasibility of randomization
frequency of event
research question |
|
why randomization might not be feasible
|
people might not enroll in a study if they may get placebo
|
|
role of frequency of event in determining which study to use
|
may need a very large n to find the event
it is unfeasible to assemble a group of 500 000 people and follow them forward |
|
Research question: Does taking this drug lead to increased risk of birth defects?
Can we answer this question with experimental design? |
no because it is unethical to risk birth defects
|
|
role of state of knowledge in type of studies we do
|
1. know problem exists but know little about its characteristics or possible causes = descriptive studies, cross sectional surveys
2. suspect that certain factors contribute to the problem = cross sectional, case control, cohort
3. know that a factor contributes to the problem and want to know the extent of the contribution = experimental
4. have developed an intervention and need to assess the efficacy = experimental |
|
TCPS2
|
Tri-Council Policy Statement 2
|
|
TCPS2 definition of clinical trial
|
form of clinical research that evaluates the effects of one or more health-related interventions on health outcomes
|
|
phase 1 AKA
|
first in man
|
|
phase 2 AKA
|
proof of principle
|
|
what is the goal of phase 1
|
safety
dose |
|
what is the goal of Phase 2
|
first in patient
preliminary safety and efficacy |
|
phase 3a
|
safety and efficacy
|
|
what phase 2 does that phase 1 doesnt
|
patients instead of healthy volunteers
|
|
what phase 3 does that phase 2 doesn't (2)
|
comparative
large scale |
|
phase 3b
|
new indications
|
|
phase 4 AKA (2)
|
surveillance
post marketing |
|
timing of phase 1
|
few weeks to months
|
|
population of phase 2 (size)
|
100-200
|
|
length of phase 2
|
1-2yrs
|
|
length of phase 3
|
2-3 yrs
|
|
hierarchy of evidence
|
meta-analysis
RCT
cohort
case-control
case-series
case report
expert opinion |
|
hawthorne effect
|
participant attitude affects outcomes
|
|
3 factors that make a trial a "good trial"
|
high internal validity
precision
high external validity |
|
internal validity
|
low systematic error (bias and confounding)
|
|
types of error that are found in a trial with low internal validity
|
bias
confounding |
|
precision
|
low random (chance) error
extent to which results are free from sources of variation that are equally likely to distort results in either direction |
|
confounding
|
type of bias
|
|
generalizability
|
extent to which results can be applied to other individuals or other settings
|
|
why randomize
|
balance known and unknown confounder
|
|
how to assess the impact of the intervention in an RCT
|
do participants start the same?
are they the same at the end? |
|
what does the C mean in RCT
|
one of the treatments is considered a standard for comparison
|
|
only study design that proves causation
|
RCT
|
|
assumption of RCTs
|
assume that participants start the same and similarity is maintained at the end by similar drop out patterns
|
|
why observational studies are bad
|
variables that drive sorting (ex. income) may also affect outcome
|
|
what variables drive sorting in observational studies
|
sorting = intervention or control
patient factors: health, income, education, diet
provider factors: staff, costs, congestion, attitudes, skills, lead |
|
features of RCTs
|
ethics
randomization
concealment of allocation
blinding
complete follow up |
|
type of bias reduced by randomization
|
selection bias
|
|
what is important about randomization method
|
should always be reported in a published study
|
|
selection bias
|
difference in known and unknown variables at the start that may affect outcome patterns
difference in the way people are accepted or rejected for a trial, or way in which interventions are assigned to individuals |
|
allocation concealment
|
researcher who is recruiting the patients does not know which group the next subject will be assigned to
this researcher is the one who assesses whether patients are suitable for the study or not |
|
why concealment of allocation
|
researcher may not enrol a subject if they know that he or she will be the one to get the placebo
or they may act differently when asking the patient to participate |
|
concealment of allocation in papers
|
not usually explicitly discussed
|
|
randomization in papers
|
method should be reported
|
|
SNOSE
|
sequentially numbered opaque sealed envelopes
contains the patients treatment assignment |
|
blinding
|
disguise treatment so that individuals do not know what is being given
|
|
blinding AKA
|
masking
|
|
why blinding labels are ambiguous
|
single, double, etc. doesn't indicate who was blinded
|
|
blinding in papers
|
often ambiguously labelled as single, double, etc.
look for descriptions about who was blinded |
|
why blind
|
1. avoid unequal outcome assessment
2. avoid unequal co-intervention |
|
unequal cointervention
|
physicians may give patients more treatment if they know they are receiving a placebo
|
|
measurement bias
|
unequal assessment of outcomes (often due to lack of blinding - ie. the outcome assessor knows that a patient is on placebo)
|
|
unequal assessment of outcomes (often due to lack of blinding - ie. the outcome assessor knows that a patient is on placebo)
|
measurement bias
|
|
double dummy
|
when 2 forms of drug are administered in a study and there are placebos used for each, the study is referred to as ______
|
|
when 2 forms of drug are administered in a study and there are placebos used for each, the study is referred to as ______
|
double dummy
|
|
allocation concealment vs. blinding
|
allocation concealment means you don't know which group the next patient will be assigned to
can always conceal allocation; can't always blind
allocation concealment = prevention of selection bias; blinding = prevention of measurement bias and unequal treatment of groups
allocation concealment = before randomization; blinding = after randomization |
|
pseudo-random or quasi random allocation methods
|
date of birth of participant
odd or even date of invitation to participate in the study |
|
why pseudo random allocation is not good enough
|
knowing which group a patient will be allocated to can affect the decision to include them in the trial or not
|
|
what is it called when the characteristics of the participants are similar across groups at the start of the study
|
balanced at baseline
|
|
how balanced groups can be achieved without randomization
|
think of a list of factors you want to be similar
put patients in groups such that the 2 groups are balanced wrt these factors
ex. I want an equal number of women in both groups |
|
important methodological rule regarding randomization
|
procedure must be decided prior to beginning
once the study commences, the procedure cannot be modified |
|
purpose of block/restricted/stratified randomization
|
without restriction, the number and characteristics of each study group may differ at any given point in the study
block randomization keeps the number of participants in all the study groups as close as possible |
|
how block/restricted randomization works
|
study has 3 groups ABC
create 6 blocks: ABC, ACB, BAC, BCA, CAB, CBA
determine how the 6 numbers of a die will correspond to each block
allocate patients 3 at a time in a sequence
if you roll 3, the first patient is allocated to group B, the second to group A, the third to group C |
|
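The block procedure above can be sketched in a few lines of code. This is a minimal illustration, assuming a shuffled block stands in for the "dice roll" over the 6 orderings of ABC; `block_randomize` is a hypothetical helper name, not from the lecture.

```python
import random

def block_randomize(n_patients, groups=("A", "B", "C"), seed=None):
    """Allocate patients in shuffled blocks so group sizes stay balanced.

    Each block is one random permutation of the groups (equivalent to
    rolling a die to pick one of the 6 orderings of ABC), so after every
    complete block the group counts are exactly equal.
    """
    rng = random.Random(seed)
    allocation = []
    while len(allocation) < n_patients:
        block = list(groups)
        rng.shuffle(block)  # pick one of the 6 possible orderings at random
        allocation.extend(block)
    return allocation[:n_patients]

# After any multiple of 3 patients, groups A, B, C are exactly balanced.
alloc = block_randomize(12, seed=1)
print({g: alloc.count(g) for g in "ABC"})
```

Note the property the card describes: the allocation is still random within each block, but the group sizes can never drift apart by more than a partial block.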
how stratified randomization works
|
investigators identify factors (strata) that are related to the outcome of the study
create block randomization for each factor |
|
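Stratified randomization as described above can likewise be sketched: run an independent block randomization inside each stratum. A minimal sketch, assuming patients are dicts and `sex` is the prognostic factor; all names here are invented for illustration.

```python
import random

def stratified_randomize(patients, stratum_key, groups=("A", "B"), seed=None):
    """Run a separate block randomization inside each stratum.

    `patients` is a list of dicts; `stratum_key` names the prognostic
    factor (e.g. sex) whose values define the strata.
    """
    rng = random.Random(seed)
    pending = {}     # per-stratum remainder of the current block
    assignment = {}
    for p in patients:
        stratum = p[stratum_key]
        if not pending.get(stratum):
            block = list(groups)
            rng.shuffle(block)       # fresh shuffled block for this stratum
            pending[stratum] = block
        assignment[p["id"]] = pending[stratum].pop()
    return assignment

patients = [{"id": i, "sex": "F" if i % 2 else "M"} for i in range(8)]
assignment = stratified_randomize(patients, "sex", seed=0)
# Within each sex, exactly 2 patients land in group A and 2 in group B.
```

Because each stratum gets its own blocks, the treatment groups end up balanced with respect to the stratifying factor, which is the whole point of the design.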
weighted or unequal randomization
|
investigators may not want same number of participants in each group
|
|
when weighted or unequal randomization might be used
|
concerns with adverse events in the active treatment
|
|
cluster randomization
|
randomizing people in groups
ex. hospitals, families, geographical areas
ex. effect of showing prison inmates a video on smoking cessation
used when contamination occurs |
|
the way in which participants in one group are treated or assessed is likely to modify the treatment or assessment of participants in other groups
|
contamination
|
|
contamination
|
the way in which participants in one group are treated or assessed is likely to modify the treatment or assessment of participants in other groups
|
|
example of contamination
|
RCT in which patients are given a booklet with strategies to increase patient participation in treatment decisions vs. conventional practice
contamination occurs when physicians start using the strategies described in the booklet to treat the control group patients |
|
|
another fun type of randomization
|
randomize the order in which events are assessed
ex. if evaluating effects of an analgesic, investigators could randomize the order in which analgesia, adverse effects, and quality of life are assessed |
|
bias definition
|
any factor or process that tends to deviate the results or conclusions of a trial systematically away from the truth
|
|
when can bias occur
|
planning
participant selection
administration of interventions
measurement of outcomes
analysis of data
interpretation and reporting of results
publication
reading |
|
why randomization doesn't perfectly eliminate selection bias (difference in way participants are accepted or rejected from a trial, or which group they are assigned to)
|
investigator can circumvent the randomization (ex. can make it easier for depressed patients to be excluded, or make them more likely to be allocated to the placebo group)
this is why concealed allocation is necessary |
|
what can ALWAYS be implemented as part of a study design
|
concealed allocation
|
|
ascertainment bias
|
results of a trial are systematically distorted by knowledge of which treatment each participant is receiving
|
|
how to protect against ascertainment bias
|
blinding
|
|
is it always possible to use placebo
|
no - particularly in non-drug trials
|
|
problem with dropouts
|
if people in the treatment group are dropping out due to adverse events, the final assessment will contain fewer people with adverse events
|
|
how to prevent bias when there are dropouts
|
1. intention to treat
all participants that were randomized are analyzed as part of their groups, whether or not they complete the study
2. worst case scenario sensitivity analysis:
for the group with the best outcomes, treat dropouts as if the worst possible result occurred
for the group with the worst outcomes, treat dropouts as if the best possible result occurred
evaluate whether this analysis contradicts or supports the analysis that excludes the dropouts |
|
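The worst case scenario analysis above is easy to illustrate for a binary (success/failure) outcome. A minimal sketch; the function name and argument names are invented for illustration, not from the card.

```python
def worst_case_rates(best_success, best_n, best_dropouts,
                     worst_success, worst_n, worst_dropouts):
    """Worst case sensitivity analysis for a binary outcome.

    In the group with the best observed outcomes, dropouts are counted
    as failures; in the group with the worst observed outcomes, dropouts
    are counted as successes. If the better group still wins under this
    pessimistic imputation, the conclusion is robust to the missing data.
    """
    best_rate = best_success / (best_n + best_dropouts)                    # dropouts = failures
    worst_rate = (worst_success + worst_dropouts) / (worst_n + worst_dropouts)  # dropouts = successes
    return best_rate, worst_rate

# Treatment arm: 40/50 completers improved, 10 dropped out.
# Control arm:   25/50 completers improved, 5 dropped out.
treat, ctrl = worst_case_rates(40, 50, 10, 25, 50, 5)
print(round(treat, 3), round(ctrl, 3))  # 0.667 0.545 - treatment still ahead
```

Here the treatment's advantage survives even when every treatment dropout is assumed to have failed, so the dropouts cannot explain away the result.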
when a cross-over design is inappropriate
|
curable or lethal conditions
|
|
crossover design
|
patients receive a series of different treatments
|
|
bias during dissemination of trials
|
publication bias
language bias
country of publication bias
time lag bias
potential breakthrough bias |
|
publication bias
|
more likely that positive (and particularly, strongly positive) results will be published
|
|
only way to prevent publication bias
|
compulsory registration of trials at inception and publication of results of all trials
|
|
CNS
|
clinical nurse specialist
|
|
components of informed consent
|
readability of documents
education level of participants
influence of health care providers on decision making
therapeutic misconceptions (attributing therapeutic benefits to research when there may be none)
illness severity
compensation |
|
where potential participants learn about clinical trials
|
physician referral (46%)
media (35%)
internet (9%)
friend (8%)
cold call (2%) |
|
human rights
|
assertions that call for treating humans as ends in themselves rather than means to the goals and purposes of others
|
|
3 ethical principles for trials with human participants
|
respect for persons
beneficence
justice |
|
respect for persons
|
consider autonomy or vulnerability of participants
an autonomous person can weigh their own goals and choices
acting under this principle involves providing info and assessing understanding of the info (= informed consent) |
|
beneficence
|
obligation to protect participant from harm by maximizing benefits and minimizing harms
|
|
justice
|
treating participants fairly and equitably
particular individuals, groups or communities should neither bear an unfair share of the direct burdens of participating in research, nor should they be unfairly excluded from the potential benefits of research participation |
|
example of violating justice
|
denying inclusion without solid reason
imposing undue burden on an individual |
|
act regarding use of personal information
|
Health Insurance Portability and Accountability Act (HIPAA)
|
|
HIPAA
|
consent to access health info must be granted by the patient prior to discussion of the research study
|
|
suggestions for improving informed consent
|
know target audience
place purpose of study early in consent form logical presentation of elements of informed consent statements use active voice good organization headers 12-14 point font plenty of white space consistent use of words/terminology |
|
how to prevent relationships with healthcare providers from ruining informed consent
|
clearly delineate role as care provider from role in research study (because patients sometimes fear they will ruin their relationship with their care provider by not participating)
avoid coercion, manipulation, rational persuasion |
|
coercion
|
use of threat or force or excessive reward
|
|
manipulation
|
aimed at goals, rather than means for achieving the goals
|
|
persuasion
|
argument or reasoning
|
|
how therapeutic misconception arises
|
goals of clinical research are not the same as the goals of ordinary treatment
patient thinks there is therapeutic benefit to participating but there may be none |
|
research vs. clinical care goals
|
research: short time commitment, standardized
clinical care: time intensive, individualized |
|
patients' belief that they will be assigned to a study group based on need instead of randomization is an example of _____
|
therapeutic misconception
|
|
how illness severity compromises informed consent
|
indirectly
fatigue and anxiety can impair judgment
desperation
info presented when the individual has low ability to process it |
|
recommended way of dealing with obtaining proper informed consent from a severely ill patient
|
1. offer information outside the clinical context by medical personnel not associated with the patient's care, in order to avoid the interpretation of research as recommended treatment
2. offer information, then provide contemplation time |
|
people who are often not included in studies
|
women
ethnic minorities |
|
results of the distrust score experiment
|
Black people distrust their physicians when it comes to research participation more than white people (think physicians won't explain it properly, or that they are already experimenting without their permission)
gender, lower education level, geographic region, and unemployment were also factors |
|
who actively promotes the benefits of participation and acts as patient advocate
|
CRA = clinical research associate
may be the CNS or not |
|
role of CRA
|
actively promotes the benefits of participation and acts as patient advocate
|
|
factors that increased the success of the CRA in getting patients to enrol
|
CRA's confidence with study background
CRA's impression of the scientific merit of the study
their impression of the study risks
strength of their recommendation
adequate time frame to obtain consent |
|
definition of cohort
|
group of individuals having a statistical factor (ex. age, risk) in common
|
|
parallel group RCT
|
participants randomly allocated to treatment and comparison groups
|
|
gold standard
|
parallel group RCT
|
|
prospective cohort study
|
researcher assembles cohort and collects data into the future regarding outcome
|
|
retrospective cohort study
|
researcher looks back in the past to assemble the cohort
collects data regarding past outcomes |
|
cross sectional study
|
at a given time point in a given population
measure exposure and outcome |
|
attrition bias
|
differences that arise in study groups due to exclusions of participants after randomization (ie. drop outs)
|
|
how readers should deal with drop outs
|
look for descriptions of who dropped out and why when reading the paper
|
|
when RCTs are not the best choice (7)
|
too short
intermediate outcomes
not studied in target populations
efficacy, not effectiveness
time consuming
costly
not always feasible or ethical |
|
efficacy vs. effectiveness
|
efficacious = in research setting
effective = in real world setting |
|
first do no harm is from
|
Hippocratic oath
|
|
4 historical documents associate with clinical trial ethics
|
Hippocratic oath
Nuremberg Code
Declaration of Helsinki
Belmont Report |
|
Nuremberg code
|
following war crimes in WW2
established voluntary consent |
|
Declaration of Helsinki
|
1964
World Medical Association
became the basis for ICH-GCP guidelines |
|
Belmont report
|
1979
US regulatory document for protection of research subjects
respect, beneficence, justice |
|
requirements that make RCTs medically, scientifically, ethically justifiable
|
appropriate treatment and comparison groups
reasonable doubt about efficacy (equipoise)
benefits vs. risks
compatibility with health care needs
sufficiently similar to real world use
sufficient sample size to find a difference |
|
equipoise
|
reasonable doubt about efficacy
|
|
reasonable doubt about comparative efficacy of two treatments (or a treatment vs. placebo)
|
equipoise
|
|
regulations for CTs in Canada
|
Health Canada Food and Drug Regulations for Clinical Trials
|
|
Guidelines for CTs in Canada
|
Tri-Council Policy Statement 2 (TCPS2): Ethical Conduct for Research Involving Humans
|
|
guidelines for CTs internationally
|
International Conference on Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use (ICH) Good Clinical Practice guidelines
|
|
drug approvals are based on 3 things
|
safety
efficacy
quality |
|
Therapeutic Products Directorate (TPD)
|
reviews trials for pharmaceuticals and medical devices
|
|
who reviews trials for pharmaceuticals and medical devices in Canada
|
Therapeutic Products Directorate (TPD)
|
|
Biologics and Genetic Therapies Directorate (BGTD)
|
reviews trials for biologics and radiopharmaceuticals
|
|
who reviews trials for biologics and radiopharmaceuticals in Canada
|
Biologics and Genetic Therapies Directorate (BGTD)
|
|
BGTD
|
Biologics and Genetic Therapies Directorate
|
|
TPD
|
Therapeutic Products Directorate
|
|
Natural Health Products Directorate
|
reviews trials for natural health products
|
|
who reviews trials for natural health products in Canada
|
Natural Health Products Directorate
|
|
NHPD
|
Natural Health Products Directorate
|
|
All Phase I, II and III studies involving humans must be reviewed and approved by Health Canada unless ...
|
the drug is being used as approved on the Product Monograph
|
|
components of Clinical Trial Application for HC (9)
|
1. signed forms assigning accountabilities - company/investigator
2. names of investigators
3. sites
4. protocol
5. chemistry and manufacturing info
6. results of preclinical pharm and tox
7. results of clinical studies (if not first in man)
8. investigational drug brochure
9. sample patient consent forms |
|
organization that makes the guidelines that must be followed by clinical trials
|
ICH-GCP guidelines
|
|
ICH-GCP stands for
|
International Conference on Harmonisation
Good Clinical Practice |
|
REB
|
research ethics board
|
|
TCPS2
|
Tri-Council Policy Statement 2
|
|
who has to review clinical trials before they may commence
|
Research ethics board
|
|
what principles are followed by REB
|
TCPS2
|
|
GCP is misleading, it should really be
|
good clinical research practice
|
|
2 most important elements of ICH GCP
|
patient safety
study credibility |
|
definition of ICH-GCP
|
standard for all the stages of clinical trials that provides assurance that the data and results are credible and accurate
|
|
research ethics board AKA
|
institutional review board (IRB)
|
|
IRB
|
institutional review board
another name for research ethics board |
|
minimum number of members in research ethics board
|
5
|
|
qualifications of members of research ethics board
|
both medical and lay
|
|
situation in which there may be multiple research ethics boards
|
more than one institution involved
ex. U of T and hospital |
|
moral (definition)
|
virtuous in general conduct
distinction between right and wrong |
|
ethics
|
moral principles or code
|
|
core principles of TCPS2
|
respect for persons
concern for welfare
justice |
|
beneficence AKA
|
concern for welfare
|
|
does informed consent need to be signed
|
no - it can be oral or implied
|
|
consent in children
|
can be given by parents
|
|
information required for informed consent
|
1. purpose of research
2. identity of researcher, sponsor
3. contact information re: research and ethical issues
4. description of design, procedures, responsibilities
5. risks and benefits
6. alternatives
7. assurance of freedom to withdraw
8. commercialization and conflicts of interest
9. description of data collected, access/confidentiality
10. payments
11. stopping rules |
|
incentives are not the same as ____
|
compensation/reimbursement
|
|
3 threats to voluntary participation
|
undue influence
coercion
incentives |
|
undue influence
|
participant is recruited by an individual in position of authority and may feel as if they have no choice
|
|
coercion
|
more extreme undue influence, involves threat of harm/punishment
|
|
incentives
|
anything offered to participants to encourage participation
not the same as compensation/reimbursement |
|
vulnerable groups
|
may be unduly influenced by the expectation of a benefit or a retaliatory response due to refusal
|
|
list of vulnerable groups
|
children
pregnant women
ER patients
intensive care patients
unconscious patients
cognitively impaired
terminally ill
prisoners
homeless/unemployed/impoverished
abused
aboriginal
ethnic minorities
students |
|
capacity
|
ability to understand information about research and appreciate the consequences of their decision
|
|
ability to understand information about research and appreciate the consequences of their decision
what is this called |
capacity
|
|
is research permitted in individuals who lack capacity
|
yes
|
|
what kind of permission is obtained for children
|
assent
|
|
assent
|
expression of approval or agreement
|
|
stopping rules (Definition)
|
rules laid down in advance that specify conditions under which the experiment will be terminated: unequivocal demonstration that one regimen in a randomized controlled trial is clearly superior to the other, or that one is clearly harmful
|
|
3 ways that a study may cease for a particular participant
|
1. researchers may stop a study according to stopping rules
2. researcher may remove someone from a study (worsening health, better therapy available, non-adherence)
3. participant may withdraw |
|
do participants require a reason to withdraw from the study
|
no
|
|
when research can proceed without consent (5)
|
1. no more than minimal risk
2. impossible to carry out the research properly with consent
3. won't affect a person's welfare
4. not a therapeutic intervention
5. consent can be obtained later, if appropriate |
|
role of deception in research
|
may involve less than full disclosure or intentional provision of misleading information
must debrief
cannot be used as an excuse to leave out information |
|
4 categories of risks of participating in research
|
physiological
psychological
social
economic |
|
example of physiological risk of research participation
|
side effects
|
|
example of psychological risk of research participation
|
stress
|
|
example of social risk of research participation
|
embarrassment
|
|
3 levels of risk
|
no greater than minimal
greater than minimal, but prospect of benefit
greater than minimal, no prospect of benefit |
|
example of no greater than minimal risk
|
survey
non-invasive treatment |
|
example of greater than minimal risk but prospect of benefit
|
phase III trial
blood sampling
psychological testing |
|
example of more than minimal risk, no prospect of benefit
|
phase I trial
invasive procedures |
|
ethical duties of an investigator regarding privacy and confidentiality - protect from (6)
|
unauthorized access
unauthorized use
disclosure
modification
loss
theft |
|
clinical equipoise
|
genuine uncertainty about which therapy or therapies are most effective for a given condition
|
|
in which studies is clinical equipoise required
|
trials in which participants are randomly assigned to different groups
|
|
when are placebo controlled trials ethically acceptable
|
does not compromise safety or health of participants
compelling scientific justification
its use is scientifically and methodologically sound |
|
when are placebos used (5)
|
1. no established effective therapies
2. doubt exists about the net benefit of available therapies
3. patients are resistant to available therapies due to treatment/medical history
4. add-on trial
5. patients have provided refusal of effective therapy and withholding it will not cause serious or irreversible harm |
|
add-on trial
|
All subjects receive an existing treatment but some then receive the additional experimental drug whilst others do not or are given a placebo
|
|
what is the conflict of interest when you are a clinician-researcher
|
responsible for patient care
responsible for research |
|
other potential conflicts of interest
|
financial
intellectual |
|
conflict of interest in clinical trials
|
REB reviews potential conflicts of interest
public disclosure of conflicts of interest |
|
how are clinical trials registered
|
public registry
|
|
why clinical trials are registered
|
to increase transparency and accountability
|
|
adverse events include
|
events that do not necessarily have a causal relationship with the treatment or usage
|
|
serious adverse events (5)
|
1. results in death
2. is life threatening
3. requires in-patient hospitalization or prolongation of hospitalization
4. results in persistent or significant disability/incapacity
5. is a congenital anomaly/birth defect |
|
definition of adverse events
|
untoward medical occurrence
occurs after admin of product, or use of device |
|
how reporting of adverse events must proceed
|
reporting requirements and timeframe vary by local research ethics board and regulatory authority
|
|
relationship between AEs and ADRs
|
look at slide 75
all ADRs are AEs
an ADR is an AE that has a reasonable probability of being related to treatment |
|
plagiarism vs. fabrication vs. falsification
|
plagiarism: act of misrepresenting someone else's work as your own
fabrication: set forth measurements that have not been performed
falsification: ignore or change relevant data that contradict reported findings |
|
ignore or change relevant data that contradict reported findings
|
falsification
|
|
set forth measurements that have not been performed
|
fabrication
|
|
act of misrepresenting someone else's work as your own
|
plagiarism
|
|
what is not considered research misconduct
|
honest error
honest difference in interpretation |
|
what to consider if a patient asks you about participating
|
balance of risks and benefits for the individual
government regulation
code of research ethics
the patient themself is (usually) unlikely to benefit from the study |
|
core principles of ICH-GCP vs. TCPS2
|
ICH-GCP: participant safety, study credibility
TCPS2: respect for persons, concern for welfare, justice |
|
primary vs. secondary research questions
|
primary = the main focus of the study
secondary = other additional stuff - likely to be the focus of future studies |
|
elements of the research question
|
PICOT
population
intervention
comparison
outcome
timeframe |
|
null hypothesis
|
there is no difference between the intervention and the comparison
|
|
what is the opposite of the null hypothesis
|
alternative or study hypothesis
|
|
which of the following is a testable statement
|
null hypothesis
|
|
2 types of alternative hypothesis
|
directional (the test group will be greater than the comparison)
non directional (the test group will differ from the comparison) |
|
look at slide 35 (lec 9)
|
ok
|
|
3 types of hypothesis testing
|
superiority
equivalence non-inferiority |
|
superiority
|
intended to determine if new treatment is better than the comparison by a pre-specified amount
|
|
intended to determine if new treatment is better than the comparison by a pre-specified amount
|
superiority
|
|
equivalence
|
determines if new treatment and comparison are therapeutically similar by a pre-specified amount
|
|
determines if new treatment and comparison are therapeutically similar by a pre-specified amount
|
equivalence
|
|
non-inferiority
|
intended to determine that new treatment is no worse than comparison by a pre-specified amount
|
|
intended to determine that new treatment is no worse than comparison by a pre-specified amount
|
non-inferiority
|
|
what has to be planned in a clinical trial regarding participants
|
target population
sampling methods
design (parallel, cross-over, etc.)
sample size
allocation ratio |
|
selection criteria
|
define the study population - the kind of patients best suited to the research question
|
|
sampling procedure
|
process for picking the subgroup of the population who will actually be the subjects of the study
|
|
all members of a particular group that results are intended to be generalized to
|
target population
|
|
target population
|
all members of a particular group that results are intended to be generalized to
|
|
study sample
|
subset of people included in the study
|
|
subset of people included in the study
|
study sample
|
|
what she calls primary care
|
outpatient
|
|
what she calls secondary care
|
specialist
|
|
what she calls tertiary care
|
hospital
|
|
2 types of reasoning used when selecting participants for a clinical trial
|
ethical
scientific |
|
characteristics we consider when selecting participant for clinical trial
|
age
sex
concomitant disease
concomitant drugs
inpatient/outpatient |
|
inclusion criteria are used to maximize (4)
|
rate of outcomes
likely benefit of intervention
generalizability - study population should approximate real population
ease of recruitment - super narrow criteria = too hard to recruit |
|
why you want to maximize rate of outcomes
|
ex. if you are trying to prevent heart attacks, the people most likely to have heart attacks (ie. the group that will produce the highest rate of outcomes) are those who have already had one.
|
|
exclusion criteria minimize (5)
|
harm
patients in whom the intervention will be ineffective
non-adherence
loss to follow up
practical problems |
|
example of loss to follow up
|
homeless
|
|
run in phase
|
people who drop out during the run-in phase are not included in the actual study
compliance with their existing drug therapy may be assessed
the run-in phase also determines if the patients will benefit from the intervention
often done by commercial studies because you get a bigger effect size at the end |
|
representative sample
|
similar to the population on all characteristics
|
|
sampling methods
|
1. probability/random: all members of population have equal chance of being selected
2. non-probability/non-random
|
|
type of sampling where all members of population have equal chance of being selected
|
probability/random (= random within the target population, not the entire population) |
|
probability/random sampling
|
all members of population have equal chance of being selected
|
|
3 types of probability/random sampling
|
simple
stratified cluster |
|
simple stratified and cluster are 3 types of
|
probability/random sampling
|
|
how often is probability/random sampling done
|
rarely
|
|
3 types of non-probability/non-random sampling
|
systematic
convenience purposive |
|
if no answer in email look up difference between systematic, convenience, purposive sampling
|
ok
|
|
2 types of selection bias
|
1. not all individuals in a population have equal chance of getting selected to participate (ie. is the group representative)
2. intervention and comparison groups differ from each other |
|
parallel design
|
treatment group and comparison group
|
|
trial design where there is treatment group and comparison group
|
parallel group
|
|
cross over study
|
2 groups
1 gets intervention, 1 gets comparison
wash out period
switch
perform tests at the midpoint and the end |
|
wash-out period is used to prevent __ effect
|
carry over
|
|
cluster design
|
same as parallel but you take a whole group (ex. all patients assigned to a specific physician)
|
|
same as parallel but you take a whole group (ex. all patients assigned to a specific physician)
|
cluster
|
|
why cluster trial designs are bad
|
results of a group may differ because different physicians have different practice behaviours
|
|
example of cluster trial
|
tell some physicians to teach their patients about sleep hygiene
don't tell other physicians to do this |
|
why cluster designs are used
|
you can't 'unlearn' information
it is hard in a busy practice for a doctor to remember to teach something to some patients and not others
also for convenience |
|
concept of lead
|
in a study where the physician provides info to some patients but not others, they may accidentally provide it to the control group
|
|
studies that follow subjects forward over time
|
longitudinal
|
|
longitudinal study
|
follows subjects forward over time
|
|
types of sample size (4)
|
fixed
mega
sequential
N of 1 |
|
fixed sample size
|
based on a priori sample size calculation performed by computer
|
|
sample size type that is based on a priori sample size calculation performed by computer
|
fixed size
|
|
mega trial is type of
|
fixed size
|
|
mega trial
|
large sample
|
|
large sample is what type of sample size
|
mega trial
|
|
sequential (type of sample size)
|
variable
not known at outset |
|
type of sample size that is variable
not known at outset |
sequential
|
|
why sequential sample sizes are used
|
every x number of patients they pause and do an analysis. if there is sufficient evidence of positive or negative results they stop
|
|
how sequential sample sizes are written about in papers
|
"interim analysis"
|
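The sequential-design idea above ("every x patients, stop and do an analysis") can be sketched in code. This is only a toy illustration, assuming invented response rates and a raw count margin as the stopping rule; real trials use pre-specified statistical boundaries for their interim analyses, not a margin like this.

```python
import random

def sequential_trial(stop_margin=10, check_every=50, max_n=1000, seed=1):
    """Toy sequential design: after every `check_every` patients, compare
    successes in the two arms and stop enrolment early if they differ a lot."""
    random.seed(seed)
    wins = {"treatment": 0, "control": 0}
    for n in range(1, max_n + 1):
        arm = "treatment" if n % 2 else "control"  # alternate enrolment (toy)
        # hypothetical response rates: 60% on treatment vs 40% on control
        if random.random() < (0.6 if arm == "treatment" else 0.4):
            wins[arm] += 1
        if n % check_every == 0:  # interim analysis point
            if abs(wins["treatment"] - wins["control"]) >= stop_margin:
                return n, wins    # sufficient evidence - stop early
    return max_n, wins            # ran to the maximum sample size
```

The sample size is not known at the outset: the function returns whatever `n` the trial stopped at.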
|
why you wouldn't do a 1:1 ratio of treatment: placebo
|
patients might not want to participate if they are less likely to get drug
|
|
3 characteristics that determine choice of intervention
|
generalizability (next best alternative for therapy)
complexity
strength |
|
don't get lec 9 slide 76
|
ok
|
|
placebo is what type of control
|
negative
|
|
active treatment is what type of control
|
positive
|
|
possible choices of comparison arm
|
placebo
active treatment
same intervention but different regimen/additive
absence of treatment
usual/standard care control |
|
head to head trial
|
compare 2 active treatments
|
|
add on trial
|
groups start with same active treatment
then add the test treatment
this is a way to get around placebo when it is unethical |
|
are observational studies randomized
|
no
|
|
why is randomization important
|
forms groups that are equivalent
|
|
4 types of randomization
|
simple
blocked stratified cluster |
|
simple randomization
|
randomize all subjects
|
|
what type of randomizationis it when you randomize all the subjects
|
simple
|
|
blocked randomization
|
randomize subjects within blocks to ensure equal number of subjects in all groups throughout the trial
|
|
stratified randomization
|
randomize groups that share similar characteristics
(done with block randomization)
results may be analyzed by subgroups
ex. randomize the males into one of two groups, then same for females |
|
note about cluster randomization
|
randomize at group level
analyze at individual level |
|
Using a random number table to assign the first patient in a study and then alternating between the two groups is random allocation.
True or False |
F
no - alternating establishes a predictable pattern, so allocation is not random |
|
papers must specify ___ in their paper
|
randomization procedures
allocation concealment procedures (ex. SNOSE) |
|
reasons not to randomize
|
ethics
feasibility (time, complexity)
expense |
|
prospective vs. retrospective
|
prospective generate new info, retrospective use existing info
|
|
RCTs are prospective or retrospective
|
prospective
|
|
open trial
|
all participants and investigators are aware of treatment assignment
|
|
what is it called when all participants and investigators are aware of treatment assignment
|
open trial
|
|
4 types of measurement bias that occur in unblinded RCTs
|
performance bias (groups get systematically treated differently)
contamination bias (control group gets the intervention)
co-intervention bias (other treatment unequally applied to intervention and control groups)
detection bias (outcomes unequally assessed due to preconceived notions or characteristics) |
|
2 reasons not to blind
|
1. impossible (ex. a diet, surgery)
2. possible, but unacceptable (dangerous, painful, cumbersome) |
|
how to minimize bias without blinding (4)
|
1. standardize procedures
2. minimize co-intervention
3. blind outcome assessor
4. choose hard (objective) outcome |
|
desired features of primary and secondary outcomes
|
easy to observe and record
free of measurement error
clinically relevant
chosen before starting study
can be observed independent of treatment assignment |
|
6 Ds (outcomes of disease)
|
death
disease
discomfort
disability
dissatisfaction
destitution |
|
how is dissatisfaction measured
|
satisfaction
|
|
destitution refers to
|
economic outcomes
|
|
examples of pain outcomes (9)
|
pain
procedure success
distress/anxiety
sleeplessness
immobility/independence
isolation
loss of functioning/performance
secondary morbidity - phobia, depression
mortality |
|
composite outcome
|
combines multiple endpoints into a single outcome
ex. stroke or heart attack or death |
|
surrogate outcome
|
substitute outcome
associated with relevant clinical outcome |
|
outcome that is not clinically relevant in itself, but is associated with a clinically relevant outcome
|
surrogate/intermediate outcome
|
|
example of surrogate outcome
|
blood pressure
|
|
surrogate outcome AKA
|
intermediate outcome
|
|
what kind of questions should you ask to determine the rate of anticipated side effects
|
specific questions
|
|
what kind of questions should you ask to determine the rate of UNanticipated side effects
|
general questions
|
|
2 ways to evaluate outcomes
|
1. efficacy
Does the intervention work in those who actually receive it? analyzed according to treatment received
2. effectiveness
Does the intervention work in those offered it? analyzed according to allocation |
|
efficacy studies AKA
|
explanatory/fastidious
(fastidious = attention to detail) |
|
effectiveness studies AKA
|
management/pragmatic
pragmatic means dealing with things sensibly and realistically in a way that is based on practical rather than theoretical considerations |
|
what determines where a study is found on the efficacy-effectiveness scale (5)
|
who is selected for participation
how the intervention is delivered
how compliance and follow-up are handled
how outcomes are assessed
how the results are analyzed |
|
example of "how the intervention is delivered" determining where a study is found on the scale
|
if a highly trained person is delivering the intervention then it is efficacy
|
|
efficacy trial - criteria for subject selection
|
strict
|
|
efficacy trial - delivery of intervention
|
strictly controlled
|
|
efficacy trial - follow up and compliance
|
strictly controlled
|
|
efficacy trials - outcome
|
often clinically relevant (may be surrogate)
may require specialized training to adjudicate |
|
how are efficacy trials analyzed
|
intent-to-treat and
per-protocol (actual treatment received) |
|
opposite of intent to treat
|
per protocol (actual treatment received)
|
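The intent-to-treat vs. per-protocol contrast can be shown on toy data. Following the deck's gloss (per-protocol = analyze by treatment actually received), this hypothetical sketch groups the same records two ways and gets different outcome rates; the records and rates are invented for illustration.

```python
# each record: (arm assigned at randomization, arm actually received, had outcome?)
records = [
    ("drug", "drug", True),
    ("drug", "drug", False),
    ("drug", "placebo", False),  # non-compliant: assigned drug, took placebo
    ("placebo", "placebo", True),
    ("placebo", "placebo", True),
    ("placebo", "drug", True),   # crossed over to drug
]

def outcome_rate(records, arm, by):
    """Outcome rate for one arm, grouping by 'assigned' (ITT) or 'received' (PP)."""
    idx = 0 if by == "assigned" else 1
    group = [r for r in records if r[idx] == arm]
    return sum(r[2] for r in group) / len(group)

itt = outcome_rate(records, "drug", by="assigned")  # intent-to-treat
pp = outcome_rate(records, "drug", by="received")   # per-protocol
print(round(itt, 2), round(pp, 2))  # 0.33 0.67
```

With no dropout, non-compliance, or crossover, the two groupings would be identical and the analyses would agree, matching the card above on when ITT and per-protocol give similar results.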
|
which of efficacy and effectiveness has higher internal validity
|
efficacy
|
|
internal validity
|
the degree to which bias and chance error are minimized
|
|
type of studies that efficacy trials are used for
|
cross over
dose-ranging
equivalence/non-inferiority |
|
effectiveness trials and subject selection
|
few restrictions
|
|
effectiveness trials and delivery of the intervention
|
little control
|
|
effectiveness trials and study compliance and follow up
|
little control
|
|
type of outcome used in effectiveness trials
|
easy to measure
clinically relevant |
|
how are effectiveness trials analyzed
|
according to original group assignment (intent to treat)
|
|
which model is selected for external validity (efficacy or effectiveness)
|
effectiveness
|
|
per protocol AKA
|
on treatment
|
|
intent to treat vs. per protocol have similar results if there are low levels of (5)
|
dropout
non compliance
co-intervention
contamination
development of comorbidity |
|
why participants might be non-compliant
|
side effects
forget
withdraw consent
decide on an alternative treatment
contraindication (develops over time) |
|
problem with effectiveness trial
|
if you do not see a result there may be too much noise in the study
ex. lack of compliance |
|
what can you do to increase compliance
|
select interested participants through a run in phase
frequent contact and reminders |
|
measures of compliance (3)
|
pill counts
biochemical assays
self-report |
|
confounding
|
an extraneous variable that correlates with both dependent and independent variables.
ex. if a study examines rates of drowning and rates of ice cream consumption, temperature may be the confounding variable |
|
an extraneous variable that correlates with both dependent and independent variables.
ex. if a study examines rates of drowning and rates of ice cream consumption, temperature may be the __________ variable |
confounding
|
|
how to avoid confounding
|
table 1
groups must be similar at the starting point |
|
Tri council Policy Statement 2 vs. Health Canada Food and Drug ___________ for Clinical Trials vs. ICH-GCP
|
TCPS2 and ICH-GCP = guidelines
Health Canada Food and Drug = regulations |
|
systematic vs. convenience vs. purposive sampling
prevalence of each |
systematic = rare
convenience = most common
purposive = common in qualitative but not RCT |
|
type of sampling
every 5th person in a target sample (Rare) |
systematic
|
|
type of sampling
whoever is available |
convenience
|
|
type of sampling
choose people who have specific characteristics that you are looking for |
purposive
|
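The three non-probability methods in the cards above can be sketched over a hypothetical roster of 100 patients; the "smoker" characteristic used for purposive sampling here is invented purely for illustration.

```python
patients = [{"id": i, "smoker": i % 3 == 0} for i in range(1, 101)]  # toy roster

# systematic: every 5th person in the target sample
systematic = patients[4::5]

# convenience: whoever is available (here: the first 20 on the list)
convenience = patients[:20]

# purposive: choose people with a specific characteristic you are looking for
purposive = [p for p in patients if p["smoker"]]

print(len(systematic), len(convenience), len(purposive))  # 20 20 33
```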
|
effectiveness trials AKA
|
pragmatic/management
|
|
2 names for randomization where the groups are not the same size
|
weighted
unequal |
|
beneficence AKA
|
concern for welfare
|
|
concern for welfare/beneficence means
|
maximum benefit, minimum risk
|