Contents
Sample Size and Methods
Statistical method selection
Sample size estimation
Sample size for paired t test
Sample size for unpaired t test
Sample size for independent case-control studies
Sample size for matched case-control studies
Sample size for independent cohort studies
Sample size for paired cohort studies
Sample size for population surveys
Sample size for survival analysis
Sample size for Pearson's correlation
WHAT SORT OF INVESTIGATION ARE YOU TACKLING?
· COMPARISON OF TWO INDEPENDENT GROUPS
e.g. Birth weights of babies from a group of Asian mothers compared with those from a group of European mothers.
· COMPARISON OF RESPONSE OF A GROUP UNDER DIFFERENT CONDITIONS
e.g. Lying diastolic blood pressure in a group of subjects when they took drug X compared with when they took drug Y.
· RELATIONSHIP BETWEEN TWO VARIABLES MEASURED ON THE SAME GROUP
e.g. Age vs. serum creatinine in a group of patients with rheumatoid disease.
WARNING
This facility will attempt to find the best test for your data but please remember that it is not a panacea of statistical methodology (Bland, 1996). If you have any doubt about the best method for your data you should try to consult a statistician and you should most certainly consult a reputable textbook.
· paired t test
· unpaired t test
· independent case-control
· matched case-control
· independent cohort
· paired cohort
· population surveys
· survival analysis
· correlation
Menu location: Analysis_Sample Size.
At the design stage of an investigation, you should aim to minimise the probability of failing to detect a real effect (type II error, false negative).
The probability of type II error is equal to one minus the power of a study (the probability of detecting a true effect). You must select a power level for your study along with the two sided significance level at which you intend to accept or reject null hypotheses in statistical tests. The significance level you choose (usually 5%) is the probability of type I error (incorrectly rejecting the null hypothesis, false positive).
StatsDirect estimates the minimum sample sizes necessary to avoid given levels of type II error in the comparison of means using Student t tests, in the comparison of proportions and in population surveys.
Remember that good design lies at the heart of good research, and for important studies statistical advice should be sought at the planning stage.
For further reading please see Armitage and Berry, 1994; Fleiss, 1981; Gardner and Altman, 1989; Dupont, 1990; Pearson and Hartley, 1970.
Menu location: Analysis_Sample Size_Paired t.
This function gives you the minimum number of pairs of subjects needed to detect a true difference DELTA in population means with power POWER and two sided type I error probability ALPHA (Dupont, 1990; Pearson and Hartley, 1970).
Information required
POWER: probability of detecting a true effect
ALPHA : probability of detecting a false effect (two sided: double this if you need one sided)
DELTA : difference in population means
SD : estimated standard deviation of paired response differences
Practical issues
· Usual values for POWER are 80%, 85% and 90%; try several values in order to explore a range of sample sizes.
· 5% is the usual choice for ALPHA.
· SD is usually estimated from previous studies.
· If possible, choose a range of mean differences that you want to have the statistical power to detect.
Technical validation
The estimated sample size n is calculated as the solution of:
- where d = delta/sd, a = alpha, b = 1 - power and t(v,p) is a Student t quantile with v degrees of freedom and probability p. n is rounded up to the closest integer.
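As an illustration of this calculation, the following Python sketch (the function name and the iterative central-t formulation are assumptions, not StatsDirect code) finds the smallest n satisfying n >= ((t(n-1, 1-a/2) + t(n-1, power)) / d)^2; because the degrees of freedom depend on n, the equation is solved by iteration:

# Illustrative sketch only; assumes the common iterative central-t
# approximation for a paired t test sample size.
from scipy import stats

def paired_t_sample_size(delta, sd, power=0.9, alpha=0.05):
    """Smallest number of pairs n with n >= ((t(n-1, 1-a/2) + t(n-1, power)) / d)**2."""
    d = delta / sd
    n = 2  # smallest sample with one degree of freedom
    while True:
        df = n - 1
        required = ((stats.t.ppf(1 - alpha / 2, df) + stats.t.ppf(power, df)) / d) ** 2
        if n >= required:
            return n
        n += 1

# Example: detect a mean paired difference of 5 with SD of differences 10
print(paired_t_sample_size(delta=5, sd=10, power=0.9, alpha=0.05))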
Menu location: Analysis_Sample Size_Unpaired t.
This function gives you the minimum number of experimental subjects needed to detect a true difference DELTA in population means with power POWER and two sided type I error probability ALPHA (Dupont, 1990; Pearson and Hartley, 1970).
Information required
POWER: probability of detecting a true effect
ALPHA : probability of detecting a false effect (two sided: double this if you need one sided)
DELTA : difference in population means
SD : estimated standard deviation for within group differences
M : number of control subjects per experimental subject
Practical issues
· Usual values for POWER are 80%, 85% and 90%; try several values in order to explore a range of sample sizes.
· 5% is the usual choice for ALPHA.
· SD is usually estimated from previous studies.
· If possible, choose a range of differences between means that you want to have the statistical power to detect.
Technical validation
The estimated sample size n is calculated as the solution of:
- where d = delta/sd, a = alpha, b = 1 - power and t(v,p) is a Student t quantile with v degrees of freedom and probability p. n is rounded up to the closest integer.
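As an illustration of this calculation, the following Python sketch (the function name and the central-t formulation are assumptions, not StatsDirect code) finds the smallest experimental group size n satisfying n >= (1 + 1/m) * ((t(v, 1-a/2) + t(v, power)) / d)^2 with v = n(1 + m) - 2; the control group then has m * n subjects:

# Illustrative sketch only; assumes the common central-t approximation with
# m control subjects per experimental subject (df = n + m*n - 2).
from scipy import stats

def unpaired_t_sample_size(delta, sd, m=1, power=0.9, alpha=0.05):
    """Smallest experimental-group size n; the control group then has m*n subjects."""
    d = delta / sd
    n = 2
    while True:
        df = n * (1 + m) - 2
        required = (1 + 1 / m) * ((stats.t.ppf(1 - alpha / 2, df) + stats.t.ppf(power, df)) / d) ** 2
        if n >= required:
            return n
        n += 1

print(unpaired_t_sample_size(delta=5, sd=10, m=1))  # equal group sizes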
Menu location: Analysis_Sample Size_Independent Case-Control.
This function gives the minimum number of case subjects required to detect a real odds ratio or case exposure rate with power POWER and two sided type I error probability ALPHA. This sample size is also given as a continuity-corrected value intended for use with corrected chi-square and Fisher's exact tests (Schlesselman, 1982; Casagrande et al., 1978; Dupont, 1990).
Information required
POWER: probability of detecting a real effect
ALPHA : probability of detecting a false effect (two sided: double this if you need one sided)
P0 : probability of exposure in controls
* Input either P1 or OR
P1 : probability of exposure in case subjects
OR : odds ratio of exposures between cases and controls
M : number of control subjects per case subject
Practical issues
· Usual values for POWER are 80%, 85% and 90%; try several values in order to explore a range of sample sizes.
· 5% is the usual choice for ALPHA.
· P0 can be estimated as the population prevalence of exposure.
· If possible, choose a range of odds ratios that you want to have the statistical power to detect.
Technical validation
The estimated sample size n is calculated as:
- where a = alpha, b = 1 - power, y = odds ratio, nc is the continuity corrected sample size and Z(p) is the standard normal deviate for probability p. n is rounded up to the closest integer.
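As an illustration only, the following Python sketch uses the standard normal-approximation formula for two independent proportions with m controls per case and the usual continuity correction, deriving P1 from the odds ratio as OR * P0 / (1 + P0 * (OR - 1)); the function name is illustrative and the exact StatsDirect formula may differ:

# Illustrative sketch only; assumes the standard normal-approximation formula
# (Fleiss/Casagrande style) with m controls per case and a continuity correction.
import math
from scipy.stats import norm

def case_control_sample_size(p0, odds_ratio, m=1, power=0.9, alpha=0.05):
    # exposure probability in cases implied by the odds ratio
    p1 = odds_ratio * p0 / (1 + p0 * (odds_ratio - 1))
    q0, q1 = 1 - p0, 1 - p1
    p_bar = (p1 + m * p0) / (1 + m)          # pooled exposure probability
    q_bar = 1 - p_bar
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    n = ((z_a * math.sqrt((1 + 1 / m) * p_bar * q_bar)
          + z_b * math.sqrt(p1 * q1 + p0 * q0 / m)) / (p1 - p0)) ** 2
    # continuity-corrected value for use with corrected chi-square / Fisher tests
    nc = (n / 4) * (1 + math.sqrt(1 + 2 * (m + 1) / (n * m * abs(p1 - p0)))) ** 2
    return math.ceil(n), math.ceil(nc)       # cases; controls = m * cases

print(case_control_sample_size(p0=0.2, odds_ratio=2.0, m=1))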
Menu location: Analysis_Sample Size_Matched Case-Control.
This function gives you the minimum sample size necessary to detect a true odds ratio OR with power POWER and a two sided type I error probability ALPHA. If you are using more than one control per case then this function also provides the reduction in sample size relative to a paired study that you can obtain using your number of controls per case (Dupont, 1988).
Information required
POWER : probability of detecting a real effect
ALPHA : probability of detecting a false effect (two sided: double this if you need one sided)
r : correlation coefficient for exposure between matched cases and controls
P0 : probability of exposure in the control group
m : number of control subjects matched to each case subject
OR : odds ratio
Practical issues
· Usual values for POWER are 80%, 85% and 90%; try several values in order to explore a range of sample sizes.
· 5% is the usual choice for ALPHA.
· r can be estimated from previous studies - note that r is the phi (correlation) coefficient that is given for a two by two table if you enter it into the StatsDirect r by c chi-square function. When r is not known from previous studies, some authors state that it is better to use a small arbitrary value for r, say 0.2, than it is to assume independence (a value of 0) (Dupont, 1988).
· P0 can be estimated as the population prevalence of exposure. Note, however, that due to matching, the control sample is not a random sample from the population, therefore the population prevalence of exposure can be a poor estimate of P0 (especially if confounders are strongly associated with exposure; Dupont, 1988).
· If possible, choose a range of odds ratios that you want to have the statistical power to detect.
Technical validation
The estimated sample size n is calculated as:
- where a = alpha, b = 1 - power, y = odds ratio, and Z(p) is the standard normal deviate for probability p. n is rounded up to the closest integer.
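Dupont's correlation-adjusted calculation is not reproduced here. As a rough cross-check only, the following Python sketch (the function name is illustrative) implements the classic Schlesselman matched-pairs formula, which assumes the exposures of cases and their matched controls are independent (r = 0) and is therefore not the method described above:

# Illustrative sketch only: the classic matched-pairs formula (Schlesselman),
# which assumes exposures of cases and matched controls are independent (r = 0);
# it is NOT Dupont's correlation-adjusted calculation described above.
import math
from scipy.stats import norm

def matched_pairs_sample_size(p0, odds_ratio, power=0.9, alpha=0.05):
    p1 = odds_ratio * p0 / (1 + p0 * (odds_ratio - 1))   # exposure probability in cases
    p = odds_ratio / (1 + odds_ratio)                     # P(case exposed | discordant pair)
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    # discordant pairs needed, then total pairs given the discordance probability
    m_discordant = ((z_a / 2 + z_b * math.sqrt(p * (1 - p))) / (p - 0.5)) ** 2
    p_discordant = p1 * (1 - p0) + p0 * (1 - p1)
    return math.ceil(m_discordant / p_discordant)

print(matched_pairs_sample_size(p0=0.3, odds_ratio=2.0))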
Menu location: Analysis_Sample Size_Independent Cohort.
This function gives the minimum number of case subjects required to detect a true relative risk or experimental event rate with power POWER and two sided type I error probability ALPHA. This sample size is also given as a continuity-corrected value intended for use with corrected chi-square and Fisher's exact tests (Casagrande et al., 1978; Meinert, 1986; Fleiss, 1981; Dupont, 1990).
Information required
POWER: probability of detecting a real effect
ALPHA : probability of detecting a false effect (two sided: double this if you need one sided)
P0 : probability of event in controls
* Input either P1 or RR, where RR = P1/P0
P1 : probability of event in experimental subjects
RR : relative risk of events between experimental subjects and controls
M : number of control subjects per experimental subject
Practical issues
· Usual values for POWER are 80%, 85% and 90%; try several values in order to explore a range of sample sizes.
· 5% is the usual choice for ALPHA.
· P0 can be estimated as the population prevalence of the event under investigation.
· If possible, choose a range of relative risks that you want to have the statistical power to detect.
Technical validation
The estimated sample size n is calculated as:
- where a = alpha, b = 1 - power, nc is the continuity corrected sample size and Z(p) is the standard normal deviate for probability p. n is rounded up to the closest integer.
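As an illustration only, the following Python sketch uses the standard normal-approximation formula for comparing two independent proportions (Fleiss/Casagrande style), with P1 taken as RR * P0 and the usual continuity correction; the function name is illustrative and the exact StatsDirect formula may differ:

# Illustrative sketch only; assumes the standard normal-approximation formula
# for comparing two proportions with m controls per experimental subject,
# plus the usual continuity correction.
import math
from scipy.stats import norm

def cohort_sample_size(p0, rr, m=1, power=0.9, alpha=0.05):
    p1 = rr * p0                               # event probability in experimental subjects (must be < 1)
    q0, q1 = 1 - p0, 1 - p1
    p_bar = (p1 + m * p0) / (1 + m)
    q_bar = 1 - p_bar
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    n = ((z_a * math.sqrt((1 + 1 / m) * p_bar * q_bar)
          + z_b * math.sqrt(p1 * q1 + p0 * q0 / m)) / (p1 - p0)) ** 2
    nc = (n / 4) * (1 + math.sqrt(1 + 2 * (m + 1) / (n * m * abs(p1 - p0)))) ** 2
    return math.ceil(n), math.ceil(nc)         # experimental subjects; controls = m * n

print(cohort_sample_size(p0=0.1, rr=2.0, m=1))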
Menu location: Analysis_Sample Size_Paired Cohort.
This function gives you the minimum number of subject pairs that you require to detect a true relative risk RR with power POWER and two sided type I error probability ALPHA (Dupont, 1990; Breslow and Day, 1980).
Information required
POWER: probability of detecting a real effect
ALPHA : probability of detecting a false effect (two sided: double this if you need one sided)
r : correlation coefficient for failure between paired subjects
* Input either (P0 and RR) or (P0 and P1), where RR = P1/P0
P0 : event rate in the control group
P1 : event rate in the experimental group
RR : risk of failure of experimental subjects relative to controls
Practical issues
· Usual values for POWER are 80%, 85% and 90%; try several values in order to explore a range of sample sizes.
· 5% is the usual choice for ALPHA.
· r can be estimated from previous studies - note that r is the phi (correlation) coefficient that is given for a two by two table if you enter it into the StatsDirect r by c chi-square function. When r is not known from previous studies, some authors state that it is better to use a small arbitrary value for r, say 0.2, than it is to assume independence (a value of 0) (Dupont, 1988).
· P0 can be estimated as the population event rate. Note, however, that due to matching, the control sample is not a random sample from the population, therefore the population event rate can be a poor estimate of P0 (especially if confounders are strongly associated with the event).
· If possible, choose a range of relative risks that you want to have the statistical power to detect.
Technical validation
The estimated sample size n is calculated as:
- where a = alpha, b = 1 - power and Z(p) is the standard normal deviate for probability p. n is rounded up to the closest integer.
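As an illustration only, the following Python sketch uses a common paired-proportions formulation in which the within-pair correlation r reduces the variance of the difference in event rates; it may not match the StatsDirect calculation exactly, and the function name is illustrative:

# Illustrative sketch only; assumes a common paired-proportions formulation in
# which the within-pair correlation r reduces the variance of the difference.
import math
from scipy.stats import norm

def paired_cohort_sample_size(p0, rr, r=0.0, power=0.9, alpha=0.05):
    p1 = rr * p0                               # event rate in the exposed member of each pair
    q0, q1 = 1 - p0, 1 - p1
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    variance = p1 * q1 + p0 * q0 - 2 * r * math.sqrt(p1 * q1 * p0 * q0)
    n_pairs = ((z_a + z_b) ** 2) * variance / (p1 - p0) ** 2
    return math.ceil(n_pairs)

print(paired_cohort_sample_size(p0=0.1, rr=2.0, r=0.2))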
Menu location: Analysis_Sample Size_Population Survey.
This function gives you the minimum number of subjects that you require for a survey estimating the proportion of individuals in a population displaying a particular factor, with a specified tolerance (Colton, 1974; Feinstein, 2002).
Information required
cc : confidence level (1 - alpha, where alpha is the two sided probability of detecting a false effect: double alpha if you need a one sided estimate)
N : population size
p% : estimate of the rate (as a percentage) at which the characteristic occurs in the population
d% : absolute deviation from p% that you would tolerate (i.e. p% give or take d%)
Technical validation
The estimated sample size n is calculated, using simple Gaussian theory, as:
- where p is p%/100, d is d%/100, and z is a quantile from the standard normal distribution for a two tailed probability of 1-cc. n is rounded up to the closest integer.
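As an illustration only, the following Python sketch computes the usual normal-approximation estimate n0 = z^2 p(1-p)/d^2 and then applies a simple finite population correction (an assumed form; the exact StatsDirect adjustment may differ); the function name is illustrative:

# Illustrative sketch only; assumes the usual normal-approximation formula
# n0 = z^2 * p * (1 - p) / d^2 with a simple finite population correction.
import math
from scipy.stats import norm

def survey_sample_size(population_size, p_percent, d_percent, confidence=0.95):
    p = p_percent / 100.0
    d = d_percent / 100.0
    z = norm.ppf(1 - (1 - confidence) / 2)     # two sided quantile for the confidence level
    n0 = (z ** 2) * p * (1 - p) / d ** 2       # infinite-population sample size
    n = n0 / (1 + (n0 - 1) / population_size)  # finite population correction (assumed form)
    return math.ceil(n)

print(survey_sample_size(population_size=10000, p_percent=20, d_percent=5))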
Menu location: Analysis_Sample Size_Survival Times.
This function gives you the minimum number of subjects that you require to detect a true ratio of median survival times (hr) with power POWER and two sided type I error probability ALPHA (Dupont, 1990; Schoenfeld and Richter, 1982).
The method used here is suitable for calculating sample sizes for studies that will be analysed by the log-rank test.
Information required
POWER: probability of detecting a real effect
ALPHA : probability of detecting a false effect (two sided: double this if you need one sided)
A : accrual time during which subjects are recruited to the study
F : additional follow-up time after the end of recruitment
* Input either (C and r) or (C and E), where r = E/C
C : median survival time for control group
E : median survival time for experimental group
r : hazard ratio or ratio of median survival times
M : number of controls per experimental subject
Practical issues
· Usual values for POWER are 80%, 85% and 90%; try several values in order to explore a range of sample sizes.
· 5% is the usual choice for ALPHA.
· C is usually estimated from previous studies.
· If possible, choose a range of hazard ratios that you want to have the statistical power to detect.
Technical validation
The estimated sample size per group n is calculated as:
- where a = alpha, b = 1 - power and Z(p) is the standard normal deviate for probability p. n is rounded up to the closest integer. (1+1/m)/p is substituted for 2/p in the first equation if the experimental and control group sizes are unequal.
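As an illustration only, the following Python sketch combines the widely used Schoenfeld log-rank events formula with a Simpson's rule approximation to the event probability under exponential survival and uniform accrual; it is not necessarily the exact Schoenfeld and Richter (1982) calculation used by StatsDirect, and the function names are illustrative:

# Illustrative sketch only; combines the widely used Schoenfeld log-rank events
# formula with Simpson's rule for the probability of an event under exponential
# survival and uniform accrual.  Not necessarily the exact Schoenfeld & Richter
# (1982) calculation implemented in StatsDirect.
import math
from scipy.stats import norm

def exp_surv(t, median):
    """Exponential survival probability at time t for a given median."""
    return math.exp(-math.log(2) * t / median)

def event_probability(median, accrual, followup):
    """Simpson's rule approximation to P(event) with uniform accrual."""
    return 1 - (exp_surv(followup, median)
                + 4 * exp_surv(followup + accrual / 2, median)
                + exp_surv(followup + accrual, median)) / 6

def logrank_sample_size(c_median, ratio, accrual, followup, m=1, power=0.9, alpha=0.05):
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    # total events required; ratio is E/C medians (only |log| matters here)
    events = ((1 + m) ** 2 / m) * (z_a + z_b) ** 2 / math.log(ratio) ** 2
    p_ctrl = event_probability(c_median, accrual, followup)
    p_exp = event_probability(c_median * ratio, accrual, followup)
    n_exp = events / (p_exp + m * p_ctrl)      # experimental-group size
    return math.ceil(n_exp)                    # controls = m * n_exp

print(logrank_sample_size(c_median=12, ratio=1.5, accrual=24, followup=12))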
Menu location: Analysis_Sample Size_Correlation.
This function gives you the minimum number of pairs of subjects needed to detect a true difference in Pearson's correlation coefficient between the null (usually 0) and alternative hypothesis levels with power POWER and two sided type I error probability ALPHA (Stuart and Ord, 1994; Draper and Smith, 1998).
Information required
POWER: probability of detecting a true effect
ALPHA : probability of detecting a false effect (two sided: double this if you need one sided)
R0 : correlation coefficient under the null hypothesis (often 0)
R1 : correlation coefficient under the alternative hypothesis
Practical issues
· Usual values for POWER are 80%, 85% and 90%; try several values in order to explore a range of sample sizes.
· 5% is the usual choice for ALPHA.
· Two sided in this context means R0 not equal to R1; one sided would be R1 either greater than or less than R0, which is rarely appropriate because you can seldom say that a difference in the unexpected direction would be of no interest at all.
· Statistical correlation can be misleading: remember to think beyond the numerical association between two variables, and do not infer causality too easily.
Technical validation
The sample size estimation uses Ronald Fisher's classic z-transformation to normalize the distribution of Karl Pearson's correlation coefficient:
This gives rise to the usual test for an observed correlation coefficient (r1) to be tested for its difference from a pre-defined reference value (r0, often 0), and from this the power and sample size (n) can be determined:
StatsDirect makes an initial estimate of n as:
StatsDirect then finds the value of n that satisfies the following power (1-b) equation:
The precise value of n is rounded up to the closest integer in the results given by StatsDirect.
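As an illustration of the initial estimate described above, the following Python sketch uses the classic Fisher z approximation n = ((z(1-a/2) + z(power)) / (atanh(R1) - atanh(R0)))^2 + 3; the function name is illustrative, and StatsDirect then refines this estimate by solving the power equation exactly:

# Illustrative sketch only; the classic Fisher z-transformation approximation,
# which the text above describes as the initial estimate before the exact
# power equation is solved.
import math
from scipy.stats import norm

def correlation_sample_size(r1, r0=0.0, power=0.9, alpha=0.05):
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    delta_z = math.atanh(r1) - math.atanh(r0)   # difference on Fisher's z scale
    return math.ceil(((z_a + z_b) / delta_z) ** 2 + 3)

print(correlation_sample_size(r1=0.4, r0=0.0))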