
Indian Journal of Community Medicine

On Validity of Assumptions while Determining Sample Size

Author(s): S. B. Sarmukaddam, S. G. Garad

Vol. 29, No. 2 (2004-04 - 2004-06)


The main assumptions while calculating sample size are that the sampling method used is simple random sampling, that the proportion or variability in the population is known, and that only one variable is of interest. Any or all of these assumptions could be wrong. The error in the estimate of the required sample size will increase in proportion to the departure of these assumptions from reality. The consequences of these assumptions, a few suggestions to overcome them at least partially, and a few other closely related important issues, such as confidence intervals versus tests of significance, statistical significance versus medical/clinical significance, and rapid assessment, are also discussed briefly in this communication. One should always remember that the size of the sample should be large enough that one does not fail to detect important findings, but a large sample will not necessarily help one to distinguish between the merely statistically significant and the medically/clinically important findings.


In our day-to-day practical life sampling is very commonly used, where our purpose is to determine the population characteristics by observing only a finite sub-set of individuals taken from it. If the population from which the sample is drawn is homogeneous with respect to the characteristic under study, a small sample drawn in any manner will do. But this is not always the case. Many times we have to deal with a population having variability. In most investigations pertaining to the medical and health field, where our interest usually centres on the assessment of the general magnitude and the study of variation with respect to one or more characteristics relating to individuals belonging to a population, the researcher has to face several technical problems, viz. determination of the sample size required for the investigation; the method of selecting the sample from the population; the appropriate test statistic and its estimation; generalisation of the results obtained from the sample to the population under study; etc.

A sample is considered a probability sample or random sample if drawn in such a manner that each unit in the population has a known probability of entering the sample. Nonprobability samples are ones in which the probability of being selected is unknown. Time and cost constraints lead researchers to use nonprobability samples. But what gives probability sampling an advantage over many other ways of choosing a part of the population (apart from advantages like greater speed, reduced cost, greater accuracy, greater scope, etc.) is that, when estimates of the population characteristics are made from the sample results, the precision of these estimates can also be gauged from the sample results themselves. In any case, a sample should be a fair representative of the target population.

If a sample is too small, it may be impossible to make sufficiently precise and confident generalisations about the situation in the parent population. On the other hand, it is wasteful to study a sample larger than required. A side effect of studying a large sample is that a difference, however small, will be statistically significant, and there may hence be a possibility of ascribing false importance to trivial differences. Therefore it is said that samples which are too small can prove nothing; samples which are too large can prove anything. Nevertheless, very large samples (as large as possible) are needed for estimation. However, for testing of hypotheses, the sample size should be just appropriate.

There are numerous methods to calculate the required sample size, but it is essential to consider the validity of the assumptions involved while using them. The purpose of the present study is to highlight the consequences of violation of, or departure from, the assumptions involved in the calculation of appropriate sample size in the most frequently occurring situations, and to encapsulate a few related points.


Now we will see a general procedure for determining sample size. Let the population parameter under estimation be denoted by M and its sample estimate by m. Let δ be the absolute difference between the two, i.e. δ = |M − m|. Suppose the investigator requires that this difference should not exceed a specified limit L in at least (1−α)100% of repeated samples. The quantity L is called the precision, (1−α) the confidence level, and α the chance of being wrong. If a Gaussian (i.e. normal) form of distribution is assumed valid, then L = 2*SE(m), where the coefficient 2 comes from the fact that for the standard Gaussian distribution the interval (−2, 2) covers nearly 95% of the probability. For other values of α, the table of probability of the Gaussian curve needs to be consulted to find a cut-off Z(1−α) such that the probability between −Z(1−α) and Z(1−α) is (1−α). The formula L = 2*SE(m) is basic for the calculation of sample size. SE(m) would invariably have n in the denominator, which can then be worked out when the other values are known. But a difficulty is that SE(m) would many times contain an unknown parameter, such as a population variability measure σ or population proportion π. This has to be substituted by its estimate, which may be taken from a previous study or estimated from a pilot study. Sometimes it even involves guesswork.
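As an illustration of this procedure, the short sketch below computes n for estimating a population mean when a guess of the population standard deviation σ is available, using L = 2*SE(m) = 2σ/√n. The values of σ and L are purely illustrative assumptions, not figures from the text:

```python
import math

def n_for_mean(sigma, L):
    """Sample size so that a mean is estimated within +/- L with
    ~95% confidence, from L = 2*SE = 2*sigma/sqrt(n)."""
    return math.ceil((2 * sigma / L) ** 2)

# Illustrative values: a guessed SD of 10 units, desired precision +/- 2 units
n = n_for_mean(10, 2)   # (2*10/2)^2 = 100
```

Note that the answer depends entirely on the guessed σ; if the pilot estimate of σ is wrong, n will be wrong in proportion, which is exactly the point made above.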

Table 1:

Situation   Assumed 'p'   Precision 'L'   Expected range for 'p'   Required 'n'
1           30%           10%             20%-40%                  84
2           40%           10%             30%-50%                  96
3           50%           10%             40%-60%                  100
4           60%           10%             50%-70%                  96
5           70%           10%             60%-80%                  84

Table 2:

Situation   Assumed 'p'   Precision 'L'       Expected range for 'p'   Required 'n'
6           50%           20% of p = 10%      40%-60%                  100
7           5%            20% of p = 1%       4%-6%                    1900
8           1%            20% of p = 0.2%     0.8%-1.2%                9900

Table 3:

Situation   Assumed 'p'   Precision 'L'   Expected range for 'p'   Required 'n'
9           30%           3%              27%-33%                  933
10          60%           3%              57%-63%                  1067
11          50%           3%              47%-53%                  1111
12          10%           2%              8%-12%                   900
13          30%           2%              28%-32%                  2100

n = 4*P*(1-P)/L²


For estimating a proportion, the above formula for calculating the required sample size reduces to n = 4*p*(1-p)/L², where L is the permissible margin of error on either side of the estimate and p is the expected or assumed proportion (or percentage) of a characteristic in the population. Thus, there is a 95% chance that a 95% C.I. calculated with our estimate will include the real population proportion 'P'. We will discuss the ill-effects of the assumption regarding the value of 'L' under two headings: (i) when L is an absolute value; (ii) when L is relative to the estimate.
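The formula n = 4*P*(1-P)/L² can be checked directly; the sketch below reproduces the entries of Table 1 (the assumed proportions and the fixed absolute precision of 10% are taken from the table):

```python
def required_n(p, L):
    """n = 4*p*(1-p)/L^2: sample size to estimate a proportion p
    within +/- L (absolute) with ~95% confidence."""
    return round(4 * p * (1 - p) / L ** 2)

# Reproducing Table 1: L fixed at 10% (absolute)
table1 = {p: required_n(p, 0.10) for p in (0.30, 0.40, 0.50, 0.60, 0.70)}
# table1 -> {0.3: 84, 0.4: 96, 0.5: 100, 0.6: 96, 0.7: 84}
```

The same function reproduces Tables 2 and 3 when L is taken relative to p (e.g. required_n(0.05, 0.01) gives 1900, situation 7).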

(i) It is often said that if the investigator does not have any idea about p, then one can take a value of p equal to 50% while calculating the required sample size, because it yields the largest value of 'n'. But this is true only for a given fixed level of precision 'L'. And for this, one has to choose an absolute value for 'L'. Fixing or choosing such an absolute level of precision may often be meaningless. For example, fixing L = 10% yields the largest 'n' for p = 50% (refer to Table 1), but if the real P is only 5%, then L = 10% is not at all useful. Therefore, 'L' should be relative to the estimate, e.g. L = 10% of 'p'.

(ii) Let us apply the above formula to get the required 'n' in the following situations (Table 2).

Note that when 'L' is considered relative to 'p', the sample size is the least for p = 50%. Situations 6 to 8 above imply that the required n is 19 times or 99 times bigger for 20% (of 'p') precision if the prevalence is 5% or 1% respectively, instead of the assumed 50%. Suppose that in reality the prevalence is 5%, but since you have no idea about it, you take p = 50% while calculating the required sample size, which yields n = 100 for 20% precision (i.e. your estimate should be within 20% of the reality). You may conduct an investigation of size n = 100. Reverse calculations show that now (since the real prevalence is only 5%) your precision in the estimate is not 20% but about 87%, i.e. your estimate may differ by 87%. It will be much more than 100% if the real prevalence is only 1%, but you still work with n = 100. The chances of observing cases will also diminish. For example, if the prevalence is 5 per 1,000, then the probability of observing at least one case in a sample of size 100 is only 0.39. For a prevalence of 1 in 1,000, it is only 0.095. It is not very uncommon to come across such low prevalences.
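The reverse calculation described above can be reproduced directly; the figures of about 87% and 0.39 follow from the same arithmetic:

```python
import math

n, real_p = 100, 0.05

# Achieved precision when the real prevalence is only 5%:
# absolute half-width L = 2*sqrt(p*(1-p)/n), then expressed relative to p
abs_L = 2 * math.sqrt(real_p * (1 - real_p) / n)
rel_L = abs_L / real_p            # ~0.87: the estimate may be off by ~87%

# Chance of observing at least one case when prevalence is 5 per 1,000
p_detect = 1 - (1 - 0.005) ** n   # ~0.39
```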

While sampling with respect to a rare item, i.e. when P is small (n*P should be at least 5; here P is the proportion)1 but not known in advance, Haldane's2 method of continuing sampling until m of the rare items have been found in the sample, called inverse sampling, is very useful. If n is the sample size at which the mth rare item appears (m > 1), an unbiased estimate of P is p = (m−1)/(n−1). For large N (population size), small P, and m ≥ 10, a good approximation to the variance of p may be shown to be equal to [m*p²*Q/(m−1)²], where Q = 1 − p.
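A small simulation may help illustrate inverse sampling. The sketch below repeatedly draws items until the m-th rare item appears and averages Haldane's estimator p = (m−1)/(n−1); the values of P and m are illustrative assumptions, not figures from the text:

```python
import random

def inverse_sample(P, m, rng):
    """Draw items until the m-th 'rare' item appears; return total draws n."""
    n = hits = 0
    while hits < m:
        n += 1
        if rng.random() < P:
            hits += 1
    return n

rng = random.Random(0)
P, m = 0.05, 10   # illustrative: 5% prevalence, stop at the 10th rare item
estimates = [(m - 1) / (inverse_sample(P, m, rng) - 1) for _ in range(3000)]
mean_est = sum(estimates) / len(estimates)   # close to the true P = 0.05
```

The average of the estimates stays close to the true P, in line with the claimed unbiasedness, whereas the naive estimator m/n would be biased upward for small m.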


Let us assume that we want to design a household survey to estimate the proportion of families with certain attributes. For one attribute the value of P is expected to lie between 30% and 60%. We want to estimate the exact P with a maximum error of 3%. For another attribute the value of P is expected to lie between 10% and 30%. We want to estimate the exact P for this attribute too, but with precision not exceeding 2%. Suppose both attributes can be observed in one survey and it is desired to estimate the P values for both attributes with the accuracies specified earlier. The question is: "How large an n is needed for the survey if SRS is to be used?"

Situations 9 and 10 in Table 3 indicate n=933 for p=30% and n=1,067 for p=60%, but n is maximum for p=50%, namely n=1,111 (situation 11). Therefore, for attribute one we need n=1,111. Remember that if 50% is included in the range, then take 50% as 'p', because that yields the maximum 'n'. For the other attribute these values are given in situations 12 and 13, which indicate that we need n=2,100. If 50% is not in the range, use as 'p' the value in the range nearest to 50%. For a common survey covering both attributes the maximum n is taken, as it covers the conditions for both. Therefore, for the combined survey we need n=2,100. Whenever the range of 'p' is wide, fix the precision relative to the smallest possible 'p' value in the range.
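The worst-case rule described above ("take the p in the range nearest to 50%") can be sketched as follows, reproducing the figures for the two attributes:

```python
def n_for_range(p_low, p_high, L):
    """Worst-case n over an expected range of p: pick the p in
    [p_low, p_high] nearest 50%, then apply n = 4*p*(1-p)/L^2."""
    if p_low <= 0.5 <= p_high:
        p = 0.5
    else:
        p = min((p_low, p_high), key=lambda x: abs(x - 0.5))
    return round(4 * p * (1 - p) / L ** 2)

n1 = n_for_range(0.30, 0.60, 0.03)   # 50% is in the range -> n = 1111
n2 = n_for_range(0.10, 0.30, 0.02)   # 30% is nearest 50%  -> n = 2100
n_survey = max(n1, n2)               # combined survey takes the larger n
```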


The above formula assumes simple random sampling, but this is unlikely to be the sampling method of choice in an actual field survey3. If another sampling method is used, a larger sample size is likely to be needed because of the "design effect", which is the ratio of the variance of the estimate obtained from a sample selected by the design under consideration to the variance of the estimate obtained from a simple random sample of the same number of units. For a cluster sampling strategy the design effect, in most situations, might be estimated as 2. This would mean that, to obtain the same precision, twice as many individuals would have to be studied as with the simple random sampling strategy. For stratified random sampling, in a few situations where the stratification is nearly perfect, the design effect might be only between 0.40 and 0.80. In practice, the use of naturally occurring strata seems to result in about a 20% reduction in variance compared to simple random sampling4. But it is impossible to make any universal statement for any design, because the value of the design effect depends not only on the sampling design but also on which variable (its variability in the population) is under study. Nevertheless, a substantial reduction in sampling error can very well be achieved by using an appropriate sampling design. A sampler should maximally utilise the known information about the population while planning a sample survey.
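Adjusting an SRS-based sample size for the design effect is a single multiplication, as sketched below. The design-effect values 2.0 and 0.8 are the rough figures quoted above for cluster and well-stratified designs, not universal constants:

```python
import math

def adjust_for_design(n_srs, deff):
    """Scale a simple-random-sampling sample size by the design effect
    (deff) of the sampling design actually used."""
    return math.ceil(n_srs * deff)

n_cluster = adjust_for_design(100, 2.0)   # cluster design: twice as many
n_strat = adjust_for_design(100, 0.8)     # near-perfect strata: fewer needed
```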

The choice of an appropriate design depends on the situation and the availability of information about the population from which the sample is to be selected. It can easily be shown that a sampling scheme which is more appropriate in a given situation yields a much smaller standard error, while a sampling scheme which is not appropriate yields a much larger one. The required sample size for a given situation may be calculated by any of the available methods3,5,6,7,8. The sample size should then be modified/adjusted taking into consideration the design effect, as all the available methods for calculating the required sample size assume simple random sampling.

In medical research, situations are frequently encountered where the comparison between two treatments is to be made on the basis of the proportion of patients showing a particular response or the proportion in a specified state. Fleiss et al9 give a formula for the general case of comparing two proportions where the two samples are not of equal size, and their formula is suitable for the inverse problem of estimating power from known sample sizes. Various approximate methods for determining the sample size required for the comparison of two proportions are available9,10,11. An arithmetical study of these methods, in comparison with an exact method, done by Radhakrishna12, indicates that for the situation of equal sample sizes in the two groups, the arc sine method with a continuity correction offers the closest approximation.

Sokal and Rohlf13 minimize the chances of underestimating the sample size required to detect a difference of |p1 - p2| at given levels of significance and power. The formula in Rosner14 yields sample size estimates that are generally about 5% smaller than those based on the Sokal and Rohlf formula. Other formulas, commonly given and used, underestimate the sample size by roughly 50%15. Errors of diagnosis, i.e. misclassification of cases as noncases or noncases as cases, are known to occur in field trials of vaccines on account of the imperfect sensitivity and specificity of the diagnostic test. Vaccine trials are usually based on a large number of subjects distributed over a relatively wide area, and many physicians may be involved in the diagnosis of new cases in the study population. Under these conditions, an accurate and clear-cut definition of disease and standardised test procedures are essential. The consequences of the two types of misdiagnosis on the interpretation of the findings of field trials are highlighted by Radhakrishna16.

Apart from the technical considerations, the decision on sample size depends on the resources available for the survey. It is well known that the larger the sample size, the better from the point of view of achieving higher precision of the estimate(s) as well as enabling more elaborate analysis of the data. However, the larger the size, the greater the requirement for resources and the strain on effective management and supervision of the entire survey work, which can be quite crucial. Moreover, it may lead to larger non-sampling error.

While planning studies, the most common and often the first question asked of the consulting statistician seems to be "how large a sample do I need?". The problem of choosing the number of observations is in some ways one of the most difficult in applied statistics. It takes technical knowledge and experience to approach the problem properly. Unfortunately, to laymen the problem of selecting a sample size may appear to be an easy one, one that statisticians deal with routinely many times each day. The problem is not routine. In fact, sample size estimation is one of the last questions to be answered, not the first. And it is all the more important to choose an appropriate method of selecting the sample (i.e. sampling design) from the population. Any study has two important aspects: generalisability and validity. Sample size and sample selection are vital for generalisability, which is no doubt important. But the study should be valid in the first place. The methods used in the study are responsible for the validity of the study and hence of the results. Therefore, it is not always the sample size which is important; the methodology used should be sound and followed rigorously.

One argument often heard is that samples of 1,000 to 5,000 people are too small to be used to estimate characteristics of an entire country, say India's population of over 944 million people. Actually such samples are often statistically quite acceptable. The reason is that once N is so large that the sampling fraction (n/N) becomes negligible in the formula for the variance of the mean, only the absolute value of n is important. From this point of view, it does not matter much whether the sampled population is a city of 100,000 or the whole nation of 944,580,000, because the variance of a sample mean of size n = 1,000 is essentially (σ²/1,000) in both cases. Sometimes, in larger populations there is a tendency for σ² to be larger, even for the same measurement. This implies a larger variance in a larger target population. However, in practice this increase is rarely anywhere near in proportion to the increase in population size, and may not exist at all. Usually, small samples can be used quite effectively to yield good estimates for large populations.
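The point about the sampling fraction can be verified numerically. The sketch below uses the variance of a sample mean with the finite population correction, var = (σ²/n)(1 − n/N), with an illustrative σ² = 1 and the two population sizes from the text:

```python
# Variance of a sample mean with the finite population correction (fpc):
# var = (sigma^2 / n) * (1 - n/N).  sigma^2 is an illustrative assumption.
sigma2, n = 1.0, 1000

var_city = (sigma2 / n) * (1 - n / 100_000)        # N = 100,000
var_nation = (sigma2 / n) * (1 - n / 944_580_000)  # N = 944,580,000
# Both are essentially sigma^2/1000; the fpc changes the answer by under 1%.
```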

It is important to note that the determination of sample size requires the specification of the estimator and the sampling design. Consequently, it is first necessary to consider which design and estimator have the best chance of being efficient. There are many good textbooks on sampling to which investigators from the health field should specifically refer17,18. Effective statistical help to biological and medical research demands thorough involvement of the statistician. In any investigation, a statistician who is to be effective needs to be involved at all stages. Unless he can interact thoroughly with those who bring other expertise to the study, his own skills will not be used to the best advantage. Dr. Finney19, in an excellent paper, presents and comments on 22 questions which need to be answered in the course of planning a comparative experiment.

A few other very important and relevant points, such as confidence intervals compared with tests of significance, clinical/medical significance compared with statistical significance, and rapid assessment methods, all closely related to sample size, are discussed below.

I. Confidence interval vs. Test of significance

If we were to repeat the random sampling process an infinite number of times, calculate a 95% confidence interval from each of those samples and keep track of all of those intervals, and if we were somehow able to learn the true mean of the parent population, we would find that the true population mean is included in 95% of the intervals. This is the most rigorous mathematical explanation of the confidence interval. A somewhat simpler explanation is that there is a 95% chance that the true mean will be found in the confidence interval calculated from the sample mean. No investigator plans to repeat his sampling process an infinite number of times; he draws only one sample. The "confidence interval", however, tells him only what might happen if he drew many other samples of the same size (from the same population).
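The repeated-sampling interpretation above can be demonstrated by simulation. In the sketch below (with illustrative values for the mean, σ and n, and with σ assumed known so that the interval is mean ± 1.96σ/√n), roughly 95% of the computed intervals cover the true mean:

```python
import math
import random

rng = random.Random(42)
true_mean, sigma, n = 50.0, 10.0, 30
half_width = 1.96 * sigma / math.sqrt(n)   # 95% CI half-width, sigma known

covered = 0
for _ in range(2000):
    xbar = sum(rng.gauss(true_mean, sigma) for _ in range(n)) / n
    if xbar - half_width <= true_mean <= xbar + half_width:
        covered += 1

coverage = covered / 2000   # close to the nominal 0.95
```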

Consider a situation in which a surgeon has done 10 operations of a particular type with complete success, without a single complication. Thus the complication rate is p = 0 in this sample. Can this be taken to conclude that the complication rate would continue to be zero for all such operations in future? Or is this just good luck in the 10 patients who happened to be operated on during that period? In statistical language, can p = 0 be used as an estimate of the population proportion (π)? The answer obviously is no. The true complication rate can be estimated only by obtaining a CI for it. The complication rate could be as high as 27%1.
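One way to obtain such a bound, sketched below, is the exact one-sided upper confidence limit for a proportion when zero events are observed in n trials, which has the closed form 1 − α^(1/n). At the 95% level for 0 events in 10 operations it gives about 26%, of the same order as the 27% quoted above; the precise figure depends on the method and confidence level used, so treat this as an illustrative calculation rather than the article's exact computation:

```python
def upper_bound_zero_events(n, alpha=0.05):
    """Exact one-sided (1 - alpha) upper confidence bound for a proportion
    when 0 events are observed in n trials: 1 - alpha**(1/n)."""
    return 1 - alpha ** (1 / n)

u = upper_bound_zero_events(10)   # ~0.26: the true rate could still be ~26%
```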

Though the P-value is the final common pathway for nearly all statistical tests, it conveys no information about the extent to which two groups differ or two variables/characteristics are associated. Highly significant P-values can accompany negligible differences if the study is large, and unimpressive nonsignificant P-values can accompany strong associations if the study is small. P-values, therefore, are not good measures of the strength of the relation between study variables.

Confidence intervals focus one's attention on the magnitude of an estimate of a meaningful parameter (e.g. the odds ratio) and, as a separate matter, on the precision of that estimate. That is, they convey information about both the strength of an association and the precision with which it is estimated. Significance tests, on the other hand, blend together the magnitude of the estimate and the hypothetical role that random error may have had in producing it, i.e. they do not distinguish between these two different concepts.

The confidence interval provides a range of possibilities for the population value rather than an arbitrary dichotomy based on statistical significance or the level of significance. The population value of the parameter (e.g. the population odds ratio or the difference between population means) is much more likely to be near the middle of the confidence interval than towards the extremes. A confidence interval can in fact be viewed as a summary of the results of many statistical tests and is therefore clearly more informative than the result of a single test that considers only the null value.

However, there are situations in which reporting a point estimate and a corresponding P-value may be more informative than a confidence interval of some specified confidence level. Fleiss20 has given a few of the valid applications of tests of significance in epidemiological research. They are:

  1. Refutation of an earlier finding.
  2. Identification of confounders.
  3. Subgroup analyses and interactions.
  4. Nonparametric survival analysis.
  5. Multiple comparisons.

The actual P-value is helpful in addition to the confidence interval, and preferably both should be presented. If one has to be excluded, then the choice depends on the situation and/or purpose; for example, confidence interval construction may be more useful while investigating relationships or in association studies where we deal with measures of risk or risk factor identification/confirmation. For further details refer to Rothman21 or Gardner and Altman22.

In summary, CIs give a range of values within which the parameter (e.g. p, RR or OR) is likely to fall. When reporting CIs, the clinical investigator does not use a p value; however, the parameter estimate, the level of confidence, and the standard deviation of the estimate (i.e. the standard error) are reported. Conversely, when a hypothesis has been tested, the investigator should report the p value, the parameter estimate, and the standard deviation of the estimate (i.e. the standard error). Sample size is as important for the estimation of CIs as it is for testing hypotheses. In general, if H0 is rejected, the corresponding CI does not contain the parameter under H0. For completeness, it is good practice for clinical investigators to provide enough information that both CIs and p values are obvious to anyone reading the report.
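As an illustration of reporting both, the sketch below computes a relative risk, its 95% confidence interval (via the usual log-RR normal approximation), and the corresponding two-sided P-value. The 2x2 counts are hypothetical, chosen only for the example:

```python
import math
from statistics import NormalDist

# Hypothetical trial counts: 30/100 events in one group, 15/100 in the other
a, n1, b, n2 = 30, 100, 15, 100

rr = (a / n1) / (b / n2)                        # relative risk = 2.0
se_log = math.sqrt(1/a - 1/n1 + 1/b - 1/n2)     # SE of log(RR)
z = math.log(rr) / se_log

lo = math.exp(math.log(rr) - 1.96 * se_log)     # 95% CI lower limit (~1.15)
hi = math.exp(math.log(rr) + 1.96 * se_log)     # 95% CI upper limit (~3.48)
p_value = 2 * (1 - NormalDist().cdf(abs(z)))    # two-sided P (~0.014)
```

Reporting "RR 2.0, 95% CI 1.15 to 3.48, P = 0.014" conveys both the strength of the association and its precision, which is exactly the practice recommended above.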

II. Statistical significance vs. Clinical significance

Nearly all information in medicine is empirical in nature and is gathered from samples of subjects studied from time to time. Besides all other sources of uncertainty, the samples themselves tend to differ from one another. For instance, there is no reason that the 10-year survival rates of cases of leukaemia in two groups of 100 each (the first group born on odd days of any month and the second on even days) should differ, but there is a high likelihood that they will. This happens because of sampling error or sampling fluctuation, which depends on two things: (i) the sample size n, and (ii) the intrinsic interindividual variability of the subjects. The former is fully under the control of the investigator. The latter is not under human control, yet its influence on medical decisions can be minimised by choosing an appropriate design and by using appropriate methods of sampling. The sources of uncertainty other than those intrinsic to the subjects, such as observer variation and measurement errors, are minimised by adopting suitable tools for data collection.

Sample size n plays a dominant role in statistical inference. The SE can be substantially reduced by increasing n. This helps to increase the reliability of the results. A narrow CI is then obtained that can really help in drawing a focused conclusion. At the same time, a side effect of a large n is that a very small difference can become statistically significant. This may or may not be clinically/medically significant. Confidence interval construction in various situations and for various parameters, their interpretation, the logic of tests of hypothesis/significance, contrasting medical/clinical significance with statistical significance, etc. are very nicely discussed in detail, with several medical examples, by Dr. Indrayan in his excellent forthcoming book1.
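The side effect of a large n can be seen directly. In the sketch below the same trivial difference (51% vs. 50%, an illustrative example) is nowhere near significant with 100 subjects per group but highly "significant" with 100,000:

```python
import math

def z_two_proportions(p1, p2, n_per_group):
    """Normal-approximation z statistic for the difference of two
    proportions with equal group sizes (pooled variance)."""
    p_bar = (p1 + p2) / 2
    se = math.sqrt(2 * p_bar * (1 - p_bar) / n_per_group)
    return abs(p1 - p2) / se

# The same trivial difference at two sample sizes:
z_small = z_two_proportions(0.51, 0.50, 100)      # ~0.14: not significant
z_large = z_two_proportions(0.51, 0.50, 100_000)  # ~4.5: P < 0.001
```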

Clinical significance goes beyond arithmetic and is determined by clinical judgement.

Nevertheless, measures such as the number needed to treat could help sort out whether the benefits of a particular treatment are big enough. The results of most clinical trials are presented as relative risk reductions or odds ratios, but these ignore the role of the event rate in the overall clinical benefit. Therefore, in clinical trials a better quantitation of overall clinical benefit is provided by presenting results as the number needed to treat. The number needed to treat (NNT), which can be used either for summarizing the results of a therapeutic trial or for medical decision-making about an individual patient23, is defined as the number of people that need to be treated for a given duration to prevent one death or one adverse event. Altman24 gives confidence intervals for NNTs. An NNT equal to 1 indicates that all treated patients will benefit. Less effective treatments have higher values. A positive number indicates that the treatment benefits the patient, and a negative number that the patient is harmed by the treatment. NNT expresses efficacy in a manner that incorporates both the baseline risk without therapy and the risk reduction with therapy. It is more useful than the absolute risk reduction because it tells clinicians and patients how much effort they must expend to prevent one event. It allows comparisons with the amounts of effort that must be expended to prevent the same or other events in patients with other disorders. It can incorporate the harm as well as the benefit of therapy. Christopher25 has extended the number needed to treat concept to compare strategies for disease screening. He developed a new statistic termed the number needed to screen, defined as the number of people that need to be screened to prevent one death or one adverse event, which could form the basis of a strategy for disease screening.
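The NNT arithmetic is simply the reciprocal of the absolute risk reduction, as sketched below with hypothetical event rates:

```python
def nnt(control_event_rate, treated_event_rate):
    """Number needed to treat = 1 / absolute risk reduction."""
    arr = control_event_rate - treated_event_rate
    return 1 / arr

# Hypothetical rates: 10% events without therapy, 5% with therapy
value = nnt(0.10, 0.05)   # treat ~20 patients to prevent one event
```

Note how the baseline rate matters: the same 50% relative risk reduction applied to rates of 1% vs. 0.5% would give an NNT of 200, a very different clinical proposition.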

Looking for clinical significance even when the results are statistically significant is very important. There are also situations where a result could be clinically important but is not statistically significant. Considering these two possibilities leads to two very useful yardsticks for interpreting an article on a clinical trial: (i) if the difference is statistically significant, is it clinically significant as well? and (ii) if the difference is not statistically significant, was the trial big enough to show a clinically important difference if it had occurred?

It is possible to determine ahead of time how big the study should be. But most trials that reach negative conclusions either could not or would not put enough patients in their trials to detect clinically significant differences. That is, the errors of such trials are very large and their power is very low. Indeed, when Freiman et al26 reviewed a long list of trials that had reached "negative" conclusions, they found that most of them had too few patients to show risk reductions of 25% or even 50%. Sackett et al27 give tables to find out whether the sample size was adequate to detect a 25% or 50% risk reduction. A distinction also must be made between 'not significant' and 'insignificant'. Statistical tests are for the former and not the latter. A statistically 'not significant' difference is not necessarily 'insignificant'.
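Determining ahead of time how big a study should be can be sketched with the usual normal-approximation formula for comparing two proportions (one of the several approximate methods cited above, without a continuity correction; the 40% baseline rate is an illustrative assumption):

```python
import math
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Normal-approximation sample size per group for detecting a
    difference between two proportions (equal group sizes, no
    continuity correction): ((z_a + z_b)^2 * (p1*q1 + p2*q2)) / (p1-p2)^2."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    num = (z_a + z_b) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2))
    return math.ceil(num / (p1 - p2) ** 2)

# Detecting a 25% risk reduction from a 40% baseline (40% vs. 30% events)
n = n_per_group(0.40, 0.30)   # ~354 patients per group
```

Trials with far fewer patients than this per group simply cannot distinguish such a risk reduction from chance, which is the point Freiman et al make.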

III. Rapid assessment

Literature on the methodological aspects of rapid assessment is scarce and widely scattered. However, there are two notable exceptions28,29. In these there are a number of articles concerned with sampling methods that reduce the time and resources required to collect and analyse data from individuals. They deal in particular with extending the survey sampling technique used by the Expanded Programme on Immunisation to other areas, with applying lot quality assurance sampling (LQAS), and with the application of case-control methodology to rapid health assessment. Because it requires small sample sizes, LQAS would be a worthwhile cost-effective method for rapid assessment in a number of situations. One such possible application is AIDS28. Many of the methods proposed for rapid assessment are not yet on a firm scientific footing, and need to be further developed and evaluated.


  1. Indrayan A. and Sarmukaddam S.B. Medical Biostatistics (In press, Marcel Dekker Inc., New York).
  2. Haldane J.B.S. On a method of estimating frequencies. Biometrika 1945; 33:222.
  3. Lwanga S.K. and Lemeshow S. Sample Size Determination in Health Studies: A Practical Manual (World Health Organisation, Geneva) 1991.
  4. Williams B. A Sampler on Sampling (John Wiley & Sons, New York) 1977.
  5. Fleiss J.L. Statistical Methods for Rates and Proportions (John Wiley & Sons, New York) 1973.
  6. Sarmukaddam S.B. and Kharshikar A.V. Sample size for pre-measure and post-measure panel type prevalence study. In: Visveswara Rao et al. (Eds.) Statistics in Health and Nutrition pp 402-406 (National Institute of Nutrition, Hyderabad) 1990.
  7. Cohen J. Statistical Power Analysis for the Behavioural Sciences (Academic Press, New York) 1977.
  8. Lemeshow S., Hosmer D.W., Klar J. and Lwanga S.K. Adequacy of Sample Size in Health Studies (Wiley, Chichester) 1990.
  9. Fleiss J.L., Tytun A. and Ury H.K. A simple approximation for calculating sample sizes for comparing independent proportions. Biometrics 1980; 36:343.
  10. Jeyaseelan L. and Rao P.S.S. Methods of determining sample sizes in clinical trials. Indian Pediatrics 1989; 26:115.
  11. Feigl P. A graphical aid for determining sample size when comparing two independent proportions. Biometrics 1978; 34:111.
  12. Radhakrishna S. Computation of sample size for comparing two proportions. Indian J Med Res 1983; 77:915.
  13. Sokal R.R. and Rohlf F.J. Biometry: The Principles and Practice of Statistics in Biological Research, 3rd ed. (W.H. Freeman, New York) 1995.
  14. Rosner B. Fundamentals of Biostatistics, 2nd ed. (Duxbury Press, Belmont) 1995.
  15. Tolley E.A. Biostatistics for hospital epidemiology and infection control. In: Mayhall C.G. (Ed.) Hospital Epidemiology and Infection Control pp 49-80 (Lippincott Williams & Wilkins, Philadelphia) 1999.
  16. Radhakrishna S., Nair N.G.K. and Jayabal P. Implications of misdiagnosis in field trials of vaccines. Indian J Med Res 1984; 80:711.
  17. World Health Organisation. Sampling methods in morbidity surveys and public health investigations. Tenth report of the WHO expert committee on health statistics. WHO Technical Report Series No. 336, 1966.
  18. World Health Organisation. Health surveys. World Health Statistics Quarterly 1985; 38.
  19. Finney D.J. The questioning statistician. Statistics in Medicine 1982; 1:5.
  20. Fleiss J.L. Significance tests have a role in epidemiological research. American J of Public Health 1986; 76:559.
  21. Rothman K.J. A show of confidence. N Engl J Med 1978; 299:1362.
  22. Gardner M.J. and Altman D.G. Confidence intervals rather than P-values: estimation rather than hypothesis testing. BMJ 1986; 292:746.
  23. Chatellier G., Zapletal E., Lemaitre D., Menard J. and Degoulet P. The number needed to treat: a clinically useful nomogram in its proper context. BMJ 1996; 312:426.
  24. Altman D.G. Confidence intervals for the number needed to treat. BMJ 1998; 317:507.
  25. Christopher R. Number needed to screen: development of a statistic for disease screening. BMJ 1998; 317:307.
  26. Freiman J.E., Chalmers T.C., Smith H. and Kuebler R.R. The importance of beta, the type II error and sample size in the design and interpretation of the randomised control trial: survey of 71 "negative" trials. N Engl J Med 1978; 299:690.
  27. Sackett D.L., Haynes R.B., Guyatt G.H. and Tugwell P. Clinical Epidemiology: A Basic Science for Clinical Medicine (Little, Brown and Company, Boston) 1991.
  28. Rapid epidemiologic assessment: the evolution of a new discipline. Int J Epidemiol 1989; 18 (Suppl. 2).
  29. Epidemiological and statistical methods for rapid health assessment: introduction. World Health Statistics Quarterly 1991; 44.

Dr. Sanjeev Sarmukaddam, 25, Sangeet Sadhana, Krishna Colony 11th lane, Paramhans Nagar, Paud Road, Pune - 411 038
