# Margin of error and confidence level

LO: Explain the connection between the sampling distribution of a statistic and the margin of error of a confidence interval. Since the width of the confidence interval is a function of its margin of error, for a given sample the interval widens or narrows as the confidence level changes.

And then the third one is the independence condition, and there are two ways to meet it. But let's say that we meet these conditions for inference. What do we do? Well, we set up a confidence level for the confidence interval that we're about to construct.

But from that confidence level, you can calculate a critical value. The way that you do that is to look it up in a z-table, and once again, all of this is review.

And now we're ready to calculate the confidence interval. It is going to be equal to our sample proportion plus or minus our critical value times the standard deviation of the sampling distribution of the sample proportion. Now, there is a way to calculate this exactly if we knew what p is: it would be the square root of p times one minus p, all over n. But if we knew what p is, then we wouldn't even have to do this business of constructing confidence intervals.

So instead, we estimate it. An estimate of the standard deviation of the sampling distribution, often known as the standard error, is obtained by using the sample proportion in place of the true population parameter: the square root of p-hat times one minus p-hat, all over n. Now, the whole reason I did this (it's covered in much more detail, and much more slowly, in other videos) is to see the parallels between this and the situation where we're constructing a two-sample confidence interval, or z-interval, for a difference between proportions.
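The one-sample interval above takes only a few lines of Python. A minimal sketch; the counts here are hypothetical, purely for illustration:

```python
import math

# Hypothetical sample: 56 successes out of 200 (illustrative numbers only).
n = 200
p_hat = 56 / 200

z_star = 1.96  # critical value for a 95% confidence level

# Standard error: estimate of the sampling distribution's standard deviation,
# using p_hat in place of the unknown true proportion p.
se = math.sqrt(p_hat * (1 - p_hat) / n)

margin_of_error = z_star * se
ci = (p_hat - margin_of_error, p_hat + margin_of_error)
```

Note that the standard error shrinks with the square root of n, so quadrupling the sample size only halves the interval's width.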

What am I talking about? Well, let's say that you have two different populations. So this is the first population, and it has some true proportion of the folks that, let's say, are left-handed; call that p one. And then there's another population.

So let's call that p two. Maybe the first population is freshmen in your high school or college and the second is sophomores: two different populations. And you want to see if there's a difference between the proportions that are left-handed, say.

And so what you could do, just like we've done here, is take a sample from each of these populations. We'll call this first sample size n one, and from that sample you calculate a sample proportion; let's call that p-hat one.

And then from this second population, we do the same thing. This is n two. Notice that n one and n two do not have to be the same sample size; that's a common misconception when doing these things. These can be different sample sizes. And then from that sample, you calculate its sample proportion, p-hat two.

Now after you do that, you would wanna check your conditions for inference. And it turns out that the conditions for inference would be exactly the same. Do both of these samples meet the random condition? Do both of these samples meet the normal condition? And do both of these samples meet the independence condition?
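Assuming the conditions are met, the two-sample z-interval combines the two standard errors, since the variances of the two independent sample proportions add. A minimal sketch with made-up freshman and sophomore counts:

```python
import math

# Hypothetical counts: 40 of 150 freshmen and 30 of 180 sophomores are
# left-handed. (Illustrative numbers only; note n1 and n2 need not be equal.)
n1, x1 = 150, 40
n2, x2 = 180, 30
p1_hat = x1 / n1
p2_hat = x2 / n2

z_star = 1.96  # 95% confidence

# Standard error of the difference: variances of the two proportions add.
se_diff = math.sqrt(p1_hat * (1 - p1_hat) / n1 + p2_hat * (1 - p2_hat) / n2)

diff = p1_hat - p2_hat
ci = (diff - z_star * se_diff, diff + z_star * se_diff)
```

If the resulting interval contains zero, the data are consistent with no difference between the two population proportions.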

And if both samples meet these conditions for inference, then we would have to calculate our critical value, and you would do it the exact same way. I'll just write it down again.

This translates to a proportion of 0. The sample size is computed from the margin-of-error formula. This is a situation where investigators might decide that a sample of this size is not feasible. Suppose that the investigators thought a sample of size 5, would be reasonable from a practical point of view. Recall that the confidence interval formula to estimate prevalence is based on the margin of error. Assuming that the prevalence of breast cancer in the sample will be close to that based on national data, we would expect the margin of error to be approximately equal to the following. The investigators must decide if this would be sufficiently precise to answer the research question.
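The standard sample-size formula for estimating a prevalence with margin of error E is n = z²·p(1−p)/E². A sketch; the planning prevalence and margin of error below are assumptions for illustration, not the study's actual inputs:

```python
import math

# Sample size for estimating a proportion (prevalence) with margin of error E
# at a given confidence level: n = z^2 * p * (1 - p) / E^2.
z = 1.96        # 95% confidence
p_plan = 0.10   # assumed planning value for the prevalence (hypothetical)
E = 0.02        # desired margin of error (2 percentage points)

n = (z ** 2) * p_plan * (1 - p_plan) / E ** 2
n_required = math.ceil(n)  # always round up to the next whole participant
```

Halving the desired margin of error quadruples the required sample size, which is often what makes a planned study infeasible.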

Note that the above is based on the assumption that the prevalence of breast cancer in Boston is similar to that reported nationally. This may or may not be a reasonable assumption. In fact, it is the objective of the current study to estimate the prevalence in Boston.

### Sample Sizes for Two Independent Samples, Continuous Outcome

In studies where the plan is to estimate the difference in means between two independent populations, the formula for determining the sample sizes required in each comparison group is given below. Recall from the module on confidence intervals that, when we generated a confidence interval estimate for the difference in means, we used Sp, the pooled estimate of the common standard deviation, as a measure of variability in the outcome based on pooling the data, where Sp is computed as follows. If data are available on the variability of the outcome in each comparison group, then Sp can be computed and used in the sample size formula.

However, it is more often the case that data on the variability of the outcome are available from only one group, often the untreated group. When planning a clinical trial to investigate a new drug or procedure, data are often available from other trials that involved a placebo or an active control group. The standard deviation of the outcome variable measured in patients assigned to the placebo, control, or unexposed group can be used to plan a future trial, as illustrated below.

Note that the formula for the sample size generates estimates for samples of equal size. If a study is planned where different numbers of patients will be assigned to, or will comprise, the comparison groups, then alternative formulas can be used. An investigator wants to plan a clinical trial to evaluate the efficacy of a new drug designed to increase HDL cholesterol (the "good" cholesterol).

The plan is to enroll participants and randomly assign them to receive either the new drug or a placebo. HDL cholesterol will be measured in each participant after 12 weeks on the assigned treatment. The investigator would like the margin of error to be no more than 3 units. How many patients should be recruited into the study? To plan this study, we can use data from the Framingham Heart Study: in participants who attended the seventh examination of the Offspring Study and were not on treatment for high cholesterol, the standard deviation of HDL cholesterol has been reported. We will use this value and the other inputs to compute the sample sizes. Again, these sample sizes refer to the numbers of participants with complete data.
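For a difference in means with a common standard deviation, the margin-of-error-based formula gives n per group = 2·(z·σ/E)². A sketch; the standard deviation below is a placeholder assumption, since the actual Framingham value is not reproduced in the text:

```python
import math

# Sample size per group for estimating a difference in means with margin of
# error E: n_per_group = 2 * (z * sigma / E)^2.
z = 1.96       # 95% confidence
sigma = 17.1   # assumed common standard deviation of HDL (hypothetical value)
E = 3.0        # desired margin of error, in HDL units

n_per_group = math.ceil(2 * (z * sigma / E) ** 2)
```

The factor of 2 appears because the variance of a difference of two independent sample means is the sum of the two per-group variances.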

In order to ensure that the required total sample size is available at 12 weeks, the investigator needs to recruit more participants to allow for attrition. An investigator wants to compare two diet programs in children who are obese.


One diet is a low fat diet, and the other is a low carbohydrate diet. The plan is to enroll children and weigh them at the start of the study. Each child will then be randomly assigned to either the low fat or the low carbohydrate diet. Each child will follow the assigned diet for 8 weeks, at which time they will again be weighed. The number of pounds lost will be computed for each child.

How many children should be recruited into the study? To plan this study, investigators use data from a published study in adults.


Suppose one such study compared the same diets in adults and involved participants in each diet group. The study reported a standard deviation in weight lost over 8 weeks on a low-fat diet of 8. These data can be used to estimate the common standard deviation in weight lost. We now use this value and the other inputs to compute the sample sizes. Again, these sample sizes refer to the numbers of children with complete data. In order to ensure that the required total sample size is available at 8 weeks, the investigator needs to recruit more participants to allow for attrition.
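The attrition adjustment described here is simply n divided by the expected completion fraction. A sketch, with an assumed 10% dropout rate (both inputs are illustrative):

```python
import math

# Adjusting recruitment for attrition: if n_complete participants must have
# complete data and a fraction `attrition` is expected to drop out, recruit
# n_complete / (1 - attrition), rounded up.
n_complete = 250     # hypothetical required sample size with complete data
attrition = 0.10     # assumed 10% loss to follow-up

n_recruit = math.ceil(n_complete / (1 - attrition))
```

Note the division by (1 − attrition) rather than multiplication by (1 + attrition); the latter slightly under-recruits.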

It is extremely important that the standard deviation of the difference scores is the one used in this computation.

### Sample Sizes for Two Independent Samples, Dichotomous Outcome

In studies where the plan is to estimate the difference in proportions between two independent populations, a similar formula applies.

In order to estimate the sample size, we need approximate values of p1 and p2. If there is no information available to approximate p1 and p2, then 0.5 can be used for each, since this generates the most conservative (largest) sample size. Similar to the situation for two independent samples and a continuous outcome at the top of this page, it may be the case that data are available on the proportion of successes in one group, usually the untreated group.

If so, the known proportion can be used for both p1 and p2 in the formula shown above. The formula shown above generates sample size estimates for samples of equal size. Interested readers can see Fleiss for more details. An investigator wants to estimate the impact of smoking during pregnancy on premature delivery. Normal pregnancies last approximately 40 weeks, and premature deliveries are those that occur before 37 weeks.
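For a difference in proportions, the margin-of-error-based sample size per group is n = z²·[p1(1−p1) + p2(1−p2)]/E². A sketch using the conservative planning value of 0.5 for both proportions (the margin of error here is also an illustrative assumption):

```python
import math

# Sample size per group for estimating a risk difference p1 - p2 with margin
# of error E: n = z^2 * (p1*(1-p1) + p2*(1-p2)) / E^2. With no prior
# information, planning values of 0.5 give the most conservative (largest) n,
# since p*(1-p) is maximized at p = 0.5.
z = 1.96
p1_plan, p2_plan = 0.5, 0.5   # conservative planning values
E = 0.05                      # desired margin of error

variance_term = p1_plan * (1 - p1_plan) + p2_plan * (1 - p2_plan)
n_per_group = math.ceil(z ** 2 * variance_term / E ** 2)
```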

The sample sizes are computed accordingly; we will use that estimate for both groups in the sample size computation. Is attrition an issue here?

### Issues in Estimating Sample Size for Hypothesis Testing

In the module on hypothesis testing for means and proportions, we introduced techniques for means, proportions, differences in means, and differences in proportions. While each test involved details that were specific to the outcome of interest, the same general approach applies.

For example, in each test of hypothesis there are two errors that can be committed. The first is called a Type I error and refers to the situation where we incorrectly reject H0 when in fact it is true. The second is called a Type II error and refers to the situation where we fail to reject H0 when in fact it is false. In hypothesis testing, we usually focus on power, which is defined as the probability that we reject H0 when it is false, i.e., power = 1 - β, where β is the probability of a Type II error.

Power is the probability that a test correctly rejects a false null hypothesis. A good test is one with a low probability of committing a Type I error and high power, that is, a low probability of committing a Type II error. Here we present formulas to determine the sample size required to ensure that a test has high power. The effect size is the difference in the parameter of interest that represents a clinically meaningful difference. Similar to the margin of error in confidence interval applications, the effect size is determined based on clinical or practical criteria and not statistical criteria.
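Power for a one-sample z test can be approximated (ignoring the negligible far tail of the two-sided rejection region) as Φ(ES·√n − z₁₋α/₂), where ES is the standardized effect size. A sketch with an assumed effect size and sample size:

```python
import math

def norm_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Approximate power of a two-sided one-sample z test:
# power ~ Phi(ES * sqrt(n) - z_{1-alpha/2}), ES = |mu1 - mu0| / sigma.
z_alpha = 1.96          # two-sided alpha = 0.05
effect_size = 0.5       # assumed standardized effect size (hypothetical)
n = 36                  # assumed sample size

power = norm_cdf(effect_size * math.sqrt(n) - z_alpha)
```

Increasing n or the effect size pushes the argument of Φ upward, which is the numerical counterpart of the "less overlap between the two distributions" picture discussed below.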

The concept of statistical power can be difficult to grasp. Before presenting the formulas to determine the sample sizes required to ensure high power in a test, we will first discuss power from a conceptual point of view. Suppose we draw a sample from the population of interest. We compute the sample mean and then must decide whether the sample mean provides evidence to support the alternative hypothesis or not.

This is done by computing a test statistic and comparing it to an appropriate critical value. However, it is also possible to select a sample whose mean is much larger or much smaller than the hypothesized mean. When we run tests of hypotheses, we usually standardize the data by computing a Z score. To facilitate interpretation, we will continue this discussion in terms of the sample mean as opposed to Z. The rejection region is shown in the tails of the figure below.

(Figure: rejection region, in the tails, for the test of H0.) This concept was discussed in the module on Hypothesis Testing.

Now, suppose that the alternative hypothesis, H1, is true. The figure below shows the distributions of the sample mean under the null and alternative hypotheses; the values of the sample mean are shown along the horizontal axis. (Figure: distribution of the sample mean under H0 and under H1, with the upper critical value marked.) The effect size is the difference in the parameter of interest, e.g., the difference between the mean under H0 and the mean under H1. A second figure shows the same components for the situation where the mean under the alternative hypothesis is farther from the null value. (Figure: distribution of the sample mean under H0 and under H1 with a larger separation.) Notice that there is much higher power when there is a larger difference between the mean under H0 as compared to H1.

A statistical test is much more likely to reject the null hypothesis in favor of the alternative if the true mean is 98 than if the true mean is closer to the null value. Notice also that in this case there is little overlap between the distributions under the null and alternative hypotheses: if a sample mean of 97 or higher is observed, it is very unlikely that it came from the null distribution. The inputs for the sample size formulas include the desired power, the level of significance, and the effect size.

### Confidence intervals for the difference between two proportions (video) | Khan Academy

The effect size is selected to represent a clinically meaningful or practically important difference in the parameter of interest, as we will illustrate. The formulas we present below produce the minimum sample size to ensure that the test of hypothesis will have a specified probability of rejecting the null hypothesis when it is false, i.e., a specified power.

In planning studies, investigators again must account for attrition or loss to follow-up. The formulas shown below produce the number of participants needed with complete data, and we will illustrate how attrition is addressed in planning studies.

### Sample Size for One Sample, Continuous Outcome

In studies where the plan is to perform a test of hypothesis comparing the mean of a continuous outcome variable in a single population to a known mean, the hypotheses of interest are H0: μ = μ0 versus H1: μ ≠ μ0. The formula for determining sample size to ensure that the test has a specified power is given below. Similar to the issue we faced when planning studies to estimate confidence intervals, it can sometimes be difficult to estimate the standard deviation.
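The standard one-sample formula is n = ((z₁₋α/₂ + z₁₋β)/ES)², where ES = |μ1 − μ0|/σ. A sketch with conventional α and power inputs and an assumed effect size:

```python
import math

# Sample size for a one-sample test of a mean with specified power:
# n = ((z_{1-alpha/2} + z_{1-beta}) / ES)^2, where ES = |mu1 - mu0| / sigma.
z_alpha = 1.96   # alpha = 0.05, two-sided
z_beta = 0.84    # 80% power (z value for 0.80)
ES = 0.5         # assumed standardized effect size (hypothetical)

n = math.ceil(((z_alpha + z_beta) / ES) ** 2)
```

Because ES appears squared in the denominator, halving the effect size quadruples the required sample size.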

In sample size computations, investigators often use a value for the standard deviation from a previous study or a study performed in a different but comparable population.


An investigator hypothesizes that in people free of diabetes, fasting blood glucose, a risk factor for coronary heart disease, is higher in those who drink at least 2 cups of coffee per day.

A cross-sectional study is planned to assess the mean fasting blood glucose levels in people who drink at least two cups of coffee per day. The mean fasting blood glucose level in people free of diabetes has been reported. The effect size represents the meaningful difference in the population mean, here 95 versus the reported mean, expressed in standardized units. In the planned study, participants will be asked to fast overnight and to provide a blood sample for analysis of glucose levels.

Therefore, a total of 35 participants will be enrolled in the study to ensure that 31 are available for analysis (see below).

### Sample Size for One Sample, Dichotomous Outcome

In studies where the plan is to perform a test of hypothesis comparing the proportion of successes in a dichotomous outcome variable in a single population to a known proportion, the hypotheses of interest are H0: p = p0 versus H1: p ≠ p0. The formula for determining the sample size to ensure that the test has a specified power is given below. The numerator of the effect size, the absolute value of the difference in proportions |p1 - p0|, again represents what is considered a clinically meaningful or practically important difference in proportions.
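For a one-sample dichotomous outcome, ES = |p1 − p0|/√(p0(1 − p0)) and n = ((z₁₋α/₂ + z₁₋β)/ES)². A sketch with hypothetical planning proportions (these are not the stent example's values, which are not given in the text):

```python
import math

# Sample size for a one-sample test of a proportion with specified power:
# ES = |p1 - p0| / sqrt(p0 * (1 - p0)), then n = ((z_alpha + z_beta) / ES)^2.
z_alpha = 1.96   # alpha = 0.05, two-sided
z_beta = 0.84    # 80% power
p0 = 0.10        # null (known) proportion, hypothetical
p1 = 0.05        # clinically meaningful alternative, hypothetical

ES = abs(p1 - p0) / math.sqrt(p0 * (1 - p0))
n = math.ceil(((z_alpha + z_beta) / ES) ** 2)
```

Note that the variability in the denominator of ES is computed under the null proportion p0, not the alternative.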

We first compute the effect size, then the sample size. A medical device manufacturer produces implantable stents. How many stents must be evaluated? Do the computation yourself before looking at the answer.

### Sample Sizes for Two Independent Samples, Continuous Outcome

In studies where the plan is to perform a test of hypothesis comparing the means of a continuous outcome variable in two independent populations, the hypotheses of interest are H0: μ1 = μ2 versus H1: μ1 ≠ μ2. The formula for determining the sample sizes to ensure that the test has a specified power involves ES, the effect size, defined as the difference in means divided by the common standard deviation. Recall from the module on Hypothesis Testing that, when we performed tests of hypothesis comparing the means of two independent groups, we used Sp, the pooled estimate of the common standard deviation, as a measure of variability in the outcome.

Sp is computed as follows:
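The standard pooled-SD formula is Sp = √(((n1 − 1)s1² + (n2 − 1)s2²)/(n1 + n2 − 2)), a variance average weighted by each group's degrees of freedom. A sketch with hypothetical group sizes and standard deviations:

```python
import math

# Pooled estimate of the common standard deviation from two groups:
# Sp = sqrt(((n1 - 1)*s1^2 + (n2 - 1)*s2^2) / (n1 + n2 - 2)).
# The group sizes and SDs below are illustrative assumptions.
n1, s1 = 50, 8.0
n2, s2 = 50, 10.0

sp = math.sqrt(((n1 - 1) * s1 ** 2 + (n2 - 1) * s2 ** 2) / (n1 + n2 - 2))
```

With equal group sizes, Sp reduces to the square root of the simple average of the two variances, which is what the test above computes.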