# Tips on the Relationship Between Type I and Type II Errors

### Introduction to Type I and Type II Errors (Khan Academy video)

Example: A large clinical trial is carried out to compare a new medical treatment with a standard one, and the statistical analysis must weigh the risks of both kinds of error. In some ways, the investigator's problem is similar to that faced by a judge reaching a verdict. Type I and Type II errors are dependent: for a fixed sample size, lowering the probability of a Type I error raises the probability of a Type II error, and vice versa.
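The trade-off above can be made concrete with a minimal sketch. Assume a one-sided z-test of H0: μ = 0 against H1: μ = effect_size with known σ = 1 (the effect size, sample size, and helper names here are illustrative choices, not from the source):

```python
from math import erf, sqrt

def norm_cdf(x):
    """Standard normal CDF, via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def norm_ppf(p, lo=-10.0, hi=10.0):
    """Inverse standard normal CDF, by bisection."""
    for _ in range(100):
        mid = (lo + hi) / 2.0
        if norm_cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def type_ii_error(alpha, effect_size, n):
    """Beta for a one-sided z-test of H0: mu = 0 vs H1: mu = effect_size, sigma = 1."""
    z_crit = norm_ppf(1.0 - alpha)               # reject H0 when Z > z_crit
    return norm_cdf(z_crit - effect_size * sqrt(n))

for alpha in (0.01, 0.05, 0.10):
    print(f"alpha={alpha:.2f}  beta={type_ii_error(alpha, 0.5, 25):.3f}")
```

Running this shows beta shrinking as alpha grows: a stricter significance level makes it harder to reject a false null hypothesis.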

The trial analogy illustrates this well: Which is better or worse, imprisoning an innocent person or letting a guilty person go free?

## Type I Error and Type II Error

Trying to avoid the issue by always choosing the same significance level is itself a value judgment. Sometimes different stakeholders have competing interests: for example, one party may bear the cost of a false positive while another bears the cost of a false negative. Similar considerations hold for setting confidence levels for confidence intervals.

Claiming that an alternative hypothesis has been "proved" because the null hypothesis has been rejected in a hypothesis test is an instance of the common mistake of expecting too much certainty.

There is always a possibility of a Type I error; the sample in the study might have been one of the small percentage of samples giving an unusually extreme test statistic. This is why replicating experiments is important: the more experiments that give the same result, the stronger the evidence. There is also the possibility that the sample is biased or the method of analysis was inappropriate; either of these could lead to a misleading result.
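A quick simulation illustrates this. Drawing many samples from a population where the null hypothesis really is true, roughly a fraction alpha of them will still produce a rejection by chance alone (the sample size, trial count, and seed below are arbitrary choices for the sketch):

```python
import random
from math import sqrt

random.seed(0)
z_crit = 1.6449            # one-sided critical value for alpha = 0.05
n, trials = 30, 20000
false_positives = 0
for _ in range(trials):
    sample = [random.gauss(0.0, 1.0) for _ in range(n)]   # H0 is true: mu = 0, sigma = 1
    z = (sum(sample) / n) * sqrt(n)                       # z = sample mean / (sigma / sqrt(n))
    if z > z_crit:
        false_positives += 1

rate = false_positives / trials
print(f"empirical Type I error rate: {rate:.3f}")         # should be near 0.05
```

No single rejection tells you whether you drew one of those unlucky samples, which is why replication matters.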

## Type I and type II errors

This could be more than just an analogy: Consider a situation where the verdict hinges on statistical evidence e. This is consistent with the system of justice in the USA, in which a defendant is assumed innocent until proven guilty beyond a reasonable doubt; proving the defendant guilty beyond a reasonable doubt is analogous to providing evidence that would be very unusual if the null hypothesis is true.

There are at least two reasons why this is important. First, the desired significance level is one criterion in deciding on an appropriate sample size.
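To see how the significance level feeds into sample-size planning, here is a sketch of the standard formula for a one-sided z-test with known σ = 1, n = ((z₁₋α + z₁₋β) / effect size)², where the chosen α, β, and effect size below are illustrative:

```python
from math import erf, sqrt, ceil

def norm_cdf(x):
    """Standard normal CDF, via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def norm_ppf(p, lo=-10.0, hi=10.0):
    """Inverse standard normal CDF, by bisection."""
    for _ in range(100):
        mid = (lo + hi) / 2.0
        if norm_cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def required_n(alpha, beta, effect_size):
    """Smallest n giving Type I rate alpha and Type II rate beta (one-sided z-test, sigma = 1)."""
    return ceil(((norm_ppf(1.0 - alpha) + norm_ppf(1.0 - beta)) / effect_size) ** 2)

print(required_n(0.05, 0.20, 0.5))   # alpha = 0.05, power = 0.80, effect size = 0.5
```

Tightening alpha (or beta) pushes the required sample size up, which is exactly why the significance level must be chosen before the study is sized.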

### Are the Probabilities of Type I and Type II Errors Negatively Correlated? (Cross Validated)

Second, if more than one hypothesis test is planned, additional considerations need to be taken into account. See Multiple Inference for more information. But we might be wrong in either of these scenarios and that's where these errors come into play.

Let's make a grid to make this clear. Let me put reality up here: there are two possible scenarios in reality, one is that the null hypothesis is true, and the other is that the null hypothesis is false. Then, based on our significance test, there are two things we might do: we might reject the null hypothesis, or we might fail to reject the null hypothesis.

And so let's put a little grid here to think about the different combinations, the different scenarios here.


So in a scenario where the null hypothesis is true, but we reject it, that feels like an error. We shouldn't reject something that is true, and that indeed is a Type I error. And you can even figure out the probability of a Type I error: it is your significance level, because the significance level is precisely the probability of rejecting the null hypothesis when it is in fact true.

Now, if your null hypothesis is true and you failed to reject it, well, that's good. We can mark this as a correct conclusion.

The good thing just happened to happen this time. Now, if your null hypothesis is false and you reject it, that's also good. That is the correct conclusion. But if your null hypothesis is false and you failed to reject it, well then that is a Type II error.
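The four cells of the grid just described can be sketched as a small function (the function and label names are illustrative, not from the source):

```python
def outcome(h0_true: bool, reject_h0: bool) -> str:
    """Classify one cell of the reality-vs-decision grid for a hypothesis test."""
    if h0_true and reject_h0:
        return "Type I error"        # rejected a true null hypothesis
    if not h0_true and not reject_h0:
        return "Type II error"       # failed to reject a false null hypothesis
    return "correct conclusion"      # the other two cells are both correct

for h0 in (True, False):
    for rej in (True, False):
        print(f"H0 true={h0}, reject H0={rej} -> {outcome(h0, rej)}")
```

Only the two mismatched cells are errors; agreeing with reality in either direction is a correct conclusion.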
