When is the standard error significant?

Another use of the standard error is in constructing a confidence interval about the mean. Consider, for example, a researcher studying bedsores in a population of patients who have had open-heart surgery that lasted more than 4 hours. Suppose the mean number of bedsores was 10 and the standard error of the mean was 5, so that an approximate 95% confidence interval (the mean plus or minus two standard errors) runs from 0 to 20.

This is interpreted as follows: The population mean is somewhere between zero bedsores and 20 bedsores. Given that the population mean may be zero, the researcher might conclude that the 10 patients who developed bedsores are outliers.
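The arithmetic behind that interval can be sketched in a few lines; the mean of 10 and standard error of 5 are assumed, illustrative values consistent with the 0-to-20 interval described above:

```python
# Sketch of the bedsore example. The mean and SE are assumed,
# illustrative values, not figures from a real study.
mean = 10.0   # sample mean number of bedsores (assumed)
se = 5.0      # standard error of the mean (assumed)

# An approximate 95% confidence interval is mean +/- 2 * SE.
lower = mean - 2 * se
upper = mean + 2 * se
print(f"95% CI: ({lower:.1f}, {upper:.1f})")  # 95% CI: (0.0, 20.0)
```

Because the lower bound touches zero, the interval is consistent with a population mean of zero bedsores, which is what drives the interpretation that follows.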

That in turn should lead the researcher to question whether the bedsores developed as a function of some other condition rather than as a function of having heart surgery that lasted longer than 4 hours.

The standard error of the estimate, S_est, measures the accuracy of predictions made from a regression line. Specifically, it is calculated using the following formula:

S_est = sqrt( Σ(Y − Y′)² / N )

where Y is an observed score, Y′ is the corresponding predicted score, and N is the number of pairs of scores. Therefore, the standard error of the estimate is a measure of the dispersion, or variability, of the predicted scores in a regression.
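The standard error of the estimate can be sketched directly from observed and predicted scores. The data values below are made up purely for illustration, and note that some texts divide by N − 2 (the residual degrees of freedom) rather than N:

```python
import math

# Standard error of the estimate: sqrt(sum((Y - Y')^2) / N).
# Observed and predicted values below are invented for illustration.
def s_est(y, y_pred):
    n = len(y)
    sse = sum((yi - yp) ** 2 for yi, yp in zip(y, y_pred))  # sum of squared errors
    return math.sqrt(sse / n)

y      = [2.0, 4.0, 5.0, 4.0, 5.0]   # observed scores (assumed)
y_pred = [2.8, 3.4, 4.0, 4.6, 5.2]   # predicted scores (assumed)
print(round(s_est(y, y_pred), 3))    # 0.693
```

The smaller this value, the closer the predicted scores sit to the observed scores.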

In a scatterplot, the S_est shows how closely the data points cluster around the regression line. When the S_est is low, the points lie close to the line and predictions are accurate (Figure 1 illustrated a low S_est); when the S_est is large, the points are widely scattered around the line and predictions are poor (Figure 2 illustrated a large S_est). Every inferential statistic has an associated standard error. Although not always reported, the standard error is an important statistic because it provides information on the accuracy of the statistic.

As discussed previously, the larger the standard error, the wider the confidence interval about the statistic. In fact, the confidence interval can be so large that it is as large as the full range of values, or even larger.

In that case, the statistic provides no information about the location of the population parameter. And that means that the statistic has little accuracy because it is not a good estimate of the population parameter. In this way, the standard error of a statistic is related to the significance level of the finding. When the standard error is large relative to the statistic, the statistic will typically be non-significant.

However, if the sample size is very large (for example, greater than 1,000), then virtually any statistic calculated on that sample will be statistically significant. For example, a very small correlation will reach statistical significance, even though a correlation that small is not clinically or scientifically meaningful. When effect sizes measured as correlation statistics are relatively small but statistically significant, the standard error is a valuable tool for determining whether that significance reflects good prediction or is merely a result of statistical power so large that any statistic would be significant.
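A short sketch of this point: with an assumed sample size of 1,000, even a correlation of 0.1 comes out statistically significant under the usual t-test for a correlation. A normal approximation to the two-tailed p-value is used here, which is adequate at sample sizes this large:

```python
import math

# Two-tailed p-value for a Pearson correlation, via the t-statistic
# t = r * sqrt((n - 2) / (1 - r^2)) and a normal approximation.
# The r and n values are assumed, illustrative inputs.
def corr_p_value(r, n):
    t = r * math.sqrt((n - 2) / (1 - r ** 2))
    return math.erfc(abs(t) / math.sqrt(2))  # two-tailed p, normal approx

p = corr_p_value(0.1, 1000)
print(p < 0.05)   # True: the tiny correlation is "significant"...
print(0.1 ** 2)   # ...but it explains only about 1% of the variance
```

The same r = 0.1 would not be significant at n = 30, which is exactly the power effect described above.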

The answer to the question about the importance of the result is found by using the standard error to calculate the confidence interval about the statistic. If that interval is very wide, the result is unimportant even though it is statistically significant: the range of values within which the population parameter falls is so large that the researcher has little more idea about where the population parameter actually lies than he or she had before conducting the research.

When the statistic calculated involves two or more variables, as in regression or the t-test, there is another statistic that may be used to determine the importance of the finding: the effect size of the association tested by the statistic. Consider, for example, a regression.

Suppose the sample size is 1,000 and the regression is significant at a very small P-level. The obtained P-level is very significant, but one is left with the question of how accurate predictions based on the regression are. The effect size provides the answer to that question. In a regression, the effect size statistic is the Pearson product-moment correlation coefficient (the full and correct name for the Pearson r correlation), often noted simply as R.

If the Pearson R value is low, predictions based on the regression will be inaccurate even when the result is statistically significant. An even more valuable effect-size statistic is R-squared, calculated by squaring the Pearson R: it is a direct measure of the overlap, or shared variance, between the independent and dependent variables.

When a coefficient is large relative to its standard error, we "reject the null hypothesis." I am playing a little fast and loose with the numbers here; there is, of course, a correction for the degrees of freedom and a distinction between one- and two-tailed tests of significance.
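The effect-size point above, that squaring the Pearson R gives the proportion of variance in the dependent variable accounted for by the independent variable, can be sketched as follows; the R values are illustrative:

```python
# Squaring the Pearson R gives the proportion of shared variance.
# The R values below are assumed, illustrative examples.
for r in (0.1, 0.3, 0.7):
    print(f"R = {r}: R^2 = {r * r:.2f} -> {r * r:.0%} of variance explained")
```

Even a "significant" R of 0.1 explains only about 1% of the variance, which is why R-squared, not the P-level, answers the question of practical importance.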

With a good number of degrees of freedom (around 70, if I recall), the coefficient will be significant on a two-tailed test if it is at least twice as large as the standard error. So a coefficient at least twice as large as its standard error is a good rule of thumb, assuming you have decent degrees of freedom and a two-tailed test of significance.

A ratio of less than 2 might be statistically significant if you're using a one-tailed test; more than 2 might be required if you have few degrees of freedom and are using a two-tailed test.

Confidence intervals and significance testing rely on essentially the same logic and it all comes back to standard deviations. If you can divide the coefficient by its standard error in your head, you can use these rough rules of thumb assuming the sample size is "large" and you don't have "too many" regressors.
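The rule of thumb can be sketched directly; the coefficients and standard errors below are invented purely for illustration:

```python
# Rule of thumb: with ample degrees of freedom and a two-tailed test,
# a coefficient roughly twice its standard error is significant at
# the 5% level. The (coefficient, SE) pairs are assumed values.
coefficients = {"age": (0.52, 0.21), "income": (0.03, 0.04)}

for name, (coef, se) in coefficients.items():
    t_ratio = coef / se
    verdict = "significant" if abs(t_ratio) >= 2 else "not significant"
    print(f"{name}: t = {t_ratio:.2f} -> {verdict}")
```

Dividing the coefficient by its standard error is exactly the "in your head" calculation described above.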

Picking up on Underminer's point: regression coefficients are estimates of a population parameter. Due to sampling error (and other things, if you have accounted for them), the SE shows you how much uncertainty there is around your estimate.

This is just another way of saying that the p-value is the probability of observing a coefficient that large if the true coefficient were zero, i.e., if the estimate were due only to random error. SEs are also useful for other hypothesis tests - not just testing whether a coefficient is 0, but for comparing coefficients across variables or sub-populations.


Such testing is easy with SPSS if we accept the presumption that the relevant null hypothesis to test is the hypothesis that the population has a zero regression coefficient, i.e., that the predictor has no linear effect in the population.

Our test criterion will be that the null hypothesis shall be refuted if there is less than a certain likelihood (e.g., 5%) of obtaining the observed coefficient by chance when the population coefficient is actually zero.
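That criterion can be sketched as follows, using a normal approximation to the t distribution (adequate for large samples), an assumed 5% alpha, and invented coefficient values:

```python
import math

# Reject the null hypothesis (true coefficient = 0) when the
# two-tailed p-value falls below alpha. Normal approximation to
# the t distribution; the coefficient/SE inputs are assumed values.
def reject_null(coef, se, alpha=0.05):
    z = abs(coef / se)
    p = math.erfc(z / math.sqrt(2))  # two-tailed p-value
    return p < alpha

print(reject_null(0.52, 0.21))  # True: about 2.5 SEs from zero
print(reject_null(0.03, 0.04))  # False: well within sampling error
```
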

Note that we cannot conclude with certainty whether or not the null hypothesis is true.


