Talk:Confidence interval
This is the talk page for discussing improvements to the Confidence interval article. This is not a forum for general discussion of the article's subject.
This level-5 vital article is rated C-class on Wikipedia's content assessment scale. It is of interest to WikiProject Statistics and WikiProject Mathematics.
Wiki Education Foundation-supported course assignment
This article was the subject of a Wiki Education Foundation-supported course assignment, between 27 August 2021 and 19 December 2021. Further details are available on the course page. Student editor(s): Philipphaku. Peer reviewers: BadEnding, Ruby00269, Gs5730.
Above undated message substituted from Template:Dashboard.wikiedu.org assignment by PrimeBOT (talk) 18:18, 16 January 2022 (UTC)
With regards to the approachability of this article
Why not use the Simple English version of this complicated article (link below)? It seems more accessible for the average reader than the in-depth one here. https://simple.wikipedia.org/wiki/Confidence_interval DC (talk) 14:26, 30 March 2016 (UTC)
Thank you for providing the link to the simple.wikipedia.org page. I found it to be more accessible, just as you said. Thank you! -Anon 14:54 UTC, 15 Nov 2020
The article contradicts itself
Due to this edit, the introduction is currently spreading precisely the misunderstanding that the article later warns about. The introduction says:
- Given observations $x_1, \ldots, x_n$ and a confidence level $\gamma$, a valid confidence interval has a probability $\gamma$ of containing the true underlying parameter.
In direct contradiction, the article later rightly warns:
- A 95% confidence level does not mean that for a given realized interval there is a 95% probability that the population parameter lies within the interval [...].
Joriki (talk) 11:23, 12 May 2020 (UTC)
Another incorrect statement:
- Therefore, there is a 5% probability that the true incidence ratio may lie out of the range of 1.4 to 2.6 values.
According to the textbook Introduction to Data Science, the statement in question is false: a confidence level of $\gamma$ does not mean we can say the interval contains the underlying true parameter with probability $\gamma$. How many references do we need before we can remove that misleading claim from the introduction?
TheKenster (talk) 20:06, 3 November 2020 (UTC)
- I am a statistics expert. In light of Wikipedia:Be_bold, I have corrected the mistakes mentioned in this subsection, as well as a couple others that I caught. Stellaathena (talk) 23:30, 17 November 2020 (UTC)
The section Examples - Medical Examples still contains that contradiction:
- Furthermore, it also means that we are 95% confident that the true incidence ratio in all the infertile female population lies in the range from 1.4 to 2.6.
Dtlfg (talk) 09:20, 3 August 2021 (UTC)
It seems the formal definition is simply wrong. The given definition is the definition of a credible interval. Unfortunately I do not find a source which gives a correct definition. Here is what I think it is; if I find a source someday I will correct it: Let $x$ be a realisation (in general a statistic from a set of independent $X_i$ following the same distribution) of a random variable $X$ following a distribution $P_\theta$, $\theta$ being a parameter we have to estimate. Let $C$ be a function from $\mathcal{X}$ to $\mathcal{P}(\Theta)$ (the power set of $\Theta$) which assigns to each $x$ a set $C(x)$ satisfying $P_\theta(\theta \in C(X)) \geq \gamma$ for every $\theta$. The confidence interval is then defined as the set $C(x)$. — Preceding unsigned comment added by Samuelboudet (talk • contribs) 15:15, 9 October 2022 (UTC)
- The correct definition can be found here:
An X% confidence interval for a parameter θ is an interval (L,U) generated by a procedure that in repeated sampling has an X% probability of containing the true value of θ, for all possible values of θ (Neyman 1937).
— Morey, Richard D.; Hoekstra, Rink; Rouder, Jeffrey N.; Lee, Michael D.; Wagenmakers, Eric-Jan (2016). "The fallacy of placing confidence in confidence intervals". Psychonomic Bulletin & Review. 23 (1): 103–123. doi:10.3758/s13423-015-0947-8. PMC 4742505. PMID 26450628.
- The key point being that the confidence level is the long-run frequency at which the procedure will produce intervals containing the true parameter. It does not say anything about the individual intervals that are produced – and indeed, that article as well as https://bayes.wustl.edu/etj/articles/confidence.pdf show that the post-data probability that the interval contains the true parameter can be very different from the confidence level.
- Spidermario (talk) 08:13, 11 October 2022 (UTC)
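To make the long-run reading above concrete, here is a minimal simulation sketch (the normal-mean setting, the t-interval, and all the numbers are illustrative assumptions of mine, not taken from the sources cited above):

```python
# Simulate the long-run coverage of the standard t-interval for a normal mean.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_mu, sigma, n, trials, level = 10.0, 2.0, 20, 100_000, 0.95
t_crit = stats.t.ppf((1 + level) / 2, df=n - 1)

hits = 0
for _ in range(trials):
    x = rng.normal(true_mu, sigma, n)
    half = t_crit * x.std(ddof=1) / np.sqrt(n)
    # The interval is random; the parameter true_mu is fixed.
    if x.mean() - half <= true_mu <= x.mean() + half:
        hits += 1

print(f"Fraction of intervals containing true_mu: {hits / trials:.4f}")
# Prints approximately 0.95: the confidence level is this long-run frequency
# of the procedure, not a statement about any single realized interval.
```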
The third interpretation, using statistical significance or lack thereof, may be a problematic interpretation to include. Statistical significance is a relational concept between a sample estimator and the population's parameter suggested by the null hypothesis. When we calculate a test statistic to compare the difference between the sample estimate and the null's assertion of the truth, we take advantage of the assumption of the null's truth to ascertain a p-value. The simplicity of the confidence interval is that it is oblivious to the truth. Specifically, if there is a true θ and we repeat our study process many times, then we would expect 100(1−α)% of the intervals we generate to contain the true θ, regardless of the existence of a null hypothesis or whatever value it purports. Saying that a 95% interval suggests a range of values that would not be statistically significantly different from the sample estimator places the condition of truth on the sample and assumes that the parameter value may be a range of possible outcomes, conditions in opposition to statistical theory.
Confidence intervals are built in a way such that they almost always cover the truth. Since we never really know the truth we cannot make any kind of statements about whether the right answer is in any interval we observe. All we know is that there is a pretty good chance that our interval is one of the right ones. This upsets people because we did not know the truth before the study and now that we have an interval we still do not know the truth. At the end of the day, that will always be the problem when we have to use a sample to make inference about a population. Quantifying uncertainty is not the same as making it go away completely. — Preceding unsigned comment added by 99.116.222.7 (talk) 16:48, 15 May 2023 (UTC)
- Aren't those exactly the same thing?
- The statement "the estimated CI contains the true parameter" means the same as the statement "the true parameter is within the estimated CI"; i.e. if the estimated CI contains the true parameter then the true parameter is within the estimated CI, and if the true parameter is within the estimated CI then the estimated CI contains the true parameter.
- There isn't any scenario where the truth value of the two statements differs, therefore their probability is the same.
- I think the point about how "this upsets people because we did not know the truth before the study and now that we have an interval we still do not know the truth" is inaccurate: in both cases we're acknowledging we don't know the truth; if we did, we wouldn't need a confidence interval. 2001:818:DA5F:AF00:5DF0:7BCC:8C9B:3197 (talk) 23:30, 21 June 2024 (UTC)
- They are not the same thing.
- Having computed a specific interval, “the probability that the true parameter is within this specific confidence interval” is meaningless for a frequentist – since neither the parameter nor the interval is a “random variable”, there is no (frequentist) probability at play here: either the parameter is in the interval or it isn’t. You can calculate a Bayesian probability for that statement, conditional on all the data at hand, but it’s not necessarily going to be equal to the confidence level.
- The confidence level, instead, is the answer to “the probability that if we conduct a random trial, the random data we will get will lead to a confidence interval that contains the parameter”. In other words, the distinction is not just the order of the words, it’s the fact that the probabilistic statement now refers to all the random confidence intervals we could generate if we repeated the trial, rather than to the (fixed) parameter in relation to a (fixed) calculated interval.
- It’s kind of the same difference as that between the accuracy of a test and its predictive value. Imagine a disease with 90% prevalence, and a test with 90% sensitivity and 90% specificity. Its “accuracy” (the analogue of the confidence level) is 90%: if we take a random patient and have them take the test, we are 90% likely to get a correct test result (“the estimated CI contains the true parameter”).
- But if we now conduct the test, get a negative result, and ask the probability that the true disease status matches the result of the test (“the true parameter is within the estimated CI”), it’s not 90%, it’s 50%. It doesn’t matter that before the experiment, we had a pretty good chance of producing a test result that would match the patient’s true disease status. We have now produced the test result, it’s negative, and we know that it means it’s less likely to be “one of the right ones”.
- This is discussed in the link I posted above: https://link.springer.com/article/10.3758/s13423-015-0947-8
- As well as in: https://bayes.wustl.edu/etj/articles/confidence.pdf
- tl;dr: “the probability that the parameter is in the estimated interval” means (say) “P(θ ∈ [12.3, 14.7] | data)”, whereas “the probability that the estimated interval contains the parameter” (really, “will contain”) is “P(data such that θ ∈ interval(data))”.
- Spidermario (talk) 08:38, 22 June 2024 (UTC)
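For anyone who wants to check the 90%/50% figures in the diagnostic-test analogy above, here is a short sketch of the Bayes arithmetic (the prevalence, sensitivity, and specificity values are the ones given in the comment; the rest is my own illustration):

```python
# Numbers from the analogy: 90% prevalence, 90% sensitivity, 90% specificity.
prevalence, sensitivity, specificity = 0.9, 0.9, 0.9

# Pre-data "accuracy": probability a random patient gets a correct result.
accuracy = prevalence * sensitivity + (1 - prevalence) * specificity
print(f"Accuracy: {accuracy:.2f}")  # 0.90 -- the analogue of a 90% confidence level

# Post-data: given a negative result, probability the patient is disease-free.
p_negative = prevalence * (1 - sensitivity) + (1 - prevalence) * specificity
npv = (1 - prevalence) * specificity / p_negative
print(f"P(no disease | negative result): {npv:.2f}")  # 0.50, not 0.90
```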
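The same pre-data/post-data gap can be seen directly with a confidence procedure, in a toy simulation (an example I am assuming in the spirit of the Morey et al. paper linked above, not quoted from it): take two observations from Uniform(θ − 0.5, θ + 0.5) and use the interval from the smaller to the larger observation, which is a valid 50% confidence procedure.

```python
# A 50% confidence procedure whose post-data probability of covering theta
# depends strongly on the realized interval width.
import numpy as np

rng = np.random.default_rng(1)
theta, trials = 0.0, 200_000  # theta is arbitrary; coverage does not depend on it
x = rng.uniform(theta - 0.5, theta + 0.5, size=(trials, 2))
lo, hi = x.min(axis=1), x.max(axis=1)
covered = (lo <= theta) & (theta <= hi)

print(f"Overall coverage: {covered.mean():.3f}")  # approximately 0.50, as advertised

width = hi - lo  # post-data information about each realized interval
print(f"Coverage when width < 0.1: {covered[width < 0.1].mean():.3f}")  # ~0.05
print(f"Coverage when width > 0.9: {covered[width > 0.9].mean():.3f}")  # ~1.00
```

Before the data are seen, the procedure covers θ half the time; after seeing a particular interval, its width tells us that the realized interval is far more or far less likely than 50% to be "one of the right ones", which is exactly the distinction drawn above.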
History needs expansion
Perhaps someone can expand the history section. It used to be longer, but most of it was irrelevant.
Was interval estimation used before Neyman? What did his contemporaries think? And what about the adoption of confidence intervals beyond medical journals?
FRuDIxAFLG (talk) 06:22, 19 January 2022 (UTC)
- So far as I know, the doctrine of confidence intervals, or confidence 'belts', was indeed introduced by Neyman, but the 1937 paper cited in the article was not his earliest statement. Several earlier papers are included in the CUP selection of Neyman's early statistical papers. There was also an important 1934 paper by E. S. Pearson and C. Clopper. As to what Neyman's contemporaries thought, when Neyman expounded the method at the Royal Statistical Society in 1934, both A L Bowley and R A Fisher had strong reservations about its value. (Bowley made a bad joke about 'confidence tricks'.) Neyman's 1937 paper was in part an attempt to answer these criticisms. For interval estimation in general, Fisher's obscure doctrine of 'fiducial probability' predates Neyman's doctrine, and Neyman first presented confidence limits as an extension of Fisher's idea, but Fisher himself did not like the comparison. 2A00:23C8:7907:4B01:A825:2DCD:C5C3:B3EF (talk) 16:41, 25 February 2022 (UTC)
- I rewrote the history section, though I didn't look at the talk page and it still only answers the first of FRuDIxAFLG's three questions. Several papers before 1937 are now cited, including the 1934 one by Clopper and E. Pearson. As shown in the quotation that I added, the 1934 paper by Neyman is the first one where he presented the theory of confidence intervals. Merrick08 (talk) 13:22, 16 June 2023 (UTC)
India Education Program course assignment
This article was the subject of an educational assignment at College of Engineering, Pune supported by Wikipedia Ambassadors through the India Education Program. Further details are available on the course page.
The above message was substituted from {{IEP assignment}} by PrimeBOT (talk) on 19:55, 1 February 2023 (UTC)