SHORT COMMUNICATION  
Year : 2015  |  Volume : 5  |  Issue : 2  |  Page : 51-52
Misguided by the "P" factor


Bernard Ajay Reginald
Department of Oral and Maxillofacial Pathology, Narayana Dental College and Hospital, Nellore, Andhra Pradesh, India


Date of Web Publication: 17-Aug-2016
 

How to cite this article:
Reginald BA. Misguided by the "P" factor. J Educ Ethics Dent 2015;5:51-2

How to cite this URL:
Reginald BA. Misguided by the "P" factor. J Educ Ethics Dent [serial online] 2015 [cited 2024 Mar 28];5:51-2. Available from: https://www.jeed.in/text.asp?2015/5/2/51/188569


Since it was introduced nine decades ago, the P value has been misunderstood, misinterpreted, and probably manipulated to suit the outcome needs of many studies. At times, the obsession with achieving a desirable P value blinds us to such an extent that we overlook clinical significance and the ill effects this may have on future studies.

The dictionary defines significance as the "quality of being important," but is statistical significance a mere measure of importance? The P value dates back to R. A. Fisher, who first introduced it as a "rough numerical guide of the strength of evidence against the null hypothesis, to be used flexibly within the context of a given problem."

The objectivity of studies is being lost, giving way to rather subjective decisions shaped by the curriculum, resources, study design, time, and other confounding factors. Amid this subjectivity, researchers often use statistics to claim proof and scientific breakthrough in the form of statistical significance. [1]

We do not replicate research discoveries, but claim conclusive findings solely on the basis of a single study judged by so-called formal statistical significance. [2] Hence, the question remains: are we really justified in how we use the "P" factor? Schools of thought vary, but this warrants a cautious approach and a conscious understanding of the P value, treating it as one tool of inference rather than the final word.

The P value does not quantify the magnitude or importance of an effect; therefore, appropriate statistical measures should be reported in addition to the P value. These include statistical power, effect size, and sample size, which play an important role along with the confidence interval (CI). Lang [3] suggested using the CI to provide scientific evidence of the magnitude of an observed effect rather than relying on the P value alone.
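
To make this concrete, the following sketch shows how an effect size and a 95% CI for the difference in means can be reported alongside the P value from a two-sample t-test. It is not part of the original communication; the data, group labels, and parameter values are simulated and hypothetical, and it assumes Python with NumPy and SciPy.

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
control = rng.normal(loc=50.0, scale=10.0, size=40)   # hypothetical control group
treated = rng.normal(loc=55.0, scale=10.0, size=40)   # hypothetical treated group

# P value from an independent-samples t-test
t_stat, p_value = stats.ttest_ind(treated, control)

# Effect size: Cohen's d (mean difference / pooled standard deviation)
pooled_sd = np.sqrt((control.var(ddof=1) + treated.var(ddof=1)) / 2)
cohens_d = (treated.mean() - control.mean()) / pooled_sd

# Approximate 95% CI for the difference in means
diff = treated.mean() - control.mean()
se = np.sqrt(control.var(ddof=1) / len(control) + treated.var(ddof=1) / len(treated))
ci_low, ci_high = diff - 1.96 * se, diff + 1.96 * se

print(f"P = {p_value:.4f}, Cohen's d = {cohens_d:.2f}, "
      f"95% CI for the mean difference: ({ci_low:.1f}, {ci_high:.1f})")

Reporting the CI and effect size in this way conveys how large the observed difference is and how precisely it has been estimated, information the P value alone does not carry.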

Power and sample size are positively related: increased power requires an increased sample. The required sample, however, is inversely related to effect size, meaning that a large sample is needed to detect a small effect, and all of these in turn affect the value and interpretation of the P value.

This suggests that statistical significance must not be driven by sample size alone but should reflect the phenomenon under study. In practice, the P value is calculated under the assumption that no effect actually exists (the null hypothesis); it gives the probability of observing data as extreme as the sample data under that assumption, and a P < 0.05 is conventionally taken as grounds for rejecting the null hypothesis. This is unlike power, which is the probability of detecting an effect where one actually exists. [1]
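
The relation among power, sample size, and effect size can be illustrated with a small simulation: each run draws one group under the null (no effect) and one group shifted by a true effect, and power is estimated as the fraction of runs in which P falls below alpha. This sketch is not from the article; the effect sizes, alpha level, power target, and candidate sample sizes are hypothetical values chosen for illustration (Python with NumPy and SciPy).

import numpy as np
from scipy import stats

def simulated_power(effect_size, n_per_group, alpha=0.05, n_sim=2000, seed=0):
    """Estimate the power of a two-sample t-test by simulation."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sim):
        a = rng.normal(0.0, 1.0, n_per_group)           # group with no true effect
        b = rng.normal(effect_size, 1.0, n_per_group)   # group shifted by the true effect
        _, p = stats.ttest_ind(a, b)
        hits += p < alpha
    return hits / n_sim

# Larger effects reach 80% power with far smaller samples than small effects do.
for effect in (0.8, 0.5, 0.2):            # large, medium, small effects (Cohen's d)
    for n in (20, 50, 100, 400):
        if simulated_power(effect, n) >= 0.80:
            print(f"d = {effect}: about n = {n} per group reaches ~80% power")
            break

The same simulation run with effect_size = 0 would reject the null in roughly 5% of runs, which is the false-positive rate implied by alpha rather than evidence of any real effect.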

In his article "A dirty dozen," Steven Goodman described 12 common misconceptions about the P value and observed that a significant result is merely worthy of attention and of further experiment; it is not proof in itself. [4]

Many of our studies are labeled anywhere from significant to highly significant depending on how many zeros follow the decimal point in the reported P value, which is deceptive and misleading, as the P value conveys neither strength nor magnitude. [5]
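
The point is easy to demonstrate: with a very large sample, a clinically trivial difference yields a P value that prints as "0.000" even though the effect size is negligible. The sketch below is not from the article; the means, spread, and sample size are hypothetical.

import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n = 50_000
group_a = rng.normal(100.0, 15.0, n)
group_b = rng.normal(100.5, 15.0, n)    # a 0.5-unit difference: trivial in practice

t_stat, p_value = stats.ttest_ind(group_a, group_b)
cohens_d = (group_b.mean() - group_a.mean()) / 15.0

print(f"P = {p_value:.3f}")              # typically prints "P = 0.000" at this sample size
print(f"Cohen's d = {cohens_d:.3f}")     # about 0.03: a negligible effect despite "significance"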

In addition, we often mistake statistical significance for clinical significance. It is therefore time to rethink the conclusions we reach and ask ourselves whether our results are clinically important or merely statistically significant, and if so, whether the burden rests entirely on the "P."

Financial support and sponsorship

Nil.

Conflicts of interest

There are no conflicts of interest.

 
References

1. Hayat MJ. Understanding statistical significance. Nurs Res 2010;59:219-23.
2. Ioannidis JP. Why most published research findings are false. PLoS Med 2005;2:e124.
3. Lang TA, Secic M. How to Report Statistics in Medicine: Annotated Guidelines for Authors, Editors, and Reviewers. Philadelphia: American College of Physicians; 2006. p. 490.
4. Goodman S. A dirty dozen: Twelve P-value misconceptions. Semin Hematol 2008;45:135-40.
5. Glaser DN. The controversy of significance testing: Misconceptions and alternatives. Am J Crit Care 1999;8:291-6.

Correspondence Address:
Bernard Ajay Reginald
Department of Oral and Maxillofacial Pathology, Narayana Dental College and Hospital, Chinthareddypalem, Nellore - 524 003, Andhra Pradesh
India


DOI: 10.4103/0974-7761.188569
