Nonstandard Errors

Big Differences in Financial Research Results

A recent study investigates the concept of nonstandard errors (NSEs) in empirical finance: the variability in research outcomes that arises because different researchers take different analytical paths through the same data. This variability is contrasted with standard errors (SEs), which measure sampling uncertainty in estimates of population parameters. The primary objective is to quantify the extent of NSEs, assess their impact on research conclusions, and examine how they can be mitigated through peer feedback and quality measures.
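To make the distinction concrete, the following minimal sketch (Python, with purely hypothetical numbers rather than the study's data) computes both quantities for a set of team-level results: the standard error reported within each team's own analysis path, and the nonstandard error measured as the interquartile range of the point estimates across teams.

    import numpy as np

    # Hypothetical point estimates (annual change, in percentage points) and
    # reported standard errors from several research teams analyzing the same data.
    estimates  = np.array([-4.1, -1.8, -1.1, 0.4, 2.9, 5.6, -0.2])  # one per team
    std_errors = np.array([ 2.2,  2.5,  2.4, 2.7, 2.6, 2.3,  2.5])  # one per team

    # Standard error (SE): sampling uncertainty within a single analysis path,
    # summarized here by the median of the teams' reported SEs.
    median_se = np.median(std_errors)

    # Nonstandard error (NSE): dispersion of the point estimates *across* teams,
    # i.e. variation created solely by different analytical choices.
    q75, q25 = np.percentile(estimates, [75, 25])
    nse_iqr = q75 - q25

    print(f"median SE : {median_se:.1f} pp")
    print(f"NSE (IQR) : {nse_iqr:.1f} pp")

In the study's terminology, the first number reflects sampling noise, while the second reflects the noise introduced by the analysts themselves.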

The study involved 164 research teams (RTs) testing the same hypotheses on identical data from Deutsche Börse on EuroStoxx 50 index futures, spanning 17 years of trading records. This setup was designed to measure how much variability in results arises purely from different analytical paths. The research teams were asked to test six predefined hypotheses by estimating the average yearly change in market characteristics such as market efficiency, bid-ask spreads, and trading volume.

The project proceeded through multiple stages:

  • Initial Analysis: RTs conducted independent analyses and submitted their findings.
  • Peer Feedback: RTs received anonymous feedback from peer evaluators (PEs) and could update their analyses accordingly.
  • Competitive Papers Review: RTs reviewed the five best papers (based on peer scores) and could refine their analyses further.
  • Final Submission: RTs submitted their final results, incorporating all feedback and additional insights.

Key Findings

  1. Magnitude of NSEs: The study found that NSEs are sizable and comparable to SEs. For example, the median SE across RTs for the hypothesis on market efficiency was 2.5%, while the interquartile range (IQR) of estimates (NSE) was 6.7%. This indicates that the choice of analysis path introduces substantial variability.
  2. Impact of Quality on NSEs: Higher quality research tends to have smaller NSEs. Papers with higher reproducibility scores and higher peer ratings showed reduced NSEs. Conversely, team quality based on top publications and academic seniority did not significantly reduce NSEs, highlighting the importance of methodological rigor over reputational factors.
  3. Effectiveness of Peer Feedback: The peer feedback process significantly reduced NSEs. Each feedback stage contributed to narrowing the dispersion of estimates, with a cumulative reduction of 47.2% across all stages. This underscores the value of iterative peer review in refining research quality and consistency.
  4. Underestimation of NSEs: Researchers generally underestimated the magnitude of NSEs. An incentivized belief survey revealed that many RTs did not fully anticipate the extent of variability in estimates across different research teams.
  5. Drivers of NSEs: The multiverse analysis revealed that key analytical decisions contribute significantly to NSEs. For instance, the choice of sampling frequency for variance ratio calculations in the market efficiency hypothesis led to opposing conclusions, with higher frequencies indicating a decline in efficiency and lower frequencies suggesting an increase (see the variance ratio sketch after this list).
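
To illustrate how such a frequency choice enters the analysis, the sketch below (Python, using a simulated price path rather than the study's Deutsche Börse futures data) computes a simple variance ratio VR(q): the variance of q-period returns divided by q times the variance of one-period returns, which should be close to 1 under market efficiency. The choice of base frequency and horizon q is exactly the kind of analytical decision that drives estimates apart.

    import numpy as np

    def variance_ratio(prices, q):
        """Variance ratio VR(q): variance of q-period log returns divided by
        q times the variance of 1-period log returns. Near 1 for a random walk."""
        log_p = np.log(prices)
        r1 = np.diff(log_p)              # 1-period log returns
        rq = log_p[q:] - log_p[:-q]      # overlapping q-period log returns
        return rq.var(ddof=1) / (q * r1.var(ddof=1))

    # Simulated price path standing in for an intraday futures series.
    rng = np.random.default_rng(0)
    prices = 100 * np.exp(np.cumsum(rng.normal(0, 0.0005, size=50_000)))

    # The same data summarized at different horizons: each choice of q is one
    # "analysis path" and can shift the resulting efficiency measure.
    for q in (2, 5, 30, 300):
        print(f"VR({q:>3}) = {variance_ratio(prices, q):.3f}")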

Detailed Results

The study provides detailed insights into how different research paths affect the conclusions drawn from the same data set. By analyzing multiple hypotheses across several market characteristics, the researchers demonstrated the pervasive nature of NSEs in empirical finance.

Market Efficiency (RT-H1): The median estimate of annual change in market efficiency was -1.1% with an NSE of 6.7%. This large dispersion indicates that different methodological choices lead to significantly different conclusions about market trends.

Bid-Ask Spread (RT-H2): The hypothesis on the realized bid-ask spread showed a median estimate of -0.0% with an NSE of 7.5%. This suggests that while the average change might be close to zero, the paths taken by different researchers lead to varied interpretations.

Client Volume Share (RT-H3): The median change in the share of client volume in total volume was -3.3%, with an NSE of 1.2%, indicating relatively lower variability compared to other hypotheses.

Gross Trading Revenue (RT-H6): This hypothesis had one of the highest dispersions, with a median change of 0.0% and an NSE of 21.4%, reflecting significant differences in analytical approaches.

Standard Errors: The study also highlighted the standard errors reported by RTs, showing a median SE of 2.5% for market efficiency. The discrepancy between SEs and NSEs emphasizes that traditional measures of uncertainty (SEs) do not capture the full extent of variability introduced by different analytical paths.

Reproducibility and Peer Ratings: The reproducibility score, measured by Cascad, and peer ratings were significant predictors of smaller NSEs. Higher scores in these areas correlated with less dispersion in research outcomes, suggesting that methodological rigor and thorough peer review can mitigate the impact of nonstandard errors.

Multiverse Analysis: The multiverse analysis demonstrated how specific analytical decisions, such as the choice of data frequency, can lead to different results. This approach helped identify key decision points that contribute to NSEs, providing a roadmap for improving consistency in empirical research.
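
Conceptually, a multiverse analysis can be sketched as a loop over a grid of defensible analytical choices, re-estimating the target quantity once per combination and then summarizing the dispersion of the resulting estimates. The sketch below is a generic illustration with made-up choice dimensions and a placeholder estimation function; it is not the study's actual specification.

    import itertools
    import zlib
    import numpy as np

    # Hypothetical choice dimensions; each combination is one "analysis path".
    frequencies   = ["1min", "5min", "1h"]      # sampling frequency
    outlier_rules = ["none", "winsorize_1pct"]  # outlier treatment
    samples       = ["full", "exclude_2008"]    # sample window

    def estimate_yearly_change(freq, outliers, sample):
        """Placeholder for the actual estimation step: returns a made-up number
        that depends deterministically on the choices, standing in for a real
        yearly-change estimate (in percentage points)."""
        seed = zlib.crc32(f"{freq}|{outliers}|{sample}".encode())
        rng = np.random.default_rng(seed)
        return rng.normal(loc=-1.0, scale=3.0)

    paths = list(itertools.product(frequencies, outlier_rules, samples))
    estimates = np.array([estimate_yearly_change(*p) for p in paths])

    q75, q25 = np.percentile(estimates, [75, 25])
    print(f"{len(paths)} analysis paths")
    print(f"median estimate : {np.median(estimates):+.2f} pp per year")
    print(f"NSE (IQR)       : {q75 - q25:.2f} pp")

The spread of estimates across the enumerated paths is, in effect, a nonstandard error computed within a single research team.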

The study underscores the importance of recognizing and addressing nonstandard errors in empirical finance. NSEs represent a significant source of uncertainty that can influence research conclusions and policy decisions. By highlighting the variability introduced by different analytical paths, the research advocates for improved methodological standards, thorough peer review, and enhanced reproducibility measures.

Summary of Main Results:

  • Nonstandard Errors (NSEs): NSEs are substantial and comparable to standard errors, indicating significant variability in research outcomes due to different analytical paths.
  • Impact of Quality: Higher methodological quality and peer feedback significantly reduce NSEs, while team reputation and seniority have a lesser impact.
  • Peer Feedback: Iterative peer review stages effectively narrow the dispersion of estimates, enhancing research consistency.
  • Underestimation of NSEs: Researchers often underestimate the extent of NSEs, highlighting the need for greater awareness and methodological rigor.
  • Drivers of NSEs: Key analytical decisions, such as data frequency choice, are major contributors to NSEs, pointing to the need for standardized methodologies.

By addressing these issues, the study aims to improve the reliability and robustness of empirical research in finance, ultimately contributing to more informed and consistent policy and investment decisions.


[Source: Nonstandard Errors, Journal of Finance, Vol. 79, Issue 3, 2024, pp. 2339-2390]

 

 
