
Measuring economic journals’ citation efficiency: a data envelopment analysis approach

Abstract

Using data envelopment analysis (DEA) and statistical inference, this paper evaluates the citation performance of 229 economic journals. The journals are categorized into four main categories (A–D) according to their efficiency levels, and the results are compared with the 27 “core economic journals” introduced by Diamond (Curr Contents 21(1):4–11, 1989). The results reveal that, after more than 20 years, Diamond’s list of “core economic journals” is still valid. Finally, for the first time, the paper uses data from four well-known databases (SSCI, Scopus, RePEc, Econlit) and two quality ranking reports (the Kiel Institute internal ranking and the ABS quality ranking report) in a DEA setting in order to derive the ranking of the 229 economic journals. The ten economic journals with the highest citation performance are Journal of Political Economy, Econometrica, Quarterly Journal of Economics, Journal of Financial Economics, Journal of Economic Literature, American Economic Review, Review of Economic Studies, Journal of Econometrics, Journal of Finance and Brookings Papers on Economic Activity.

Notes

  1. KIEL internal rankings for 2009 can be downloaded from http://www.ifw-kiel.de/academy/Journal%20Ranking%203%20Jan%2009.pdf. Accessed 13 November 2010.

  2. ABS Academic Journal Quality Guide can be found at http://www.the-abs.org.uk/?id=257. Accessed 13 November 2010.

  3. RePEc data can be retrieved from http://ideas.repec.org/top/top.journals.simple.html.

  4. Data from Social Science Citation Index can be retrieved from http://thomsonreuters.com/products_services/science/science_products/a-z/social_sciences_citation_index. Accessed 13 November 2010.

  5. SCOPUS data can be retrieved from http://www.scopus.com/home.url. Accessed 13 November 2010.

  6. Data from Econlit database can be retrieved from http://www.aeaweb.org/econlit/journal_list.php. Accessed 13 November 2010.

  7. When a journal had not been in the SSCI database for more than 5 years, the latest impact factor (i.e. the one for 2009) was used.

  8. In the Kiel report journals are graded from “A” (high quality) to “D” (lower quality); we assign the value 4 to “A”, 3 to “B”, 2 to “C” and 1 to “D”. Similarly, the ABS report assigns journals one of four quality grades (1, 2, 3 and 4), where “4” denotes the highest and “1” the lowest quality. In contrast to the Kiel assessment, the ABS report grades the quality of journals within their subject area (e.g. Accounting and Auditing, Finance, Economics, etc.). A minimal sketch of this coding follows these notes.

  9. The results of the BCC model are available upon request.

  10. We chose this categorization so that our results are comparable with the two main quality ranking reports: the Kiel Institute ranking (which separates journals into four categories, from “A” to “D”) and the ABS guide (which also separates economics and other disciplines’ journals into four quality categories, from 1 to 4).

  11. As stated previously, only the economic journals that are registered and measured in all six databases/reports (Econlit, SSCI, RePEc, Scopus, the Kiel rankings and the ABS quality rankings report) are considered for evaluation.

  12. We assume that journals’ self-citations have been used to promote and support ongoing research.
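
The grade coding described in note 8 amounts to a simple lookup. The snippet below is a minimal sketch in Python, with names of our own choosing (KIEL_TO_SCORE is not from the paper):

```python
# Hypothetical illustration of the grade coding described in note 8
KIEL_TO_SCORE = {"A": 4, "B": 3, "C": 2, "D": 1}   # Kiel letter grades -> numeric scores
kiel_score = KIEL_TO_SCORE["B"]                    # a "B"-graded journal is coded as 3
# The ABS guide already reports numeric grades 1 (lowest) to 4 (highest), so it is used as-is.
```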

References

  • Bakkalbasi, N., Bauer, K., Glover, J., & Wang, L. (2006). Three options for citation tracking: Google Scholar, Scopus and Web of Science. Biomedical Digital Libraries, 3(7), 1–8.

  • Banker, R. D., Charnes, A., & Cooper, W. W. (1984). Some models for estimating technical and scale inefficiencies in data envelopment analysis. Management Science, 30(9), 1078–1092.

  • Bar-Ilan, J. (2010). Citations to the “Introduction to informetrics” indexed by WoS, Scopus and Google Scholar. Scientometrics, 82(3), 495–506.

  • Bauer, K., & Bakkalbasi, N. (2005). An examination of citation counts in a new scholarly communication environment. D-Lib Magazine, 11(9). http://www.dlib.org/dlib/september05/bauer/09bauer.html.

  • Boles, J. N. (1967). Efficiency squared—Efficient computation of efficiency indexes. In Western Farm Economic Association Proceedings 1966 (pp. 137–142).

  • Boles, J. N. (1971). The 1130 Farrell efficiency system—Multiple products, multiple factors. Berkeley: Giannini Foundation of Agricultural Economics, University of California.

  • Bollen, J., & Van de Sompel, H. (2008). Usage Impact Factor: The effects of sample characteristics on usage-based impact metrics. Journal of the American Society for Information Science and Technology, 59(1), 136–149.

  • Bonaccorsi, A., & Daraio, C. (2008). The differentiation of the strategic profile of higher education institutions. New positioning indicators based on microdata. Scientometrics, 74(1), 15–37.

  • Bonaccorsi, A., Daraio, C., & Simar, L. (2006). Advanced indicators of productivity of universities. An application of robust nonparametric methods to Italian data. Scientometrics, 66(2), 389–410.

  • Burton, M. P., & Phimister, E. (1995). Core journals: A reappraisal of the Diamond list. Economic Journal, 105(429), 361–373.

  • Charnes, A., Cooper, W. W., & Rhodes, E. L. (1978). Measuring the efficiency of decision making units. European Journal of Operational Research, 2(6), 429–444.

  • Coelli, T. J., & Perelman, S. (1999). A comparison of parametric and non-parametric distance functions: With applications to European railways. European Journal of Operational Research, 117(2), 326–339.

  • Coelli, T. J., Rao, D. S. P., O’Donnell, C. J., & Battese, G. E. (2005). An introduction to efficiency and productivity analysis (2nd ed.). New York: Springer Science.

  • Cook, W. D., Golany, B., Penn, M., & Ravin, T. (2007). Creating a consensus ranking of proposals from reviewers’ partial ordinal rankings. Computers & Operations Research, 34(4), 954–965.

  • Cook, W. D., Ravin, T., & Richardson, A. J. (2010). Aggregating incomplete lists of journal rankings: An application to academic accounting journals. Accounting Perspectives, 9(3), 217–235.

  • Debreu, G. (1951). The coefficient of resource utilization. Econometrica, 19(3), 273–292.

  • Diamond, A. M. (1989). The core journals of economics. Current Contents, 21(1), 4–11.

  • Efron, B. (1979). Bootstrap methods: Another look at the jackknife. Annals of Statistics, 7(1), 1–16.

  • Etxebarria, G., & Gomez-Uranga, M. (2010). Use of Scopus and Google Scholar to measure social sciences production in four major Spanish universities. Scientometrics, 82(2), 333–349.

  • Farrell, M. (1957). The measurement of productive efficiency. Journal of the Royal Statistical Society Series A, 120(3), 253–281.

  • Førsund, F. R., Kittelsen, S. A. C., & Krivonozhko, V. E. (2009). Farrell revisited—Visualizing properties of DEA production frontiers. Journal of the Operational Research Society, 60(11), 1535–1545.

  • Førsund, F. R., & Sarafoglou, N. (2002). On the origins of data envelopment analysis. Journal of Productivity Analysis, 17(1/2), 23–40.

  • Franceschet, M. (2010). A comparison of bibliometric indicators for computer science scholars and journals on Web of Science and Google Scholar. Scientometrics, 83(1), 243–258.

  • Garfield, E. (1955). Citation indexes to science: A new dimension in documentation through association of ideas. Science, 122(3159), 108–111.

  • Garfield, E. (1979). Citation indexing: Its theory and applications in science, technology and humanities. New York: Wiley Interscience.

  • Garfield, E. (2005). The agony and the ecstasy—The history and meaning of the journal Impact Factor. In International congress on peer review and biomedical publication, Chicago.

  • Glanzel, W., & Moed, H. F. (2002). Journal impact measures in bibliometric research. Scientometrics, 53(2), 171–193.

  • Halkos, G., & Tzeremes, N. (2007). International competitiveness in the ICT industry: Evaluating the performance of the top 50 companies. Global Economic Review, 36(2), 167–168.

  • Halkos, G., & Tzeremes, N. (2010). The effect of foreign ownership on SMEs performance: An efficiency analysis perspective. Journal of Productivity Analysis, 34(2), 167–180.

  • Harvey, C., Kelly, A., Morris, H., & Rowlinson, M. (2010). Academic journal quality guide, Version 4. The Association of Business Schools.

  • Hirsch, J. E. (2005). An index to quantify an individual’s scientific research output. Proceedings of the National Academy of Sciences, 102, 16569–16572.

  • Hoffman, A. J. (1957). Discussion on Mr. Farrell’s paper. Journal of the Royal Statistical Society Series A, 120(III), 284.

  • Kalaitzidakis, P., Mamuneas, T. P., & Stengos, T. (2003). Rankings of academic journals and institutions in economics. Journal of the European Economic Association, 1(6), 1346–1366.

  • Kalaitzidakis, P., Mamuneas, T. P., & Stengos, T. (2010). An updated ranking of academic journals in economics, WP 10-15. The Rimini Centre for Economic Analysis.

  • Kiel (2010). Criteria for research publications. Kiel Institute for the World Economy. http://www.ifw-kiel.de/academy/criteria-for-research-publications.

  • Klavans, R., & Boyack, K. (2009). Toward a consensus map of science. Journal of the American Society for Information Science and Technology, 60(3), 455–476.

  • Koczy, L. A., & Strobel, M. (2007). The ranking of economics journals by a tournament method. Mimeo.

  • Kodrzycki, Y. K., & Yu, P. (2006). New approaches to ranking economic journals. Contributions to Economic Analysis and Policy, 5(1), Art. 24.

  • Koopmans, T. C. (1951). An analysis of production as an efficient combination of activities. In T. C. Koopmans (Ed.), Activity analysis of production and allocation (pp. 33–97). New York: Wiley.

  • Kousha, K., & Thelwall, M. (2008). Sources of Google Scholar citations outside the Science Citation Index: A comparison between four science disciplines. Scientometrics, 74(2), 273–294.

  • Laband, D. N., & Piette, M. J. (1994). The relative impacts of economics journals: 1970–1990. Journal of Economic Literature, 32(2), 640–666.

  • Leydesdorff, L., de Moya-Anegon, F., & Guerrero-Bote, V. P. (2010). Journal maps on the basis of Scopus data: A comparison with Journal Citation Reports of the ISI. Journal of the American Society for Information Science and Technology, 61(2), 352–369.

  • Liebowitz, S. J., & Palmer, J. C. (1984). Assessing the relative impacts of economics journals. Journal of Economic Literature, 22, 77–88.

  • Liner, G. H., & Amin, M. (2004). Methods of ranking economic journals. Atlantic Economic Journal, 32(2), 140–149.

  • Lopez-Illescas, C., de Moya-Anegon, F., & Moed, H. F. (2008). Coverage and citation impact of oncological journals in the Web of Science and Scopus. Journal of Informetrics, 2, 304–316.

  • Lovell, C. A. K., & Schmidt, P. (1988). A comparison of alternative approaches to the measurement of productive efficiency. In A. Dogramaci & R. Färe (Eds.), Applications of modern production theory: Efficiency and productivity. Boston: Kluwer.

  • Meho, L. I., & Yang, K. (2007). Impact of data sources on citation counts and ranking of LIS faculty: Web of Science versus Scopus and Google Scholar. Journal of the American Society for Information Science and Technology, 58(13), 2105–2125.

  • Moed, H. F. (2010). Measuring contextual citation impact of scientific journals. Journal of Informetrics. doi:10.1016/j.joi.2010.01.002.

  • Norris, M., & Oppenheim, C. (2007). Comparing alternatives to the Web of Science for coverage of the social sciences’ literature. Journal of Informetrics, 1(2), 161–169.

  • Noruzi, A. (2005). Google Scholar: The new generation of citation indexes. Libri, 55(4), 170–180.

  • Palacios-Huerta, I., & Volij, O. (2004). The measurement of intellectual influence. Econometrica, 72(3), 963–977.

  • Pinski, G., & Narin, F. (1976). Citation influence for journal aggregates of scientific publications: Theory, with application to the literature of Physics. Information Processing & Management, 12(5), 297–312.

  • Pudovkin, A. I., & Garfield, E. (2004). Rank-normalized Impact Factor: A way to compare journal performance across subject categories. In Proceedings of the 67th ASIS&T annual meeting, 17 November 2004. http://www.garfield.library.upenn.edu/papers/asistranknormalization2004.pdf.

  • Pujol, F. (2008). Ranking journals following a matching model approach: An application to public economic journals. Journal of Public Economic Theory, 10(1), 55–76.

  • Rainer, K. R., & Miller, M. D. (2005). Examining differences across journal rankings. Communications of the ACM, 48(2), 91–94.

  • Ritzberger, K. (2008). A ranking of journals in economics and related fields. German Economic Review, 9(4), 402–430.

  • Schneider, F., & Ursprung, H. W. (2008). The 2008 GEA journal-ranking for the economics profession. German Economic Review, 9(4), 532–538.

  • Shephard, R. W. (1970). Theory of cost and production functions. Princeton: Princeton University Press.

  • Simar, L., & Wilson, P. W. (1998). Sensitivity analysis of efficiency scores: How to bootstrap in nonparametric frontier models. Management Science, 44(1), 49–61.

  • Simar, L., & Wilson, P. W. (2000). A general methodology for bootstrapping in non-parametric frontier models. Journal of Applied Statistics, 27(6), 779–802.

  • Simar, L., & Wilson, P. W. (2002). Non-parametric tests of returns to scale. European Journal of Operational Research, 139(1), 115–132.

  • Simar, L., & Wilson, P. W. (2008). Statistical inference in nonparametric frontier models: Recent developments and perspectives. In H. Fried, C. A. K. Lovell, & S. Schmidt (Eds.), The measurement of productive efficiency and productivity change (pp. 421–521). New York: Oxford University Press.

  • Theussl, S., & Hornik, K. (2009). Journal ratings and their consensus ranking. In Operations research proceedings 2008. Berlin: Springer-Verlag. doi:10.1007/978-3-642-00142-0_65.

  • Zitt, M., & Small, H. (2008). Modifying the journal Impact Factor by fractional citation weighting: The Audience Factor. Journal of the American Society for Information Science and Technology, 59(11), 1856–1860.


Acknowledgments

We would like to thank Professor Tibor Braun and the anonymous reviewers for their comments and suggestions on an earlier version of our paper. We would also like to thank Panayiotis Tzeremes for his assistance in collecting the journals’ information. Any remaining errors are solely the authors’ responsibility.

Author information

Correspondence to George Emm Halkos.

Appendix: methodology and statistical techniques applied

Based on the work of Koopmans (1951) and Debreu (1951), the production set Ψ constrains the production process and is the set of physically attainable points (x, y):

$$ \Psi = \left\{ (x,y) \in \Re_{+}^{N + M} \mid x\;\text{can produce}\;y \right\} $$
(4)

where \( x \in \Re_{ + }^{N} \) is the input vector and \( y \in \Re_{ + }^{M} \) is the output vector. For the input-oriented case, the efficiency score of a unit (here a journal) operating at the level (x, y) is defined as:

$$ \theta (x,y) = \inf \left\{ \theta \mid (\theta x, y) \in \Psi \right\} $$
(5)

DEA became popular when Charnes et al. (1978) introduced it to estimate Ψ under constant returns to scale (the CRS model). Later, Banker et al. (1984) introduced a DEA estimator allowing for variable returns to scale (the VRS model). In our case, when evaluating journals’ citation performance, input-oriented DEA models have been applied because input quantities appear to be the primary decision variables (Coelli and Perelman 1999; Coelli et al. 2005; Halkos and Tzeremes 2010). Both the quality of the papers appearing in a journal and the number of papers to be published (i.e. the number of issues and volumes) are subject to the editors’ decisions; the decision makers therefore have more control over the inputs than over the outputs. The CRS estimator of Charnes et al. (1978) can be calculated as:

$$ \hat{\Psi }_{\text{CRS}} = \left\{ (x,y) \in \Re^{N + M} \mid y \le \sum_{i = 1}^{n} \gamma_{i} y_{i} ,\; x \ge \sum_{i = 1}^{n} \gamma_{i} x_{i} \;\text{for}\;(\gamma_{1} , \ldots ,\gamma_{n})\;\text{such that}\;\gamma_{i} \ge 0,\; i = 1, \ldots ,n \right\} $$
(6)
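
In practice, the input-oriented efficiency score of Eq. 5 under the CRS estimator of Eq. 6 is obtained for each unit by solving a small linear program. The sketch below is not the authors’ code; it is a minimal illustration in Python using scipy.optimize.linprog, with hypothetical array names X (inputs) and Y (outputs), one row per journal.

```python
import numpy as np
from scipy.optimize import linprog

def dea_input_crs(X, Y):
    """Input-oriented CRS DEA scores (Eq. 6 plugged into Eq. 5)."""
    n, N = X.shape          # n journals, N inputs
    M = Y.shape[1]          # M outputs
    scores = np.empty(n)
    for o in range(n):
        # decision variables: [theta, gamma_1, ..., gamma_n]; minimise theta
        c = np.r_[1.0, np.zeros(n)]
        A_ub, b_ub = [], []
        for k in range(M):  # outputs: sum_i gamma_i * y_ik >= y_ok
            A_ub.append(np.r_[0.0, -Y[:, k]])
            b_ub.append(-Y[o, k])
        for j in range(N):  # inputs: sum_i gamma_i * x_ij <= theta * x_oj
            A_ub.append(np.r_[-X[o, j], X[:, j]])
            b_ub.append(0.0)
        bounds = [(None, None)] + [(0.0, None)] * n  # theta free, gamma_i >= 0
        res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                      bounds=bounds, method="highs")
        scores[o] = res.x[0]
    return scores
```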

The VRS estimator of Banker et al. (1984), which allows for variable returns to scale, can then be calculated as:

$$ \hat{\Psi }_{\text{VRS}} = \left\{ (x,y) \in \Re^{N + M} \mid y \le \sum_{i = 1}^{n} \gamma_{i} y_{i} ,\; x \ge \sum_{i = 1}^{n} \gamma_{i} x_{i} \;\text{for}\;(\gamma_{1} , \ldots ,\gamma_{n})\;\text{such that}\;\sum_{i = 1}^{n} \gamma_{i} = 1,\;\gamma_{i} \ge 0,\; i = 1, \ldots ,n \right\} $$
(7)
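
The VRS estimator differs from the CRS one only by the convexity constraint on the intensity weights γ. Continuing the hypothetical sketch above, the constraint can be passed to the solver as an equality:

```python
# VRS (Eq. 7): add the convexity constraint sum_i gamma_i = 1 to the CRS programme above
A_eq = np.array([np.r_[0.0, np.ones(n)]])   # coefficient 0 on theta, 1 on every gamma_i
b_eq = np.array([1.0])
res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
```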

The corresponding input-oriented DEA estimators of the efficiency scores are then obtained by plugging \( \hat{\Psi }_{\text{CRS}} \) and \( \hat{\Psi }_{\text{VRS}} \), respectively, into Eq. 5.

Simar and Wilson (1998, 2000, 2008) show that DEA estimators are biased by construction and introduce an approach based on bootstrap techniques (Efron 1979) to estimate and correct this bias. The bootstrap bias estimate for the original DEA estimator \( \hat{\theta }_{\text{DEA}} (x,y) \) can be calculated as:

$$ \widehat{\text{BIAS}}_{B} \left( {\hat{\theta }_{\text{DEA}} (x,y)} \right) = B^{ - 1} \sum\limits_{b = 1}^{B} {\hat{\theta }_{{{\text{DEA}},b}}^{*} (x,y) - \hat{\theta }_{\text{DEA}} (x,y)} $$
(8)

where \( \hat{\theta }_{{{\text{DEA}},b}}^{*} (x,y) \) are the bootstrap values and B is the number of bootstrap replications. A bias-corrected estimator of θ(x, y) can then be calculated as:

$$ \begin{aligned} \hat{\hat{\theta }}_{\text{DEA}} (x,y) = & \hat{\theta }_{\text{DEA}} (x,y) - \widehat{\text{BIAS}}_{B} \left( {\hat{\theta }_{\text{DEA}} (x,y)} \right) \\ = & 2\hat{\theta }_{\text{DEA}} (x,y) - B^{ - 1} \sum\limits_{b = 1}^{B} {\hat{\theta }_{{{\text{DEA}},b}}^{*} (x,y)} \\ \end{aligned} $$
(9)

However, according to Simar and Wilson (2008), this bias correction can introduce additional noise, so the sample variance of the bootstrap values \( \hat{\theta }_{{{\text{DEA}},b}}^{*} (x,y) \) needs to be calculated as:

$$ \hat{\sigma }^{2} = B^{ - 1} \sum\limits_{b = 1}^{B} {\left[ {\hat{\theta }_{{{\text{DEA}},b}}^{*} (x,y) - B^{ - 1} \sum\limits_{b = 1}^{B} {\hat{\theta }_{{{\text{DEA}},b}}^{*} (x,y)} } \right]^{2} } $$
(10)

We need to avoid the bias correction illustrated in (9) unless:

$$ \frac{{\left| {\widehat{\text{BIAS}}_{B} (\hat{\theta }_{\text{DEA}} (x,y))} \right|}}{{\hat{\sigma }}} > \frac{1}{\sqrt 3 } $$
(11)
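
Given the original DEA scores and their bootstrap replicates (the generation of the replicates themselves, i.e. the smoothed bootstrap of Simar and Wilson 1998, is not shown here), Eqs. 8–11 reduce to a few array operations. A minimal sketch, with hypothetical names theta_hat and theta_boot:

```python
import numpy as np

def bias_correct(theta_hat, theta_boot):
    """Bias correction of DEA scores following Eqs. 8-11.

    theta_hat  : (n,) original DEA efficiency scores
    theta_boot : (B, n) bootstrap values theta*_{DEA,b}(x, y)
    """
    bias = theta_boot.mean(axis=0) - theta_hat                  # Eq. 8
    theta_bc = theta_hat - bias                                 # Eq. 9: 2*theta_hat - mean of bootstrap values
    sigma = theta_boot.std(axis=0)                              # square root of Eq. 10
    use_correction = np.abs(bias) / sigma > 1.0 / np.sqrt(3.0)  # rule of Eq. 11
    return np.where(use_correction, theta_bc, theta_hat)
```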

Following Shephard (1970), the input distance function can be expressed as \( \hat{\delta }_{\text{DEA}} \left( {x,y} \right) \equiv \frac{1}{{\hat{\theta }_{\text{DEA}} \left( {x,y} \right)}} \); bootstrap confidence intervals for \( \hat{\delta }_{\text{DEA}} \left( {x,y} \right) \) can then be constructed as:

$$ \left[ {\hat{\delta }_{\text{DEA}} \left( {x,y} \right) - \hat{\alpha }_{1 - \alpha /2} ,\hat{\delta }_{\text{DEA}} \left( {x,y} \right) - \hat{\alpha }_{\alpha /2} } \right] $$
(12)
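
Equation 12 is built from the quantiles of the bootstrap deviations of the distance function from its original estimate. As a sketch under the same assumptions as above (delta_boot holds the bootstrap replicates of the distance function):

```python
import numpy as np

def delta_conf_int(delta_hat, delta_boot, alpha=0.05):
    """Bootstrap confidence interval for the input distance function (Eq. 12)."""
    dev = delta_boot - delta_hat                      # deviations delta* - delta_hat
    hi = np.quantile(dev, 1.0 - alpha / 2.0, axis=0)  # alpha_hat_{1 - alpha/2}
    lo = np.quantile(dev, alpha / 2.0, axis=0)        # alpha_hat_{alpha/2}
    return delta_hat - hi, delta_hat - lo             # lower and upper bounds per journal
```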

In order to choose between the results obtained by the CCR (Charnes et al. 1978) and BCC (Banker et al. 1984) models, we adopt the method introduced by Simar and Wilson (2002). We compute the DEA efficiency scores under both the CRS and the VRS assumption and, using the bootstrap algorithm, test the CRS specification against the VRS specification:

$$ H_{0} :\;\Psi^{\vartheta}\;\text{is CRS}\quad\text{against}\quad H_{1} :\;\Psi^{\vartheta}\;\text{is VRS} $$
(13)

The test statistic is given as:

$$ T\left( {X_{n} } \right) = \frac{1}{n}\sum\limits_{i = 1}^{n} {\frac{{\hat{\theta }_{{{\text{CRS}},n}} \left( {X_{i} ,Y_{i} } \right)}}{{\hat{\theta }_{{{\text{VRS}},n}} \left( {X_{i} ,Y_{i} } \right)}}} $$
(14)

The p-value of the null hypothesis can then be approximated by the proportion of bootstrap samples for which the test statistic does not exceed its observed value:

$$ p{\text{-value}} = \sum\limits_{b = 1}^{B} {\frac{{I\left( {T^{ * ,b} \le T_{\text{obs}} } \right)}}{B}} $$
(15)

where B = 2,000 is the number of bootstrap replications, I(·) is the indicator function, \( T^{*,b} \) are the bootstrap values of the test statistic and \( T_{\text{obs}} \) denotes the value computed from the original observations.
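
Once the CRS and VRS scores and the bootstrap values of the statistic (drawn under the null of constant returns to scale) are available, Eqs. 14 and 15 are direct to evaluate. A minimal sketch with hypothetical inputs theta_crs, theta_vrs and T_boot:

```python
import numpy as np

def rts_test(theta_crs, theta_vrs, T_boot):
    """Returns-to-scale test of Simar and Wilson (2002), Eqs. 14-15."""
    T_obs = np.mean(theta_crs / theta_vrs)   # Eq. 14: mean ratio of CRS to VRS scores
    p_value = np.mean(T_boot <= T_obs)       # Eq. 15: share of bootstrap statistics <= observed value
    return T_obs, p_value
```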

Cite this article

Halkos, G.E., Tzeremes, N.G. Measuring economic journals’ citation efficiency: a data envelopment analysis approach. Scientometrics 88, 979–1001 (2011). https://doi.org/10.1007/s11192-011-0421-y
