This article analyzes bank bankruptcy regimes across 142 countries. Employing factor analysis, we identify five main dimensions of bank bankruptcy frameworks: (1) difficulty of forbearance and ease of court appeal, (2) availability of supervisory tools, (3) court involvement, (4) supervisory powers with respect to managers, and (5) supervisory powers with respect to shareholders and the preinsolvency phase. We then use cluster analysis to group countries into two prevalent types of bank bankruptcy framework: court-led and administrative regimes. Compared with court-led regimes, administrative bank bankruptcy regimes are associated with less court involvement in the resolution process, a lower likelihood of forbearance, a greater possibility of court appeal, greater availability of supervisory tools, weaker supervisory powers with respect to managers, stronger supervisory powers with respect to shareholders, and the presence of a preinsolvency phase. Administrative regimes are also associated with weaker creditor rights, lower government effectiveness, and lower institutional quality than court-led regimes. We find some evidence that the type and main dimensions of a bank bankruptcy regime are related to the occurrence and severity of the global financial crisis.
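A minimal sketch of the factor-plus-cluster approach described above, assuming a hypothetical country-by-indicator matrix; the data, indicator count, and clustering choice (k-means) are illustrative stand-ins, not the paper's actual survey items or estimation settings:

```python
# Sketch: extract latent dimensions of bank bankruptcy frameworks with
# factor analysis, then group countries into regime types with clustering.
# The input matrix X (countries x regulatory indicators) is a placeholder.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import FactorAnalysis
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(142, 20))          # placeholder: 142 countries, 20 survey indicators

X_std = StandardScaler().fit_transform(X)

# Step 1: identify latent dimensions (the abstract reports five).
fa = FactorAnalysis(n_components=5, rotation="varimax", random_state=0)
scores = fa.fit_transform(X_std)        # country scores on the five factors

# Step 2: cluster countries on their factor scores into two regime types
# (e.g. court-led vs. administrative).
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
regime = kmeans.fit_predict(scores)

print(fa.components_.shape, np.bincount(regime))
```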
COBISS.SI-ID: 22101222
This paper identifies the main dimensions of capital regulation. We use survey data from 142 countries drawn from the World Bank's (2013) database covering various aspects of bank regulation. Using multiple exploratory factor analyses, we identify two main dimensions of capital regulation: the complexity of capital regulation and the stringency of capital regulation. We show that even countries with a common legal and regulatory framework differ substantially in terms of capital regulation. For example, the stringency of capital regulation varies substantially across EU countries, potentially distorting the level playing field.
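As a companion sketch of the two-factor extraction described above, the following assumes invented survey items and simulated responses; the item names, factor labels, and country subgrouping are hypothetical:

```python
# Sketch: two-factor exploratory factor analysis on survey items about
# capital regulation, then inspection of country scores on each dimension.
# The survey items and responses below are placeholders, not the World Bank data.
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)
items = [f"cap_reg_item_{i}" for i in range(12)]     # hypothetical survey items
data = pd.DataFrame(rng.integers(0, 4, size=(142, 12)), columns=items)

Z = StandardScaler().fit_transform(data)

fa = FactorAnalysis(n_components=2, rotation="varimax", random_state=1)
scores = pd.DataFrame(fa.fit_transform(Z), columns=["complexity", "stringency"])

# Which items load on which dimension, and how dispersed the stringency
# scores are (e.g. within a subgroup such as the EU countries).
loadings = pd.DataFrame(fa.components_.T, index=items,
                        columns=["complexity", "stringency"])
print(loadings.round(2))
print(scores["stringency"].describe())
```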
COBISS.SI-ID: 22176742
In this paper, we provide original evidence on the economic role of placement agents as financial intermediaries between general and limited partners in private equity. Our research is based on 902 private equity funds raised between 1990 and 2011. The data show that general partners hire placement agents to help raise funding for approximately one tenth of the private equity funds they manage. The multitude of services provided by placement agents adds value to both general and limited partners. We find a positive effect of the placement agent's relative fees on a fund's performance. As with other financial intermediaries, the cost of the placement agent decreases with the investment amounts committed by the limited partners. Fee levels are determined by negotiations with the general partner as well as by free riding. Further, we find that placement agents do not exploit the heterogeneity of fund returns and the potentially high benefits of successful investment picks: they predominantly prefer to charge their clients fixed fees. We conclude from our findings that limited partners succeed in picking better-performing funds because they invest relatively larger shares of their available private equity allocations in the funds that yield higher returns.
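A hedged sketch of the kind of cross-sectional regression that could underlie the reported fee-performance link; the variable names, controls, and simulated data are illustrative only and are not the paper's specification:

```python
# Sketch: regress a fund performance proxy on the placement agent's relative
# fee with simple controls. All variables below are simulated placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 902  # number of funds in the sample described above
df = pd.DataFrame({
    "net_irr": rng.normal(0.12, 0.10, n),          # hypothetical performance measure
    "agent_rel_fee": rng.uniform(0.0, 0.03, n),    # agent fee relative to fund size
    "log_fund_size": rng.normal(5.5, 1.0, n),
    "vintage": rng.integers(1990, 2012, n),
})

# Performance on relative fees, controlling for size and vintage-year effects.
model = smf.ols("net_irr ~ agent_rel_fee + log_fund_size + C(vintage)", data=df)
res = model.fit(cov_type="HC1")                    # heteroskedasticity-robust SEs
print(res.params["agent_rel_fee"], res.pvalues["agent_rel_fee"])
```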
COBISS.SI-ID: 22207718
As a generalization of the factor-augmented VAR (FAVAR) and of the Error Correction Model (ECM), Banerjee and Marcellino (2009) introduced the Factor-augmented Error Correction Model (FECM). The FECM combines error correction, cointegration and dynamic factor models, and has several conceptual advantages over the standard ECM and FAVAR models. In particular, it uses a larger dataset than the ECM and incorporates the long-run information that the FAVAR misses because of its specification in differences. In this paper, we examine the forecasting performance of the FECM by means of an analytical example, Monte Carlo simulations and several empirical applications. We show that the FECM generally offers higher forecasting precision than the FAVAR and marks a useful step forward for forecasting with large datasets.
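A stylized statement of an FECM-type specification, with notation simplified and lag orders left generic rather than taken from Banerjee and Marcellino's paper:

```latex
% Stylized FECM: a small set of variables of interest x_{A,t} is modeled
% jointly with factors f_t extracted from the full large dataset.
\[
\Delta x_{A,t}
  = \alpha_A \,\beta'
    \begin{pmatrix} x_{A,t-1} \\ f_{t-1} \end{pmatrix}
  + \sum_{j=1}^{p} \Gamma_j \,\Delta x_{A,t-j}
  + \sum_{j=1}^{q} \Lambda_j \,\Delta f_{t-j}
  + \varepsilon_t
\]
```

In this form, the error-correction term involving both the variables of interest and the lagged factors retains the long-run (cointegration) information that a FAVAR in differences discards, while the factors summarize the large dataset that a standard ECM cannot accommodate.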
COBISS.SI-ID: 21575654
In this paper we address the problem of projecting mortality when data are severely affected by random fluctuations, due in particular to a small sample size, or when data are scanty. Such situations may emerge when dealing with small populations, such as small countries (possibly previously part of a larger country), a specific geographic area of a (large) country, a life annuity portfolio or a pension fund, or when the investigation is restricted to the oldest ages. The critical issues arising from the volatility of data due to the small sample size (especially at the highest ages) may be made worse by missing records; this is the case, for example, for a small country previously part of a larger country, or for a specific geographic area of a country, given that in some periods mortality data may have been collected only at an aggregate level. We suggest 'replicating' the mortality of the small population by appropriately mixing mortality data obtained from other populations. We design a two-step procedure. First, we obtain the average mortality of 'neighboring' populations. Three alternative approaches are tested for the assessment of the average mortality, while the identification and the weights of the neighboring populations are obtained through (standard) optimization techniques. Then, following a sort of credibility approach, we mix the original mortality data of the small population with the average mortality of the neighboring populations. In principle, the approach described in the paper could be adopted for any population, whatever its size, with the aim of improving mortality projections through information collected from other groups. Through backtesting, we show that the procedure we suggest is advantageous for small populations, but not necessarily for large populations, nor for populations that do not show noticeable erratic effects in their data. This finding can be explained as follows: while the replication of the original data increases the sample size, it also involves a smoothing of the data, with a possible loss of information specific to the group in question. In the case of small populations showing major erratic movements in mortality data, the advantages gained from the larger sample size outweigh the disadvantages of the smoothing effect.
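A minimal numerical sketch of the two-step idea, assuming simulated death rates; the distance measure, weight normalization, and fixed credibility factor are simplified stand-ins for the alternative approaches tested in the paper:

```python
# Sketch of the two-step procedure: (1) build a weighted average of
# 'neighboring' populations' mortality by optimizing the weights,
# (2) blend it with the small population's own rates, credibility-style.
# All mortality rates below are simulated placeholders.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
ages = np.arange(60, 100)
# Noisy rates for the small population, smoother rates for 4 candidate neighbors.
m_small = 0.01 * np.exp(0.09 * (ages - 60)) * rng.lognormal(0, 0.15, ages.size)
m_neigh = 0.01 * np.exp(0.09 * (ages - 60))[None, :] * rng.lognormal(0, 0.03, (4, ages.size))

# Step 1: choose neighbor weights that best reproduce the observed (noisy)
# small-population mortality, here on the log scale.
def loss(w):
    w = np.abs(w) / np.abs(w).sum()              # normalize to a convex combination
    return np.sum((np.log(m_neigh.T @ w) - np.log(m_small)) ** 2)

w0 = np.full(m_neigh.shape[0], 1.0 / m_neigh.shape[0])
w = np.abs(minimize(loss, w0).x)
w /= w.sum()
m_avg = m_neigh.T @ w                            # average mortality of the neighbors

# Step 2: credibility-style mix of the original data and the neighbors' average.
z = 0.3                                          # illustrative credibility factor
m_replicated = z * m_small + (1 - z) * m_avg
print(w.round(3), m_replicated[:5].round(4))
```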
COBISS.SI-ID: 22307558