Category: Financial Econometrics and Empirical Market Microstructure

Setting VaR Limits Based on Portfolio Insurance and Quantile Hedging

These drawbacks are addressed in a dynamic model proposed by Straßberger (2002). In this model, the market risk of a stock portfolio[36] is managed through VaR limits in continuous time. The underlying idea combines portfolio insurance with synthetic put options (Rubinstein and Leland 1981) and “quantile hedging” (Föllmer and Leukert 1999). As in the model by Beeck et al. (1999), the annual risk limit is defined as the maximum cumulative loss over a year and is dynamically adjusted for the trader’s daily P&L. However, the annual risk limit is translated not into a daily VaR limit, but directly into a daily position limit using the daily VaR parameters.[37] The daily position limit is adjusted using a risk-aversion scalar (a_t) and, by construction, is equal to or smaller ...
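The excerpt does not reproduce Straßberger’s formulas, but the mechanism it describes (a remaining annual loss budget, daily VaR parameters, a risk-aversion scalar a_t) can be illustrated with a minimal sketch. The function and its arguments below are illustrative assumptions, not the model’s actual specification.

```python
def daily_position_limit(annual_risk_limit, cum_pnl, a_t,
                         daily_vol, z_alpha=2.33):
    """Hypothetical translation of an annual risk limit into a daily
    position limit, loosely following the idea described above.

    NOTE: the formula is an illustrative assumption, not the
    specification of Straßberger (2002).

    annual_risk_limit : maximum cumulative loss allowed over the year
    cum_pnl           : trader's cumulative P&L year-to-date
                        (losses shrink, gains extend the remaining budget)
    a_t               : risk-aversion scalar in (0, 1]
    daily_vol         : daily return volatility of the portfolio
    z_alpha           : standard normal quantile for the VaR confidence
                        level (2.33 ~ 99%)
    """
    remaining_budget = annual_risk_limit + cum_pnl   # loss budget still available
    daily_var_per_unit = z_alpha * daily_vol         # 1-day VaR per unit of exposure
    # Largest exposure whose daily VaR stays within the
    # risk-aversion-scaled remaining budget.
    return max(a_t * remaining_budget / daily_var_per_unit, 0.0)

# Example: 10m annual limit, 1.5m cumulative loss so far,
# a_t = 0.5, 2% daily volatility.
print(daily_position_limit(10e6, -1.5e6, 0.5, 0.02))
```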


Monte-Carlo Simulation Schema

One of the key requirements for the stress-testing model is the ability to estimate changes in the rating structure of the portfolio over time. The most obvious approach for this task is to incorporate migration matrices into the model. Because rating migration dynamics depend on the economic cycle, it is recommended to use different migration matrices for stress and expansion scenarios.

We propose the following Monte-Carlo simulation schema, which takes into account the proposed density function (3) and the migration matrices (an illustrative sketch is given after the list):

1. For the given macro-variable dynamics (from the macro-forecast) over the stress-testing period, conditional PDs are calculated [using (2)] for each rating class.

2. A normal random variable Z (the systematic factor) is generated.

3...
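Neither the density function (3) nor formula (2) is reproduced in this excerpt, so the sketch below assumes a standard one-factor (CreditMetrics-style) setup: migration thresholds are derived from a migration matrix taken to be already conditioned on the macro scenario, and each obligor’s latent variable mixes the systematic factor Z with an idiosyncratic shock. All names and parameters are illustrative.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def simulate_migrations(current_ratings, migration_matrix, rho, n_scenarios=10_000):
    """One-factor Monte Carlo pass over rating migrations (illustrative).

    current_ratings  : array of rating indices (0 = best, K-1 = default)
    migration_matrix : row-stochastic matrix P[i, j] = P(move from i to j),
                       assumed already conditioned on the macro scenario
    rho              : asset correlation with the systematic factor Z
    """
    P = np.asarray(migration_matrix, dtype=float)
    n_obligors = len(current_ratings)
    # Cumulative probability of ending in class j or worse, mapped to
    # standard-normal migration thresholds.
    cum_from_worst = np.cumsum(P[:, ::-1], axis=1)[:, ::-1]
    thresholds = norm.ppf(np.minimum(cum_from_worst, 1.0))  # column 0 -> +inf

    results = np.empty((n_scenarios, n_obligors), dtype=int)
    for s in range(n_scenarios):
        z = rng.standard_normal()                    # systematic factor (step 2)
        eps = rng.standard_normal(n_obligors)        # idiosyncratic shocks
        x = np.sqrt(rho) * z + np.sqrt(1.0 - rho) * eps
        for k, r in enumerate(current_ratings):
            # New class: the worst class whose threshold still exceeds x.
            results[s, k] = int(np.sum(thresholds[r] > x[k])) - 1
    return results

# Toy example: three rating classes (0 = good, 2 = default), two obligors.
P = np.array([[0.90, 0.08, 0.02],
              [0.10, 0.80, 0.10],
              [0.00, 0.00, 1.00]])
sims = simulate_migrations(np.array([0, 1]), P, rho=0.2, n_scenarios=1000)
print(sims.mean(axis=0))   # average simulated rating class per obligor
```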


Smoothing Data for Further Analysis and Preliminary Observations

In this work, several microstructure variables were examined, including

• Stock return and price

• Price change and its absolute value

• Spread and relative spread (ratio of spread to price)

Due to systematic noise in microstructure data, the data must be smoothed before further analysis. In this work, one of the modern wavelet methods was used. The basic principle of wavelet smoothing is to perform a wavelet decomposition and apply a “smoothing” transformation to the wavelet coefficients at a certain threshold level. By looking at the smoothed trajectory of a variable, we can already discern whether its behavior is regular or not (Antoniou and Vorlow 2005)...
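The excerpt does not state which wavelet family or threshold rule was used, so the sketch below applies a common choice: a Daubechies wavelet with soft (VisuShrink-style) thresholding of the detail coefficients via PyWavelets. The wavelet, level, and threshold are assumptions for illustration only.

```python
import numpy as np
import pywt

def wavelet_smooth(series, wavelet="db4", level=4):
    """Smooth a noisy series by soft-thresholding its wavelet detail
    coefficients (universal threshold).

    NOTE: the wavelet family, decomposition level, and threshold rule
    are illustrative choices, not those of the original study.
    """
    x = np.asarray(series, dtype=float)
    coeffs = pywt.wavedec(x, wavelet, level=level)
    # Noise scale estimated from the finest detail coefficients.
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2.0 * np.log(len(x)))
    # Keep the approximation coefficients, shrink the detail coefficients.
    denoised = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(denoised, wavelet)[: len(x)]

# Example: smoothing a noisy relative-spread series.
t = np.linspace(0, 1, 1024)
noisy = 0.002 + 0.001 * np.sin(4 * np.pi * t) + 0.0005 * np.random.randn(t.size)
smooth = wavelet_smooth(noisy)
```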


Comparison of Ratings: Methods and Algorithms

The rating process has some problems, such as

• A relatively small number of updated communicative ratings.

• Difficulty in comparing the estimates of different rating agencies.

• The absence of any integrative effect from the competing estimates available from independent agencies.

• A demand for wider use of independent rating estimates, primarily through modeling techniques.

We aim to make the independent estimates of different rating systems comparable. The elaboration and development of approaches and methods for doing so is especially urgent because of the synergy opportunities connected with the limitations mentioned above...


Macro Micro Polarity Management

As discussed above, the flux between immediate, visible risks and longer-term fragilities presents a perpetual challenge. It’s not a problem that can be solved with better statistical models. Indeed, better data and more precise analytics can lead to overconfidence. This was part of the problem in the subprime crisis (“we were busy looking at grains of sand through a microscope when the tsunami hit,” recalled a bank risk manager). This is a classic polarity management challenge. Polarities are interdependent opposites that power all complex systems. Barry Johnson’s seminal “Polarity Management: Identifying and Managing Unsolvable Problems” (1996) is an excellent primer.

Six Macro vs. Micro Risk Management Polarities

As you read the pairs below, consider which requires greater atte...


Examples of Using the Test

It is worth noting that, in general, the result may be significantly affected by whether the data are used in levels or in differences, since levels and differences often exhibit different degrees of rank correlation. In both examples below, the data are used as absolute changes (first differences) of the series shown on the relevant charts. Also, no abnormally high values are observed at the beginning or end of the series, which allows us to use the same weight for each observation, as in Brodsky et al. (2009) or Penikas (2012).

Following Penikas (2012), the test statistic was applied to detect a structural shift in quarterly observations of U.S. GDP from the first quarter of 1947 to the second quarter of 2012. There are 262 observations in total, and 261 observations for the differences...
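The exact test statistic of Brodsky et al. (2009) and Penikas (2012) is not reproduced in this excerpt. The sketch below applies a generic rank-based CUSUM statistic to first differences of a synthetic stand-in series, only to illustrate the levels-versus-differences workflow and the equal observation weights mentioned above.

```python
import numpy as np

def rank_cusum_statistic(x):
    """Illustrative rank-based CUSUM statistic for a single structural shift.

    NOTE: a generic nonparametric sketch, not the exact statistic of
    Brodsky et al. (2009) or Penikas (2012); equal observation weights
    are assumed, as in the discussion above.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    ranks = np.argsort(np.argsort(x)) + 1          # ranks 1..n
    centred = ranks - (n + 1) / 2.0                # zero mean under no shift
    cusum = np.cumsum(centred) / (n * np.sqrt(n))  # normalised partial sums
    k_hat = int(np.argmax(np.abs(cusum))) + 1      # candidate break point
    return np.max(np.abs(cusum)), k_hat

# Synthetic stand-in for a quarterly series (the real GDP data are not
# reproduced here); the series is used in first differences, as in the text.
levels = np.cumsum(np.random.default_rng(1).normal(50, 10, 262)) + 2000
diffs = np.diff(levels)                            # 261 observations
stat, k = rank_cusum_statistic(diffs)
print(f"max |CUSUM| = {stat:.4f} at observation {k}")
```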
