
Risk management in an enterprise cannot be reduced to a set of one-off actions; it is a continuous process of directed activity. Moreover, to produce results, the risk management process must be embedded in overall business management.

Accordingly, the risk management process consists of a specific set of stages. It should be noted that in practice these stages are not necessarily executed in strict sequence; some may run in parallel. The general scheme of risk management is presented in Figure 4.1.

As the figure shows, there is a general sequence of actions reflecting the logic of the risk management process (thick arrows). There are also feedback links between stages: from any stage it is possible to return to a previous one. At the final stage, as we will see below, an overall assessment and analysis of the process is carried out, and its results are taken into account in the subsequent execution of every stage of the risk management process. This is shown by the arrows on the right.

At stage 3, decisions are made about the risk management methods to be used, which may require refining the information about risks (stage 1) or may determine the design of the monitoring process (stage 5).

So much for the logic of the sequence of risk management stages in an enterprise. Now let us look at each of these stages in a little more detail.

Stage 1. Risk identification and analysis. Risk identification is understood as revealing risks and their specific character, arising from their nature and other distinguishing features; identifying the particulars of their realization, including the amount of economic damage; and studying changes in risks over time, the degree of interrelation between them, and the factors that influence them. This process involves determining the following points:

    sources of uncertainty and risk;

    consequences of risk realization;

    information sources;

    numerical determination of risk.

At this stage, first of all, an information base is created for the rest of the risk management process: information about the risk and its consequences, the amount of economic damage, a quantitative assessment of risk parameters, and so on. It should also be noted that risk identification and analysis is not a one-time set of actions. Rather, it is a continuous process carried out throughout the entire risk management cycle.

Stage 2. Analysis of risk management alternatives. There is a whole range of methods for reducing the degree of risk and the amount of damage. At this stage these methods are reviewed and analyzed with respect to the specific situation: the manager decides how to reduce risk and losses should a risk event occur, and looks for sources to cover the resulting damage.

The risk management methods themselves are quite diverse. This is due to the ambiguity of the concept of risk and the large number of criteria by which risks can be classified. In the next section of this chapter we will look at the main methods in more detail; here we limit ourselves to a brief overview.

First, risk management approaches can be grouped according to how they minimize the negative impact of adverse events, as follows.

    Risk avoidance (Risk elimination) is a set of measures leading to complete avoidance of the adverse consequences of a risk situation.

    Risk reduction (Risk mitigation) consists of actions that reduce the damage. In this case the company retains the risks (Risk retention, Risk assumption).

    Risk transfer comprises measures that shift responsibility for, and compensation of, the damage arising from a risk situation onto another entity.

    From another point of view, risk management methods can be classified by the relationship between the timing of the control measures and the occurrence of the risk situation.

    Pre-event risk management methods are measures taken in advance, aimed at changing the essential parameters of a risk (the probability of occurrence, the extent of damage). These include risk transformation methods (Risk control, Risk control to stop losses), which are mainly concerned with preventing the realization of risk. Typically, these methods are associated with preventive measures.

    Post-event risk management methods are carried out after damage has occurred and are aimed at eliminating its consequences, i.e., at generating the financial sources used to cover the damage. These are mainly risk financing methods (Risk financing, Risk financing to pay for losses).

In graphical form, both classifications given here are presented in Figure 4.2.

Stage 3. Selection of risk management methods. Here the manager forms the company's anti-risk policy, as well as a policy aimed at reducing the degree of uncertainty in its operations. The main issues to be resolved are the following:

    selection of the most effective risk management methods;

    determining the impact of the selected program on the overall risk in the organization’s activities.

In essence, the choice of risk management methods comes down to solving an economic-mathematical model in which the criteria and constraints are the economic and probabilistic characteristics of risk (determined at the first stage of the risk management process). Other parameters, for example technical or social ones, can also be added.

When developing a risk management system, a manager must above all take into account the principle of its effectiveness: control actions should focus not on all risks, but primarily on those with the greatest impact on the company's activities. Under budget constraints, say, the most insignificant risks should be discarded to save resources (a passive strategy), while the freed-up funds are used for intensive work on the more serious risks (an active strategy).
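The passive/active split described above can be sketched numerically. The sketch below is my own illustration with made-up risks; ranking by expected loss (probability times impact) is one common, simple significance measure, not necessarily the one the text has in mind.

```python
# Hypothetical illustration of the passive/active strategy: rank risks by
# expected annual loss (probability x damage) and actively manage only the
# most significant ones within a limited budget. All numbers are invented.

risks = [  # (name, probability per year, damage if realized)
    ("supplier failure", 0.10, 500_000),
    ("minor IT outage", 0.50, 10_000),
    ("warehouse fire", 0.01, 2_000_000),
    ("exchange-rate swing", 0.30, 200_000),
]

# Sort by expected loss, largest first.
ranked = sorted(risks, key=lambda r: r[1] * r[2], reverse=True)

budget = 2  # resources suffice for active work on the top two risks only
active = [name for name, _, _ in ranked[:budget]]   # active strategy
passive = [name for name, _, _ in ranked[budget:]]  # passive strategy: accept

print(active)   # the two risks with the largest expected loss
print(passive)
```

With these inputs the exchange-rate swing (expected loss 60,000) and supplier failure (50,000) are managed actively, while the fire and IT-outage risks are retained.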

The result of this stage is the enterprise's risk management program: a detailed description of the measures to be taken, their resource and information support, the criteria for evaluating the program's effectiveness, the distribution of responsibilities, and so on.

Stage 4. Execution of the selected risk management method. Here the program developed at the previous stage is directly implemented. The issues that are resolved at this stage relate to the technical specifics of the decisions made. The main ones are the following:

    specific activities to be implemented;

    the timing of these activities;

    sources and composition of resources necessary to carry out this work;

    identification of responsible persons.

In this way, contradictions and ambiguity in planning and monitoring the execution of the risk management program are eliminated.

Stage 5. Monitoring results and improving the risk management system. This stage implements feedback in the risk management system. The first task of this feedback is to determine the overall effectiveness of the system as a whole. In addition, the bottlenecks and weak points of risk management at the enterprise are identified.

The second task is to analyze the risks realized during the period. Here the reasons for their realization, and any associated changes to the risk management program, should be identified.

As its name suggests, this stage is aimed not only at monitoring the risk management process, but also at identifying improvements that can raise the system's efficiency. To the tasks indicated above we can therefore add the following questions, which the manager addresses when implementing this stage:

    the contribution of each implemented activity to the overall effectiveness of the system;

    possible adjustments to the composition of these activities;

    flexibility and efficiency of the decision-making system.

Among other things, at this stage the information base about risks is replenished. The updated information is used in the next cycle of the risk management process.

A feature of efficiency calculations at this stage is the use of hypothetical losses. This is because during the period under review the risks may not have materialized at all, while the costs of operating the risk management system are incurred in any case. If only actual losses were considered, the ratio of losses to costs would in some cases indicate zero efficiency of the risk management system, even though the very absence of losses may be evidence of its high efficiency.

The main goal of assessing the effectiveness of the implemented measures is to adapt the system to a changing external environment. This is achieved, first of all, through the following changes.

    Replacing ineffective measures with more effective ones (within existing restrictions).

    Changing the organization of execution of the risk management program.

In recent decades the world economy has regularly been drawn into the whirlpool of financial crises. The crises of 1987, 1997, and 2008 almost led to the collapse of the existing financial system, which is why leading experts began to develop methods for controlling the uncertainty that dominates the financial world. The Nobel Prizes of recent years (awarded, for instance, for the Black-Scholes model) show a clear tendency toward mathematical modeling of economic processes and attempts to predict market behavior and assess its stability.

Today I will try to talk about the most widely used method for predicting losses - Value at Risk (VaR).

Concept of VaR

An economist's definition of VaR is as follows: "an estimate, expressed in monetary units, of the amount that the losses expected over a given period of time will not exceed with a given probability." Essentially, VaR is the loss on an investment portfolio over a fixed period of time in the event that some unfavorable event occurs. "Unfavorable events" can mean various crises and poorly predictable factors (changes in legislation, natural disasters, ...) that can affect the market. One, five, or ten days is usually chosen as the time horizon, because predicting market behavior over a longer period is extremely difficult. The acceptable risk level (essentially a confidence level) is taken to be 95% or 99%. The currency in which losses are measured is, of course, also fixed.
When calculating this value, the market is assumed to behave "normally." Graphically, VaR corresponds to a cutoff quantile in the left tail of the distribution of portfolio outcomes.
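Under the normality assumption just mentioned, a one-day VaR can be computed in a couple of lines. The sketch below is my own minimal illustration (a "delta-normal" calculation with invented portfolio numbers), not a method prescribed by the text.

```python
# Minimal parametric VaR sketch under the normality assumption.
# Portfolio value and volatility are hypothetical.
from statistics import NormalDist

portfolio_value = 1_000_000   # portfolio value in the chosen currency
daily_vol = 0.01              # assumed one-day standard deviation of returns (1%)
confidence = 0.99             # acceptable risk level

# Quantile of the standard normal distribution (about 2.326 for 99%).
z = NormalDist().inv_cdf(confidence)

var_1d = portfolio_value * daily_vol * z
print(round(var_1d))  # 23263: with 99% probability, one-day losses stay below this
```

For a longer horizon the one-day figure is commonly scaled by the square root of time, e.g. a ten-day VaR of roughly `var_1d * 10 ** 0.5`.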

VaR calculation methods

Let's consider the most commonly used methods for calculating VaR, as well as their advantages and disadvantages.
Historical modeling
In historical modeling, we take the financial fluctuations of the portfolio already known from past measurements. For example, suppose we have the portfolio's performance over the previous 200 days and decide to calculate VaR from it. We assume that on the next day the portfolio will behave the same way as on one of those previous days. This gives us 200 possible outcomes for the next day. We then sort these outcomes and, depending on the level of acceptable risk we have chosen, take the corresponding percentile of this empirical distribution; that quantile is the value that interests us.
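The procedure above can be sketched in a few lines. This is my own illustration: the "history" is simulated random data standing in for 200 observed daily returns, so only the mechanics, not the numbers, are meaningful.

```python
# Hypothetical sketch of historical-simulation VaR: apply each past daily
# return to today's portfolio value and read off the loss quantile.
import random

random.seed(0)  # reproducible stand-in data
portfolio_value = 1_000_000

# Stand-in for 200 observed daily returns (in practice, real market history).
returns = [random.gauss(0.0005, 0.01) for _ in range(200)]

# 200 hypothetical P&L outcomes for tomorrow, sorted from worst to best.
pnl = sorted(r * portfolio_value for r in returns)

confidence = 0.95
index = int((1 - confidence) * len(pnl))  # 5th percentile of 200 -> index 10
var_95 = -pnl[index]  # VaR is the loss magnitude at that quantile

print(var_95 > 0)  # a positive loss estimate
```

Note that no distributional assumption is needed here; the empirical distribution of past outcomes is used directly.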

The disadvantage of this method is that it cannot produce predictions for portfolios about which we have no information. A problem can also arise if the composition of the portfolio changes significantly over a short period of time.


Leading component method
For each financial portfolio one can calculate a set of characteristics that help assess the potential of its assets. These characteristics are called leading components and are usually a set of partial derivatives of the portfolio price. To calculate the value of a portfolio, the Black-Scholes model is usually used, which I will try to cover next time. In a nutshell, the model expresses the value of a European option as a function of time and of the current price of the underlying asset. Based on the model's behavior, we can evaluate the option's potential by analyzing the function with classical methods of mathematical analysis (convexity/concavity, intervals of increase/decrease, etc.). VaR is then calculated for each component, and the resulting value is constructed as a combination (usually a weighted sum) of the individual estimates.
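As a small hedged illustration of the model mentioned above: the closed-form Black-Scholes price of a European call and its first partial derivative with respect to the underlying (the delta, one such "leading component"). The inputs echo the option example used later in this text, but the zero interest rate is my own simplifying assumption.

```python
# Black-Scholes price and delta of a European call (sketch; r = 0 assumed).
from math import exp, log, sqrt
from statistics import NormalDist

N = NormalDist().cdf  # standard normal CDF

def bs_call(S, K, T, r, sigma):
    """S: spot, K: strike, T: years to expiry, r: rate, sigma: annual vol."""
    d1 = (log(S / K) + (r + sigma**2 / 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    price = S * N(d1) - K * exp(-r * T) * N(d2)
    delta = N(d1)  # partial derivative of price w.r.t. the underlying
    return price, delta

price, delta = bs_call(S=100.0, K=100.0, T=30 / 365, r=0.0, sigma=0.191)
print(round(price, 2), round(delta, 2))
```

For an at-the-money 30-day call at 19.1% volatility this gives a price of a little over 2 and a delta of about 0.51.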

Naturally, these are not the only methods for calculating VaR. There are both simple linear and quadratic price prediction models, as well as the rather more complex variance-covariance method, which I have not discussed here; those interested can find descriptions of the methods in the books below.

Criticism of the technique

It is important to note that calculating VaR accepts the hypothesis of normal market behavior; yet if this assumption were correct, crises would occur once every seven thousand years, which, as we can see, is absolutely untrue. Nassim Taleb, the well-known trader and mathematician, severely criticizes the existing risk assessment system in his books "Fooled by Randomness" and "The Black Swan," and proposes an alternative risk calculation system based on the lognormal distribution.

Despite the criticism, VaR is used quite successfully in all major financial institutions. It is worth noting that the approach is not always applicable, which is why other methods with a similar idea but a different calculation procedure have been created (for example, SVA).

In response to the criticism, modifications of VaR have been developed, based either on other distributions or on other methods of calculation at the peak of the Gaussian curve. But I will try to talk about this another time.

If we divide the factors to be analyzed into primary and secondary, it turns out that every business contains a great variety of both, and few people know them all. Therefore, in the early 1990s the management of J.P. Morgan gave its "risk managers" the task of finding an easy-to-understand format that would aggregate and unify the primary and secondary risks of different business areas. This is how Value-at-Risk, better known as VaR, came into being. Today it is a standard risk control tool.

The return and risk profile of some financial instruments is distributed linearly. Let's say you bought a stock, and for every unit change in its price, the outcome of your position will change by the same number of units. This is an example of a primary risk. Changes in the prices of derivative instruments also mainly depend on changes in the prices of the underlying assets (in our example, stocks). However, they are also sensitive to changes in other variables that we discussed in the options chapter, such as changes in volatility and interest rates, and changes in timing. These are some secondary variables. Because of them, the prices of derivative instruments do not change linearly in relation to the price of the underlying asset.

Management would probably never have faced the question of creating VaR had derivative instruments not appeared, for example options, whose price depends non-linearly on the variables that determine it. It is important for the reader to grasp that a portfolio of loans is in essence the same kind of portfolio as a portfolio of options, only written on loans. We will discuss the details later; in this chapter we demonstrate the principles, capabilities, and limitations of the model using a simpler asset.

The role of correlation in reporting should also be noted. The management of a large bank needs two or three simple reports covering a huge number of positions in different products. If you "drive" them all into one model, then even at today's computing speeds the data processing would take too long. It is easier to start from certain underlying assets and supplement them with a matrix of correlations to other assets, even when there are few positions, as in the example where you bought LUKOIL shares and sold Rosneft shares. The system should estimate the correlation and gauge how much you could lose if prices do not behave as you expected. If you ignore the correlation and treat the risks of the two stocks as independent, you will actually overstate them, since in practice they move in the same direction most of the time. Finding a statistically justified size of potential maximum losses is precisely the main task of Value-at-Risk. The term can be translated as a cost measure of risk.
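The correlation point can be shown with two made-up numbers (my own sketch, not from the text): for a long position in one stock and a short position in a positively correlated one, the portfolio standard deviation shrinks as correlation rises, so treating the legs as independent overstates the risk.

```python
# Hypothetical long-LUKOIL / short-Rosneft example of the correlation effect.
from math import sqrt
from statistics import NormalDist

z = NormalDist().inv_cdf(0.99)  # 99% confidence quantile

long_leg = 1_000_000 * 0.02   # daily stddev of the long leg, currency units
short_leg = 800_000 * 0.02    # daily stddev of the short leg
rho = 0.8                     # assumed correlation between the two stocks

# Long one stock, short the other: positive correlation offsets the legs.
sigma_hedged = sqrt(long_leg**2 + short_leg**2 - 2 * rho * long_leg * short_leg)
sigma_naive = sqrt(long_leg**2 + short_leg**2)  # correlation ignored

var_hedged = z * sigma_hedged
var_naive = z * sigma_naive

print(var_naive > var_hedged)  # ignoring correlation overstates the risk
```

With these numbers the "independent" standard deviation is more than twice the correlated one, which is exactly the overstatement the text describes.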

More precisely, VaR is the maximum amount of losses:

  • for an unchanged position;
  • over a given period of time (the standard horizon is one to ten days);
  • for a given implied volatility;
  • for a given confidence level (a number of standard deviations from the mean).

The main variations in constructing VaR concern the estimate of expected volatility and the number of standard deviations. The first parameter gives the most likely size of the losses to be expected within roughly 2/3 of the given period of time; the second captures the maximum deviation during the remaining 1/3.

Volatility, or price variability, is what statistics calls the "standard deviation." The models use our old friend, expected volatility, calculated as the estimated (expected) dispersion of closing prices over a given period of time.
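For the historical counterpart of this quantity, the sketch below (my own illustration with invented closing prices) computes the "standard deviation" the text refers to from a series of daily closes and annualizes it.

```python
# Hypothetical sketch: annualized historical volatility from daily closes.
from math import log, sqrt
from statistics import stdev

closes = [100.0, 101.2, 100.5, 102.0, 101.1, 103.0, 102.4, 104.1]

# Daily log returns between consecutive closes.
returns = [log(b / a) for a, b in zip(closes, closes[1:])]

daily_vol = stdev(returns)            # sample standard deviation of returns
annual_vol = daily_vol * sqrt(252)    # ~252 trading days per year assumed

print(annual_vol > daily_vol)
```

Expected (implied) volatility, by contrast, is not computed from history but backed out of option prices; the models discussed here prefer it where it is available.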

VaR calculation examples

Let's say you sold an option on a rise in the price of stock X (a call option on stock X). Your portfolio now consists of one sold call; the stock price is 100.0, the strike is 100.0, expected volatility is 19.1%, and the call expires in 30 days. A volatility of 19.1% implies that on roughly 2/3 of the days in the period under consideration (30 days), the one-day deviation of the stock's market price (the one-day standard deviation) will be within about ±1%.
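The ±1% figure follows from scaling the annual volatility down to one day, since volatility scales with the square root of time. The calendar-day (365) convention is my assumption; it is what reproduces the text's number.

```python
# Why 19.1% annual volatility corresponds to about a 1% one-day move.
from math import sqrt

annual_vol = 0.191
daily_vol = annual_vol / sqrt(365)  # calendar-day convention assumed

print(round(daily_vol, 4))  # 0.01, i.e. about +/-1% per day
```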

How many standard deviations should be used to calculate VaR? In other words, how do you capture the price movements in the remaining 1/3 of the time horizon that exceed the volatility expected by the market? Most students of statistics know what a bell curve is, and that in a normal distribution over 99% of events fall within three standard deviations. In practice, however, the figure is closer to four standard deviations (Table 1), so four should be used to capture the movements that the normal distribution does not explain.

Table 1. Revaluation of an option (see example) when the price of the underlying asset changes (the next day)

The value of the underlying asset is not the only quantity that changes within the time horizon: the expected volatility of the option can also fall or rise. Accordingly, the model should be tested at different levels of expected volatility.

For example, the model might limit the change in volatility to 15%. This means that if the expected volatility is currently 19.1%, then the next day it will lie within the range (16.61%, 21.97%), i.e., 19.1% divided by and multiplied by 1.15. Let us revalue our portfolio under the new constraints (Tables 2 and 3).

Table 2. Option revaluation when volatility changes (next day)

By comparing these data, you can work through a grid of values that defines the value of the portfolio over intervals ranging from unchanged to extreme for the period in question (the next day).

Table 3. Revaluation of an option when both the price of the underlying asset and volatility change

By subtracting the current value of the portfolio from the results obtained, we obtain a series of revaluations for all variations for the period under consideration (Table 4).

Table 4. The financial result of option revaluation when both the price of the underlying asset and volatility change

The revaluation showing the maximum loss (-2.81) is the VaR for a one-day period at a 98% confidence level (with the underlying asset at 104 points and volatility at 21.97%).

Many products have not only a spot price but also forward curves, i.e., prices of the same product for future delivery, which fluctuate even when the spot is stable. In the foreign exchange market, for example, forward curves result from the relationship between the interest rates of two currencies. In the case of commodity futures, forward curves reflect a forecast of future market conditions: the forward curve changes, for instance, when expectations about a supply shortage at the contract's expiration date change. In addition to the forward curves of the underlying asset (the term structure of prices), there are forward volatility curves (the term structure of volatility). To simplify the calculation of VaR, it is recommended to modify the forward price of each period using the appropriate standard deviation.
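The grid revaluation behind Tables 1-4 can be sketched as follows. This is my own reconstruction: Black-Scholes repricing with a zero interest rate is an assumption, so the exact figures of the original tables (including the -2.81) are not reproduced here, only the mechanics.

```python
# Sketch of the price x volatility revaluation grid for the sold call
# (spot 100, strike 100, 30 days, vol 19.1%). Black-Scholes with r = 0
# is an assumed repricing model, not necessarily the one in the tables.
from math import log, sqrt
from statistics import NormalDist

N = NormalDist().cdf

def call_price(S, K, T, sigma):
    d1 = (log(S / K) + sigma**2 * T / 2) / (sigma * sqrt(T))
    return S * N(d1) - K * N(d1 - sigma * sqrt(T))

K, T0 = 100.0, 30 / 365
current = -call_price(100.0, K, T0, 0.191)  # sold call: position value is negative

# Revalue one day later over a grid of spot and volatility scenarios.
T1 = 29 / 365
spots = [96, 98, 100, 102, 104]
vols = [0.1661, 0.191, 0.2197]

pnl = [-call_price(S, K, T1, v) - current for S in spots for v in vols]
var_1d = min(pnl)  # worst revaluation on the grid = one-day VaR of the position

print(var_1d < 0)
```

Since a call gains value as both the spot and the volatility rise, the worst cell for the seller is the corner with the highest spot and the highest volatility, just as in the text's example.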

By analogy, volatility varies along the entire forward curve.

By combining the underlying asset and volatility curves, we obtain the desired risk matrix based on price fluctuations of the underlying asset, its volatility and forward curves.

Model Variations

Note that all calculations for trading divisions are carried out for a given period, usually one day. In practice, however, the market can move in one direction for much longer, so maximum losses can follow one after another for several days, and their number, as the price dynamics of autumn 2008 showed, can be significant. Therefore, calculations for management are prepared for a ten-day period. This is, however, a rather conservative approach, since under negative dynamics the position can also change: traders can reduce positions, and credit departments can sell part of the portfolio. In that case the predicted losses may decrease.

Because there are different formulas for estimating volatility and the required number of standard deviations, when you hear that a given position could lose, say, $10 million, that does not mean it carries 10 times less risk than a position that could lose $100 million. This is not a trivial remark: at the end of the second half of 2011, for example, the VaR announced by Goldman Sachs was $100 million for all positions in all of its offices worldwide, while in some medium-sized Russian banks it exceeded $15 million. It would probably be wrong to conclude that their risk level was about a sixth of that of the world's largest trader; rather, the formulas they used to determine risk were much more conservative.

At the beginning of August 2011, at the height of the crisis associated with the downgrade of the US credit rating and the banking crisis in Europe, it was reported that Goldman Sachs had incurred losses of $100 million over two trading sessions. In other words, the correctness of the VaR calculation was confirmed.

However, the scandal at J.P. Morgan over losses in derivatives portfolios in May 2012 showed once again that VaR models can also be "tweaked" to understate risk indicators.

Stress tests

VaR is a method of probabilistically measuring possible outcomes, including maximum losses, over a given period of time (the "time horizon"). The calculation assumes that the composition of the original portfolio does not change, and the estimate is made at a certain confidence level (in the statistical sense). In stress tests, by contrast, we do not consider the worst state of the current market; instead, we create stress scenarios based on the worst market scenarios in history. In other words, your portfolio's losses are calculated on the basis of what the market has experienced over the past 30-40 years. If your portfolio consists mostly of purchased positions, you take their worst historical fall when building the stress test; if mostly sold positions, the stress test is based on moments of unbridled growth. In both situations the stress test reveals the nightmare scenarios.
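The mechanics described above can be sketched as follows. This is my own illustration with invented positions and invented "worst historical" moves; the point is only that each leg is stressed in its own adverse direction, with no offsetting correlation.

```python
# Hypothetical stress-test sketch: apply the worst historical one-day moves
# to current positions, ignoring the usual correlations so that long and
# short legs can lose simultaneously.
positions = {           # currency value of each position; negative = sold
    "LUKOIL": 1_000_000,
    "Rosneft": -800_000,
}

# Stand-in worst one-day moves taken from past crises (1998, 2008, ...).
worst_drop = -0.30   # worst historical fall, hurts long positions
worst_rise = 0.40    # worst historical rise, hurts short positions

nightmare = sum(
    value * (worst_drop if value > 0 else worst_rise)
    for value in positions.values()
)

print(nightmare)  # -620000.0 with these made-up numbers
```

Note that a correlation-aware VaR calculation would let the long and short legs partially offset each other; the stress test deliberately refuses that offset.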

A significant difference between stress tests and VaR calculations is the treatment of correlations. VaR calculations assume an observable level of correlation between different positions in a portfolio. When considering stress test scenarios, we can reject the observed correlations, which leads to an increase in possible losses. Thus, our positions in the shares of LUKOIL and Rosneft will be considered as completely independent.

Moreover, a stress test may use not the current but the maximum historical volatility: for example, a 30% drop or a 40% rise in one of these stocks on one of the days of the 1998 or 2008 crises, at the risk manager's choice.

The idea of an absence of correlation between similar sold and purchased assets can be compared to assuming that the price of milk and the price of cows can move in opposite directions: the price of milk (LUKOIL shares) doubles while the price of cows (Rosneft shares) halves. In reality, with the oil price unchanged, such price dynamics are highly unlikely. Yet if we took such scenarios as the basis, all Russian banks would have to close, since the interest rate swings demonstrated in 2008 would indicate an enormous risk in their current operations.

To avoid closing the banks, certain "reasonable" scenarios are chosen instead. As the crises in Russia (1998) and the West (2007-2009) showed, this "smoothing out" of worst-case scenarios meant that pre-crisis stress tests underestimated the maximum losses. Pointing this out, a risk manager will say that "as a result of this underestimation, most bank executives were not sufficiently concerned about the proposed scenarios and were unable to close risky positions in time." He will recommend that stress tests err on the side of conservative estimates and overstate the risk of scenarios. In practice this means that in pre-crisis times managers would have had to do business on a much smaller scale. Whether this conclusion is correct or not, it is through modernized stress tests that Western regulators seek to reduce bank leverage.

Interaction of Volatility, Correlation and Liquidity

It should be noted that "habitual (historical) correlation" is a very slippery notion. The 10-year and 1-year correlations of two assets can be very different, so one has to choose the period from which the historical correlation is taken for use in the models. Moreover, the higher the market volatility, the harder it is for the usual relationships to hold: an increase in volatility is accompanied by a change in correlations.

One of the reasons correlations break down is liquidity gaps. Increased volatility causes market participants to reduce their position sizes. Since the number of buyers also falls, sellers run into "liquidity gaps": prices move not smoothly but in jumps. And because different asset groups have different customer bases, liquidity gaps affect their prices differently.

It is therefore liquidity gaps that are the main enemy of correlation stability. Such gaps are difficult to express mathematically, which is why, we repeat, options traders price in the possibility of their appearance by raising expected volatility. Given the value of such expert adjustments, VaR models for assets on which options are traded use expected rather than realized volatility. However, some assets have no active options market. What volatility should then be used in VaR calculations?

If options are not traded on the desired asset, models can use the expected volatility of a similar asset, taking into account some correlation coefficient between changes in the prices of these assets. Thus, a relatively small group of traders who trade options on liquid assets and determine expected volatilities on them unexpectedly supplies this critical parameter for calculating the maximum losses of a significant part of the market.

An interesting detail that once again demonstrates the limitations of even such “sophisticated” logical constructs that underlie the modern risk measurement system: as we have already said, expected volatility is itself a commodity, and its price is subject to fluctuations due to supply and demand. It turns out that one large buyer or seller can distort the volatility in a certain market, and this will affect the assessment of losses of an entire market segment!

The relationship between credit and market risk

As we will see in the next part of the book, interest rates on credit products consist of a risk-free rate and a charge for credit risk (the credit spread). Credit spreads are typically packaged into the interest rates on credit products, but they can also be isolated (see Chapter 8). Moreover, such purified credit spreads exist as financial products in their own right: commercial bankers call them guarantees (essentially the sale of a client's unfunded credit risk), while investment bankers call them credit default swaps (CDS). Guarantee prices rarely change, but credit swaps are traded on the market, and therefore their prices change frequently.

Most large companies and banks have public debt outstanding. And since it exists, the charge for credit risk can be separated out of the instruments that service it, i.e., a credit swap can be bought or sold.

In this case, the counterparty credit risk limit becomes subject to market volatility, which means it can be calculated using VaR. If this methodology is adopted, then, as elsewhere, such an assessment is distorted by changes in the liquidity of credit markets. The fact is that although credit spreads were originally calculated based on bond prices, these markets now exist in parallel. Since the liquidity of bonds and credit swaps of the same issuer differs, it turns out that, theoretically, different assessments of credit risk coexist in the two markets. In this regard, risk managers can take any of them as a basis. Their preferences influence the size of the limits on the counterparty, as well as changes in the timing of their revision: the more unstable the market they take as a basis, the more often the limits can be revised following changes in volatility. This process can introduce unnecessary volatility into an already standardized banking business, the stability of which risk managers, on the contrary, must protect.

Additional complications caused by excess control may be a consequence of risk managers' attitudes toward “asymmetric risk.” From a statistical point of view, a price deviation can lead to the same risk both in the case of its increase and in the case of its decrease. However, the fall of the ruble is associated with a decline in the reliability of the Russian banking system, as is the case in other developing countries. Thus, if a Russian bank sells a long-term forward contract on the strengthening of the ruble, then if the ruble strengthens and losses occur on the sale, the bank will be able to pay, since the strengthening of the ruble is usually associated with the growth of the Russian economy and prosperity in the world as a whole. But if a bank sells dollars for a long term, then in a crisis situation it will be difficult for it to recoup losses, since in the financial market they will coincide with an increase in defaults in the loan portfolio caused by the difficult economic situation. Thus, risk that is symmetric in terms of market risk may be asymmetric in credit risk calculations for the same transactions.

The more details we mention, the more obvious it is that the risk analysis process is difficult to strictly regulate. It must take into account asymmetrical situations that are often revealed when analyzing life realities. One more example.

At the beginning of 2007, an analysis was made of the credit risk a Russian bank bore toward Citibank when buying from it a call option on Sberbank shares. Credit risk in fact arose only in the event of a sharp rise in the share price combined with Citibank being unable to meet its obligations. Since the option was short-term, such a situation could arise only if Citibank suddenly went bankrupt.

At that time, no one suspected that the world was on the verge of a serious crisis. The business position was that only a sudden collapse of global financial markets could bring down an international bank such as Citibank. Consequently, no matter how good Sberbank's results, in a global crisis its shares would also fall. In that case the option would not be exercised, and therefore the credit risk of purchasing a call option from Citibank was small. But when the bank acquired a put option from Citibank, this analysis no longer applied. Risk experts nevertheless believed that the credit risk of buying call and put options was symmetrical. The option became exercisable in November 2007, and actual events confirmed that the business side had correctly understood the asymmetry of the credit exposure in the transaction.

Risk management is one of the key areas of banking business. Risk management models allow financiers and managers, i.e. generalists, to quickly assess the risk of little-known products in a single format across all the diverse businesses under their control. This is precisely the main value of such models. Therefore, the functional area of a bank's activity called "risk management" is becoming an increasingly important tool for unifying the methodology of decision-making about different forms of risk, i.e., about the amount of risk resources available to banks.

However, as with any tool, risk-measuring models need to be used intelligently: not outsourced to narrowly specialized modelers, but applied with an understanding of the assumptions built into the calculations. We demonstrated this with the example of differing approaches to risk symmetry. Such situations recall the famous joke about a dinosaur: asked what the chance is of meeting such an animal on the street, a man says there is none: "They are extinct!" The next to answer is a blonde who, incidentally, is a certified specialist in statistics. In her opinion, the chances are 50/50: "Either I meet one or I don't." When managers (and not only risk managers) use quantitative analysis without regard to practical logic, every risk becomes absolute, i.e., not weighted by the likelihood of the critical situation being analyzed. Then you will never meet a dinosaur, but you will never do business either. Therefore, the use of models such as VaR or stress testing should be meaningful.

Conclusions

The bank owns certain resources, which represent the volumes of several types of risk that it can accept. The key ones are liquidity risk, credit risk, interest rate risk and currency risk.

The volume of liquidity risk is a resource whose mismanagement poses the greatest threat to the bank. At the same time, it is also the main source of income, arising from the difference between the cost of attracting funds and the return on placing them; the calculation of these indicators is subjective and is tied to the transfer pricing methodology, the most politicized issue of all. The difficulty of raising funds and allocating them is familiar to everyone. Top management in both developed and developing countries constantly finds itself drawn into some form of debate about excess or insufficient liquidity. Credit risk is a more specific category than liquidity, but it, too, is subject to political influences: the divisions that are strong in this sense "squeeze out" the weaker ones, blocking their access to the limits. Yet this politics often conceals a lack of understanding of the capabilities of other products.

In other words, everyone has studied only the product they are specifically involved in, and management does not know the entire product line well enough to facilitate an effective dialogue between product divisions. Currency and interest rate risks, which many fold into liquidity analysis, are easier to analyze as separate topics, although in forwards these two resources are closely intertwined.

The importance of separating them seems clear to everyone, but in practice it is very difficult, for example because of differences in how various transactions are reflected in accounting. As a result of such entanglements, a bank may have sufficient liquidity and yet see movements in exchange rates or interest rates reduce its profits to zero.

In practice, reserves for currency and interest rate risk are also the subject of some approved internal policy. As with liquidity risk, commercial divisions prefer not to think about them. By optimizing profit along the credit curve, i.e., seeking the maximum profit for credit risk in absolute terms (without reference to the borrower's credit curve), they completely ignore the question of the optimal distribution of these resources, assuming that managing them is the job of "someone" in the bank's treasury.

Thus, in banks, responsibility for all the most important resources is blurred from the start. The problem is aggravated by the fact that, apart from liquidity, they are not even considered resources; they are called "limits." This book will try to show that a change in terminology can lead to a change in ideology: by treating the capacity to take different types of risk as resources rather than limits, we will show ways to increase the efficiency of their use.

Price risk (price risk) - the likelihood of unexpected financial losses from changes in the level of prices for products, or for individual goods, in the upcoming period or in purchase-and-sale transactions. An enterprise can insure itself against price risk by carrying out a hedging operation.

The main types of price risks include:

  • increase in purchase prices for raw materials, materials, components;
  • the likelihood of competitors setting prices below market prices (for products sold by the enterprise);
  • changes in government pricing regulation;
  • the likelihood of introducing new taxes and other payments that are included in the price of products;
  • reduction in the level of prices for goods on the market;
  • increase in prices and tariffs for services of other organizations (electricity, transport services, etc.).

Price risk is associated with setting the price of the products and services the enterprise sells, and also includes the risk in determining the price of the necessary means of production: raw materials, materials, fuel, energy, labor, and capital (in the form of interest rates on loans). By some calculations, a 1% error in the price of sold products leads to losses of at least 1% of sales revenue. If demand for the product is elastic, losses may reach 2-3%. With product profitability of 10-12%, a 1% pricing error can mean a 5-10% loss of profit. Price risk increases significantly in conditions of inflation.
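The arithmetic behind this claim can be sketched directly; the revenue and margin figures below are illustrative assumptions, not data from the text:

```python
def profit_loss_from_price_error(revenue, margin, price_error):
    """Relative profit loss caused by a pricing error, assuming costs
    stay fixed so the lost revenue hits profit in full."""
    profit = revenue * margin             # planned profit
    lost_revenue = revenue * price_error  # revenue lost to the error
    new_profit = profit - lost_revenue    # costs unchanged
    return (profit - new_profit) / profit

# With 10% profitability, a 1% price error wipes out 10% of profit;
# with 12% profitability, about 8.3% - inside the 5-10% range quoted.
print(round(profit_loss_from_price_error(1_000_000, 0.10, 0.01), 4))  # 0.1
print(round(profit_loss_from_price_error(1_000_000, 0.12, 0.01), 4))
```

The lower the margin, the more a small pricing error is magnified in profit terms, which is why price risk is most dangerous for low-margin businesses.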

Price risk is one of the most dangerous types of risk, since it directly and significantly affects the possibility of loss of income and profit of a commercial enterprise. Price risk constantly accompanies the economic activities of an enterprise.

Price risk is the risk of loss due to future changes in the price of a commodity or financial instrument. There are three types of price risks. Traditionally, the term "price risk" covers not only the possibility of losses but also the possibility of obtaining additional income.

Price risk is the risk of a change in the price of a debt obligation due to a rise or fall in the current level of interest rates. It is also the risk that the value of a security (or portfolio) will decline in the future. A further variety is the risk of the mortgage period, which arises in the production (origination) segment when loan repayment terms are fixed for the borrower before the terms of selling the security on the secondary market are fixed. With a general rise in the level of interest rates on loans during the production cycle, the lender may be forced to sell the issued loan at a discount.

Regulatory price risk is the risk that arises when regulators limit the prices that companies may charge.

Risk assessment is a set of analytical measures that make it possible to predict the possibility of obtaining additional business income, or of incurring a certain amount of damage, from a risk situation that has arisen or from the failure to take timely measures to prevent the risk.

The degree of risk is the probability of a loss event occurring, as well as the amount of possible damage from it.

The degree of risk may be:

  • acceptable - there is a threat of a complete loss of profit from the planned project;
  • critical - not only may profit not be received, but revenue may also be lost, with the losses covered from the entrepreneur's own funds;
  • catastrophic - loss of capital and property and the bankruptcy of the entrepreneur are possible.

Quantitative analysis is the determination of the specific amount of monetary damage of individual subtypes of financial risk and financial risk in the aggregate.

Sometimes qualitative and quantitative analysis is carried out on the basis of assessing the influence of internal and external factors: an element-by-element assessment of the specific weight of their influence on the work of a given enterprise and its monetary value is carried out. This method of analysis is quite labor-intensive from the point of view of quantitative analysis, but brings its undoubted fruits in qualitative analysis. In this regard, more attention should be paid to the description of methods for quantitative analysis of financial risk, since there are many of them and some skill is required for their competent application.

In relative terms, risk is defined as the amount of possible losses related to a certain base, for which it is most convenient to take either the property status of the enterprise, or the total cost of resources for the given type of business activity, or the expected income (profit). Losses are then understood as a random downward deviation of profit, income, or revenue compared to expected values. Entrepreneurial losses are primarily an accidental decrease in entrepreneurial income, and it is the magnitude of such losses that characterizes the degree of risk. Hence, risk analysis is primarily associated with the study of losses.

Depending on the magnitude of probable losses, it is advisable to divide them into three groups:

  • losses, the value of which does not exceed the estimated profit, can be called acceptable;
  • losses whose value is greater than the estimated profit are classified as critical - such losses will have to be compensated from the entrepreneur’s pocket;
  • even more dangerous is catastrophic risk, in which the entrepreneur risks incurring losses exceeding all his property.

If it is possible in one way or another to predict and estimate possible losses for a given operation, then a quantitative assessment of the risk that the entrepreneur is taking has been obtained. By dividing the absolute value of possible losses by the estimated cost or profit, we obtain a quantitative assessment of the risk in relative terms, as a percentage.
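This relative measure and the three-way grouping above can be sketched as follows; the function names, loss figures, and thresholds are illustrative assumptions:

```python
def relative_risk(possible_loss, base):
    """Possible loss divided by a base (estimated profit, revenue, or
    the enterprise's property value), expressed as a percentage."""
    return 100.0 * possible_loss / base

def classify(possible_loss, expected_profit, total_property):
    """Group probable losses per the acceptable/critical/catastrophic scheme."""
    if possible_loss <= expected_profit:
        return "acceptable"    # losses within the estimated profit
    if possible_loss <= total_property:
        return "critical"      # exceeds profit, covered out of pocket
    return "catastrophic"      # exceeds everything the owner has

print(relative_risk(50_000, 200_000))        # 25.0 (% of estimated profit)
print(classify(50_000, 200_000, 1_000_000))  # acceptable
print(classify(350_000, 200_000, 1_000_000)) # critical
```

The same division of possible losses by the base gives the quantitative risk assessment in percent that the text describes.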

When we say that risk is measured by the magnitude of probable losses, the random nature of such losses should be taken into account. The probability of an event can be determined by an objective method or a subjective one. The objective method determines the probability of an event from the frequency with which it occurs.

The subjective method is based on the use of subjective criteria, which are based on various assumptions. Such assumptions may include the judgment of the assessor, his personal experience, the assessment of a rating expert, the opinion of a consulting auditor, etc.

Thus, the basis for assessing financial risks is to find the relationship between certain amounts of losses of an enterprise and the likelihood of their occurrence. This dependence is expressed in the plotted curve of the probabilities of occurrence of a certain level of losses.

Fitting the curve is an extremely complex task that requires financial risk officers to have sufficient experience and knowledge. To construct a curve of the probabilities of a certain level of losses (risk curve), various methods are used: statistical; cost feasibility analysis; method of expert assessments; analytical method; method of analogies. Among them, three should be especially highlighted: the statistical method, the method of expert assessments, and the analytical method.

The essence of the statistical method is that the statistics of losses and profits that occurred in a given or similar production are studied, the magnitude and frequency of obtaining a particular economic return are established, and the most probable forecast for the future is compiled.
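The statistical method described above can be sketched as a frequency count over past results; the historical figures here are made up purely for illustration:

```python
from collections import Counter

# Hypothetical history of period results (negative = loss), in thousands.
history = [120, -40, 75, -40, 200, -150, 30, -40, -150, 90]

losses = [-x for x in history if x < 0]  # magnitudes of realized losses
freq = Counter(losses)                   # frequency of each loss level
n = len(history)

# Empirical probability of each observed loss level - the raw material
# for a loss-probability (risk) curve.
risk_curve = {loss: count / n for loss, count in sorted(freq.items())}
print(risk_curve)  # {40: 0.3, 150: 0.2}
```

With more history, these empirical frequencies approximate the probabilities used to plot the risk curve and to forecast the most probable outcome.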

Undoubtedly, risk is a probabilistic category, and in this sense, it is most reasonable from a scientific point of view to characterize and measure it as the probability of a certain level of losses occurring. Probability means the possibility of obtaining a certain result.

Financial risk, like any other, has a mathematically expressed probability of loss, which is based on statistical data and can be calculated with fairly high accuracy. To quantify the amount of financial risk, it is necessary to know all the possible consequences of any individual action and the likelihood of the consequences themselves.

In relation to economic problems, the methods of probability theory come down to determining the probabilities of events and to selecting the most preferable of the possible events on the basis of the largest mathematical expectation, which equals the magnitude of the event multiplied by the probability of its occurrence.

The main tools of the statistical method for calculating financial risk: variation, dispersion and standard (mean square) deviation.

Variation is the change in quantitative indicators when moving from one outcome to another. Dispersion (variance) is a measure of the deviation of actual values from their average value.

The degree of risk is measured by two indicators: the average expected value and the variability (variability) of the possible result.

The average expected value is related to the uncertainty of the situation and is expressed as a weighted average of all possible outcomes E(x), where the probability of each outcome (A) serves as the frequency, or weight, of the corresponding value (x). In general, this can be written as:

E(x) = A1x1 + A2x2 + ··· + Anxn.

The average expected value is the value of the event magnitude that is associated with an uncertain situation. It is a weighted average of all possible outcomes, where the probability of each outcome is used as the frequency, or weight, of the corresponding value. In this way, the result that is supposedly expected is calculated.
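The three statistical tools named above can be computed directly; the outcome values and probabilities below are illustrative assumptions:

```python
import math

# Hypothetical outcomes x_i with probabilities A_i (must sum to 1).
outcomes = [100, 50, -30]
probs = [0.5, 0.3, 0.2]

# Average expected value: E(x) = A1*x1 + A2*x2 + ... + An*xn.
expected = sum(a * x for a, x in zip(probs, outcomes))

# Dispersion (variance): probability-weighted squared deviation from E(x).
variance = sum(a * (x - expected) ** 2 for a, x in zip(probs, outcomes))

# Standard (mean square) deviation.
std_dev = math.sqrt(variance)

print(round(expected, 2))  # 59.0
print(round(std_dev, 2))
```

The expected value summarizes the most likely result, while the standard deviation measures the variability that, together with the expected value, characterizes the degree of risk.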

Cost feasibility analysis is focused on identifying potential risk areas taking into account the financial stability of the company. In this case, you can simply make do with standard methods of financial analysis of the results of the activities of the main enterprise and the activities of its counterparties (bank, investment fund, client enterprise, issuing enterprise, investor, buyer, seller, etc.).

The method of expert assessments is usually implemented by processing the opinions of experienced entrepreneurs and specialists. It differs from statistical only in the method of collecting information to construct a risk curve.

This method involves collecting and studying estimates made by various specialists (of the enterprise or external experts) of the probabilities of occurrence of various levels of losses. These assessments are based on taking into account all financial risk factors, as well as statistical data. The implementation of the method of expert assessments becomes significantly more complicated if the number of assessment indicators is small.

The analytical method of constructing a risk curve is the most complex, since the underlying elements of game theory are accessible only to very narrow specialists. The most commonly used subtype of analytical method is model sensitivity analysis.

The sensitivity analysis of the model consists of the following steps: selection of a key indicator in relation to which the sensitivity is assessed (internal rate of return, net present value, etc.); choice of factors (inflation level, state of the economy, etc.); calculation of key indicator values ​​at various stages of the project (purchase of raw materials, production, sales, transportation, capital construction, etc.).

The sequences of costs and receipts of financial resources formed in this way make it possible to determine the flows of funds for each moment (or period of time), i.e. determine performance indicators. Diagrams are constructed that reflect the dependence of the selected resulting indicators on the value of the initial parameters. By comparing the resulting diagrams with each other, it is possible to determine the so-called key indicators that have the greatest impact on the assessment of the project’s profitability.
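A minimal sketch of such a sensitivity analysis, assuming a simple net-present-value model as the key indicator and made-up cash flows (every number below is illustrative):

```python
def npv(rate, cash_flows):
    """Net present value of cash flows at periods 0, 1, 2, ..."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical project: an initial outlay, then four annual inflows.
flows = [-1000, 300, 350, 400, 450]

# Sensitivity of the key indicator (NPV) to one factor (discount rate):
for rate in (0.05, 0.10, 0.15, 0.20):
    print(f"rate={rate:.0%}  NPV={npv(rate, flows):8.1f}")
```

Repeating the loop for other factors (inflation, input prices, sales volume) and comparing the resulting diagrams identifies the parameters with the greatest impact on the project's profitability, as the text describes.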

Sensitivity analysis also has serious shortcomings: it is not comprehensive and does not clarify the likelihood of alternative projects being implemented.

The method of analogies when analyzing the risk of a new project is very useful, since in this case data on the consequences of the impact of unfavorable financial risk factors on other similar projects of other competing enterprises is examined.

Indexation is a way to preserve the real value of monetary resources (capital) and profitability in the face of inflation. It is based on the use of various indices.

For example, when analyzing and forecasting financial resources, it is necessary to take price changes into account, for which price indices are used. A price index is an indicator characterizing the change in prices over a certain period of time.
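The use of a price index for such an adjustment can be sketched as follows (the index value and nominal figure are assumptions for illustration):

```python
def real_value(nominal, price_index):
    """Deflate a nominal amount by a price index (e.g. 1.12 means prices
    rose 12% over the period) to recover its real purchasing power."""
    return nominal / price_index

# Hypothetical: revenue grew from 100 to 115 while prices rose 12%,
# so in real terms revenue grew by only about 2.7%.
print(round(real_value(115.0, 1.12), 2))  # 102.68
```

This is the core of indexation: nominal figures are restated in constant prices so that inflation does not masquerade as real growth.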

Thus, existing methods for constructing a curve of the probabilities of a certain level of losses are not entirely equivalent, but one way or another they make it possible to make an approximate assessment of the total volume of financial risk.

Source - O.A. Firsova - METHODS OF ASSESSING THE DEGREE OF RISK, FSBEI HPE "State University - UNPC", 2000.