PARIS SCIENCES & LETTRES (PSL)


How do financial mathematics specialists imagine the markets in five to ten years? What exactly will their role look like? To understand these issues, we need to take a closer look at the way this field has developed and at the issues that have shaped the discipline.

6 September 2013


In the late 1980s, when I began to get involved in training future professionals for the financial sector, mathematics did not yet play the role it was later given. But with Helyette Geman (ESSEC), I had spent a whole year in a bank analyzing the first stochastic interest-rate models and explaining them to practitioners. That was a genuine intellectual encounter.

In fact, it soon emerged that there would be a growing need for professionals capable of understanding mathematics: quantitative issues were becoming a central part of finance. Meanwhile, we also needed to train mathematicians able to understand finance: “quants” with enough understanding of the markets to implement, evaluate and use appropriate models. In 1990, Helyette Geman and I created a Probability and Finance option within the Master of Advanced Studies in Probability at Université Pierre et Marie Curie-Paris VI, jointly accredited by the École Polytechnique, ENPC and ESSEC. It was the first training of its kind in the scientific community.

However, at that time, the main issue wasn’t only the use of high-level mathematics in finance: it was also the beginning of a new conception of risk.

From the start, one of the aims of finance has been to manage and reduce risk as much as possible. In the late 1980s, a quantum leap was achieved thanks to the rapid development of dynamic strategies, as opposed to the earlier approach of targeting an average level of risk through the weighting of different asset classes.

As financial mathematics developed in the early 1990s, the first challenge was to implement dynamic hedging, supported by increasingly sophisticated mathematical models. And as the models grew stronger, researchers and practitioners began to examine more closely not only the models themselves, but the disruptive factors: the random elements that distorted the curves.
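As a minimal sketch of what such dynamic hedging involves, the example below computes the Black-Scholes hedge ratio (delta) of a European call — the number of shares to hold against a short option — and shows how it changes as the spot moves, which is why the hedge must be rebalanced continually. The parameter values are purely illustrative, not drawn from the text:

```python
import math

def norm_cdf(x):
    # Standard normal cumulative distribution via the error function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_delta(s, k, r, sigma, t):
    """Black-Scholes delta of a European call: the hedge ratio held
    against one short option in a dynamic hedging strategy."""
    d1 = (math.log(s / k) + (r + 0.5 * sigma**2) * t) / (sigma * math.sqrt(t))
    return norm_cdf(d1)

# As the spot moves, the hedge ratio changes and must be rebalanced:
for spot in (90, 100, 110):
    print(spot, round(bs_delta(spot, 100, 0.05, 0.2, 1.0), 3))
```

The point of the sketch is the last loop: a static hedge set at one spot level is wrong at another, which is exactly what the dynamic strategies of the era were built to correct.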

Indeed, markets can be represented as dynamic systems disturbed by “noise”. It is precisely this noise (i.e. day-to-day volatility) that received all the attention in the mid-1990s. Not without consequences, as we shall see. And it would be interesting to know what led traders, those who trained them, and those who designed the software they use, to focus on this immediate aspect.

During those years, the technical context was changing rapidly. As the processing power of computers grew, simple models (binomial, possibly with additional parameters) that could be calculated by hand were replaced by more sophisticated calculations involving specialized software. In 1990, Chicago was still using the same computer systems as in 1973! And other world stock markets were not much more advanced.
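A binomial model of the hand-calculable kind mentioned here can be sketched as follows — a standard Cox-Ross-Rubinstein tree for a European call, with purely illustrative parameters. With one or two steps this is pencil-and-paper arithmetic; with many steps it converges toward the continuous-time (Black-Scholes) price:

```python
import math

def crr_call_price(s0, k, r, sigma, t, n):
    """Price a European call on a Cox-Ross-Rubinstein binomial tree.

    s0: spot, k: strike, r: risk-free rate, sigma: volatility,
    t: maturity in years, n: number of time steps.
    """
    dt = t / n
    u = math.exp(sigma * math.sqrt(dt))   # up move factor
    d = 1 / u                             # down move factor
    q = (math.exp(r * dt) - d) / (u - d)  # risk-neutral up probability
    # Discounted risk-neutral expectation of the terminal payoff.
    price = 0.0
    for j in range(n + 1):
        prob = math.comb(n, j) * q**j * (1 - q)**(n - j)
        payoff = max(s0 * u**j * d**(n - j) - k, 0.0)
        price += prob * payoff
    return math.exp(-r * t) * price

# With many steps the tree price approaches the Black-Scholes value.
print(round(crr_call_price(100, 100, 0.05, 0.2, 1.0, 500), 2))
```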

This situation didn’t last very long, and progress was meteoric. On one hand, market size grew extremely fast; on the other, sophisticated techniques to handle it were implemented just as quickly. For instance, partial differential equations could now be solved very quickly by computers. This paved the way for more complex probabilistic techniques, which allow more refined models: for example Monte Carlo methods, which consist in repeating a random experiment many times in order to obtain a reliable approximation of the true mathematical expectation.
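The Monte Carlo idea described above fits in a few lines: estimate an expectation by averaging many independent random draws, with the estimate tightening as the number of draws grows. This is a generic sketch, not any particular production model:

```python
import random

def mc_expectation(payoff, n_draws, seed=0):
    """Estimate E[payoff(Z)] for a standard normal Z by averaging
    many independent draws (the Monte Carlo method)."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    total = 0.0
    for _ in range(n_draws):
        total += payoff(rng.gauss(0.0, 1.0))
    return total / n_draws

# E[Z^2] = 1 for a standard normal; the estimate improves with n.
for n in (100, 10_000, 100_000):
    print(n, round(mc_expectation(lambda z: z * z, n), 3))
```

The same machinery prices a derivative by replacing `payoff` with the discounted payoff of the instrument under the model’s risk-neutral dynamics.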

The nature of financial expertise changed completely. From a scientific point of view, it was very exciting. The aim was to integrate increasingly complex objects, such as stochastic processes (random time-dependent phenomena), and to compare the resulting models with the movements actually observed. Brownian “noise” became the object of attention, because there is a difference between the statistical level of noise and the level observed in practice.
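A concrete example of such a stochastic process is geometric Brownian motion, the canonical price model of that era, whose driving term is precisely the Brownian “noise” discussed here. The sketch below (with illustrative parameters, not market data) simulates one path using the exact log-normal step:

```python
import math
import random

def gbm_path(s0, mu, sigma, t, n_steps, seed=42):
    """Simulate one path of geometric Brownian motion:
    dS = mu * S dt + sigma * S dW, stepped with the exact
    log-normal solution over each interval dt."""
    dt = t / n_steps
    rng = random.Random(seed)
    path = [s0]
    for _ in range(n_steps):
        z = rng.gauss(0.0, 1.0)  # the Brownian "noise" increment
        path.append(path[-1] * math.exp((mu - 0.5 * sigma**2) * dt
                                        + sigma * math.sqrt(dt) * z))
    return path

# One year of daily steps from a spot of 100.
p = gbm_path(100.0, 0.05, 0.2, 1.0, 252)
print(len(p), round(p[-1], 2))
```

Comparing statistics of simulated paths like this one with observed market movements is exactly the model-versus-reality exercise described above.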

This is explained by the fact that market participants react to immediate information; in other words, noise creates noise. The phenomenon was exacerbated by the regulators’ decision in 1998 to provide operators with daily information centered on the concept of value at risk. Fundamentally, it wasn’t a bad idea: it provided richer and more complete information, and day-to-day changes in market prices contain much useful information, for example on liquidity.
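The value-at-risk concept can be illustrated with a minimal historical-VaR calculation: the loss threshold exceeded on only a small fraction of past days. This is a deliberately simplified sketch of the idea, and the returns below are toy numbers, not market data:

```python
def historical_var(returns, confidence=0.99):
    """One-day historical Value at Risk as a fraction of portfolio
    value: the empirical loss quantile at the given confidence,
    reported as a positive number."""
    losses = sorted(-r for r in returns)  # losses, ascending
    idx = min(int(confidence * len(losses)), len(losses) - 1)
    return losses[idx]

# Toy daily returns: mostly small moves, two bad days.
rets = [0.001, -0.002, 0.003, -0.015, 0.002, -0.004,
        0.001, -0.030, 0.000, -0.001]
print(historical_var(rets, confidence=0.9))  # → 0.03
```

Real implementations differ in the quantile convention, the length of the historical window, and how positions are aggregated, but the principle — a daily loss quantile over recent history — is what the 1998 requirement put in front of operators every day.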

But it went too far, in the sense that it led to underestimating information based on the historical behavior of prices. The young traders we had trained had neither the experience nor the hindsight to free themselves from this focus on immediate data. They were mathematicians, not economists: they didn’t care enough about the past or about the underlying. Both the models and the young people who used them lacked the ability to connect the risk of the underlying with the risk of the market itself. Models became more sophisticated, but the same logic persisted: we buy and sell on a daily basis, while statistics and historical information are neglected, even disappearing from the models. Yet these are fundamental.

This is a lesson for all players, but especially for those who, like me, are in charge of training: courses focused on prices will have to rely more heavily on the historical dimension. In fact, regulators have been asking for this for almost fifteen years. And the recent crises have further increased the need to change the approach. Easier said than done! To take one example, calculating counterparty risk over a year is more complicated than calculating it at an instant.

**Today’s challenges**

The world we live in today is not experiencing the same expansion as in the years 1990-2000. Moreover, the crisis has highlighted the importance of hidden, implicit links and, more generally, of what we call “systemic risk”. To put it simply: until 2008 the focus was on the “risk of noise”. Since 2008, we have been learning to manage the risk of chaos.

In terms of risk management, this implies in particular a stronger consideration of collateralization. In this context, financial mathematics is set to undergo an inflection. Earlier systems need to integrate different techniques: collateral management requires following several curves at the same time, and taking systemic risk into account poses considerable problems, since it requires both a better understanding of that risk and care not to build systems that, by responding to signals of systemic risk, end up amplifying it.

Consider, for example, the credit derivatives that proliferated in the 2000s and played a crucial part in the 2008 crisis.

Credit derivatives had grown significantly over the preceding years, and even before the crisis there were doubts about the robustness of the models on which they relied. As early as 1998, the U.S. regulator – mindful of the Black Monday of October 1987 – asked financial institutions to produce a daily Value at Risk for market risks, that is, for the aggregate activity of a trading room. Although computers gained significant power and the models were refined, this was – and remains – a real challenge for financial institutions, and also for the training programs that supply the quants who implement this measure. Exchanges between the academic world and the market intensified, especially around the relevance of VaR as a risk measure. In particular, in the years preceding the crisis, we discussed the risk of default with professionals from trading desks. To summarize: managing the risk of default worked as long as default remained merely probable, but what would actually happen if a default really occurred was never contemplated. The discussions went nowhere (it’s always difficult to discuss anything during a financial bubble!) and the issue is now back on the agenda.

It is all the more urgent to address this problem given that the concentration of the financial sector, further reinforced by the crisis, tends to make each trading room a systemic player. In this regard, it should be noted that until recently, mathematicians would validate models without knowing the size of the exposures (the “poses”) behind them. That is absurd!

All in all, a number of developments are underway or will be required, in both practice and training. Trainers are encouraged to put particular emphasis on statistics and to get students working on a vision of global quantitative risk. This is now a central concern. Some subjects have been reinforced: regulation, market risk.

The range of disciplines to be covered raises another problem. A few months of training are barely enough for a student – even the most brilliant – to assimilate all of stochastic calculus, finance, statistics, law... And the question of the time needed to absorb all this science is complicated by the fact that banks “headhunt” our students as early as their internships.

Quants are very active in the areas of overall risk (risk analysis, simulations), but also increasingly in model validation – ultimately, with the awareness that the models they build or validate will have an impact on prices. Taking the systemic dimension into account is a real challenge, but it offers a real opportunity for financial mathematics. Markets have operated for more than fifteen years on an imaginary real time: they need to build a different relationship with time. On this precise issue, quantitative finance has something to say.

The quantitative view of finance will not go away, although in some sectors (asset management, corporate finance) some dream of returning to the old-style, less sophisticated way of doing things. That would certainly not eliminate systemic risks! More generally, there is no such thing as a risk too rare to matter: we can at least work on that basis, unless we want to play with fire. Today, it is crucial to develop new tools (and train professionals) for risk detection, starting with a rethinking of how exposure to risk is determined.

Models are structurally imperfect. At best they simplify reality; a model is, by definition, wrong. One important challenge is to clarify the use that can be made of them. Basically, the issue is to use them while knowing that they are false and, if possible, understanding their limits. This is a critical issue for professionals today: they must be able to understand the mechanics and identify the logic of what they use – and be able to worry when they don’t understand that logic or when they spot aberrations. I believe that the question of one’s stance toward the model is absolutely essential.

A problem in this respect is that the regulator fixes a number of positions instead of letting them evolve. It is dangerous to think that defining a standard requires a consensus. That approach does not capture complexity, which calls for a different treatment. The more time is introduced into the analysis, the more complex it becomes, because it involves thousands of factors. Under these conditions, one can rely on a standard, sufficiently robust vision – but then it is important to explore the areas where we cannot explain everything.

Another problem: during the crisis, much attention was paid to economists, even though they are disconnected from technological realities (high-frequency trading, new software...). The same goes for regulators and policy makers. There is therefore a danger of creating regulation completely disconnected from market reality. I strongly believe in the necessity of dialogue between the various players and the different disciplines involved.