
On model risk in quantitative trading

  • Quantitative strategies have become increasingly popular in trading and investing
  • The experience with using them has been mixed, largely as a result of three categories of problems
  • Still, the quantitative approach is well worth exploring and offers important advantages to its users

Over the past few years, the use of quantitative strategies has become increasingly popular in trading and investment management. According to JPMorgan, passive and quantitative investors now account for 60% of equity assets under management (vs. 30% ten years ago), and only about 10% of trading volumes originate from fundamental discretionary traders.[1] Appealing new buzzwords like robo-advising, artificial intelligence and machine learning have stoked the imagination of many investors and boosted the quantitative trading gold rush.

Take the $6.3 trillion behemoth, BlackRock. They assembled a quant crew of more than 90 professionals, including 28 PhDs and numerous data scientists, to crunch through market data and formulate winning investment strategies. In 2015, BlackRock even poached one of Google’s leading scientists, Bill McCartney, to develop the group’s machine learning applications. BlackRock President Rob Kapito himself articulated the great expectations for this endeavour:

“As people get the data and learn how to use the data, I think there is going to be alpha generated and, therefore, will give active managers more opportunity than they’ve had in the past to actually create returns.”[2]

In pursuit of such lofty goals, the considerable costs of recruiting the best and the brightest seem well justified. Thus far, however,[3] the results for BlackRock and other practitioners have been mixed at best, and active managers have struggled to generate alpha and retain investors. In fact, it seems that most quantitative strategies have tended to underperform or even generate losses. Some large-sample empirical observations suggest as much.

What crowd-sourced quantitative strategies tell us about the problem

Over the last 15 years we have seen a proliferation of online retail FX and derivatives brokerages that have attracted traders in their millions. Small investors, day traders, and a variety of quants and other retail speculators jumped at the opportunity to try their luck trading currencies, equity indices or commodity futures. The lure of easy profits has turned a vast number of people into speculators, and thousands of individual traders, research institutions and universities have been diverting time and talent to the pursuit of the quantitative Holy Grail.

We know a bit about the results of those endeavors thanks to online competitions among developers of quantitative trading strategies. In December 2006, the world’s most popular trading platform provider, MetaQuotes, organized the world’s first Automated Trading Competition. The $80,000 prize attracted 258 developers. More joined over the following six years, and through 2012 a total of 2,726 quants competed in MetaQuotes’ challenge. The results were not encouraging: only 567 of them (21%) finished their competitions in the black, while 79% lost money.[4]

Given that all of the participants wanted to win the prize money and that all of them had a certain minimum degree of sophistication (sufficient at least to master MetaQuotes’ programming platform and formulate some kind of a quantitative trading strategy), the fact that nearly 80% of them lost money should be a sobering revelation. Granted, MetaQuotes contestants’ incentives were not the same as those of professional asset managers.[5] However, the nature of the problem is the same for all: finding a quantitative edge in financial and commodity markets. Their substantial failure in this endeavour is an indication of the difficulty we face when attempting to crack this problem.

And while professional money managers clearly appear to have a keener appreciation of risk, they appear to achieve results of similar quality. As Bank of America reported in July 2018, only 17% of all large-cap active quant mutual funds managed to outperform the Russell 1000 benchmark in June 2018, with the average fund delivering a 0.1% return for the month vs. 0.6% for the Russell 1000. This is a rather typical outcome.

The problems with quantitative investing appear pervasive and persistent, as we have witnessed over time: from the spectacular implosion of LTCM 20 years ago to last summer’s GAM debacle, when the $150 billion asset manager had to freeze fund withdrawals after steep losses at one of its quant funds triggered a surge in client redemptions. These are far from isolated incidents.

The reasons why quantitative strategies are difficult to build

Broadly speaking, in any given year, about two-thirds of all active managers, whether quantitative or discretionary, underperform their benchmarks. Over longer time periods of 5 or 10 years, around 90% of all active managers underperform. All this raises the question: why is it so difficult for speculators to beat the markets? I have pondered this problem and researched its various aspects over the past 20 years. It’s a rich and fascinating subject to explore, but for the present discussion we can condense it down to three categories of problems: (1) conceptual challenges, (2) model risk, and (3) organizational issues.

1) Conceptual challenges

In formulating various trading algorithms, quantitative analysts typically work with ideas and theories borrowed from natural sciences. But while physics and applied mathematics deal with the mechanical and logical attributes of natural phenomena, markets reflect the aggregate psychology of their human participants. The difference is very significant. The interaction of inanimate particles or fluids might be sufficiently well understood to make prediction of certain behaviors possible. By contrast, human conduct doesn’t conform to the crisp laws of physics or mathematics. In his book “My Life as a Quant,” physicist and quantitative analyst Emanuel Derman[6] reflects on this point:

In physics, the beauty and elegance of a theory’s laws, and the intuition that led to them, is often compelling, and provides a natural starting point from which to proceed to phenomena. In finance, more of a social than a natural science, there are few beautiful theories and virtually no compelling ones, and so we have no choice but to take the phenomenological approach. There, much more often, one begins with the market’s data and calibrates the model’s laws to fit…[7]

What Derman relates is a formidable challenge for quantitative analysts and their employers. Starting with data and working backwards toward a working hypothesis hinges on inventiveness and conceptual thinking in a domain that is complex as well as abstract. Mired in numbers and lacking any tangible concepts to hold onto, quantitative analysts can easily churn out erroneous hypotheses whose flaws can be very difficult to recognize. In such a domain, intellectual exertion can distort ideas and lead analysts to lose sight of common sense and clear thinking. The more abstract the subject matter, the more ways we have to reach mistaken conclusions.
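Derman’s point can be made concrete with a toy experiment. Below is a minimal sketch in Python, with entirely illustrative parameters, of the phenomenological approach gone wrong: we “calibrate” the lookback of a simple trend-following rule to synthetic random-walk prices, which by construction contain no exploitable signal, and the best in-sample parameter can still look profitable, a spurious result that typically evaporates out of sample:

```python
import numpy as np

# Synthetic random-walk prices: by construction there is nothing to find.
rng = np.random.default_rng(42)
prices = 100 + np.cumsum(rng.normal(0, 1, 2000))
train, test = prices[:1000], prices[1000:]

def ma_strategy_pnl(p, lookback):
    """P&L of a naive trend rule: long when price is above its moving average."""
    ma = np.convolve(p, np.ones(lookback) / lookback, mode="valid")
    px = p[lookback - 1:]                    # prices aligned with the MA
    position = np.sign(px[:-1] - ma[:-1])    # yesterday's signal...
    return float(np.sum(position * np.diff(px)))  # ...applied to today's move

# "Calibrate the model's laws to fit": pick the lookback that worked best in-sample.
best = max(range(5, 200), key=lambda lb: ma_strategy_pnl(train, lb))
print(f"best in-sample lookback: {best}")
print(f"in-sample P&L:     {ma_strategy_pnl(train, best):+.1f}")
print(f"out-of-sample P&L: {ma_strategy_pnl(test, best):+.1f}")  # typically flat or negative
```

The exercise is contrived, but it shows how starting from data and searching for parameters that “fit” will reliably produce hypotheses that look good in-sample even when no real structure exists.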

2) Model risk

Even supposing that we have done a good job analyzing data and that we formulated a valid hypothesis, we still face another daunting challenge: making sure that our models correctly fulfil their intended purposes. This problem spills into the domain of software programming.

Models are normally implemented in software programs that may require thousands of lines of code, large databases and a suitable user interface. Creating such programs involves its own peculiar set of risks that only rarely receives adequate attention. Software code is seldom free of errors, and these are often extremely difficult to identify before they cause an adverse outcome. If you pay attention to the daily news flow, you’ll notice countless examples of model/software issues that result in serious setbacks. Here are a few examples:

  • On October 1, 2013 the U.S. administration under President Barack Obama launched the much-anticipated government medical insurance market and its website, Healthcare.gov. The government spent some $600 million developing the website, which turned out to be such an unmitigated disaster that fully ten days after launch, not a single person could be confirmed to have successfully enrolled.
  • In March of 2013, UK intelligence agency MI5 scrapped a major IT project to centralize the agency’s data stores. The work became such a morass that MI5’s director at the time, Sir Jonathan Evans, decided to abandon the project altogether and restart from scratch with a completely new team of IT professionals. According to The Independent, the abandonment of the project cost MI5 about $140 million.
  • In late 1999, the Mars Climate Orbiter crashed into Mars because of a failure to convert imperial measurement units to metric: one engineering team’s software produced thrust data in pound-force seconds while the navigation software expected newton seconds (a units mix-up of the kind sketched in the code after this list).
  • Shortly afterwards, a sister space vehicle, the Mars Polar Lander, also smashed into Mars after a software flaw caused its descent engine to shut down prematurely, well above the surface.
  • In 1996, the European Ariane 5 rocket disintegrated about 40 seconds after launch due to a software error: an unhandled numeric overflow in the code of its inertial reference system.
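The Mars Climate Orbiter failure above is the archetypal units bug, and it is worth seeing why such mistakes pass silently. Here is a minimal, purely illustrative Python sketch (none of it is the actual JPL code) showing that bare numbers let incompatible units mix without complaint, while even a light unit-aware wrapper turns the same mistake into a loud failure:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Quantity:
    value: float
    unit: str

    def __add__(self, other: "Quantity") -> "Quantity":
        # Refuse to combine incompatible units instead of silently mixing them.
        if self.unit != other.unit:
            raise TypeError(f"cannot add {self.unit} to {other.unit}")
        return Quantity(self.value + other.value, self.unit)

# With bare floats, the mix-up passes silently: 4.45 lbf*s treated as N*s.
total_wrong = 100.0 + 4.45            # wrong by a factor of ~4.45, no error raised

# With unit-aware values, the same mistake fails immediately and loudly.
try:
    total = Quantity(100.0, "N*s") + Quantity(4.45, "lbf*s")
except TypeError as err:
    print("caught:", err)             # caught: cannot add N*s to lbf*s
```

The defensive pattern is trivial; the hard part, as the examples above show, is recognizing in advance which of the thousands of silent assumptions in a codebase deserve such guards.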

The list is long and interesting, including issues with motor vehicles, advanced military hardware and software, communication and navigation technology, medical diagnostic and treatment systems, and just about every other kind of technology that relies on computer software. In the financial industry, software errors don’t cause things to blow up, so they can remain hidden or even go undetected for a long time. However, every now and again things get bad enough to attract some publicity.

On August 1, 2012, New York brokerage Knight Capital implemented a trading algorithm that in a very short time caused the firm a direct cash loss of $440 million and a market cap loss of about $1 billion. The faulty algorithm bought securities at the offering price and sold them at the bid, and continued to do this some 40 times per second. Over about thirty minutes’ time, the algorithm wiped out four years’ worth of profits. In another example, in June 2010, an international bank’s algorithmic trading system acted on bad pricing inputs by placing 7,468 orders to sell Nikkei 225 futures contracts on the Osaka Stock Exchange. The total cost was more than $182 million. While the pricing error would have been rather obvious to any human participant, the trading algorithm proceeded to execute approximately $546 million of the orders before the error was caught. These two quantitative trading debacles are not isolated stories. I believe that model risk events are pervasive, but the vast majority of them remain unknown outside of the firms that experience them.
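To see how quickly the Knight-style error compounds, consider a back-of-the-envelope calculation. The trade size and spread below are illustrative assumptions, not Knight’s actual figures; the point is only the arithmetic of crossing the bid-ask spread on every round trip:

```python
# Buying at the offer and selling at the bid loses the spread on every round trip.
trades_per_second = 40          # rate reported in the Knight incident
duration_seconds = 30 * 60      # roughly thirty minutes
shares_per_trade = 100          # assumed lot size (illustrative)
spread = 0.04                   # assumed average bid-ask spread in dollars (illustrative)

round_trips = trades_per_second * duration_seconds
loss = round_trips * shares_per_trade * spread
print(f"{round_trips:,} round trips -> ${loss:,.0f} lost to the spread")
# 72,000 round trips -> $288,000 at these toy numbers; scale the order sizes
# and spreads toward Knight's actual flow and $440M in half an hour becomes plausible.
```

The mechanism is what matters: a rule that is guaranteed to lose a small amount per trade, executed at machine speed, converts a subtle logic error into a catastrophic loss before a human can intervene.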

Over the years I have personally come across a good many cases where an important part of a firm’s business process got bogged down due to poorly designed software tools. In each of these cases, frustration with the software dragged on for months, even years, and I am not aware of a single case where the issues were resolved satisfactorily. The usual course is eventually to abandon the software tools and return to the old manual process. The main reason these things happen is decision makers’ lack of appreciation of just how difficult it is to build, implement and maintain well-functioning software.

Indeed, coding errors are disconcertingly common. According to a multi-year study of 13,000 programs by Watts S. Humphrey of Carnegie Mellon University’s Software Engineering Institute, professional programmers on average make as many as 100 to 150 errors per 1,000 lines of code. As anyone who has ever worked in software engineering can attest, certain kinds of coding errors can be very difficult to identify and correct.
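Humphrey’s figures imply sobering arithmetic for any mid-sized trading system. The codebase size and the defect-removal rate below are illustrative assumptions, but they show why some latent bugs should be expected in production, not treated as a surprise:

```python
# Rough arithmetic implied by Humphrey's defect-density figures.
lines_of_code = 50_000              # assumed size of a mid-sized trading system
defects_per_kloc = (100, 150)       # Humphrey's reported range for fresh code
removal_rate = 0.95                 # assumed fraction caught by testing/review

for d in defects_per_kloc:
    injected = lines_of_code / 1000 * d
    remaining = injected * (1 - removal_rate)
    print(f"{d}/KLOC -> {injected:,.0f} defects injected, ~{remaining:,.0f} shipped")
# Even catching 95% of defects leaves a few hundred latent bugs in production.
```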

3) Organizational issues

Another important aspect of quantitative modelling involves organizational issues. This is particularly the case in larger organizations where quantitative analysis work is separate from, and subordinate to, the key decision-making functions. Particularly in organizations run by clubby management cliques, decisions are frequently based on influence, authority, or group allegiance rather than on a clear-minded analysis of ideas and facts. In such organizations, quality ideas are less likely to be recognized and given support. This is a weakness of many large organizations, even if it isn’t directly apparent to outside observers.

Getting it right is important and well worth it

Without a doubt, a confluence of the three categories of problems we just discussed makes the pursuit of the quantitative Holy Grail difficult and uncertain. But these problems are not insurmountable, and given the numerous important advantages quantitative trading offers over discretionary speculation, exploring its use is a worthwhile and necessary pursuit.

Among their many merits, quantitative strategies can be back-tested, and their actual performance can be measured against expected results, as the sketch below illustrates. They also impose decision-making discipline and eliminate human shortcomings like emotion and distraction. In this way they minimize rogue trader risk, which can be particularly destructive for trading firms.
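As a sketch of that first merit, here is a minimal, hypothetical monitoring check in Python: because a quantitative rule’s back-tested return distribution is explicit, drift in live results can be flagged mechanically rather than argued about. All numbers below are assumed for illustration:

```python
import math

# Expected daily return statistics taken from the strategy's backtest (assumed).
backtest_mean, backtest_std = 0.0004, 0.01

# Hypothetical live daily returns for the first 25 trading days.
live_returns = [-0.005 + 0.001 * ((i % 5) - 2) for i in range(25)]

n = len(live_returns)
live_mean = sum(live_returns) / n
stderr = backtest_std / math.sqrt(n)            # standard error of the live mean
z = (live_mean - backtest_mean) / stderr        # z-score vs. backtest expectation

print(f"live mean {live_mean:+.4f}, z-score vs backtest: {z:+.2f}")
if z < -2:
    print("live results inconsistent with backtest -> review the model")
```

No comparable mechanical check exists for a discretionary trader, whose “expected results” were never written down in the first place.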

Getting it right with the quantitative approach to speculation is advisable and necessary for many market participants. To achieve success, they must start with clear thinking and proceed with meticulous and disciplined adherence to best practices in systems engineering. Learning about systems engineering, or acquiring talent with this skill set, should prove a very worthwhile investment.

It is worth emphasizing, however, that systems engineering is not the same thing as quantitative analysis or software coding. In my experience, quantitative analysts, or quants, may be very good at math or physics and may even have decent programming skills, but very few of them have systems engineering skills, and they are liable at times to produce models that are complex, difficult to maintain, and incomprehensible, in some cases even to their own authors.

 

Alex Krainer is a hedge fund manager and commodities trader based in Monaco. He wrote the book “Mastering Uncertainty in Commodities Trading.”


[1] Durden, Tyler: “Quants Dominate The Market; Unexpectedly They Are Also Badly Underperforming It” – ZeroHedge, 15 July 2017.

[2] Kapito made these remarks at a Barclays conference in September 2016. Source: Durden, Tyler: “BlackRock’s Robo-Quants Are On Pace To Post Record Losses” – ZeroHedge, 11 January 2017.

[3] I have sought to untangle this mystery at some length in my book, “Mastering Uncertainty in Commodities Trading.”

[4] Robson, Ben. “Currency Kings” – McGraw-Hill Education, 2017.

[5] MetaQuotes contestants traded fictitious trading accounts and therefore had no down-side risk nor skin in the game apart from the effort to put together their strategies. They could therefore make gambles that would be unacceptable to most managers running real investment assets.

[6] Emanuel Derman had been the chief quantitative analyst at Goldman Sachs for 17 years.

[7] Derman, Emanuel: “My Life as a Quant: Reflections on Physics and Finance” – John Wiley & Sons, Inc., Hoboken, New Jersey, 2004.

