Today I taught a really nice paper to my MBA class, "The High-Frequency Trading Arms Race" by Eric Budish, Peter Cramton, and John Shim. I've been fascinated by high-frequency trading for a while. (See some previous posts under the new "trading" label on the right.)
Eric, Peter and John look at the arbitrage between the Chicago S&P500 e-mini future and the New York S&P500 SPDR. This is a nice case, because there are no fancy statistical strategies involved: high speed traders simply trade on short-run deviations between these two essentially identical securities. Some cool graphs capture the basic message.
First, we get to look at the quantum-mechanical limits of asset pricing. At a one hour frequency, the two securities are perfectly correlated.
But as we look at finer and finer time intervals, price changes become less and less correlated. If the ES rises in Chicago, somebody has to send a buy message to New York. We write down Brownian motions for convenience, but at very high frequencies they break down.
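As a toy illustration of why that happens, here is a simulated sketch (all numbers are hypothetical, not from the paper): one Brownian "true" price observed in Chicago, and the same path observed in New York with a fixed latency lag. For a lag L and a sampling interval T, the increments overlap over a fraction roughly (T - L)/T, so correlation falls to zero once the interval shrinks below the lag.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: one Brownian "true" price; New York sees the same
# path delayed by a fixed lag, mimicking Chicago-to-New York latency.
n_ms = 3_600_000            # one hour of millisecond ticks
lag_ms = 50                 # assumed one-way information lag
true_price = np.cumsum(rng.normal(0, 0.01, n_ms))
chicago = true_price
new_york = np.concatenate([np.full(lag_ms, true_price[0]),
                           true_price[:-lag_ms]])

# Correlation of price changes at coarser and coarser sampling intervals:
# roughly (T - lag) / T, so near 1 at one second and near 0 below the lag.
for interval_ms in (1000, 100, 10):
    dc = np.diff(chicago[::interval_ms])
    dn = np.diff(new_york[::interval_ms])
    corr = np.corrcoef(dc, dn)[0, 1]
    print(f"{interval_ms:>5} ms returns: correlation = {corr:.2f}")
```

With a 50 ms lag, the simulated correlation is near 0.95 at one second, about 0.5 at 100 ms, and essentially zero at 10 ms, which is the qualitative pattern in the paper's figures.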
It's not obvious this activity "adds liquidity." If you leave a SPY limit order standing, then the fast traders will pick you off when they see the ES rise before you do. The authors call this "sniping."
You can see the more jagged ES price. According to the authors, Chicago is where the "price discovery" happens, as Chicago prices lead New York. The New York SPY includes dividends, which the Chicago futures do not, so New York is the natural home of "long-term" traders. Once again, we see the interesting pattern of one market handling "price discovery," with that price then communicated to another market.
How has high-frequency trading affected the market? On the left, they plot the correlation of price changes as a function of time interval, year by year. In 2005, the correlation was still zero at 100 ms. By 2011, the correlation at 100 ms had risen to 0.5, and it stayed pretty good down to 40 ms. The boundary of high correlation got shorter and shorter.
Next, they calculate potential arbitrage opportunities: times when the price difference exceeds the bid-ask spread. They also calculate how long those opportunities last. As you can see below, the effect of high frequency trading has been to dramatically reduce the duration of arbitrage opportunities. Once the prices diverge by more than the bid-ask spread, in 2005 that divergence could last 100 ms. Now, that divergence seldom lasts more than 10 ms.
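The duration measurement can be sketched in a few lines (a hypothetical illustration of the idea, not the authors' code): flag every tick where the absolute price gap exceeds the spread, then measure how long each maximal run of such ticks lasts.

```python
import numpy as np

def arb_durations(gap, spread, dt_ms=1):
    """Length in ms of each maximal run where |gap| > spread.

    gap    : array of per-tick price differences (appropriately scaled)
    spread : bid-ask spread threshold
    dt_ms  : milliseconds per tick
    """
    open_ = np.abs(gap) > spread
    edges = np.diff(open_.astype(int))        # +1 = window opens, -1 = closes
    starts = np.where(edges == 1)[0] + 1
    ends = np.where(edges == -1)[0] + 1
    if open_[0]:                              # window already open at start
        starts = np.concatenate([[0], starts])
    if open_[-1]:                             # window still open at end
        ends = np.concatenate([ends, [open_.size]])
    return (ends - starts) * dt_ms

# toy example: the gap spikes above the spread twice
gap = np.array([0, 0, 5, 5, 5, 0, 0, 5, 0], dtype=float)
print(arb_durations(gap, spread=2.0))   # [3 1]: windows of 3 ms and 1 ms
```

The paper's finding is then a statement about the distribution these durations follow: by 2011 its mass had shifted from around 100 ms down to under 10 ms.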
You might think that the profitability of arbitrage has declined. In one sense it has not:
These are the profits per opportunity, essentially a measure of how wide the price spreads are independently of how long they last. The price dispersion seems to be the same as ever, it just goes away much more quickly than it used to.
In sum, we get here a very clean case of what high frequency trading does and how it affects prices in one market. It is lovely to see the effect of "arbitrageurs" making markets "more efficient."
But is this efficiency really worth it? Does society really gain enough from having New York SPDR prices reflect Chicago futures prices 100 ms sooner to justify laying ever-faster cable between the two places? Does high-frequency trading make markets "more liquid" or just "more efficient?"
The theory part of the paper examines the "arms race" of high frequency trading. That race is especially clear here. If others trade this opportunity at 10 ms, and you can get there in 9 ms, you get to pick off all the stale quotes and leave the other traders nothing. In turn, this arms race results because both markets are limit order books in which you get everything if you place an order one nanosecond before the other guy, yet prices must be discrete. Classic economics predicts an overinvestment in speed in that game.
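The rent-dissipation logic behind that prediction fits in a back-of-the-envelope calculation (the numbers here are made up, purely for illustration): if the single fastest firm captures the whole arbitrage pool, symmetric zero-profit entry means the industry as a whole spends roughly the entire pool on speed, however many firms compete.

```python
# Toy winner-take-all race with made-up numbers. N identical firms each
# sink a speed cost c; the single fastest captures the whole annual
# arbitrage pool P. Each wins with probability 1/N, so the symmetric
# zero-profit condition P/N - c = 0 pins down c, and industry spending
# N*c equals P: the rents are dissipated on speed regardless of N.
P = 100.0                      # hypothetical arbitrage pool, $ millions
for N in (2, 5, 10):
    c = P / N                  # zero-profit investment per firm
    print(f"N={N:>2}: each firm spends {c:5.1f}, industry total {N * c:.1f}")
```

The total is always the whole pool, which is why the authors frame the arms race as a pure social waste rather than a cost that competition erodes.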
The theory part of the paper explores this arms race game and a natural proposal: Why not have an auction once per second? You submit anonymous bids, and once per second supply equals demand.
This means that people submitting the same price may have to share fulfillment of the order, and wait a second if they want to buy more. I'm all for efficient markets, but maybe one second is efficient enough.
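One clearing round of such a batch auction might look like the following sketch (my own illustrative code, not the mechanism's actual specification): collect all bids and asks over the interval, pick the single uniform price that maximizes traded volume, fill orders in price priority, and ration the marginal price level pro rata.

```python
def clear_batch(bids, asks):
    """Uniform-price clearing of one batch (illustrative sketch).

    bids, asks : lists of (price, qty)
    Returns (clearing_price, bid_fills, ask_fills); fills are quantities
    in the same order as the input lists.
    """
    prices = sorted({p for p, _ in bids} | {p for p, _ in asks})

    def volume(p):
        demand = sum(q for bp, q in bids if bp >= p)
        supply = sum(q for ap, q in asks if ap <= p)
        return min(demand, supply)

    p_star = max(prices, key=volume)   # price that maximizes traded volume
    q_star = volume(p_star)

    def ration(orders, is_bid):
        # fill in price priority; the marginal price level shares pro rata
        eligible = [(p, q, i) for i, (p, q) in enumerate(orders)
                    if (p >= p_star if is_bid else p <= p_star)]
        eligible.sort(key=lambda t: (-t[0] if is_bid else t[0]))
        fills = [0.0] * len(orders)
        remaining = q_star
        j = 0
        while j < len(eligible) and remaining > 0:
            level = [t for t in eligible if t[0] == eligible[j][0]]
            level_qty = sum(q for _, q, _ in level)
            frac = min(1.0, remaining / level_qty)
            for _, q, i in level:
                fills[i] = q * frac
            remaining -= min(level_qty, remaining)
            j += len(level)
        return fills

    return p_star, ration(bids, True), ration(asks, False)

# two bids at the same price split a smaller supply equally
price, bf, af = clear_batch(bids=[(100.0, 10), (100.0, 10)],
                            asks=[(99.0, 10)])
print(price, bf, af)   # 99.0 [5.0, 5.0] [10.0]
```

In the example, the two tied bids each get half the available supply, which is exactly the "share fulfillment" property of the proposal: at the margin you split the fill and wait for the next batch, rather than winning everything by being a nanosecond faster.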
Put another way, if it is advantageous to specify a minimum tick size, so prices become discrete, maybe it is advantageous to specify a minimum time interval as well. Computers operate on a "clock" so that all the signals settle down before information is transferred. That might be a good design for markets as well. The main question I can see is how this affects simultaneous orders placed in different markets.
Hopefully, I can summarize the theory in a future post.