IOSCO, the International Organization of Securities Commissions, has set August 12th as the deadline for responses to its Consultation Report on the “Impact of Technological Changes on Market Integrity and Efficiency”. IOSCO’s input has a long-term impact on the evolution of rules by national regulators, and in this instance IOSCO is responding to a specific request from the G20 and the Financial Stability Board.
The Consultation Report reviews some of IOSCO’s previous work on related topics. Whether intended or not, the document’s use of phrases such as "no universally acknowledged method for determining", "precise quantitative assessment ... is challenging", "empirical evidence is still scarce", and "further research is necessary" simply serves to highlight the paucity of detailed empirical evidence available to support well-crafted regulation in this space.
The report does clearly set out two useful definitions of regulators’ key goals:
“Market integrity is the extent to which a market operates in a manner that is, and is perceived to be, fair and orderly and where effective rules are in place and enforced by regulators so that confidence and participation in the market is fostered.
Market efficiency refers to the ability of market participants to transact business easily and at a price that reflects all available market information. Factors considered when determining if a market is efficient include liquidity, price discovery and transparency.”
And there are definitely some interesting nuggets, including the following on page 12 (my emphasis added)…
“For instance, the use of sophisticated low-latency algorithmic trading techniques may prompt less sophisticated traders to withdraw from the market as a result of their fear of being gamed by low latency firms that use faster technology.
Some anecdotal evidence presented to IOSCO suggests this may be particularly true of traditional institutional investors, who, as fundamental investors, are supposed to base their trading decisions on the perceived fundamental value of securities. If such participants withdraw, reflecting a loss of faith in the integrity of the market, the information content of public market prices may be altered as a knock-on effect. This may potentially result in a less efficient price formation process and possibly cause others to reduce their participation.”
The above quote raises some interesting questions:
• Are “traditional” institutional investors and the brokers that service them really to be considered “less sophisticated”? Based on their public marketing materials, many large brokers now offer algorithmic trading solutions that encapsulate the same techniques as those used by low-latency algorithmic firms.
• Does IOSCO consider “anecdotal evidence” of a “fear of being gamed” to be adequate grounds for further regulation, or are they actively trying to highlight the need to establish clearer empirical evidence (to establish how widespread the ‘fears’ are and/or to establish whether the ‘fear’ is warranted by evidence of actual gaming)?
• Absent fundamental investors, does it make sense that the “information content” of prices would be altered (and if so, for better or worse), and would this necessarily result in “less efficient price formation”?
A related question is raised on page 27:
“… a challenge posed by HFT is the need to understand whether HFT firms’ superior trading capabilities result in an unfair advantage over other market participants, such that the overall fairness and integrity of the market are put at risk. In the case of HFT, it has been argued that this advantage arises due to the ability to assimilate market signals and execute trades faster than other market participants.”
With or without empirical evidence, there’s no denying that many institutional investors are afraid of HFT (which encapsulates a number of strategies employed by different types of market participant), and that the perception of whether the market is “fair” is of huge significance. So how do we know whether the fears of gaming and unfair advantage are rational or irrational?
There are three key questions to answer:
1. Is there actually a zero-sum game competition between institutional investors and firms using HFT in which one side is the winner and the other the loser?
2. If there is such a competition, how could we measure the extent to which institutional investors are losing out?
3. If institutional investors are losing out, how can we determine whether the winners enjoy an unfair advantage, or are behaving in ways which constitute market manipulation/abuse?
1. Is there a zero-sum game competition?
- For various reasons the market has struggled to reach a consensus on this question:
- Firms using HFT implement a number of strategies – from those that provide resting liquidity to those that are entirely aggressive in nature. Some HFT activities may therefore reduce investors’ execution costs, whilst others may exacerbate market momentum, so there may be a mixture of win-win and win-lose outcomes (we discussed this before).
- It’s not clear how the profits of HFT liquidity providers compare to those of the traditional market makers whose role they are fulfilling to some extent (by bridging the temporal gap between the arrival of natural buyers and sellers). But if traditional market makers have been ‘squeezed out’ by more efficient, automated firms, surely that should represent an overall saving to market users?
- But another intriguing question is the extent to which HFT strategies actually compete with other (“less sophisticated”) market participants…
- One thing we monitor at Turquoise is each member’s ‘hit rate’ – their ability to capture the ‘displayed liquidity’ they see when originating aggressive orders (a sketch of this calculation follows the list below).
- If speed conveyed a material advantage, and if HFT and other participants were competing directly for the same liquidity, then we would (for example) expect co-located algorithmic firms to have a higher ‘hit rate’ than non co-located agency brokers (who by virtue of being slower would miss out on capturing liquidity).
- But we actually see the exact opposite: apparently less sophisticated agency brokers achieve higher hit rates (consistently above 95% in some cases) than their supposed ‘competitors’ (below 80%). This probably means there is no direct competition between firms with such different strategies and trading horizons. Whilst firms using HFT may compete with one another, and are very focussed on latency as a source of relative advantage, brokers executing institutional flow seem largely uncorrelated with them, and have not tended to focus so much on latency because it does not appear to be necessary to achieve best execution with a high degree of certainty.
- So we would recommend that regulators investigate whether our data is representative of the broader market, in which case there might not be a case to answer in respect of speed conveying an advantage.
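To make the metric concrete, here is a minimal sketch of how such a per-member hit rate might be computed from aggressive-order events. The field names, member names, and figures below are illustrative assumptions for this post, not Turquoise’s actual data model or data.

```python
from collections import defaultdict

def hit_rates(order_events):
    """Compute per-member hit rates from aggressive order events.

    Each event is a dict with illustrative fields:
      member:    the originating member firm
      attempted: displayed quantity the member sought to capture
      filled:    quantity actually executed against that displayed liquidity
    """
    attempted = defaultdict(float)
    filled = defaultdict(float)
    for ev in order_events:
        attempted[ev["member"]] += ev["attempted"]
        filled[ev["member"]] += ev["filled"]
    # Hit rate: proportion of targeted displayed liquidity actually captured.
    return {m: filled[m] / attempted[m] for m in attempted if attempted[m] > 0}

# Hypothetical illustration: an agency broker capturing nearly all the
# liquidity it targets, vs. a co-located firm racing similar firms for
# the same quotes and therefore missing more often.
events = [
    {"member": "AgencyBrokerA", "attempted": 1000, "filled": 980},
    {"member": "HFTFirmB",      "attempted": 1000, "filled": 760},
]
print(hit_rates(events))  # {'AgencyBrokerA': 0.98, 'HFTFirmB': 0.76}
```

The calculation itself is trivial; the interesting question is whose hit rates cluster where, and our observation above is that the two groups barely overlap.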
2. How could we measure the extent to which institutional investors are losing out?
- Any “advantage” enjoyed by firms using HFT, and/or the “gaming” for which they might be responsible, should presumably be reflected in higher trading costs for traditional investors, and should be measurable by Transaction Cost Analysis (TCA) providers. But how might this be measured in isolation from all the other dynamic factors in the market? We should be looking for evidence of rising realised trading costs (market impact), or of a rising proportion of orders that cannot be completed due to prices moving adversely (opportunity cost), in a manner that controls for concentration amongst asset managers, general market volatility, and other such factors. We suggest two areas for consideration where data should be readily available to facilitate discussion.
- First, for index managers, who have less choice over their holdings and typically complete execution of all orders (so their costs should materialise as market impact rather than opportunity cost), we should search for evidence of growing underperformance vs. their index benchmarks. Such a trend, if present, would be difficult to attribute to specific aspects of market structure, but might support or challenge the concern that current market structure somehow disadvantages institutional investors.
- Second, for asset managers more widely, and looking at opportunity costs, we should look for evidence of a degradation in costs for liquid stocks (where HFT activity is more prevalent and fragmentation is greater) relative to illiquid stocks. We would expect TCA providers to have data to support such a study (a sketch of one such controlled comparison follows this list).
- We would recommend that regulators search for empirical evidence to test the argument that institutions are being disadvantaged by either the market structure or the behaviour of some participants.
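As an illustration of the kind of controlled comparison we have in mind, the sketch below fits a simple linear model of realised cost against volatility, a liquid-segment indicator, and time. Everything here is a hypothetical framing of the study, not an actual TCA methodology; the variable names and the model form are our assumptions.

```python
import numpy as np

def liquid_illiquid_cost_gap(costs_bps, volatility, is_liquid, time_index):
    """Test whether realised costs in liquid names are degrading relative
    to illiquid names over time, controlling for market volatility.

    All inputs are 1-D NumPy arrays, one element per executed order:
      costs_bps:  realised cost (e.g. implementation shortfall) in basis points
      volatility: contemporaneous market volatility for the order's stock
      is_liquid:  1.0 if the stock is in the liquid segment, else 0.0
      time_index: when the order traded (e.g. months since study start)

    Illustrative model:
      cost = a + b*volatility + c*is_liquid + d*(is_liquid*time) + e*time
    A clearly positive d would be consistent with costs in liquid,
    HFT-heavy, fragmented names worsening relative to illiquid names.
    """
    X = np.column_stack([
        np.ones_like(costs_bps),   # intercept
        volatility,                # control for general market volatility
        is_liquid,                 # level difference between segments
        is_liquid * time_index,    # the trend gap we actually care about
        time_index,                # common trend in costs
    ])
    coeffs, *_ = np.linalg.lstsq(X, costs_bps, rcond=None)
    return dict(zip(["const", "vol", "liquid", "liquid_x_time", "time"], coeffs))

# Hypothetical usage, with one observation per order:
# params = liquid_illiquid_cost_gap(costs, vol, liquid_flag, months)
# params["liquid_x_time"] > 0 would warrant a closer look.
```

A real study would of course need standard errors, plus controls for order size and manager concentration; the point is simply that the question is testable with data TCA providers already hold.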
3. And if there is evidence of institutional investors systematically losing out to faster or more sophisticated market participants, how do we determine whether this is due to an “unfair advantage”, to “gaming”, or to factors that will be eroded naturally over time?
- IOSCO suggests that the “advantage arises due to the ability to assimilate market signals and execute trades faster”. It seems likely to us that it is the first of those two abilities which matters most, and as such, we wonder whether anything can or should be done, since IOSCO itself promotes efficient markets in which participants transact at “a price that reflects all available market information”. That also leads us to conclude that suggested initiatives such as minimum resting times and order-to-trade ratio caps, which seek to control or limit execution, will have no positive impact in terms of reducing any information advantage enjoyed by firms using HFT (but will have a host of negative consequences for market quality and costs to issuers and investors).
- Others have suggested that non-HFT participants cannot afford to make the infrastructure investments that HFT firms make, and that the “unfair advantage” flows from this “barrier to entry”. But it seems clear from conversations with brokers and technology vendors that any such “barriers” are falling quickly as hardware and software solutions for low-latency data processing and trading become commoditised.
- And returning to the definition of market efficiency used by IOSCO, we have questioned before whether markets might have become too transparent, and too efficient, for the liking of many institutional investors seeking to trade in large size. Does HFT vex institutional investors precisely because it ensures that “prices reflect all available information”, particularly when the “information” in question is the institutions’ unfulfilled trading intentions? Have the developments in European market structure and the growth of HFT created a market more suited to ‘retail-sized’ business? And does the creation of an efficient ‘retail-sized’ market ignore the needs of the institutional investor community? Philip Warland, Head of Public Policy at Fidelity International, expressed concerns of this nature at the recent SunGard City Day in London, saying: “We have spoken to the European Commission to highlight that too much transparency actually undermines our ability to achieve best execution, and will ultimately hurt investor returns.”
- And finally, how can we determine whether “gaming” plays a part in the discovery of such “information”, and how do we write market rules that preclude such behaviour? This is possibly the most challenging and contentious issue of all, and one on which, as the author of the rules for our own market, we would welcome thoughtful contributions. We look forward to the European Commission’s proposals on Market Abuse in this respect.
And on a separate but related topic, we note that the SEC has voted unanimously for adoption of a “Large Trader Reporting Regime”, under which a unique Large Trader ID (LTID) will be assigned to every large market participant (brokers, proprietary traders, hedge funds and asset managers), and member firms will, upon request, report all trades (with timestamps) by those firms to the SEC. The assignment of these unique IDs for each market participant will allow regulators to piece together the activity of these firms irrespective of how many brokers they use for execution. But it also opens the door to two further developments –
- If brokers were required to pass on the LTID with every order routed to market, surveillance by venues could then be undertaken at the granularity of the end client. This would reduce false positives (which arise because brokers typically trade for many clients simultaneously) and allow for surveillance of end participants independent of how many brokers they use (a sketch of such LTID-level consolidation follows this list). Of course, venues would likely need to adjust their trading interfaces to accommodate the LTID on order messages.
- Provision of LTIDs to the venues would remove a significant obstacle to the creation of a Consolidated Audit Trail, through which markets might be required to disclose to regulators in real time the detailed activity (orders and trades) of all participants – although a number of other practical and philosophical issues remain (see a prior blog on this topic).
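To illustrate why a universal identifier matters, here is a minimal sketch of how activity could be consolidated per trader once every order carries an LTID. The record structure, field names, and example values are hypothetical; the SEC rule itself does not prescribe any particular message format.

```python
from collections import defaultdict

def consolidate_by_ltid(order_records):
    """Group order/trade records by Large Trader ID into a single
    per-trader timeline, regardless of which broker routed each order.

    Each record is a dict with illustrative fields:
      {"ltid": ..., "broker": ..., "symbol": ..., "side": ...,
       "qty": ..., "timestamp": ...}
    """
    by_trader = defaultdict(list)
    for rec in order_records:
        by_trader[rec["ltid"]].append(rec)
    # Order each trader's consolidated activity chronologically, so a
    # surveillance system sees one coherent footprint per participant.
    for recs in by_trader.values():
        recs.sort(key=lambda r: r["timestamp"])
    return dict(by_trader)

# Hypothetical example: the same large trader routes through two brokers.
orders = [
    {"ltid": "LT-001", "broker": "BrokerA", "symbol": "VOD.L",
     "side": "BUY", "qty": 5000, "timestamp": 2},
    {"ltid": "LT-001", "broker": "BrokerB", "symbol": "VOD.L",
     "side": "BUY", "qty": 3000, "timestamp": 1},
]
print(consolidate_by_ltid(orders)["LT-001"])  # both orders, time-ordered
```

Without the LTID, the two orders above would appear to a venue as unrelated flow from two different members; with it, they are trivially attributable to a single participant.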
P.S. We encourage our clients to complete the Automated Trader 2011 Algorithmic Trading Survey