Monday 22 November 2010

Lots of Cooks

IOSCO (the International Organization of Securities Commissions) recently published a Consultation Report titled “Issues Raised by Dark Liquidity”. And the ECON Committee of the European Parliament voted on 9 November to adopt a report intended to influence the MiFID review, with an emphasis on encouraging the use of pre-trade transparent venues. So how similar were the recommendations?

IOSCO’s headline recommendations from its press release were rather mundane, reflecting existing best practice rather than suggesting any radically new ideas. But reading between the lines…


“Principle 3: In those jurisdictions where dark trading is generally permitted, regulators should take steps to support the use of transparent orders rather than dark orders executed on transparent markets or orders submitted into dark pools. Transparent orders should have priority over dark orders at the same price within a trading venue.”


Not too exciting really, given that most exchanges, MTFs, and ECNs already apply price-visibility-time priority. So why state the obvious, and what didn’t they say?

First, the US practice of Flash Orders (which the SEC proposes to ban) probably contravenes these guidelines, especially where marketable orders are reflected to preferred liquidity partners (whose liquidity is “dark”) prior to interacting with lit bids or offers in the venue’s book.

Second, the report goes on to emphasise that rather than restrict dark orders per se, regulators should instead “look at ways to incentivize market participants within the regulatory framework to use transparent orders… the key interest is in taking steps to ensure that there are adequate transparent orders in the marketplace.”

  • In Europe, MiFID sought to promote transparency by banning most hidden and discretionary orders in lit books. However, rather than encourage the use of transparent order types as anticipated, this tougher stance spawned the creation of discrete dark books (both MTF and broker-operated) which are isolated from the lit order books. The effect has been to demote transparent books in the order routing hierarchy for certain types of flow.

  • If European regulators were to take IOSCO’s recommendations to heart, they might instead consider encouraging more flexible use of dark orders within transparent order books (e.g. relaxing the LIS constraint), so as to encourage participants who want to post dark orders to use these venues. After all, lit books offer far greater certainty of execution, and so would be very attractive for smaller hidden orders.

  • Unfortunately, the political tide seems to be flowing in the other direction, with the ECON Committee contemplating whether further restrictions to dark order types and venues would “encourage” liquidity into lit venues. Personally, I doubt that “forcing” such orders to be displayed will produce the desired outcome. Lit markets have evolved, and put simply, they are just too transparent and too efficient for some investors/orders – with prices changing more quickly than ever to reflect slight imbalances between supply and demand. So I worry that further restrictions will drive liquidity out of the markets altogether.

Third, giving transparent orders priority over dark orders within each venue can only be expected to have a beneficial impact if much of the dark liquidity is in venues that also have transparent orders. With most of the dark liquidity residing in discrete dark MTFs or BCNs, this prescription lacks impact. Both the SEC and Canadian IIROC have considered going further, giving transparent orders priority vs. dark orders at the same price across markets, e.g. by requiring that dark pool executions always offer both participants material price improvement (e.g. at least one tick) vs. the best available lit price for equivalent size.

Turning to the ECON Committee report, it’s clear that it is a product of negotiation & compromise, and an attempt to square the many competing vested interests. The general thrust is that since MiFID introduced competition, the pace of innovation has outstripped the ability of regulators to keep up, and also that competition has produced some unexpected or unwelcome outcomes (including the emergence of non-display MTFs and broker crossing networks). The final report avoids some of the more over-protective prescriptions that had been in circulation previously and which would have harmed market quality – for that, the committee should be commended. But even in this compromise document, there were a few recitals/recommendations that caught my eye:


  • “Whereas market fragmentation in equities trading has had an undesired impact upon liquidity and market efficiency…”
    I think a more positive view of MiFID is warranted, because although many participants are still adjusting to the new landscape, spreads and liquidity depth are demonstrably superior in stocks that are subject to competitive trading (thankfully Spain’s failure to implement MiFID properly creates a useful control group), and trading tariffs are some 90% lower than they were pre-MiFID.

  • “Asks for an investigation by the Commission into the effects of setting a minimum order size for all dark transactions, and if it could be rigorously enforced so as to maintain adequate flow of trade through the lit venues in the interests of price discovery;”
    The target here is both MTF/Exchange dark pools and broker crossing networks. But what’s missing is any discussion as to what is meant by “adequate”. Certainly, there is no sign that price formation has been in any way weakened thus far, and economic theory suggests that price formation is much more robust than regulators/politicians are giving credit for. Personally, I have more sympathy for the SEC’s emphasis on “avoiding a two tier market” as a rationale for ensuring the pre-eminence of venues with non-discriminatory access. Again, I believe it would be better for market efficiency and liquidity if regulators allowed small dark transactions to be handled by lit venues, rather than see them ban such order types altogether.

  • “Suggests ESMA conduct a study of the maker/taker fee model to determine whether any recipient of the more favourable "maker" fee structure should also be subject to formal market maker obligations and supervision;”
    To me, this suggests a limited understanding of the topic and of market structure. Firstly, MTFs treat all members equally – and hence all MTF members (basically every major bank or brokerage house in Europe) are receiving the “maker” rebates for a proportion of their business. Secondly, and related to the point of equal treatment, most European markets no longer have a concept of market makers with particular privileges and obligations for liquid stocks. And thirdly, some MTFs pay rebates for passive liquidity, whilst others pay rebates for aggressive flow, making receipt of rebates a poor basis to define “market making”.

  • “Requests that no unregulated market participant be able to gain direct or unfiltered sponsored access to formal trading venues and that significant market participants trading on their own account be required to register with the regulator…”
    Many proprietary trading firms have been told by their domestic competent authorities that their activities are not subject to regulation, so this would be a significant change.

  • “Calls for an investigation into whether to regulate firms that pursue HFT strategies to ensure that they have robust systems and controls… and the ability to demonstrate that they have strong management procedures in place for abnormal events”
    Clearly, HFT firms are risk management specialists (people trading with their own money and seeking to capture low-alpha opportunities have a healthy appreciation for risk), and faced with failings of market infrastructure/regulation during the US flash crash, they behaved appropriately by stopping trading. Perhaps regulators would have preferred them to “stand in front of the train”, and keep buying in the face of a tsunami of sell orders – but that would simply convert a brief liquidity shock (albeit with nasty implications for consumer confidence in the integrity of markets) into a more serious systemic-risk issue that could have bankrupted firms and/or their clearing brokers.

  • “Asks for an investigation into OTC trading of equities and calls for improvements to the way in which OTC trading is regulated with a view to ensuring the use of RMs and MTFs in the execution of orders on a multilateral basis and of SIs in the execution of orders on a bilateral basis increases, and that the proportion of equities trading carried out OTC declines substantially”
    This is actually one of several recitals explicitly calling for a reduction in OTC trading in favour of transparent exchanges and MTFs. Still, the suggestion that “improvements to the way in which OTC trading is regulated” should lead to less OTC trading seems somewhat odd.

Even though some of the Committee’s recommendations might benefit MTFs such as Turquoise at the expense of OTC, I’m troubled by assertions/conclusions that contradict the empirical evidence and by the emphasis on “protecting” lit markets by restricting innovation and investor/intermediary choice, rather than by allowing them the flexibility to compete.

Hopefully CESR/ESMA will find a way to address the Committee’s concerns whilst recognising that MiFID is actually working, and that much of the anxiety over fragmentation and “opacity” can be attributed to growing pains that will subside as participants become accustomed to the new market structure.

As for unintended consequences, I’m willing to bet that these efforts to drive trading towards transparent venues will lead to a further proliferation of non-display MTFs. Any takers?

Monday 18 October 2010

The Fifteen Year ITCH

Back in May, a small US broker-dealer published a white paper highlighting that information published in relation to some exchange or MTF dark pools could allow participants to identify the direction and longevity of (supposedly dark) resting orders. In Europe, another broker-dealer brought the issue to the attention of their buyside clients, who demanded that the problem be addressed, forcing the affected venues to amend their data feeds (which they did within days). All water under the bridge, or so I thought.

But recently there have been further accusations that exchanges continue to “deliberately sell confidential order data to HFT firms”, with some suggesting that there’s a grand conspiracy amongst exchanges, regulators and HFT firms to defraud institutional investors. These new allegations are being levelled at exchanges in relation to their public lit order books. I think they’re wide of the mark and reveal a lack of appreciation for how public data feeds work.

Here’s a quote from my previous (in fact, first) blog entry:
“When exchanges first started offering electronic order entry and disseminating a public data feed, participants wanted to be able to identify their own orders and executions in the public data feed. This allowed participants to know their queue position in the order book, and to display this on a client front-end. It allowed them to perform better transaction cost analytics – by identifying which executions on the ‘tape’ were theirs. It allowed them to measure the latency of the public market data against their own Execution Reports. And it allowed multiple OMS and EMS systems within the firm to identify their own orders & executions without having to feed each system with drop-copies of the order entry/execution feed.”

I repeat this to illustrate that the inclusion of OrderIDs in exchanges’ public data feeds was a response to participant demand. Dark pools aside, participants do expect exchanges and MTFs to provide this information.

First, brokers and market data vendors need to build and maintain a copy of the order book for each instrument – converting individual order-add/amend/delete and trade messages disseminated by the market into a depth-of-book representation that can be used for trading decisions. The OrderID assigned to each order is essential to facilitating this process.
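
Purely by way of illustration (the message and field names below are simplified inventions, not any particular venue's actual protocol), the core of such a feed handler is little more than a dictionary keyed on OrderID:

    from dataclasses import dataclass

    @dataclass
    class Order:
        order_id: int
        side: str        # 'B' or 'S'
        price: float
        quantity: int    # remaining displayed quantity

    class OrderBook:
        """Minimal depth-of-book reconstruction from an ITCH-style feed."""

        def __init__(self):
            self.orders = {}   # OrderID -> Order

        def on_add(self, order_id, side, price, quantity):
            self.orders[order_id] = Order(order_id, side, price, quantity)

        def on_execute(self, order_id, executed_qty):
            # Reduce the resting quantity; drop the order once fully filled.
            order = self.orders[order_id]
            order.quantity -= executed_qty
            if order.quantity <= 0:
                del self.orders[order_id]

        def on_delete(self, order_id):
            self.orders.pop(order_id, None)

        def depth_at(self, side, price):
            # Aggregate the visible quantity resting at a given price level.
            return sum(o.quantity for o in self.orders.values()
                       if o.side == side and o.price == price)

Without the OrderID on every subsequent message, there would be no way to know which resting order an execution, amendment or cancellation referred to, and the book could not be maintained.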

Second, investors, brokers and markets all need to be able to relate individual trades back to the orders they belong to. This is essential for order management (e.g. to know the cumulative traded quantity and residual quantity for an order), for transaction cost analytics and for regulatory compliance amongst other things. The way this is typically achieved is by making the OrderID an attribute of each Trade. So it’s easy to find and sum all the Trades linked to a particular OrderID. This in turn makes it important that an OrderID is persisted throughout its lifetime – as changing the OrderID half way through would cause the ‘loss’ of the related trades, and consequently over-trade errors (which I know to be true from personal experience – a lowlight of my days as a trader).
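
Again as a simplified illustration (the OrderIDs and quantities are invented), the aggregation only works because the OrderID persists for the life of the order:

    from collections import defaultdict

    # Trades as they might appear on a feed: each carries the OrderID of the
    # resting order it executed against.
    trades = [
        {"order_id": 1001, "qty": 300},
        {"order_id": 1001, "qty": 200},
        {"order_id": 2002, "qty": 500},
    ]

    original_qty = {1001: 1000, 2002: 500}   # from the original order-add messages

    cum_qty = defaultdict(int)
    for trade in trades:
        cum_qty[trade["order_id"]] += trade["qty"]

    for order_id, original in original_qty.items():
        residual = original - cum_qty[order_id]
        print(f"Order {order_id}: traded {cum_qty[order_id]}, residual {residual}")

    # If order 1001 were re-issued mid-life under a new OrderID, the 500 shares
    # already traded would no longer be attributed to it and its residual
    # quantity would be overstated - exactly the over-trade risk described above.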

But, I also said then:
“The specs make it relatively easy to identify iceberg/reserve orders as soon as the visible peak is first refreshed, and also to identify pegged orders as soon as they are modified by the market.”

This observation seems to form the basis of the ongoing allegations. So I want to delve a little deeper and explain why changing the way these data feeds work would represent a huge cost to the industry for little or no benefit:

How does it work exactly?

  • When an Order is first received, an OrderID is assigned to it, and both communicated back to the participant (in the acknowledgement message) and disseminated in the public market data (allowing the participant to see where they stand in the book).
  • From that point forward, the OrderID is used to communicate any events affecting the order:
    • Each execution reported back to a participant carries the OrderID to which the trade belongs. If it was a visible order that traded, a single “Order Executed” message in the public data feed tells recipients that there has been a trade against the specified order, and hence that the remaining quantity in the book should be reduced. If it was the visible portion of an iceberg/reserve order, most venues communicate that there was an execution against the order, but that it remains alive with a new display quantity (typically with a loss of time priority).
    • For each order amendment or cancellation, the public data-feed refers to the OrderID and communicates which attributes have been amended.
    • And when the exchange automatically adjusts the price of a pegged order, the data feed refers to the OrderID and communicates the new price. (I believe some exchanges go so far as to label pegged orders explicitly, although I confess I don’t see a good reason to do so).

The persistence and dissemination of the OrderID in the public data feed is key to enabling participants to trade and manage their orders effectively, but also makes the market more transparent than some participants may have appreciated.

Exchange and HFT detractors argue that even if we arrived at this situation innocently, it’s still wrong, and that institutional traders had no idea that their information was being ‘compromised’ in this fashion. Given that nothing has changed since the now widely-used ITCH protocol arrived on the scene over 15 years ago (with its specification public ever since), it’s somewhat surprising that this is news to some market professionals. But, timing aside, how serious are the concerns about information leakage regarding iceberg and pegged orders, and should markets be changing their public data feeds to assuage the critics?

Ceasing to publish any OrderIDs in the public data would render market data useless and break most OMS systems. But, in relation to iceberg and pegged orders, could exchanges switch from publishing amendments to the existing OrderID to instead sending a cancellation of the order and then the addition of a replacement order (with a different OrderID)? This sounds alluring, except that:

  1. Without significant design changes, this would break the link between trades and the OrderID, and as I explained above, that’s not a good idea. Avoiding this would require very substantial investment by data vendors, brokers, OMS vendors etc – for which there is very little appetite.
  2. This would double the volume of market data being disseminated in relation to order amendments and iceberg executions (two messages instead of one).
  3. And most importantly - it wouldn’t materially reduce information leakage. Even if replacement OrderIDs were used in the scenarios above, iceberg executions and pegged order amendments would still be glaringly obvious to any consumer of the data feed, given the immediacy with which they follow executions or peg reference-price changes (see the sketch after this list). In short, we’d be making market data less efficient and forcing an overhaul of OMS systems for no good reason.
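
To make the third point concrete, here is a hypothetical sketch of the inference any consumer of the feed could still make if venues issued a brand-new OrderID after every execution; the field names and time threshold are my own invention rather than any venue's specification:

    REFRESH_WINDOW_NS = 50_000   # 50 microseconds; purely illustrative

    def infer_iceberg_refreshes(messages):
        """Flag adds that immediately follow an execution at the same price/side.

        `messages` is an iterable of dicts with keys: type ('EXECUTE' or 'ADD'),
        timestamp_ns, side and price - a simplified stand-in for a real feed.
        """
        suspected = []
        last_execution = None
        for msg in messages:
            if msg["type"] == "EXECUTE":
                last_execution = msg
            elif msg["type"] == "ADD" and last_execution is not None:
                same_level = (msg["side"] == last_execution["side"]
                              and msg["price"] == last_execution["price"])
                prompt = (msg["timestamp_ns"] - last_execution["timestamp_ns"]
                          <= REFRESH_WINDOW_NS)
                if same_level and prompt:
                    # A fresh order appearing at the just-executed price within
                    # microseconds looks exactly like an iceberg peak refresh,
                    # whatever OrderID it happens to carry.
                    suspected.append(msg)
        return suspected

Whatever identifier scheme is used, the timing signature gives the game away.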

So what should an institution do?

Frankly, they probably shouldn’t worry about it too much. If they’re using a sophisticated broker, then the broker will have already developed their algorithms to mitigate the risk of information leakage. They could insist that brokers don’t use the iceberg functionality of exchanges – but such a decision would come with a cost of less participation in the marketable liquidity passing through the exchange. They could insist that their brokers don’t use exchange pegging functionality – but I expect they’d find that most brokers already don’t (in fact, due to lack of demand, Turquoise didn’t implement pegging functionality in its new Millennium Exchange platform). Or they could insist that brokers cancel and replace orders rather than amending them (and again would find that this is common practice for many big brokers already).

The simple fact is that ‘lit’ markets are very transparent, exactly as regulators wanted them to be. Meanwhile, all the empirical data suggests that this improved transparency, the competition amongst various markets, and the supplanting of traditional market making by the electronic variety have coincided with an ongoing reduction in the total transaction costs experienced by institutional and retail investors alike. And whilst correlation doesn’t prove causation, this positive trend reduces the force of arguments that this level of transparency is harmful to institutional investors.

Thursday 30 September 2010

HFT Bashing

HFT bashing by politicians and the media seems to be locked into a self-reinforcing cycle. The media cites growing concerns amongst politicians and public officials who in turn point to the media as evidence of public concern warranting intervention.

There is lots of woolly thinking that ought to be challenged:

  • Recent suggestions that medium-term volatility in asset prices relative to underlying fundamentals is attributable to HFT are nonsense. By the most commonly accepted definition, HFT firms end each day (if not each minute) with no positions, so they cannot affect inter-day supply and demand unless their presence is precluding other classes of long term liquidity-supplying investors from participating in markets. If the medium-term volatility is growing, so are the opportunities for such contrarian investors, and it’s hard to see how HFT would be keeping them away.
  • Allegations that HFTs caused the May 6th flash crash by ‘withdrawing their liquidity’ are inconsistent with arguments that during normal market times the liquidity they provide is ‘ethereal’.
  • Talk of “growing evidence” that HFT is bad is too often just a reference to the clamouring of market participants with a particular vested interest or to the circus of media coverage. Indeed, the only empirical study I’m aware of seems to contradict most of the popular arguments levelled against HFT. It’s encouraging to hear that the UK Treasury is commissioning an independent study by one of its economists.
  • The fact that HFT firms are profitable (although less so of late according to the published results of several large players) does not necessarily mean that normal investors are losing out. It may be that HFT firms have supplanted traditional market makers and specialists and are providing the market with liquidity at lower cost than was previously the case.

And yet, despite the lack of coherent argument or empirical evidence thus far that HFT is detrimental, the debate seems to be progressing inexorably towards the adoption of measures to forcibly constrain HFT.

Clearly I’m un-persuaded that forcibly constraining HFT will improve market quality or benefit long-term investors, but intellectual curiosity compels me to consider some of the measures being recommended (usually by politicians) as a way to limit HFT participation in our public markets:

  • A proposed “minimum quote duration” would be counterproductive (by which I mean idiotic). It misses the point that not all HFT is ‘passive’ by nature – there are plenty of HFT firms whose trading flow is entirely ‘aggressive’. A minimum quote duration would allow aggressive HFT firms to systematically exploit brokers (and their investor clients) who would be unable to adjust/cancel their orders in response to price movements in other related instruments.
  • Rationing orders or capping the “order-to-trade ratio” of individual HFT firms might well reduce the number of orders/quotes each individual firm generates – but the medium term impact is likely to be the emergence of more HFT firms. If there are profitable strategies that individual firms are prevented from executing, others will eventually discover those strategies.
  • Imposing a tax on order messages or cancellations “to cover the cost of market infrastructure” seems somewhat draconian. Is it appropriate for politicians to tell commercial companies they must charge more for a service?
    • In the US context, where there is a centrally funded consolidated quote system (which seems increasingly obsolete), wouldn’t it be better to start by reforming the problematic formula that defines how consolidated quote data revenues are shared amongst markets (and then their participants) in relation to the number of quotes and trades generated (and which arguably creates commercial incentives to update quotes more frequently)?
    • In Europe, where regulators already worry that broker crossing networks and MTF dark pools might undermine price formation in lit markets, would introducing additional costs to provide liquidity in lit markets make sense? If the goal is to drive more liquidity towards the central, lit markets – is a new tax on their use the optimal way to achieve it?
  • The (apparently popular) suggestion that “market makers” should be subject to “obligations” to provide liquidity in times of market stress doesn’t appear to be grounded in reality.
    • Most European equity markets no longer have a formal market-maker designation – so rulebooks would need to be changed to establish which firms would be burdened by these new obligations.
    • No firm would take on an obligation to lose money (for that is exactly what is required to stabilise markets in times of stress) without being offered some counterbalancing privileges. Given MiFID’s emphasis on the “fair and non-discriminatory treatment” of all participants by exchanges and MTFs, should we welcome the creation of a privileged elite amongst market participants?
    • The reality is that the economics of liquidity provision have changed dramatically. Traditional market-making no longer exists because it ceased to be profitable. Given the competitive nature of HFT (many firms say that a typical strategy has a “shelf life” of only a few weeks), it’s unlikely that they have the scope to absorb the losses required by new obligations if they’re to be in any way effective. It just won’t work without sufficient “incentives” – which takes me back to the dangers of creating a privileged few.
    • It won’t work anyway. The benefits are illusory. No trading firm would sign up to the potential for unlimited losses – they’ll always want an “out” in extreme cases. For example, how could they be compelled to continue supplying liquidity if there’s a failure by exchanges to process orders or publish data in a timely fashion (as apparently happened on May 6th)?
  • Suggesting that all markets “synchronise their market data output” so as to prevent latency arbitrage is a total nonsense that would either require us to suspend the laws of physics or mandate all markets and all participants to operate from a single geographic location. In a technology enabled and geographically dispersed world there is no “single NBBO/EBBO” – it depends where you are relative to the different market centres.
  • Significantly increasing tick sizes would reduce the potential for client orders to be “stepped ahead of” at marginal cost. There would potentially be more liquidity at each price point, and thus stronger incentives to leave quotes live for longer (so as to reach the front of the queue). The minimum hurdles for a strategy to be profitable would be larger, and hence there would presumably be less HFT. This sounds like a viable approach, but unlike the other suggestions, the inherent costs are more obvious:
    • Spreads would be wider, resulting in higher trading costs for all market participants and for retail investors in particular (they typically submit marketable orders).
    • Wider spreads would increase the opportunity for brokers to offer clients price improvement via their internalisation services (BCNs, SIs) or via price-referencing midpoint MTFs – potentially resulting in more volume migrating away from lit markets.

So there’s no free lunch - surprise, surprise.


I eagerly await the conclusions of the Treasury’s study. And if the conclusion is that HFT is detrimental to market quality, I expect a lively debate about what to do about it. In the meantime, I expect that regulators will prioritise the prevention of another flash crash – and so expect (and support) further developments around market-wide volatility interruptions.

Wednesday 1 September 2010

Is laissez-faire fairest?

In the debate about high frequency trading, the arguments that HFT has distorted the market can be divided into two categories.
  • One set of arguments suggests that HFT firms are playing within the rules, but that the rules are wrong. Blame for this is often laid at the feet of market operators or regulators.
  • Another set of arguments suggest that HFTs are playing outside of the rules, and that the rules are not being adequately enforced. Blame for this is often laid at the feet of market operators or regulators.

I have shared my own (broadly positive) opinions on HFT previously, but the propensity of others to blame market operators for changes to the nature of the markets leads me to question the approach to surveillance and enforcement in our (now) competitive market landscape.

An easy observation is that no-one is sufficiently well informed to reliably spot market manipulation:

  • As an MTF operator, we are only responsible for the conduct of participants on our MTF. Assuming that a devious participant smart enough to manipulate the market would also be smart enough to disguise their intentions by using multiple MTFs or exchanges, we can only hope to catch the stupid ones.
  • Secondly, the data we have for our own MTF only identifies the participant entering the order. If we identify a set of orders or trades that constitute (in our judgement) suspicious activity by a participant, we cannot know (without calling them to ask) whether they are attributable to one end-client or many. So a surveillance system looking for certain patterns of behaviour typically generates many “false positives” that turn out to be unconnected trades by a number of different end clients (the sketch after this list illustrates the point). Add to that the fact that end clients (whether asset managers, hedge funds or high-frequency prop traders) can and do split their business amongst multiple brokers, and it becomes harder still to identify what individual participants are doing.
  • So if somebody wanted to monitor trading activity across all venues, they would need to combine “attributed” (identifying the owner) and “privileged” (including non-public information on hidden and iceberg orders) data from all the venues. This just isn’t possible today.
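
Purely to illustrate the point (the records, field names and threshold logic below are invented for this sketch, not a description of our actual surveillance rules), here is how a crude pattern check keyed on the participant alone can flag what is in fact unrelated activity by two different end clients:

    # Hypothetical order records: today an MTF sees only the 'participant';
    # the 'client' field is what a client identifier of the kind discussed
    # below would add.
    orders = [
        {"participant": "BROKER_A", "client": "fund_1", "side": "BUY",  "qty": 5000},
        {"participant": "BROKER_A", "client": "fund_2", "side": "SELL", "qty": 5000},
    ]

    def looks_like_wash_trading(records, key):
        """Crude check: the same entity (by `key`) trades both sides in size."""
        buys = sum(r["qty"] for r in records if r["side"] == "BUY")
        sells = sum(r["qty"] for r in records if r["side"] == "SELL")
        same_entity = len({r[key] for r in records}) == 1
        return same_entity and min(buys, sells) > 0

    # Keyed on the participant alone, BROKER_A is flagged - a false positive...
    print(looks_like_wash_trading(orders, "participant"))   # True
    # ...whereas end-client attribution resolves the two sides to different clients.
    print(looks_like_wash_trading(orders, "client"))         # False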

This situation is a few years old, but the US Flash Crash has recently prompted regulators to try and tackle it – with the SEC having made the most detailed proposals. Broadly speaking, there are two proposed technical solutions to the problem of fragmented/incomplete data.

  1. Markets should be told by brokers who the underlying client is. A “client identifier” will be included with each order sent to market. It will not be disseminated in the market data, but will be available for surveillance purposes. This would allow individual markets to better identify behaviour of individual clients, irrespective of how many brokers they use.
  2. There should be a “consolidated audit trail” to which all markets (and not just equity markets) will contribute their attributed data feed - including every order, amendment, cancellation and trade for every broker and identifying the underlying client for each. The entity receiving this consolidated information could then be responsible for surveillance across the multiple venues.

There are, however, some practical problems with these proposals:

  • Markets will have to amend their interface protocols to accommodate the new client identifier.
  • It’s not clear if there’s an existing convention for client identifiers (e.g. BIC codes) that will cover all of the intended firms, or whether a new standard needs to be defined (and then a directory maintained).
  • Markets will then need to develop a secure (and presumably standardised) way of publishing the attributed and privileged data to the consolidated audit trail.
  • The consolidated audit trail for US security and derivative markets has been estimated by the SEC to cost $4 billion to set up, and a further $3 billion each year to operate. For many participants, that seems like too high a price to pay for an unquantifiable improvement in market quality (although I note that a number of vendors have approached the SEC with lower-cost proposals).

Assuming the practical problems can be overcome, there are still some other thorny issues to resolve:

  • Given our fragmented regulatory landscape, who will undertake surveillance using the consolidated audit trail?
  • Do the regulators have the necessary expertise, or might they outsource the function (as recently proposed to the SEC by Senator Edward Kaufman)?
  • And if the individual undertaking the analysis is smart enough to make sense of the consolidated audit trail and understand the trading strategies behind a (presumably anonymous) participant’s behaviour, how do we reassure participants that their intellectual property will be safeguarded if that individual decides it’s time to become a trader?

And that brings me to my real question – Do regulators, market operators, participants or academics actually agree on what constitutes illegal or immoral behaviour?

I believe there is consensus with respect to insider dealing and front-running of client orders, but what about practices that might amount to “market manipulation” depending on the intent of the participant?

The falling costs and lower latencies stimulated by competition amongst markets have changed trading behaviours dramatically – and many of the practices (such as high order cancellation rates, the presence of orders at multiple price points, the presence of orders on both sides of the book, or speedy position reversals) that might have previously been associated with market manipulation are now routinely exhibited by legitimate trading strategies (whether market making or the algorithmic execution of client orders).

I tried to explain this difficulty to a (non UK) regulator, who first told me that we should identify orders submitted where “the participant doesn’t really want to trade”. This is tricky for market operators, since our markets only accept firm orders (there is no risk-free option to post into the order book during continuous trading), and because our mind-reading skills are not as developed as regulators apparently suppose. And if the risk of trading truly is the most effective disincentive to submitting “misleading” orders, then is competition amongst market participants the best way of achieving a fair market?

He responded that we should discourage speculative orders priced far from the prevailing BBO – but in my mind that’s just a recipe for shallow and volatile markets. I would argue, for example, that rather than attack the use of “stub quotes”, the correct response to the Flash Crash should be to encourage more participants to place “speculative” orders so that competition amongst them creates a deeper and more stable market.

What about “momentum ignition”, described by the SEC as the practice of “spoofing” algorithms or human traders into crossing the spread by “stepping ahead” of them in the order book, or printing small trades at the Bid or Offer, thus creating momentum which increases transaction costs for customer orders… How can regulators or market operators draw a line between “igniting momentum” and the (presumably legitimate) practice of detecting an imbalance in supply and demand and stepping ahead to profit from the anticipated price movement? Most brokers seem to take the view that it is their responsibility (and not regulators’ or market operators’) to protect their clients, improving their algorithms such that they are less prone to being spoofed and consequently trading “at the wrong price”.

Setting aside the obvious abuses of insider dealing and front running, if markets are competitive by nature, is it realistic to “protect” participants from one another? And if there are limitations to how “safe” we can make the market, how do we at least ensure that they are “fair” (so that no one participant or group enjoys an innate advantage over another)? Before we design the all-singing, all-dancing technical solution to the problem of policing fragmented markets, we need to be clear about what we’re trying to achieve. What would it take to identify and police all types of “manipulative” trading strategy, how effective can the policing be, and do market participants think it’s worth bearing the cost? And to what extent should we rely on surveillance and enforcement to create a fair market, versus competition amongst participants seeking best execution for themselves or their clients?

We invite feedback from brokers, competitors, regulators and institutional investors on our approach and our views.



P.S.

  • Turquoise takes great pride in its sophisticated surveillance capabilities, places great emphasis on the quality of its marketplace, and takes its regulatory responsibilities very seriously. We raise this topic because we think it is a market-wide issue which we cannot address alone.
  • Turquoise has retained its number one position in the dark for a third successive month, slightly increasing its share of dark trading.
  • Our migration to Millennium Exchange takes place next month. Please let us know if you need our assistance in making preparations. The dress rehearsals are on September 11th and 18th.

Thursday 29 July 2010

Blowing Our own Trumpet

CESR has just published their Technical Advice on the MiFID Review - so I'll probably have something to say once I've digested the 162 pages. But in the meantime, here is the text of the Press Release we issued this morning.

Turquoise number one MTF dark pool for second month running

  • July dark volumes show Turquoise extending lead ahead of competitors
  • Lit pool also showing steady growth – now 2nd largest MTF for the majority of stocks listed

Turquoise’s pan-European mid-point book looks set to remain the largest MTF dark pool for the second month running, extending its lead over its nearest competitor during July as the number of active participants continued to grow. According to statistics from Thomson Reuters, it is the only dark MTF to have exceeded €4 billion of traded value in STOXX Europe 600 constituents for the month to date, and is on track to beat its record performance in June despite lower overall market volumes during July.

David Lester, CEO of Turquoise, said:

“We are grateful for the support of our clients in driving the success of our mid-point book. Clients are responding to the growing pool of liquidity and expressing their support for our functionality roadmap which will offer participants greater control and choice when trading in our dark pool.”

Turquoise has also seen steady growth in its lit pan-European order books, especially pronounced in mid and small cap segments where it has become the second largest MTF for the significant majority of stocks.

David Lester added:

“With growing and diverse liquidity in both our order books, we have positive momentum leading up to the launch of our new trading platform. Given the number of clients and prospects actively testing the new system, we expect the growth to continue once we switch over to Millennium Exchange in October.”

Chart - Mid-point Book Consideration Traded by Month




Tuesday 20 July 2010

A Change is Gonna Come...

It would seem that we now have a European equivalent of the SEC’s far-reaching Concept Release. Kay Swinburne’s draft report on MiFID, submitted to the European Parliament’s Committee on Economic and Monetary Affairs, is wide-ranging and ambitious.

The following highlights grabbed my attention:


With regard to Dark MTFs & BCNs, the report

  • Explicitly acknowledges that BCNs are different to dark MTFs in that they are an extension of the ‘traditional, discretionary broker-client relationship’.
  • Calls for BCNs to disclose to regulators the details of orders matched in the system (which is a significant volume of data), as well as information on the trading methodology, level of broker discretion, and methods of access.
  • Calls for an investigation into whether there should be a volume threshold above which BCNs must convert into MTFs.
  • Calls for an investigation into setting a minimum order size on BCNs and MTFs as a way of encouraging greater flow of trade to lit venues in the interests of price discovery. And, calls for a review to consider whether such a minimum size threshold be applied to the Reference Price waiver upon which dark midpoint MTFs are reliant (but which does not currently apply to BCNs).
  • Calls for a consultation on whether market-making within BCNs should be permitted, or whether they should be restricted to the crossing of ‘buy side customer orders’.
  • Calls for a review to consider reducing the current Large in Scale thresholds, and also to broaden the Reference Price waiver to allow matching anywhere in the spread (which could be intended to allow the waiver, including a potential minimum size threshold, to be applied to BCNs also).

With regard to HFT, Co-location and Sponsored Access, the report

  • Blames the US May 6th ‘flash crash’ on the withdrawal of HFT liquidity, and suggests a study into whether ‘informal market makers’ receiving a ‘maker’ rebate should have formal liquidity provision obligations and supervision.
  • Calls for HFT firms to be regulated to ensure they have robust risk controls in place, and for market operators to stress-test their systems and introduce volatility interrupts and circuit breakers so as to avoid a European ‘flash-crash’.
  • Calls for an investigation into the true impact/contribution of HFT trading on other market users, particularly institutional investors.
  • Calls for unregulated proprietary trading firms to execute into markets through regulated firms (currently, MiFID allows these firms to join exchanges and MTFs directly).
  • Calls for ‘unfiltered sponsored access’ to be expressly prohibited and for the Commission to adopt IOSCO’s principles on sponsored access, relating to the contractual arrangements and respective responsibilities for risk controls & filters – including an obligation for the sponsoring firm to have ‘pre-trade filters’ in place (although it doesn’t address the role of MTFs/exchanges in implementing pre-trade filters on behalf of the sponsor).
  • Calls for trading venues providing co-location themselves, or indirectly via third parties, to ensure their co-location arrangements provide equal latency to all co-located customers (which could drive an interesting intrusion by market operators into the commercial affairs of independent data-centre operators).

With regard to market data, the report

  • Calls for CESR to clarify and tighten post-trade reporting standards to ensure greater consistency so as to better facilitate data consolidation.
  • Calls for venues to unbundle pre and post-trade data so that post-trade data can be acquired (and consolidated) more cheaply.
  • Calls for the establishment of a working group to ‘overcome the barriers’ to a European Consolidated Tape and establish a privately run system (without any taxpayer funding).

And on other miscellaneous topics, the report

  • Calls for all ‘equity like’ instruments including ETFs and DRs to be captured in the scope of MiFID.
  • Supports the extension of MiFID to derivative instruments
  • Requests that the Council consider extending the MiFID pre & post-trade transparency requirements to all non-equity instruments subject to significant secondary trading (including an explicit mention of government and corporate bonds, though no mention of FX).
  • Suggests that regulators must have sufficient data to be able to ‘re-create the order book’ – so as to understand the market dynamic and participants’ involvement (similar to the SEC’s $4 billion proposal for a ‘Consolidated Audit Trail’ to gather attributed (identifying the underlying end-client) order & trade data across all market venues).
  • Suggests that ‘flash orders’ that undermine the equal treatment of all exchange/MTF customers be banned (although it is not clear whether the routing services offered by Chi-x and BATS, and delivered via relationships with selected market participants, are intended to be captured by this).

A great deal of thought (and I suspect lobbying) has gone into this report, and the challenges now will be:

  • To prioritise amongst the many recommendations to identify those that best promote competition and safeguard the efficiency and integrity of our markets, and to determine how much can be done and how quickly.
  • To conduct the multiple investigations, consultations and reviews in an efficient and transparent fashion, ensuring that all the relevant parties have sufficient opportunity to contribute.
  • To address the many inter-related issues in a holistic manner, without unintentionally advantaging or disadvantaging different categories of market participant through the uneven introduction of new rules.

I’ll return to many of these topics in future posts (although they will likely be every few weeks going forward).

Friday 9 July 2010

Trade At

In its ‘Concept Release’ published earlier this year, the SEC asked for feedback on a ‘Trade At’ rule. Given that we don’t even have an EBBO or ‘Trade Through’ rule in Europe (indeed, I’ve previously argued against introducing either), you might assume that this idea would have no applicability to our markets – but it is an interesting (and contentious) topic. I decided to blog on this after reading a couple of related posts (titled ‘Recipe for a Toxic Market’) on TabbForum.

The current US Trade-Through Rule principally applies to market centres – and essentially stops any market from trading outside of the NBBO. So if a marketable order cannot be filled at the NBBO in a particular market, the market must either reject the order or onward-route it to a market that can satisfy it at the NBBO. The whole ‘Flash Order’ debate in the US was about market operators trying to avoid returning or routing orders to their competitors (as the rule requires) by instead introducing a brief delay during which it would seek to match these orders against selected liquidity partners.

Some interpreted the SEC’s ‘Trade At’ proposal to imply a tightening of the Trade Through rule, effectively enforcing time-priority across the competing lit venues. This could have the same effect as mandating a virtual Central Limit Order Book (CLOB), possibly undermining competition.

But the real thrust of the debate appears to be whether the ‘Trade At’ rule should apply to non-displayed order matching, including broker internalisation. What exactly does this mean?
The proposed ‘Trade At’ rule is that, when non-displayed orders are matched, brokers and market operators should be required to price-improve on the BBO by a minimum amount - either a full price-tick or a minimum proportion of the spread. In other words, they cannot ‘Trade At’ the BBO price without trading with the best publicly displayed Bid or Offer – effectively giving priority to the participant prepared to display their limit order.
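
To make the arithmetic concrete, here is a purely illustrative sketch of a one-tick version of such a rule (prices are expressed in whole ticks to keep the comparison exact; the function and example values are mine, not the SEC's):

    MIN_IMPROVEMENT_TICKS = 1   # the "full price-tick" variant of the idea

    def satisfies_trade_at(side, exec_price, best_bid, best_offer,
                           min_improvement=MIN_IMPROVEMENT_TICKS):
        """Would a non-displayed execution clear a one-tick 'Trade At' hurdle?

        A buyer must pay at least one tick less than the best displayed offer;
        a seller must receive at least one tick more than the best displayed
        bid. An execution at the BBO itself would instead have to interact
        with the displayed quote.
        """
        if side == "BUY":
            return exec_price <= best_offer - min_improvement
        return exec_price >= best_bid + min_improvement

    # Example with a BBO of 1000/1002 (in ticks): an internalised buy at the
    # offer fails the test, while a midpoint cross at 1001 passes.
    print(satisfies_trade_at("BUY", 1002, 1000, 1002))   # False
    print(satisfies_trade_at("BUY", 1001, 1000, 1002))   # True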


Where displayed markets include non-displayed orders, they effectively have a Trade At rule already, in that displayed orders typically take priority over non-displayed orders, giving an execution priority of price, then visibility, then time. For a non-displayed order to execute, it must be a minimum price increment better than the best displayed price. The same is also true of midpoint (dark) books operated by MTFs under the reference price waiver – they never give a non-displayed order priority over a displayed order at the same price.

Brokers internalise client flow whenever they can – whether they use an (American) ATS or a (European) BCN/BCS or SI as the platform. This makes economic sense both for the broker and for the clients – the broker avoids the exchange and clearing fees associated with trading in a Public Limit Order Book (PLOB), and the clients interact with natural liquidity without signalling their intentions publicly. Often, though not always, the trade will happen inside the spread, offering both clients price improvement versus what they might have achieved in the public market. The more a broker internalises, the more competitive its commission rates for clients can be.

Those market operators accustomed to concentration rules might argue that internalisation shouldn’t be permitted. But it’s also hard to make a principled argument as to why a broker should be forced to buy services from an exchange/MTF and CCP if those services are not actually needed (which is the case if the broker has two clients willing to trade with one another). Especially where the level of post-trade transparency is adequate, it’s not clear who benefits (other than the exchange and CCP) from forcing brokers to use services they don’t need. And if the trade is being executed inside the spread, then there are not, by definition, any other market participants who are advertising their willingness to interact with either the buyer or the seller at that price.

On the other hand, many internalised transactions (especially of retail orders in US markets) happen at (or very close to) the public BBO. This raises two interesting questions:

  1. If brokers commit capital to internalise flow at the BBO whenever it is attractive to them to do so, what can be said about the flow that they don’t internalise?
    • Some argue that such non-internalised exhaust flow is ‘more informed’, and hence makes the PLOBs less attractive for posting limit orders than they would be if the flow reaching them was ‘more balanced’.
    • Is there a point at which, when internalisation reaches a certain threshold, the mix of marketable flow reaching PLOBs becomes less attractive, leading to wider spreads?
  2. When internalisation happens at the BBO, is this fair to the participant bidding/offering at the BBO in a lit PLOB?
    • Firms post their bids & offers publicly, releasing information which may impact the price, in return for a greater certainty of trading. If the stock can trade repeatedly at your advertised price, but you don’t get filled, does this undermine the incentive to post displayed limit orders in the first place?

Of course, certainty of trade has already been undermined by having multiple competing markets (each with its own queue) – but at least in this case competition is between participants prepared to display their quotes publicly.

So there is a rational economic argument that internalisation of retail flow at the BBO will ultimately lead to wider spreads (though proving that it’s actually happening would be rather tricky).

Proponents of a Trade At rule argue that this would continue to allow unimpeded matching of client flow within the spread, whilst driving more market-making activity (where the broker buys at the Bid or sells at the Offer) into the public markets. Greater marketable flow reaching the public markets would strengthen incentives for others to post displayed limit orders – driving tighter public spreads.

Firms that prefer to see more liquidity transact in public limit order books (exchanges, MTFs, prop-traders, brokers without the scale to internalise) therefore should be expected to support a Trade At rule. Unsurprisingly, this is indeed the case, although the politics of advocating something that’s potentially unpopular with your largest customers means that the contribution to the debate from exchanges and MTFs is somewhat subdued.

Still, collectivism isn’t a popular concept in capital markets – so arguing that brokers should consume and pay for services that they don’t need because “it’s in the public good” is contentious. And rightly so – legislating a revenue stream is hardly a recipe for competitive behaviour. Indeed, some might characterise it as a form of ‘concentration rule’ – albeit one that doesn’t mandate a single CLOB.

So how do you balance the legitimate (at least as far as I’m concerned) right of brokers to internalise client flow with the public good of ensuring PLOBs remain attractive places to display limit orders?


  • You could quite reasonably take the view that retail brokers are perfectly equipped to decide what is best for them and their clients – whether that is routing to an exchange or trading OTC against a broker that (through internalisation) offers lower (or zero) commissions.
  • You could also argue sensibly that, absent any evidence that spreads are actually widening, there are insufficient grounds to consider such a significant change in regulation.
  • If, however, regulators were serious about pursuing a Trade At rule, then one would want either
    • A manageable way to exempt brokers from the requirement to interact with the public markets if they’re already contributing publicly to the BBO price at which they want to trade (because in such a scenario, it is harder to argue that the internalisation has undermined the incentive to post a limit order publicly), OR
    • Alternatively, if an exemption-based approach was impossible to monitor effectively, another solution could be for regulators to mandate a Trade At rule without exemptions, but leave it to market operators to offer reduced cost (and non-cleared) ‘own firm preferencing’ within their order books such that brokers could ‘outsource’ their internalisation.

As I said, contentious. What do institutional investors think?

Friday 2 July 2010

Distortionary Investing

I’ve been catching up on some academic literature from some respected finance professionals (call me sad if you like, but you’re the one reading about somebody else who reads academic literature)…

Apparently, there is a group of market participants that don’t care about the true value of the companies they trade. They buy or sell without doing any analysis of the fundamentals underpinning stock prices. They pay no attention to news or information flow. And they’re prone to exacerbating market trends (amplifying volatility). Left unchecked, these rogue participants will undermine efficient price formation to the detriment of all market participants, and ultimately weaken capital formation in the real economy.

Maybe you think I’m talking (again) about high frequency traders (I know, I’ve been doing so a lot recently, but there’s just so much being said on the topic that I don’t believe to be true). Actually, I’m talking about the buyside – and more specifically, index managers and momentum managers.

To quote a 2005 paper titled “Momentum and index investing: Implications for market efficiency” by Professor Ron Bird and colleagues:


  • “The future outlook for market efficiency looks bleak. Arguably, index and momentum investors together represent a large segment of the investor universe, and both are responsible for pricing inefficiency. Perhaps policymakers can do something at the margins to induce more fundamental investing by lowering barriers to arbitrage. We, however, remain pessimistic, distortionary investing seems to have taken on a momentum of its own.”

Of course, fears that passive management would take over the world (often made by active fundamental-based managers trying to sustain higher fees) proved to be a little overblown. Predictions about a supposed threshold for indexed assets (as a % of overall market cap) beyond which price formation would break down proved groundless.

Why did I dismiss these arguments at the time?

  • Firstly, I was never convinced that fundamental investors had a common view of ‘fair price/value’ – so even in a world of only active managers, it didn’t seem controversial to suggest we’d still see price swings in response to trading volumes. And if that weren’t the case, then there would likely be insufficient volumes to allow investors to enter/exit positions.
  • Next, even if indexers or momentum investors were driving prices away from fair value, to me this seemed essential to creating opportunities for (supposedly) smarter value and contrarian investors/traders to provide liquidity at the margins. I figured that active managers should have seen (supposedly) dumb indexers and momentum investors as the sucker at the table.
  • And lastly, at least with respect to index investors, it seemed odd to suggest, irrespective of the proportion of total assets indexed, that they could seriously impact price formation given their buy & hold (forever) strategy. Prices move in response to supply and demand, and if they don’t trade (except when stocks enter/exit the index), then they don’t influence price formation.

So is today’s debate simply history repeating itself? It’s certainly not that straightforward, but the comparison is amusing.


  • Indexers invest, but don’t trade, whilst some HFT firms (who end the day flat) trade, but don’t invest. So indexers don’t really influence supply or demand at all, whilst HFT firms influence supply and demand equally across the course of the day.

  • I’m yet to see any solid statistical evidence that HFT firms exacerbate intra-day volatility. Equally, comparing the Spanish market (where competitive trading remains a pipe dream) to others in Europe, Cheuvreux recently reported (in their Navigating Liquidity paper, appended as annex here) that they found no evidence of HFT firms reducing intra-day volatility. This suggests to me that there is a balance between momentum-based HFT strategies (that amplify volatility) and reversion strategies (that dampen volatility). In other words, to the extent that HFT firms are amplifying volatility, there are institutional brokers and other HFTs developing trading strategies that exploit this.

  • And despite the shrill nature of complaints about HFT in the blogosphere, most buyside firms and brokers I talk to are more sanguine. They recognise the improvements to market efficiency that competition amongst markets has delivered, and they fully understand that lower trading costs have led to a growth in high frequency strategies. They’re prepared to work with their brokers to evolve their trading strategies to cope with the new market context. The most thoughtful and informed article on the topic I’ve read of late is here on Institutional Investor.

In the highly competitive exchange/MTF environment in which we find ourselves (both in the US and in Europe), it’s probably fair to say that, just as lower frictional costs have helped the growth of HFT volumes, so the growth of volumes from HFT firms has been an important factor in allowing markets to lower their tariffs. Any substantial reduction in volumes could force an increase in tariffs, with the danger of kicking off a vicious circle of lower volumes and higher costs – impacting brokers and investors alike. I think that would be a bad trade – although I’m prepared to listen to (coherent) arguments to the contrary.


As ever, I welcome your feedback.

P.S.

  • Turquoise was the largest non-display midpoint MTF during the month of June, surpassing Chi-x for the first time. Thank you to those of you who helped us achieve this milestone. Our integrated (displayed) order book volumes are also increasingly often ahead of BATS in certain segments – particularly mid-cap indices such as the MDAX and FTSE250.
  • We have now confirmed the timing of our migration to the Millennium Exchange trading platform. Please see the market announcement OP/271/10 under the ‘Operational’ section.

Friday 25 June 2010

Luddites unite

I’m almost too tired to blog. Now you may ask “What on earth could leave Natan too tired to have an opinion?” Well, I have just finished reading a sixteen-page interview with the principals at Themis Trading.

Once again these self-proclaimed defenders of “fair markets” make dozens of claims about how exchanges, brokers and high-frequency traders are conniving to screw both retail and institutional investors. Here are some of my favourite excerpts:

  • “But, let me be clear. We have no inside knowledge of these [HFT] firms. This is just what we hear in the market.”
    Dare I say that this could be a weakness in their position?
  • “We have May 6 now to prove that HFT doesn’t increase market liquidity.”
    Strikes me that it also proved that the market isn’t particularly liquid when electronic market makers are forced out by stale data and unresponsive exchanges.
  • “They provide it [liquidity] when they want to, not when the market needs them to. And only if their profit is virtually guaranteed… They are also liquidity demanders. The same guys who provide liquidity when they want to also demand liquidity when they need to. On May 6, they demanded liquidity.”
    Firstly, I imagine there are some HFT folks who will be delighted to know that their profits are virtually guaranteed. Secondly, what point are they making – that HFT firms trade for profit?
  • “The basic problem, in our view, is the for profit exchange model, which is filled with inherent conflicts of interest… Traditionally, the exchange business wasn’t really very competitive, almost utility-like”
    Hang on, now I’m the one being blamed? They didn’t like exchanges when they were uncompetitive and slow, and they don’t like them competitive and fast. I know correlation does not prove causation, but I think there might be an argument that a profit motive and competition amongst exchanges have spurred innovation, driven efficiencies and lowered costs. I think their point is that because HFTs trade the most volume, exchanges are more likely to cater to their needs than to those of institutional brokers (or end investors) - which they support with...
  • “Well, because we are not on the inside of these robots’ algorithms and their trading strategies to see exactly what’s going on, nor are we involved in the meetings in which we believe the exchanges are complicit in so much of what’s going on, it’s hard for us to come back with specifics when defenders of HFT say, “Oh, you don’t have the data to back it up.””
    So they’ve insulted HFTs and accused the exchanges of being complicit – what next, suggest that every other broker on the street is also involved in the great conspiracy?
  • “Most institutional algos use a smart router to route orders in small pieces throughout the day. The pecking order of these routers differs depending on which broker sponsors the algo. But a common goal is to always route to the least expensive destination first. Most of the time this means routing to a dark pool before routing to a displayed liquidity venue.”
    Aha, no surprise there then. Well, European MTF dark pools are more expensive than lit pools, so the argument that brokers use them to reduce costs doesn’t stack up. And in such a competitive environment, nor does the suggestion that the majority of brokers act against their clients’ best interests – if that were true, Themis would be a large brokerage house rather than just “two or three guys”.
  • On the OrderID and Side-of-aggressor data from dark pool data feeds they say
    “By the way, they did get rid of them awfully quick overseas after we called attention to them. They were able, technologically, to do it in a heartbeat over there when some institutions started to boycott their European dark pools. Though, frankly, we’re a little skeptical that they took out everything we’d find objectionable if we had the regulatory power to comb through their records.”
    I’m sceptical they care about the truth – but it’s important to note that they don’t need any “regulatory power” – our public feeds are exactly that – public. So come and take a look.
  • “Almost everyone else seems to have a vested interest”
    Really, I’m lost for words.

I guess an informed debate is too much to expect?

I'd welcome your comments...

Friday 18 June 2010

Dangerous Opacity

Last week I blogged about the impact of electronic market making on lit books, concluding that whilst it has been positive, institutional investors sometimes need alternatives which allow trading with less market impact. We also believe in the power of competition to drive innovation, reduce costs, and make our industry more efficient.

Apparently, one of our largest competitors has reached a different conclusion.
Executives of NYSE Euronext have argued that competition has driven over-fragmentation(1), that established markets should not be able to use maker-taker pricing(2), and that alternatives to lit books are ‘not legitimate’ and are causing ‘dangerous opacity’ that will undermine price formation and confidence in our markets(1).

What is motivating these arguments?

Does NYSE Euronext really believe that fragmentation can go or has gone too far?

  • One feature of US markets that really interests me is that most market operators operate multiple order books with differing tariff structures – fragmenting their own market so as to address different customer segments. NYSE was amongst the first to do so; it operates its hybrid electronic/human Classic (floor) market and also the fully electronic NYSE Arca – and does not seem about to merge them. So they apparently have no problem with contributing to fragmentation in the US.
  • My opinion is that European investors, brokers, market operators and regulators are not yet fully accustomed to competition and fragmentation. But, having worked in New York when ECNs first proliferated, I’m not worried. It will take some time, but wider adoption of Smart Routing, more efficient clearing arrangements, further standardisation of tick sizes and volatility interruptions, and consolidation amongst venues will make today’s concerns about fragmentation seem quaint.

Does NYSE Euronext really believe that use of maker-taker tariff models by established venues distorts the market?

  • Apparently not - NYSE Arca in the US operates maker-taker pricing, and has done for years. As for Europe, it’s difficult to comment, since details of the exact pricing incentives available to ‘liquidity providers’ are not readily available on their website.

And does NYSE Euronext really believe that everything except block trading should be completely transparent so as to avoid ‘dangerous opacity’?

  • It doesn’t seem so to a casual observer. The NYSE Classic market (the ‘floor’) in particular stands out for having an unusually opaque model. By according certain members what it refers to as “parity”, NYSE allows them to jump the queue completely;

    • Orders from Designated Market Makers (the rebranded ‘specialists’) get parity with other participants’ orders. This means that if there are 5,000 shares on the bid from a number of ‘normal’ members, and the DMM subsequently bids for 500 shares, the first 1,000 shares of an incoming sell order will be split 50/50 between the DMM and the other queued limit orders (see the sketch after this list). DMMs get this privilege as compensation for their obligations to maintain a “fair and orderly” market – though given the amount of money DMM privileges change hands for, parity clearly has significant economic value.
    • Similarly, each individual ‘Trading Floor Broker’ also gets parity with normal members and with DMMs, and can jump the queue without even needing to display their orders publicly. They also get unique visibility of, and access to, market-depth data that is not visible to normal members. I’m not sure they have any obligations, but giving parity to floor brokers allows them to attract order flow and hence ensures that the floor looks like a hive of activity on television.

  • Personally, I don’t like it when people cut in front of me in a queue, and given a choice, I wouldn’t choose to stand in a queue if it was specifically designed to work that way – although I welcome the fact that participants are offered the choice. But, in my mind, NYSE Euronext’s commitment to this model does rather undermine their argument that all non-block trading should be fully pre-trade transparent to avoid ‘dangerous opacity’.
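To make the parity arithmetic above concrete, here is a simplified sketch of the allocation logic as I have described it – not NYSE’s actual matching code, and ignoring the finer points of odd lots, rounding and multiple floor-broker groups. The function and quantities are purely illustrative.

```python
# A minimal sketch (illustrative, not NYSE's actual allocation rules) of how
# "parity" allocation differs from strict price-time priority. Each parity
# group gets an equal share of the incoming order, round by round, capped by
# what that group has left at the price.

def allocate_parity(incoming_qty, groups):
    """Split an incoming order equally across parity groups; return fills per group."""
    fills = {name: 0 for name in groups}
    remaining = dict(groups)
    qty_left = incoming_qty
    while qty_left > 0 and any(remaining.values()):
        live = [g for g, q in remaining.items() if q > 0]
        share = max(qty_left // len(live), 1)      # equal split per round
        for g in live:
            take = min(share, remaining[g], qty_left)
            fills[g] += take
            remaining[g] -= take
            qty_left -= take
            if qty_left == 0:
                break
    return fills

# 5,000 shares queued from 'normal' members, then the DMM joins with 500.
# Under price-time priority the DMM would be filled last; under parity,
# a 1,000-share incoming sell is split 50/50 until the DMM is exhausted.
print(allocate_parity(1_000, {"queued_limit_orders": 5_000, "DMM": 500}))
# -> {'queued_limit_orders': 500, 'DMM': 500}
```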

A lot is at stake in the MiFID review – and the debate is especially heated around the topic of non-displayed trading – both in MTF Midpoint Books, and Broker Crossing/Internalisation Systems. The exchanges that pre-MiFID enjoyed the protection of “concentration rules” are struggling to come to terms with a landscape in which they face competition – both from market operators and (to some extent) from their clients. So they try to persuade politicians, regulators and the general public that we’re approaching a precipice and need to turn back – as if, whilst MiFID was intended to unleash competition and choice for market participants, the exchanges were nevertheless supposed to win.

At Turquoise we believe that;

  • Transparent price-time markets are not ideal for every order. There is a legitimate market need for alternatives to lit order books.
  • Competition is somewhat distorted and price formation potentially affected by inconsistencies in the rules applicable to MTF and broker-operated non-display facilities. But, rather than trying to reduce choice for investors, a better solution would be improved post-trade transparency (to avert worries about price formation) and a relaxation of the waivers that limit innovation (and thus competitiveness relative to broker crossing systems) of Regulated Markets and MTF-operated non-display venues.
  • Concentration rules were eliminated because they had proven to be a barrier to competition and innovation. Brokers now have greater flexibility to internalise, and whilst in theory that could ultimately lead to lower volumes or wider spreads in public limit order books, there is no evidence to suggest either US or European markets are close to such a scenario in reality.

Tuesday 15 June 2010

Slow down, you move too fast...

Before tackling some of the potential downsides, here’s why speed and capacity are important:
  • Market confidence suffers most when participants worry that they cannot trust the prices they see, or that they have lost control of their orders. Capacity and speed are essential to maintaining the confidence of investors, without which liquidity suddenly evaporates (as it did in US markets on May 6th).
  • The guarantee that markets will be highly responsive, and that an order can be amended or cancelled at any time, gives participants the confidence to expose limit orders that they might otherwise withhold from the market.
  • The ability to execute instantaneously across multiple markets, instruments and asset classes leads to less risk for arbitrageurs, resulting in more efficient pricing of correlated assets. This reduces hedging costs and encourages liquidity provision and capital commitment.
  • Markets with inadequate capacity or throughput reject orders at busy times (either implicitly or explicitly), reducing the liquidity that might otherwise be available just when it’s needed the most.

For the above reasons, greater speed and capacity drive narrower spreads, which reduce overall transaction costs, improve overall investment returns, and help companies raise capital more cheaply (helping the real economy grow).

But what about relative speed and relative costs – are long term investors being systematically disadvantaged by those trading faster and at higher frequency? Do faster markets benefit one group of participants more than, or even at the expense of, others?

I think there’s irrefutable evidence that competition amongst markets and amongst electronic market makers has resulted in a dramatic narrowing of spreads. Unambiguously, tighter spreads mean improved execution quality for retail orders (the majority of which are marketable). So it’s hard to think of the ‘amateur’ market participants as victims in our new competitive markets.

What about institutional investors with large orders? Are market professionals the victims?

  • Would markets be better without electronic market making (EMM) firms?
    No. We know exactly what markets look like when EMM firms either cannot or choose not to participate. EMM firms operate with extremely low margins, and are incredibly sensitive to risk – so they’re the first to be forced out of the market when systems slow down. May 6th was instructive – when EMMs back away, volatility increases dangerously.
  • Is the liquidity provided by EMM firms ‘real’ or ‘ephemeral’? If you take liquidity from an EMM that immediately unwinds for a profit, would you have been better off not taking their liquidity in the first place? If they end the day with a flat book, but have made money, has that come out of the institution’s pocket?
    In price-time markets, you always get the best available price at the time. Liquidity offered by EMM firms is just as real as that offered by any other participant – except it’s often more competitively priced. And if the EMM firm with which you trade is able to profitably unwind that risk in other stocks or asset classes, rather than by directly covering the position, then you’ve accessed liquidity that wouldn’t have otherwise been available. Their profit is not necessarily your loss.
  • Is it fair that some participants invest in the fastest technology and most sophisticated algorithms, and co-locate their systems beside exchanges, potentially allowing them to react more quickly than other participants?
    Fair or not, it’s absurd to imagine that we can effectively legislate against profit-maximising behaviour by market participants. If co-location were prohibited, then firms would instead congregate in data centres adjacent to the markets. And there’s no way to ensure that all participants receive all data simultaneously – even if we had a consolidated EBBO (which I’ve argued against previously). To borrow a phrase from politics – it’s about “equality of opportunity” – we have to recognise how much fairer and more efficient markets are now, with a level playing field and many firms competing to add or remove liquidity, than they were with ‘designated’ specialists or market makers (who enjoyed special privileges).
  • Do some automated trading firms exacerbate volatility or engage in ‘momentum ignition’ by stepping ahead of large orders, forcing them to revise their buys upwards and sells downwards?
    Here’s the rub. Efficient markets are supposed to ensure that the price “reflects all available information” – and automated trading firms specialise in recycling the information represented by market data back into the price. If the market is aware of a large buying or selling interest, it’s only natural that prices move – with “market impact” and “market efficiency” being closely related. For such trading strategies to be viable, automated traders rely on the propensity of brokers executing client orders to “chase” a stock up or down regardless of “fair value”. Whilst this is all within the rules (let’s trust that market surveillance is effective in spotting rule-breaking), it can be frustrating and costly for a trader who sees their order being stepped ahead of.

So there might be some scenarios in which automated traders take advantage of institutional flow – but what, if anything, should we do about it? Is it a matter of brokers becoming more sophisticated, or clients changing their trading style? Or do we need regulatory intervention to protect one category of professional participant from another?

Two thoughts spring to mind:

  • Just as most financial professionals oppose protectionism in the real economy, we shouldn’t ask or expect regulators to make financial markets ‘safe’. It’s the competition amongst market participants that drives market efficiency forwards and encourages brokers to invest in smarter trading strategies.
  • Any regulatory ‘cure’ (such as a transaction tax, or other constraints on trading activities) would be far worse than the problem it’s trying to fix – ultimately sapping liquidity from markets and driving spreads and volatility higher, to the detriment of institutional and retail participants alike.

The real conclusion to draw is that transparent price-time markets aren’t ideal for every order, and so brokers and market operators need the flexibility to offer investors alternative solutions that allow them to access liquidity in the manner most appropriate to the order in question. Given individual investors’ preference to trade with less market impact, post-trade transparency is essential to ensure these alternatives still support efficient price formation.

Wednesday 26 May 2010

What are 'Dark Pools' for?

Depending on who you ask, you will get different answers to the question. As ever, some context is helpful.

First, it’s important to recognise what has happened to our ‘lit’ markets. Competition has driven trading and clearing fees lower (including rebates for firms adding liquidity to order books), allowing traders to profit from smaller bets on smaller price movements – so the prevalence of higher-frequency proprietary trading has increased (and will continue to do so as costs fall further). At the same time, partly in the pursuit of best execution in light of the first phenomenon, and partly to reduce their own costs, brokers have increased their reliance on algorithmic trading. Both of these factors have led to an ‘atomisation’ of liquidity in the market – with smaller, faster orders and trades – an environment which poses challenges for those trying to execute institutional-sized orders without being discovered or taken advantage of by short-term/momentum traders. Dark pools, with no pre-trade transparency, offer an alternative in which market impact should be lower, although certainty and immediacy of execution are also sacrificed.

Second, as buyside institutions have increasingly used ‘low cost execution channels’, brokers have become ever more sensitive to the proportion of the commissions they earn which are paid away to exchanges, MTFs and CCPs. Broker internalisation has evolved from (expensive) sales-traders matching blocks telephonically towards (inexpensive) automated matching in broker-operated crossing networks which are free (for the broker at least) and which (by virtue of not using a CCP) don’t incur clearing charges either.

For brokers executing institutional orders, two plausible answers to our opening question could be;

  • “The pursuit of best execution, as an alternative to ‘atomised’ lit order books”
    This rationale would work for both broker crossing networks and MTF/exchange venues
  • “Cost-avoidance and margin-preservation”
    This rationale is applicable to broker internal crossing networks only, since MTF/exchange dark pools are actually more expensive than lit books. So brokers using our midpoint book are putting the pursuit of best execution above their own margins.
It is the first of these answers that resonates with institutional investors, many of whom equate dark pools with less information leakage, lower market impact and better execution quality. Precisely because dark pools do not offer the same certainty or immediacy of execution offered by lit markets, institutions come to meet other price-sensitive (rather than time-sensitive) participants who care more about mitigating market impact than they do about immediacy. But institutions (and others) question whether dark pools are delivering on those promises.

And they have a point, because as soon as there’s some liquidity resting in midpoint dark pools, other participants or flows (with different objectives) are attracted to participate. Anyone about to lift the offer or hit the bid in a lit market should be tempted to ‘pass through’ the midpoint dark pool first for ‘price improvement’ (especially if it’s fast enough). So both for firms trading on behalf of clients and on their own behalf, there is another answer to our opening question;
  • “For price improvement against the lit market quote for aggressive orders”
But – there’s an asymmetry here – with some participants resting passive orders in dark pools to avoid ‘atomised’ lit markets (hoping to meet counterparties of similar patient profile), and others using them for aggressive order flow destined for those same lit markets. This asymmetry leads to the passive participants being ‘adversely selected’ – trading against the same aggressive/immediacy-seeking flow that they wanted to avoid, and giving up half the spread unnecessarily (as this flow might otherwise have hit their bid or lifted their offer).

Some execution venues exacerbate this asymmetry by offering aggressive routing strategies that ‘bundle’ access to the dark and lit order books into a single high-speed order type – effectively baking this adverse selection into their market structure. They do so because it earns them much more money (they charge both sides of the trade in the midpoint book, rather than rebating one party in the lit book), encourages the use of their (higher priced) midpoint book as a way to ‘intercept’ aggressive liquidity before it reaches their lit book, and reduces the average time between submission of a passive hidden order and receiving a first execution (a factor to which many broker routing algorithms are extremely sensitive).
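To illustrate the bundled routing logic described above, here is a hypothetical sketch of such a ‘sweep’ order type: an aggressive buy passes through the venue’s midpoint dark book first, and any residual takes the displayed offer. The venue names, prices and function are invented for illustration and do not describe any particular venue’s implementation.

```python
from dataclasses import dataclass

@dataclass
class Fill:
    venue: str
    qty: int
    price: float

def sweep_buy(qty, dark_resting_qty, best_bid, best_offer):
    """Fill at the midpoint first (against resting dark sellers), then on the lit book."""
    fills = []
    midpoint = (best_bid + best_offer) / 2
    dark_fill = min(qty, dark_resting_qty)
    if dark_fill:
        # The resting dark seller gives up half the spread relative to the lit offer,
        # and trades against exactly the aggressive flow it hoped to avoid.
        fills.append(Fill("midpoint_dark_book", dark_fill, midpoint))
        qty -= dark_fill
    if qty:
        fills.append(Fill("lit_book", qty, best_offer))
    return fills

# A 10,000-share aggressive buy, 4,000 shares resting dark, lit quote 9.99 / 10.01:
for f in sweep_buy(10_000, 4_000, 9.99, 10.01):
    print(f)
# Fill(venue='midpoint_dark_book', qty=4000, price=10.0)
# Fill(venue='lit_book', qty=6000, price=10.01)
```

In this toy example the passive midpoint seller ends up interacting with precisely the immediacy-seeking flow it was trying to avoid – which is the adverse selection described above.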

Unlike our competitors, and despite some market demand, Turquoise does not offer such strategies. Even though it would attract more flow and increase match rates, we believe that exacerbating this asymmetry would lead to measurably poorer execution quality for those posting passively in our midpoint book, and would ultimately undermine its value proposition.
So our own answer to the opening question is;

  • “For meeting the legitimate market need for an alternative to atomised lit books, with a service that offers measurably different trading characteristics.”

It’s not helping anyone to create a service which mimics lit order book characteristics with the lights turned out, and so we’re prepared to forgo some short-term growth to build a differentiated market which offers a demonstrably safer place to trade institutional client orders. We invite brokers and their clients to measure the execution quality they receive in our midpoint book and to discuss the results with us.

We will return to this topic in a future post and explain some of the other ways in which we are helping brokers achieve superior execution quality in our midpoint book.

Wednesday 19 May 2010

Where there’s fire there’s smoke....


A recent white paper by Themis Trading, titled “Data Theft on Wall Street”, has stirred up a lot of debate. The authors allege that exchanges, ECNs and MTFs are deliberately disseminating data that allows sophisticated high-frequency trading firms to take advantage of less sophisticated market participants. Is this true?

They cite two particular problems;

  • When matching non-displayed orders, exchanges/ECNs/MTFs such as Nasdaq, BATS, Turquoise and Chi-x reveal information about which side of the match was passive and which was aggressive – effectively alerting participants to the possible presence of hidden resting orders on one side.
  • By reporting a consistent OrderID for multiple executions and amendments of non-displayed, iceberg & pegged orders, they allow participants to identify such orders and infer information about their pricing strategy or non-displayed size (a simplified illustration of this inference follows below).
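To make the second point concrete, here is a simplified illustration (the message fields and values are invented for the purpose) of the inference the authors describe: if a public feed reports the same OrderID each time a non-displayed resting order is executed against, an observer can link those prints and build up a picture of the hidden order behind them.

```python
from collections import defaultdict

# Hypothetical public trade messages: (order_id, side_of_resting_order, qty, price)
public_feed = [
    ("A17", "SELL", 300, 10.00),
    ("B02", "BUY",  150,  9.99),
    ("A17", "SELL", 500, 10.00),
    ("A17", "SELL", 700, 10.00),
]

# Link executions that share an OrderID and accumulate what they reveal.
inferred = defaultdict(lambda: {"side": None, "filled": 0, "executions": 0})
for order_id, resting_side, qty, price in public_feed:
    rec = inferred[order_id]
    rec["side"] = resting_side
    rec["filled"] += qty
    rec["executions"] += 1

for order_id, rec in inferred.items():
    if rec["executions"] > 1:
        print(f"OrderID {order_id}: likely a resting hidden {rec['side']} order, "
              f"at least {rec['filled']} shares executed so far")
# -> OrderID A17: likely a resting hidden SELL order, at least 1500 shares executed so far
```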

Since its acquisition by the London Stock Exchange Group, Turquoise has sought to differentiate its pan-European Midpoint Dark Book from competitors’ offerings by making it a ‘safer’ place for brokers to trade institutional client orders. Accordingly, Turquoise’s matching systems (both existing and new) do not reveal which side of a non-displayed execution was passive and which aggressive, nor do they reveal whether the same non-displayed passive order has been executed against more than once. For our new matching system, these were deliberate design decisions which actually required specific software development – because the standard behaviour of other markets is to release such information. Why would they do that?

It’s not as sinister as the authors of the Themis Trading paper would have you believe.

When exchanges first started offering electronic order entry and disseminating a book feed, participants wanted to be able to identify their own orders and executions in the public data feed. This allowed them to know their queue position in the order book, and to display this on a client front-end. It allowed them to perform better transaction cost analytics – by identifying which executions on the ‘tape’ were theirs. It allowed them to measure the latency of the public market data against their own Execution Reports. And it allowed multiple OMS and EMS systems within the firm to identify their own orders & executions without having to feed each system with drop-copies of the order entry/execution feed.

So participants initially welcomed these new enriched market data specs because they allowed for more sophisticated order & data management, and helped drive the evolution of sophisticated execution algorithms that need information on queue position to estimate execution probability.
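As an example of one of those legitimate uses, here is a minimal sketch (with an invented message format – not any venue’s actual feed specification) of how a participant who can recognise their own OrderID in the public feed might track their queue position at a single price level.

```python
queue = []  # orders resting at our price level, in time priority: (order_id, qty)

def on_public_message(msg_type, order_id, qty=0):
    """Maintain the queue from the public feed (adds, executions, full or partial cancels)."""
    global queue
    if msg_type == "ADD":
        queue.append((order_id, qty))
    elif msg_type in ("EXECUTE", "CANCEL"):
        queue = [(oid, q - qty if oid == order_id else q) for oid, q in queue]
        queue = [(oid, q) for oid, q in queue if q > 0]

def shares_ahead(my_order_id):
    """Because we can spot our own OrderID in the feed, we know exactly
    how much displayed size is queued ahead of us."""
    ahead = 0
    for oid, q in queue:
        if oid == my_order_id:
            return ahead
        ahead += q
    return None  # our order is no longer in the book

# Example: two orders arrive before ours, then the first is partially executed.
on_public_message("ADD", "X1", 1_000)
on_public_message("ADD", "X2", 400)
on_public_message("ADD", "ME", 250)   # our own order, recognised by its OrderID
on_public_message("EXECUTE", "X1", 600)
print(shares_ahead("ME"))             # -> 800
```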

But – the authors do have a point. The specs make it relatively easy to identify iceberg/reserve orders as soon as the visible peak is first refreshed, and also to identify pegged orders as soon as they are modified by the market. And whilst Turquoise has unilaterally addressed the issues in relation to non-displayed orders (driven by the quest for a competitive advantage), a solution to improve the ‘anonymity’ of iceberg or pegged orders would require non-trivial development by vendors and brokers.

The authors call for immediate regulatory intervention. In my opinion that’s unnecessary - the exchange business is incredibly competitive, and if market participants truly want a ‘safer’ venue there will always be one or more to choose from.

We invite feedback from brokers, competitors, regulators and investors on our approach and our views.



Update – May 21st 2010

On May 19th & 20th, European buyside investors reacted to the Themis Trading white paper by asking brokers to exclude the Chi-x and BATS dark books from their routing strategies. Many brokers did so, resulting in a significant drop in volumes in Chi-x and BATS midpoint order books. In contrast, the Turquoise midpoint book saw strong volume growth, as brokers directed their flow to venues trusted by their clients.

Chi-x and BATS were both quick to acknowledge the problem and to confirm to their market participants that they would mimic the Turquoise approach by the end of the week. This is a healthy evolution of market structure, and should in time increase confidence in the use of MTF dark books.

We believe that this episode demonstrates clearly that the buyside do care about the choice of venues brokers make, and that a singular focus on dark book match rates might be superseded by a more nuanced appreciation for the quality of liquidity to be found.
We also believe that our soon-to-be-introduced functionality, designed to make our dark pool safer for institutional order flow, will help differentiate the brokers that choose to promote and use these capabilities.