Friday 29 July 2011

Zero Sum Game?


IOSCO, the International Organization of Securities Commissions, has set August 12th as the deadline for responses to its Consultation Report on the “Impact of Technological Changes on Market Integrity and Efficiency”. IOSCO’s input has a long-term impact on the evolution of rules by national regulators, and in this instance IOSCO is responding to a specific request from the G20 and the Financial Stability Board.


The Consultation Report reviews some of IOSCO’s previous work on related topics. Whether intended or unintended, the document’s use of phrases such as "no universally acknowledged method for determining", "precise quantitative assessment ... is challenging", "empirical evidence is still scarce", and "further research is necessary" simply serves to highlight the paucity of detailed empirical evidence to support well-crafted regulation in this space.


The report does clearly set out two useful definitions of regulators’ key goals:


“Market integrity is the extent to which a market operates in a manner that is, and is perceived to be, fair and orderly and where effective rules are in place and enforced by regulators so that confidence and participation in the market is fostered.”


“Market efficiency refers to the ability of market participants to transact business easily and at a price that reflects all available market information. Factors considered when determining if a market is efficient include liquidity, price discovery and transparency.”


And there are definitely some interesting nuggets, including the following on page 12 (my emphasis added)…


“For instance, the use of sophisticated low-latency algorithmic trading techniques may prompt less sophisticated traders to withdraw from the market as a result of their fear of being gamed by low latency firms that use faster technology.


Some anecdotal evidence presented to IOSCO suggests this may be particularly true of traditional institutional investors, who, as fundamental investors, are supposed to base their trading decisions on the perceived fundamental value of securities. If such participants withdraw, reflecting a loss of faith in the integrity of the market, the information content of public market prices, may be altered as a knock-on effect. This may potentially result in a less efficient price formation process and possibly cause others to reduce their participation.”


The above quote raises some interesting questions -


• Are “traditional” institutional investors and the brokers that service them really to be considered “less sophisticated”? Based on their public marketing materials, many large brokers now offer algorithmic trading solutions that encapsulate the same techniques as those used by low-latency algorithmic firms.


• Does IOSCO consider “anecdotal evidence” of a “fear of being gamed” to be adequate grounds for further regulation, or are they actively trying to highlight the need to establish clearer empirical evidence (to establish how widespread the ‘fears’ are and/or to establish whether the ‘fear’ is warranted by evidence of actual gaming)?


• Absent fundamental investors, does it make sense that the “information content” of prices would be altered (and in a good or bad way), and would this necessarily result in “less efficient price formation”?


A related question is raised on page 27:


“… a challenge posed by HFT is the need to understand whether HFT firms’ superior trading capabilities result in an unfair advantage over other market participants, such that the overall fairness and integrity of the market are put at risk. In the case of HFT, it has been argued that this advantage arises due to the ability to assimilate market signals and execute trades faster than other market participants.”


With or without empirical evidence, there’s no denying that many institutional investors are afraid of HFT (which encapsulates a number of strategies employed by different types of market participant), and that perception of whether the market is “fair” is of huge significance. So how do we know if the fears of gaming and unfair advantage are rational or irrational?


There are three key questions to answer:


1. Is there actually a zero-sum game competition between institutional investors and firms using HFT in which one side is the winner and the other the loser?


2. If there is such a competition, how could we measure the extent to which institutional investors are losing out?


3. If institutional investors are losing out, how can we determine whether the winners enjoy an unfair advantage, or are behaving in ways which constitute market manipulation/abuse?



1. Is there a zero-sum game competition?
  • For various reasons the market has struggled to reach a consensus on this question:
    • Firms using HFT implement a number of strategies – from those that provide resting liquidity to those that are entirely aggressive in nature. Some HFT activities may therefore reduce investors’ execution costs, whilst others may exacerbate market momentum, so there may be a mixture of win-win and win-lose (we discussed this before).
    • It’s not clear how the profits of HFT liquidity providers compare to the profits of traditional market makers whose role they are fulfilling to some extent (by bridging the temporal gap between the arrival of natural buyers and sellers), but if traditional market makers were ‘squeezed out’ by more efficient and automated firms, surely that should represent an overall saving to market users?
  • But another intriguing question is the extent to which HFT strategies actually compete with other (“less sophisticated”) market participants…
    • One thing we monitor at Turquoise is each member’s ‘hit rate’ - their ability to capture the ‘displayed liquidity’ they see when originating aggressive orders.
      • If speed conveyed a material advantage, and if HFT and other participants were competing directly for the same liquidity, then we would (for example) expect co-located algorithmic firms to have a higher ‘hit rate’ than non co-located agency brokers (who by virtue of being slower would miss out on capturing liquidity).
      • But we actually see the exact opposite – apparently less-sophisticated agency brokers achieve higher hit rates (consistently above 95% in some cases) compared to below 80% for their supposed ‘competitors’. What this probably means is that there is no direct competition between firms with such different strategies and trading horizons. Whilst firms using HFT may compete with one another, and are very focussed on latency as a source of relative advantage, brokers executing institutional flow appear not to be competing for the same liquidity, and have not tended to focus so much on latency because it doesn’t appear to be necessary to achieve best execution with a high degree of certainty. (A simple illustration of the hit-rate calculation follows below.)
  • So we would recommend that regulators investigate whether our data is representative of the broader market, in which case there might not be a case to answer in respect of speed conveying an advantage.
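
For concreteness, here is a minimal sketch of how such a hit rate might be computed from aggressive order records. The field names (member, targeted_qty, filled_qty) are hypothetical illustrations, not Turquoise’s actual data model.

```python
from collections import defaultdict

def hit_rates(aggressive_orders):
    """Per-member hit rate: the share of displayed liquidity targeted by
    aggressive orders that was actually captured.

    Each record is a dict with hypothetical fields:
      'member'       - firm identifier
      'targeted_qty' - displayed quantity at the targeted price when the order was sent
      'filled_qty'   - quantity actually executed at (or better than) that price
    """
    targeted = defaultdict(float)
    filled = defaultdict(float)
    for order in aggressive_orders:
        targeted[order['member']] += order['targeted_qty']
        filled[order['member']] += order['filled_qty']
    return {m: filled[m] / targeted[m] for m in targeted if targeted[m] > 0}

# Illustrative only: an agency broker capturing 98 of 100 targeted shares
# scores 0.98, whereas a co-located firm filling 75 of 100 scores 0.75.
sample = [
    {'member': 'agency_broker', 'targeted_qty': 100, 'filled_qty': 98},
    {'member': 'colo_firm',     'targeted_qty': 100, 'filled_qty': 75},
]
print(hit_rates(sample))
```
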
2. But if there were direct competition between firms using HFT and institutional investors, how would we discern the extent to which the traditional institutions are losing?


  • Any “advantage” enjoyed by firms using HFT, and/or the “gaming” for which they might be responsible, should presumably be reflected in higher trading costs for traditional investors, and should be measurable by Transaction Cost Analysis (TCA) providers. But how might this be measured in isolation from all the other dynamic factors in the market? We should be looking for evidence of rising realised trading costs (market impact) or a rising proportion of orders which cannot be completed due to prices moving adversely (opportunity cost), in a manner that controls for concentration amongst asset managers, general market volatility, and other such factors. We suggest two areas for consideration where there should be data readily available to facilitate discussion. (A sketch of one such controlled comparison follows below.)
    • First, for index managers who have less choice regarding their holdings and typically complete execution of all orders (and hence their costs should materialise as market impact rather than opportunity costs), we should search for evidence of growing underperformance vs. their index benchmarks. Such a trend, if present, will be difficult to attribute to specific aspects of market structure, but might support or challenge the concern that current market structure is somehow disadvantaging institutional investors. 
    • Second, for asset managers more widely, and looking at opportunity costs, we should look for evidence of a degradation in costs for liquid stocks (where HFT activity is more prevalent, and fragmentation is greater) relative to illiquid stocks. We would expect that TCA providers may have data to support such a study.
  • We would recommend that regulators search for empirical evidence to support the argument that institutions are being disadvantaged by either the market structure or the behaviour of some participants.
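
Purely as an illustration of the kind of controlled measurement suggested above, one could regress realised costs on a time trend while controlling for volatility and a proxy for HFT prevalence; a persistent positive trend coefficient would be the signal to investigate further. The inputs and factor choices below are assumptions made for the sketch, not any TCA provider’s actual methodology.

```python
import numpy as np

def residual_cost_trend(costs_bps, volatility, hft_share, quarters):
    """Regress realised costs (basis points) on control factors and a time
    trend; the trend coefficient indicates whether costs are rising after
    controlling for volatility and (a proxy for) HFT prevalence.

    All inputs are equal-length 1-D arrays or lists; the factors are illustrative.
    """
    X = np.column_stack([
        np.ones_like(costs_bps, dtype=float),  # intercept
        volatility,                             # control: general market volatility
        hft_share,                              # control/proxy: share of volume from HFT-style flow
        quarters,                               # time trend (e.g. quarter index)
    ])
    beta, *_ = np.linalg.lstsq(X, np.asarray(costs_bps, dtype=float), rcond=None)
    return {'intercept': beta[0], 'volatility': beta[1],
            'hft_share': beta[2], 'trend_per_quarter': beta[3]}
```
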


3. And if there is evidence of institutional investors systematically losing out to faster or more sophisticated market participants, how do we determine if this is due to an “unfair advantage”, to “gaming”, or to factors that will be eroded naturally over time?
  • IOSCO suggests that the “advantage arises due to the ability to assimilate market signals and execute trades faster”. It seems likely to us that it is the first of those two points which matters most, and as such, we wonder whether anything can or should be done, since IOSCO itself promotes efficient markets in which participants transact “at a price that reflects all available market information”. That also leads us to conclude that suggested initiatives such as minimum resting times and order-to-trade ratio caps that seek to control or limit execution will have no positive impact in terms of reducing any information advantage enjoyed by firms using HFT (but will have a host of negative consequences for market quality and costs to issuers and investors).
  • Others have suggested non-HFT participants cannot afford to make the infrastructure investments that HFT firms make, and that the “unfair advantage” flows from this “barrier to entry”. But it seems obvious from conversations with brokers and technology vendors that any such “barriers” are rapidly being eroded by the commoditisation of hardware and software solutions that enable low-latency data processing and trading.
  • And returning to the definition of market efficiency used by IOSCO, we have questioned before whether markets might have become too transparent, and too efficient for the liking of many institutional investors seeking to trade large size. Does HFT vex institutional investors precisely because it ensures that “prices reflect all available information” - particularly when the “information” in question is the institutions’ unfulfilled trading intentions? Have the developments in European market structure and growth of HFT created a market more suited for ‘retail sized’ business? And does the creation of an efficient ‘retail sized’ market ignore the needs of the institutional investor community? Philip Warland, Head of Public Policy at Fidelity International, expressed concerns of this nature at the recent SunGard City Day in London, saying “We have spoken to the European Commission to highlight that too much transparency actually undermines our ability to achieve best execution, and will ultimately hurt investor returns.”
  • And finally, how can we determine whether “gaming” plays a part in the discovery of such “information”, and how do we write market rules that preclude such behaviour? This is possibly the most challenging and contentious issue of all and one on which, as the authors of the rules for our own market, we would welcome thoughtful contributions. We look forward to the European Commission’s proposals on Market Abuse in this respect.
So we’re not persuaded that any “unfair advantage” (or indeed any relevant advantage at all) is available. And absent a clear definition of what constitutes gaming (or evidence for whether it’s a genuine problem), we think it’s dangerous to start amending the rules. But - we recognise that perception matters - and so we strongly support further data gathering and research on these topics, and are encouraged by the UK Government Foresight Project’s commitment to an evidence-based approach.


And on a separate but related topic, we note that the SEC has voted unanimously for adoption of a “Large Trader Reporting Regime”, under which a unique Large Trader ID (LTID) will be assigned to every large market participant (brokers, proprietary traders, hedge funds and asset managers), and member firms will, upon request, report all trades (with timestamps) by those firms to the SEC. The assignment of these unique IDs for each market participant will allow regulators to piece together the activity of these firms irrespective of how many brokers they use for execution. But it also opens the door to two further developments –
  • If brokers were required to pass on the LTID on every order routed to market, surveillance by venues could then be undertaken at the granularity of the end client. This would reduce false positives (which arise because brokers typically trade for many clients simultaneously) and allow for surveillance of end participants independent of how many brokers they use. Of course, venues would likely need to adjust their trading interfaces to accommodate the LTID on order messages. (A simple sketch of this LTID-level aggregation follows below.)
  • Provision of LTIDs to the venues would remove a significant obstacle to the creation of a Consolidated Audit Trail through which markets might be required to disclose to regulators in real time the detailed activity (orders and trades) of all participants – although a number of other practical and philosophical issues remain (see a prior blog on this topic).
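
A minimal sketch of the LTID-level aggregation described above: given order records tagged with a Large Trader ID, surveillance can consolidate one participant’s footprint across all of its executing brokers. The record layout is hypothetical.

```python
from collections import defaultdict

def activity_by_large_trader(order_records):
    """Group order activity by Large Trader ID (LTID), regardless of which
    executing broker submitted each order.

    Each record is a dict with hypothetical fields:
      'ltid', 'broker', 'symbol', 'side', 'qty', 'timestamp'
    """
    by_ltid = defaultdict(list)
    for rec in order_records:
        by_ltid[rec['ltid']].append(rec)
    # Surveillance can now examine one participant's consolidated footprint,
    # e.g. total quantity per symbol and side across all brokers used.
    summary = {}
    for ltid, recs in by_ltid.items():
        totals = defaultdict(int)
        for r in recs:
            totals[(r['symbol'], r['side'])] += r['qty']
        summary[ltid] = dict(totals)
    return summary
```
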
We invite feedback from brokers, competitors, regulators and institutional investors on our approach and our views. Previous editions of Turquoise Talks can be found here under the ‘Blogs’ tab.


P.S. We encourage our clients to complete the Automated Trader 2011 Algorithmic Trading Survey

Monday 13 June 2011

The End of Time

Once Upon a Time… our markets consisted of a single CLOB (Central Limit Order Book) by country. Absent competition, these order books were slower and participants’ fees were higher. Such CLOB markets worked with strict ‘Price-Time’ priority. Additionally, in many continental European markets there were concentration rules requiring broker-dealers to execute client orders in the CLOB – so the Price-Time priority in the CLOB was to a significant extent the only game in town (although with support for iceberg orders, some would characterise it as ‘Price-Display-Time’).

As we’re all aware, MiFID changed our market structure by sweeping away the concentration rules mandating use of a single CLOB by country, and allowing the emergence of multiple competing PLOBs (Public Limit Order Books). This has resulted in a period of intense competition and rapid innovation, driving dramatic reductions in trading tariffs, huge improvements in system performance and capacity, and the emergence of dark Midpoint order books, and so on and so forth. And as brokers have developed the technology to participate in multiple lit PLOBs and dark midpoint books, they have also deployed internal crossing networks where they seek to internalise customer flow before, or in parallel with, routing it to external venues.

How have all these developments impacted Price-Time priority in our market?

Most individual PLOBs operate with Price-Time priority (we’ll get to those that don’t a bit later), but the fact that there is more than one means that brokers need to work out in which price-time queue speedy execution of their own limit orders is most certain.

Imagine a multi-lane motorway (each lane is a Price-Time queue of a PLOB), with traffic (limit orders to Buy) queuing up to pass through a set of toll booths (marking the Best Bid in each venue). Coming from the other direction are the Sell orders, also queuing in the same number of lanes for each booth. Booth operators (exchanges and MTFs) are supposed to keep their particular queue moving in a fair and orderly manner. Happily, the collision of a Buy and Sell order results in a Trade (which is published) rather than a car crash, and such collisions happen when somebody pays the toll (the venue’s fee and the spread) to cross and meet the oncoming other queue. So, if you’re in a hurry you can jump to the front of the queue by setting a new best Bid or best Offer, or you can pay a premium for immediacy by submitting an aggressive order.

Where there are multiple venues with the same displayed price, how do brokers decide which ones to access? We should expect a SOR’s venue selection to be driven by cost, certainty of execution, and (possibly) market impact; a simple venue-scoring sketch follows the list below.



  • Cost includes the explicit tariff for aggressive flow and also the related post-trade costs.
  • Certainty of execution is determined by a variety of factors including the broker’s latency to the venue (both for inbound market data and order routing), the average lifespan of limit orders at each venue, and the share volume or number of different orders/participants at the BBO. To estimate certainty, some brokers measure the historical success rate of capturing a targeted bid/offer price when routing to each venue, whilst others use displayed size or a venue’s share of trading as proxies for this.
  • Market impact in this context depends on whether there are differences (by venue) in the propensity of prices to rebound (or fade further) after being hit by a marketable order.
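
As a sketch of how a SOR might combine these three criteria, the toy scoring function below weights fees, fill probability and post-trade reversion; the weights and venue attributes are illustrative assumptions rather than any broker’s actual model.

```python
def score_venue(venue, weights=(0.4, 0.5, 0.1)):
    """Combine cost, certainty of execution and expected market impact
    into a single routing score (higher is better).

    'venue' is a dict with illustrative fields:
      'take_fee_bps'  - explicit fee plus related post-trade costs, in basis points
      'fill_prob'     - historical probability of capturing the targeted price
      'reversion_bps' - average adverse price move after hitting this venue
    """
    w_cost, w_certainty, w_impact = weights
    return (-w_cost * venue['take_fee_bps']
            + w_certainty * venue['fill_prob']
            - w_impact * venue['reversion_bps'])

venues = {
    'venue_a': {'take_fee_bps': 0.30, 'fill_prob': 0.92, 'reversion_bps': 0.5},
    'venue_b': {'take_fee_bps': 0.20, 'fill_prob': 0.85, 'reversion_bps': 0.8},
}
# Route aggressive flow to the best-scoring venue(s) first.
ranking = sorted(venues, key=lambda v: score_venue(venues[v]), reverse=True)
print(ranking)
```
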


Assuming you don’t want to cross the spread, and you have decided to join the queue with a particular limit price, which queue is the right one to post your limit order in?

In the beginning, some brokers chose which venue(s) to post to on the basis of each venue’s overall share of trading in the stock (or group of stocks) – a bit like joining a queue because everyone else is (very English), and trusting in the wisdom of the crowd. Others set specific ratios for each queue, and in doing so some brokers were doubtless influenced by the payment of rebates for getting to the front of MTF queues and by a desire to stimulate and support competition amongst venues.

But as both brokers and their clients have become more sophisticated, so the criteria for venue selection have evolved. It’s increasingly the case that brokers are applying a variety of predictive signals to determine which queue they can get to the front of quickest. What factors are they considering, and how do they capture these in their SOR decisions? Basically – how long is each queue and how fast is it moving?



  • When setting a new EBBO, brokers can jump to the front of any queue they choose. But their choice still matters, because if their new price is subsequently matched on other venues, they want to ensure that they’re in a queue where the booth is moving quickly – a venue that’s reliably attractive to contra-side aggressive flow.
  • If joining an existing queue, they need to consider the length of the queue in relation to the speed at which it’s moving. This similarly depends on the arrival rate of contra-side aggressive flow.


In choosing where to post their limit orders, brokers have to predict the behaviour of prospective counterparties aggressing the market. For example, markets with lower take-fees, more participants and lower latency may enjoy more success in attracting aggressive flow, and hence become more attractive venues for posting.
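
One crude way to capture “how long is each queue and how fast is it moving” is to estimate an expected time-to-fill per venue from the displayed size ahead of you and the historical arrival rate of contra-side aggressive flow. The figures and simplifying assumptions below are purely illustrative.

```python
def expected_time_to_fill(queue_ahead_shares, my_order_shares, takes_per_second):
    """Rough expected seconds until a newly posted limit order is filled,
    assuming (simplistically) that contra-side aggressive flow arrives at a
    steady rate and that nobody ahead of us cancels.
    """
    return (queue_ahead_shares + my_order_shares) / takes_per_second

# Illustrative comparison of two queues at the same price:
# a short queue on a slow venue vs. a longer queue on a fast one.
slow_venue = expected_time_to_fill(queue_ahead_shares=2_000, my_order_shares=500,
                                   takes_per_second=100)
fast_venue = expected_time_to_fill(queue_ahead_shares=6_000, my_order_shares=500,
                                   takes_per_second=800)
print(slow_venue, fast_venue)  # 25.0 vs ~8.1 seconds: the longer queue can still be quicker
```
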



But that’s not the whole picture – the twin forces of competition and technological innovation have changed the landscape in two ways that arguably reduce the certainty of execution for publicly displayed limit orders (and hence reduce the incentives to post them):



  • There are a whole bunch of other queues that you can’t see – effectively another private motorway next to yours that you may or may not be entitled to use. You’re waiting patiently in your queue on the public highway, and you see reports of other people’s executions on the private motorway, but the public traffic doesn’t seem to be moving.
  • Having reached the front of your chosen public queue, you’re expecting to trade when an incoming contra-side order crosses the spread. Instead, somebody sneaks ahead of you at the last second, or you get only a partial execution as some of it is allocated to the people behind you.




Both brokers and exchanges have started to monetise time/place priority as the valuable commodity that it is…

First, brokers…



  • With SORs in place, brokers had the opportunity to introduce their own crossing networks (whether ATSs or BCNs) without doing a disservice to their clients (previously an order kept ‘upstairs’ could not easily also be represented in multiple other places). With “take” fees being high on US exchanges and ECNs (relative to equivalent fees in Europe), it made particular sense for brokers to internalise marketable flow. Some firms already had the necessary internal market making capabilities, some acquired the capability by buying specialists in the field, and others approached the big market makers active in public markets and encouraged them to do the same thing in their own pools.
  • Whilst many market makers are pro-transparency and are, in principle at least, against internalisation, getting a chance to intercept liquidity before your competitors was a fairly compelling opportunity. Brokers realised that the customer flow they were executing was a valuable commodity that could be monetised by offering electronic market makers an earlier opportunity to interact - in essence selling those market makers ‘time/place priority’. So whilst electronic market makers may have displaced the traditional market making businesses of many large banks, within brokers’ own liquidity pools they have become a source of revenues and/or cost savings.
  • The same is beginning to happen in Europe, although the ambiguity surrounding provision of 3rd party “non-discretionary” access to Broker Crossing Systems acts as a partial brake on some firms.




And then exchanges/ECNs and MTFs…

As competition has heated up, exchanges have also started to experiment with different routing or “allocation” models (most of which are breaks from the traditional price-display-time model).


  • NYSE’s model offers Designated Market Makers and Floor Brokers ‘parity’ - the ability to participate in a trade even when they’re not at the front of the queue. The exchange dilutes time priority for normal participants (as just under two-thirds of the liquidity they might have captured in a strict price-time model is instead allocated elsewhere) in return for the fees and committed liquidity they get from the DMMs. This model has recently attracted regulatory criticism, although it’s worth noting that diluting price-time priority lowers the importance of pure speed as a market advantage (instead it’s about your relationship with the exchange) – and hence does not necessarily favour HFT firms. Such models exist because the committed liquidity that DMMs bring can give the exchange a competitive advantage.
  • DirectEdge’s “flash orders” were seen by some as an attempt by the market to subvert price-time priority and instead ‘sell’ priority to a selective subset of customers.
  • NASDAQ’s PSX introduces size priority, allocating incoming contra-side shares pro-rata to the displayed size of participants at the BBO (a worked allocation example follows this list).
  • The arrival of Taker-Maker books (in which a rebate is paid for removing liquidity) can be understood as an attempt to create a venue in which limit orders enjoy superior time (or price) priority over those posted in venues that charge for liquidity removal.
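
For concreteness, here is a toy pro-rata (size-priority) allocation in the spirit of the PSX model mentioned above; the rounding rule is a simplifying assumption, as real matching engines have far more detailed tie-breaking logic.

```python
def pro_rata_allocation(incoming_qty, displayed):
    """Allocate an incoming contra-side order pro-rata to displayed size.

    'displayed' maps participant -> displayed quantity at the BBO.
    Rounds down and gives any residual shares to the largest displayer
    (a simplifying assumption).
    """
    total = sum(displayed.values())
    fill = min(incoming_qty, total)
    alloc = {p: (q * fill) // total for p, q in displayed.items()}
    residual = fill - sum(alloc.values())
    if residual:
        largest = max(displayed, key=displayed.get)
        alloc[largest] += residual
    return alloc

# 1,000 shares arriving against 2,000 / 1,500 / 500 displayed:
print(pro_rata_allocation(1_000, {'mm_a': 2_000, 'mm_b': 1_500, 'broker_c': 500}))
# -> {'mm_a': 500, 'mm_b': 375, 'broker_c': 125}
```
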




Some exchanges are arguing against internalisation on the basis that, by reducing the certainty of execution for public limit orders, it will reduce the incentives to post limit orders in public order books and lead to a vicious circle of widening spreads and increasing internalisation. Basically they appear to be against the dilution of price-display-time priority. And yet the exchanges have also exacerbated the dilution of price-display-time priority through the launch of multiple competing order books and alternative “allocation” models.

All of this makes for a more complex market structure than either participants or regulators are accustomed to, and means we’ll probably be debating these topics for some time.



  • Since the regulators have clearly opted for a competitive (and thereby fragmented) market structure, should we worry about the degree of fragmentation?
  • Is the continual evolution of technology leading us towards a much more distributed market model in which the role of displayed order books is less prominent?
  • How much volume do we need in public order books to provide reliable price formation?
  • To what extent does price discovery rely upon there being price-time priority within the market?
  • And if the death of price-time priority does ultimately undermine the efficacy of the price formation process, then “who done it”?
    • Was it regulators with the introduction of RegATS, RegNMS and MiFID to stimulate competitive (and fragmented) markets?
    • Was it exchanges introducing innovations that give ‘first look’ or privileged interaction rights to a subset of members?
    • Was it brokers leveraging their SOR investments by introducing their own dark pools to which they give precedence over public markets?
    • Was it exchanges operating multiple order books with differential tariff models?
    • Or, like Murder on the Orient Express, were we all guilty?

Thursday 27 January 2011

Size Matters...

Amongst the many questions posed in the CESR/ESMA MiFID II consultation is this one:
“Is it necessary that minimum tick sizes are prescribed?”


Almost everyone (apparently excluding NYSE Euronext) agreed (at least until two days ago) that a lack of tick-size harmonisation is an unnecessary inefficiency that depresses volumes, creates trading errors and results in significant maintenance costs for venue operators and participants. And consensus seems to be growing that the dangers of restricting order-to-trade ratios or imposing minimum resting times on orders would outweigh any potential (poorly articulated) benefits. So attention is refocusing on tick sizes. And yet getting and keeping a consensus on tick sizes has been difficult, so perhaps it’s worth revisiting why tick sizes matter so much, and to whom.



What are the pros and cons of small tick sizes?



  • Smaller ticks intensify competition amongst market-makers and liquidity providers, and thus attract more liquidity and tighten spreads.


    • Tighter spreads are the most obvious measure of market quality. They reduce transaction costs for marketable orders.

    • For larger orders, greater depth of liquidity is required to reduce effective and realised spreads – and there has been a strong correlation between tighter bid-ask spreads and increasing depth of liquidity.

    • More tick granularity improves the efficiency with which statistical arbitrageurs can “port” liquidity from one asset to a related one, resulting in a more efficient and more liquid market.

    • So on the face of it, narrow ticks allow tighter spreads, and tighter spreads improve overall market efficiency.

  • But, with many more price points to choose from, liquidity is naturally distributed across more price points, which has a number of possible implications:


    • Liquidity at the touch (and every other individual price point) may be lower, which some have used to support a somewhat disingenuous argument that “liquidity has reduced since MiFID”. This argument is weak, since the “effective spread” or cost to trade any given size of order has declined due to an increase in visible liquidity when all price points are considered. (A worked cost-to-trade example follows this list.)

    • This distribution of liquidity, and smaller volume at each tick, might reduce incentives to post larger orders, as they will be more easily discerned by other market participants. So smaller tick sizes can encourage the further slicing & dicing of orders.

    • The participation of algorithms and market makers at a greater number of price points in turn requires more order amendments and cancellations as markets move, driving higher volumes of market data and putting strain on the infrastructure of market operators, data vendors and consumers alike.



  • Some argue that tick sizes can be too low (although exactly what constitutes “too low” is a subject of fierce debate):


    • Where the tick size is too low, the cost of setting a new best bid/offer is small, and so large orders are more prone to being “stepped ahead of”. This reduces the incentives to display size in the public markets, continuing the trend towards smaller order and trade sizes and more frequent data updates.

    • Lower liquidity (shorter queues) at each price point, combined with a number of competing order books for each security, might also dilute the incentives to leave orders in the market for a period of time so as to reach the front of the queue – and without such an incentive orders will tend to have a shorter duration – once again fuelling faster market data update rates.
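
To make the “cost to trade a given size when all price points are considered” point concrete, the sketch below sweeps visible ask liquidity across price levels and expresses the all-in cost versus mid in basis points; the book levels shown are illustrative only.

```python
def cost_to_buy(order_qty, ask_levels, mid_price):
    """Average execution price and cost vs. mid (in basis points) for a buy
    order that sweeps visible ask liquidity across price points.

    'ask_levels' is a list of (price, quantity) pairs, best price first.
    """
    remaining, paid = order_qty, 0.0
    for price, qty in ask_levels:
        take = min(remaining, qty)
        paid += take * price
        remaining -= take
        if remaining == 0:
            break
    if remaining:
        raise ValueError("not enough visible liquidity for this size")
    avg_price = paid / order_qty
    return avg_price, (avg_price - mid_price) / mid_price * 1e4

# Illustrative: a finer-ticked book may show less size at the touch but a
# lower all-in cost for the same 1,500 shares.
book = [(10.001, 400), (10.002, 600), (10.003, 800)]
print(cost_to_buy(1_500, book, mid_price=10.000))
```
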


What about larger tick sizes?



  • Larger ticks force a consolidation of liquidity at fewer price points, leading to greater price stability and potentially strengthening the “time” component of “price-time” priority by requiring orders to remain in the book for a while in order to reach the front of the queue. This has the potential to drive greater stability and hence to reduce market data volumes.

  • But, and it’s a very big ‘but’, larger ticks reduce the capacity for participants to price improve, and force wider spreads, and thus they:


    • Materially increase transaction costs for marketable orders (particularly smaller orders from retail participants or algorithms).

    • Exclude some liquidity from the market that cannot afford to cross the spread, and which is unable to gain time priority by setting a new best price.

    • Increase incentives for investors to find price-improvement via non-display or OTC trading avenues.

    • Increase the potential profits of market makers, creating stronger incentives for firms with a substantial distribution network to internalise client flow rather than route it to public markets.


Who wins with larger ticks?



  • Many market makers and day-traders have seen their profits eroded by smaller spreads, and hence typically advocate larger ticks (or at least argue against further reduction). Whilst sustaining the profitability of such participants is unlikely to be regulators’ principal concern, a lack of profitability will lead to a further reduction in liquidity from such participants.

  • Smaller brokers (and often investors executing on a DMA basis) struggle to execute amongst the “blur” resulting from smaller tick sizes and hence higher data volumes. Only those with significant IT budgets can invest in Smart Order Routers capable of capturing the visible liquidity spread across many price points and venues. So perhaps many would welcome a re-consolidation of liquidity at fewer price points, even if it moderately increased their transaction costs. And exchanges to whom such members are more important might reach different conclusions about the optimal level for tick sizes.

  • Both data vendors and data consumers have been stretched by the climb in the order-to-trade ratio. Might they breathe a sigh of relief if tick sizes were increased?

Who prefers smaller ticks and spreads?



  • For retail brokers, tighter spreads translate directly into lower transaction costs.

  • Firms with diversified investment portfolios (e.g. index and quant investors) who rely heavily on algorithmic trading should also realise lower transaction costs.

  • Electronic market makers and some retail brokers, who prefer “egalitarian” markets in which they can interact with liquidity on a fair and equal basis with banks and brokers, like smaller ticks because they reduce brokers’ appetite and capacity to internalise flow. Reduced internalisation, they argue, results in more natural liquidity reaching the public markets, thereby reducing the potential for adverse selection and encouraging greater limit order liquidity (whether from market makers or investors) into the markets.

  • Statistical arbitrageurs, upon whom market participants rely to transfer liquidity and risk across venues and correlated instruments, are negatively impacted by larger ticks and by internalisation, and hence have a strong preference for smaller tick sizes.

  • Any individual venue enjoys a relative advantage if it allows smaller ticks than its competitors, simply because it can publish a tighter BBO, and hence attract order flow from brokers seeking best execution. But this is a tragedy (of the commons) waiting to happen, and can lead to a “race to the bottom” which harms market quality.

Historically, tick sizes were set by exchanges. They sought to balance the needs of different types of market participant. In the UK, the buyside, led by the IMA, consistently lobbied for smaller ticks, with the banks (and market makers in particular) resisting, and the exchange was stuck in the middle. Only as the business migrated towards DMA did this impasse clear, with narrower ticks becoming more widely accepted.


Then the MTFs arrived. By offering standardised pan-European ticks across our own markets, and by adopting smaller ticks, we succeeded in attracting new liquidity to our platforms and to the market as a whole. The success of MTFs in attracting this liquidity forced exchanges to follow suit or find that they no longer offered best execution.


After a while, market participants, struggling with inconsistent tick-size regimes across venues, gave impetus to the discussions that have resulted in today’s “gentleman’s agreement” on harmonised dynamic tick tables for many markets. And somewhere along the way, some on the buyside seem to have switched to the other side of the debate, and now advocate larger ticks.


So what happens next?



  • The harmonisation of tick sizes, attractive in principle, requires a compromise between firms with opposing interests. How robust is the consensus that harmonisation should trump individual venues’ ability to tailor their markets to their customers’ needs? NYSE Euronext seem to be challenging the need for a consensus by announcing changes to their own ticks (deviating from the FESE tables) without consulting many market participants or other platforms.

  • If we aim to retain harmonised ticks, how do we allow different types of market participant to have an appropriate level of influence in the debate, or should we be looking to put the decisions in the hands of academics or regulators rather than practitioners with vested interests?

  • Can the current “gentleman’s agreement” approach, involving venues and brokers working together, be relied upon to keep working as new MTFs spring up, or in the face of a global exchange group announcing changes (whether considered good or bad) without consultation?

  • If ESMA intends to prescribe tick sizes, then how should the terms of reference be set to ensure they reach an appropriate balance between Europe-wide harmonisation and the dynamics of different markets?

Regardless of who makes the decisions, a larger problem is determining whether proposed changes are likely to be beneficial for investors (who should matter more than intermediaries). There are two problems:



  1. We have no universally accepted measure for market quality.

  2. Because MiFID coincided with the credit crisis, we have no way of disaggregating MiFID’s effects on liquidity and market quality from the effects of the crisis.

Perhaps we can take a leaf out of the SEC’s book in the US. When considering changes of this kind, the SEC has a longstanding practice of running short pilot programmes (of a few months), applying the proposed changes to a small but representative sample of securities. This allows the SEC, market participants and academics to gather solid empirical evidence and evaluate the impact of the changes on liquidity and transaction costs relative to a control group of stocks for which no changes were made. I don’t suppose this resolves the arguments or competing interests, but it must surely guarantee a more informed debate.
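
As a sketch of the kind of evaluation such a pilot enables, a simple difference-in-differences compares the change in a market quality metric for pilot stocks against the change for the control group. The numbers below are illustrative only.

```python
def diff_in_diff(pilot_before, pilot_after, control_before, control_after):
    """Difference-in-differences estimate of a pilot's effect on a market
    quality metric (e.g. average quoted spread in basis points).

    Each argument is a list of per-stock averages for that group and period.
    """
    mean = lambda xs: sum(xs) / len(xs)
    pilot_change = mean(pilot_after) - mean(pilot_before)
    control_change = mean(control_after) - mean(control_before)
    return pilot_change - control_change

# Illustrative numbers only: pilot spreads widen by 1.0 bps, control by 0.4 bps,
# so the estimated effect of the change is roughly +0.6 bps.
print(diff_in_diff([5.0, 6.0], [6.2, 6.8], [5.5, 6.5], [5.9, 6.9]))
```
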

Or perhaps the dynamic tick tables we use in Europe (whereby the tick size changes intra-day based on the instrument’s price) already provide adequate data for academics to pore over?
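
For readers less familiar with them, a dynamic tick table is essentially a price-banded lookup: as the instrument’s price crosses a band boundary, a different tick size applies. The bands and tick values in the sketch below are purely illustrative, not an actual FESE table.

```python
# Purely illustrative price bands -> tick size (not an actual FESE table).
ILLUSTRATIVE_TICK_TABLE = [
    (0.0,    10.0,          0.001),
    (10.0,   50.0,          0.005),
    (50.0,   100.0,         0.01),
    (100.0,  float('inf'),  0.05),
]

def tick_size(price):
    """Return the tick size applicable at a given price under a dynamic
    (price-banded) tick table."""
    for low, high, tick in ILLUSTRATIVE_TICK_TABLE:
        if low <= price < high:
            return tick
    raise ValueError("price outside table")

# The applicable tick changes as the instrument's price crosses a band boundary.
print(tick_size(9.50), tick_size(25.00), tick_size(120.00))  # 0.001 0.005 0.05
```
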

This has been, and will continue to be, an imperfect process, and I worry that this week's announcement by NYSE Euronext will harm our ability to maintain a consensus in the future.