Trade Talk Blog

The official blog of Trading Technologies, your source for professional futures trading software.


This is the third in a series of blog posts on MiFID II (Markets in Financial Instruments Directive II). If you missed the earlier posts, see MiFID II: How Did We Get Here and What Does it Mean? and MiFID II and Algorithmic Trading: What You Need to Know Now.

In this post, we take a look at MiFID II testing implications for investment firms engaged in algorithmic trading and review Trading Technologies’ solutions.

Our latest software-as-a-service (SaaS) solution, the TT® platform, gives customers the lowest-latency order messaging rates publicly available in an off-the-shelf trading platform. Reducing the number of instructions that must be translated to machine code and processed by the CPU radically improves time-critical efficiency. The TT platform achieves much of this high-speed performance through its highly optimized code base and by leveraging proven techniques such as network stack kernel bypass. This combination is applied throughout the critical trading path, from market data ingestion to analytics and trade decision logic and on through to market order access.

MiFID II: Blasting Through the Directive’s Algorithmic Drag

As the well-read MiFID II aficionado or equally concerned investment firm participant will know, RTS 6 sets out systems and process requirements for investment firms engaged in algorithmic trading. The directive’s underlying concern is weighted towards preventing, detecting and containing any algorithm behaving in an unintended manner that could create disorderly trading conditions.

Unlike, for example, the equation for maximum dynamic pressure, the regulatory texts describing “disorderly trading conditions” are neither succinct nor clearly defined from an engineering perspective. At Trading Technologies, we’re well-accustomed to encoding technical requirements into binary actions. As a software leader, we hold our products to self-imposed stringent testing that goes beyond the current MiFID II requirements. Nevertheless, however vague the regulatory definition, it leaves no room for complacency.

We conducted extensive consultations with customers, financial trade bodies and technical fora, including the FIX Trading Community’s work to bring clarity to the regulation. From these findings, we defined our own MiFID II test scripts, organized into a series of suites.

What Are These Combined Tests and How Are They Applied?

These tests are grouped into five new MiFID II testing suites and will be applied to our automated trading functionality, including Autospreader® Strategy Engine, Synthetic Strategy Engine and Algo Strategy Engine in X_TRADER®, and Autospreader and our synthetic order types in TT.

In my previous MiFID II blog post, MiFID II and Algorithmic Trading: What You Need to Know Now, we announced the formation of our Algorithm Oversight Council (AOC) to focus on our algorithm design, testing, documentation and customer support. These test suites will be core to the AOC’s internal deliverables, and the results will be made transparent to our customers for their own regulatory satisfaction. They also include MiFID II’s stress tests for high messaging and high trade volumes.

Enhanced Key Testing Suites:

  1. Exchange Latency
    Determines algorithm performance and behavior when subjected to trading venue slowness and delays. These include, but are not limited to, impacts such as high volumes, impairment of systems or member/participant activities.
  2. Disconnection Tests
    Determines algorithm performance and behavior when impacted by exchange disconnects and reconnects at random time intervals.
  3. Erroneous Orders and Transactions
    Determines algorithm performance and behavior when subjected to exchange-induced busts of trades executed by the algorithm concerned.
  4. Price Volatility Stress
    Determines algorithm performance and behavior when subjected to extremely volatile market conditions involving significant short-term swings in market volume and price.
  5. Message Rate Stress Tests
    Determines algorithm performance and behavior when subjected to especially high message loads generated by the trading firm’s own activity.

Tests are applied in a manner appropriate to the nature, scale and complexity of the underlying synthetic order:

We devised a comprehensive and definitive set of error states to determine passes and failures. Typically, the error states comprised detection of unintended orders sent by a specific automated order function, unnecessary order changes, and excess of, or deviation from, normal message traffic for the order type in question, among others. Like-with-like comparative testing played a significant role when analyzing behavior against abnormal price changes, such as high volatility or disorderly markets. For example, the discrepancies in reactionary order handling between an Autospreader with a large scope of price output variables and a synthetic iceberg with none were taken into account.
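A minimal sketch of the kind of error-state detection described above, assuming a per-order-type message baseline (the function name, thresholds and message format here are illustrative inventions, not TT's actual pass/fail criteria):

```python
# Hypothetical error-state check: flag order actions the strategy was
# never meant to send, and message rates that stray too far from the
# order type's normal baseline (too high, or ceasing altogether).
def error_states(messages, baseline_rate, allowed_actions, tolerance=0.5):
    """Return the set of error states detected in an order-message log.

    messages: list of {"t": seconds since start, "action": str}
    baseline_rate: normal messages-per-second for this order type
    """
    errors = set()
    # Unintended orders: any action outside the order type's repertoire.
    if any(m["action"] not in allowed_actions for m in messages):
        errors.add("unintended_order")
    # Traffic deviation: compare observed rate against the baseline.
    rate = len(messages) / max(m["t"] for m in messages)
    if rate > baseline_rate * (1 + tolerance):
        errors.add("excess_traffic")
    elif rate < baseline_rate * (1 - tolerance):
        errors.add("traffic_cessation")
    return errors

# A well-behaved log: only expected actions, at the expected rate.
ok_log = [{"t": 1.0, "action": "new"}, {"t": 2.0, "action": "cancel"}]
```

Running `error_states(ok_log, baseline_rate=1.0, allowed_actions={"new", "cancel"})` yields an empty set, i.e. a pass; injecting a disallowed action or starving the message stream produces the corresponding error state.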

MiFID II-defined stress testing of our plumbing:

  • “Running high messaging volume tests using the highest number of messages received and sent during the previous six months, multiplied by two.”
  • “Running high trade volume tests using the highest volume of trading reached during the previous six months, multiplied by two.”

While targeted at investment firms, these two requirements are arguably vague. For that reason, we took a conservative approach in determining the appropriate load volumes. The Regulatory Technical Standard does not stipulate the period over which the maximum load should be measured, so we scrutinized our hosted TTNET™ environment, identified the highest single-second spike in volume over the previous six months, multiplied it by 2.5 and sustained that load for many minutes while conducting our tests.
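The arithmetic behind that conservative reading is simple enough to sketch (the function name and sample counts are hypothetical; the x2 multiplier is the RTS requirement quoted above, the x2.5 is the buffer described in this post):

```python
def stress_target(per_second_counts, multiplier=2.5):
    """Conservative stress-test load target.

    Take the highest one-second message (or trade) count observed over
    the six-month window and scale it: RTS 6 requires x2, and this post
    describes applying a more prudent x2.5.
    """
    return max(per_second_counts) * multiplier

# e.g. hypothetical observed one-second message counts over the window
peak_target = stress_target([1200, 4800, 3100])        # 4800 * 2.5
rts_minimum = stress_target([1200, 4800, 3100], 2)     # 4800 * 2
```

The sustained-load aspect matters as much as the peak: holding the target rate for minutes, rather than a single burst, is what exposes queueing and back-pressure problems.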

We went beyond the required levels in order to determine the highest potential message and trade loads our customers’ automated servers would experience over a six-month period. By analyzing our TTNET environment and applying a prudent comfort buffer, we multiplied the six-month requirements in both instances by 2.5. All tests passed, and the outcomes will be published as part of our AOC initiative.

ADL®: Testing Customer-Created Algorithms in X_TRADER

Assemble, Test, Approve and Launch

Investment firms, via their independent Algorithm Compliance Officer (ACO), are required to test both their in-house and third-party-created algorithms. For X_TRADER users creating algorithms in ADL, we will draw from our own in-house testing described in this post to provide samples for disorderly market simulation. These samples will be available for customers to run their algorithms against and develop accordingly to satisfy their own regulatory requirements. Our AOC will publish further details and sample descriptions for use. This information will be available on our MiFID II compliance webpage.

Algorithm Management Workflows in TT

New algo management control workflows and versioning states will be optionally available in ADL to administrators impacted by EU regulation. Only end users or designers enabled in Setup will be able to save and share their algorithms with their investment firm’s independent approver. The designer will be required to supply an algorithm name, description and list of input variables with their specific ranges. No algorithm may be deployed into a live marketplace without testing, approval and deployment rights being assigned for the specific version or instance of the algorithm in question.
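The workflow above amounts to a small state machine per algorithm version. A minimal sketch, assuming illustrative state names (`draft`, `tested`, `approved`, `deployed`) and class names that are our own, not TT's actual implementation:

```python
# Hypothetical versioning states: an algorithm version can only reach
# "deployed" by passing through testing and approval; a rejected
# version drops back to "draft" for rework.
ALLOWED_TRANSITIONS = {
    "draft": {"tested"},
    "tested": {"approved", "draft"},
    "approved": {"deployed", "draft"},
    "deployed": set(),          # a live version is immutable
}

class AlgoVersion:
    """One version of a customer algorithm moving through the workflow."""

    def __init__(self, name, description, inputs):
        # The designer must supply a name, description and the input
        # variables with their specific ranges, per the post above.
        self.name = name
        self.description = description
        self.inputs = inputs
        self.state = "draft"

    def advance(self, new_state):
        if new_state not in ALLOWED_TRANSITIONS[self.state]:
            raise ValueError(f"cannot move {self.state} -> {new_state}")
        self.state = new_state
```

Encoding the transitions as data rather than scattered `if` checks makes the compliance rule ("no deployment without testing and approval") impossible to bypass accidentally.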

TT’s PACT: Preemptive Algorithm Compliance Testing

MiFID II introduces testing requirements for algorithms utilized or provided by investment firms, whether developed in-house or by third parties. In addition to our in-house synthetic order type stress and functional testing, later in 2018 we intend to launch an exciting new backtesting application that works seamlessly with the algorithm management workflows described in this post. The algorithm approver will be empowered to pre-define one or more periods of historical data for a given instrument against which to subject a developer’s algorithms. This will provide significant scope for multiple backtesting scenarios.
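In spirit, the approver-defined periods make the backtest a simple loop over historical windows. A toy sketch under stated assumptions (the `backtest` signature, the price-keyed history and the profit-and-loss stand-in algorithm are all hypothetical, not the forthcoming application's API):

```python
# Hypothetical sketch: run a candidate algorithm against several
# approver-defined historical periods and collect one result per period.
def backtest(algo, periods, history):
    """history: {timestamp: price}; periods: list of (start, end) pairs.

    Returns {(start, end): algo_result} so the approver can compare the
    algorithm's behavior across the scenarios they selected.
    """
    results = {}
    for start, end in periods:
        prices = [p for t, p in sorted(history.items()) if start <= t <= end]
        results[(start, end)] = algo(prices)
    return results

# Toy algorithm: profit of buying at the first price, selling at the last.
pnl = lambda prices: prices[-1] - prices[0]

history = {1: 100.0, 2: 105.0, 3: 95.0, 4: 110.0}
results = backtest(pnl, [(1, 2), (3, 4)], history)
```

Separating the period selection (the approver's job) from the algorithm under test (the developer's) mirrors the division of responsibilities the workflow establishes.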

True to our pioneering and creative DNA, as a leading software designer we have sought to develop a comprehensive and transparent array of algorithm testing solutions. Our objective is to facilitate our customers’ present and future compliance trajectories beyond MiFID II’s current regulatory orbit.

In my next blog post, I will take a look at MiFID II’s order and execution messaging requirements and examine Trading Technologies’ solutions for transactional reporting and recordkeeping.