Markets Media

Designing Fault-Intolerant Trading Systems

Written by Terry Flanagan | Dec 3, 2012 9:13:51 AM

With the impact of this year’s trading snafus involving Knight, Facebook and others still fresh in everyone’s mind, trading system designers are under pressure to build safeguards against fat-finger errors and rogue algorithms.

In high-frequency trading, where milliseconds cost millions, systems must respond immediately to trading irregularities before financial disaster occurs.

“With regulators focused on pre-trade risk, the bar is getting higher and higher,” said Peter Metford, chief technology officer at Cyborg Trading, a trading systems provider. “Moving forward, trading systems will need to be able to identify who they’re trading with, who their counterparties are and what their risk limits are. All this increases the need for technology to leave no room for error.”

Cyborg Trading’s Cloud Trader provides an environment in which the functionality, correctness and performance of algorithms can be tested before they are launched into production.

A ‘local sandbox test environment’ provides a dedicated local development machine for testing functionality throughout the development process, including debugging tools that provide full code transparency.

Metford, an engineer whose stealth aircraft detection and tracking algorithms are presently in use in modern radar-detection systems, has built multiple layers of logic tests into the Cloud Trader system, which will stop completely if it detects behavior outside normal bounds.

The system checks not only whether a transaction makes sense, but also whether the account and the algorithm each had the authority to make the trade in the first place.

“Our platform has a hierarchy of checks to the millisecond,” said Metford. “We specialize in high-frequency trading where milliseconds cost millions and our system can respond immediately to trading irregularities before financial disaster occurs.”
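The hierarchy of checks Metford describes can be sketched roughly as a series of gates an order must pass before reaching the market. The sketch below is purely illustrative — the permission tables, limits, and function names are hypothetical assumptions, not Cloud Trader's actual design:

```python
from dataclasses import dataclass

@dataclass
class Order:
    account: str
    algo: str
    symbol: str
    qty: int
    price: float

# Hypothetical limits; a real system would load these per account and algorithm.
ACCOUNT_PERMISSIONS = {"ACC1": {"AAPL", "MSFT"}}
ALGO_PERMISSIONS = {"momentum_v2": {"AAPL"}}
MAX_ORDER_VALUE = 1_000_000.0  # fat-finger bound on notional value
PRICE_BAND = 0.10              # reject prices more than 10% from reference

def pre_trade_checks(order: Order, reference_price: float) -> list[str]:
    """Run every layer of checks; return the list of violations.
    An empty list means the order may proceed."""
    violations = []
    # Layer 1: does the account have authority to trade this symbol?
    if order.symbol not in ACCOUNT_PERMISSIONS.get(order.account, set()):
        violations.append("account not authorized for symbol")
    # Layer 2: does the algorithm have authority?
    if order.symbol not in ALGO_PERMISSIONS.get(order.algo, set()):
        violations.append("algorithm not authorized for symbol")
    # Layer 3: fat-finger check on notional order value.
    if order.qty * order.price > MAX_ORDER_VALUE:
        violations.append("order value exceeds limit")
    # Layer 4: sanity-check the price against a reference.
    if abs(order.price - reference_price) / reference_price > PRICE_BAND:
        violations.append("price outside allowed band")
    return violations
```

In a fault-intolerant design of the kind the article describes, any violation would trip a kill switch that halts the offending algorithm entirely, rather than merely rejecting the single order.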

In fact, trading systems aren’t much different from avionics or other precision-engineered systems that have zero tolerance for faults.

“Just like any well-engineered system, every link in the chain for the system needs to support the intended goal and check for anomalies at every step,” said Scott Ignall, chief technology officer at Lightspeed Financial, a provider of direct access brokerage.

“The system in this case is the trading firms, technology providers, broker-dealers and exchanges. They all need to provide the necessary pre-trade checks to prevent these errors. The regulators have done a good job of tackling individual components, but we don’t believe the entire ‘system’ has been viewed holistically when analyzing the risk of these systems.”

For example, broker-dealers using their own MPID [Market Participant Identifier] can provide their own risk and fat-finger checks. However, if they use another broker-dealer’s MPID, they must use another system. “This part of the rule actually introduces more risk since we now have more systems that need to communicate, and any incongruence in logic could cause the exact type of error we are trying to avoid,” said Ignall.

Another example “is the multiple regulatory bodies not co-ordinating risk rules and efforts when almost every trading system out there has a mix of asset classes, supported by different regulatory bodies exposed to a single trader on a single screen,” said Ignall.

Fault-intolerant design should increasingly become the industry standard for automated trading systems.

“The normal engineering practice is to design fault-tolerant or graceful degradation systems, which means the systems can limp along rather than fail completely when some part of the system fails,” said Metford at Cyborg Trading. “With the new regulatory changes, the 2010 ‘flash crash’ as well as all the mini-flash crashes and the recent Knight Capital craziness, shouldn’t the exchanges be adopting a similar approach?”
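The distinction Metford draws — graceful degradation versus failing completely — can be illustrated with a minimal state machine. The modes and the failure policy below are hypothetical assumptions for illustration, not any exchange's actual rules:

```python
from enum import Enum

class Mode(Enum):
    NORMAL = "accepting new orders"
    CANCEL_ONLY = "cancels only"   # degraded: limp along, stop adding risk
    HALTED = "halted"              # fail-stop: shut down completely

def next_mode(mode: Mode, failure: str) -> Mode:
    """Hypothetical degradation policy: step down to a safer mode
    on recoverable failures, halt outright on unrecoverable ones."""
    if failure == "market_data_stale" and mode is Mode.NORMAL:
        # Graceful degradation: keep managing existing risk,
        # but accept no new orders until data is fresh again.
        return Mode.CANCEL_ONLY
    if failure == "risk_engine_down":
        # Without pre-trade checks, the only safe option is a full stop.
        return Mode.HALTED
    return mode
```

A graceful-degradation design spends most failures in the middle state; a fault-intolerant one, as the article's title suggests, jumps straight to the halt.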