Background
With the rise of global unbundling, traders’ best-execution workload will increase dramatically, as they must develop a best-ex commission allocation scheme for all counterparties. Given so many brokers and so much noise inherent to trading, this is not just a practical concern but an econometric one. This isn’t a new problem, but it is one that will become more common. Some of ITG’s clients, notably quantitative firms, already trade exclusively for best-ex and have a history of performing TCA and judging algorithmic trading experiments. This document describes some of the practices we have observed and offers some insights into how traders at non-quantitative funds can use them.
Understanding Risk
Ideally, brokers would offer algorithms that provide the lowest cost they are capable of for a given level of risk: faster algorithms are costlier but carry lower market risk on average, and slower algorithms the converse. The industry didn’t intentionally develop this way, but competitive forces have driven the market for commercial algorithms to a few different entry points that are fairly similar across brokers and roughly trace out a sort of efficient frontier.
[Figure omitted. Source: ITG]
One of the biggest challenges for an institutional trading desk is estimating the risk tolerance for a given order or program. The most obvious and sought-after inputs to risk tolerance are market direction and portfolio manager alpha, but these are difficult to predict. While it does make sense to pursue a better understanding of directional measures, we think it is also essential to understand the humbler, but more predictable, element of risk: frequency. The more frequently you implement a type of trade, the less each order or day contributes to final performance.
[Figure omitted. Source: ITG]
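To make the frequency point concrete, here is a minimal simulation sketch (illustrative numbers, not ITG data): if per-order costs are noisy but roughly independent, the dispersion of the average cost shrinks roughly as one over the square root of the order count, so each individual order matters less to final performance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-order cost distribution: 10 bps expected cost, 50 bps of noise.
mean_cost_bps, noise_bps = 10.0, 50.0

for n_orders in (10, 100, 1000):
    # Simulate many "periods", each consisting of n_orders independent orders.
    sims = rng.normal(mean_cost_bps, noise_bps, size=(10_000, n_orders))
    avg_cost = sims.mean(axis=1)
    # Dispersion of the averaged cost falls roughly as 1/sqrt(n_orders),
    # so each individual order contributes less to final performance.
    print(f"{n_orders:>5} orders: std of average cost = {avg_cost.std():.1f} bps")
```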
This sounds simple and obvious, but it can be difficult to track the motivation of each trade from PM to desk to broker. This tracking is a technology investment worth making.
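One way to make that investment concrete is to attach a motivation tag to every order and carry it from the PM’s system through the desk to the broker. The record below is a hypothetical sketch; the field names and tag values are illustrative, not a standard.

```python
from dataclasses import dataclass

@dataclass
class TaggedOrder:
    order_id: str
    symbol: str
    side: str          # "BUY" / "SELL"
    quantity: int
    motivation: str    # e.g. "rebalance", "alpha", "cash_flow", "risk_reduction"
    pm_urgency: str    # e.g. "low", "medium", "high"

# The same tag travels downstream so TCA can later group results by motivation.
order = TaggedOrder("0001", "XYZ", "BUY", 25_000, motivation="rebalance", pm_urgency="low")
print(order)
```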
Strategy Selection
Once the risk of an order or program is well understood, it’s time to select a strategy. This is an area where the skill and experience of a trader are key. It may seem that you should simply pick the strategy on the efficient frontier that matches your risk tolerance, but there is another dimension to consider: current market conditions. Volume, spread and volatility vary substantially from day to day and trade to trade. This third dimension can make the shape of the efficient frontier for a given trade very different from the historical expectation. For example, in a high-volume, low-volatility environment, it might be possible to reduce market risk substantially at very low cost by trading faster.
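A minimal sketch of how such conditioning might look, assuming a toy frontier and the simple, illustrative scalings that more volume lowers impact cost and more volatility raises timing risk; the numbers and functional forms are not ITG’s model:

```python
# Hypothetical baseline frontier: expected cost and timing risk (bps) per strategy speed.
baseline = {
    "aggressive": {"cost": 25.0, "risk": 5.0},
    "moderate":   {"cost": 15.0, "risk": 12.0},
    "passive":    {"cost": 8.0,  "risk": 25.0},
}

def conditioned_frontier(volume_ratio: float, vol_ratio: float) -> dict:
    """Scale the historical frontier by today's conditions.

    volume_ratio: today's volume / historical average (>1 means more liquidity).
    vol_ratio:    today's volatility / historical average (>1 means riskier).
    """
    return {
        name: {
            # More volume -> lower impact cost for the same speed.
            "cost": pts["cost"] / volume_ratio,
            # More volatility -> more timing risk for the same speed.
            "risk": pts["risk"] * vol_ratio,
        }
        for name, pts in baseline.items()
    }

# High-volume, low-volatility day: trading faster buys a lot of risk reduction cheaply.
print(conditioned_frontier(volume_ratio=1.5, vol_ratio=0.7))
```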
In general, the efficient frontier for strategy selection can be improved by using personalized data rather than a peer data set that may blend together flows with widely varying motivations. The downside of using only personalized flow is that you may not have a sufficient sample, so a good system will bootstrap with peer TCA data where necessary.
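One simple way to bootstrap is to shrink your own average cost toward the peer average in proportion to how much personal history you have. The sketch below assumes a single shrinkage constant k, an illustrative tuning parameter rather than a prescribed value:

```python
def blended_cost_estimate(own_mean_bps: float, own_n: int,
                          peer_mean_bps: float, k: float = 50.0) -> float:
    """Shrink the personalized cost estimate toward the peer estimate.

    own_n: number of comparable orders in your own history.
    k:     roughly the sample size at which own and peer data get equal weight.
    """
    w = own_n / (own_n + k)
    return w * own_mean_bps + (1.0 - w) * peer_mean_bps

# Thin personal sample -> estimate stays close to the peer benchmark.
print(blended_cost_estimate(own_mean_bps=22.0, own_n=10, peer_mean_bps=14.0))
# Deep personal sample -> estimate is dominated by your own flow.
print(blended_cost_estimate(own_mean_bps=22.0, own_n=2000, peer_mean_bps=14.0))
```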
Auto-Routing
Many brokers offer discretionary implementation shortfall algorithms that can change their behavior based on market conditions and estimates of direction. These strategies are probably most valuable for smaller flows with high risk tolerance. This concept leads us to the general topic of auto-routing, which can increase productivity on the desk by allowing human traders to focus their attention on the most difficult orders.
Auto-routing can be as simple as “orders tagged in a certain way, below a certain size, go to various implementation shortfall algorithms.” It can be as complex as a model informed by past execution experience. Either way, it is important to note that even though the routing is nominally “automatic,” it still reflects a human judgment about the cost/risk tradeoff, just applied at a higher scale.
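A minimal sketch of the simple, rule-based end of that spectrum; the motivation tags, size thresholds, and destination names are hypothetical:

```python
import random
from typing import Optional

# Hypothetical routing table: (motivation tag, max shares) -> eligible IS algo destinations.
ROUTING_RULES = [
    {"motivation": "rebalance", "max_shares": 50_000, "brokers": ["BROKER_A_IS", "BROKER_B_IS"]},
    {"motivation": "cash_flow", "max_shares": 25_000, "brokers": ["BROKER_C_IS"]},
]

def auto_route(motivation: str, shares: int) -> Optional[str]:
    """Return an algo destination for small, tagged orders; None means send to a human trader."""
    for rule in ROUTING_RULES:
        if motivation == rule["motivation"] and shares <= rule["max_shares"]:
            # Random choice among eligible brokers keeps later broker comparisons fair.
            return random.choice(rule["brokers"])
    return None  # too large or unrecognized motivation: escalate to the desk

print(auto_route("rebalance", 10_000))   # routed to an IS algorithm
print(auto_route("alpha", 500_000))      # None -> human trader
```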
Beyond productivity, a significant benefit of auto-routed flow is that it removes bias, so over the long term it should be more practical to make incremental improvements. The most significant downsides are operational risk and the potential for outsized losers due to breaking news.
Broker Selection
In the case of a process-oriented implementation, we suggest separating the concerns of strategy selection from broker selection. For a given strategy (fast, slow, aggressive, on-close, etc.), try to find suitable brokers to include and randomly assign orders to each. In this way, the trader is not creating bias by over-using brokers with whom he or she is most familiar. A side benefit is that a universal FIX spec can be applied to each strategy to simplify order entry and make intent consistent across providers. The goal of the random distribution of orders is to create a fair metric for allocating commission that requires less econometric alchemy to understand. It also creates a clear incentive for brokers to provide better execution to increase their trading allocation.
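A minimal sketch of that randomization, in the spirit of a broker wheel; the strategy buckets, broker names, and weights are hypothetical:

```python
import random

# Hypothetical wheel: for each strategy bucket, the suitable brokers and target weights.
WHEEL = {
    "slow":     {"BrokerA": 0.4, "BrokerB": 0.3, "BrokerC": 0.3},
    "fast":     {"BrokerA": 0.5, "BrokerD": 0.5},
    "on_close": {"BrokerB": 1.0},
}

def assign_broker(strategy: str, rng: random.Random) -> str:
    """Randomly assign an order to a broker within its strategy bucket.

    Randomization (rather than trader habit) is what makes the later
    broker comparison a fair experiment.
    """
    brokers, weights = zip(*WHEEL[strategy].items())
    return rng.choices(brokers, weights=weights, k=1)[0]

rng = random.Random(42)  # seeded for reproducibility
print([assign_broker("slow", rng) for _ in range(5)])
```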
Evaluation
It is much easier to evaluate brokers if the sample is fair in terms of strategy and order difficulty, but even with such a sample, there will still be residual bias. The good news is that it will be easier to adjust for it since you’ve created a reasonably fair and complete cross section.
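One common way to adjust is a regression of cost on order-difficulty covariates plus broker dummies, so brokers are compared after controlling for the difficulty they happened to draw. The sketch below uses synthetic data and illustrative covariates (order size as a percent of ADV, volatility); it is not a prescribed model:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2_000

# Synthetic sample: two difficulty covariates and a random broker assignment.
pct_adv    = rng.uniform(0.01, 0.20, n)   # order size as % of ADV
volatility = rng.uniform(0.01, 0.05, n)   # daily volatility
broker     = rng.integers(0, 3, n)        # brokers 0, 1, 2

# Assumed data-generating process: cost driven mostly by difficulty, plus small broker effects (bps).
true_broker_effect = np.array([0.0, 2.0, -1.0])
cost = 100 * pct_adv + 200 * volatility + true_broker_effect[broker] + rng.normal(0, 5, n)

# Design matrix: intercept, difficulty covariates, dummies for brokers 1 and 2 (broker 0 is baseline).
X = np.column_stack([
    np.ones(n), pct_adv, volatility,
    (broker == 1).astype(float), (broker == 2).astype(float),
])
coef, *_ = np.linalg.lstsq(X, cost, rcond=None)
print("broker 1 vs 0 (bps):", round(coef[3], 2))
print("broker 2 vs 0 (bps):", round(coef[4], 2))
```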
Even if you do everything right, you are still unlikely to be able to rank-order brokers from 1 to N. It is more likely that you will have 2-3 tranches of brokers: e.g., “best, average, underperforming.” From here, you can allocate commissions, but it is common to use more than just raw performance to reward brokers. Firms often have a report card consisting of a performance score (usually weighted at more than 50%) with various qualitative service and reliability scores also factoring into the final ranking.
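A minimal sketch of such a report card, with hypothetical weights (performance at 60 percent, the rest split between service and reliability) and illustrative score cutoffs for the tranches:

```python
# Hypothetical weights and scores on a 0-100 scale.
WEIGHTS = {"performance": 0.60, "service": 0.25, "reliability": 0.15}

brokers = {
    "BrokerA": {"performance": 85, "service": 70, "reliability": 90},
    "BrokerB": {"performance": 70, "service": 90, "reliability": 85},
    "BrokerC": {"performance": 45, "service": 80, "reliability": 95},
}

def report_card(scores: dict) -> float:
    """Weighted blend of performance and qualitative scores."""
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)

def tranche(total: float) -> str:
    """Bucket the blended score into coarse tranches rather than a strict 1-to-N ranking."""
    if total >= 80:
        return "best"
    if total >= 65:
        return "average"
    return "underperforming"

for name, scores in brokers.items():
    total = report_card(scores)
    print(f"{name}: {total:.1f} -> {tranche(total)}")
```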
Exploration vs. Exploitation
One pitfall to avoid is allocating too much commission to the best broker in each category. The first-order logic makes sense here: performance wins. But other brokers may be able to improve over time or demonstrate specialization in a specific type of order flow. Said differently, if you exploit the best option too much, you fail to explore other options in the future, leaving you potentially stuck in a local maximum.
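One simple guard against that trap is an epsilon-greedy style allocation: most commission follows performance, but a fixed slice is always spread across the rest of the panel so other brokers keep generating a measurable sample. The 80/20 split below is illustrative:

```python
def allocate_commission(perf_rank: list, total_commission: float,
                        explore_share: float = 0.20) -> dict:
    """Give most commission to the top-ranked broker, spread the rest evenly.

    perf_rank:     brokers ordered best-first within a strategy category.
    explore_share: fraction reserved for non-top brokers so they can
                   demonstrate improvement or specialization over time.
    """
    best, others = perf_rank[0], perf_rank[1:]
    allocation = {best: (1.0 - explore_share) * total_commission}
    for b in others:
        allocation[b] = explore_share * total_commission / len(others)
    return allocation

print(allocate_commission(["BrokerB", "BrokerA", "BrokerC"], total_commission=1_000_000))
```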
Conclusion
By turning the trading process into a randomized controlled trial, a trading desk can reduce the noise in broker assessment and take on more brokers at scale. Broker report cards can be more quantitative and productive, and you can extract more labor from the Street to address your performance needs. Auto-routing can further increase productivity and reduce noise as the data becomes more conclusive. By adopting some of these techniques from quantitative managers, traders may be able to devote more time to strategy selection and ultimately increase the value of their contribution to the investment management process.
Ben Polidore is Managing Director and Head of Algorithmic Trading for the Americas at ITG.