Buy-side and sell-side senior traders discuss the viability of implementing a global trading model that can accommodate regional differences and idiosyncrasies.
By Stuart Baden Powell, Head of Asian Electronic Execution, Macquarie (Hong Kong), Dan Royal, Global Head of Equity Trading, Janus Henderson (US) and Hugh Spencer, Head of Asian Trading, Janus Henderson (Singapore)
(This article first appeared in Global Trading magazine.)
Stuart Baden Powell:
Several clients have taken the route of installing an algo wheel or another form of bracketing system. Some have done this in response to the Markets in Financial Instruments Directive (MiFID) II, to enhance workflow, or to improve execution performance amid a tilt towards a more quantitatively driven buy-side dealing desk.
The reasons vary, but implementation variance across clients is wider still. Some have mature processes in place, others are at a nascent stage. In either case — as with the algos that are mapped to each bracket — they share the need for constant refinement and rigorous data analysis.
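To make the mechanics concrete, here is a minimal sketch of a bracketing system, assuming a uniform random "spin" within each bracket; the bracket boundaries and broker algo names are purely illustrative, not a real configuration, and production wheels typically weight the allocation by measured performance rather than uniformly:

```python
import random

# Hypothetical brackets keyed by order size as a percentage of ADV;
# boundaries and algo names are illustrative only.
BRACKETS = {
    "small":  {"max_pct_adv": 1.0,   "algos": ["broker_a_vwap", "broker_b_vwap"]},
    "medium": {"max_pct_adv": 5.0,   "algos": ["broker_a_is", "broker_c_is"]},
    "large":  {"max_pct_adv": 100.0, "algos": ["broker_b_liquidity_seek"]},
}

def route_order(order_qty: int, adv: int) -> str:
    """Map an order to a bracket by %ADV, then pick uniformly among
    the algos mapped to that bracket (a simple 'wheel' spin)."""
    pct_adv = 100.0 * order_qty / adv
    for bracket in BRACKETS.values():
        if pct_adv <= bracket["max_pct_adv"]:
            return random.choice(bracket["algos"])
    raise ValueError("order exceeds all bracket bounds")

print(route_order(order_qty=50_000, adv=2_000_000))  # 2.5% ADV -> medium bracket
```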
Globally, much has been made of the challenges of implementing such structures in Asia Pacific (Apac) markets compared with those in Europe, the Middle East and Africa (Emea) or the US. Asia traders would point to the nuances of stocks, foreign exchange and regulations versus Emea or the US. Much like Apac, though, clients in Emea also face differing exchange operations, foreign exchange concerns and specific regulatory challenges. In addition, they encounter far more complex routing logic owing to the existence of multilateral trading facilities (MTFs), both lit and dark, large-in-scale (LIS) venues and systematic internalisers (SIs). In both cases, clients often trade algorithmically across the full spectrum of countries in their regions.
Similarly, in the US, while foreign exchange and macro-legislation concerns fade, a wider range of order types, greater diversity of participant strategies and more demanding routing requirements are constant challenges to effective automated trading processes.
In all three regions there are unique challenges that can nonetheless be structured, and thus processed and canned into brackets or other automated sequencing. Ultimately, there are methods to trade efficiently and effectively in a broadly consistent global format while adjusting for each constituent stock's individual characteristics.
At the entry level, the ability to provide clear, consistent goal-posts and solid levels of communication, coupled with the ability to allow broker discretion to provide optimal solutions to reach those targets, improves relative outperformance.
Janus Henderson runs a broadly globally consistent model, so Dan, how do you see the upsides and challenges cross-region?
Dan Royal:
The use of the term algo wheel has now reached a point where it could be considered disingenuous and irrelevant. Automation or augmentation in tool selection is certainly prevalent within most broker, fintech and execution management system (EMS) offerings. Yet the commonly used term "wheel" seems to sell short the idea of what we are trying to accomplish.
Wheel has the connotation that the tools are commoditised, performance distinctions are negligible, and selection is based on factors other than performance.
Let’s define what we’re attempting to accomplish: an ability to navigate a complex environment in a dynamic manner that allows us to maximize our ability to access suitable liquidity, minimize adverse impact and do so in a manner that avoids predictability and signaling to potentially predatory participants.
To achieve this, a trader needs a defined and distinct set of tools that can cover the wide variety of market conditions and liquidity demands. Those distinct toolsets need to be well understood, effective in performing their defined tasks, complementary to the adjacent toolsets and seamless in an ability to navigate among them.
Each toolset needs optimization by continually comparing and contrasting performance among like-minded products. This achieves the framework for a platform of best-of-breed tools that allow the trader or intelligence engine to select the proper tool to navigate almost any scenario. In this case, the spokes represented by the toolsets are unique and distinct, making the wheel significantly out of alignment.
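As a rough illustration of what comparing like-minded products can look like, the sketch below computes mean arrival slippage and its standard error per algo within one bracket; the slippage figures and algo names are hypothetical:

```python
from statistics import mean, stdev
from math import sqrt

# Hypothetical arrival slippage (bps, negative = underperformance)
# for two 'like-minded' algos in the same bracket.
fills = {
    "broker_a_is": [-3.1, -1.8, -4.2, -2.5, -3.7, -2.9],
    "broker_c_is": [-2.2, -1.5, -3.0, -2.8, -1.9, -2.4],
}

for algo, slippage in fills.items():
    m, se = mean(slippage), stdev(slippage) / sqrt(len(slippage))
    # A wide standard error warns that the sample is too small to re-rank on.
    print(f"{algo}: mean {m:+.2f} bps, standard error {se:.2f} bps")
```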
The challenge then becomes the best way to optimize the scope and sequence of tools in any given trading scenario, on the assumption that the liquidity need is such that it requires a plan and potential course correction. In a truly automated world, the scope of variables is massive, and the ability to program the decision tree is challenging at best and likely introduces significant gaps in the assumptions.
As the industry pushes towards disintermediation of the human, the value added by the trader becomes underrepresented in the equation. It’s not to say automation and tool switching aren’t relevant, but more to suggest that trader augmentation with the proper toolset and guidance may yield a better outcome.
Lastly, toolsets will and should vary by a firm's needs or regional distinctions. Even in the confines of our global bracketed algo construct, the regional distinctions between product and market structure requirements are significant. Global efforts should be considered guideposts for the philosophy and framework of the platform. We feel it is important to establish regional product that is optimized for the region's needs.
SBP:
The point about distinct tools and the need to understand them is key, with an inherent risk in both areas. For example, if you use an average daily volume (ADV) selector as your filter, how do you optimally calculate the level and dynamically adjust? Or do you stretch the targets and move into classification and clustering models? On the latter, which is very topical, we see challenges in implementation, for instance on compactness (the tightness within a cluster) and isolation (the distance between clusters). Trading based on specific parameters and algorithms is essential to monitor at the stock level and on an ongoing basis, which is a resource unto itself.
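One standard diagnostic that combines compactness and isolation in a single number is the silhouette score. A minimal sketch, assuming scikit-learn and synthetic stock features (log ADV, average spread, volatility), shows how the score can guide the choice of cluster count:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import silhouette_score

# Synthetic stand-in for per-stock features: log ADV, spread (bps), volatility (%).
rng = np.random.default_rng(0)
features = rng.normal(size=(200, 3))

# Standardise so no single feature dominates the distance metric.
X = StandardScaler().fit_transform(features)

# Silhouette combines compactness (within-cluster distance) and isolation
# (distance to the nearest other cluster) in one score in [-1, 1].
for k in (3, 4, 5, 6):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    print(f"k={k}: silhouette {silhouette_score(X, labels):+.3f}")
```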
This brings us to chronology, namely: are the costs of implementing and supporting these structures (people and analytics) in your favour compared with the benchmark savings you make from potentially static or constrained algo mappings that may or may not produce optimal results, for example on small-ADV, low-notional savings? And how does this change over time, in terms of marginal cost per basis point saved?
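A back-of-envelope version of that break-even question, with every input below a hypothetical figure rather than a real cost base:

```python
# Back-of-envelope break-even: all figures are hypothetical inputs.
annual_cost = 500_000.0        # people + analytics supporting the wheel (USD)
traded_notional = 20e9         # annual notional routed through the wheel (USD)

# One basis point of improvement on that notional is worth:
bp_value = traded_notional * 1e-4   # = 2,000,000 USD

breakeven_bps = annual_cost / bp_value
print(f"Break-even improvement: {breakeven_bps:.2f} bps")  # 0.25 bps
```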
The structures of both buy-side and sell-side can also start to adjust, both in the skill-set transition of existing staff and in the arrival of a genuinely new breed of staff. Is the requirement to manually handle high numbers of often binary or "quick to find" answers on the sell- and buy-side still relevant, given algos are "hard mapped"?
Or is it (or will it be) more about larger-scale data analysis, where answers are not often known in advance and the ability to "phone a friend" for a rapid response is not available?
Looking a bit deeper, we also see challenges around the use of aggregated data: taking averages over decent-sized data sets should mean "the data speaks for itself", but does it? Comparability across broker performance or, in our case, across client performance is sometimes challenged by the compactness of your cluster, as a simple case in point.
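A toy example of why the averages may not speak for themselves: if one broker's flow is weighted towards harder orders, its raw average slippage can look worse even when it wins within every cluster. All figures below are invented for illustration:

```python
# Hypothetical per-cluster (difficulty) average slippage in bps and order counts.
# broker_b beats broker_a within each cluster, yet looks worse on the raw
# average because its flow is weighted towards the hard cluster.
data = {
    "broker_a": {"easy": (-1.0, 900), "hard": (-9.0, 100)},
    "broker_b": {"easy": (-0.8, 200), "hard": (-8.5, 800)},
}

for broker, clusters in data.items():
    total = sum(n for _, n in clusters.values())
    raw = sum(s * n for s, n in clusters.values()) / total
    per_cluster = ", ".join(f"{c} {s:+.1f} bps" for c, (s, _) in clusters.items())
    print(f"{broker}: raw {raw:+.2f} bps ({per_cluster})")
```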
Hugh, how do you see the buy-side adjusting, and how, if at all, is the servicing relationship between buy-side and sell-side being dislocated?
Hugh Spencer:
Brokers and providers of electronic solutions that fit into a platform placing greater emphasis on the evolution of execution methods are naturally incentivized to provide new ways to optimize customer results. To further promote the spirit of exploration, it is vital to create a complementary framework whereby the qualities that exist outside of a quantifiable universe are measured and rewarded appropriately.
Armed with the understanding and knowledge that their clients not only value but formally evaluate elements such as innovation, quality of daily interactions, adaptation to microstructure changes or connectivity to non-conventional liquidity, vendors and brokers become emboldened to not only create and offer a better product, but also to ensure they offer better overall levels of service.
This environment fosters the true spirit of partnership between client and broker. This highly communicative relationship can, in the long run, only add value for the most important stakeholder: the investor. It also reinforces the idea that a purely quantitative, automated system of execution is not only unsuitable for the full spectrum of asset managers, but that by building the road to bilateral exploration our firm will be more strongly positioned for the future.
SBP:
In turn, how does the algo sales-trader rise to meet those demands? What skills are required in that seat today? Has it changed? Should it change? How does that relate to high touch or program trading, if at all? And how does the total service offering from different or merged channels adjust to the changes on the client-partner side?
Ultimately, the focus sits with quantitative skills. Asking traders to learn to code is a popular suggestion, but although beneficial to the individual, the extra risks probably outweigh any additional resource benefits in actual implementation.
A key strategic question is: how does an algo sales-trading desk move to a materially more quantitative approach, one that can handle (ideally) large data sets from hard-coded, canned algorithms that reduce real-time requirements, while also maintaining and developing that critical partnership on a qualitative level?
As you say, developing integrative negotiation solutions between client and broker that bring the wider algo platform skillset into play through trusted partnership can result in materially improved outcomes for both sides across all three major regions of the world. It is a solid first step.