Gensler: Predictive Data Analytics Presents Challenges

Written by Shanny Basar | Oct 14, 2021 10:05:00 AM

SEC Chair Gary Gensler: Prepared Remarks at SEC Speaks

Thank you. I’m happy to appear at SEC Speaks for the first time as Chair of the Securities and Exchange Commission.

This event provides great continuing legal education to lawyers, accountants, and other market professionals. It also gives a platform for dozens of the talented and dedicated SEC staff and directors to share some insights about our work.

I’d like to thank the Practising Law Institute for working with our agency on this program, and my colleagues Gurbir Grewal and Renee Jones for co-chairing this event.

As is customary, I will note I’m not speaking on behalf of the Commission or the SEC staff.

Today, I’d like to speak about the uses of digital analytics in finance.

As I wrote while researching these issues at the Massachusetts Institute of Technology,[1] “[f]inancial history is rich with transformative analytical innovations that improve the pricing and allocation of capital and risk.” This ranges from Fibonacci’s development of present value formulas in the 13th century to the development of the Black-Scholes options pricing model in the early 1970s.
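
Both innovations reduce to compact formulas. As a minimal sketch, assuming illustrative inputs, here is how each looks in Python using only the standard library:

```python
# A sketch of both formulas, standard library only; all parameter
# values below are illustrative, not drawn from the remarks.
from math import erf, exp, log, sqrt

def present_value(cash_flow: float, rate: float, years: float) -> float:
    """Discount a single future cash flow to today: PV = CF / (1 + r)^t."""
    return cash_flow / (1.0 + rate) ** years

def norm_cdf(x: float) -> float:
    """Standard normal CDF, via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(spot: float, strike: float, rate: float,
                       vol: float, years: float) -> float:
    """Black-Scholes price of a European call option."""
    d1 = (log(spot / strike) + (rate + 0.5 * vol**2) * years) / (vol * sqrt(years))
    d2 = d1 - vol * sqrt(years)
    return spot * norm_cdf(d1) - strike * exp(-rate * years) * norm_cdf(d2)

# $100 received in 5 years, discounted at 3% per year:
print(round(present_value(100, 0.03, 5), 2))                     # 86.26
# At-the-money 1-year call at 5% rate and 20% volatility:
print(round(black_scholes_call(100, 100, 0.05, 0.20, 1.0), 2))   # 10.45
```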

I believe what we’re currently witnessing is just as groundbreaking as those earlier innovations. Predictive data analytics, including machine learning, are increasingly being adopted in finance — from trading, to asset management, to risk management. Though we’re still in the early stages of these developments, I think the transformation we’re living through now could be every bit as big as the internet was in the 1990s.

Artificial intelligence, predictive data analytics, and their insatiable demand for data are reshaping, and will continue to reshape, many parts of our economy. Sometimes, it seems platforms can predict things about us that we don’t even know about ourselves.

When we text a friend about a new restaurant, we might see it advertised to us on another platform.

When we order dish soap online, the website might note that people who bought that soap purchased sponges, too.

When we stream music, a platform might start to suggest other albums we’d like, too.

On finance platforms, we’ve started to see these tools as well. Platforms can tune their marketing and make recommendations to us based on data.

We might complete a trade, only to learn that other investors who bought that stock also traded a different stock. Robo-advisers suggest particular funds to us based upon automated algorithms, our particular data, and our stated preferences.
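
The mechanic behind the soap-and-sponges style of suggestion can be as simple as counting which items co-occur in past orders. Here is a toy sketch in Python; the baskets and item names are invented for illustration:

```python
# A minimal sketch of the "people who bought X also bought Y" mechanic,
# using co-occurrence counts over hypothetical order histories.
from collections import Counter

orders = [
    {"dish soap", "sponges"},
    {"dish soap", "sponges", "paper towels"},
    {"dish soap", "paper towels"},
    {"sponges", "gloves"},
]

def also_bought(item: str, top_n: int = 2):
    """Rank items that most often co-occur with `item` in past orders."""
    counts = Counter()
    for basket in orders:
        if item in basket:
            counts.update(basket - {item})
    return [name for name, _ in counts.most_common(top_n)]

print(also_bought("dish soap"))  # ['sponges', 'paper towels']
```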

Regarding other modern digital platforms — from news outlets to social media sites — there have been debates about whether these companies optimize for our welfare, or a combination of factors that includes their revenues.

While we can learn from those debates, there’s something that is distinctive about finance platforms. They have to comply with investor protections through specific duties — things like fiduciary duty, duty of care, duty of loyalty, best execution, and best interest. These legal duties may conflict with such platforms’ ability to optimize for their own revenue.

Today, digital platforms, including finance platforms, have new capabilities to tailor products to individual investors, using digital engagement practices (DEPs). These modern features go beyond game-like elements, or what is sometimes called “gamification.” They encompass the underlying predictive data analytics, as well as a variety of differential marketing practices, pricing, and behavioral prompts.

While these developments can increase access and choice, they also raise important public policy considerations, including conflicts of interest, bias, and systemic risk.

Conflicts of Interest

First, I’d like to discuss potential conflicts of interest.

The algorithms that modern technologies rely on have tradeoffs. A platform and the people behind that platform have to decide what they’re optimizing for, statistically speaking.

In the case of that online retailer, perhaps the platform’s employees are optimizing for revenues, basket size, and margin.

In the case of brokerage apps, robo-advisers, or online investment advisers, when they use certain digital engagement practices, what are they optimizing for?

Are they solely optimizing for our returns as investors?

Or are they also optimizing for other factors, including the revenues of the platforms?

To the extent that revenues are in the mix in their optimization functions, that means from time to time, they’re going to issue a prompt that will, statistically speaking, optimize their own returns. Further, based on predictive data analytics, I may get different prompts, suggestions, or visual cues than another investor will.
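
One way to make that optimization function concrete is to model the platform’s choice as a blend of expected investor return and expected platform revenue. The sketch below is purely hypothetical; the weights, numbers, and prompt names are illustrative, and real systems are far more complex:

```python
# A toy sketch of the "optimization function" tension described above.
# All numbers and field names are hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class Prompt:
    name: str
    expected_investor_return: float   # benefit to the user
    expected_platform_revenue: float  # benefit to the platform

def pick_prompt(prompts, revenue_weight: float) -> Prompt:
    """Choose the prompt maximizing a blended objective:
    (1 - w) * investor return + w * platform revenue."""
    def score(p: Prompt) -> float:
        return ((1.0 - revenue_weight) * p.expected_investor_return
                + revenue_weight * p.expected_platform_revenue)
    return max(prompts, key=score)

prompts = [
    Prompt("broad index fund", expected_investor_return=0.07,
           expected_platform_revenue=0.001),
    Prompt("frequent options trading", expected_investor_return=0.02,
           expected_platform_revenue=0.030),
]

print(pick_prompt(prompts, revenue_weight=0.0).name)  # broad index fund
print(pick_prompt(prompts, revenue_weight=0.8).name)  # frequent options trading
```

With a zero revenue weight, the blended score picks the investor-friendly prompt; as the weight grows, the revenue-heavy prompt wins, even though it is worse for the investor.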

Therein lies the tension and the potential conflict. What do we do about that tradeoff?

For example, a robo-adviser might steer us to higher-fee or more complex products, even if that isn’t in our best interest. As one scholar put it, “rapid advances in artificial intelligence and machine learning may soon necessitate a rethink of liability regimes applicable to robo-advisors.”[2]

Moreover, a brokerage app might use DEPs to encourage more trading, because they would receive more payment from those trades. More trading, though, doesn’t always lead to higher returns. In fact, the opposite is often true. Or perhaps an app might steer us to high-risk products, options trading, or trading on margin, which may generate more revenue for the platform.

When do these design elements and psychological nudges cross the line and become recommendations? The answer to that question is important, because that might change the nature of the platform’s obligations under the securities laws.

Further, even if certain practices might not meet the current definition of recommendation, I believe they raise a question as to whether there are some appropriate investor protection guardrails to consider, beyond simply the application of anti-fraud rules.

In August, the Commission published a request for public comment on the use of new and emerging technologies by financial industry firms.[3] The comment period just closed.

For example, we received a comment from the University of Miami School of Law’s Investor Rights Clinic, which provides pro bono services to investors of modest means in Florida.[4]

The clinic reports “a sharp increase in clients and prospective clients who suffered losses in their accounts with digital platforms that use DEPs.”

Their clients, they note, “trust the financial institutions they use and express confusion as to the reason their trusted institution would promote high-risk strategies or approve them for levels of options or margin trading that are not appropriate for them.”

In the clinic’s view, these business models do “present a conflict of interest between the retail investor’s needs and the digital platform’s incentive to make money.”

I understand that these tools have opened up the capital markets to a whole new group of people. Digital platforms — from the internet, to mobile phones, to apps — have streamlined user interfaces, enhanced the user experience, and brought greater retail participation into our markets. That, in and of itself, brings a lot of good. But the application of digital analytics raises new questions about conflicts of interest that I think we ought to consider as well.

Some of these issues can (and will) be addressed under our existing rule sets, or through updates to those rules. I’ve asked staff to take a close look at the feedback we received as they make recommendations for the Commission’s consideration, both related to brokers and to investment advisers. We’re separately looking at the incentives, like payment for order flow, which may drive some of these practices.

Bias

The second policy consideration I’d like to discuss is bias, and how people — regardless of race, color, religion, national origin, sex, age, disability, or other factors — receive fair access and fair prices in the financial markets.

How can we ensure that new developments in analytics don’t instead reinforce societal inequities?

This isn’t a new issue. We’ve seen this a lot in the consumer credit space. During the 1960s, the Civil Rights and women’s rights movements demanded action to address the historical biases embedded in credit reporting. The U.S. passed laws to protect equal access in housing, credit reporting, and credit applications.[5]

Today, platforms have an insatiable appetite for a seemingly endless array of data. This raises new questions about what they can do with that data.

For example, one study has shown that people who used iOS software had better credit than people who used Android software.[6]

We have protections in our laws for certain groups of people. What if it turns out that people who used Android software also happened to be women, say, or members of a racial or ethnic minority?

The analytic models could be trained on data that reflects historical biases, along with underlying features that may be proxies for protected characteristics, like race and gender.[7]
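
A toy example with entirely synthetic data shows how that can play out: even a rule that never sees the protected attribute can produce sharply different outcomes once a correlated feature stands in for it.

```python
# A toy sketch of proxy bias with entirely synthetic data. The decision
# rule never sees the protected attribute, yet outcomes diverge because
# a correlated feature (device OS) carries the same signal.
import random

random.seed(0)

# Synthetic population: device choice correlates with the (hidden)
# protected attribute; the correlation strengths are made up.
people = []
for _ in range(10_000):
    protected = random.random() < 0.5
    p_android = 0.8 if protected else 0.2
    device = "android" if random.random() < p_android else "ios"
    people.append((protected, device))

# A naive "credit rule" that looks only at the device feature.
def approve(device: str) -> bool:
    return device == "ios"

def approval_rate(group: bool) -> float:
    decisions = [approve(d) for p, d in people if p == group]
    return sum(decisions) / len(decisions)

print(f"approval rate, protected group:     {approval_rate(True):.1%}")
print(f"approval rate, non-protected group: {approval_rate(False):.1%}")
```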

As finance platforms rely on increasingly sophisticated data analytics, I believe that it will be appropriate to safeguard against algorithmically fortifying such biases.

Systemic Risk

The third policy area I’d like to discuss is systemic risk.

When new financial technologies come along, we need to protect financial stability and resiliency.

I believe that we need to consider how the broad adoption of new forms of digital analytics, and in particular a subset of artificial intelligence called deep learning, might contribute to a future crisis.[8]

For example, these models could encourage herding into certain datasets, providers, or investments; greater concentration of data sources; and deeper interconnectedness.
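
As a rough, synthetic illustration of the data-concentration point: when firms’ models draw their inputs from one shared source, their errors move together, so they tend to be wrong at the same time.

```python
# A small synthetic simulation of data-source concentration: the more of
# each firm's input noise that comes from one shared source, the more
# correlated their prediction errors become. All numbers are made up.
# (statistics.correlation requires Python 3.10+.)
import random
import statistics

random.seed(1)
truth = [random.gauss(0, 1) for _ in range(1000)]

def two_firms(shared_weight: float):
    """Each firm predicts truth plus noise; `shared_weight` sets how much
    of that noise comes from a single shared data source."""
    shared = [random.gauss(0, 1) for _ in truth]
    firms = []
    for _ in range(2):
        own = [random.gauss(0, 1) for _ in truth]
        firms.append([t + shared_weight * s + (1 - shared_weight) * o
                      for t, s, o in zip(truth, shared, own)])
    return firms

for w in (0.0, 0.9):
    a, b = two_firms(w)
    err_a = [x - t for x, t in zip(a, truth)]
    err_b = [x - t for x, t in zip(b, truth)]
    print(f"shared-source weight {w}: "
          f"error correlation {statistics.correlation(err_a, err_b):.2f}")
```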

Such herding, interconnectedness, and concentration can lead to system-wide issues. We saw herding in subprime mortgages before the 2008 financial crisis, in certain stocks during the dot-com bubble, and in the Savings and Loan crisis of the 1980s.

The interconnectedness of many credit rating models deepened the 2008 financial crisis. About a decade ago, Greece’s debt crisis triggered similar crises in Portugal, Spain, and elsewhere.

Today’s digital analytics, including deep learning, represent a significant change when compared to previous advances in data analytics. They are increasingly complex, non-linear, and hyper-dimensional; they are less explainable. I believe existing regulations are likely to fall short when it comes to the broad adoption of new forms of predictive digital analytics in finance.

Thus, 2020s data analytics may bring more uniformity and network interconnectedness, or expose gaps in regulations developed in an earlier era. Financial fragility could come from different pathways — perhaps some critical data aggregator, or in particular model designs.

***

In conclusion, I believe we live in a transformative time, where artificial intelligence and predictive data analytics are changing many aspects of our economy. We may be at the early stages of these developments in finance, but we’re already seeing changes in multiple areas: from trading, to asset management, to risk management and beyond.

I believe that these technologies present the opportunity to expand access and lead to better risk management. At the same time, predictive data analytics raise a number of important challenges: conflicts of interest, bias, and systemic risk.

Thank goodness — that still leaves us humans with a role! All kidding aside, policymakers, technologists, computer scientists, and market participants can come together, engage in robust debate, and tackle a number of important issues that the adoption of predictive data analytics presents. I believe this will allow these technological developments to live up to their great promise.

Thank you.