
AI Liability Comes Into Sharper Focus

Written by Rob Daly | Oct 25, 2019 6:00:37 PM

Financial markets are relying more and more on autonomous technologies such as neural networks and smart contracts, but the industry and market regulators are still working out who is liable when the technology goes rogue.

Although the technology is new, the issues of governance, cybersecurity, and privacy are not, according to Gary DeWaal, special counsel and chair of the financial markets and regulation practice at law firm Katten Muchin Rosenman.

“The important thing to consider in rolling out anything is how these issues are going to be addressed and what the new hot spots are,” he said during a panel discussion at the second annual Fintech Forward conference hosted by LabCFTC.

Within the past year, the US Securities and Exchange Commission found Zachary Coburn, founder of the decentralized token exchange EtherDelta, culpable for the platform’s operation as an unregistered securities exchange, even though the platform ran autonomously on an Ethereum smart contract trading ERC-20 tokens and Coburn had left the exchange.
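What made the case unusual is that the settlement logic lived in contract code rather than with any operator. As a rough illustration only (EtherDelta’s actual contract was written in Solidity for Ethereum; the names and rules below are hypothetical), here is a Python sketch of a deterministic settlement rule that keeps executing with no administrator in the loop:

```python
# Illustrative only: a toy, in-memory model of an autonomous settlement rule,
# loosely in the spirit of a decentralized token exchange. All names are
# hypothetical; EtherDelta's real logic was an Ethereum/Solidity contract.

class ToyExchangeContract:
    def __init__(self):
        self.balances = {}  # (account, token) -> amount on deposit

    def deposit(self, account, token, amount):
        key = (account, token)
        self.balances[key] = self.balances.get(key, 0) + amount

    def settle(self, maker, taker, give_token, give_amt, take_token, take_amt):
        """Swap tokens between two accounts if and only if both are funded.

        No operator approves the trade: the rule below is the whole authority.
        Assumes positive amounts.
        """
        if self.balances.get((maker, give_token), 0) < give_amt:
            raise ValueError("maker underfunded")
        if self.balances.get((taker, take_token), 0) < take_amt:
            raise ValueError("taker underfunded")
        self.balances[(maker, give_token)] = self.balances.get((maker, give_token), 0) - give_amt
        self.balances[(taker, give_token)] = self.balances.get((taker, give_token), 0) + give_amt
        self.balances[(taker, take_token)] = self.balances.get((taker, take_token), 0) - take_amt
        self.balances[(maker, take_token)] = self.balances.get((maker, take_token), 0) + take_amt


contract = ToyExchangeContract()
contract.deposit("alice", "TOKEN_A", 100)
contract.deposit("bob", "TOKEN_B", 50)
contract.settle("alice", "bob", "TOKEN_A", 100, "TOKEN_B", 50)
print(contract.balances)  # trades clear with no administrator in the loop
```

Once such a rule is deployed, trades clear whether or not the founder is still involved, which is exactly the liability gap the SEC’s order had to address.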

“It is an example of someone being picked and made responsible,” said DeWaal.

The SEC noted that Coburn should have anticipated that his application could violate the agency’s registration requirements, and it fined him $75,000 and ordered disgorgement of profits, he added.

However, regulators should not be too harsh on the programmers who might have written a few lines of code, said fellow panelist Sam Playle, a data scientist at Kaizen Reporting.

“The programming is an important part of it,” he said. “Determining what data will be in scope, developing the algorithm, and all of these other issues are non-programming questions which have been critical in determining what the final AI is going to look like. It is not just the programmers to whom we should look.”

The danger of technologies like neural networks, which can contain millions of parameters, is the lack of transparency in how they reach their conclusions, he added.
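For a sense of scale, even a small fully connected network accumulates hundreds of thousands of parameters, and its output is a long chain of matrix products in which no single weight explains the prediction. A minimal sketch, assuming hypothetical layer widths and plain NumPy:

```python
import numpy as np

# Hypothetical layer widths for a small fully connected network:
# 100 inputs, two hidden layers of 512, one scalar output.
layers = [100, 512, 512, 1]

# Each layer contributes a weight matrix plus a bias vector.
n_params = sum(i * o + o for i, o in zip(layers[:-1], layers[1:]))
print(n_params)  # 314,881 parameters in even this modest network

# A forward pass: the prediction emerges from stacked matrix products and
# nonlinearities, so no individual weight "explains" the output.
rng = np.random.default_rng(0)
weights = [rng.standard_normal((i, o)) * 0.05 for i, o in zip(layers[:-1], layers[1:])]
biases = [np.zeros(o) for o in layers[1:]]

x = rng.standard_normal(100)
for W, b in zip(weights[:-1], biases[:-1]):
    x = np.maximum(x @ W + b, 0.0)          # ReLU hidden layers
prediction = x @ weights[-1] + biases[-1]   # opaque scalar output
print(prediction)
```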

Neural networks’ opaque nature could easily lead Wall Street to relive a high-profile crisis like the 1998 collapse of Long-Term Capital Management.

“They were not using machine learning as we understand it in the modern era,” said Playle. “They were using statistical models, and they knew what they were but still didn’t properly understand and quantify the risks. Now we’re in an even worse situation because we don’t know what the model is doing, so how can we begin to quantify the risk?”
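The distinction can be made concrete with risk quantification. With a parametric model, the tail estimate is at least written down and its assumptions can be checked; a black-box model offers no formula to check at all. In the sketch below (illustrative numbers only, not market data: fat-tailed Student-t “true” returns scaled to roughly 1% daily volatility), a one-day 99% value-at-risk computed under a normal assumption understates the tail of the fatter-tailed returns:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical daily returns: the "true" world is fat-tailed (Student-t, df=3),
# scaled to roughly 1% daily volatility.
true_returns = rng.standard_t(df=3, size=250_000) * 0.01

# The modeler assumes returns are normal and fits mean/volatility from data.
mu, sigma = true_returns.mean(), true_returns.std()

# Parametric 99% one-day VaR under the normal assumption: the loss threshold
# that would be exceeded on 1% of days if the model were right.
z_99 = 2.326  # 99th-percentile z-score of the standard normal
var_normal = -(mu - z_99 * sigma)

# Empirical 99% VaR from the fat-tailed returns themselves.
var_empirical = -np.quantile(true_returns, 0.01)

print(f"normal-model 99% VaR: {var_normal:.4f}")
print(f"empirical    99% VaR: {var_empirical:.4f}")
# The normal model understates the tail: the risk was "quantified", but with
# the wrong distribution. A black-box model gives no formula to check at all.
```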

Wall Street and market regulators should not be so fearful of not knowing what AI models are doing, cautioned panelist Aaron Klein, a fellow in Economic Studies and policy director of the Center on Regulation and Markets at the Brookings Institution.

There will be hiccups as the technology develops, but regulators and the industry should weigh the improvements these technologies deliver against the current state of affairs rather than against some perfect end state.

“If I asked whether you would want to switch to autonomous vehicles that caused 75,000 deaths a year, many would not,” he said. “But if I said that deaths and accidents would be cut down by 75%, you might. They are the same amount. It’s about how to frame the question.”