According to a recent study, policymakers have used 55 analogies to describe what “artificial intelligence” (AI) is and does. This semantic maelstrom is a factor in a robust U.S. federal agency and industry debate about how to regulate the development and use of AI models applied to myriad purposes, from cellphone text prediction to disease detection to assisting investment decisions. President Biden’s “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” of October 2023 instructed federal agencies to report on how AI is developed and used by entities under their authority and by the agencies themselves. The Commodity Futures Trading Commission’s (CFTC) Technology Advisory Committee (TAC) subcommittee on Emerging and Evolving Technologies was tasked with producing a report on AI in markets regulated by the CFTC.
A CFTC advisory committee report on AI
IATP has a long-standing interest in how derivatives prices serve as benchmarks for prices paid to farmers and in how trading technology affects price discovery and price volatility. I was honored to serve as a public interest representative on the AI subcommittee to bring our past analysis of trading technology to bear on the introduction of AI in derivatives markets. Our report, which the TAC approved on May 2 to send to the CFTC, uses the definition of AI from the National Institute of Standards and Technology:
a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. Artificial intelligence systems use machine- and human-based inputs to perceive real and virtual environments; abstract such perceptions into models through analysis in an automated manner; and use model inference to formulate options for information or action. (Glossary of Terms, p. 5)
How did we apply this granular definition to trading, risk management and the back-office transaction work (clearing) of the derivatives markets? An initial topic of discussion among subcommittee members was how to distinguish trading algorithms currently in widespread use, including for agricultural futures contracts, from the advent of generative AI-driven algorithmic trading. Directed by generative AI, algorithms (i.e., numerically encoded trading strategies) could change without human intervention in response to the data environment from which the AI trading model continuously learns. The subcommittee report discussed how and when humans can be held accountable for AI-enabled decisions in terms of “Humans in the Loop,” that is, the points at which humans give direct feedback to an AI model.
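To make that distinction concrete, here is a minimal, hypothetical Python sketch (not drawn from the subcommittee report): a conventional algorithm whose rule changes only when a human edits it, a stand-in for a generative-AI model that proposes its own parameter changes from market data, and a “Human in the Loop” checkpoint through which any proposed change must pass. All function names and thresholds are illustrative.

```python
# Hypothetical sketch: static trading rule vs. AI-proposed self-revision,
# with a "Human in the Loop" accountability checkpoint.

from dataclasses import dataclass

@dataclass
class Strategy:
    threshold: float  # e.g., price-move threshold that triggers an order

def static_rule(price_move: float, strategy: Strategy) -> str:
    """Conventional algorithm: its behavior changes only when a human edits it."""
    return "SELL" if price_move < -strategy.threshold else "HOLD"

def ai_proposed_update(strategy: Strategy, recent_volatility: float) -> Strategy:
    """Stylized stand-in for a generative-AI model that proposes a new
    parameterization in response to the data environment it is learning from."""
    return Strategy(threshold=strategy.threshold * (1 + recent_volatility))

def human_in_the_loop(current: Strategy, proposed: Strategy, approved: bool) -> Strategy:
    """Accountability checkpoint: the proposed change takes effect only if a
    human reviews and approves it; otherwise the prior strategy stands."""
    return proposed if approved else current

if __name__ == "__main__":
    strategy = Strategy(threshold=0.02)
    proposal = ai_proposed_update(strategy, recent_volatility=0.15)
    strategy = human_in_the_loop(strategy, proposal, approved=False)  # human rejects
    print(static_rule(price_move=-0.03, strategy=strategy))  # -> "SELL"
```

Without the checkpoint, the AI-proposed update would have taken effect automatically; that removal of human review is precisely what distinguishes generative AI-directed trading from today’s widely used algorithms.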
The most familiar forms of AI are the large language models behind chatbots such as ChatGPT and Bard, which predict what you want to say and how you want to say it on your electronic devices. The controversies about the overpromising and underperforming of AI were summarized in a May 15 New York Times opinion piece, “Press Pause on the Silicon Valley Hype Machine.”
The subcommittee report did not assess hyperbolic claims for AI but instead focused on generic use cases and their attendant risks in commodity derivatives markets. (pp. 48-51) The report summarized “the potential of AI to improve automated processes governing core functions like risk management, surveillance, fraud detection, and the identification, execution, and back-testing of trading strategies. AI can also be used to collect and analyze information about current or prospective customers and counterparties, and for surveillance and fraud detection.” (p. 46) These generic use cases and risks are mostly drawn from academic literature because CFTC-registered entities do not disclose which AI models they use nor how they use them.
. . . it is difficult to determine whether and how CFTC-regulated firms are currently, or might in the future, use AI. Part of the challenge is a lack of direct knowledge about the organizations currently leveraging AI, in addition to the level of transparency among these firms for regulators and customers, about the components of the AI model, especially as they relate to trading strategies and risk management. (p. 47)
The CFTC cannot regulate the use of AI by registered entities if it has no direct knowledge about how AI is being used, by whom and for what purposes.
The CFTC appointed its first Chief Data and Artificial Intelligence Officer on May 1. This official will lead staff interviews with CFTC registrants to try to acquire “direct knowledge” about their AI use. The report’s first recommendation is “The CFTC should host a public roundtable discussion and CFTC staff should directly engage in outreach with CFTC-registered entities to seek guidance and gain additional insights into the business functions and types of AI technologies most prevalent within the sector.” (p. 11) Since staff interviews likely will be confidential, the public roundtable may offer the public its first “direct knowledge” about AI use by CFTC registrants.
AI and trading algorithms
IATP began to research algorithmic trading in automated trading systems (ATS) in 2013, when the CFTC requested comment on a “Concept Release” for regulating ATS. We had followed the issue since the infamous May 6, 2010 “flash crash,” during which about $1 trillion in the contract value of a stock futures index was temporarily lost in less than five minutes before humans intervened to stabilize the trade matching engine and recover much of the lost value. Within a week, the CFTC had identified the origin of the flash crash: a series of trades made to drive down the price of the Standard and Poor’s e-mini contract, the largest stock index futures contract, on the Chicago Mercantile Exchange (CME). However, the failure of CME “kill switches,” computer programs to slow down trading during extreme price volatility, also enabled the flash crash. A CFTC and Securities and Exchange Commission (SEC) staff report in September 2010 analyzed the role of High Frequency Traders (HFT) and automated execution algorithms, among other factors in the flash crash.
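For readers unfamiliar with the mechanism, the sketch below shows in simplified, hypothetical form what a volatility “kill switch” does: if prices move more than a set percentage within a short window, order matching is paused for a cooling-off period. It is not a representation of the CME’s actual controls; the class name and parameters are invented for illustration.

```python
# Hypothetical volatility "kill switch": halt trading for a cooling-off
# period when prices move too far, too fast.

from collections import deque

class VolatilityKillSwitch:
    def __init__(self, max_move: float = 0.05, window: int = 10, pause_ticks: int = 30):
        self.max_move = max_move            # e.g., a 5% move within the window
        self.window = deque(maxlen=window)  # most recent prices
        self.pause_ticks = pause_ticks
        self.pause_remaining = 0

    def allow_trading(self, price: float) -> bool:
        """Return False while trading is paused or when the switch trips."""
        if self.pause_remaining > 0:
            self.pause_remaining -= 1
            return False
        self.window.append(price)
        if len(self.window) == self.window.maxlen:
            move = abs(self.window[-1] - self.window[0]) / self.window[0]
            if move > self.max_move:
                self.pause_remaining = self.pause_ticks  # trip the switch
                return False
        return True

if __name__ == "__main__":
    switch = VolatilityKillSwitch()
    prices = [100] * 9 + [94]  # roughly a 6% drop inside the window
    print([switch.allow_trading(p) for p in prices])  # final tick is halted
```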
Despite the “lessons learned” in the CFTC and SEC staff report and in academic analyses, SEC regulation of HFT was limited to a requirement that firms using HFT to trade across markets (e.g., equities and derivatives) off exchange register with the SEC and be subject to recordkeeping and reporting requirements. The CFTC proposed a more ambitious regulation of automated trading (Reg AT) in 2015, which was opposed by the CME, ATS beneficiaries and members of Congress. That opposition found a sympathetic ear in the Trump administration’s CFTC, which withdrew a revised version of Reg AT and replaced it with an industry self-regulatory “Principles of Electronic Trading” rule in 2020. The CFTC’s oversight of AI, whether in the form of regulations or voluntary guidance documents to industry, will build on this industry self-regulation of ATS.
AI risks to competition
IATP briefly recalled the history of the CFTC’s failure to regulate ATS in our April 24 response to the agency’s Request for Information (RFI) about the use of AI in the derivatives industry. We responded to several questions, including one on the risks of AI for competition. A Big Tech oligopoly currently develops the foundational AI models that are trained on the largest sets of “unstructured data,” which often includes copyrighted publications and patented information. An array of copyright holders are suing AI developers for appropriating their work without permission or compensation. For example, Meta’s AI chatbot continues to “scrape” data from publications for its news summaries, even as it withdraws from directly posting published articles.
However, the financial services industry’s investment in AI, especially by the largest banks, is beginning to exceed that of Big Tech itself, according to a Better Markets response to the RFI. Better Markets writes,
the five largest investment banks filed 94 percent of AI-related patents between 2017 and 2021, published two-thirds of the AI research papers, and accounted for half of AI investments. Experts expect that financial institutions’ spending on AI will continue to expand, doubling from 2023 to 2027 and topping $400 billion. (p. 2)
If the patents filed and investments made result in proprietary AI models that perform successfully in the derivatives markets, the mega-banks’ competitors, which depend on purchasing and then adapting AI models, likely will be at a technological and competitive disadvantage. As we note below, a registered entity’s adaptation of an AI model developed by a third party carries a higher risk of technological dysfunction than do large language models developed specifically for the mega-banks’ lines of business.
Our response on AI risks to competition began by summarizing the testimony of an agricultural commodity trader competing with an HFT algorithmic trader for timely access to contracts to reduce price risk for the trader’s customers. That trader, speaking at a 2018 CFTC co-sponsored conference on agricultural futures trading, noted that while HFT lowered transaction costs, the savings mattered little because firms that could afford to buy supercomputers and HFT software, locating them close to exchanges to access contracts in nanoseconds, would outcompete less financially endowed traders. AI adds to the upfront cost structure of trading and likely would make it harder for commodity trading specialists to compete with AI-enabled specialists at the largest banks and hedge funds.
OpenAI CEO Sam Altman said in 2023 that GPT-4, a foundational model, cost more than $100 million to train. The costs of training include hardware, software, the energy costs of computation and different kinds of testing. Large language models trained with fewer parameters on smaller data sets cost less but likely will still be beyond the budgets of most CFTC-regulated entities to buy and adapt to their needs. More targeted models, producing fewer data outputs, likely will cost less still. But a larger question, beyond the scope of this article, is whether the costs of AI models are justified relative to the risks of using models for which there are still no agreed testing standards for “responsible AI.”
AI value alignment and AI safety
Both the AI subcommittee report and the RFI address questions about possible inadequacies in the data interface between AI models as designed and as adapted by third-party service (TPS) providers for use by clients, such as derivatives traders. For example, a TPS provider might not disclose the methodologies or the risk metrics used to train the AI model, which could lead to a misalignment between the AI model and the client’s risk management and trading strategies. IATP quoted from a research group’s strategy for reducing misalignment:
Much of the research at the intersection of artificial intelligence and ethics falls under the heading of machine ethics, i.e., adding ethics and/or constraints to a particular system’s decision-making process. One popular technique to handle these issues is called value alignment, i.e., restrict the behavior of an agent so that it can only pursue goals which follow values that are aligned to human values. (p. 1)
For CFTC-registered entities to apply this research and avoid false or misleading outputs that could unintentionally disrupt transactions or even trading strategies, TPS AI providers must cooperate with the CFTC. One way to secure that cooperation would be to require the providers to register with the CFTC and report which AI models are used by exchanges, intermediaries and registered market participants in their business and risk management departments.
However, even successful value alignment does not guarantee that AI models will operate safely and avoid harming registered entities or the customers of contracts that manage price risks in both financial commodities, such as interest rates, and physically backed commodities, such as wheat. Researcher Heidi Khlaaf writes, “The AI community, conflating requirements engineering with safety, has allowed those building AI systems to abdicate safety by equating safety measures with a system meeting its intent (i.e. value alignment) [italics in the original]. Yet, in system safety engineering, safety must center on the lack of harm to others that may arise due to the system intent itself.” (p. 4) A CFTC-registered entity’s value alignment could restrict the AI model’s behavior in a way that is consistent with the entity’s desired data output.
If the alignment parameters were themselves unsafe, e.g., a higher investment risk tolerance than is prudent for an entity with a high debt-to-equity ratio, “harm to others,” i.e., the entity’s customers, could result. The TAC subcommittee report advised the CFTC, “It is crucial that the Commission avail itself of computer science research to distinguish value alignment from safety requirements to comprehensively assess risk in AI systems.” (p. 25) On June 4, 13 current and former OpenAI and Google DeepMind employees warned in a letter to the companies that AI risks were being ignored in product development. They asked to be released from broad non-disclosure agreements and for whistleblower protections so that they could debate AI product safety issues within the companies without fear of retaliation.
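The distinction the report draws between alignment and safety can be illustrated with a small, hypothetical example: a model’s recommendation can satisfy the entity’s own risk-tolerance parameter (alignment with the entity’s intent) while failing an independent prudential check tied to the entity’s leverage (safety). The names, thresholds and the leverage rule below are invented for illustration, not taken from the report or from Khlaaf’s paper.

```python
# Hypothetical illustration: "aligned" with the entity's stated intent,
# yet unsafe given the entity's leverage.

def alignment_check(recommended_risk: float, entity_risk_tolerance: float) -> bool:
    """Does the model's recommendation meet what the entity asked for?"""
    return recommended_risk <= entity_risk_tolerance

def safety_check(recommended_risk: float, debt_to_equity: float,
                 max_risk_per_unit_leverage: float = 0.10) -> bool:
    """Independent check: is the recommendation prudent given the entity's
    leverage, regardless of the entity's chosen tolerance?"""
    return recommended_risk <= max_risk_per_unit_leverage / debt_to_equity

recommended_risk = 0.08        # risk level the AI model recommends
entity_risk_tolerance = 0.10   # alignment parameter chosen by the entity
debt_to_equity = 4.0           # a highly leveraged entity

print(alignment_check(recommended_risk, entity_risk_tolerance))  # True: "aligned"
print(safety_check(recommended_risk, debt_to_equity))            # False: unsafe
```

The first check passes because the model did what the entity asked; the second fails because what the entity asked for was imprudent, which is the report’s point about distinguishing value alignment from safety requirements.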
How might the CFTC update its Risk Management Program (RMP) rule to audit AI for safety and accuracy?
The current RMP rule requires independent third parties to conduct and report an annual audit of a registrant’s automated risk management tools and automated trading systems. That rule, released around the launch of ATS more than 20 years ago, is not adequate to audit AI-driven risk management tools because those tools are continuously modified by what the AI model learns from its data environment. But what would be adequate?
Our response to the RFI staff questions about auditing AI systems in derivatives trading was short because we found little research on the topic. However, we cited research that pointed to the first requirement for effective CFTC oversight of the auditing of AI-powered risk management — the cooperation of the industry with the CFTC:
Lack of access to data and algorithmic systems strikes us as the most significant vulnerability of the current AI audit ecosystem. Protecting proprietary information is not a proper response, as all audit systems provide some sort of privileged access to auditors, and disclosure does not have to be direct nor absolute. The National Institute of Standards and Technology, for instance, protects models by having companies run models via a custom Application Program Interface (API) for the Face Recognition Vendor Test (FRVT). Such mediated access, subject to auditor vetting (perhaps by an audit oversight board) and consistent with the audit scope, will be critical to enabling third party auditing of AI systems. (p. 8)
IATP advised the CFTC to avail itself of NIST and other auditing expertise to update the RMP rule. If registrants resist giving third-party auditors mediated access to their AI systems, system vulnerabilities could build up that not only imperil the safety and accuracy of the registrant’s AI systems but also have a contagion effect among the customers of intermediaries, such as futures commission merchants, who might deploy those systems for customer protection.
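The “mediated access” described in the quoted passage can be sketched in simplified form: the registrant keeps its proprietary model behind an interface, vetted auditors submit test cases and receive outputs, and every query is logged for the audit record. The class and method names below are hypothetical; this is not NIST’s FRVT interface or any existing CFTC mechanism.

```python
# Hypothetical "mediated access" gateway for third-party AI auditors.

class MediatedModelAccess:
    def __init__(self, proprietary_model, approved_auditors: set[str]):
        self._model = proprietary_model      # never exposed directly
        self._approved = approved_auditors   # vetted, e.g., by an audit oversight board
        self.access_log: list[tuple[str, object]] = []

    def run_audit_case(self, auditor_id: str, test_input):
        """Run an auditor's test case against the model without disclosing it."""
        if auditor_id not in self._approved:
            raise PermissionError("Auditor not vetted for this audit scope")
        output = self._model(test_input)     # the model runs behind the interface
        self.access_log.append((auditor_id, test_input))
        return output

# Example with a stand-in "model"
gateway = MediatedModelAccess(proprietary_model=lambda x: x * 2,
                              approved_auditors={"auditor-001"})
print(gateway.run_audit_case("auditor-001", 21))  # 42
```

The design choice mirrors the argument above: proprietary information stays behind the interface, while auditors still get privileged, logged and scope-limited access to the system’s behavior.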
AI-facilitated fraud and market manipulation
Much of the focus on AI-facilitated fraud has concerned the use of AI voice cloning to deceive people into investing in high-risk financial products. Acting Comptroller of the Currency Michael Hsu recently warned of a “potential explosion” of AI-enabled financial fraud, particularly by fraudsters who target the elderly and “vulnerable communities.”
Historically, most investors in CFTC-regulated contracts have been institutional rather than retail investors: for example, farmer cooperatives advised by commodity specialists rather than individual farmers. Sophisticated investors could avoid AI-enabled fraud by using AI for fraud detection, a use case identified in the subcommittee report.
However, pending legislation would expand the CFTC’s authorities to include the retail investors who are prominent in trading “digital assets,” i.e., cryptocurrencies such as Bitcoin. The U.S. House of Representatives passed legislation in May to make the CFTC the primary regulator of digital asset derivatives contracts and to severely restrict the SEC’s authority over cryptocurrencies. If that bill and a Senate counterpart became law, trading in digital assets would increase, and with it, investigations of digital asset fraud. The CFTC already uses its current authority to take enforcement action in cryptocurrency cash markets. The director of CFTC enforcement recently said that half of all CFTC enforcement actions concerned digital assets: “For example, in 2022, we charged Mirror Trading and other defendants for a fraud in which the defendants stole over $1.7 billion in digital assets from its victims, which included over 23,000 victims in the U.S.” He added that since 2022, fraudster techniques have become more sophisticated.
IATP wrote to the CFTC, “In theory AI models should improve current data surveillance technologies to detect fraud and market manipulation if they are trained on an array of data that includes a registrant’s historical trading data, currently used algorithms, the specifications of self-certified contracts, information about the underlying assets of those contracts, and rulebooks of self-regulatory organizations and of the Commission.” However, for the theoretical potential of AI to detect fraud and prevent market manipulation of a contract to become reality, the CFTC needs to understand how exchanges are using AI to detect fraud and to enhance the data surveillance capabilities of position accountability systems. Furthermore, exchanges must explain at what points in the AI-directed position accountability system there is a Human in the Loop to make decisions about when and how to intervene to prevent or at least diminish market manipulation.
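A stylized sketch of this surveillance idea follows: a scoring function flags orders that depart sharply from a registrant’s historical trading pattern and escalates them to a human compliance reviewer, the Human in the Loop, rather than acting autonomously. The simple z-score rule stands in for a trained AI model and is illustrative only; the threshold and data are invented.

```python
# Hypothetical surveillance sketch: flag anomalous orders and escalate
# them to a human reviewer instead of acting autonomously.

import statistics

def anomaly_score(order_size: float, historical_sizes: list[float]) -> float:
    """How far the order departs from the historical pattern, in standard deviations."""
    mean = statistics.mean(historical_sizes)
    stdev = statistics.pstdev(historical_sizes) or 1.0
    return abs(order_size - mean) / stdev

def surveil(order_size: float, historical_sizes: list[float],
            threshold: float = 3.0) -> str:
    score = anomaly_score(order_size, historical_sizes)
    # Escalate to a human compliance reviewer rather than auto-blocking.
    return "ESCALATE_TO_HUMAN_REVIEW" if score > threshold else "PASS"

history = [100, 110, 95, 105, 102, 98, 101, 99]
print(surveil(order_size=500, historical_sizes=history))  # ESCALATE_TO_HUMAN_REVIEW
```

In a real position accountability system, the escalation step is where exchanges would need to explain, as argued above, when and how a human decides to intervene.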
If AI is not environmentally sustainable, how should government agencies respond?
The last item of the CFTC RFI was “Staff welcomes any relevant comments, including on related topics that may not be specifically mentioned but that a commenter believes should be considered.” We responded that the increased energy and water use in the processing of AI data is unsustainable, and we recommended that the CFTC be part of an interagency working group to make AI more sustainable and to study the triaging of AI uses if the technology cannot be made sustainable. We cited a Microsoft researcher who decided to go public with her findings in an article published in Nature, a widely read scientific journal. Kate Crawford characterized AI energy and water use as the “elephant in the room” that few AI product developers have acknowledged. Based on her article, we wrote to the CFTC staff, “The rate of unsustainability is suggested by water use in Microsoft and Google data centers in West Des Moines, Iowa. For example, a local citizen’s lawsuit revealed, ‘As Google and Microsoft prepared their Bard and Bing large language models, both had major spikes in water use — increases of 20% and 34%, respectively, in one year, according to the companies’ environmental reports.’”
We noted that mainstream media, e.g., The New York Times, had reported on AI’s exorbitant energy use. OpenAI CEO Sam Altman is bankrolling a nuclear power startup that claims it will be able to meet all AI data center energy needs sustainably. Even if this AI energy plan proves feasible over the long timeline of nuclear power development, the problem of AI water use, mainly to cool server farms, is a clear and present danger. IATP pointed to the remarkable New York Times series, “Uncharted Waters,” which documents how U.S. aquifers are being depleted faster than they recharge, as a harbinger of a crisis that will be exacerbated by current agricultural and industrial water use patterns, deteriorating water infrastructure and climate change.
The CFTC is not an environmental regulatory agency with authorities over water and energy use. However, given the multi-billion-dollar financial services investment in AI noted above, we believe that both AI product developers and the financial services industry expect a high rate of return on AI investment. It would be imprudent for both the industry and the CFTC to ignore this “elephant in the room” by developing regulations and/or voluntary guidance to industry about the uses and risks of AI while excluding sustainability risks. The CFTC advisory subcommittee on AI chose not to include any mention of AI sustainability issues in its report. The CFTC Commissioners should authorize staff to participate in an interagency working group on AI sustainability.
Conclusion
IATP’s most direct concern about AI is the extent to which it will autonomously direct the trading algorithms used to discover agricultural futures prices. These prices are benchmarks both for Free on Board export prices, which include the price of the commodity, shipping, logistics and insurance, and for forward contracting by farmers and ranchers of their grain and livestock production, which often provides their operating capital and living expenses until the production is sold. However, understanding the impact of AI on any one category of agricultural contracts requires more systemic knowledge of how the contracts are traded, how that trading is regulated or not, and how that trading is held accountable, including through enforcement actions. Assuming that AI safety and sustainability problems can be resolved, some role for AI in the present and future of commodity markets is all but inevitable. What that role will be depends on how and for what purposes humans manage AI.