The SEC Sets Its Sights on AI

The U.S. Securities and Exchange Commission (SEC) is extending its reach beyond BTC and altcoins to the AI market. This surprising development came to light when a former Chairman of the Commission discussed the latest regulatory initiatives involving cryptocurrencies.
The International Organization of Securities Commissions (IOSCO) recently issued a series of policy recommendations aimed at the regulation of crypto and digital markets. The proposals address a range of issues including conflicts of interest, cross-border cooperation, engagement with retail customers, and the operations of crypto firms.

In a recent interview, former SEC Chairman Jay Clayton, currently serving as non-executive chairman of the investment firm Apollo and a board member of American Express, offered his thoughts on the new IOSCO document. Clayton suggested that crypto platforms operating outside U.S. jurisdiction could pose a financial risk to American citizens. While he views crypto as a technology that can spawn innovative and enhanced products, he also perceives it as inadequately regulated, with industry players not doing enough to address the sector's problems.

Cryptocurrencies have emerged outside of traditional financial systems, instantly making a global impact. Clayton acknowledges the necessity for cross-border regulation but is wary of the complexities involved in its implementation. This perspective was not particularly groundbreaking; however, Clayton's primary insight concerned an entirely different domain.

AI's potential to manipulate the stock market

In an unexpected shift, Clayton turned the conversation to a recent hoax involving a fabricated AI image of a fire at the Pentagon. The fake image briefly became a top news story. Its ripple effect on the S&P 500 was minor, causing a fluctuation of only about 0.3% that quickly corrected once people realized the news was a sham.

Nevertheless, this episode starkly illustrated how AI could be used to manipulate public opinion and, consequently, influence the stock market by affecting security prices. Building on this, Clayton referenced the creation of a counterfeit account disseminating false information about FDA approval for certain drugs as an example of potential misinformation that could significantly sway the stock prices of relevant pharmaceutical companies and related industries.
"Should this be on the radar of the SEC? It is, I am certain it should be," Clayton proffered rhetorically, answering his own question.
He firmly believes that this issue should be a priority for the Commission. However, the first step is acknowledging the potential for AI-enabled market manipulation, followed by the application of the SEC's familiar arsenal of deterrents and prohibitions.
Jay Clayton, former SEC Chief. Source: YouTube
Moreover, Clayton raised another issue. About twenty years ago, the Commission effectively served as a kind of 'watchdog' guarding entry to the stock market: companies first faced SEC officials who determined the veracity of the information they supplied. This arrangement wasn't popular with everyone, but Clayton proudly pointed to the elevated standards of data verification and decision-making of that era.

Nowadays, however, information is instantly and ubiquitously disseminated, posing a novel risk. Under these new conditions, verifying the truthfulness of data becomes a daunting task. When it comes to a specific company, one could visit its official website and sift through all available materials. Yet, when it's about macroeconomic events or significant news influencing the market, the situation turns more intricate. Clayton suggests that the regulatory community should collectively ponder how to tackle this predicament.

Nonetheless, Jay Clayton firmly believes that problems tied to technology should be resolved by those who created them. If the activities of innovators have given rise to a new risk, then it's up to the developers to suggest how to mitigate it. To motivate them into action, a mixture of self-interest and regulation is required. Clayton himself postulates that any document generated by AI should bear an electronic 'watermark', signifying that it was created or processed 'non-humanly'. Such a simple marker could substantially alter the perception of a document or video, leading to more scrutinizing attitudes. Despite this, Clayton doesn't seem overly optimistic.
"…seems like a losing battle though, I mean," Clayton reflects ruefully towards the end of the interview.