The Onset of the Age of Deregulatory Flavour

Michaela Murphy
Feb 18, 2025

Can you really cut red tape and simultaneously lobby for trustworthy & safe AI?
Last week, at the AI Action Summit held in Paris, United States Vice President JD Vance offered insight into the Trump Administration's view of the AI industry and its technology. Vance was quick to affirm the Administration's commitment to bolstering the growth of the burgeoning industry and maintaining American AI technology's position as the gold standard. He declared that the United States will stand against 'excessive' restrictions on AI technology and encouraged other countries to adopt a similarly deregulatory flavour in an effort to drive unbridled AI innovation.
Overregulation does slow innovation, and Vance's stance is understandable. At the summit, French President Emmanuel Macron agreed, favouring cutting red tape while also advocating for trustworthy AI. Additionally, the United Kingdom joined the United States in declining to sign the Statement on Inclusive and Sustainable Artificial Intelligence.
However, determining what qualifies as adequate regulation versus excessive regulation is a complex challenge. It raises imperative questions about who is ultimately responsible for risk: should developers bear the responsibility, do consumers share the blame, and where does the government fit in? As the landscape of AI capabilities and limitations continues to evolve, innovation may be fraught with unknown risks. Should guardrails depend on the type of artificial intelligence being developed? Would it truly be possible to cut the red tape and simultaneously lobby for safe AI?
Regulatory Precedent: From Aviation to Crypto
Regulation of emerging technologies such as artificial intelligence shares similarities with the historical regulation of industries like commercial aviation and cryptocurrency. Any profit-maximizing entity must simultaneously choose its products' price, safety, and other quality attributes, and these sectors demonstrate the complexity of balancing innovation and growth with the need for oversight and safety.
The commercial aviation industry in the United States underwent significant change throughout the 1980s, driven by the Airline Deregulation Act of 1978.
Prior to deregulation, the government controlled fares and routes while ensuring airlines met safety and service standards. The removal of these controls sparked increased competition among incumbents, leading to lower ticket prices and greater accessibility for the general public. At the same time, however, deregulation incentivized cost-cutting measures, to the potential detriment of service quality and safety, particularly among smaller carriers and charter flights. While most major airlines maintained safety standards, it was reported that the resources airlines devoted to aircraft maintenance fell 30% during deregulation's first six years, and a survey of commercial airline pilots revealed that almost half believed their companies deferred maintenance for an excessive period of time. Although significant crashes did not increase after deregulation, the public remained widely skeptical about air safety. Heightened concern among the public and media over reduced safety margins put pressure on the Federal Aviation Administration (FAA) to enforce its safety regulations more diligently, illustrating the public's need for governmental oversight to trust the industry.
While this is only a quick overview of the aviation industry, the deregulatory shift of the 1980s highlighted the tension between competition, accessibility, and safety, demonstrating the continued challenge of balancing safety and opportunity across industries, including AI.
Aviation is a relatively balanced example of the impacts of deregulation, where certain safety controls consistently remained in place because human lives were at risk. In contrast, the cryptocurrency industry, where a lack of regulation has led to significant volatility, offers far more striking examples of the effects of a hands-off regulatory approach: more than two years on, the waves of the FTX fiasco still remain significant.
Although the mission of a decentralized financial system promises privacy, autonomy, and low transaction costs, repeated collapses and episodes of misuse hamper the industry, continually revealing the risks of a lightly regulated system. On one hand, the lack of comprehensive regulation allows for the innovation and decentralization that are central to the borderless nature of blockchain technology; on the other, this lack of oversight invites fraud, manipulation, and instability. The collapse of FTX serves as a stark reminder of the risks inherent in an unregulated market, where transparency and consumer protection often take a backseat to profit maximization.
While the aviation and cryptocurrency industries offer varied insight into the role of governmental regulation, the lessons from these historical regulatory precedents will likely influence how governments approach the regulation of artificial intelligence.
How the AI Industry is Different & The Challenges That Will Certainly Arise
Unlike most other industries, artificial intelligence is not a confined, static field. This highlights both the extraordinary potential of AI and the reason it will demand especially effective regulation.
AI integrations have already seeped into virtually every industry. For regulation to be effective, policies must evolve alongside these industries and account for the unique nuances and risks of AI's integration across these various sectors.
In healthcare, the risk is, and will continue to be, incredibly high due to the direct implications for human care, health, and privacy, as applications already range from medical research to diagnostic aids, robot-assisted surgery, and patient data analysis. Any errors or biases in algorithms could result in catastrophic consequences, even death, requiring regulation in healthcare to be particularly stringent, with high levels of oversight, accountability, and transparency. Outside of healthcare, the stakes may be lower, but industries such as retail and entertainment still face significant risks related to privacy violations, manipulation, and bias.
Given the diverse applications of AI, a one-size-fits-all approach to regulation may not be adequate, revealing a need for highly specific parameters for individual industries. A siloed approach to policy addresses the unique risks and requirements of each sector, creating specific guidelines for accountability as well as industry-specific tests to ensure compliance. By dividing AI regulation into these silos, regulators can focus on the critical needs of each industry, creating a more effective and efficient regulatory environment. This approach, however, requires that regulators and policymakers either be industry experts themselves or draw on guidance from industry and AI advisors.
Beyond government regulation alone, should industry leaders, and even individual consumers, also be relied on to take more responsibility for the ethical use and development of AI technologies? With a deregulatory flavour, it is difficult to predict whether companies and developers would take proactive steps to establish ethical parameters or self-regulatory frameworks, which could range from self-prescribed transparency guidelines to internal auditing systems.
The Big (Unanswered) Question
As models are developed to surpass human cognitive abilities, how will humans be able to effectively regulate or fact-check a model that possesses superior cognitive capabilities?
We don’t know, and we’re not sure anyone truly does.
Our Final Thoughts
The need for regulatory oversight must be balanced with the demand for ethical, transparent technology use, as the supply of innovation can only grow in response to the demand for the product.
While regulations may initially slow an industry's growth, they ultimately lead to long-term accessibility and build consumer trust. As seen in the aviation industry, bolstered FAA oversight following deregulation increased consumer confidence and boosted the industry's long-term growth. We believe the AI industry will likely follow the same pattern: regulation will build trust in novel tools, leading to more widespread integration across all industries in the coming years.
If we want customers and industries to implement AI at every level, we need to make sure it is trusted, making detailed and comprehensive regulations and guidelines non-negotiable.
This article has raised numerous questions about the future of AI that nobody yet has the answers to.
At Ergodic, we believe that as AI technologies develop at a massive pace, safety and ethics need to be paramount. As we develop our platform, we are prioritising transparency to build AI that is safer and more accessible for all users.
Want to be a part of the mission? We are searching for more talented individuals to join the team! We are currently on the lookout for Full Stack Engineers and AI Engineers to work hybrid in our Munich or London offices. If you are interested in learning more about our hiring process and team, check out our website or message us for further information.
Resources:
Fung, B. (2024). EU approves landmark AI law, leapfrogging US to regulate critical but worrying new technology. [online] CNN. Available at: https://edition.cnn.com/2024/03/13/tech/ai-european-union/index.html [Accessed 14 Feb. 2025].
CBS News (2024). European Commission accuses Elon Musk’s X platform of violating EU Digital Services Act. [online] Available at: https://www.cbsnews.com/news/elon-musk-x-twitter-eu-says-violating-digital-services-act-blue-checkmarks/ [Accessed 14 Feb. 2025].
Madhani, A. and Adamson, T. (2025). JD Vance rails against ‘excessive’ AI regulation in a rebuke to Europe at the Paris AI summit. [online] AP News. Available at: https://apnews.com/article/paris-ai-summit-vance-1d7826affdcdb76c580c0558af8d68d2 [Accessed 14 Feb. 2025].
Hallam, M. (2025). US, UK decline to sign Paris AI summit declaration. [online] dw.com. Available at: https://www.dw.com/en/us-uk-decline-to-sign-paris-ai-summit-declaration/a-71575536 [Accessed 14 Feb. 2025].
Caulcutt, C. (2025). JD Vance warns Europe to go easy on tech regulation in major AI speech. [online] POLITICO. Available at: https://www.politico.eu/article/vp-jd-vance-calls-europe-row-back-tech-regulation-ai-action-summit/ [Accessed 14 Feb. 2025].
Goetz, A. and Dempsey, P. (1989). Airline Deregulation Ten Years After: Something Foul in the Air. Journal of Air Law and Commerce, [online] 54. Available at: https://scholar.smu.edu/cgi/viewcontent.cgi?referer=&httpsredir=1&article=1865&context=jalc.
Barnett, A. and Higgins, M.K. (1989). Airline Safety: The Last Decade. Management Science, [online] 35(1), pp.1–21. doi:https://doi.org/10.1287/mnsc.35.1.1.
Ness, L. (2025). How Crypto Regulation Could Change Under Trump and the New SEC. [online] Bloomberg Law. Available at: https://news.bloomberglaw.com/us-law-week/how-crypto-regulation-could-change-under-trump-and-the-new-sec [Accessed 18 Feb. 2025].
World Economic Forum. (2024). Cryptocurrency regulations are changing across the globe. Here’s what you need to know. [online] Available at: https://www.weforum.org/stories/2024/05/global-cryptocurrency-regulations-changing/ [Accessed 18 Feb. 2025].
World Economic Forum (2023). Pathways to the Regulation of Crypto-Assets: A Global Approach. [online] Available at: https://www3.weforum.org/docs/WEF_Pathways_to_the_Regulation_of_Crypto_Assets_2023.pdf.
Nabilou, H. (2019). How to regulate bitcoin? Decentralized regulation for a decentralized cryptocurrency. International Journal of Law and Information Technology, [online] 27(3), pp.266–291. doi:https://doi.org/10.1093/ijlit/eaz008.