April 2026: The Month of AI

06 May 2026 | Business Law | Information Technology and IT Law

April 2026 proved to be an eventful month in the world of AI. Within a single month, South Africa’s Draft National Artificial Intelligence (AI) Policy was withdrawn shortly after publication, OpenAI discontinued Sora due to unsustainable operating costs, and the highly publicised OpenAI trial involving Elon Musk commenced, with Musk serving as a key witness. Together, these events underscored growing concerns about the pace, cost, and governance of AI development.
 
Since the release of ChatGPT on 30 November 2022, AI has become embedded in everyday life. OpenAI, positioned at the forefront of this expansion, has developed systems widely used for tasks such as meeting transcription, document editing, calculations, and advice. However, this rapid expansion raises serious questions about risk, sustainability, and whether current regulatory frameworks are adequate to protect consumers and the broader environment in which AI operates.
 
What Is AI?
 
Despite widespread use, AI remains poorly understood by most users. Definitions often rely on examples such as ChatGPT or Copilot rather than an understanding of the technology itself. The EU AI Act defines an AI system as a machine-based system that operates with varying levels of autonomy and generates outputs such as predictions or decisions from the input it receives. In simple terms, AI processes data to produce outputs, often autonomously. Artificial general intelligence (AGI) goes further, referring to systems capable of matching or surpassing human intelligence.
 
This growing disconnect between reliance on AI and understanding of it raises a critical question: at what point does rapid AI development become harmful rather than beneficial? One cannot deny the risks and potential harm it poses. Some of the key issues and concerns identified include, inter alia:
  • whether we are in an AI Bubble and whether this bubble will soon burst; 
  • the impact of AI on the labour market and job displacement; 
  • the effect of AI development on the environment;
  • whether further development of AGI should be encouraged or halted; and 
  • whether current regulatory frameworks, both nationally and internationally, are equipped to offer sufficient protection and regulation. 
 
The AI Bubble 
 
The AI Bubble refers to an economic bubble: a period during which asset prices rise rapidly, exceeding their real value. Once the excitement in the market wears off and returns fail to materialise, the bubble “bursts” and the market crashes.
 
For example, OpenAI’s valuation rose from $29 billion in 2023 to $100 billion in 2025, and by March 2026 had reached $852 billion. This exponential growth has not, however, necessarily translated into successful economic results. On 26 April 2026, OpenAI discontinued Sora 2 (OpenAI’s video generation model) after it failed to generate sufficient revenue to justify its operating costs. According to the Wall Street Journal, Sora 2 incurred operating costs of approximately $15 million while losing $1 million per day.
 
Many commentators viewed this as the first sign of the AI bubble bursting. This concern is reinforced by statements from OpenAI’s CEO, Sam Altman, who acknowledged the possibility of an AI bubble in September 2025, and by the “GenAI Divide: State of AI in Business 2025” study conducted by the Massachusetts Institute of Technology (MIT), which found that 95% of generative AI pilot programmes fail to generate a measurable financial return.
 
It is impossible to predict when the AI Bubble will burst, but even if it does, AI will not simply disappear: consumers have become too dependent on the technology for that. However, with AI development and infrastructure accounting for approximately 45% of total S&P 500 capitalisation, a burst would have a devastating effect on the stock market.
 
The Elon Musk/Sam Altman (OpenAI) Trial 
 
Elon Musk is reportedly seeking $150 billion in damages from OpenAI and Microsoft, alleging that OpenAI abandoned its original nonprofit mission to develop safe, open-source AGI. Musk claims charitable funds were used for commercial gain and ultimately seeks to force OpenAI to revert to its original nonprofit roots, remove its leadership, and potentially redirect $130 billion in wrongful gains to a charity.
This trial could prove detrimental to the future of AI, since Musk has been advocating for a pause in AGI development since 2023.
 
From Musk’s three-day testimony, it became apparent that the world’s richest man, and arguably the leader of AI and AGI development, believes AI poses serious safety risks.
 
One of Musk’s attorneys, Steven Molo, argued that expert testimony on the risk of AI ending humankind should be admissible, and was quoted as saying, “Extinction risk is a real problem. This is a real risk. We could all die.” Judge Yvonne Gonzalez Rogers responded, “I think it's ironic that your client, despite these risks, is creating a company that's in the exact same space”, adding that “This is not a trial on the safety risks of artificial intelligence”.
 
While the safety risks of AI are not at trial, perhaps they should be. 
 
Inherent Risks of AI and AGI Development
 
Firstly, it comes as no surprise that AI development leaves a significant carbon footprint, as both its development and operation require vast amounts of energy and resources. In 2025, the International Energy Agency (IEA) reported that although AI has the potential to transform the energy sector, sustainable development remains crucial. The IEA noted that the most significant concerns presently posed by AI relate to electricity and mineral consumption; however, it also acknowledged that fears about AI accelerating climate change may be overstated. The report does not provide a definitive answer as to whether AI development will ultimately harm the environment or serve as a solution to climate change. As with many questions surrounding AI, the answer appears to be that “time will tell”.
 
Secondly, given AI’s ability to generate content at an increasingly rapid pace, and with AGI predicted to surpass human intelligence in the coming years, it is necessary to recognise AI’s impact on the labour market. In 2023, the World Economic Forum reported that AI was expected to be adopted by 75% of companies, with approximately 25% of organisations believing that its implementation would result in job losses. On 4 April 2026, Nexford University published a study citing a report by investment bank Goldman Sachs, which predicted that AI could replace the equivalent of 300 million full-time jobs by 2030. MIT and Boston University further reported that AI would replace approximately 2 million manufacturing workers by the end of 2026, while the McKinsey Global Institute estimated that by 2030, at least 14% of the global workforce would need to change careers as their roles became automated by AI. The contrast between the predictions made in 2023 and the realities emerging in 2026 is deeply concerning.
 
 
Finally, when considering AGI and predictions by Forbes that it may surpass human intelligence between the late 2020s and 2040, alongside Sam Altman’s prediction that this milestone could be reached by 2030, it is worth questioning whether such development should be encouraged or halted. Should a technology that remains largely misunderstood be developed to a point where it surpasses human intelligence? The answer depends largely on the safety mechanisms and regulatory frameworks in place to ensure that humanity does not “create a monster”. While AGI surpassing human intelligence may once have belonged to the realm of science fiction, recent developments and the unprecedented pace of innovation suggest that it is no longer a question of if, but when. Unfortunately, AI is advancing far more rapidly than regulatory frameworks can be implemented to provide adequate protection, with South Africa serving as a prime example.
 
South Africa’s Draft National AI Policy
 
South Africa’s withdrawn Draft National Artificial Intelligence (AI) Policy (the Policy), intended to position South Africa as the continental AI leader, was published on 10 April 2026 but was withdrawn shortly thereafter due to fictitious AI-generated references. Ironically, its withdrawal may have prevented the implementation of a deeply flawed framework.
 
The Policy was described as a “work in progress”; that label may, however, be a far more suitable description than the Minister intended. Its vision centred on economic growth, innovation, and service delivery, particularly in education, healthcare, and agriculture, but failed to provide clear mechanisms for oversight, consumer protection, or accountability.
 
The proposed policy vision was intended to reflect South Africa’s aspiration to utilise AI to “catalyse socio-economic transformation, drive innovation, and contribute to a more inclusive, sustainable, and competitive national and continental future.” At first glance, it is not entirely clear what the Policy practically entailed. It is also questionable whether the document could truly be characterised as a “policy” at all, as it read less like a regulatory framework and more like a developmental strategy.
 
The Policy sought to adopt a “Futures Triangle Approach” intended to shape South Africa’s AI landscape by integrating the “Push of the Present,” the “Pull of the Future,” and the “Weight of the Past.”
 
Concerningly, the majority of the Policy’s triangle approach was directed towards how AI can be used to drive economic growth, improve service delivery, and stimulate innovation. The only component that meaningfully referenced regulatory frameworks was limited to how existing frameworks should be adapted to accommodate AI development. In a growing digital environment, where cybersecurity risks are increasing, one cannot help but question why the Policy did not prioritise consumer protection, AI regulation, or effective supervision.
 
Ultimately, the Policy amounted to little more than normative idealism. It proposed the establishment of an AI Ethics Board, aimed to adopt an “Ethics First Approach,” and aspired to position South Africa at the forefront of ethical AI development aligned with constitutional values. However, it offered no real or meaningful protection and failed to meaningfully address safety concerns. Rather than pursuing a futuristic ideal, the revised Policy should arguably focus on formulating and implementing a robust AI regulatory foundation that can serve as a building block toward achieving its longer-term objectives.
 
The Policy identified six strategic pillars. This section provided some hope, particularly the Responsible Governance pillar, which listed cybersecurity measures, risk management, data protection, governance and data-handling practices, transparency, and the facilitation of cross-border data transfers as policy interventions. These interventions, however, amounted to little more than vague, aspirational definitions. It remains troubling that, within an 86-page draft policy, only approximately six pages attempted to address issues of safety and security.

When viewed comparatively against other jurisdictions, such as California, this concern becomes even more prominent. California has been widely recognised as a national leader in responsible and ethical AI in the United States, a distinction unsurprising given its position as the home of Silicon Valley.
 
Comparative Perspective: California
 
In an environment where AI technologies are already deeply entrenched and widely utilised, protection and regulation remain central to California’s legislative framework. From AB249, regulating healthcare information, to SB243, which restricts chatbots from engaging in discussions related to suicidal ideation or sexually explicit content, and the proposed Leading Ethical AI Development for Kids Act, California's AI legislation consistently prioritises user protection.
 
For example, the California AI Transparency Act (AB853) requires AI developers to ensure that AI-generated content includes provenance data accessible through AI detection tools. (Provenance data refers to metadata embedded within digital content that verifies its origin, modification history, authenticity, and chain of custody.) Amendments to the Act will further require large online platforms to allow users to access provenance data associated with uploaded content once the amendment comes into effect on 1 January 2027.
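The Act itself does not prescribe a technical format, but the concept of provenance data described above can be sketched as a simple hash-chained record. The sketch below is purely illustrative: the field names and functions (`make_provenance`, `record_edit`, `is_unmodified`) are assumptions chosen for demonstration and are not drawn from AB853 or any industry standard.

```python
import hashlib

def content_hash(data: bytes) -> str:
    """SHA-256 digest used to fingerprint the content at each step."""
    return hashlib.sha256(data).hexdigest()

def make_provenance(content: bytes, generator: str) -> dict:
    """Create an illustrative provenance record for a piece of AI-generated content."""
    return {
        "generator": generator,                  # origin: which system produced the content
        "ai_generated": True,                    # the disclosure the Act is concerned with
        "original_hash": content_hash(content),  # fingerprint of the content as generated
        "history": [],                           # chain of custody: one entry per modification
    }

def record_edit(record: dict, new_content: bytes, editor: str) -> dict:
    """Append a modification entry, extending the chain of custody."""
    record["history"].append({
        "editor": editor,
        "hash": content_hash(new_content),
    })
    return record

def is_unmodified(record: dict, content: bytes) -> bool:
    """Authenticity check: does the content match the most recent fingerprint?"""
    latest = record["history"][-1]["hash"] if record["history"] else record["original_hash"]
    return latest == content_hash(content)
```

An AI detection tool of the kind the Act envisages would read such a record out of the content’s metadata and compare fingerprints, flagging content whose current state no longer matches its recorded history.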
 
Additionally, AB316 extends liability into civil litigation, enabling plaintiffs to institute claims for harm caused by AI systems developed, modified, or used by a defendant. Importantly, a defendant may not raise the defence that the AI system acted autonomously in causing the harm.
 
The Failures of South Africa’s Draft Policy
 
By contrast, South Africa appears to favour an innovation-first approach while postponing meaningful safeguards. This raises serious concerns about whether constitutional rights, consumer protections, and worker security are adequately protected.
 
Another critical shortcoming of the Policy relates to its treatment of employment and job displacement. While the Policy emphasised AI’s potential to create job opportunities across South Africa, it failed to address the growing reality that AI is fundamentally reshaping multiple industries, including legal services, finance, customer support, and content creation. Entry-level professional roles are particularly vulnerable to automation, yet the Policy did not meaningfully engage with the risks of large-scale job displacement or its broader implications for the labour market.
 
Ultimately, the withdrawn Policy reflected a genuine desire to position South Africa as a meaningful participant in the global AI landscape. However, good intentions alone are not sufficient. In its current form, the Policy speaks more to aspiration than to protection, leaving critical questions of accountability, safety, and social impact unresolved. As AI continues to move from theory into everyday use, the absence of clear and enforceable safeguards risks placing individuals, workers, and consumers in a vulnerable position. If AI is to serve as a tool for inclusive growth rather than unintended harm, regulatory clarity and protection should form the foundation upon which innovation is built, not an afterthought once the damage has already been done.

ARTICLE BY

Candidate Attorney

© Cox Yeats Attorneys 2026