To this day, guest rooms at Asilomar State Beach and Conference Grounds still have no telephones or televisions, and Wi-Fi connectivity has only recently become available. This preserves the rustic charm of the nearly 70-acre complex and its 30 historic buildings, located on the scenic shores of Pacific Grove on California's Monterey Peninsula.
Despite its timeless charm, Asilomar experienced a remarkable convergence of the world’s most forward-thinking minds in 2017. More than 100 scholars from law, economics, ethics and philosophy gathered and formulated some principles around artificial intelligence (AI).
The resulting 23 Asilomar AI Principles are considered one of the earliest and most coherent AI governance frameworks to date.
Even if Asilomar doesn’t ring a bell, you certainly haven’t missed the open letter signed by thousands of AI experts, including SpaceX CEO Elon Musk, calling for a six-month pause on training AI systems more powerful than GPT-4.
The letter opened with one of Asilomar’s principles: “Advanced AI could represent a profound change in the history of life on Earth and must be planned and managed with appropriate care and resources.”
Many speculated that the letter was prompted by OpenAI’s chatbot ChatGPT, which has taken the digital landscape by storm. Since its launch last November, the chatbot has sparked a fierce arms race among tech giants to unveil similar tools.
Still, beneath the relentless pursuit lie deep ethical and societal concerns about technologies that can effortlessly produce creations that eerily mimic human work.
Until this open letter, many countries had adopted a laissez-faire approach to the commercial development of artificial intelligence.
Within a day of the letter’s publication, Italy became the first Western country to ban the use of OpenAI’s generative AI chatbot ChatGPT over privacy concerns, though the ban was eventually lifted on April 28 after OpenAI met the regulator’s requirements.
The reactions of the world
That same week, US President Joe Biden met with his science and technology advisers to discuss the “risks and opportunities” of artificial intelligence. He urged tech companies to ensure the safety of their creations before releasing them to the public.
A month later, on May 4, the Biden-Harris administration announced a series of actions designed to advance responsible AI innovations that protect the rights and safety of Americans. These measures included a draft policy guide for the development, procurement and use of AI systems.
On the same day, the UK government announced it would launch a thorough review of the impact of AI on consumers, businesses and the economy, and whether new controls are needed.
On May 11, key EU lawmakers reached a consensus on the urgent need for stricter regulations on generative artificial intelligence. They also advocated a blanket ban on facial surveillance and will vote on a draft EU AI Act in June.
Regulators in China had already unveiled draft measures in April to govern generative AI services, requiring companies to submit comprehensive safety assessments before offering their products to the public. At the same time, the authorities want to maintain a supportive environment that encourages leading enterprises to build AI models that can challenge the likes of ChatGPT.
In general, most countries are either looking for investment or planning regulations. However, because the boundaries of possibility are continually shifting, no expert can predict with certainty the exact sequence of developments and consequences that generative AI will bring.
In fact, this lack of predictability and preparation is precisely what makes regulating and governing AI so challenging.
What about Singapore?
Last year, the Info-Communications Media Development Authority (IMDA) and the Personal Data Protection Commission (PDPC) introduced AI Verify, an AI governance testing framework and toolkit that encourages industries to embrace new transparency in AI implementation.
AI Verify comes in the form of a minimum viable product (MVP), allowing enterprises to demonstrate the capabilities of their AI systems while taking robust measures to mitigate risk.
With an open invitation to participate in the international pilot project, Singapore hopes to strengthen the existing framework by incorporating valuable insights gathered from diverse perspectives and actively contribute to the establishment of international standards.
Unlike other countries, Singapore recognizes trust as the foundation on which the rise of AI must be built. The way to increase trust is to communicate the multifaceted dimensions of AI applications with maximum clarity and effectiveness to all stakeholders—regulators, businesses, auditors, consumers, and the public at large.
Singapore acknowledges the potential for cultural and geographic variation to shape the interpretation and implementation of AI’s overarching ethical principles, resulting in a fragmented framework for AI governance.
As such, building trustworthy AI and having a framework for determining AI trustworthiness is considered optimal at this stage of development.
Why should we regulate AI?
The warnings of voices like Elon Musk, Bill Gates and even Stephen Hawking reflect a common belief: if we fail to take a proactive approach to the coexistence of machines and humanity, we may unwittingly sow the seeds of our own destruction.
Our society is already deeply affected by the explosion of algorithms that skew opinions, widen inequality, or send currencies plummeting. As AI rapidly matures and regulators struggle to keep pace, we risk lacking the right rules for decision-making, leaving us vulnerable.
As such, some experts refused to sign the open letter, believing it understated the true magnitude of the situation and asked for too little change. Their logic is that sufficiently “intelligent” AI will not be confined to computer systems for long.
Given OpenAI’s stated intention to create an artificial intelligence system that aligns with human values and intentions, it may be only a matter of time before AI becomes “conscious”: a powerful cognitive system capable of making autonomous decisions indistinguishable from those of humans.
By then, any regulatory framework based on current AI systems will be obsolete.
Of course, even setting aside these speculative views, which sound like echoes of science fiction, other experts argue that the field of artificial intelligence remains in its infancy despite its remarkable rise.
They warned that strict regulations could stifle the very innovation that drives us forward. Instead, the potential of AI needs to be better understood before regulations can be considered.
Furthermore, AI permeates many domains, each with unique nuances and considerations, so a one-size-fits-all governance framework makes little sense.
How should we regulate AI?
The puzzle that surrounds artificial intelligence is inherently unique. Unlike traditional engineering systems, where designers can confidently predict functionality and outcomes, AI operates in the realm of uncertainty.
This fundamental difference calls for a new approach to regulatory frameworks that confronts the complexities of AI’s failures and its propensity to exceed intended boundaries. Accordingly, the focus has always revolved around controlling the applications of technology.
At this point, the idea of stricter controls on the use of generative artificial intelligence may seem puzzling, as its integration into our daily lives becomes more and more widespread. As such, the collective gaze shifts to the vital concept of transparency.
Experts want to develop standards for how AI should be built, tested and operated, subjecting systems to a greater degree of external oversight and fostering an environment of accountability and trust. Others argue that the most powerful versions of AI should be restricted to limited use.
In testimony before Congress on May 16, OpenAI CEO Sam Altman proposed a licensing regime to ensure AI models maintain strict security standards and undergo thorough vetting.
However, this could lead to a situation where only a few companies equipped with the necessary resources and capabilities can effectively navigate the complex regulatory landscape and dictate how AI should be implemented.
Tech and business personality Bernard Marr emphasized the importance of not weaponizing artificial intelligence. In addition, he stressed the urgent need for a “switch”: a fail-safe mechanism that allows for human intervention should an AI behave unpredictably.
Equally important is for producers to unanimously adopt internationally mandated ethical guidelines that serve as a moral compass for their creations.
As attractive as these solutions sound, the question of who has the authority to implement them and determine liability in the event of accidents involving artificial intelligence or humans remains unanswered.
Among the tempting solutions and conflicting perspectives, one undeniable fact remains: the future of AI regulation is at a critical juncture, awaiting decisive action by humans, just as we eagerly await how AI will shape us.
Featured image credit: IEEE