The CEO of OpenAI, Sam Altman, took the stand this week to speak directly to US lawmakers about the risks artificial intelligence (AI) poses and why heavier regulations are needed amidst ethical, legal and national security concerns.
Speaking before the Senate Judiciary subcommittee, Altman, the man behind ChatGPT, noted that AI systems have become incredibly powerful, but that as the technology advances, more people are growing anxious about how it could change the way we live.
He noted that he and his team at OpenAI share those concerns. To mitigate the risks, Altman proposed forming a US-based or global agency that would license AI systems, ensure compliance with safety standards and have the authority to revoke licenses.
Don't miss: HK privacy watchdog vows to monitor data privacy risks related to ChatGPT
Altman was speaking to Congress just months after the release of the original ChatGPT in November last year. At that point, the platform made waves for its ability to answer questions and write like a human. Since then, AI has exploded onto the scene, with many companies putting out their own AI systems and major tech companies such as Microsoft and Google starting an AI war as they attempt to restructure their search functions.
What initially began as simple concerns over students using ChatGPT to copy homework has expanded into serious fears over copyright, privacy, the spread of fake news and more, including the possibility that the technology could displace jobs.
While US lawmakers have yet to reveal new AI rules, they have been taking the conversation more seriously, bringing tech CEOs to the White House in recent weeks to hear about the industry's concerns and discuss how to better protect citizens.
When asked about his worst fears when it comes to AI, Altman said the industry could cause "significant harm to the world" and that if the technology goes wrong, "it can go quite wrong".
He then suggested that lawmakers impose safeguards against AI models that have the capacity to "self-replicate" and "self-exfiltrate". The aim, essentially, is to prevent AI systems from being able to manipulate humans, a core concern of many in the AI space, including OpenAI itself, as the technology gets smarter.
As part of his efforts to raise concerns about the future of AI, Altman will be embarking on a worldwide tour this month, visiting national capitals and major cities across six continents to speak with politicians and educate them on the dangers of the technology and why it must be regulated.
The news comes as AI continues to ramp up globally. Recently, OpenAI launched GPT-4, the latest iteration of its AI model and the company's most significant upgrade to date. While OpenAI had frequently updated its original system up to GPT-3.5, the new release is more capable and reliable, according to a blog post on the company's website.
"GPT-4 is more reliable, creative, and able to handle much more nuanced instructions than GPT-3.5," the company said. It added that the system can also handle a significantly higher volume of input text, which means it can now analyse more complex topics and respond more accurately. Users can also present GPT-4 with images, graphs and infographics as prompts, further increasing its usefulness.
One of the main issues ChatGPT initially faced was that it would produce non-factual responses to prompts and lacked creativity. According to OpenAI, GPT-4 is 40% more likely to produce factual responses than GPT-3.5. It can also write in character and adopt different personas.