OpenAI recently launched a new model, GPT-4o, which generated a frenzy of attention. The company has now announced that it is forming a Safety and Security Committee as it begins training a new model intended to succeed the current GPT-4 system.
The committee is led by OpenAI board members, including CEO Sam Altman. In its blog post, OpenAI said that directors Bret Taylor, Adam D’Angelo, and Nicole Seligman would also serve actively on the committee.
According to OpenAI’s statement, the committee will be responsible for making recommendations to the board on “essential safety and security matters.”
Purpose of the committee
According to OpenAI’s blog, “A first task of the Safety and Security Committee will be to evaluate and further develop OpenAI’s processes and safeguards over the next 90 days.”
During this period, the committee will evaluate the company’s existing safety practices and suggest strategies for further development. At the end of the 90-day evaluation, the Safety and Security Committee will share its recommendations with the full board. After the board’s review, OpenAI will publicly share an update.
Why did the need for the committee arise?
This precautionary measure follows the departure of several senior members of OpenAI’s staff shortly after the model’s launch. Former Chief Scientist Ilya Sutskever and Jan Leike, who co-led OpenAI’s Superalignment team, which worked on keeping advanced AI systems aligned with human intent, both left the company in May.
The Superalignment team has since been disbanded, with some of its members joining other groups within the company.
Leike, for his part, said he resigned over concerns about the safety and governance of the company’s AI systems, arguing that these matters were not being prioritized. He later joined Anthropic, an OpenAI competitor.
“I believe much more of our bandwidth should be spent getting ready for the next generations of models, on security, monitoring, preparedness, safety, adversarial robustness, (super)alignment, confidentiality, societal impact, and related topics,” Leike said.
Pressure on OpenAI
As competition in the AI industry intensifies, rival companies are trying to gain an edge over the current market leader.
Anthropic, for instance, has begun highlighting the safety and security of its systems in the wake of OpenAI’s controversies. “Our research teams investigate the safety, inner workings, and societal impact of AI models — so that artificial intelligence has a positive impact on society as it becomes increasingly advanced and capable,” says the company’s mission statement.
xAI, the company founded by Elon Musk, has also announced that it raised $6 billion in funding, giving it new resources to challenge OpenAI, with which Musk has already had legal disputes in the past.
Amid all this, OpenAI faces governance and safety issues that it needs to address before the competition becomes too tough to handle. Forming the Safety and Security Committee is its answer to the surrounding controversies and its pledge of safety to its customers.