Sam Altman, co-founder and CEO of OpenAI, speaks during Italian Tech Week 2024 at OGR Officine Grandi Riparazioni on September 25, 2024 in Turin, Italy.
Stefano Guidi | Getty Images News | Getty Images
OpenAI on Wednesday introduced a new "safety evaluations hub," a webpage where it will publicly display artificial intelligence models' safety results and how they perform on tests for hallucinations, jailbreaks and harmful content, such as "hateful content or illicit advice."
OpenAI said it uses the safety evaluations "internally as one part of our decision making about model safety and deployment," and that while system cards release safety test results when a model is launched, OpenAI will from now on "share metrics on an ongoing basis."
"We will update the hub periodically as part of our ongoing company-wide effort to communicate more proactively about safety," OpenAI wrote on the webpage, adding that the safety evaluations hub does not reflect the full extent of its safety efforts and metrics and instead shows a "snapshot."
The news comes after CNBC reported earlier Wednesday that the tech companies leading the way in artificial intelligence are prioritizing products over research, according to industry experts who are sounding the alarm about safety.
CNBC reached out to OpenAI and the other AI labs mentioned in the story well before it was published.
OpenAI recently sparked some online controversy for not running certain safety evaluations on the final version of its o1 AI model.
In a recent interview with CNBC, Johannes Heidecke, OpenAI's head of safety systems, said the company ran its preparedness evaluations on near-final versions of the o1 model, and that the minor variations made to the model after those tests would not have contributed to significant jumps in its intelligence or reasoning and thus would not require additional evaluations.
Still, Heidecke acknowledged in the interview that OpenAI missed an opportunity to explain the difference more clearly.
Meta, which was also mentioned in CNBC's reporting on AI safety and research, likewise made an announcement Wednesday.
The company's Fundamental AI Research team released new joint research with the Rothschild Foundation Hospital and an open dataset for advancing molecular discovery.
"By making our research widely available, we aim to provide easy access for the AI community and help foster an open ecosystem that accelerates progress, drives innovation, and benefits society as a whole, including our national research labs," Meta wrote in a blog post announcing the research developments.