Status: Not passed (Vetoed by Governor on September 29, 2024)
The Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, or SB 1047, is a 2024 California bill intended to "mitigate the risk of catastrophic harms from AI models so advanced that they are not yet known to exist".[1] Specifically, the bill would apply to models that cost more than $100 million to train and were trained using more than 10^26 integer or floating-point operations.[2] SB 1047 would apply to all AI companies doing business in California, regardless of where the company is located.[3] The bill creates protections for whistleblowers[4] and requires developers to perform risk assessments of their models prior to release, under the supervision of the Government Operations Agency. It would also establish CalCompute, a University of California public cloud computing cluster for startups, researchers, and community groups.
Background
The rapid increase in capabilities of AI systems in the 2020s, including the release of ChatGPT in November 2022, caused some researchers and members of the public to express concern about existential risks associated with increasingly powerful AI systems.[5][6] The plausibility of this threat is widely debated.[7] AI regulation is also sometimes advocated for in order to prevent bias and privacy violations.[6] However, it has been criticized as possibly leading to regulatory capture by large AI companies like OpenAI, in which regulation advances the interest of larger companies at the expense of smaller competition and the public in general.[6]
In May 2023, hundreds of tech executives and AI researchers[8] signed a statement on AI risk of extinction, which read "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." It received signatures from the two most-cited AI researchers,[9][10][11] Geoffrey Hinton and Yoshua Bengio, along with industry figures such as OpenAI CEO Sam Altman, Google DeepMind CEO Demis Hassabis, and Anthropic CEO Dario Amodei.[12][13] Many other experts thought that existential concerns were overblown and unrealistic, as well as a distraction from the near-term harms of AI, such as discriminatory automated decision making.[14] Notably, Sam Altman asked Congress for AI regulation at a hearing the same month.[6][15] Several technology companies have made voluntary commitments to conduct safety testing, for example at the AI Safety Summit and AI Seoul Summit.[16][17]
The bill was originally drafted by Dan Hendrycks, co-founder of the Center for AI Safety, who has previously argued that evolutionary pressures on AI could lead to "a pathway towards being supplanted as the Earth's dominant species."[24][25] The center issued a statement in May 2023 co-signed by Elon Musk and hundreds of other business leaders stating that "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."[26]
State Senator Wiener first proposed AI legislation for California through an intent bill called SB 294, the Safety in Artificial Intelligence Act, in September 2023.[27][28][29] SB 1047 was introduced by Wiener on February 7, 2024.[30][31]
On May 21, 2024, SB 1047 passed the Senate 32-1.[32][33] The bill was significantly amended by Wiener on August 15, 2024 in response to industry advice.[34] Amendments included adding clarifications and removing both the proposed "Frontier Model Division" and the penalty of perjury.[35][36]
On August 28, the bill passed the State Assembly 48-16. Then, due to the amendments, the bill was once again voted on by the Senate, passing 30-9.[37][38]
On September 29, Governor Gavin Newsom vetoed the bill. It is considered unlikely that the legislature will override the governor's veto with a two-thirds vote from both houses.[39]
Provisions
Prior to model training, developers of covered models and derivatives are required to submit a certification, subject to auditing, of mitigation of "reasonable" risk of "critical harms" of the covered model and its derivatives, including post-training modifications. Safeguards to reduce risk include the ability to shut down the model,[4] which has been variously described as a "kill switch"[40] and "circuit breaker".[41] Whistleblowing provisions protect employees who report safety problems and incidents.[4]
SB 1047 would also create a public cloud computing cluster called CalCompute, associated with the University of California, to support startups, researchers, and community groups that lack large-scale computing resources.[35]
Covered models
SB 1047 covers AI models with training compute over 10^26 integer or floating-point operations and a cost of over $100 million.[2][42] If a covered model is fine-tuned using more than $10 million, the resulting model is also covered.[36]
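The coverage thresholds above can be sketched as a simple decision function. This is a hypothetical illustration only, not an official or legal test; the function names are invented here, and the values follow the bill text as summarized above:

```python
# Hypothetical sketch of SB 1047's covered-model thresholds, as summarized
# in this article. Not legal advice; names and structure are illustrative.

TRAIN_OPS_THRESHOLD = 1e26            # integer or floating-point operations
TRAIN_COST_THRESHOLD = 100_000_000    # USD spent on training
FINETUNE_COST_THRESHOLD = 10_000_000  # USD spent on fine-tuning


def is_covered_model(train_ops: float, train_cost_usd: float) -> bool:
    """A model is covered if it exceeds both the compute and cost thresholds."""
    return train_ops > TRAIN_OPS_THRESHOLD and train_cost_usd > TRAIN_COST_THRESHOLD


def is_covered_derivative(base_is_covered: bool, finetune_cost_usd: float) -> bool:
    """A fine-tune of a covered model is itself covered if fine-tuning
    cost exceeds the $10 million threshold."""
    return base_is_covered and finetune_cost_usd > FINETUNE_COST_THRESHOLD
```

Note that both conditions must hold for base-model coverage: a model trained with more than 10^26 operations but costing under $100 million would not be covered under this reading.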
Critical harms
Critical harms are defined with respect to four categories:[1][43]
Creation or use of a chemical, biological, radiological, or nuclear weapon causing mass casualties
Cyberattacks on critical infrastructure causing mass casualties or at least $500 million of damage
Autonomous crimes causing mass casualties or at least $500 million of damage
Other harms of comparable severity
Compliance and supervision
SB 1047 would require developers, beginning January 1, 2026, to annually retain a third-party auditor to perform an independent audit of compliance with the bill's requirements.[35] The Government Operations Agency would review the results of safety tests and incidents, and issue guidance, standards, and best practices.[35] The bill creates a Board of Frontier Models, composed of nine members, to supervise the application of the bill by the Government Operations Agency.[42][needs update]
Reception
Debate
Proponents of the bill describe its provisions as simple and narrowly focused, with Sen. Scott Wiener describing it as a "light-touch, basic safety bill".[45] This has been disputed by critics of the bill, who describe the bill's language as vague and criticize it as consolidating power in the largest AI companies at the expense of smaller ones.[45] Proponents responded that the bill only applies to models trained using more than 10^26 FLOPS and costing over $100 million, or fine-tuned at a cost of more than $10 million, and that the threshold could be increased if needed.[46]
The penalty of perjury was a subject of debate and was eventually removed through an amendment. The scope of the "kill switch" requirement was also reduced, following concerns from open-source developers. There was also contention over the term "reasonable assurance", which the amendment replaced with "reasonable care". Critics argued that the "reasonable care" standard imposes an excessive burden by requiring confidence that models could not be used to cause catastrophic harm, while proponents argued that "reasonable care" does not imply certainty and is a well-established legal standard that already applies to AI developers under existing law.[46]
After the bill was amended, Anthropic CEO Dario Amodei wrote that "the new SB 1047 is substantially improved, to the point where we believe its benefits likely outweigh its costs. However, we are not certain of this, and there are still some aspects of the bill which seem concerning or ambiguous to us."[85] Amodei also commented, "There were some companies talking about moving operations out of California. In fact, the bill applies to doing business in California or deploying models in California... Anything about 'Oh, we're moving our headquarters out of California...' That's just theater. That's just negotiating leverage. It bears no relationship to the actual content of the bill."[86] xAI CEO Elon Musk wrote, "I think California should probably pass the SB 1047 AI safety bill. For over 20 years, I have been an advocate for AI regulation, just as we regulate any product/technology that is a potential risk to the public."[87]
On September 9, 2024, at least 113 current and former employees of AI companies OpenAI, Google DeepMind, Anthropic, Meta, and xAI signed a letter to Governor Newsom in support of SB 1047.[88][89]
Open source developers
Critics expressed concerns about the liability the bill would impose on open-source developers who use or improve existing freely available models. Yann LeCun, the Chief AI Officer of Meta, has suggested the bill would kill open source AI models.[63] As of July 2024, there were concerns in the open-source community that, due to the threat of legal liability, companies like Meta might choose not to make models (for example, Llama) freely available.[90][91] The AI Alliance, among other open-source organizations, has written in opposition to the bill.[69] In contrast, Creative Commons co-founder Lawrence Lessig has written that SB 1047 would make open source AI models safer and more popular with developers, since both harm and liability for that harm are less likely.[41]
Public opinion polls
The Artificial Intelligence Policy Institute, a pro-regulation AI think tank,[92][93] ran three polls of California respondents on whether they supported or opposed SB 1047.
A YouGov poll commissioned by the Economic Security Project found that 78% of registered voters across the United States supported SB 1047, and 80% thought that Governor Newsom should sign the bill.[100]
A David Binder Research poll commissioned by the Center for AI Safety, a group focused on mitigating societal-scale risk and a sponsor of the bill, found that 77% of Californians support a proposal to require companies to test AI models for safety risks, and 86% consider it an important priority for California to develop AI safety regulations.[101][102][103][104]
On the other hand, the California Chamber of Commerce conducted its own poll, which found that 28% of respondents supported the bill, 46% opposed, and 26% were neutral. However, the framing of the question has been described as "badly biased".[e][93]
Governor's veto
Governor Gavin Newsom vetoed SB 1047 on September 29, 2024, citing concerns over the bill's regulatory framework targeting only large AI models based on their computational size, while not taking into account whether the models are deployed in high-risk environments.[105][106] Newsom emphasized that this approach could create a false sense of security, overlooking smaller models that might present equally significant risks.[105][107] He acknowledged the need for AI safety protocols[105][108] but stressed the importance of adaptability in regulation as AI technology continues to evolve rapidly.[105][109]
Governor Newsom also committed to working with technology experts, federal partners, and academic institutions, including Stanford University's Human-Centered AI (HAI) Institute, led by Dr. Fei-Fei Li. He announced plans to collaborate with these entities to advance responsible AI development, aiming to protect the public while fostering innovation.[105][110]
^Question asked by the Artificial Intelligence Policy Institute's poll in August 2024: "Some policy makers are proposing a law in California, Senate Bill 1047, which would require that companies that develop advanced AI conduct safety tests and create liability for AI model developers if their models cause catastrophic harm and they did not take appropriate precautions."[93]
^Question asked in the California Chamber of Commerce's poll: "Lawmakers in Sacramento have proposed a new state law—SB 1047—that would create a new California state regulatory agency to determine how AI models can be developed. This new law would require small startup companies to potentially pay tens of millions of dollars in fines if they don’t implement orders from state bureaucrats. Some say burdensome regulations like SB 1047 would potentially lead companies to move out of state or out of the country, taking investment and jobs away from California. Given everything you just read, do you support or oppose a proposal like SB 1047?"[93]
^"Adaptability is critical as we race to regulate a technology still in its infancy. This will require a delicate balance. While well-intentioned, SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data. Instead, the bill applies stringent standards to even the most basic functions - so long as a large system deploys it. I do not believe this is the best approach to protecting the public from real threats posed by the technology."
^"By focusing only on the most expensive and large-scale models, SB 1047 establishes a regulatory framework that could give the public a false sense of security about controlling this fast-moving technology. Smaller, specialized models may emerge as equally or even more dangerous than the models targeted by SB 1047 - at the potential expense of curtailing the very innovation that fuels advancement in favor of the public good."
^"Let me be clear - I agree with the author - we cannot afford to wait for a major catastrophe to occur before taking action to protect the public. California will not abandon its responsibility. Safety protocols must be adopted. Proactive guardrails should be implemented, and severe consequences for bad actors must be clear and enforceable."