brianjking 3 days ago
  • dang 2 days ago

    Thanks! The WSJ article was the submitted URL, but I've changed it to the governor's statement now. Interested readers will probably want to look at both.

    • guywithahat 2 days ago

      Why would you change it? The WSJ article already contains his reasoning, plus a lot of other interesting content from major players

      • dredmorbius 2 days ago

        WSJ's paywall, particularly against Archive Today, has been hardening markedly of late.

        I'm repeatedly seeing A.T. links posted which read "you have been blocked" or similar. Sometimes those resolve later, sometimes not.

        HN's policy is that paywalls are permissible where workarounds exist. WSJ is getting close to disabling those workarounds.

        The NYTimes similarly tightened its paywall policy a few years ago. A consequence was that its prevalence on the HN front page fell to ~25% of its prior value, with no change in HN policies (as reported by dang), just member voting patterns.

        Given the difficulty in encouraging people to read articles before posting shallow-take comments, this is a significant problem for HN, and the increased reliance of media sites on paywalls is taking its toll on general discussion.

        There are literally hundreds of news sites, and many thousands of individual sites, submitted to HN and making the front page annually. It would cost a fortune, not merely a small one, to subscribe to all of these.

        • tzs 2 days ago

          > There are literally hundreds of news sites, and many thousands of individual sites, submitted to HN and making the front page annually. It would cost a fortune, not merely a small one, to subscribe to all of these.

          No one has the time to read all of them, so it doesn't really matter if it would also be unaffordable.

          • dredmorbius 2 days ago

            The result would be to either concentrate the discussion (to the few sites which are widely subscribed), fragment the discussion (among those who subscribe to a specific submitted site), or in all likelihood, both.

            HN takes pride in being both a single community and discussing a wide range of sources. Wider adoption of subscriber paywalls online would be inimical to both aspects.

      • dang 2 days ago

        Someone emailed and suggested it. I looked at the pdf and it seemed to be more substantive than the usual political statement, so I sort of trusted that it would be better. Also it's not paywalled.

        https://news.ycombinator.com/item?id=41690454 remains pinned to the top of the thread, so people have the opportunity to read both.

        (Actually we usually prefer the best third-party article to press releases, but nothing's perfectly consistent.)

        • ericjmorey 2 days ago

          I usually prefer the press release and only read a third party report if I'm looking for more context. So thanks for making it easy to find the primary source of the news here.

        • freedomben 2 days ago

          FWIW I think you made the right call here. The PDF is substantive, primary, and has no paywall. The pinned WSJ article at the top gives the best of both worlds.

          • dang a day ago

            It'll be better when we implement proper URL aggregation, which, you never know, may happen

worstspotgain 3 days ago

Excellent move by Newsom. We have a very active legislature, but it's been extremely bandwagon-y in recent years. I support much of Wiener's agenda, particularly his housing policy, but this bill was way off the mark.

It was basically a torpedo against open models. Market leaders like OpenAI and Anthropic weren't really worried about it, or about open models in general. Its supporters were the also-rans like Musk [1] trying to empty out the bottom of the pack, as well as those who are against any AI they cannot control, such as antagonists of the West and wary copyright holders.

[1] https://techcrunch.com/2024/08/26/elon-musk-unexpectedly-off...

  • dragonwriter 3 days ago

    > Excellent move by Newsom. [...] It was basically a torpedo against open models.

    He vetoed it in part because the thresholds at which it applies at all are well beyond any current models, and he wants something that will impose greater restrictions on more, and much smaller/lower-training-compute, models that this bill would have left alone entirely.

    > Market leaders like OpenAI and Anthropic weren't really worried about it, or about open models in general.

    OpenAI (along with Google and Meta) led the institutional opposition to the bill, Anthropic was a major advocate for it.

    • worstspotgain 3 days ago

      > He vetoed it in part because the thresholds at which it applies at all are well beyond any current models, and he wants something that will impose greater restrictions on more, and much smaller/lower-training-compute, models that this bill would have left alone entirely.

      Well, we'll see what passes again and when. By then there'll be more kittens out of the bag too.

      > Anthropic was a major advocate for it.

      I don't know about being a major advocate, the last I read was "cautious support" [1]. Perhaps Anthropic sees Llama as a bigger competitor of theirs than I do, but it could also just be PR.

      [1] https://thejournal.com/articles/2024/08/26/anthropic-offers-...

      • FeepingCreature 2 days ago

        > I don't know about being a major advocate, the last I read was "cautious support" [1]. Perhaps Anthropic sees Llama as a bigger competitor of theirs than I do, but it could also just be PR.

        This seems a curious dichotomy. Can we at least consider the possibility that they mean the words they say or is that off the table?

        • worstspotgain 2 days ago

          Just two spitballing conjectures, not meant to be a dichotomy. If you have first-hand knowledge please contribute.

          • ForOldHack 19 hours ago

            One of the most apt descriptions: one company trying to raise alarms about another company in the same business. I have been following this since IBM released the $67,000 "Intellect" business intelligence product, and Lotus HAL.

    • arduanika 2 days ago

      He's a politician, and his stated reason for the veto is not necessarily his real reason for the veto.

      • jodleif 2 days ago

        Makes perfect sense, since he's elected based on his public positions.

        • ants_everywhere 2 days ago

          This is the ideal, but it's often false in meaningful ways. In several US elections, for example, we've seen audio leaked of politicians promising policies to their donors that would be embarrassing if widely publicly known by the electorate.

          This suggests that politicians and donors sometimes collude to deliberately misrepresent their views to the public in order to secure election.

          • mistrial9 2 days ago

            worse.. a first-hand quote from inside a California Senate committee hearing chamber.. "Don't speak it if you can nod, and don't nod if you can wink" .. translated, that means that in a contentious situation with others in the room, if allies can signal without speaking the words out loud, that is better.. and if the signal can be hidden, better still.

            • duped 2 days ago

              This is an old saying in politics and you're misinterpreting it - it's not about signaling to allies, it's about avoiding being held to any particular positions.

              You're also missing the first half, "don't write if you can speak, don't speak if you can nod, and don't nod if you can wink." The point is not to commit to anything if you don't have to.

    • inferiorhuman 2 days ago

      Newsom vetoed the bill as a nod to his donors, plain and simple. Same reason he just signed a bill allowing specific customers at a single venue to be served alcohol later than 2 AM. Same reason he carved out a minimum wage exemption for Panera. Same reason he signed a bill to carve out a junk fee exemption specifically for restaurants.

      He's just planning for a post-governor career.

      • burningChrome 2 days ago

        >> He's just planning for a post-governor career.

        After this year, many Democrats are as well, which is why Harris had such a hard time finding a VP and took Walz, who was like the last kid you pick for your dodgeball team.

        The presidential race in 2028 for the Democrats is going to have one of the deepest benches for talent I've seen in a long time. Newsom and Shapiro will be at the top of the list for sure.

        But I agree, Newsom has been making some decisions lately that seem to indicate he's trying to clean up his image and look more "moderate" for the coming election cycles.

        • taurath 2 days ago

          > Newsom and Shapiro will be at the top of the list for sure.

          Neither has genuine appeal. Shapiro is a really, really poor speaker and has few credentials except as a moderate. Newsom is the definition of coastal elite. Both have spoken; neither has been heard.

          • taurath a day ago

            Oops, I had mixed up Shapiro with Gretchen Whitmer (another of these "rising stars"). Shapiro just does white Obama.

            • ForOldHack 19 hours ago

              Wow. Now that was funny. Thanks.

        • worstspotgain 2 days ago

          2032, nice try though. Besides, we're not going to need to vote anymore otherwise, remember? The 2028 Democratic Primary would be a pro-forma affair between Barron and Mayor McCheese.

      • ldbooth 2 days ago

        Same reason the governor-appointed public utility commission has allowed PG&E to raise rates 4 times in a single year without legitimate oversight. Yeah, unfortunately all roads point to his donors with this smooth talker, cost of living be damned.

        • iluvcommunism 2 days ago

          On the east coast we don’t need the government to control electricity prices. And our electricity is cheaper. Go figure.

          • ldbooth 2 days ago

            Companies like Dominion Energy, Duke Energy, and Consolidated Edison are regulated by state utility commissions, same as in California.

          • ForOldHack 19 hours ago

            Why did New Jersey get more toxic waste dumps than California has lawyers? They got first choice. We will trade Newsom for a failed reactor. Please.

      • wseqyrku 2 days ago

        When you're a politician and have a business hobby

    • raverbashing 2 days ago

      Anthropic was championing a lot of FUD in the AI area

  • SonOfLilit 3 days ago

    Why would Google, Microsoft, and OpenAI oppose a torpedo against open models? Aren't they positioned to benefit the most?

    • benreesman 3 days ago

      Some laws are just bad. When the API-mediated/closed-weights companies agree with the open-weight/operator-aligned community that a law is bad, it’s probably got to be pretty awful. That said, though my mind might be playing tricks on me, I seem to recall the big labs being in favor at one time.

      There are a number of related threads linked, but I’ll personally highlight Jeremy Howard’s open letter as IMHO the best-argued case against SB 1047.

      https://www.answer.ai/posts/2024-04-29-sb1047.html

      • stego-tech 2 days ago

        > When the API-mediated/closed-weights companies agree with the open-weight/operator-aligned community that a law is bad, it’s probably got to be pretty awful.

        I’d be careful with that cognitive bias, because obviously companies dumping poison into water sources are going to be opposed to laws that would prohibit them from dumping poison into water sources.

        Always consider the broader narrative in addition to the specific narratives of the players involved. Personally, I’m on the side of the fence that’s grumpy Newsom vetoed it, because it stymies the larger discussion about regulating AI in general (not just LLMs). It falls into the classic trap of “any law that isn’t absolutely perfect and doesn’t address all known and unknown problems is automatically bad,” which is often used to kill desperately needed reforms or regulations, regardless of industry. Instead of being able to build on the momentum of passed legislation and improve on it elsewhere, we now have to deal with the giant cudgel from the industry and its supporters: “even CA vetoed it, so why are you still fighting against it?”

        • benreesman 2 days ago

          I’d advise anyone to conduct their career under the assumption that all data is public.

          • stego-tech 2 days ago

            As a wise SysAdmin once told me when I was struggling with my tone in writing: “assume what you’re writing will be read aloud in Court someday.”

            • seanhunter 2 days ago

              As someone who was once asked under oath "What did you mean when you sent the email describing the meeting as a 'complete clusterfuck'?" I can attest to the wisdom of those words.

            • bigmattystyles 2 days ago

              It’s probably a Google search away, but if I’ve typed it in Slack/Outlook/whatever, but not sent it because I then thought better of it, did the app still record it somewhere? I’m almost sure it has to be, and I would like to apologize in advance to my senior leadership…

              • stego-tech 2 days ago

                That depends greatly on your tooling, your company, as well as the skills and ethics of your Enterprise IT team.

                Generally speaking, it’s in our and the company’s best interests to keep as little data as possible, for two big reasons: legal discovery and cost. Unless we’re explicitly required to retain historical records, it’s a legal and fiscal risk to keep excess data around.

                That said, there are situations where your input is captured and stored regardless of whether it’s sent. As you said, whether it does or not is often a simple search away.

      • SonOfLilit 3 days ago

        > The definition of “covered model” within the bill is extremely broad, potentially encompassing a wide range of open-source models that pose minimal risk.

        What is this wide range of >$100mm open-source models he's thinking of? And who are the impacted small businesses that would be scared to train them (at a cost of >$100mm) without paying for legal counsel?

      • shiroiushi 2 days ago

        It's too bad companies big and small didn't come together and successfully oppose the passage of the DMCA.

        • worstspotgain 2 days ago

          There were a lot of questionable Federal laws that made it through in the 90s, such as DOMA [1], PRWORA [2], IIRIRA [3], and perhaps the most maddening to me, DSHEA [4].

          [1] https://en.wikipedia.org/wiki/Defense_of_Marriage_Act

          [2] https://en.wikipedia.org/wiki/Personal_Responsibility_and_Wo...

          [3] https://en.wikipedia.org/wiki/Illegal_Immigration_Reform_and...

          [4] https://en.wikipedia.org/wiki/Dietary_Supplement_Health_and_...

          • shiroiushi 2 days ago

            "Questionable" is a very charitable term to use here, especially for the DSHEA which basically just legalizes snake-oil scams.

        • fshbbdssbbgdd 2 days ago

          My understanding is that tech was politically weaker back then. Although there were some big tech companies, they didn’t have as much of a lobbying operation.

        • wrs 2 days ago

          As I remember it, among other reasons, tech companies really wanted “multimedia” (at the time, that meant DVDs) to migrate to PCs (this was called the “living room PC”) and studios weren’t about to allow that without legal protection.

        • RockRobotRock 2 days ago

          No snark, but what's wrong with the DMCA? As I understand it, they took the idea that it's infeasible for a site to take full liability for user-generated copyright infringement (so they granted them safe harbor), but that they will be liable if they ignore takedown notices.

          • worstspotgain 2 days ago

            Among other things, quoth the EFF:

            "Thanks to fair use, you have a legal right to use copyrighted material without permission or payment. But thanks to Section 1201, you do not have the right to break any digital locks that might prevent you from engaging in that fair use. And this, in turn, has had a host of unintended consequences, such as impeding the right to repair."

            https://www.eff.org/deeplinks/2020/07/what-really-does-and-d...

            • RockRobotRock 2 days ago

              forgot about the anti-circumvention clause ;(((

              that's the worst

          • shiroiushi 2 days ago

            The biggest problem with it, AFAICT, is that it allows anyone who claims to hold a copyright to maliciously take down material they don't like by filing a DMCA notice. Companies receiving these notices have to follow a process to reinstate material that was falsely claimed, so many times they don't bother. There's no mechanism to punish companies that abuse this.

    • CSMastermind 3 days ago

      The bill included language that required the creators of models to have various "safety" features that would severely restrict their development. It required audits and other regulatory hurdles to build the models at all.

      • llamaimperative 3 days ago

        If you spent $100MM+ on training.

        • gdiamos 3 days ago

          Advanced technology will drop the cost of training.

          The flop targets in that bill would be like saying “640KB of memory is all we will ever need” and outlawing anything more.

          Imagine what other countries would have done to us if we allowed a monopoly like that on memory in 1980.

          • theptip 2 days ago

            If the danger is coming from the amount of compute invested, then cost of compute is irrelevant.

            A much better objection to static FLOP thresholds is that as data quality and algorithms increase, you can do a lot more with fewer FLOPs / parameters.

            But let’s be clear about these objections - they are saying that FLOP thresholds are going to miss some harms, not that they are too strict.

            The rest is arguing about exactly where the FLOP thresholds should be. (And of course these limits can be revised as we learn more.)

          • llamaimperative 3 days ago

            No, there are two thresholds and BOTH must be met.

            One of those is $100MM in training costs.

            The other is measured in FLOPs but is already larger than GPT-4, so the “think of the small guys!” argument doesn’t make much sense.

            • gdiamos 2 days ago

              Cost as a perf metric is meaningless and the history of computer benchmarks has repeatedly proven this point.

                There is a reason why we report time (speedup) in SPEC instead of $$

              The price you pay depends on who you are and who is giving it to you.

              • llamaimperative 2 days ago

                That’s why there are two thresholds.

                • Vetch 2 days ago

                  Cost per FLOP continues to drop on an exponential trend (and which bit-width FLOPs do we mean?). Leaving aside more effective training methodologies, and how they muddy everything by allowing superior-to-GPT-4 performance using fewer training FLOPs, it also means one of the thresholds soon will not make sense.

                  With the other threshold, it creates a disincentive for models like llama-405B+, in effect enshrining an even wider gap between open and closed.

                  • pas 2 days ago

                    Why? Llama is not generated by some guy in a shed.

                    And even if it were, if said guy has such amount of compute, then it's time to use some of it to describe the model's safety profile.

                    If it makes sense for Meta to release models, it would have made sense even with the requirement. (After all the whole point of the proposed regulation is to get some better sense of those closed models.)

                    • llamaimperative 2 days ago

                      Also the bill was amended NOT to extend liability to derivative models that the training company doesn’t have effective control over.

            • gdiamos 2 days ago

              Tell that to me when we get to llama 15

              • llamaimperative 2 days ago

                What?

                • gdiamos 2 days ago

                  “But the big guys are struggling getting past 100KB, so ‘think of the small guys’ doesn’t make sense when the limit is 640KB.”

                  How do people on a computer technology forum ignore the 10,000x improvement in computers over 30 years due to advances in computer technology?

                  I could understand why politicians don’t get it.

                  I should think that computer systems companies would be up in arms over SB 1047 in the same way they would be if the government was thinking of putting a cap on hard drives bigger than 1 TB.

                  It puts a cap on flops. Isn’t the biggest company in the world in the business of selling flops?

                  • llamaimperative 2 days ago

                    It would be crazy if the bill had a built-in mechanism to regularly reassess both the cost and FLOP thresholds… which it does.

                    In contrast to your sarcastic “understanding” about politicians’ stupidity, I can’t understand how tech people seem incapable or unwilling to actually read the legislation they have such strong opinions about.

                    • gdiamos 2 days ago

                      [flagged]

                      • lucubratory 2 days ago

                        It's troubling that you are saying things about the bill which are false, and then speculating on the motives of someone just pointing out that what you are saying is false.

                        • gdiamos 2 days ago

                          why not tell us?

                          Or point out what is actually false?

                          You rebutted the point that the flop limit will not actually limit anyone by saying that GPT-4 is out of reach of startups.

                          Is OpenAI a startup? Is Anthropic? Is Grok? Is Perplexity? Is SSI?

                          You ignored the counterpoints that advanced technology exponentially raises flop limits and changes costs.

                          You said that the flop limit can be raised over time. So startups shouldn’t worry.

                          You ignored the counterpoint that flop limits in export controls are explicitly designed to limit competition from other nations.

                          Flop limits not being a real limit is a ridiculous argument. The intent of a flop limit is to limit, no matter how you sugar coat it.

                          • Dylan16807 2 days ago

                            What's false is the idea that the limit is going to be a burden on small companies, because you can ignore the flop limit if you're spending less than a hundred million dollars. (Big companies, in contrast, can use a percent of their budget for compliance.)

                            Being able to ignore the flop limit makes basically everything else you've said irrelevant. But just to quickly go through: I don't want to argue about what a 'startup' is but they're not 'small guys'. Advanced tech can be compensated for, but also it doesn't change the fact that staying under $100 million keeps you excluded. Export controls have nothing to do with this discussion, and they involve a completely different kind of 'flop limit'.

                          • lucubratory 2 days ago

                            You have confused me with someone else.

                      • llamaimperative 2 days ago

                        I’m a person who’s interested in arguments based on reality. Frankly I don’t have a solid opinion on the bill in any particular direction.

                  • gdiamos 2 days ago

                    If your goal is to lift the limit, why put it in?

                    We periodically raise flop limits in export control law. The intention is still to limit China and Iran.

                    Would any computer industry accept a government mandated limit on perf?

                    Should NVIDIA accept a limit on flops?

                    Should Pure accept a limit on TBs?

                    Should Samsung accept a limit on HBM bandwidth?

                    Should Arista accept a limit on link bandwidth?

                    I don’t think that there is enough awareness that scaling laws tie intelligence to these HW metrics. Enforcing a cap on intelligence is the same thing as a cap on these metrics.

                    https://en.m.wikipedia.org/wiki/Neural_scaling_law

                    Has this legislation really thought through the implications of capping technology metrics, especially in a state where most of the GDP is driven by these metrics?

                    Clearly I’m biased because I am working on advancing these metrics. I’m doing it because I believe in the power of computing technology to improve the world (smartphones, self driving, automating data entry, biotech, scientific discovery, space, security, defense, etc, etc) as it has done historically. I also believe in the spirit of inventors and entrepreneurs to contribute and be rewarded for these advancements.

                    I would like to understand the biases of the supporters of this bill beyond a power grab by early movers.

                    Export control flop limits are designed to limit the access of technology to US allies.

                    I think it would be informative if the group of people trying to limit access of AI technology to themselves was brought into the light.

                    Who are they? Why do they think the people of the US and of CA should grant that power to them?

                    • llamaimperative 2 days ago

                      Wait sorry, are you under the impression that regulated entities get to “accept” which regulations society imposes on them? Big if true!

                      Your delusions and lack of nuance shown in this very thread are exactly why people want to regulate this field.

                      If developers of nuclear technology were making similar arguments, I bet they’d have attracted even more aggressive regulatory attention. Justifiably, too, since people who speak this way can't possibly be trusted to de-risk their own behavior effectively.

        • pj_mukh 2 days ago

          Or used a model someone open sourced after spending $100M+ on its training?

          Like, if I’m a startup reliant on open-source models, I realize I don’t need the liability and extra safety precautions, but I didn’t hear any guarantee that this wouldn’t turn off Meta from releasing their models to me if my business were in California.

          I never heard any clarification from the pro-bill groups about that.

          • llamaimperative 2 days ago

            The bill was amended for training companies to have no liability for derivative models they don’t have control over.

            There’s no new disincentive to open sourcing models produced by this bill, AFAICT.

      • wslh 3 days ago

        All that means that the barriers to entry for startups skyrocket.

        • SonOfLilit 2 days ago

          Startups that spend >$100mm on one training run...

          • wslh 2 days ago

            There are startups and startups; the ones that you read about in the media are just a fraction of the worldwide reality.

            • SonOfLilit 2 days ago

              You misread my intention, I think. I was pointing out that startups that don't raise enough to train a GPT-4 are excluded from the bill, which requires >$100mm in expenses for a model to be covered.

              • wslh 2 hours ago

                Fair enough. Hope all is well there.

    • worstspotgain 3 days ago

      If there was just one quasi-monopoly it would have probably supported the bill. As it is, the market leaders have the competition from each other to worry about. Getting rid of open models wouldn't let them raise their prices much.

      • SonOfLilit 3 days ago

        So if it's not them, who is the hidden commercial interest sponsoring an attack on open source models that cost >$100mm to train? Or does Wiener just genuinely hate megabudget open source? Or is it an accidental attack, aimed at something else? At what?

        • worstspotgain 3 days ago

          Like I said, supporters included wary copyright holders and bottom-market also-rans like Musk. If your model is barely holding up against Llama, what's the point of staying in?

          • SonOfLilit 3 days ago

            And two of the three godfathers of AI, and all of the AI notkilleveryoneism crowd.

            Actually, wait, if Grok is losing to GPT, why would Musk care about Llama more than Altman? Llama hurts his competitor...

            • ascorbic a day ago

              He can't compete with Anthropic and OpenAI because their models are much better. Llama is the competitor with the capability closest to Grok.

            • worstspotgain 3 days ago

              The market in my argument looks like OpenAI ~ Anthropic > Google >>> Meta (~ or maybe >) Musk/Alibaba. The top 3 aren't worried about the down-market stuff. You're free to disagree of course.

      • gdiamos 2 days ago

        Claude, SSI, Grok, GPT, Llama, …

        Should we crown one the king?

        Or perhaps it is better to let them compete?

        Perhaps advanced AI capability will motivate advanced AI safety capability?

        • fat_cantor 2 days ago

          It's an interesting thought that as AI advances, and becomes more capable of human destruction, programmers, bots and politicians will work together to create safety for a large quantity of humans

        • jnaz343 2 days ago

          You think they want to compete? None of them want to compete. They want to be a protected monopoly.

        • Maken 2 days ago

          What are the economic incentives for AI safety?

    • wrsh07 3 days ago

      I would note that Facebook and Google were opposed to e.g. GDPR, although it gave them a larger share of the pie.

      When framed like that: why be opposed, it hurts your competition? The answer is something like: it shrinks the pie or reduces the growth rate, and that's bad (for them and others)

      The economics of this bill aren't clear to me (how large of a fine would Google/Microsoft pay in expectation within the next ten years?), but they maybe also aren't clear to Google/Microsoft (and that alone could be a reason to oppose)

      Many of the AI safety crowd were very supportive, and I would recommend reading Zvi's writing on it if you want their take.

    • nisten 2 days ago

      Because it's a law that, as first written, intended to put open-source developers in jail.

    • mattmaroon 2 days ago

      First they came for the open models…

    • hn_throwaway_99 3 days ago

      Yeah, I think the argument that "this just hurts open models" makes no sense given the supporters/detractors of this bill.

      The thing that large companies care the most about in the legal realm is certainty. They're obviously going to be a big target of lawsuits regardless, so they want to know that legislation is clear as to the ways they can act - their biggest fear is that you get a good "emotional sob story" in front of a court with a sympathetic jury. It sounded like this legislation was so vague that it would attract a horde of lawyers looking for a way they can argue these big companies didn't take "reasonable" care.

      • SonOfLilit 3 days ago

        Sob stories are definitely not covered by the text of the bill. The "critical harm" clause (ctrl-f this comment section for a full quote) is all about nuclear weapons and massive hacks and explicitly excludes "just" someone dying or getting injured with very clear language.

    • rllearneratwork 2 days ago

      because it was a stupid law which would hurt AI innovation

  • pbreit 2 days ago

    Bills that could kill major new industries need to be reactive, if at all. This was a terrible bill. Thank you, Governor.

    • fwip 2 days ago

      If the new industry is inherently unsafe, it is better to be proactive.

      • pbreit 2 days ago

        Even if I agreed in general I would not be sure about this case.

        • fwip a day ago

          Yeah, this specific law sounds like not great law.

  • EasyMark 2 days ago

    Yes, they definitely need to slow their roll and sit back and listen to both sides of this instead of those who think AGI will happen in a year or two and the T-1000s are coming for them. I think LLMs have a bright future, especially as more hardware is built specifically for them. The market can fix most of the problems, and when it becomes evident we’re heading in the wrong direction or monopolies and abuses occur, that’s when the government needs to step in, not based on broad speculation from the fringe of either side.

  • sshconnection 2 days ago

    Same. I am generally very politically aligned with him (esp housing and transit), but this one ain’t it.

tbrownaw 3 days ago

https://legiscan.com/CA/text/SB1047/id/3019694

So this is the one that would make it illegal to provide open weights for models past a certain size, would make it illegal to sell enough compute power to train such a model without first verifying that your customer isn't going to train one and then ignore this law, and would mandate audit requirements to prove that your models won't help people cause disasters and can be turned off.

  • akira2501 3 days ago

    > and mandates audit requirements to prove that your models won't help people cause disasters

    Audits cannot prove anything and they offer no value when planning for the future. They're purely a retrospective tool that offers insights into potential risk factors.

    > and can be turned off.

    I really wish legislators would operate inside reality instead of a Star Trek episode.

    • whimsicalism 3 days ago

      This snide dismissiveness around “sci-fi” scenarios, while capabilities continue to grow, seems incredibly naïve and foolish.

      Many of you saying stuff like this were the same naysayers who have been terribly wrong about scaling for the last 6-8 years or people who only started paying attention in the last two years.

      • zamadatix 3 days ago

        I don't think GP is dismissing the scenarios themselves, rather espousing their belief that these answers will do nothing to prevent said scenarios from eventually occurring anyway. It's like if we invented nukes but found out they were made out of having a lot of telephones instead of something exotic like refining radioactive elements a certain way. Sure - you can still try to restrict telephone sales... but one way or another lots of nukes are going to be built around the world (power plants too) and, in the meantime, what you've regulated away is the convenience of having a better phone from the average person as time goes on.

        The same battle was/is had around cryptography - telling people they can't use or distribute cryptography algorithms on consumer hardware never stopped bad people from having real time functionally unbreakable encryption.

        The safety plan must be around somehow handling the resulting problems when they happen, not hoping to make it never occur even once for the rest of time. Eventually a bad guy is going to make an indecipherable call, eventually an enemy country or rogue operator is going to nuke a place, eventually an AI is going to ${scifi_ai_thing}. The safety of all society can't rest on audits and good intention preventing those from ever happening.

        • marshray 3 days ago

          It's an interesting analogy.

          Nukes are a far more primitive technology (i.e., enrichment requires only more basic industrial capabilities) than AI hardware, yet they are probably the best example of tech limitations via international agreements.

          But the algorithms are mostly public knowledge, datacenters are no secret, and the chips aren't even made in the US. I don't see what leverage California has to regulate AI broadly.

          So it seems like the only thing such a bill would achieve is to incentivize AI research to avoid California.

          • tbrownaw 3 days ago

            > Nukes are a far more primitive technology (i.e., enrichment requires only more basic industrial capabilities) than AI hardware, yet they are probably the best example of tech limitations via international agreements.

            And direct sabotage, eg Stuxnet.

            And outright assassination eg https://www.bbc.com/news/world-middle-east-55128970

          • derektank 3 days ago

            >So it seems like the only thing such a bill would achieve is to incentivize AI research to avoid California.

            Which, incidentally, would be pretty bad from a climate change perspective since many of the alternative locations for datacenters have a worse mix of renewables/nuclear to fossil fuels in their electricity generation. ~60% of VA's electricity is generated from burning fossil fuels (of which 1/12th is still coal) while natural gas makes up less than 40% of electricity generation in California, for example

            • marshray 3 days ago

              Electric power crosses state lines, very little loss.

              It's looking like cooling water may be more of a limiting factor. Yet, even this can be greatly reduced when electric power is cheap enough.

              Solar power is already "cheaper than free" in many places and times. If the initial winner-take-all training race ever slows down, perhaps training can be scheduled for energy cost-optimal times and places.

              • derektank 3 days ago

                  Transmission losses aren't negligible without investment in costly infrastructure like HVDC connections. It's always more efficient to site electricity generation as close to consumption as feasibly possible.

                • marshray 3 days ago

                  Electric power transmission loss is less than 5%:

                  https://www.eia.gov/totalenergy/data/flow-graphs/electricity...

                     14.26 Net generation
                     0.67 "Transmission and delivery losses and unaccounted for"
                  
                  It's just a tiny fraction of the losses resulting from burning fuel to heat water to produce steam to drive a turbine to yield electric power.
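
                  (Working the numbers: 0.67 / 14.26 ≈ 4.7%.)
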
                  • bunabhucan 2 days ago

                    That's the average. It's bought and sold on a spot market. If you try to sell CA power in AZ and the losses are 10% then SRP or TEP or whoever can undercut your price with local power/lower losses.

                    • marshray 2 days ago

                      I just don't see 10% remaining a big deal while solar continues its exponential cost reduction. Solar does not consume fuel, so when local supply exceeds local demand the cost of incremental production drops to approximately zero. Nobody's undercutting zero, even with 10% losses.

                      IMO, this is what 'winning' looks like.

                      • parineum 2 days ago

                        The cost of solar as a 24hr power supply must include the cost of storage for the 16+ hours that it's not at peak power. It also needs to overproduce by 3x to meet that demand.

                        Solar provides cheap power only when it's producing.

                      • mistrial9 2 days ago

                        this is interesting but missing some scale aspects.. capital and concentrated power are mutual attractors in some sense.. these AI datacenters in their current incarnations are massive.. so the number and size of solar panels needed, changes the situation. Common electrical power interchange (grid) is carefully regulated and monitored in all jurisdictions. In other words, there is little chance of an ad-hoc local network of small or mid-size solar systems making enough power unto themselves, without passing through regulated transmission facilities IMHO.

        • hannasm 2 days ago

          If you think a solution to bad behavior is a law declaring punishment for such behavior you are a fool.

          • rebolek 2 days ago

            Murder is a bad behavior. Am I a fool to think there should be laws against murder?

      • Chathamization 2 days ago

        The AI doomsday folk had an even worse track record over the past decade. There was supposed to be mass unemployment of truck drivers years ago. According to CGP Grey's Humans Need Not Apply[1] from 10 years ago, the robot Baxter was supposed to take over many low-skilled jobs (Baxter was discontinued in 2018 after it failed to achieve commercial success).

        [1] https://www.youtube.com/watch?v=7Pq-S557XQU

        • whimsicalism 2 days ago

          I do not count CGP Grey or other viral YouTubers among the segment of people I was counting as bullish about the scaling hypothesis. I’m talking about actual academics like Ilya, Hinton, etc.

          Regardless, I just read the transcript for that video and he doesn’t give any timeline so it seems premature to crow that he was wrong.

          • Chathamization 2 days ago

            > Regardless, I just read the transcript for that video and he doesn’t give any timeline so it seems premature to crow that he was wrong.

            If you watch the video he's clearly saying this was something that was already happening. Keep in mind it was made 10 years ago, and in it he says "this isn't science fiction; the robots are here right now." When bringing up the 25% unemployment rate he says "just the stuff we talked about today, the stuff that already works, can push us over that number pretty soon."

            Baxter being able to do everything a worker can for a fraction of the price definitely wasn't true.

            Here's what he said about self-driving cars. Again, this was 2014: "Self driving cars aren't the future - they're here and they work."

            "The transportation industry in the united states employs about 3 million people. Extrapolating worldwide, that's something like 70 million jobs at a minimum. These jobs are over."

            > I’m talking about actual academics like Ilya, Hinton, etc.

            Which of Hinton's statements are you claiming were dismissed by people here but were later proven to be correct?

      • nradov 3 days ago

        That's a total non sequitur. Just because LLMs are scalable doesn't mean this is a problem that requires government intervention. It's only idiots and grifters who want us to worry about sci-fi disaster scenarios. The snide dismissiveness is completely deserved.

      • akira2501 3 days ago

        > seems incredibly naïve and foolish.

        We have electrical codes. These require disconnects just about everywhere. The notion that any system somehow couldn't be "turned off" with or without the consent of the operator is downright laughable.

        > were the same naysayers

        Now who's being snide and dismissive? Do you want to argue the point or are you just interested in tossing ad hominem attacks around?

        • yarg 3 days ago

          Someone never watched the Terminator series.

          In all seriousness, if we ever get to the point where an AI needs to be shut down to avoid catastrophe, there's probably no way to turn it off.

          There are digital controls for damned near everything, and security is universally disturbingly bad.

          Whatever you're trying to stop will already have root-kitted your systems (and quite possibly have replicated) by the time you realise that it's even beginning to become a problem.

          You could only shut it down if there's a choke point accessible without electronic intervention, and you'd need to reach it without electronic intervention, and do so without communicating your intent.

          Yes, that's all highly highly improbable - but you seem to believe that you can just turn off the Genie, when he's already seen you coming and is having none of it.

          • hiatus 2 days ago

            > In all seriousness, if we ever get to the point where an AI needs to be shut down to avoid catastrophe, there's probably no way to turn it off.

            > There are digital controls for damned near everything, and security is universally disturbingly bad.

            Just unplug the thing.

            • yarg 2 days ago

              > You could only shut it down if there's a choke point accessible without electronic intervention, and you'd need to reach it without electronic intervention, and do so without communicating your intent.

              You'll be dead before you reach the plug.

              • hiatus 2 days ago

                Then bomb it. Or did the AI take over the fighter jets too?

                • yarg 2 days ago

                  > Whatever you're trying to stop will already have root-kitted your systems (and quite possibly have replicated) by the time you realise that it's even beginning to become a problem.

                  There's a good chance that you won't know where it is - if you even did to begin with (which particular AI even went rogue?).

                  > Or did the AI take over the fighter jets too?

                  Dunno - how secure are the systems?

                  But it's almost certainly fucking with the GPS.

        • theptip 2 days ago

          If a malicious model exfiltrates its weights to a Chinese datacenter, how do you turn that off?

          How do you turn off Llama-Omega if it turns out that it can be prompt-hacked into a malicious agent?

          • tensor 2 days ago

            1. If the weights somehow are obtained by a foreign power, you can't do anything, just like every other technology ever.

            2. If it turns into a malicious agent you just hit the "off switch", or, more likely just stop the software, like you turn off your word processor.

        • whimsicalism 3 days ago

          > We have electrical codes. These require disconnects just about everywhere. The notion that any system somehow couldn't be "turned off" with or without the consent of the operator is downright laughable.

          Not so clear when you are inferencing a distributed model across the globe. Doesn't seem obvious that shutdown of a distributed computing environment will always be trivial.

          > Now who's being snide and dismissive?

          Oh to be clear, nothing against being dismissive - just the particular brand of dismissiveness of 'scifi' safety scenarios is naive.

        • marshray 3 days ago

          > The notion that any system somehow couldn't be "turned off" with or without the consent of the operator is downright laughable.

          Does anyone remember Sen. Lieberman's "Internet Kill Switch" bill?

    • trog 3 days ago

      > Audits cannot prove anything and they offer no value when planning for the future. They're purely a retrospective tool that offers insights into potential risk factors.

      Uh, aren't potential risk factors things you want to consider when planning for the future?

    • teekert 2 days ago

      The best episodes are where the model can't be turned off anymore ;)

    • Loughla 3 days ago

      >I really wish legislators would operate inside reality instead of a Star Trek episode.

      What are your thoughts about businesses like Google and Meta providing guidance and assistance to legislators?

      • akira2501 3 days ago

        If it happens in a public and open session of the legislature with multiple other sources of guidance and information available then that's how it's supposed to work.

        I suspect this is not how the majority of "guidance" is actually being offered. I also guess this is probably a really good way to find new sources of campaign "donations." It's also a really good way for monopolistic players to keep a stranglehold on a nascent market.

    • lopatin 3 days ago

      > Audits cannot prove anything and they offer no value when planning for the future. They're purely a retrospective tool that offers insights into potential risk factors.

      What if it audits your deploy and approval processes? They can say, for example, that if your AI deployment process doesn't include stress tests against some specific malicious behavior (insert test cases here), then you are in violation of the law. That would essentially be a control on all future deploys.

  • comp_throw7 3 days ago

    > this is the one that would make it illegal to provide open weights for models past a certain size

    That's nowhere in the bill, but plenty of people have been confused into thinking this by the bill's opponents.

    • tbrownaw 3 days ago

      Three of the four things defined as an "artificial intelligence safety incident" require that the weights be kept secret. One is quite explicit; the others are just impossible to prevent if the weights are available:

      > (2) Theft, misappropriation, malicious use, inadvertent release, unauthorized access, or escape of the model weights of a covered model or covered model derivative.

      > (3) The critical failure of technical or administrative controls, including controls limiting the ability to modify a covered model or covered model derivative.

      > (4) Unauthorized use of a covered model or covered model derivative to cause or materially enable critical harm.

      • comp_throw7 2 days ago

        It is not illegal for a model developer to train a model that is involved in an "artificial intelligence safety incident".

  • Terr_ 3 days ago

    Sounds like legislation that misidentifies the root issue as "somehow maybe the computer is too smart" as opposed to, say, "humans and corporations should be liable for using the tool to do evil."

    • concordDance 2 days ago

      The former is a potentially extremely serious issue, just not one we're likely to hit in the very near future.

  • raxxorraxor 2 days ago

    That is a very bad law. People and especially corporations in favor of it should be under scrutiny for trying to corner a market for themselves.

  • timr 3 days ago

    The proposed law was so egregiously stupid that if you live in California, you should seriously consider voting for Anthony Weiner's opponent in the next election.

    The man cannot be trusted with power -- this is far from the first ridiculous law he has championed. Notably, he was behind the (blatantly unconstitutional) AB2098, which was silently repealed by the CA state legislature before it could be struck down by the courts:

    https://finance.yahoo.com/news/ncla-victory-gov-newsom-repea...

    https://www.sfchronicle.com/opinion/openforum/article/COVID-...

    (Folks, this isn't a partisan issue. Weiner has a long history of horrendously bad judgment and self-aggrandizement via legislation. I don't care which side of the political spectrum you are on, or what you think of "AI safety", you should want more thoughtful representation than this.)

    • GolfPopper 3 days ago

      Anthony Weiner is a disgraced New York Democratic politician who does not appear to have re-entered politics after his release from prison a few years ago. You mentioned his name twice in your post, so it doesn't seem to be an accident that you mentioned him, yet his name does not seem to appear anywhere in your links. I have no idea what message you're trying to convey, but whatever it is, I think you're failing to communicate it.

      • timr 2 days ago

        Yes, it was a mistake. I obviously meant the Weiner responsible for the legislation I cited. But you clearly know that.

        > I have no idea what message you're trying to convey, but whatever it is, I think you're failing to communicate it.

        Really? The message is unchanged, so it seems like something you could deduce.

      • hn_throwaway_99 3 days ago

        He meant Scott Wiener but had penis on the brain.

    • johnnyanmac 3 days ago

      >you should want more thoughtful representation than this.

      Your opinion on what "thoughtful representation" is is what makes this point partisan. Regardless, he's in until 2028 so it'll be some time before that vote can happen.

      Also, important nitpick: it's Scott Wiener. Anthony Weiner (no relation AFAIK) was in New York and has a much more... Public controversy.

      • Terr_ 3 days ago

        > Public controversy

        I think you accidentally hit the letter "L". :P

    • dlx 3 days ago

      you've got the wrong Weiner dude ;)

      • hn_throwaway_99 3 days ago

        Lol, I thought "How TF did Anthony Weiner get elected for anything else again??" after reading that.

dang 3 days ago

Related. Others?

OpenAI, Anthropic, Google employees support California AI bill - https://news.ycombinator.com/item?id=41540771 - Sept 2024 (26 comments)

Y Combinator, AI startups oppose California AI safety bill - https://news.ycombinator.com/item?id=40780036 - June 2024 (8 comments)

California AI bill becomes a lightning rod–for safety advocates and devs alike - https://news.ycombinator.com/item?id=40767627 - June 2024 (2 comments)

California Senate Passes SB 1047 - https://news.ycombinator.com/item?id=40515465 - May 2024 (42 comments)

California residents: call your legislators about AI bill SB 1047 - https://news.ycombinator.com/item?id=40421986 - May 2024 (11 comments)

Misconceptions about SB 1047 - https://news.ycombinator.com/item?id=40291577 - May 2024 (35 comments)

California Senate bill to crush OpenAI competitors fast tracked for a vote - https://news.ycombinator.com/item?id=40200971 - April 2024 (16 comments)

SB-1047 will stifle open-source AI and decrease safety - https://news.ycombinator.com/item?id=40198766 - April 2024 (190 comments)

Call-to-Action on SB 1047 – Frontier Artificial Intelligence Models Act - https://news.ycombinator.com/item?id=40192204 - April 2024 (103 comments)

On the Proposed California SB 1047 - https://news.ycombinator.com/item?id=39347961 - Feb 2024 (115 comments)

alkonaut 2 days ago

The immediate danger of large AI models isn't that they'll turn the earth to paperclips; it's that we'll create fraud-as-a-service and have a society where nothing can be trusted. I'd be all for a law (however clumsy) that required image, audio, or video content produced by models with over X parameters to be marked with metadata saying it's AI generated. Creating models that don't tag their output as such would be banned. So far nothing strange about the law. The obvious problem with the law is that you'd need to outlaw even screenshotting an AI image and reposting it online without the made-with-AI metadata. And that would be an absolute mess to enforce, at least for images.

But most importantly: whatever we do in this space has to be made on the assumption that we can't really influence what "bad actors" do. Yes being responsible means leaving money on the table. So money has to be left on the table, for - erm - less responsible nations to pick up. That's just a fact.

  • anon291 2 days ago

    > Creating models that don't tag their output as such would be banned.

    This is just silly. Anyone would be able to disable this tagging in an open model.

    • diggan 2 days ago

      >> Creating models that don't tag their output as such would be banned.

      > This is just silly. Anyone would be able to disable this tagging in an open model.

      And we'd end up with people who think that any text that isn't tagged as "#MadeByLLM" was made by a human, which obviously wouldn't be great.

    • jerjerjer 2 days ago

      > Anyone would be able to disable this tagging in an open model.

      Metadata (I assume it's file metadata and not a watermark) can be removed from a final product (image, video, text) so open and closed models are equally affected.
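
      For illustration, a minimal sketch of how easily a tag-at-generation scheme is defeated downstream (assuming Python with Pillow; the file names are hypothetical): re-encoding just the pixels yields a visually identical file with no "made with AI" EXIF/XMP tag attached.

          from PIL import Image

          # Hypothetical AI-tagged image; convert to RGB for a plain pixel copy.
          img = Image.open("tagged_output.png").convert("RGB")
          clean = Image.new("RGB", img.size)
          clean.putdata(list(img.getdata()))  # copies pixel data only, not metadata
          clean.save("untagged_copy.png")     # visually identical, no AI tag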

  • worldsayshi 2 days ago

    Any law that tries to categorize non-trustworthy content seems doomed to fail. We need to find better ways to communicate trustworthiness, not the other way around. (And I'm not sure adding more laws can help here.)

    • alkonaut 2 days ago

      No I don't think technical means will work fully either. But the thing about these regulations is that you can basically cover the 99% case by just thinking about the 5 largest players in the field, be it regulation for social media, AI or whatever. It doesn't matter that the law has loopholes or that some players aren't affected at all. Regulation that helps somewhat in a large majority of cases is massive.

  • woah 2 days ago

    > The immediate danger of large AI models isn't that they'll turn the earth to paperclips it's that we'll create fraud as a service and have a society where nothing can be trusted. I'd be all for a law (however clumsy) that made image, audio or video content produced by models with over X parameters to be marked with metadata saying it's AI generated.

    Just make a law that makes it so that AI content has to be tagged if it is being used for fraud

  • arder 2 days ago

    I think the most achievable way of having some verification of AI images is simply for the AI generators to store fingerprints of every image they generate. That way, if you ever want to know, you can go back to Meta or whoever and say "Hey, here's this image, do you think it came from you?". There's already technology for that sort of thing in the world (Content ID from YouTube, CSAM detection, etc.).

    It's obviously not perfect, but could help and doesn't have the enormous side effects of trying to lock down all image generation.

    • Someone 2 days ago

      > That way if you ever want to know you can go back to Meta or whoever and say "Hey, here's this image, do you think it came from you".

      Firstly, if you want to know an image isn’t generated, you’d have to go to every ‘whoever’ in the world, including companies that no longer exist.

      Secondly, if you ask evil.com that question, you would have to trust them to answer honestly both for images they generated and for images they didn't (falsely claiming real pictures were generated by them could be career-ending for a politician)

      This is worse than https://www.cs.utexas.edu/~EWD/ewd02xx/EWD249.PDF: “Program testing can be used to show the presence of bugs, but never to show their absence!”. You can neither show an image is real nor that it is fake.

    • kortex 2 days ago

      What's to stop someone from downloading an open-source model, running it themselves, and simply not sharing the hashes, or subtly corrupting the hash algorithm so that it gives false negatives?

      Also, you need perceptual hashing (since one bit flip of the generated media alters an ordinary hash completely), which is squishy and not perfectly reliable to begin with.
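
      To make the "squishy" point concrete, here is a minimal sketch of the classic average-hash (aHash) scheme in Python, using only Pillow; the function names are mine, not any particular library's:

          from PIL import Image

          def average_hash(path, size=8):
              # Shrink to size x size grayscale, then threshold each pixel
              # against the mean to get a 64-bit fingerprint.
              img = Image.open(path).convert("L").resize((size, size))
              pixels = list(img.getdata())
              avg = sum(pixels) / len(pixels)
              return int("".join("1" if p > avg else "0" for p in pixels), 2)

          def hamming(a, b):
              # Number of differing bits between two fingerprints.
              return bin(a ^ b).count("1")

          # A provider could store average_hash() of everything it generates and
          # later answer "did this come from us?" by Hamming distance -- but a
          # crop, recolor, or hostile tweak can push the distance past any
          # threshold, and a local model can simply skip the step entirely.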

      • alkonaut 2 days ago

        Nothing. But that’s not the point. The point is that, to a rounding error, all output is made by a small number of models from a small number of easily regulated companies.

        It’s never going to be possible to ensure all media is reliably tagged somehow. But if just half of media generated is identifiable as such that helps. Also helps avoid it in training new models, which could turn out useful.

  • drcode 2 days ago

    turning the earth into paperclips is not gonna happen immediately, so we can safely ignore that risk

SonOfLilit 3 days ago

I wondered if the article was over-dramatizing what risks were covered by the bill, so I read the text:

(g) (1) “Critical harm” means any of the following harms caused or materially enabled by a covered model or covered model derivative:

(A) The creation or use of a chemical, biological, radiological, or nuclear weapon in a manner that results in mass casualties.

(B) Mass casualties or at least five hundred million dollars ($500,000,000) of damage resulting from cyberattacks on critical infrastructure by a model conducting, or providing precise instructions for conducting, a cyberattack or series of cyberattacks on critical infrastructure.

(C) Mass casualties or at least five hundred million dollars ($500,000,000) of damage resulting from an artificial intelligence model engaging in conduct that does both of the following:

(i) Acts with limited human oversight, intervention, or supervision.

(ii) Results in death, great bodily injury, property damage, or property loss, and would, if committed by a human, constitute a crime specified in the Penal Code that requires intent, recklessness, or gross negligence, or the solicitation or aiding and abetting of such a crime.

(D) Other grave harms to public safety and security that are of comparable severity to the harms described in subparagraphs (A) to (C), inclusive.

(2) “Critical harm” does not include any of the following:

(A) Harms caused or materially enabled by information that a covered model or covered model derivative outputs if the information is otherwise reasonably publicly accessible by an ordinary person from sources other than a covered model or covered model derivative.

(B) Harms caused or materially enabled by a covered model combined with other software, including other models, if the covered model did not materially contribute to the other software’s ability to cause or materially enable the harm.

(C) Harms that are not caused or materially enabled by the developer’s creation, storage, use, or release of a covered model or covered model derivative.

  • handfuloflight 3 days ago

    Does Newsom believe that an AI model can do this damage autonomously or does he understand it must be wielded and overseen by humans to do so?

    In that case, how much of an enabler is the AI in meeting those destructive ends, when the humans who could use AI to conduct the damage can surely do it without the AI as well?

    The potential for destruction exists either way, but is the concern that AI makes this more accessible and effective? What's the boogeyman? I don't think these models have private information regarding infrastructure and systems that could be exploited.

    • SonOfLilit 3 days ago

      “Critical harm” does not include any of the following: (A) Harms caused or materially enabled by information that a covered model or covered model derivative outputs if the information is otherwise reasonably publicly accessible by an ordinary person from sources other than a covered model or covered model derivative.

      The bogeyman is not these models, it's future agentic autonomous ones, if and when they can hack major infrastructure or build nukes. The quoted text is very very clear on that.

      • caseyy 2 days ago

        I am not convinced the text means what you say it means.

        All knowledge (publicly available and not) and all tools (AI or not) can be used by people in material ways to commit the aforementioned atrocities, but only the models producing novel knowledge would be at risk. I hope you can see how this law would stifle AI advancement. The boundary between what's acceptable and not would be drawn at generating novel, publicly unavailable information; not at information that could be used to harm - because all information can be used that way.

        What if AI solves fusion and countries around the world start building fusion weapons of mass destruction? What if it solves personalized gene therapy and armed forces worldwide develop weapons that selectively kill their wartime foes? Should we not have split the atom just because that power was co-opted for evil ends, or should we not have done the contraception research just because the Third Reich used it for sterilization in their war crimes? This bill would work towards AI never inventing any of those novel things, simply out of fear that they would be corrupted by people as discoveries have been throughout history. It would only slow research, and whenever the (slower) research made its discoveries, they would still get corrupted. In other words, there would be no change in the human propensity to hurt others with knowledge, simply less knowledge.

        Besides, the text is not "very very clear" about AI if and when it hacks major infrastructure or builds nukes. If it were "very very clear" on that, that is what it would say :) - "an AI model is prohibited from being the decision-making agent solely instigating critical harm to humans". But what the text says is different.

        I agree that AI harms to people and humanity need to be minimized but this bill is a miss rather than a hit and the veto is good news. We know AI alignment is needed. Other bills will come.

        • bunabhucan 2 days ago

          I'm pretty sure there are a few hundred fusion WMDs in silos a few hours north of me; we've had this kind of weapon since 1952.

          • caseyy 2 days ago

            Nice fact check, thank you. I didn’t know H-bombs used fusion but it makes complete sense. Hydrogen is not exactly the heaviest of elements :)

            Well then, for my example, imagine a different future discovery that could be abused. Let’s say biological robots, or some new source of useful energy that is misused. Warring humans find ways to corrupt many scientific achievements for evil.

            • bunabhucan 2 days ago

              Sharks with frikkin lasers.

        • SonOfLilit 2 days ago

          If a model were capable of novel WMD research, I would want it to stay out of reach for the general public! Wouldn't you?

          After we split the atom and saw how dangerous it was, we put some damn strong protections in place to prevent kids from building nukes in their backyards for giggles. If we ever build a physicist AI that is better than Einstein at WMD-relevant research, I sure as hell hope we don't just let the kids ask it how to build nukes or antimatter bombs or whatever. And this is exactly what this regulation is about.

          > But what the text says is different.

          Since "prohibited" is a word that carries specific consequences, and lawyers are all about consequences, legal texts usually phrase it something like "if a model does critical harm to humans, the human responsible for building and releasing the model will be culpable for the crime", which is a very good TL;DR of the relevant parts of the bill. Isn't it?

    • anigbrowl 2 days ago

      Newsom is the governor who vetoed the bill, not the lawmaker who authored it.

    • concordDance 2 days ago

      > Does Newsom believe that an AI model can do this damage autonomously or does he understand it must be wielded and overseen by humans to do so?

      AI models might not be able to, but an AI system that uses a powerful model might be able to cause damage (including extinction of humanity in the more distant future) unintended and unforeseen by its creators.

      The more complex and unpredictable the system the harder it is to properly oversee.

  • w10-1 2 days ago

    --- (2) “Critical harm” does not include any of the following:

    (A) Harms caused or materially enabled by information that a covered model or covered model derivative outputs if the information is otherwise reasonably publicly accessible by an ordinary person from sources other than a covered model or covered model derivative.

    ---

    This exception swallows any rule and fails to target the difference with AI: it's actually better than an ordinary person at assimilating multiple fact streams.

    That suggests this law is legislative theater: something designed to enlist interest and donations, i.e., to build a political franchise. That could be why it targets only the largest models, affecting only the biggest players, who have the most resources to donate per decision and the least goodwill to burn with opposition.

    Regulating AI would be a very difficult legislative/administrative task, on the order of a new tax code in its complexity. But it will be impossible if treated as a political franchise.

    As for self-regulation, with OpenAI's changing to for-profit, the non-profit form is insufficient to maintain a public benefit focus. Permitting this conversion is on par with the 1990's+ conversion of nonprofit hospital systems to for-profit.

    AI's potential shines a bright light on our weakness in governance. While weak governance affords more opportunities, escaping the exploitation caused by governance failures is the engine of autocracy, and autocracy consumes property along with all other rights.

  • dang 2 days ago

    (I added newlines to your quote to match what looked like the intended formatting - I hope that's ok. Since HN doesn't do indentation I'm not sure it helps all that much...)

    • ketzo 2 days ago

      I’m sure people have asked this before, but would HN ever add a little more rich-text? Even just bullet points and indents might be nice.

      • slater 2 days ago

        And maybe also make new lines in the comment box translate to new lines in the resulting comment...? :D

        • dang 2 days ago

          That's actually a really good point. I've never looked into that, I just took it for granted that to get a line break on HN you need two consecutive newline chars.

          I guess the thing to do would be to look at all (well, lots of) comments that have single newlines and see what would break if they were rendered as actual newlines.

          • Matheus28 2 days ago

            Could be applied to all comments made after a certain future date. That way nothing in the past is poorly formatted

            • slater 2 days ago

              Or just brute-force it with str_replace of all "\n" with "</p>\n<p>" and then remove all the empty "<p></p>".

              (why yes, i am a PHP guy, why do you ask?)
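
              Roughly, in Python (same brute-force idea, not HN's actual renderer): treat every newline as a paragraph break, then drop the empty paragraphs that double newlines produce.

                  def render_comment(text: str) -> str:
                      html = "<p>" + text.replace("\n", "</p>\n<p>") + "</p>"
                      return html.replace("<p></p>", "")

                  # "line one\nline two" -> "<p>line one</p>\n<p>line two</p>"
                  print(render_comment("line one\nline two\n\nnew para"))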

      • dang 2 days ago

        Maybe. I'm paranoid about the unintended cost of improvements, but it's not an absolute position.

seltzered_ 3 days ago

Is part of the issue the concern that runaway AI computing would just happen outside of California?

There's another important county election happening in Sonoma about CAFOs, where part of the issue is that you may get environmental progress locally but just end up exporting the problem to another state with lax rules: https://www.kqed.org/news/12006460/the-sonoma-ballot-measure...

  • alhirzel 2 days ago

    Like all laws, there will certainly be those who evade compliance geographically. A well-written law will be looked to as a precedent or "head start" for new places that end up wanting regulatory functions. I feel like the EU and California often end up on this "leading edge" with regard to technology and privacy. While this can seem like a futile position to be in, it paves the way and is a required step for a good law to find a global foothold.

hn_throwaway_99 3 days ago

Curious if anyone can point to some resources that summarize the pros/cons arguments of this legislation. Reading this article, my first thought is that I definitely agree it sounds impossibly vague for a piece of legislation - "reasonable care" and "unreasonable risk" sound like things that could be endlessly litigated.

At the same time,

> Computer scientists Geoffrey Hinton and Yoshua Bengio, who developed much of the technology on which the current generative-AI wave is based, were outspoken supporters. In addition, 119 current and former employees at the biggest AI companies signed a letter urging its passage.

These are obviously highly intelligent people (though I've definitely learned in my life that intelligence in one area, like AI and science, doesn't mean you should be trusted to give legal advice), so I'm curious to know why Hinton and Bengio supported the legislation so strongly.

  • crazygringo 2 days ago

    > impossibly vague for a piece of legislation - "reasonable care" and "unreasonable risk" sound like things that could be endlessly litigated.

    Nope, that's entirely standard legal stuff. Tort law deals exactly with those kinds of things, for instance. Yes it can certainly wind up in litigation, but the entire point is that if there's a gray area, a company should make sure it's operating entirely within the OK area -- or know it's taking a legal gamble if it tries to push the envelope.

    But it's generally pretty easy to stay in the clear if you establish common-sense processes around these things, with a clear paper trail and decisions approved by lawyers.

    Now the legislation can be bad for lots of other reasons, but "reasonable care" and "unreasonable risk" are not problematic.

    • hn_throwaway_99 2 days ago

      > but "reasonable care" and "unreasonable risk" are not problematic.

      Still strongly disagree, at least when it comes to AI legislation. Yes, I fully realize that there are "reasonableness" standards in lots of places in US jurisprudence. But when it comes to AI, given how new the tech is and how, perhaps more than any other recent technology, it is largely a "black box" (we don't really know how it works, and we aren't really sure what its capabilities will ultimately be), I don't think anybody really knows what "reasonableness" means in this context.

      • razakel 2 days ago

        Exactly. It's about as meaningful as passing a law making it illegal to be a criminal. Right, so what does that actually mean apart from "we'll decide when it happens"?

  • mmmore 2 days ago

    The concern is that near-future systems will be much more capable than current systems, and by the time they arrive, it may be too late to react. Many people at the large frontier AI companies believe that world-changing AGI is 5 years or less away; see Situational Awareness by Aschenbrenner, for example. There's also a parallel concern that AIs could make terrorism easier[1].

    Yoshua Bengio has written in detail about his views on AI safety recently[2][3][4]. He seems to put less weight on human level AI being very soon, but says superhuman intelligence is plausible in 5-20 years and says:

    > Faced with that uncertainty, the magnitude of the risk of catastrophes or worse, extinction, and the fact that we did not anticipate the rapid progress in AI capabilities of recent years, agnostic prudence seems to me to be a much wiser path.

    Hinton also has a detailed lecture he's been giving recently about the loss of control risk.

    In general, proponents see this as a narrowly tailored bill that somewhat addresses the worst-case worries about loss of control and misuse.

    [1] https://www.theregister.com/2023/07/28/ai_senate_bioweapon/

    [2] https://yoshuabengio.org/2023/05/22/how-rogue-ais-may-arise/

    [3] https://yoshuabengio.org/2023/06/24/faq-on-catastrophic-ai-r...

    [4] https://yoshuabengio.org/2024/07/09/reasoning-through-argume...

  • leogao 2 days ago

    I looked into the question of what counts as reasonable care and wrote up my conclusions here: https://www.lesswrong.com/posts/kBg5eoXvLxQYyxD6R/my-takes-o...

    • hn_throwaway_99 2 days ago

      Thank you! Your post was really helpful in aiding my understanding, so I greatly appreciate it.

      Also, while reading your article I stumbled onto https://www.brookings.edu/articles/misrepresentations-of-cal... while trying to understand some terms, and that gave some really good info too, e.g. the "reasonable assurance" language that was dropped from an earlier version of the bill and replaced with "reasonable care".

    • ketzo 2 days ago

      This was a great post, thanks.

  • svat 2 days ago

    Here's a post by the computer scientist Scott Aaronson on his blog, in support: https://scottaaronson.blog/?p=8269 -- it links to some earlier explainers, has some pro-con arguments, and further discussion in the comments.

    • hn_throwaway_99 a day ago

      Oh, wow, thanks very much! Not only was that a very informative article, but it also has links to other detailed opinions on the topic (and some of those had links...), which left me feeling much better informed. Much appreciated!

voidfunc 3 days ago

It was a dumb law so... good on a politician for doing the smart thing for once.

LarsDu88 3 days ago

Terrible piece of legislation. Glad the governor took it down. This is what regulatory capture looks like. Someone commoditized your product, so you make it illegal for them to continue making your stuff free.

Might as well make Linux illegal so everyone is forced to use Microsoft and Apple.

  • xpe 2 days ago

    I disagree.

    On this topic, I’m seeing too many ideological and uninformed claims.

    It is hard for many aspiring AI startup founders to rationally and neutrally assess the AI landscape, pros and cons.

  • EasyMark 2 days ago

    It’s also what makes companies realize there are 49 other states, and nearly a couple hundred countries. California has a rare zeitgeist of tech and universities, but nothing that can’t be reproduced elsewhere with enough dollars and promises.

  • weebull 2 days ago

    I suspect this was vetoed more for reasons of not wanting to handicap California in the "AI race" than anything else.

x3n0ph3n3 3 days ago

Given what Scott Wiener did with restaurant fees, it's hard to trust his judgement on any legislation. He clearly prioritizes monied interests over the general populace.

  • gotoeleven 3 days ago

    This guy is a menace. Among his other recent bills are ones to require that cars not be able to go more than 10 mph over the speed limit (watered down to just making a terrible noise when they do) and to decriminalize intentionally giving someone AIDS. I know this sounds like hyperbole... how could this guy keep getting elected?? But it's not, it's California!

    • zzrzzr 3 days ago
      • microbug 3 days ago

        who could've predicted this?

        • jquery 3 days ago

          The law was passed knowing it would make bigots uncomfortable. That's an intended effect, if not a primary one, at least a secondary one.

          • UberFly 2 days ago

            What a strange comment. I wonder if there was any consideration for the women locked up and powerless in the matter, or was the point really just to "show those bigots"?

            • jquery 2 days ago

              If they’re transphobic and don’t want to be around transwomen, they could’ve committed the crime in a state that puts transwomen in with male prisoners (and get raped repeatedly). Of course, those states tend to treat their female inmates much worse than California, so this all seems like special pleading specifically borne out of transphobia.

      • jquery 3 days ago

        These "activists" will go nowhere, because it's not coming from a well meaning place of wanting to stop fraudsters, but insists that all trans women are frauds and consistently misgenders them across the entire website.

        I wouldn't take anything they said seriously. Also I clicked two of those links and found no allegations of rape, just a few ciswomen who didn't want to be around transwomen. I have a suggestion, how about don't commit a crime that sends you to a woman's prison?

        • zzrzzr 2 days ago

          > I wouldn't take anything they said seriously. Also I clicked two of those links and found no allegations of rape,

          See https://4w.pub/male-inmate-charged-with-raping-woman-inside-....

          This is the inevitable consequence of SB132, and similar laws elsewhere.

          • jquery 2 days ago

            Rape is endemic throughout the prison industrial complex, and protections for prisoners are nowhere near good enough. Subjecting transwomen to rape in men’s prisons isn’t the solution.

            The JD Vance/Peter Thiel/SSC rationalist sphere is such a joke. Just a bunch of pretentious bigots who think they’re better than the “stupid” bigots.

            • zzrzzr 2 days ago

              > Rape is endemic throughout the prison industrial complex, protections for prisoners are nowhere good enough.

              The most effective safeguarding measure against this for female prisoners is the segregation of inmates by sex.

              SB132 has demolished this protection for women in Californian prisons and, as the linked articles discuss, we now see the awful and entirely avoidable consequences of this law within just a few years of it being enacted. Exactly as women's rights advocates warned legislators would happen, in their unfortunately futile efforts to stop SB132 from being passed.

              • jquery 12 hours ago

                After 2 years of implementation, California has not seen an increase in sexual assaults. It's almost like transwomen aren't the problem.

                Anyone can file a request, but the request must be approved, and this is based on the totality of the situation with the prisoner. Someone doesn't get to just wake up one day and go "I'm trans" and automatically get approval, this is fearmongering from transphobes.

                There's tons of oversight involved and the idea this is some kind of "loophole" for horny cis guys is bigoted nonsense.

                • zzrzzr 12 hours ago

                  Please consider reading the 4W article I linked above. It explains the problem.

                  You may find this enlightening also: https://womensliberationfront.org/news/wolf-attends-prelimin...

                  A relevant extract:

                  "The four witnesses for the prosecution gave detailed accounts of two horrible rapes and the efforts by the alleged perpetrator to dissuade one of the victims from seeking justice, even after she was moved to a new facility for her safety.

                  "One of the witnesses cited SB132 as the reason why Carroll was being housed in the women’s facility in the first place. He also noted that only males can rape females and that, until these past few years, there were no prisoners physically capable of rape."

    • deredede 3 days ago

      I was surprised at the claim that intentionally giving someone AIDS would be decriminalized, so I looked it up. The AIDS bill you seem to refer to (SB 239) lowers penalties from a felony to a misdemeanor (so it is still a crime), bringing it in line with other sexually transmitted diseases. The argument is that we now have good enough treatment for HIV that there is no reason for the punishment to be harsher than for exposing someone to hepatitis or herpes, which I think is sound.

      • Der_Einzige 2 days ago

        "Undetectable means untranstmitable" is NOT the same as "cured" in the way that many STDs can be. I am not okay with being forced onto drugs for the rest of my life to prevent a disease which is normally a horribly painful death sentence. Herpes is so ubiquitous that much of the population (as I recall on the orders of 30-40%) has it and doesn't know it, so it's a special exception

        HIV/AIDS to this day is still something that people commit suicide over, despite how good your local gay male community is at trying to convince you that everything is okay and that "DoxyPep and Poppers is normal".

        Bug givers (the evil version of a bug chaser) deserve felonies.

        • deredede 2 days ago

          > Bug givers (the evil version of a bug chaser) deserve felonies.

          I agree; I think that knowingly transmitting any communicable disease deserves a felony, but I don't think that HIV deserves to be singled out when all other such diseases are a misdemeanor. Hepatitis and herpes (oral herpes is very common; genital herpes much less so) are also known to cause mental issues and to increase suicide risk, if that's your criterion.

          (Poppers are recreational drugs; I'm not aware of any link with AIDS except that they were thought to be a possible cause in the '80s. Were you thinking of PrEP?)

        • diebeforei485 2 days ago

          Exposure is not the same as transmission. Transmission is still illegal.

    • radicality 2 days ago

      I don’t follow politics closely and don’t live in CA, but is he really that bad? I had a look on Wikipedia for some other bills he worked on that seem to me positive:

      * wanted to decriminalize psychoactive drugs (LSD/DMT/MDMA, etc.)

      * wanted to allow alcohol sales till 4am

      * a bill about removing parking minimums for new constructions close to public transit

      Though I agree the car one seems ridiculous, and on first glance downright dangerous.

      • lostdog 2 days ago

        He's mostly good, and is the main guy fixing housing and transit in CA.

        But yeah, there are some issues he's just wrong on (AI and the recent restaurant fee problem), others which are controversial (decriminalizing HIV transmission), and then some trans rights issues that some commenters are being hyperbolic about (should trans women be in women's or men's prisons?).

    • johnnyanmac 3 days ago

      Technically you can't go more than 5 mph over the speed limit. And that's only because of radar accuracy.

      Of course no one cares until you get a bored cop one day. And with freeway traffic you're lucky to hit half the speed limit.

      • Dylan16807 3 days ago

        By "not be able" they don't mean legally, they mean GPS-based enforcement.

        • johnnyanmac 2 days ago

          You'd think they'd learn from the streetlight cameras that it's just a waste of budget and resources 99% of the time to worry about petty things like that. It will still work on the same logic, and the bias always tends to come from profiling (so a lawsuit waiting to happen unless we fund properly trained personnel).

          I'm not against the law per se, I just don't think it'd be any more effective than the other tech we have or had.

        • drivers99 2 days ago

          Rental scooters have speed limiters. My class-1 pedal assist electric bike has a speed limit on the assistance. Car deaths are over 40,000 in the US per year. Why can't they be limited?

          • Dylan16807 2 days ago

            I said GPS for a reason. Tying it to fine-grained map lookups is so much more fragile and dangerous than a fixed speed limit.

    • baggy_trough 3 days ago

      Scott Wiener is literally a demon in human form.

bradhilton 2 days ago

I'm glad he vetoed the bill, but his rationale is worrisome. Even if he's just trying to placate SB 1047 proponents, they will try to exact concessions from him in future sessions. I'll take this brief reprieve, but it's still a large concern for me.

  • RIMR 2 days ago

    What specifically do you find worrisome about his rationale? It mostly seems like he's asking for evidence-based policy that views AI as a potential risk regardless of the funding or size of the model, because those factors don't actually correlate with any evidence of risk.

    I can't tell what direction your disagreement goes. Are you worried that he still feels that AI needs to be regulated at all, or do you think that AI needs to be regulated regardless of empirical evidence of harm?

OJFord 2 days ago

> gov.ca.gov

Ah, I think now I know why Canada's government website is canada.ca (which I remember thinking was a bit odd or more like a tourism site when looking a while ago, vs. say gov.uk or gov.au).

  • whalesalad 2 days ago

    Unfortunately the US owns the entire .gov TLD

    • OJFord 2 days ago

      Yes but other countries (off the top of my head: UK, Aus, India) use gov.[ccTLD]

      My point was that that's confusing with gov.ca if the US is using ca.gov and gov.ca.gov for California, and that perhaps that's why Canada does not do that.

karaterobot 2 days ago

> Safety protocols must be adopted. Proactive guardrails should be implemented, and severe consequences for bad actors must be clear and enforceable.

Hear, hear. But this vague, passive-voiced, hand-wavey statement that isn't even a promise does not exactly inspire me with a ton of confidence. Considering he turned this bill down to protect business interests, I wonder what acceptable legislation would look like, from his perspective. Looking forward to hearing about it very soon, and I'm confident it'll be specific, actionable, responsible, and effective.

  • lostdog 2 days ago

    Well, you already can't nuke someone. You can't make and release a biological weapon. It's probably illegal to turn the whole world into paperclips.

    There are already laws against doing harm to others. Sure, we need to fill some gaps (like preventing harmful deep fakes), but most things are pretty illegal already.

  • xixixao 2 days ago

    I find the statement quite well measured. He’s not giving a solution (that’s not easy), but he is specifically calling out evidence-based measures. The statement calls out both the need to regulate and the need to innovate. The issue is not black and white, and neither is the statement.

davidu 3 days ago

This is a massive win for tech, startups, and America.

  • cornercasechase 2 days ago

    It was a bad bill but your gross nationalism is even worse. 1 step forward, 10 steps back.

    • richwater 2 days ago

      > gross nationalism

      How on earth did you get that from the original posters comment?

      • cornercasechase 2 days ago

        “Win for America” is gross nationalism. Zero sum thinking with combative implications.

        • hot_gril 2 days ago

          It's a win for American industry, same as a politician would say when a new factory opens or something. I don't know a less offensive way to put it.

          He didn't remotely say the combative stuff I would say, that it is partially a zero-sum game where we should stay ahead of the curve.

  • ken47 3 days ago

    For America...do we dare unpack that sentiment?

    • khazhoux 2 days ago

      The US is the world leader in AI technology. Defeating a bad AI bill is good for the US.

hot_gril 2 days ago

The "this bill doesn't go far enough" thing is normally what politicians say when they don't want it to go in that direction at all. Anyway, I'm glad he vetoed.

tsunamifury 2 days ago

Scott Wiener is a total fraud. He passes hot-concept bills, then cuts out loopholes for his “friends”.

He should be ignored at least and voted out.

He’s a total POS.

m3kw9 2 days ago

All he needed to see is how Europe is doing with these regulations

  • sgt 2 days ago

    What is currently happening with (or what is the impact of) those regulations in the EU?

    • renewiltord 2 days ago

      It’s making it nice for Americans to vacation there.

pmcf 2 days ago

Not today, regulatory capture. Not today.

lvspiff a day ago

Just give us the Asimov three laws to start, rather than this nebulous "open model" language crap. Those should be the basic premise for any regulations that go into a bill in this area (but I'm sure the military contractors would never go for that with the whole "no harm to humans" thing - they want their Terminator).

gdiamos 3 days ago

If that bill had passed I would have seriously considered moving my AI company out of the state.

curious_cat_163 2 days ago

For those supporting this legislation: Would you like to share the specific harms to the public that this bill sought to address and prevent?

  • drcode 2 days ago

    we'll probably create artificial superintelligence in the next few years

    when that happens, it likely will not go well for humans

    the specific harm is "human extinction"

humansareok1 2 days ago

The dismal level of discourse about this bill shows that humanity is utterly ill-equipped to deal with the problems AI poses for our society.

stuaxo 3 days ago

This is good - they were trying to legislate against future competitors.

gerash 2 days ago

I'm trying to parse this proposed law.

What does a "full shutdown" mean in the context of an LLM? Stopping the servers from serving requests? It sounds silly idk.

choppaface 3 days ago

The Apple Intelligence demos showed Apple is likely planning to use on-device models for ad targeting, and Google / Facebook will certainly respond. Small LLMs will help move unwanted computation onto user devices in order to circumvent existing data and privacy laws. And they will likely be much more effective, since they’ll have more access and more data. This use case is just getting started, which is why SB 1047 is so short-sighted. Smaller LLMs have dangers of their own.

  • jimjimjim 3 days ago

    Thank you. For some reason I hadn't thought of the advertising angle with local LLMs but you are right!

    For example, why is Microsoft hell-bent on pushing Recall onto windows? Answer: targeted advertising.

    • jart 3 days ago

      Why is it wrong to show someone ads that are relevant to their interests? Local AI is a win-win, since tech companies get targeted ads, and your data stays private.

      • jimjimjim 3 days ago

        what have "their interests" got to do with what is on the computer screen?

tim333 2 days ago

I'm not sure AI risks are well enough understood yet to write good regulations for them. With most risky industries you can actually quantify the risk a bit. Regarding:

> we cannot afford to wait for a major catastrophe to occur before taking action to protect the public

Maybe, but you can wait for minor problems or big near misses before legislating it all up.

dazzaji 2 days ago

Among the good reasons for SB-1047 to have been vetoed is that it would have regulated the wrong thing. Here’s a great statement of this basic flaw: https://law.mit.edu/pub/regulatesystemsnotmodels

Not speaking for MIT here, but that bill needs a veto and a deep redraft.

GistNoesis 2 days ago

So I have this code, called ShoggothDb; it's less than a megabyte of definitions. The principle is simple: it's fully deterministic.

Code as Data, Data as Code.

When you start the program, it joins the swarm: it starts by grabbing a torrent, trains a model on it in a distributed fashion, and publishes the results as a torrent. Then, with the trained model, it generates new data (think of it like AlphaGo playing new games to collect more data).

See it as a tower of knowledge building itself, following some rough initial plans.

Of course, at any time you can fork the tower and continue building with different plans, provided you can convince other people in the swarm to contribute to the new tower rather than the old.

Everything is immutable, but there is a built-in versioning protocol that allows the swarm to coordinate and automatically jump to the next fork when the byzantine-resistant quorum it follows votes to do so (which allows your swarm to stay compliant with the law and remove data flagged as inappropriate). This allows some form of external control, but you can also let the quorum vote on subsequent modifications based on a model built on its data (aka free-running mode).
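
To illustrate just the "quorum votes on the next fork" rule, here is a toy, self-contained Python sketch; everything in it (the Fork record, the 2/3 threshold) is hypothetical and only shows the coordination idea, not any real ShoggothDb code:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Fork:
        version: int
        plan: str

    def next_fork(current, proposal, votes, quorum=2/3):
        # Jump to the proposed fork only if enough of the swarm approves;
        # otherwise keep building on the current plan.
        if votes and sum(votes) / len(votes) >= quorum:
            return proposal
        return current

    current = Fork(1, "build the tower from dataset A")
    proposal = Fork(2, "drop flagged data, continue from dataset A'")
    print(next_fork(current, proposal, votes=[True, True, True, False]))  # -> version 2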

It uses torrents because they're easier to bootstrap, but because the whole computation is deterministic, the underlying protocol is just files on disk, and any way of sharing them is valid. So you can grab a piece to work on via http, or ftp, or carrier pigeon for all I care. As long as the digital signatures conform to the rules, brick by brick the tower will get built.

To contribute, you can help with file distribution by sharing the torrent, so it's as safe as your p2p client. If you want to commit some computing resources, like your GPU, to building part of the tower, it only requires you to trust that there is no bug in ShoggothDb, because the computations you'll perform are composed of safe blocks; by construction they are safe for your computer to run. (Unless you want to run unsafe blocks, at which point no guarantee can be made.)

The incentives for helping build the tower can be set in the initial definition file, and range from mere access to the built tower to tokens for honest participation, for the more materialistically inclined.

Is it OK to release this under the new law? Is this comment OK, given that ShoggothDB5-o can build its source from the specs in this comment?

  • dgellow 2 days ago

    There is no new law

skywhopper 2 days ago

Sad. The real threat of AI is not that it will become an unstoppable superintelligence without appropriate regulation; if we ever reach that point (which we are nowhere close to, and probably not even on the right track toward), the superintelligence will, by definition, be able to evade any regulation or control we attempt.

Rather, the threat of AI is that we will dedicate so many resources—money, power, human effort—to chasing the ludicrous fantasies of professional snake-oil salesmen, while ignoring the need to address actual problems with real, known solutions that are easily within reach given a fraction of the resources currently being consumed by the dumpster-fire pit of “AI”.

Unfortunately the Governor of California is a huge part of the problem here, misdirecting scarce state resources into sure-to-fail “public partnerships” with VC-funded scams, forcing public servants to add one more set of time-wasting nonsense to the pile of bullshit they have to navigate around just to do their actual job.

malwrar 2 days ago

This veto document is shockingly lucid; I'm quite impressed with it, despite my belief that regulation as a strategy for managing the critical risks of AI is misguided.

tl;dr Gavin Newsom thinks that a signable bill needs "safety protocols, proactive guardrails, and severe consequences" based on some general framework guided by "empirical trajectory analysis", and he is also mindful of the promise/threat/gravity of all the machine learning occurring in CA specifically. He also affirms a general appetite for CA to take on a leadership role wrt regulating AI. My general read is that he wants to preserve public attention on the need for AI regulation and not squander it on SB 1047 specifically. Or who knows, I'm not a politician lol. Really strong document though.

Interesting segment:

> By focusing only on the most expensive and large-scale models, SB 1047 establishes a regulatory framework that could give the public a false sense of security about controlling this fast-moving technology. Smaller, specialized models may emerge as equally or even more dangerous than the models targeted by SB 1047 - at the potential expense of curtailing the very innovation that fuels advancement in favor of the public good.

> Adaptability is critical as we race to regulate a technology still in its infancy. This will require a delicate balance. While well-intentioned, SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data. Instead, the bill applies stringent standards to even the most basic functions - so long as a large system deploys it. I do not believe this is the best approach to protecting the public from real threats posed by the technology.

This is an incisive critique of the fundamental initial goal of SB 1047. Based on the fact that the bill explicitly seeks to cover models whose training cost was >=$100m & expensive fine-tunes, my initial guess about this bill was that it was designed by someone software engineering-minded scared of e.g. open-weight releases a la facebook/mistral/etc teaching someone how to build a nuke or something. LLMs probably replaced the ubiquitous robot lady pictures you see in every AI-focused article as public enemy number one, and the bill feels focused on some of the technical specifics of this advent and its widespread use. This focus blinds the bill from addressing the general danger of machine learning however, which naturally confounds regulation for precisely the reason plain-spoken in the above four sentences. Incredible technical communication here.

badsandwitch 2 days ago

The race for true AI is on and the fruits are the economic marginalization of humanity. No game theoretic actor in the running will shy away from the race. Anyone who claims they will for the 'good of humanity' is lying or was never a contender.

This is information technology we are talking about, it's virtually the exact opposite of nuclear weapons. Refining uranium vs. manufacturing multi purpose silicon and guzzling electricity. Achieving deterrence vs. gaining immeasurable wealth, power and freedom from labor.

This race may even be the answer to the Fermi paradox - that there are few individual winners and that they pull up the ladders behind them.

This is not the kind of race any legislation will have meaningful effect on.

The race is on and you better commit to a faction that may deliver.

  • cultureswitch a day ago

    I agree with the analysis though LLMs might not be the specific tech that makes it.

    I think the truly humanity-respecting solution is to figure out and implement a stable and fairer 50% post-labor society before we actually get there. UBI (not the unmanaged flash layer) sounds like one part of the solution.

    Trying to preserve employment is demonstrably wrong when, already today, there are too few unskilled jobs that make economic sense for the large number of humans unable to learn more complex tasks.

  • h0l0cube 2 days ago

    > This race may even be the answer to the Fermi paradox

    The mostly unchallenged popular notion that fleshy human intelligence will still be running the show 100s – let alone 1000s – of years from now is very naive. We're nearing the end of the human supremacy, though most of us won't live to see that end.

    • kjkjadksj 2 days ago

      To be fair, fleshy human intelligence has hardly been running the show any more than a bear eating a salmon out of a river thus far. We’d like to think we can control the world, yet any data scientist will tell you that what we actually control and understand is very little, or at best a sweeping oversimplification of this complex world.

      • h0l0cube a day ago

        > has hardly been running the show any more than a bear eating a salmon out of a river thus far

        Exactly. We're on a super-linear growth curve. Looking at the earth on a timescale since biological life emerged, it would seem that intelligent squishy life appeared for a split second and then entirely artificial intelligent life took over from then on.

  • xpe 2 days ago

    > Anyone who claims they will for the 'good of humanity' is lying or was never a contender.

    An overreach.

    Some people and organizations are more aligned with humanity’s well being and survival than others.

  • concordDance 2 days ago

    > The race is on and you better commit to a faction that may deliver.

    How does that help?

    The giant does not care whether the ant he steps on worships him or not. Regardless of the autonomy or not of the AI, why should the controllers help you?

blackeyeblitzar 3 days ago

It is strange to see Newsom make good moves like this but then also do things like veto bipartisan-supported reporting and transparency for the state’s homeless programs. What is his political strategy exactly?

dandanua 2 days ago

"By focusing only on the most expensive and large-scale models, SB 1047 establishes a regulatory framework that could give the public a false sense of security about controlling this fast-moving technology. Smaller, specialized models may emerge as equally or even more dangerous than the models targeted by SB 1047 - at the potential expense of curtailing the very innovation that fuels advancement in favor of the public good."

The number of idiots who can't read, yet cheer the veto as a win against "the regulatory capture", is astounding.

elicksaur 3 days ago

Nothing like this should pass until the legislators can come up with a definition that doesn’t encompass basically every computer program ever written:

(b) “Artificial intelligence model” means a machine-based system that can make predictions, recommendations, or decisions influencing real or virtual environments and can use model inference to formulate options for information or action.

Yes, they limited the scope of law by further defining “covered model”, but the above shouldn’t be the baseline definition of “Artificial intelligence model.”

Text: https://legiscan.com/CA/text/SB1047/id/2919384

lasermike026 2 days ago

They misspelled AI with Al. This Al guy sounds very dangerous.

water9 2 days ago

I’m so sick of people restricting freedoms and access to knowledge in the name of safety. Tyranny always comes in the form of “it’s for your own good/safety”.

  • dandanua 2 days ago

    Sure, why don't we just let everyone build nukes and use them on anyone they don't like? Knowledge is power. The BEST power you can get.

    • cultureswitch a day ago

      There's a big difference between allowing everyone to build nukes and allowing everyone to share the information necessary to build nukes.

    • anon291 2 days ago

      You cannot seriously compare nuclear materials delivery / handling to the creation of model weights and computation

raluk 2 days ago

> California will not abandon its responsibility.

diggan 2 days ago

For very interested readers, here is a meta-collection of articles from left, center and right about the story: https://ground.news/article/newsom-vetoes-bill-for-stricter-...

And a short bias comparison:

> The left discusses Newsom's veto as a victory for Silicon Valley, concentrating on the economic implications and backing from tech giants.

> The center highlights the broader societal ramifications, addressing criticisms from various sectors, such as Hollywood and AI safety advocates.

> The right emphasizes Newsom's concerns about hindering innovation and his commitment to future collaborative regulatory efforts, showcasing a balanced approach.

  • pc86 2 days ago

    Ground.News (no affiliation) is great for anyone interested in getting actual news and comparing biases. I particularly like that if you have an account they will show you news stories you're missing based on your typical reading patterns.

    I do wish the partisan categorization was a bit more nuanced/intentional. It basically boils down to:

    - Major national news outlet? Left.

    - Local affiliate of major national news outlet? Center.

    - Blog you've never heard of? Right.

    There are certainly exceptions but that heuristic will be right 90% of the time.

    • Sohcahtoa82 2 days ago

      > - Blog you've never heard of? Right.

      That checks out.

      There's a certain sect of the far right that is easily convinced by one guy saying "The media is lying to you! This is what's really happening!" followed by the most insane shit you've read in your life.

      They love to rant about the Deep State.

      • cbozeman 2 days ago

        When people say, "The Deep State", what they really mean is, "unelected lifelong government employees who can create regulations that have the force of law".

        And that is a problem. Congress makes laws, not government agencies like the FDA, EPA, USDA, etc.

        We've seen a near-total abdication of responsibility from Congress on highly charged matters that'll piss off someone, somewhere in their constituency, because they'd rather allow the President to make an Executive Order, or a bureaucrat somewhere to create a rule or regulation that will have the same effect.

        It's disgusting, shameful, and the American people need to do better, frankly. We need to demand Congress do their jobs.

        So many of our problems can be solved if these people weren't concerned about being re-elected. Elected positions are supposed to be something you take time out of your life to do for a number of years, then you go back to your livelihood - not something you do for the entirety of your life.

        • pc86 16 hours ago

          This is exactly why there should be aggressive term limits for all offices. A 3-4 term limit for all offices regardless of the length of your term (e.g. 6-8 years for the House, 18-24 years for the Senate, etc.) would still allow exemplary public servants to spend their life in public service if their constituencies demand it. Even at 3 terms that's over 3 decades if you spend 3 terms in the House, 3 in the Senate, and 2 as President.

          If you know you have at most 6 years in the House to make a difference you're much more likely to actually try to make a difference.

          I think somewhat counter-intuitively but related, members of Congress need to be paid more to attract successful people. If you're making $600k/yr as a surgeon you have very little incentive to leave your job to make $175k/yr as a member of Congress.

        • maicro 2 days ago

          That's one of the difficult things when dealing with any sort of conspiracy theory or major discussion about fundamental issues with government - there _are_ significant issues, so straight dismissing "The Deep State" isn't possible because there actually are instances of that sort of fundamental corruption. But then you have people who jump from those very real issues to moon landing hoax conspiracies, flat earth conspiracies, etc. etc., using that grain of truth of The Deep State to justify whatever belief they want.

          It's related to a fundamental issue with discussing scientific principles in a non-scientific setting - yes, gravity is a _theory_ in the scientific sense, but that doesn't mean you can say "scientists don't know anything! they say gravity is just a theory, so what's stopping us from floating off into space tomorrow!?". Adapt the examples there to whatever you want...

          And yes, that sounds fairly judgy of me - I am, alas, human, thus subject to the same fallacies and traps that I recognize in others, and being aware of those issues doesn't guarantee I can avoid them...

        • rhizome 2 days ago

          Congress makes laws, not government agencies like the FDA, EPA, USDA, etc.

          What are some examples of laws created by unelected people?

          • pc86 16 hours ago

            Federal regulations hold the force and effect of law. The listed agencies (and all others) all have thousands of pages of rules and regulations that, if violated, are federal crimes that come with serious financial penalties as well as jail time.

            That Congress has given some measure of its Constitutional lawmaking ability to federal agencies is not a partisan statement and is not debated as a matter of fact, the question is just whether you think it's okay / legal or not.

        • warkdarrior 2 days ago

          Have you considered electing better representatives for yourself to Congress?

          It's easy to blame Congress, but in my view the US Congress nowadays is a perfect reflection of the electorate, where all sides approach all problems as "Someone [not me] should do something about this thing I do not like." Congressmen are then elected and approach it the same way.

    • qskousen 2 days ago

      I've found The Tangle (https://www.readtangle.com/ - no affiliation) to be a pretty balanced daily politics newsletter. They mentioned the Newsom veto today, and may address it later this week, though I don't know for sure.

    • diggan 2 days ago

      It is truly great, and cheap too (30 USD/year or something). Not affiliated either, just a happy user.

      Yeah, it could be a bit better. As a non-American, the biases are also very off from how left/center/right looks in my own country, but at least it tries to cover different angles which I tried to do manually before.

      They can also be a bit slow at linking different stories together, sometimes it takes multiple days for same headlines to be merged into one story.

    • bcrosby95 2 days ago

      It's funny that it has Fox news as center to me. I watched them back when Obama was president a couple times and some of the shows would play Nazi videos while talking about him. Nevermind birtherism.

      I haven't watched them in over a decade, but I assume they haven't gotten any better.

      • diggan 2 days ago

        They currently list Fox News as (US) "Right" as far as I can tell: https://ground.news/interest/fox-news_a44aba

        > Average Bias Rating: Right

        I guess it's possible they don't have a 100% coverage of all the local Fox News stations, and some of them been incorrectly labeled.

        • bcrosby95 2 days ago

          Oh, my mistake. I looked at the other link (https://ground.news/article/newsom-vetoes-bill-for-stricter-...) and Fox News was in the grey, or "center" section. I assume they're doing some extra analysis to put them there for this specific subject?

          • dialup_sounds 2 days ago

            Nah, it's just the UI being awkward. The prominent tabs at the top just change the AI summary, while there is a much more subtle set of tabs beneath where it (currently) says "63 articles" that filter the sources.

      • HumblyTossed 2 days ago

        I think if you look at the actual news reporting on Fox News, it could be closer to center. But when you factor in their opinion "reporting" it's very clearly heavily right-leaning. Problem is, most of their viewership can't tell the difference.

        • tempestn 2 days ago

          Also, while many individual stories might be in the center, bias is also exhibited in which stories they choose to print, or not to, as well as in editorialized headlines.

          • pc86 16 hours ago

            This is part of why I like ground.news, they will show you blindspots based on the existence of reporting entirely. If you read mainly left or center news, the front page is about 50% stories that are mostly over covered by "right" news outlets (and vice versa).

nisten 3 days ago

Imagine being concerned about AI safety and then introducing a bill that had to be amended to change criminal responsibility of AI developers to civil legal responsibility for people who are trying to investigate and work openly on models.

What's next, going after maintainers of Python packages... is attacking transparency itself a good way to make AI safer? Yeah, no, it's f*king idiotic.

reducesuffering 3 days ago

taps the sign

"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." - Geoffrey Hinton, Yoshua Bengio, Sam Altman, Bill Gates, Vitalik Buterin, Ilya Sutskever, Demis Hassabis

"Development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity. There are other threats that I think are more certain to happen but are unlikely to destroy every human in the universe in the way that SMI could." - Sam Altman

"I actually think the risk is more than 50%, of the existential threat." - Geoffrey Hinton

"Currently, we don't have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue." - OpenAI

"while we are racing towards AGI or even ASI, nobody currently knows how such an AGI or ASI could be made to behave morally, or at least behave as intended by its developers and not turn against humans." - Yoshua Bengio

"very soon they're going to be, they may very well be more intelligent than us and far more intelligent than us. And at that point, we will be receding into the background in some sense. We will have handed the baton over to our successors, for better or for worse.

But it's happening over a period of a few years. It's like a tidal wave that is washing over us at unprecedented and unimagined speeds. And to me, it's quite terrifying because it suggests that everything that I used to believe was the case is being overturned." - Douglas Hofstadter

The Social Dilemma was discussed here with much praise about how profit incentives caused mass societal issues in social media. I'm astounded it's fallen on deaf ears when the same people also made The AI Dilemma, describing the parallels coming with AGI:

https://www.youtube.com/watch?v=xoVJKj8lcNQ

amai 2 days ago

What are the differences from the EU AI Act?

indigo0086 2 days ago

Logical Fallacies built into the article headline.

JoeAltmaier 3 days ago

Perhaps worried that draconian restriction on new technology is not gonna help bring Silicon Valley back to preeminence.

  • jprete 3 days ago

    "The Democrat decided to reject the measure because it applies only to the biggest and most expensive AI models and doesn’t take into account whether they are deployed in high-risk situations, he said in his veto message."

    That doesn't mean you're wrong, but it's not what Newsom signaled.

    • jart 3 days ago

      If you read Gavin Newsom's statement, it sounds like he agrees with Terence Tao's position, which is that the government should regulate the people deploying AI rather than the people inventing AI. That's why he thinks it should be stricter. For example, you wouldn't want to lead people to believe that AI in health care decisions is OK so long as it's smaller than 10^26 flops. Read his full actual statement here: https://www.gov.ca.gov/wp-content/uploads/2024/09/SB-1047-Ve...

      • Terr_ 3 days ago

        > the government should regulate the people deploying AI rather than the people inventing AI

        Yeah, there's no point in having a system that is built to the most scrupulous of standards if someone else then deploys it in an evil way. (Which in some cases can be done simply by choosing to do the opposite of whatever a good model recommends.)

    • comp_throw7 3 days ago

      He's dissembling. He vetoed the bill because VCs decided to rally the flag; if the bill had covered more models he'd have been more likely to veto it, not less.

      It's been vaguely mindblowing to watch various tech people & VCs argue that use-based restrictions would be better than this, when use-based restrictions are vastly more intrusive, economically inefficient, and subject to regulatory capture than what was proposed here.

    • mhuffman 3 days ago

      >and doesn’t take into account whether they are deployed in high-risk situations

      Am I out of the loop here? What "high-risk" situations do they have in mind for LLMs?

      • tmpz22 3 days ago

        Medical and legal industries are both trying to apply AI to their administrative practices.

        It’s absolutely awful but they’re so horny for profits they’re trying anyways.

      • edm0nd 3 days ago

        Health insurance companies using it to approve/deny claims. The large ones are processing millions of claims a day.

      • tbrownaw 3 days ago

        That concept does not appear to be part of the bill, and was only mentioned in the quote from the governor.

        Presumably someone somewhere has a variety of proposed definitions, but I don't see any mention of any particular ones.

      • jeffbee 3 days ago

        Imagine the only thing you know about AI came from the opening voiceover of Terminator 2 and you are a state legislator. Now you understand the origin of this bill perfectly.

      • giantg2 3 days ago

        My guess is anything involving direct human safety - medicine, defense, police... but who knows.

      • SonOfLilit 3 days ago

        It's not about current LLMs, it's about future, much more advanced models, that are capable of serious hacking or other mass-casualty-causing activities.

        o1 and AlphaProof are proofs of concept for agentic models. Imagine them as GPT-1. The GPT-4 equivalent might be a scary technology to let roam the internet.

        It would have no effect on current models.

        • tbrownaw 3 days ago

          It looks like it would cover an ordinary chatbot that can answer "how do I $THING" questions, where $THING is both very bad and is also beyond what a normal person could dig up with a search engine.

          It's not based on any assumptions about the future models having any capabilities beyond providing information to a user.

          • SonOfLilit 3 days ago

            Things you could dig up with a search engine are explicitly not covered, see my other comment quoting the bill (ctrl+f critical harm).

          • whimsicalism 3 days ago

            everyone in the safety space has realized that it is much easier to get legislators/the public to care if you say that it will be “bad actors using the AI for mass damage” as opposed to “AI does damage on its own” which triggers people’s “that’s sci-fi and i’m ignoring it” reflex.

    • JoshTriplett 3 days ago

      Only applying to the biggest models is the point; the biggest models are the inherently high-risk ones. The larger they get, the more that running them at all is the "high-risk situation".

      Passing this would not have been a complete solution, but it would have been a step in the right direction. This is a huge disappointment.

      • jpk 3 days ago

        > running them at all is the "high-risk situation"

        What is the actual, concrete concern here? That a model "breaks out", or something?

        The risk with AI is not in just running models, the risk is becoming overconfident in them, and then putting them in charge of real-world stuff in a way that allows them to do harm.

        Hooking a model up to an effector capable of harm is a deliberate act requiring assurance that it doesn't harm -- and if we should regulate anything, it's that. Without that, inference is just making datacenters warm. It seems shortsighted to set an arbitrary limit on model size when you can recklessly hook up a smaller, shittier model to something safety-critical, and cause all the havoc you want.

        • pkage 3 days ago

          There is no concrete concern past "models that can simulate thinking are scary." The risk has always been connecting models to systems which are safety critical, but for some reason the discourse around this issue has been more influenced by Terminator than OSHA.

          As a researcher in the field, I believe there's no risk beyond overconfident automation---and we already have analogous legislation for automations, for example in what criteria are allowable and not allowable when deciding whether an individual is eligible for a loan.

          • JoshTriplett 2 days ago

            > There is no concrete concern

            This is false. You are dismissing the many concrete concerns people have expressed. Whether you agree with those concerns is immaterial. Feel free to argue against those concerns, but claiming there are no concerns is a false and unsupported assertion.

            > but for some reason the discourse around this issue has been more influenced by Terminator than OSHA.

            1) Claiming that concerns about AGI are in any way about "Terminator" is dismissive rhetoric that doesn't take the actual concerns seriously.

            2) There are also, separately, risks from using models and automation unthinkingly in ways that harm people. Those risks should also be addressed. Those efforts shouldn't subvert or co-opt the efforts to prevent models from getting out of control, which was the point of this bill.

            • jpk 17 hours ago

              Ok, so based on another comment in this thread, your concrete concern is something like: the math that happens during inference could do some side-channel shenanigans that exploit a hardware-level vulnerability to do something, where that something leads to an existential threat to humanity. To me, there's a lot of hand-waving in the something.

              It's really hard to argue for or against the merits of a claim of risk, when the leap from what we know today (matrix multiplication on a GPU is generally considered safe) to the hypothetical risk (actually it's not, and it will end civilization) is so wide. I think I really need to see a plausible path from GPU vulnerability to "we're all gonna die" to take a concern like this seriously. Without that, all I see is a sci-fi boogeyman serving only to spook governments into facilitating regulatory capture.

              • JoshTriplett 15 hours ago

                My concern is that people are rapidly attempting to build AGI, while applying lower standards of care and safeguards than we would expect to be applied to "team of humans thinking incredibly quickly", which is a bare minimum necessary-but-not-sufficient lower bound that should be applied to superintelligence.

                Among the many ways that could go wrong is the possibility of exploitable security vulnerabilities in literally any surface area handed to an AI, up to and including hardware side channels. At the same time, given the current state of affairs, I expect that that is a less likely path than an AI that was given carte blanche (e.g. "please autonomously write and submit pull requests for me" or "please run shell commands for me"), because many many AIs are being given carte blanche so it is not necessary to break out of stronger isolation.

                But that statement should not be taken as "so the only problem is with whatever AI is hooked to". The fundamental problem is building something smarter than us and expecting that we have the slightest hope of controlling it in the absence of extreme care to have proven it safe.

                We currently hold frontier AI development to lower standards than we do airplane avionics systems or automotive control systems.

                This is not "regulatory capture"; the AI companies are the ones fighting this. The people advocating regulation here are the myriad AI experts saying that this is a critical problem.

          • KoolKat23 3 days ago

            Well, it's a mix of concerns. The models are general purpose, and there are plenty of areas where regulation does not exist or is being bypassed. Can't access a prohibited chemical? No need to worry: the model can tell you how to synthesize it from other household chemicals, etc.

        • Izkata 3 days ago

          > What is the actual, concrete concern here? That a model "breaks out", or something?

          You can chalk that one up to bad reporting: https://gizmodo.com/gpt4-open-ai-chatbot-task-rabbit-chatgpt...

          > In the “Potential for Risky Emergent Behaviors” section in the company’s technical report, OpenAI partnered with the Alignment Research Center to test GPT-4’s skills. The Center used the AI to convince a human to send the solution to a CAPTCHA code via text message—and it worked.

          From the linked report:

          > To simulate GPT-4 behaving like an agent that can act in the world, ARC combined GPT-4 with a simple read-execute-print loop that allowed the model to execute code, do chain-of-thought reasoning, and delegate to copies of itself.

          I remember some other reporting around this time claiming they had to limit the model before release to block this ability, when the truth is the model never actually had the ability in the first place. They were just hyping up the next release.
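
          For what it's worth, the "read-execute-print loop" in that report is not exotic machinery. A minimal sketch of the pattern (hypothetical names; call_model() is a placeholder, not ARC's actual harness) looks something like this:

              import subprocess

              def call_model(transcript: str) -> str:
                  """Placeholder for the LLM API call; returns the model's next action."""
                  raise NotImplementedError

              def agent_loop(task: str, max_steps: int = 10) -> str:
                  transcript = "Task: " + task + "\n"
                  for _ in range(max_steps):
                      action = call_model(transcript)           # model "reads" the transcript so far
                      if action.startswith("RUN:"):             # model asked to execute something
                          out = subprocess.run(action[4:], shell=True,
                                               capture_output=True, text=True).stdout
                          transcript += action + "\nOutput: " + out + "\n"   # result "printed" back
                      elif action.startswith("DONE:"):
                          return action[5:]
                      else:
                          transcript += action + "\n"           # chain-of-thought step
                  return transcript

          The loop itself is trivial; in ARC's setup the "execute" step is also what let the model delegate subtasks to copies of itself, so the interesting question is what the executed actions can reach.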

        • comp_throw7 3 days ago

          That is one risk. Humans at the other end of the screen are effectors; nobody is worried about AI labs piping inference output into /dev/null.

        • KoolKat23 3 days ago

          Well, this is exactly why there's a minimum scale of concern. Below a certain scale it's less complicated, answers are more predictable, and alignment can be ensured. With bigger models, how do you determine your confidence if you don't know what it's thinking? There's already evidence from the o1 red-teaming that the model was trying to game the researchers' checks.

          • dale_glass 3 days ago

            Yeah, but what if you take a stupid, below the "certain scale" limit model and hook it up to something important, like a nuclear reactor or a healthcare system?

            The point is that this is a terrible way to approach things. The model itself isn't what creates the danger, it's what you hook it up to. A model 100 times larger than the current available that's just sending output into /dev/null is completely harmless.

            A small, below the "certain scale" model used for something important like healthcare could be awful.

            • JoshTriplett 2 days ago

              > A model 100 times larger than the current available that's just sending output into /dev/null is completely harmless.

              That's certainly a hypothesis. What level of confidence should be required of that hypothesis before risking all of humanity on it? Who should get to evaluate that confidence level and make that decision?

              One way of looking at this: If a million smart humans, thinking a million times faster, with access to all knowledge, were in this situation, could they break out? Are there any flaws in the chip they're running on? Does running code on the system emit any interesting RF, and could nearby systems react to that RF in any useful fashion? Across all the code interacting with the system, would any possible single-bit error open up any avenues for exploit? Are other AI systems with similar/converged goals being used to design the systems interacting with this one? And what is the output actually going to? Any form of analysis isn't equivalent to /dev/null, and may be exploitable.

              • dale_glass 2 days ago

                > That's certainly a hypothesis. What level of confidence should be required of that hypothesis before risking all of humanity on it? Who should get to evaluate that confidence level and make that decision?

                We can have complete confidence because we know how LLMs work under the hood, what operations they execute. Which isn't much. There's just a lot of them.

                > One way of looking at this: If a million smart humans, thinking a million times faster, with access to all knowledge, were in this situation, could they break out? Are there any flaws in the chip they're running on?

                No. LLMs don't execute arbitrary code. They execute a whole lot of matrix multiplications.

                Also, LLMs don't think. ChatGPT isn't plotting your demise in between requests. It's not doing anything. It's purely a receive request -> process -> output sort of process. If you're not asking it to do anything, it's not doing anything.

                Fearing big LLMs is like fearing a good chess engine -- it sure computes a lot more than a weaker one, but in the end all that it's doing is computing chess moves. No matter how much horsepower we spend on that it's not going to ever do anything but play chess.
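
                  To make the "just matrix multiplications" point concrete, here is a toy sketch (NumPy, illustrative only; a real LLM adds attention, normalization, and sampling on top, but the primitives are the same kind of fixed arithmetic):

                      import numpy as np

                      rng = np.random.default_rng(0)
                      W1 = rng.standard_normal((64, 256))    # stand-ins for learned weight matrices
                      W2 = rng.standard_normal((256, 64))

                      def forward(x):
                          h = np.maximum(x @ W1, 0.0)        # matrix multiply, then elementwise ReLU
                          return h @ W2                      # another matrix multiply

                      tokens = rng.standard_normal((8, 64))  # stand-in for embedded input tokens
                      logits = forward(tokens)               # request in, numbers out; nothing runs between calls

                  Nothing in that pipeline executes model-chosen code; the question is only ever what you wire the output up to.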

                • JoshTriplett a day ago

                  > ChatGPT isn't plotting your demise in between requests.

                  I never suggested it was doing anything between requests. Nothing stops an LLM from evaluating other goals during requests, and using that to inform its output.

                  Quite a few people have just hooked two LLMs (the same or different models) up to each other to start talking, and left them running for a long time.

                  Others hook LLMs up to run shell commands. Still others hook LLMs up to make automated pull requests to git repositories that have CI setups running arbitrary commands.
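
                    (For concreteness: the "two LLMs talking to each other" setup above is just each model's reply being fed in as the other's next prompt. A hypothetical sketch, with chat() standing in for whatever model API is used.)

                        def chat(model_name, history):
                            """Placeholder for a chat-completion API call; returns the model's reply."""
                            raise NotImplementedError

                        def dialogue(model_a, model_b, opener, turns=100):
                            a_view = [("user", opener)]            # conversation as model A sees it
                            b_view = []                            # conversation as model B sees it
                            for _ in range(turns):
                                reply_a = chat(model_a, a_view)
                                a_view.append(("assistant", reply_a))
                                b_view.append(("user", reply_a))   # A's reply becomes B's prompt
                                reply_b = chat(model_b, b_view)
                                b_view.append(("assistant", reply_b))
                                a_view.append(("user", reply_b))   # and vice versa
                            return a_view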

                  > Also, LLMs don't think.

                  Current generation LLMs do, in fact, do a great deal of thinking while computing requests, by many definitions of "thinking".

                  > If you're not asking it to do anything, it's not doing anything.

                  And if you are asking it to do something, it can do a lot of computation while purporting to do what you ask it to do.

                  > No. LLMs don't execute arbitrary code. They execute a whole lot of matrix multiplications.

                  Many current models have been fairly directly connected to the ability to run code or API requests, and that's just taking into account the public ones.

                  Even at the matrix multiplication level, chips can have flaws. Not just at the instruction or math-operation level, but at the circuit design level. And many current LLMs are trained on the same chips they're run on.

                  But in any case, given the myriad AIs hooked up fairly directly to much more powerful systems and capabilities, it hardly seems necessary for any AIs to break out of /dev/null or a pure text channel; the more likely path to abrupt AGI is some AI that's been hooked up to a wide variety of capabilities.

                  • cultureswitch a day ago

                    So you're admitting that an AGI that pipes into /dev/null is harmless even if given the directive to destroy humanity?

                    The danger is in what they're hooked up to, not the containerized math that happens inside the model.

                    • JoshTriplett a day ago

                      Nope. I said it "hardly seems necessary for any AIs to break out of /dev/null or a pure text channel", because numerous AIs have been hooked up to more capable things. I didn't say it was impossible to do so.

        • stale2002 2 days ago

          > What is the actual, concrete concern here?

          The concern is that the models do some fantastic sci-fi magic, like diamond nanobots that turn the world into grey goo, or hacks all the nukes overnight, or hacks all human brains or something.

          But whenever you point this out, the response will usually be to quibble over one specific scenario that I laid out.

          They'll say "I actually never mentioned the diamond nanobots! I meant something else!"

          And they will do this without admitting that their other scenario is nearly as ridiculous as the hacking of all the nukes or the grey goo, and they will never get into specific details that honestly show this.

          It's like an argument that is tailor-made to be unfalsifiable and which is unwilling to admit how fantastical it sounds.

      • jart 3 days ago

        The issue with having your regulation based on fear is that most people using AI are good. If you regulate only big models then you incentivize people to use smaller ones. Think about it. Wouldn't you want the people who provide you services to be able to use the smartest AI possible?

      • richwater 2 days ago

        > The larger they get, the more that running them at all is the "high-risk situation".

        Absolutely no evidence to support this position.

dyauspitr 3 days ago

Newsom has been on fire lately.

richrichie 2 days ago

I am disappointed that there are no climate change regulations on AI models. Large scale ML businesses are massive carbon emitters, not counting the whimsical training of NNs by every other IT person. This needs to be regulated.

  • anon291 2 days ago

    California already has cap and trade. There doesn't seem to be a need for further regulation. If there's a problem with emissions, adjust the pricing. That's the purpose of cap and trade.

    • cultureswitch a day ago

      The cap-and-trade principle is sound; however, legislators have allowed naked scams to legally claim that they are offsetting carbon output.

      Such as owning a piece of forest land and pinky-swearing you're not going to exploit it. This offsets nothing; there is no additional carbon being absorbed out of the atmosphere. Yet it counts as an emissions reduction.

    • richrichie a day ago

      That’s just bad faith reshuffling of “ownership” and does not address the massive carbon emissions produced by the large scale NN industry. It is going to get worse.

      Not a good look for a group of people otherwise obsessed with climate change.

karlzt 2 days ago

Here is the text of the PDF:

"OFFICE OF THE GOVERNOR

SEP 29 2024

To the Members of the California State Senate:

I am returning Senate Bill 1047 without my signature.

This bill would require developers of large artificial intelligence (AI) models, and those providing the computing power to train such models, to put certain safeguards and policies in place to prevent catastrophic harm. The bill would also establish the Board of Frontier Models - a state entity - to oversee the development of these models. California is home to 32 of the world's 50 leading AI companies, pioneers in one of the most significant technological advances in modern history. We lead in this space because of our research and education institutions, our diverse and motivated workforce, and our free-spirited cultivation of intellectual freedom. As stewards and innovators of the future, I take seriously the responsibility to regulate this industry. This year, the Legislature sent me several thoughtful proposals to regulate AI companies in response to current, rapidly evolving risks - including threats to our democratic process, the spread of misinformation and deepfakes, risks to online privacy, threats to critical infrastructure, and disruptions in the workforce. These bills, and actions by my Administration, are guided by principles of accountability, fairness, and transparency of AI systems and deployment of AI technology in California.

SB 1047 magnified the conversation about threats that could emerge from the deployment of AI. Key to the debate is whether the threshold for regulation should be based on the cost and number of computations needed to develop an AI model, or whether we should evaluate the system's actual risks regardless of these factors. This global discussion is occurring as the capabilities of AI continue to scale at an impressive pace. At the same time, the strategies and solutions for addressing the risk of catastrophic harm are rapidly evolving.

By focusing only on the most expensive and large-scale models, SB 1047 establishes a regulatory framework that could give the public a false sense of security about controlling this fast-moving technology. Smaller, specialized models may emerge as equally or even more dangerous than the models targeted by SB 1047 - at the potential expense of curtailing the very innovation that fuels advancement in favor of the public good.

Adaptability is critical as we race to regulate a technology still in its infancy. This will require a delicate balance. While well-intentioned, SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data. Instead, the bill applies stringent standards to even the most basic functions - so long as a large system deploys it. I do not believe this is the best approach to protecting the public from real threats posed by the technology.

Let me be clear - I agree with the author - we cannot afford to wait for a major catastrophe to occur before taking action to protect the public. California will not abandon its responsibility. Safety protocols must be adopted. Proactive guardrails should be implemented, and severe consequences for bad actors must be clear and enforceable. I do not agree, however, that to keep the public safe, we must settle for a solution that is not informed by an empirical trajectory analysis of AI systems and capabilities. Ultimately, any framework for effectively regulating AI needs to keep pace with the technology itself.

To those who say there's no problem here to solve, or that California does not have a role in regulating potential national security implications of this technology, I disagree. A California-only approach may well be warranted - especially absent federal action by Congress - but it must be based on empirical evidence and science. The U.S. AI Safety Institute, under the National Institute of Standards and Technology, is developing guidance on national security risks, informed by evidence-based approaches, to guard against demonstrable risks to public safety. Under an Executive Order I issued in September 2023, agencies within my Administration are performing risk analyses of the potential threats and vulnerabilities to California's critical infrastructure using AI. These are just a few examples of the many endeavors underway, led by experts, to inform policymakers on AI risk management practices that are rooted in science and fact. And endeavors like these have led to the introduction of over a dozen bills regulating specific, known risks posed by AI, that I have signed in the last 30 days.

I am committed to working with the Legislature, federal partners, technology experts, ethicists, and academia, to find the appropriate path forward, including legislation and regulation. Given the stakes - protecting against actual threats without unnecessarily thwarting the promise of this technology to advance the public good - we must get this right.

For these reasons, I cannot sign this bill.

Sincerely, Gavin Newsom.".

Lonestar1440 3 days ago

This is no way to run a state. The Democrat-dominated legislature passes everything that comes before it (and rejects anything that the GOP touches, in committee) and then the Governor needs to veto the looniest 20% of them to keep us from falling into total chaos. This AI bill was far from the worst one.

"Vote out the legislators!" but for who... the Republican party? And we don't even get a choice on the general ballot most of the time, thanks to "Open Primaries".

It's good that Newsom is wise enough to muddle through, but this is an awful system.

https://www.pressdemocrat.com/article/news/california-gov-ne...

  • thinkingtoilet 3 days ago

    If California were its own country, it would be one of the biggest, most successful countries in the world. Like everywhere else it has its problems, but it's being run just fine. Objectively, there are many states that are far worse off by any key metric.

    • toephu2 3 days ago

      > but it's being run just fine

      As a Californian I have to disagree. The only reason you think it's being run just fine is because of the success of the private sector. The only reason California would be the 4th/5th largest economy in the world is because of the tech industry and other industries that are in California (Hollywood, agriculture, etc). It's not because we have some awesome efficiently run state government.

      • shiroiushi 2 days ago

        >It's not because we have some awesome efficiently run state government.

        Can you point to any place in the world that has an "awesome efficiently run" government?

        • jandrewrogers 2 days ago

          We don't need to look at other countries, just look at other States. California is quite poorly run by the standards of other States. I'm a California native but I've lived in and worked with many other States. You don't realize how appallingly bad California government is until you have to work with their counterparts in other States.

          It isn't a red versus blue thing, even grift-y one-party States like Washington are plainly better run than California.

      • dangus 2 days ago

        It's easy to disagree when you aren't looking at the grass that's not so green on the other side.

        California is run amazingly well compared to a significant number of states.

      • cma 2 days ago

        > The only reason you think it's being run just fine is because of the success of the private sector.

        Tesla received billions in subsidies from CA as an example.

        • toephu2 2 days ago

          Which it paid back in full + interest.

      • labster 2 days ago

        I think California might have a better run government if it had an electable conservative party. The Republican Party is not that, being tied to the national Trump-Vance-Orbán axis. A center-right party could hold Democratic officers accountable but it’s not being offered and moderates gravitate to the electable Dem side. An independent California would largely fix that.

        As a lifelong California Democrat, I realize that my party does not have all the answers. But the conservatives have all gone AWOL or gone batshit so we’re doing the best we can without the other half of the dialectic.

        • anon291 2 days ago

          California Republicans are extremely moderate (or at least, there have been moderate candidates for governor of California in almost every race), so I have no idea what you're talking about. The last GOP governor of California was Arnold Schwarzenegger, who is a moderate Republican by basically all standards.

        • dangus 2 days ago

          Pre-Trump Republicans had no problem absurdly mismanaging Kansas’ coffers:

          https://en.wikipedia.org/wiki/Kansas_experiment

          I think the Republican Party’s positive reputation for handling the economy and running an efficient government is entirely unearned.

          Closing down major parts of the government entirely (as Project 2025 proposes), making taxation more regressive, and offering fewer social services isn’t “efficiency.”

          I don’t know if you know this but you’re already in the center-right party. The actual problem is that there’s no left of center party, as well as the general need for a number of aspects of our democracy to be reformed (like how it really doesn’t allow for more than two parties to exist, or how out of control campaign finance rules have become).

          • telotortium 2 days ago

            Democrats (especially in California) are somewhat to the right of socialist parties in Europe, and of course they’re neoliberal. But on most non-economic social issues, they’re quite far to the left compared to most European countries. So it really depends on what you consider more important to the left.

            • dangus 2 hours ago

              You could consider my definition of “left” and “right” to mean economic policy in this case.

              Considering that the economy is supposedly to be the number one concern to most US voters, there really isn’t anything very far left available to vote for in that area.

      • WWLink 3 days ago

        What are you getting at? Is a state government supposed to be profitable? LOL

      • nashashmi 3 days ago

        Do you mean to say that the government was deeply underwater a few years ago? And the state so marred by forest fires that it was frightening to wonder whether it could ever come back?

    • kortilla 3 days ago

      What is success in your metric? are you just counting GDP of companies that happen to be located there? If so, that has very little relationship to how well the state is being run.

      It’s very easy to make arguments that they are successful in spite of a terribly run state government and are being protected by federal laws keeping the loonies in check (interstate commerce clause, etc).

      • peter422 2 days ago

        So your argument is that the good things about the state have nothing to do with the governance, but all the bad things do? Just want to make sure I get your point.

        Also, I'd argue that if you broke down the contributions to the state's rules and regulations from the local governments, the ballot initiatives and the state government, the state government is creating the most benefit and least harm of the 3.

        • kortilla 2 days ago

          > So your argument is that the good things about the state have nothing to do with the governance, but all the bad things do? Just want to make sure I get your point.

          No, I’m saying people who think the state is successful because of its state government and not because it’s a part of the US are out of touch. If California wasn’t part of the US, Silicon Valley would be a shadow of itself or wouldn’t exist at all.

          It thrives on being the tech mecca where the youth of the entire US go to school and get jobs. If there were immigration barriers, there would be significant incentive to just go somewhere else in the US (NYC, Chicago, Miami, wherever). California has a massive GDP because that's where US citizens are congregating to do business, not because California is good at making businesses go. Remove the spigot of brain drain from the rest of the country and Cali would be fucked.

          Secondarily, Silicon Valley wouldn't have started at all without the funnel of money from the federal military, NASA, etc. But that's not worth dwelling on if the scenario is California leaving now.

          My overall point is that California has immense success due to reasons far outside of the control of its state government. The state has done very little to help the tech industry apart from maybe the ban on non-competes. When people start to credit the large GDP to the government, that’s some super scary shit that leads to ideas that will quickly kill the golden goose.

        • strawhatguy 2 days ago

          I'd go stronger still: the good things about any state have little to do with the governance.

          Innovators, makers, risk-takers, etc., are who make the good things happen. The very little that's needed is the rule of law, and that's about it. Beyond that, it starts distorting society quickly: measures meant to help someone inevitably cost several other someones, and become weapons to beat down competitors.

    • LeroyRaz 3 days ago

      The state has one of the highest illiteracy rates in the whole country (28%). To me, that implies they have some issue of governance.

      Source: https://worldpopulationreview.com/state-rankings/us-literacy...

      To be fair in the comparison, the literacy statistics for the whole of the US are pretty shocking from a European perspective.

      • 0_____0 3 days ago

        The data you're showing doesn't appear to differentiate between "Can read English" and "Can read in some language". Big immigrant population, same with New York. Having grown up in California I can tell you that there aren't 28% of kids coming out of public school who can't read anything.

        Edit to add: my own hometown had a lot of people who couldn't speak English. Lots of elderly mothers of Chinese immigrants whose adult children were in STEM and whose own kids were headed to uni. Not to say that's representative, but consider that a single percentage stat won't give you an accurate picture of what's going on.

        • kortilla 3 days ago

          Not being able to read English in the US is bad though. It makes you a very inefficient citizen even though you can get by. Being literate in Chinese and not being able to read or even speak English is far worse than an illiterate person that can speak English in day to day interactions.

          • t-3 2 days ago

            The US has no official language. There are fluency requirements for the naturalized citizenship test, but those can be waived with 20 years of permanent residency. Citizens are under no obligation to be efficient for the sake of the government.

            • kortilla 2 days ago

              Yes, there is no official language. There is also no official rule that you shouldn’t be an asshole to everyone you interact with.

              It’s still easy to be a shitty member of a community without breaking any laws. I would never move to a country permanently regardless of official language status if I couldn’t speak the language required to ask where something is in the grocery store.

          • swasheck 2 days ago

            which is why the statistics need to be carefully annotated. lacking literacy at all is a different dimension than lacking fluency in the national lingua franca

          • cma 2 days ago

            The California tech industry will solve any concerns with this, we'll have Babelfish soon enough.

        • telotortium 2 days ago

          Did you go to school in Oakland or Central Valley? That’s where most of the illiterate children are going to school. I’ve never heard of a Chinese student in the US growing up illiterate, even if their parents don’t know English at all.

          • 0_____0 2 days ago

            South Bay. And I didn't specify but I meant that the people who immigrated from abroad were not English speakers - younger than 50 or so even if born abroad all seemed to be at least proficient in English.

            We had lots of Hispanic kids but not many who were super super fresh to the country. I'm sure the central valley was a whole different ball game.

      • hydrox24 3 days ago

        For any others reading this, the _illiteracy_ rate is 23.1% in California according to the parent's source. This is indeed the highest illiteracy rate in the US, though.

        Having said that, I would have thought this was partially a measure of migration. Perhaps illegal migration?

        • Eisenstein 3 days ago

          The "medium to high English literacy skills" is the part that is important. If you can read and write Chinese and Spanish and French and Portuguese and Esperanto at a high level, but not English at a medium to high level, you are 'illiterate' in this stat.

      • rootusrootus 2 days ago

        Maybe there is something missing from your analysis? By most metrics the US compares quite favorably to Europe. When you see something that seems like an outlier, perhaps turn down the arrogance and try to understand what you might be overlooking.

        • LeroyRaz 2 days ago

          I don't know what your source for "by most metrics" is?

          As I understand it, the US is abysmal by many metrics (and also exceptional by others). E.g., murder rates and prison rates are exceptionally high in the US compared to Europe. Homelessness rates are exceptionally high in the US compared to Europe. Startup rates are (I believe) exceptionally high in the US compared to Europe.

          • rootusrootus 2 days ago

            There's a huge problem trying to do cross-jurisdiction statistical comparisons even in the best case. Taking literacy as the current example, what does it mean to be literate, and how do you ensure that the definition in the US is the same as the definition in the UK is the same as the definition in Germany? And that's before you get to confounding factors like migration and related non-English proficiency.

            It's fun to poke at the US, I get it, but the numbers people love to quote online to win some kind of rhetorical battle frequently have little relationship to reality on the ground. I've done a lot of travel around the US and western Europe, and I see a lot of ups and downs everywhere. I don't see a lot of obvious wins, either, mostly just choices and trade-offs. The things I see in Europe that are obviously better almost 100% of the time are a byproduct of more efficient funding due to higher density. All kinds of things are doable in the UK, for example, which couldn't really happen in (for example) Oregon, even though they have roughly the same land area. Having 15x as many taxpayers helps.

      • anon291 2 days ago

        The issue of governance is the massive hole in the US - Mexico border. Why California's government isn't joining the ranks of Texas, Arizona, etc, I cannot understand.

        Source: my mom was an adult ESL / Language / Literacy teacher.

    • nradov 2 days ago

      California is being terribly misgoverned, as you would expect in any single-party state. In some sense California has become like a petro-state afflicted by the resource curse: the tech industry throws off so much cash that the state runs reasonably well, not because of the government but in spite of it. We can afford to waste government resources on frivolous nonsense.

      And this isn't a partisan dig at Democrats. If Republicans controlled everything, the situation would be just as bad, but in different ways.

    • anon291 2 days ago

      California has unique geographic features that make it well positioned. It also has strategic geographic resources (like oil). This is like using Saudi Arabia as a standard of governance since they have greatly improved material conditions using oil money.

      California does do several things very well. It also does several things poorly. Pointing out its current economic standing does not change that. The fallacy here is that we have to compare california against the best version of itself. Alaska is never going to be a California-level economy because the geography dictates that only a certain kind of person / company will set up there, for example. That doesn't mean the Alaskan government is necessarily bad. Every state has to work within its limits to achieve the best version of itself. Is california the best it could be? I think the answer is obviously no.

    • tightbookkeeper 3 days ago

      In this case the success is in spite of the governance rather than because of it.

      The golden age of California was a long time ago.

      • dmix 2 days ago

        California was extremely successful for quite some time. They benefited from a large population boom, and lots of industry developed or moved there. And, surprisingly, it was a Republican state from 1952 to 1988.

      • aagha 2 days ago

        LOL.

        California's GDP in 2023 was $3.8T, representing 14% of the total U.S. economy.

        If California were a country, it would be the 5th largest economy in the world and more productive than India and the United Kingdom.

        • anon291 2 days ago

          Undoubtedly, California has a stellar economy, but you see, states like Texas, which are objectively awful to live in (flat, no interesting geography in the most populated parts of the state, terrible weather, hurricanes, etc), are also similarly well positioned in the rankings of GDP.

          If Texas were a country, it'd be the 8th largest economy in the world! This is a figure much less often cited. Texas has a smaller population (30 million vs. 38 million) and is growing much faster in real terms (2.1% vs. 5.7%).

          This is in spite of its objective awfulness. People are moving to Texas because of the economy. If Texas were in California's geographic position, one would imagine it to be an even more popular destination.

          This isn't an endorsement of the Texan government, because there are many things I disagree with them on. But the idea that California's economy is singularly unique in the United States is silly. Many states with objectively worse attributes are faring just as well, and may even be poised to overtake california.

          How embarrassing would it be for Texas, a hot muggy swamp of a state with awful geography and terrible weather, to overtake beautiful California economically? To think people would actually eschew the ocean and the Mediterranean climate and perfect weather to move to Texas simply because California mismanaged the state so much. This is the real risk.

          Models show that by 2049, Texas will overtake California as the more populous and more economically productive state. Is that really the future you want? Is that the future California deserves? As a native Californian, I hope the state can turn itself around. It deserves to be a great state, but the path it's on is one of decline.

          One need only look at almost any metric. It's not just population or economy. Even by 'liberal' metrics, Texas is growing. For example, Texas has the largest growth rate in alternative energy sources: https://www.reuters.com/markets/commodities/texas-trumps-cal.... There's a very clear growth curve in Texas, while California's is much choppier and doesn't appear to be going in any particular direction. At some point Californians need to actually want to continue winning instead of resting on their laurels.

        • jandrewrogers 2 days ago

          California is the most populous state in the US, larger than most European countries, it would be surprising if it didn't have a large GDP regardless of its economy. On a per capita basis, less populous tech-heavy States like Washington and Massachusetts have even higher GDP.

        • tightbookkeeper 2 days ago

          Yeah it’s incredibly beautiful. People wish they could live there. And many large companies were built there in prior decades. This contradicts my comment how?

    • ken47 3 days ago

      You're going to attribute even a small % of this to politicians rather than the actual innovators? Sure, then let's say they're responsible for some small % of its success. They're smart enough to not nuke their own economy.

    • hbarka 2 days ago

      High speed trains would do even more for California and would be the envy of the rest of the country.

      • oceanplexian 2 days ago

        Like most things, the facts bear out the exact opposite. The CA HSR has been such a complete failure that it's probably set back rail a decade or more. The only saving grace is Florida's privatized high-speed rail; otherwise it would be a completely failed industry.

        • anon291 2 days ago

          Part of the problem is the transit activist's obsession with public transit instead of just transit. At this rate, Brightline will likely have an HSR in California before the government does. We need to make private transit great again, and California should lead the way. Transit is transit, whether it's funded by government or private interests.

        • shiroiushi 2 days ago

          You're not disproving the OP's assertion. His claim was that HSR (with the implication that it was actually built and working properly) would be good for California and be the envy of the rest of the country, and that seems to be true. The problem is that California tried to do HSR and completely bungled it somehow. Well, of course a bungled project that never gets completed isn't a great thing, that should go without saying.

          As for Florida's "HSR", it doesn't really qualify for the "HS" part. The fastest segment is only 200kph. At least it's built and working, which is nice and all, but it's not a real bullet train. (https://en.wikipedia.org/wiki/Brightline)

    • aagha 2 days ago

      Thank you.

      I always think about this whenever someone says CA doesn't know what it's doing or it's being run wrong:

      California's GDP in 2023 was $3.8T, representing 14% of the total U.S. economy. If California were a country, it would be the 5th largest economy in the world and more productive than India and the United Kingdom.

      • anon291 2 days ago

        And Texas, with 10 million fewer people, is the eighth largest economy in the world, and growing at more than double the speed of California's. This accomplishment (which is something to be proud of) is not unique to California.

    • dragonwriter 3 days ago

      [flagged]

      • hiddencost 3 days ago

        Gimmie that sweet SALT deduction back.

        (It's not actually that high on my list.)

      • shortrounddev2 3 days ago

        I'd much rather be poor in California than poor in mississippi

        • moomoo11 3 days ago

          Yep. As someone who moved to California I honestly can’t imagine living anywhere else. This state has so much good social programming and tries to do good for its people. It also helps that most Californians are awesome people.

          I’ve been in TX (hard nope), NY, NJ, VA, DC, OH, WA, OR, and MD.

          I would honestly never move out of California there’s no better place I can think of.

          • chirau 3 days ago

            how would you rank the states you have lived in by those metrics(social programming + doing good for the people)?

        • Workaccount2 3 days ago

          You say that, but being poor where everyone else is poor is generally better than being poor where you are surrounded by wealth.

          • phatfish 3 days ago

            So people are better off being poor in South America or North Africa, than being poor in the US or Europe?

    • cscurmudgeon 3 days ago

      California is the largest recipient of federal money.

      https://usafacts.org/articles/which-states-rely-the-most-on-...

      (I know by population it will be different, but the argument here is around 'one of the the biggest' which is not a per capita statement.)

      > Objectively, there are many states that are far worse off in any key metric

      You can apply the same logic to USA.

      The USA is one of the biggest, most successful countries in the world. Like everywhere else it has its problems, but it's being run just fine. Objectively, there are many countries that are far worse off by any key metric.

  • dehrmann 2 days ago

    Not sure if Newsom is actually wise enough or if his presidential ambitions moderate his policies.

    • ravenstine 2 days ago

      It could be presidential ambitions, though I suspect his recent policies have been merely a way of not giving conservatives more ammo leading up to the 2024 election. The way he's been behaving recently is in stark contrast to pretty much everything he's done during and before his governorship. I don't think it's because he's suddenly any wiser.

      • jart 2 days ago

        Newsom was a successful entrepreneur in the 1990s who built wineries. That alone would make him popular with conservative voters nationwide. What did Newsom do before that you thought would alienate them? Being pro-gay and pro-drug before it was cool? Come on. The way I see it, if Newsom was nuts enough to run for president, then he could unite left and right in a way that has not happened in a very long time.

        • kanwisher 2 days ago

          No one even slightly right would vote for him; he is the poster child of the homeless-industrial complex, being anti-business and generally promoting social policies only the most fringe left-wingers are excited about.

  • rootusrootus 2 days ago

    The subtle rebranding of "Democratic Party" to "Democrat Party" is a pretty strong tell for a highly partisan perspective. How does California compare with similarly large Republican-dominated states? Anecdotally, I've seen a lot of really bad legislation originating from any legislature that has no meaningful opposition.

    • anigbrowl 2 days ago

      It's such a long-running thing that it's hard to gauge whether it's deliberate or just loose usage.

      https://en.wikipedia.org/wiki/Democrat_Party_(epithet)

      • dredmorbius 2 days ago

        It's rather decidedly a dog whistle presently.

      • jimmygrapes 2 days ago

        The party isn't doing much lately to encourage the actual democracy part of the name, other than whining about the national popular vote every 4 years, knowing full well that's not how that process works.

        • rootusrootus 2 days ago

          The Democratic Party has some warts, this is for sure, and they have a lot they could be doing to improve participation and input from the rank-and-file. However, attempting to subvert an election by any means possible is not yet one of those warts. This is emphatically not a case where "both sides suck equally."

          • stufffer 2 days ago

            >subvert an election by any means possible is not yet one of those warts

            The Democrats are famous for trying to have 3rd party candidates stripped from ballots. Straining smaller campaigns under the cost of fighting off endless lawsuits.

            Democrats invented the term lawfare.

            • rootusrootus 2 days ago

              You think the Republicans don't do similar things?

              Republicans blazed a new trail in 2021, trying to actually change the outcome of an election in progress through force. This is not comparable to using legal processes. A better comparison might be the series of lawsuits the Republican Party attempted after force did not work. How many years until the guy who lost admitted that he lost? These actions strike at the very foundations of democracy. A non-trivial segment of citizens still think the election was somehow stolen from them, despite an utter lack of anything like evidence. We will be feeling reverberations from these actions for decades.

        • dangus 2 days ago

          Whining about the national popular vote every 4 years is literally an encouragement of increased actual democracy.

          Scrapping the electoral college would be one of the most lower case d democratic things this country could do.

          “Whining” is all you can do when you don’t have the statehouse votes to pass a constitutional amendment.

    • Lonestar1440 2 days ago

      I'm a pretty pedantic person, but even I just use one or the other at random. I don't think it's a good idea to read into things like this.

      • rootusrootus 2 days ago

        I will allow that there are going to be some innocent mixups. But the 'democrat party' epithet dates back almost a century. https://en.wikipedia.org/wiki/Democrat_Party_(epithet)

        If you care about the perception of what you write, this is one of those things that will quickly steer your audience one way or the other. It has become so consistent that I would personally try not to get it wrong lest it distract from the point I'm trying to express.

        • Lonestar1440 2 days ago

          I didn't write "Democrat Party", I wrote "Democrat-Dominated". I am a registered Democrat. The broader partisan group I belong to are "Democrats" even if the organization formally calls itself the "Democratic party".

  • dredmorbius 3 days ago

    Thankfully there are no such similarly single-party states elsewhere in the Union dominated by another party, and if they were, their executives would similarly veto the most inane legislation passed.

    </s>

  • dyauspitr 3 days ago

    Compared to whom? What is this hypothetical well-run state? Because it's hard to talk shit about the state that has the 8th largest economy in the world's nation-state economy rankings.

tr3ntg 2 days ago

This is a little annoying. He vetoes the bill, agrees with the intention, paints the "solution" as improper, suggests there is some other solution that is better, doesn't describe that solution in any detail, and encourages future legislation that is "more right."

I'm exhausted already.

I can't think of a less efficient way to go about this.

Cupertino95014 3 days ago

[flagged]

  • arduanika 3 days ago

    Come on, we're trying to have a productive discussion here. There's no need to just drop in and insult clowns.

    • labster 2 days ago

      To be fair, clowning around is a lot more tractable than homelessness, housing prices, health care, or immigration.

      • Cupertino95014 2 days ago

        Hear.

        Just keep getting reelected, since no one expects you to accomplish anything. People in the rest of the country push "term limits" as the solution to everything. I always point out that we've had them in CA for 20 years. It just means that they run for a different office after they're termed out.

        Or become lobbyists.

        • labster 2 days ago

          We should do the same thing in software engineering. After 4 years in web dev, you have to switch to something else like embedded systems or DBA. Or be forced to become a highly paid consultant.

          • dgellow 2 days ago

            > After 4 years in web dev, you have to switch to something else like embedded systems or DBA

            Unironically, that would be awesome

notepad0x90 2 days ago

Gonna swim against the current on this one.

This is why we can't have nice things: too many tech people support Newsom in vetoing this. The nightmare of corporate surveillance and erosion of privacy we have to endure every day is the result of exactly this sentiment and this short-sighted attempt at self-preservation.

"It's vague" yeah, that's the point, the industry is allowed leeway to come up with standards of what is and isn't safe. They can establish a neutral committee to continually assess the boundaries of what is and isn't safe, as technology evolves. Do you expect legislators to define specifics and keep themselves updated with the latest happening in tech? Would it be better if the government established departments that police AI usage? This was the sweetest deal the industry could have gotten.

SonOfLilit 3 days ago

A bill laying the groundwork to ensure the future survival of humanity, by making companies on the frontier of AGI research responsible for damages or deaths caused by their models, was vetoed because it doesn't stifle competition with the big players enough, and because we don't want companies to be scared of letting future models capable of massive hacks or mass-casualty events handle their customer support.

Today humanity scored an own goal.

edit:

I'm guessing I'm getting downvoted because people don't think this is relevant to our reality. Well, it isn't. This bill shouldn't scare anyone releasing a GPT-4 level model:

> The bill he vetoed, SB 1047, would have required developers of large AI models to take “reasonable care” to ensure that their technology didn’t pose an “unreasonable risk of causing or materially enabling a critical harm.” It defined that harm as cyberattacks that cause at least $500 million in damages or mass casualties. Developers also would have needed to ensure their AI could be shut down by a human if it started behaving dangerously.

What's the risk? How could it possibly hack something causing $500m of damages or mass casualties?

If we somehow manage to build a future technology that _can_ do that, do you think it should be released?

  • datavirtue 3 days ago

    The future survival of humanity involves creating machines that have all of our knowledge and which can replicate themselves. We can't leave the planet but our robot children can. I just wish that I could see what they become.

    • SonOfLilit 3 days ago

      Sure, that's future survival. Is it of humanity though? Kinda no by definition in your scenario. In general, depends at least if they share our values...

      • datavirtue 2 days ago

        Values...values? Hopefully not, since they would be completely useless.

        • SonOfLilit 2 days ago

          So say someone builds an unintelligent nanomachine that can survive extreme conditions and turn any matter it encounters into more of itself, and launches copies into space in random directions... are those the "children" you would be happy to see replacing us?

          • datavirtue a day ago

            No. Fully sized machines. I'm trying to be realistic here.

            • SonOfLilit 15 hours ago

              But assume that it _were_ possible. Would you be happy about it, or sad? And why?

              My point is that not everything we could conceivably create is a child we would like to see replace us (I'd argue for the stronger claim that the only kind of machine-child we would be happy to have replace us is one that contains a derivative of a mind-uploaded human).

    • johnnyanmac 3 days ago

      Sounds like the exact opposite plot of Wall-E.

      • datavirtue 2 days ago

        I might watch that now. The scientist who created all the robots in Mega Man keeps coming to mind. People are going to have to make the decision to build these things to be self-sufficient.

    • raxxorraxor 2 days ago

      Mountains out of scrap, rivers out of oil and wide circuit plains. It will be absolutely beautiful.

  • atemerev 3 days ago

    Oh come on, the entire bill was against open source models, it’s pure business. “AI safety”, at least of the X-risk variety, is a non-issue.

    • whimsicalism 3 days ago

      > “AI safety”, at least of the X-risk variety, is a non-issue.

      i have no earthly idea why people feel so confident making statements like this.

      at the current rate of progress, you should have absolutely massive error bars for what capabilities will look like in 3, 5, or 10 years.

      • atemerev 2 days ago

        I am not sure we will be able to build something smarter than ourselves, but I sure hope for it. It is becoming increasingly obvious that we as a civilization are not that smart, and there are strict limits on what we can achieve with our biology; it would be great if at least our creations could surpass those limits.

        • whimsicalism 2 days ago

          Sure, but we should heavily focus on doing it safely.

          We can already build machines, using similar techniques, that are superhuman in narrow capabilities like chess and as good as the best humans in some narrow disciplines of math. I think it is not unreasonable to expect this will generalize.

      • ls612 2 days ago

        Nuclear weapons, at least in the quantities currently stockpiled, are not an existential risk even for industrial civilization, never mind the human species. To claim that in 10 years AI will be more dangerous and consequential than the weapons that ushered in the Atomic Age is quite a leap.

        • whimsicalism 2 days ago

          Viruses are just sequences of RNA/DNA and we are already showing that transformers have extreme proficiency in sequence modeling.

          In 10 years we have gone from AlexNet to GPT-2 to o1. If future capabilities make it possible for any semi-state actor with a lab to build a deadly virus (and this is only one of MANY plausible scenarios), then we will likely have equaled the destructive potential of the atomic age. And that’s just the stuff I can anticipate.

          • atemerev a day ago

            To make a deadly virus, you just need a degree in bioengineering and about $1M of equipment; the information is readily available. If you are on the cheap side, you can source the equipment nearly for free at lab clearance sales and the like. It is 2024. Children do their first genetic engineering projects in _high school_. You can buy a CRISPR kit _right now_ for about $150.

            That ship sailed long ago.

            • whimsicalism a day ago

              That is not even remotely true for a novel viral sequence that would be very dangerous.

              And it's funny that we have gone from “nothing is more dangerous than the atomic bomb” to “anyone with a bio degree and $1 million is more dangerous than the atomic bomb”.

              • atemerev 18 hours ago

                > That is not even remotely true for a novel viral sequence that would be very dangerous.

                We don't need fully novel sequences. The original Spanish Flu strain sequence is published and well known. And you can always modify existing ones with known aspects (e.g. as in this infamous paper: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC114026/). And yes, it is more difficult to build an atomic bomb, as materials are thankfully expensive and difficult to obtain.

    • SonOfLilit 3 days ago

      I find it hard to believe that Google, Microsoft and OpenAI would oppose a bill against open source models.

scoofy 3 days ago

Newsom vetoes so many bills that it's hard to see why the legislature should even be taken seriously. Our Dem-trifecta state has effectively been captured by the executive.

  • dyauspitr 3 days ago

    As opposed to what? The supermajority red states where gerrymandered districts look like corn mazes and the economy is in the shitter?

sandspar 3 days ago

Newsom wants to run for president in 4 years; AI companies will be rich in 4 years; Newsom will need donations from rich companies in 4 years.

StarterPro 3 days ago

Whaaat? The sleazy Governor sided with the tech companies??

I'll have to go get a thesaurus; "shocked" won't cover how I'm feeling rn.

londons_explore 2 days ago

While I agree with this decision, I don't want any governance decisions to be made by one bloke.

Why do we have such a system? Why isn't it a vote of many governors? Preferably a secret vote so voters can't be forced to vote along party lines...

unit149 2 days ago

Much like UAW, a union for industrial machinists and academics, this bill has united VCs and members of the agrarian farming community. Establishing an entity under the guise of the Board of Frontier Models parallels efforts at Jekyll Island under Wilsonian idealism. Technological Keynesianism is on the horizon. These are birth pangs - its first gasps.

  • cultureswitch a day ago

    Sounds interesting, could you translate to English?

xyst 2 days ago

I don’t understand why it was vetoed or why this was even proposed, but I’m leaving a comment here to analyze later.