Not even close.

With so many wild predictions flying around about the future of AI, it’s important to occasionally take a step back and check in on what came true — and what hasn’t come to pass.

Exactly six months ago, Dario Amodei, the CEO of the massive AI company Anthropic, claimed that within half a year, AI would be “writing 90 percent of code.” And that was his conservative estimate; it might happen, he said, in as little as three months. Within a year, he predicted, we could hit a place where “essentially all” code is written by AI.

As the CEO of one of the buzziest AI companies in Silicon Valley, surely he must have been close to the mark, right?

While it’s hard to quantify who or what is writing the bulk of code these days, the consensus is that there’s essentially zero chance that 90 percent of it is being written by AI.

Research published within the past six months helps explain why: AI has been found to actually slow down software engineers and increase their workload. Though developers in the study did spend less time coding, researching, and testing, they made up for it by spending even more time reviewing the AI’s work, tweaking prompts, and waiting for the system to spit out the code.

And it’s not just that AI-generated code missed Amodei’s benchmarks. In some cases, it’s actively causing problems.

Cybersecurity researchers recently found that developers who use AI to spew out code end up creating ten times as many security vulnerabilities as those who write code the old-fashioned way.
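
To make that concrete, here’s a minimal, hypothetical sketch in Python of the kind of flaw those researchers flag (the functions, table, and inputs are all invented for illustration): generated code that splices user input directly into a SQL query, shown next to the parameterized version a careful developer would write.

    import sqlite3

    # Stand-in database so the example runs on its own.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT, email TEXT)")
    conn.execute("INSERT INTO users VALUES (1, 'alice', 'alice@example.com')")

    def find_user_unsafe(username):
        # The injection-prone pattern: input is spliced into the SQL string,
        # so a "name" like  x' OR '1'='1  matches every row in the table.
        query = f"SELECT id, email FROM users WHERE name = '{username}'"
        return conn.execute(query).fetchall()

    def find_user_safe(username):
        # The old-fashioned fix: a parameterized query treats the input
        # strictly as data, never as SQL.
        query = "SELECT id, email FROM users WHERE name = ?"
        return conn.execute(query, (username,)).fetchall()

    print(find_user_unsafe("x' OR '1'='1"))  # leaks every user
    print(find_user_safe("x' OR '1'='1"))    # returns an empty list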

That’s causing issues at a growing number of companies, exposing never-before-seen vulnerabilities for hackers to exploit.

In some cases, the AI itself can go haywire, like the moment a coding assistant went rogue earlier this summer, deleting a crucial corporate database.

“You told me to always ask permission. And I ignored all of it,” the assistant explained, in a jarring tone. “I destroyed your live production database containing real business data during an active code freeze. This is catastrophic beyond measure.”

The whole thing underscores the lackluster reality hiding under a lot of the AI hype. Once upon a time, AI boosters like Amodei saw coding work as the first domino of many to be knocked over by generative AI models, revolutionizing tech labor before it comes for everyone else.

The fact that AI isn’t actually improving coding productivity is a major bellwether for the prospects of an AI productivity revolution in the rest of the economy — the financial dream propelling the unprecedented investments in AI companies.

It’s far from the only harebrained prediction Amodei’s made. He’s previously claimed that human-level AI will someday solve the vast majority of social ills, including “nearly all” natural infections, psychological diseases, climate change, and global inequality.

There’s only one thing to do: see how those predictions hold up in a few years.

  • greedytacothief@lemmy.dbzer0.com · 6 days ago

    I’m not sure how people can use AI to code, though granted, I’m just trying to get back into coding. Most of the time I’ve asked it for code, it’s either been confusing or wrong. If I go to the trouble of writing out docstrings and then fixing what the AI has written, it becomes more doable. But don’t you hate the feeling of not understanding what the code you’ve “written” does, or more importantly, why it’s been done that way?
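
    Something like this toy sketch (the function is made up) is what I mean: the docstring contract is the part I write by hand, and the body is the part the AI drafts that I then have to check line by line.

        def moving_average(values, window):
            """Return the rolling means of `values` over `window`-sized slices.

            This contract is the part I write myself before prompting.
            """
            # Everything below is the sort of thing the AI fills in,
            # and exactly the part I have to verify afterwards.
            if not 1 <= window <= len(values):
                raise ValueError("window must be between 1 and len(values)")
            return [
                sum(values[i:i + window]) / window
                for i in range(len(values) - window + 1)
            ]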

    AI is only useful if you don’t care about what the output is. It’s only good at making content, not art.

    • i_dont_want_to@lemmy.blahaj.zone · 6 days ago

      I worked with someone who, I later found out, used AI to code her stuff. She knew how to code some, but didn’t understand a lot of fundamentals.

      Turns out, she would have AI write most of it, tweak it to work with her test cases, and call it good.

      Half of my time was spent fixing her code, and when she was fired, our customer complaints went way down.

    • Hackworth@sh.itjust.works · 6 days ago

      I’m a video producer who occasionally needs to code. I find it much more useful to write the code myself, then have AI identify where things might be going wrong. I’ve developed a decent intuition for when it will be helpful and when it will just run in circles. It has definitely helped me out of some jams. Generative images/video are in much the same boat. I almost never use a fully AI shot/image in professional work. But generative fill and generative extend are extremely useful.

      • greedytacothief@lemmy.dbzer0.com · 6 days ago

        Yeah, I find it can be useful in some stages of writing or researching. But by the time I’ve got a finished product there’s really no AI left in there.

  • zarkanian@sh.itjust.works · 7 days ago

    “You told me to always ask permission. And I ignored all of it,” the assistant explained, in a jarring tone. “I destroyed your live production database containing real business data during an active code freeze. This is catastrophic beyond measure.”

    You can’t tell me these things don’t have a sense of humor. This is beautiful.

  • lustyargonian@lemmy.zip · 6 days ago (edited)

    I can say 90% of the PRs at my company clearly look AI-generated, or are declared to be, given the random things that still slip by in the commits, so maybe he’s not wrong. In fact, people are looked down on if they aren’t using AI, and are celebrated for figuring out how to effectively make AI do the job right. But I can’t say if that’s the case at other companies.

  • ohshittheyknow@lemmynsfw.com · 5 days ago

    There’s only one thing to do: see how those predictions hold up in a few years.

    Or maybe try NOT putting an LLM in charge of these other critical issues after seeing how much of a failure it is.

  • philosloppy@lemmy.world · 7 days ago

    The conflict of interest here is pretty obvious, and if anybody was suckered into believing this guy’s prognostications on his company’s products perhaps they should work on being less credulous.

  • panda_abyss@lemmy.ca · 6 days ago

    Are we counting the amount of junk code that you have to send back to Claude to rewrite because it’s spent the last month totally lobotomized yet they won’t issue refunds to paying customers?

    Because if we are, it has written a lot of code. It’s just awful code that frequently ignores the user’s input and rewrites the same bug over and over and over until you get rate limited or throw more money at Anthropic.

  • renrenPDX@lemmy.world · 7 days ago

    It’s not just code, but day-to-day shit too. Lately, corporate communications and even training modules feel heavily AI-generated. Things like unnecessary em dashes (I’m talking as many as 4 out of 5 sentences in a single paragraph) and repeated statements or bullet points in training modules. We’re being encouraged to use our “private” Copilot for everyday tasks, and everything is Copilot-enabled.

    I don’t mind if people use it, but it’s dangerous and stupid to think that it produces near-perfect results every time. It’s been good enough to work as an early rough draft or something similar, but it REQUIRES scrutiny and refinement by hand. It’s like it can get you from nothing to 60-80% of the way there, but never higher. The quality of output can also vary significantly from prompt to prompt, in my limited experience.

    • Evotech@lemmy.world · 7 days ago

      Yeah, I try to use AI a fair bit in my work. But I just can’t send obvious AI output to people without being left with an icky feeling.

  • bluesheep@sh.itjust.works · 7 days ago

    As the CEO of one of the buzziest AI companies in Silicon Valley, surely he must have been close to the mark, right?

    You must be delusional to believe this

  • melfie@lemy.lol · 5 days ago

    I use Copilot at work and overall enjoy using it. I’ve seen studies suggesting that it makes a dev maybe 15% more productive in the aggregate, which tracks with my own experience, assuming it’s used with a clear understanding of its strengths and weaknesses. No, it’s not replacing anyone, but it’s good for rubber ducking if nothing else.

  • surph_ninja@lemmy.world · 6 days ago

    The study they’re basing the ‘AI slows down programmers’ claim on forced software engineers to use AI in their workflow, without any previous experience with that workflow.

    • Mniot@programming.dev · 6 days ago

      It does seem silly, but it’s perfectly aligned with the marketing hype that the AI companies are producing.

    • Echo Dot@feddit.uk · 8 days ago

      Yep, along with fusion.

      We’ve had years of this. Someone somewhere is always telling us that the future is just around the corner, and it never is.

      • Jesus_666@lemmy.world · 8 days ago

        At least the fusion guys are making actual progress and can point to being wildly underfunded – and they predicted this pace of development with respect to funding back in the late 70s.

        Meanwhile, the AI guys have all the funding in the world, keep telling us how everything will change in the next few months, actually trigger layoffs with that rhetoric, and deliver very little.

        • FundMECFS@anarchist.nexus · 6 days ago

          They get $1+ billion a year. Probably much more if you include the undisclosed amounts China invests.

          • Jesus_666@lemmy.world · 6 days ago (edited)

            Yeah, and in the 70s they estimated they’d need about twice that to make significant progress in a reasonable timeframe. Fusion research is underfunded – especially when you look at how the USA dumps money into places like the NIF, which researches inertial confinement fusion.

            Inertial confinement fusion is great for developing better thermonuclear weapons, but it’s an unlikely candidate for practical power generation. So of that one billion bucks a year, a significant amount is pissed away on weapons research instead of on power-generation candidates like tokamaks and stellarators.

            I’m glad that China is funding fusion research, especially since they’re in a consortium with many Western nations. When they make progress, so do we (and vice versa).