I feel that there is a massive double standard between those perceived as “skeptics” and “optimists.”

To be skeptical of AI is to commit yourself to near-constant demands to prove yourself, and endless nags of “but what about?” with each one — no matter how small — presented as a fact that defeats any points you may have. Conversely, being an “optimist” allows you to take things like AI 2027 seriously to the point that you can write an entire feature about fan fiction in the New York Times and nobody will bat an eyelid.

In any case, things are beginning to fall apart. Two of the actual reporters at the New York Times (rather than a “columnist”) reported out last week that Meta is yet again “restructuring” its AI department for the fourth time, and that it’s considering “downsizing the A.I. division overall,” which sure doesn’t seem like something you’d do if you thought AI was the future.

Meanwhile, the markets are also thoroughly spooked by an MIT study covered by Fortune that found that 95% of generative AI pilots at companies are failing, and though MIT NANDA has now replaced the link to the study with a Google Form to request access, you can find the full PDF here.

Nevertheless, boosters will still find a way to twist this study to mean something else. They’ll claim that AI is still early, that the opportunity is still there, that we “didn’t confirm that the internet or smartphones were productivity boosting,” or that we’re in “the early days” of AI, somehow, three years and hundreds of billions and thousands of articles in.

I’m tired of having the same arguments with these people, and I’m sure you are too. No matter how much blindingly obvious evidence there is to the contrary, they will find ways to ignore it. They continually make smug comments about people “wishing things would be bad,” or suggest that you are stupid (and yes, that is their belief!) for not believing generative AI is disruptive.

Today, I’m going to give you the tools to fight back against the AI boosters in your life. I’m going to go into the generalities of the booster movement — the way they argue, the tropes they cling to, and the ways in which they use your own self-doubt against you.

  • hperrin@lemmy.ca · 11 days ago

    The way I view AI is the way I view those pod coffee machines. They are good at doing a couple things. The only difference is people aren’t trying to push those as a solution to everything.

    You want a single cup of coffee? My guy, this machine will give you exactly what you want.

    You want multiple cups of coffee? Ok, so you can just use this machine multiple times. Yes, it will be annoying and you’ll have to sit there guiding it to make coffee the whole time. Yes, it will be cold, bland coffee by the time you’re finished. Yeah, it will produce tons of waste, but that’s just worth it because you’ll have so much coffee. I’m sure everyone will want to drink it.

    Oh, you want a latte? Well, you can use this machine as a starting point, then finish it yourself. Oh no, it won’t be a good latte, but it will be faster than making it yourself, and speed is the most important thing about lattes. Oh, you’re good at making lattes and can make it faster yourself? Well, not all of us are good at lattes. Stop gatekeeping lattes, my dude.

    You want orange juice? Just squeeze an orange into this machine, right where the pod would go! It’s so versatile! You’re gonna love the hot, watered down orange juice! It looks just like regular orange juice, doesn’t it!? How could you not like it??

    You want a cheeseburger? Hold on, I’m positive I can figure out how to get a cheeseburger out of this machine. I’m going to keep finagling it until I can get it to regurgitate something that looks like a cheeseburger.

    • AnarchistArtificer@slrpnk.net · 11 days ago

      When I argue with zealots, I find it useful to think of my target audience as not the zealot, but the people around them. I first adopted this way of thinking when pushing back against transphobic disinformation online, because there are so many normal people who don’t have the context necessary to evaluate misguided (or hatefully disinformative) statements. Hearing people push back on that can be pretty powerful, in my experience.

      I think this approach can be especially impactful in AI discussions, because I’ve seen so many people whose intuition is giving them bad vibes about AI, but because they’re not confident around technology, they push those bad vibes down and assume there must be something they’re missing. It links into the wider problem of how big tech has conditioned people to be mere passive consumers of tech. I wish I could do more to help people feel empowered to tinker, but for now, I will have to content myself with being part of the anti-bullshit brigade.

      It’s definitely necessary to pick your battles wisely when attempting this, so as not to waste energy when there’s so much nonsense going around, but I’ve found it worthwhile. It helps that there are people like Ed Zitron and Cory Doctorow who write compellingly on these topics; if people are interested, it’s good to have stuff I can link them to.

      • SpikesOtherDog@ani.social · 11 days ago

        Look, I say this as a former religious zealot. Life was MUCH EASIER when everything was black and white, good and bad. I didn’t have to think through these issues; they were already well written about. Here is a book about how what you are doing is wrong. Of course it is right; it is in the church library.

        For AI, you will be fine as long as the person has not given up thinking for themselves. Once you give up reason, you cannot be convinced.

  • stabby_cicada@slrpnk.net · 11 days ago

    Kevin Roose and Casey Newton are two of the most notable boosters, and — as I’ll get into later in this piece — neither of them has a consistent or comprehensive knowledge of AI. Nevertheless, they will insist that “everybody is using AI for everything” — a statement that even a booster should realize is incorrect based on the actual abilities of the models.

    But that’s because it isn’t about what’s actually happening, it’s about allegiance. AI symbolizes something to the AI booster — a way that they’re better than other people, that makes them superior because they (unlike “cynics” and “skeptics”) are able to see the incredible potential in the future of AI, but also how great it is today.

    This is exactly how cryptocurrency/NFT/blockchain boosters were acting in the crypto boom of 2021.

    Here’s hoping Grok and Gemini end up in the same dumpster as the procedurally generated racist monkey jpgs.

    • Auth@lemmy.world · 11 days ago

      There is another part to this. These people are invested, maybe even overleveraged, in AI stocks, so now they need to promote it and get others to join them in buying into the AI hype, because if it fails they stand to lose money. We saw this happen with crypto: someone who didn’t care at all about crypto could buy a coin, and within months their entire social media presence would be full of them promoting that coin, because they needed others to buy in to increase the value of their investment.

  • SpikesOtherDog@ani.social · 10 days ago

    One thing Ed did not cover was machine translation from one language to another. Crunchyroll had a translation service that was using ChatGPT, possibly because it was cheap and easy in the moment.

    Once the company “pulls the profit lever,” the cost of using a human may suddenly look more attractive. There is no way prompted translation services are going to be able to provide cost-effective service while covering their losses.

    Additionally, there is no suggestion that we are entering Babel fish or universal translator territory, given the way that generative text requires constant prompting to stay on task.

    Notably, in my own research, generative text is terrible at maintaining consistency. Consistency is imperative in translation, because using different names for the same thing within one work, or even across a body of work, results in long-lasting confusion.

    I wrote this initially to ask “what about translation?”, and I ended up answering my own question. Comments welcome.

  • panda_abyss@lemmy.ca · 9 days ago

    So, an AI booster is not, in many cases, an actual fan of artificial intelligence. People like Simon Willison or Max Woolf who actually work with LLMs on a daily basis don’t see the need to repeatedly harass everybody, or talk down to them about their unwillingness to pledge allegiance to the graveyard smash of generative AI. In fact, the closer I’ve found somebody to actually building things with LLMs, the less likely they are to emphatically argue that I’m missing out by not doing so myself.

    I would say I’m in this “actually building things” group. I hate the “AI will do everything and take all your jobs” AI bro types.

    I work as a data scientist; over the last six months I’ve built five real “AI” tools that have succeeded. I know AI is dumb dumb stupid, and I only use LLM tools where they actually make sense.

    Projects work because I’m building tools to automate annoying busywork that everyone hates doing, or giving people tools that make it easier to access the information they need to do their jobs better.

    I know this is the Fuck AI community and I’m saying I use AI, but I’m 100% with you all on saying fuck the AI boosters.

    The amount of overhype is insane, and it’s mostly by people who barely know how any of this works pitching it to people who don’t know how it works at all.

    I’ve watched so many stupid projects get greenlit that basically light money on fire with zero value, just to say “we’re using AI.” I’ve seen shitty chatbots created and deployed. I’ve watched these AI hype people get promoted and given huge bonuses. It’s asinine.

  • Jerkface (any/all)@lemmy.ca · 9 days ago

    Is this something that comes up for people frequently? Are there desirable aspects of your life that you only have because you were able to win an argument with an AI booster?

    • relianceschool@lemmy.world (OP) · 9 days ago

      I see this less as a kit for arguing with folks in real life (or worse, on the internet), and more as an exploration/dissection of common arguments put forth by the pro-AI crowd. So the next time you see those points pop up in an article or editorial, you can more easily evaluate whether or not they hold water.