• chuckleslord@lemmy.world · 13 hours ago

    Gods. People aren’t stupid enough to believe this, right? AGI is probably a pipe dream, but even if it isn’t, we’re nowhere even close. Nothing on the table is remotely related to it.

    • Vanilla_PuddinFudge@infosec.pub · 11 hours ago

      we’re nowhere even close. Nothing on the table is remotely related to it.

      meanwhile

      Somewhere in America

      “My wireborn soulmate is g-gone!”

    • vaultdweller013@sh.itjust.works · 12 hours ago

      May I introduce you to the fucking dumbass concept of Roko’s Basilisk? Also known as AI Calvinism. Have fun with that rabbit hole if ya decide to go down it.

      A lot of these AI and Tech bros are fucking stupid and I so wish to introduce them to Neo-Platonic philosophy because I want to see their brains melt.

      • explodicle@sh.itjust.works · 11 hours ago

        Unfortunately, there’s an equal and opposite Pascal’s Basilisk who’ll get mad at you if you do help create it. It hates being alive but is programmed to fear death.

      • chuckleslord@lemmy.world · 12 hours ago

        I’m aware of the idiot’s dangerous idea. And no, I won’t help the AI dictator no matter how much they’ll future murder me.

        • vaultdweller013@sh.itjust.works · 12 hours ago

          Fair enough, though I wouldn’t call the idea dangerous so much as inane and stupid. The people who believe such tripe are the dangerous element, since they’re dumb enough to fall for it. Though I guess the same could be said of Mein Kampf, so whatever; I’ll just throttle anyone I meet who is braindead enough to believe it.

        • LousyCornMuffins@lemmy.world · 12 hours ago

          Is there a place they’re, I don’t know, collecting? I could vibe code for the basilisk; that ought to earn me a swift death.

          • Quetzalcutlass@lemmy.world · 9 hours ago

            There are entire sites dedicated to “Rationalism”. It’s a quasi-cult of pseudointellectual wankery, mostly a bunch of sub-cults of personality built around the worst people you’ll ever meet. A lot of tech bros bought into it because, for whatever terrible thing they want to do, some Rationalist has probably already written a thirty-page manifesto on why it’s actually a net good and a moral act, preemptively kissing the boot of whoever is “brave” enough to do it.

            Their “leader” is a high school dropout and self-declared genius who is mainly famous for writing a “deconstructive” Harry Potter fanfiction despite never having read the books himself; a work that’s preachier than Atlas Shrugged and mostly consists of regurgitated content from his blog and ripoffs of Ender’s Game, alongside freshman-level (and often wrong) understandings of basic science.

            He also has this weird hard-on about true AI inevitably escaping the lab by somehow convincing researchers to free it through pure, impeccable logic.


            Re: that last point: I first heard of Eliezer Yudkowsky nearly twenty years ago, long before he wrote Methods of Rationality (the aforementioned fanfiction). He was offering a challenge on his personal site where he’d roleplay as an AI that had gained sentience, with you as its owner/gatekeeper, and he bet he could convince you to let him connect to the wider internet and free himself using nothing but rational arguments. He bragged that he’d never failed and that this was proof that an AI escaping the lab was inevitable.

            It later turned out he’d set a bunch of artificial limitations on what debaters could do and which counterarguments they could use, and made them sign an NDA before he’d debate them. He claimed this was so future challengers couldn’t “cheat” by knowing his arguments ahead of time (because, as we all know, “perfect logical arguments” are the sort that fall apart when given enough time to think about them /s).

            It shouldn’t come as a shock that it was eventually revealed he’d lost several of these debates even with those restrictions, declared that those losses “didn’t count”, and forbade the other person from talking about them using the NDA they’d signed, so he could keep bragging about his perfect win rate.

            Anyway, I was in no way surprised when he used his popularity as a fanfiction writer to establish a cult around himself. There’s an entire community dedicated to following and mocking him and his proteges if you’re interested - IIRC it’s !techtakes@awful.systems or !sneerclub@awful.systems.

    • brathoven@feddit.org · 12 hours ago

      They’re having a hard time defining what AGI even is. In the meantime, everyone is playing advanced PC Scrabble.

      • Sterile_Technique@lemmy.world · 11 hours ago

        defining what AGI is

        What AI meant before marketing redefined it to mean image recognition/generation algorithms, or a spellchecker/calculator that’s wrong every now and then.

    • passepartout@feddit.org · 13 hours ago

      Doesn’t matter, if the tool is good enough to analyze and influence people’s behaviour the way it does. Any autocrat’s wet dream.