while(true){💩};

  • 0 Posts
  • 24 Comments
Joined 2 years ago
Cake day: June 11th, 2023

  • My argument is incredibly simple:

    YOU exist. In this universe. Your brain exists. The mechanisms for sentience exist. They are extremely complicated and complex. Magic and mystic Unknowables do not exist. Therefore, at some point in time, it is physically possible for a person (or team of people) to replicate these exact mechanisms.

    We currently do not understand enough about them to do this. YOU are so laser-focused on how a Large Language Model behaves that you cannot take a step back and look at the bigger picture. Stop thinking about LLMs specifically. Neural-network artificial intelligence comes in many forms; many are domain-specific, such as molecular analysis for scientific research. The AI of tomorrow will likely behave very differently from today’s, and may require hardware breakthroughs to accomplish (I don’t know that x86_64 or ARM instruction sets are sufficient or efficient enough for this process). But regardless of how it happens, you need to understand that because YOU exist, you are the prime reason it is not impossible, or even infeasible, to accomplish.


  • This argument feels extremely hand-wavy and falls prey to the classic problem of “we only know about X and Y that exist today, therefore nothing on this topic will ever change!”

    You also limit yourself when sticking strictly to narrow thought experiments like the Chinese room.

    If you consider that the human brain, which is made up of nigh-innumerable smaller domain-specific neural nets combined with the frontal lobe, has consciousness, then that absolutely means it is physically possible to replicate this process by other means.

    We noticed how birds fly and made airplanes. It took many, MANY iterations that seem excessively flawed by today’s standards, but they were stepping stones toward a world-changing new technology.

    LLMs today are like da Vinci’s corkscrew flying machine: clunky, technically performing something resembling the end goal, but ultimately failing, in part or in whole, at the task they were built for.

    But then the Wright brothers happened.

    Whether sentient AI will be a good thing or not is something we will have to wait and see. I strongly suspect it won’t be.


    EDIT: A few other points I wanted to dive into (will add more as they come to mind):

    “AI derangement” or “AI psychosis” is a term meant to refer to people forming incredibly unhealthy relationships with AI, to the point where they stop seeing its shortcomings. But I am noticing more and more that people are starting to throw it around like the “Trump Derangement Syndrome” term, and that’s not okay.


  • I think this post requires a certain familiarity with schizo-posting (shitposting) culture to understand.

    The character being portrayed by the young woman here seems to be an engineer of some sort who is fascinated with ocean-compatible robotics. She is implying that her research will be invaluable when the earth eventually floods from polar ice cap melting, heat expansion, etc., after the world “ends.” Her robots would help with hunting and gathering tasks once the world is underwater.

    The “husbando” bit is a casual-racism/“Engrish” way of implying that she is married to (or at least very into) you, the reader. You are her partner, or she at least thinks you are (it could be one of those “they don’t know they’re my boyfriend yet” scenarios; unclear, needs more detail).