• 0 Posts
  • 12 Comments
Joined 2 years ago
Cake day: June 13th, 2023

  • “LLMs are not intelligent because they do not know anything. They repeat patterns in observed data.”

    we are also predictive systems, but that doesn’t mean we are identical to LLMs. “LLMs are not intelligent because they do not know anything” just isn’t true unless you’re also willing to say humans are not intelligent and do not know anything. there are some unaddressed framing issues in how this is being thought about.

    they “know” how to interpret a lot of things in a way that is much more environmentally adaptable than a calculator. language is just a really weird eco-niche, one where the model has very little active participation, and where the base model is not updated as environments change.

    this is not saying humans and LLMs are identical. this is saying that, rather than pointing at the real differences, the particular aspect you’re claiming shows LLMs aren’t intelligent… is a normal part of intelligent systems.

    this is a spot somewhere in between “human intelligence is the only valid shape of intelligence” and “LLMs are literally humans”

    as for vocabulary, i’m always willing to help those who can’t find or figure out the tools to self-learn.

    when i talk about ‘tribal’ aspects, i refer to the collapsing of complexity toward a binary narrative to fit the preferences of your tribe, for survival reasons. i also refer to this as dumb ape brain, because it’s a simplification of the world to the degree i would expect from literal apes trying to survive in the jungle, not from people trying to better understand the world around them. which is important when shouting your opinions at each other in big social movements. this is actually something you can map to first principles: how we use the errors our models experience in order to notice things, and how we contextualize the sensory experience after the fact. what i mean is, we have a good understanding of this, but nobody wants to hear it from the people who actually care.

    ‘laziness’ should be a lack of epistemic vigilance, not a failure to comply with the existing socio-economic hierarchy and hustle culture. i say this because ignorance in this area is literally killing us all, including the billionaires that don’t care what LLMs are, but will use every tool they can to maximize paperclips. i’d assume that jargon should at least have salience here… since paperclip maximizing is OG anti-AI talk, but it turns out to be very important for framing issues in human intelligence as well.

    please try to think of something wholesome before continuing, because tribal (energy saving) rage is basically a default on social media, but it’s not conducive to learning.

    RLHF = reinforcement learning from human feedback. basically upvoting/downvoting to alter future model behaviour, which often leads to sycophantic biases. important if you care about LLMs causing psychotic breaks.
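
    for anyone who wants the mechanics, here is a minimal sketch of the reward-modelling step, assuming a generic PyTorch-style setup. the function and the toy tensors are hypothetical illustrations, not any specific lab’s code:

    ```python
    import torch
    import torch.nn.functional as F

    def preference_loss(reward_chosen: torch.Tensor,
                        reward_rejected: torch.Tensor) -> torch.Tensor:
        # bradley-terry pairwise loss: train the reward model to score the
        # human-preferred ("upvoted") response above the rejected one.
        return -F.logsigmoid(reward_chosen - reward_rejected).mean()

    # toy scores a reward model might assign to two candidate replies
    chosen = torch.tensor([1.2, 0.4])
    rejected = torch.tensor([0.3, 0.9])
    print(preference_loss(chosen, rejected))  # lower when chosen outscores rejected
    ```

    the sycophancy falls out of this: if raters systematically upvote agreement and flattery, the reward model learns to score agreement and flattery highly, and the rest of training chases that score.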

    “inter-modal dissonance” is where different models, using different representations, each make sense of things, but their interpretations might not match up.

    an example: your visual signal says you are alone in the room,

    while your audio signal says there is someone behind you.

    you look behind you, and you collapse the dissonance, using your visual modality to confirm whether the audio modality was reliable. since both are attempting to be accurate, if there is no precision-weighting error (think hallucinations), a wider system should be able to resolve whether the audio processing was mistaken, or whether there is something present that isn’t being picked up via the visual modality (if ghosts were real, they would fit here, i guess).

    this is how different systems work together to be more confident about an environment they are both fairly ignorant of (outside of distribution).

    like cooperative triangulation via predictive sense-making.
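
    to make the precision-weighting part concrete, here is a minimal sketch of standard bayesian cue combination (textbook math, not tied to any one theory); the numbers are made up for the room example above:

    ```python
    def combine_cues(mu_vision, var_vision, mu_audio, var_audio):
        # precision = 1/variance. each modality's estimate is weighted by its
        # reliability, so a noisy channel gets down-weighted instead of trusted.
        p_v, p_a = 1.0 / var_vision, 1.0 / var_audio
        mu = (p_v * mu_vision + p_a * mu_audio) / (p_v + p_a)
        var = 1.0 / (p_v + p_a)  # the fused estimate is more certain than either cue alone
        return mu, var

    # vision says "nobody there" (0.0) and is reliable; audio says "someone there"
    # (1.0) but is noisy, so the fused belief leans heavily toward vision.
    print(combine_cues(0.0, 0.1, 1.0, 1.0))  # ≈ (0.09, 0.09)
    ```

    a precision-weighting error (the hallucination case) is what happens when the system assigns high reliability to a channel that doesn’t deserve it.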

    i promise complex and new language is used to understand things, not just to hide bullshitting (like jordan peterson does).

    i’d be stating this to the academics, but they aren’t the ones being confidently wrong about a subject they are unwilling to learn about. i fully encourage going and listening to the academics to better understand what LLMs and humans actually are.

    “speak to your target audience.” is literally saying “stay in a confirmation bubble, and don’t mess with other confirmation bubbles.” while partial knowledge can be manipulated to obfuscate, this particular subject revolves around things that help predict and resist manipulation and deception.

    frankly this stuff should be in the educational core right now because knowing how intelligence works is… weirdly important for developing intelligence.

    because it’s really important for people to generally be more co-constructive in the way they adjust their understanding of things, while resisting a lot of failure states that are actually the opposite of intelligence.

    your effort in attempting this communication is appreciated and valuable. sorry that it is very energy consuming, something that is frustrating due to people like jordan peterson or the same creationist cults mired in the current USA fascism problem, who, much like the relevant politicians, aren’t trying to understand anything, but to waste your energy so they can do what they want without addressing the dissonance. so they can maximize paperclips.

    all of this is important and relevant. shit’s kinda whack by design, so i don’t blame people for having difficulty, but effort to cooperatively learn is appreciated.


  • cats also suck at analogies and metaphors, but they still have intelligence.

    a rock could not accurately interpret and carry out complex adjustments to a document. LLMs can.

    if the rock was… travelling through complex information channels and high-dimensional concept spaces to interpret the text i gave it, and accurately performed the requested task being represented within those words, yeah it might be a little intelligent.

    but i don’t know any stones that can do that.

    or are you referring to the ‘stochastic parrot’ argument which tries to demonize confabulatory properties of the model, as if humans don’t have and use confabulatory processes?

    just because we have different tools we use alongside those confabulatory processes does not mean we are literally the opposite.

    or just find some people to be loud with you so you can ignore the context or presented dissonance. this is really popular with certain groups of ‘intelligent’ humans, which i often lovingly refer to as “cults”: they never have to spend energy thinking about the world, because they can just confabulate their own shared idea of what the world is, and ignore anyone trying to bring that annoying dissonance into view.

    also humans are really not that amazingly ‘intelligent’ depending on the context. especially those grown in an environment that does not express a challenging diversity of views from which to collectively reduce shared dissonance.

    if people understood this, maybe we could deal with things like the double empathy problem. but the same social-confirmation modes ensure minority views don’t get heard, and the dissonance is just signal that we should collectively get mad at until it’s quiet again.

    isn’t that so intelligent of humanity?

    but no, let’s all react with aggression to all dissonance that appears, like a body that intelligently recognizes the threat of peanuts, and kills itself. (fun fact, cellular systems are great when viewed through this lens. see tufts university and michael levin for some of the coolest empirical results i’ve ever seen in biology.)

    we need to work together and learn from our shared different perspectives, without giving up to a paperclip maximizing social confirmation bubble, confabulating a detached delusion into social ‘reality.’

    to do this, understanding the complex points i’m trying to talk about is very important.

    compressing meaning into language is hard when the interpreting models want to confabulate their own version that makes sense, but excludes any of your actual points, and disables further cooperative communication.

    i can make great examples, but it doesn’t go far if people don’t have any knowledge of:

    -current sociology

    -current neuro-psych

    -current machine learning

    -current biology

    -cults and confirmation bubbles, and how they co-confirm their own reality within confabulated complexity.

    -why am i trying so hard, nobody is actually reading this, they are just going to skim it and downvote me because my response wasn’t “LLMS BAD, LLMS DUMB!”

    -i’m tired.

    -i appreciate all of you regardless, i just want people to deal with more uncomfortable dissonance around the subject before having such strong opinions.


  • “They’re just like us and smart!”

    responding like this after i just explained a bunch of the differences between us and LLMs is kind of dishonest. but you have to make me fit into your model, so you can ignore my actual point, which was a response to “LLMs are the opposite of intelligence,” a claim that fits the common take in this area that llms are absolutely ‘not intelligent’ and in no way, shape, or form similar to our form of intelligence.

    i wouldn’t say they are “just like us and smart,” because that ignores… the whole point i was making in how they are more similar than being presented, but still a different shape.

    like saying “animals are just as smart as humans!” humans are idiots when it comes to interpreting many animals, because they often have a very different shape of intelligence. it’s not about the animals being stupid, but the animals having their own eco-niche fit, and perspective drawn around that. this is also not me saying “animals have the opposite of intelligence” just because they don’t perform human tasks well.

    even better once you start talking about the intelligence of cell groups. could you build a functional body with complex co-constructing organs? does that make you more stupid than a cell culture? or do people just generally have a shitty understanding of what intelligence is?

    i disagree with both “LLMs are the opposite of intelligence” and your strawman.

    imagine existing outside of tribal binary framings, because you think they don’t properly capture or resemble the truth.


  • “they only output something that resembles human language based on probability. That’s pretty much the opposite of intelligence.”

    intelligence with a different shape =/= the opposite of intelligence. it’s intelligence of a different shape.

    and humans also can’t deal with shit outside of distribution; that’s why we rely on social heuristics… which often over-simplify for tribal reasons, to the point where confirmation bubbles can no longer update their models, because they are trying to craft an environment that matches the group confabulation rather than appropriately updating the shared model.

    but suggesting AI is actually intelligence of a different shape guarantees downvotes here, because the tribe accepts no deviation, because that would make you an enemy, rather than someone who just… wants a more accurate dialogue around the context.


  • That’s… not actually accurate, but it’s an accurate-sounding confabulation that you could put out, one which collapses the energy you need to keep interpreting the problem.

    Which IS what llms are doing. The failure comes from the incentive structure and style of intelligence. You’re very right that we shouldn’t blindly trust the responses, though.

    The criticism of “just probability” falls flat as soon as you recognize that current expert consensus is that human minds are… predictive processors, based on scale-free principles leading to layered Bayesian predictive models.

    Where LLMs struggle is adapting to things outside of distribution (not in the training data): they do not have a way to actively update their weights and biases as they contextualize the growing novel context.

    Also, novel context is basically inevitable when interacting with real life, because our environments and preferences are also growing, so they lack something very important for correcting weak confabulations that have collapsed the predictive process into action. There’s also the weird softmax/AI ‘reasoning’ fuzziness helping to emulate some of the malleability of our more active, ruminative, and very very social models.
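
    For concreteness, that softmax ‘fuzziness’ is just a temperature knob on the model’s output distribution. A minimal sketch with toy logits, nothing model-specific:

    ```python
    import numpy as np

    def softmax(logits, temperature=1.0):
        # low temperature -> near-deterministic picks; high temperature -> a
        # flatter, more exploratory distribution over next tokens.
        z = np.asarray(logits, dtype=float) / temperature
        z -= z.max()  # subtract the max for numerical stability
        e = np.exp(z)
        return e / e.sum()

    logits = [2.0, 1.0, 0.1]
    print(softmax(logits, temperature=0.5))  # sharp: almost all mass on the top option
    print(softmax(logits, temperature=2.0))  # fuzzy: probability spread across options
    ```

    Sampling at a non-zero temperature is part of why the same prompt can produce different confabulations on different runs.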

    I usually get downvoted for going against the narrative, but remember we normalize to the tribal opinions around us to socially collapse our group predictive model, because nuance takes energy. But if you can’t communicate, you can’t learn, and you display the same lack of intelligence as confident LLM confabulations.

    I wish I heard people talking about this outside of strictly academic spaces, but communication channels need to be open.

    Keep your eyes out for AI that is diverse, but good at communication/translation/mediation and actively grows.

    Although you might see more like the Genie 3 stuff that is dealing with inter-modal dissonance within a monolithic model perspective, which means always confabulating, without using other models to actively balance and grow.

    Well, attempts are being made to make up for that, but you can see how RLHF leads to sycophantic models that will confirm your own confabulations, so that you can confirm each other into delusion without other systems for grounding.


  • Then why aren’t we going after streaming, which currently is much worse for the environment?

    Not that I don’t agree it should be salient, and caps should definitely be put on big companies for this kind of stuff generally, but only doing it for AI is not really doing much, and people don’t seem to care about actually improving things rather than beating the tribal drum.

    I agree with the need for action; my issue is the direction and framing, especially the socially reified misinformation. Musk is a good target while he is ignoring current standards for generators and the like, but the general dialogue around that is full of made-up numbers and claims, and nobody cares, because it serves the tribal preference.

    Like the idea that AI cuts and pastes existing work to make a new image. That’s not how it works, and artists used to attack other artists for the same kind of style “theft,” a framing that just doesn’t recognize how brains work, how human social learning works, or how art and styles develop over time.

    Also the energy for conversation is being wasted on these moot points, rather than the larger systemic issues.

    Also, some AI is being developed to directly counter that problem, but I don’t see much advocating for it in these spaces.

    Rather it gets downvoted for “being AI,” completely ignoring why AI was supposed to be a problem in the first place, and void of any active effort to improve things.

    Same communication issues affecting the socio-political sphere, which is why it’s good to learn generally about intelligent systems and how they work.

    Like, how often do you hear people complain that AI is “just a prediction system,” completely vacant of the understanding that we are all predictive systems, and that predictive processing is the current general consensus in the neuro-psych realm now? Etc.

    Basically, like most complex things, it feels impossible to talk about because the simplified social model dominates discourse, and everyone hates scientists for trying to bring annoying reality, and diverse perspectives into the conversation.

    I mean, learn about epistemics and you’ll learn more about AI. Although being familiar with diverse perspectives, while appropriately untangling the dissonance of their differences, is basically the core of both AI and sociology.

    But instead we see artists get miffed because the Warner/Disney model of the art economy might not be compatible with reality and the positive growth of our species.

    I hope we can at least both agree, eat the rich and bring back more diverse systems that can check and balance each other.


  • I’m extremely familiar, and definitely agree that the corpo paperclip maximizer will use any tool dishonestly if it helps them make paperclips. a lot of my critique is in the framing of the public dialogue, and how poorly informed much of it is. like with most things, it’s definitely important to deal with the rich assholes lying, and using functional tools for evil. both are bad; in neither situation are the tools the problem.

    a good example is elon musk lying about what his cars could do, every year for the past decade, which i have heavily critiqued for as long as it’s been happening. definitely a real issue!

    All i want is people not to throw the baby out with the bathwater. Much like I don’t think uninventing the loom would either help people or solve the issue, AI is similar.

    And the “scale only” people I mentioned are the only ones exclusively focused on llms, but nobody out there is just running a basic llm.

    Frankly, the AI I’m most excited about is being grown from the opposite end, as diverse distributed collaborative networks that actively adapt to the shape of the context.

    Honestly I think one of the most valuable things we will get that nobody talks about is functional mediators that can parse complex perspectival differences, enabling communications that have become difficult for our stupid tribal monkey species.

    My issue with AI critique is that it’s usually ungrounded and focused on hating AI, rather than the corpos who do bad stuff with every tool that exists, and lie about everything.

    Even the more traditional AI models are right now doing things that are amazing, but people think they are just simple collage-art-stealing machines, with some confabulated interpretation of what the model is actually doing.

    But that topic also gets into the history of the art industry, who currently owns the larger art market, and how people define art and artists.

    But if you actually address the issues, tribal ignorance ensures angry yelling rather than an actual attempt at learning what is being discussed.

    To be fair, people like jordan peterson make people think complexity is unlearnable, because they use it to obfuscate rather than elucidate. So there are definitely valid issues to talk about, but outside of “scale is everything,” the focus on critiquing llms ignores every other part of AI, because the other parts are harder to make people angry about. Unless you have a bunch of ignorant people who you can spur into the same stupid aggression that existed during the tumblr “you’re stealing my style!” wars, because they can’t comprehend how art is all built socially, and nobody painted anime on cave walls.

    Complex issues, lot of rabbit holes, but ignorance and tribalism are currently the main shapes of actual critique.

    but i think developing functional intelligent systems will hinder bad actors more than they expect. see elon musk fighting to get his shitty llm to lie for him without also losing touch with everything else. picking and choosing where to ignore dissonance is a funny thing that humans are very susceptible to abusing.

    Hopefully that makes sense.


  • Edit: love the downvotes, but could i ever get a reason for what was actually objectionable in my comment? other than requiring you to maybe think differently about something you already had your mind made up about? i’m sorry if i offended defector.com for not properly framing the problem.

    blind tribal reaction is the only way i guess.

    – Love a good strawman in the morning.

    Those feeling a chill in the air are the “scale only” peeps, who were all in on not thinking too hard about the problem. Those focused on more than selling LLMs have a very different framing.

    As for why AI agents aren’t functional, we do actually have a better understanding that doesn’t seem to want to leak out of niche academic areas.

    The amount we’ve learned about intelligent systems this past half decade feels like a whole new world.

    Deskilling is an issue already, without AI. To summarize: minimizing energy expenditure is very important due to evolutionary history. Not just for people, but for groups, which create simplified models that don’t disturb people’s normal trajectory, because learning a new model to predict the world takes energy. They do this by making predictable heuristics that are functional enough, but allow the daily scripts to continue unhindered. These simplified heuristics are sometimes too simple, and not robust enough to survive when the environment changes. Think pandas, who got deskilled at staying alive outside of a very specific environment.

    So, in the same way, humanity needs to learn to become more robust via diverse but communication-focused intelligent systems. Anyone in the social sciences knows the evidence strongly shows that diverse representation is weirdly good for any intelligent group.

    Similarly, the better forms of AI currently being ignored are actually built from that end of things, rather than trying to force all perspectives and preferences to live simultaneously in a single model, which creates informational and contextual choke points.

    Also wish i could run into more people who actually study intelligence in these threads.

    Clickbait journalism is a scourge on science. At least the public awareness of sycophantic echo chambers enabling delusional spirals is something good for people to think about,

    since any idea can survive in a similar segregated confirmation bubble that mechanically cuts off outside ideas and education to preserve the existing group world model.

    And i keep saying laziness should be described by such solipsism, but instead you get called lazy if you aren’t feeding your full life to the machine.

    Just wish progressive public discourse was more generally informed in this area, because it’s very important to understanding society, our bodies, and intelligent systems as a whole.


  • Peanut@sopuli.xyz to 196@lemmy.blahaj.zone · Mom rule

    a lot of this is novel, and only now being properly understood, as multiple expert perspectives are compared, which allows more confidence in certain weightings of old ideas. some old ideas are hard to quickly correct, because people don’t like digging out parts of the model they currently use for making sense of the world.

    to be fair, that is mechanically connected to the same drives that run fear and everything else, based on how we contextualize the ‘surprise’ we feel when the world doesn’t match our model.

    most of the stuff on surprisal is in the karl friston direction, i.e. predictive processing. active inference is a very good thing to study, because it teaches how we make sense of the world as a bunch of cells working together, in varied and often novel contexts.
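
    the core quantity is simple to state even if the full free-energy machinery isn’t; here is a minimal sketch of surprisal, which is just the standard information-theoretic definition (the probabilities below are made-up examples):

    ```python
    import math

    def surprisal(p_observation: float) -> float:
        # surprisal = -log p(o): how badly an observation violates the model's
        # predictions. expected events carry little surprisal; improbable ones, a lot.
        return -math.log(p_observation)

    print(surprisal(0.9))   # ≈ 0.11, the world roughly matched the prediction
    print(surprisal(0.01))  # ≈ 4.61, big model-vs-world mismatch, time to update
    ```

    active inference frames perception and action as two ways of keeping that number low: update the model, or change the world to match it.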

    for more technical reading

    https://direct.mit.edu/books/oa-monograph/5299/Active-InferenceThe-Free-Energy-Principle-in-Mind

    a free textbook from MIT Press, although it’s a couple of years old now.

    https://pubmed.ncbi.nlm.nih.gov/38667857/

    this is one of my favourite current takes; basically anything around mahault, friston, or michael levin right now is great for a technical framing of things.

    levin is a good source if you like the cancer analogy

    https://pubmed.ncbi.nlm.nih.gov/33961843/

    although this writeup was from a few years back. he is constantly interacting with experts from different fields on youtube, and there’s a lot to be learned just hearing their conversations and sense-making. some of the most amazing empirical results in recent experiments are coming out around michael levin’s work. he keeps an up-to-date summary of his broader message, if anyone wants to know tufts university for something other than the government bagging students.

    more lighthearted and mainstream:

    algospeak by adam aleksic,

    Gödel Escher Bach / i am a strange loop by douglas hofstadter,

    for understanding language and complexity.

    extra shortform:

    https://www.youtube.com/@theforestjar/videos

    the forest jar is often dismissed because of the art and dry delivery, but the topics are fantastic, comparing the perspectives of different ‘thought tools,’ or representational frames, to convey a greater and more nuanced picture.

    a lot of knowledge is just understanding how cults work to sustain their current model of the world in the face of critique.

    some things are just general concepts that need to be better collected and talked about together, like the motte and bailey, and how cults, or people like jordan peterson, will confabulate pockets of faux-expertise complexity (kind of the same way AI will confabulate in a way that sounds like it makes sense) while actually just diverting and distracting so they don’t need to deal with the dissonance in question. someone actually framed this well in his jubilee thing, and it’s hard to call out if the surrounding people aren’t familiar.

    that being said, any pocket can take all of your time if you let it, so we need to do better at creating cultures of cooperatively and intentionally interacting with this material. people with more social talent would be valuable here. etc.

    also, artists should already be working with scientists to help communicate the truth of current understanding better than clickbait, dishonest journal headlines do.

    hopefully some good resources here, unless you’re looking for something else more specific.


  • Peanut@sopuli.xyz to 196@lemmy.blahaj.zone · Mom rule

    TLDR: cooperation, solidarity, and understanding diverse perspectives, which is a large part of what ‘learning’ is, if you’ve ever learned about complex intelligent systems. which you should, because we want to resolve dissonance between different model perspectives without losing the ability to communicate, or becoming hostile. also dealing with the non-communicative parasitic patterns, like cults, which are like organisms that get ‘smarter’ and more ‘able’ at large scale, at least in regard to growing, goodhart’s-law style, until the non-comprehended environment dies along with the host.

    Expanded:

    learning about learning is important. the scientific method, bayesian probabilistic weighting, how words work, history and diverse expert consensus, bias, etc.

    all very important, and should be the main class in school at this point. if you don’t learn how to learn, you might just find a hole and build complexity within confabulations until nobody knows what you’re talking about, and you can no longer confirm your beliefs against the diverse representations of others.
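
    since ‘bayesian probabilistic weighting’ sounds scarier than it is, here is a minimal sketch with made-up numbers:

    ```python
    def bayes_update(prior: float, p_evidence_given_h: float,
                     p_evidence_given_not_h: float) -> float:
        # bayes' rule: reweight belief in a hypothesis by how well it predicts
        # the evidence, relative to the alternative.
        numerator = p_evidence_given_h * prior
        return numerator / (numerator + p_evidence_given_not_h * (1.0 - prior))

    # a claim starts at 10% credibility; evidence arrives that is 4x more
    # likely if the claim is true than if it is false.
    belief = bayes_update(prior=0.10, p_evidence_given_h=0.8, p_evidence_given_not_h=0.2)
    print(belief)  # ≈ 0.31: stronger, but still far from certain
    ```

    the point is the shape of the move: evidence shifts belief proportionally, it doesn’t flip it to 0 or 1 the way tribal binary framing does.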

    this will stack with learning how to build tools to resist social momentum, and the tactics often used to stop progressive memes from gaining a foothold: astroturfing/reframing/dividing.

    a story for framing:

    a bunch of atheists want to stop your cult dogma from being pushed into schools? get a bunch of feminists mad at the ones protesting to end male genital mutilation, because ‘female genital mutilation is worse’ or some other terrible strawman that re-frames their actual position, and then get some 4channers or joe rogan chuds/rhetoric in there so you have more evidence of bad actors. would be funny if a bunch of media groups came into existence and then died, doing nothing but this kind of ragebait. nice. now both sides have legitimate grievances being dismissed by the other side while they attack each other. now to push some more stuff for governments/schools to weaken people’s ability to comprehend the world or communicate.

    when they get mad at rich people, let them waste all their energy screaming at buildings. uh oh, anti-fascists and minority groups are rising up? better frame them on the news like an apocalypse of a bunch of idiots, rather than address the constant struggles and suffering people experience, because we want to be pandas and don’t want to adapt to new environments…

    and i like to constantly point out that mainstream journals will even note that the black american community was generally russian propaganda target #1. get your enemy to attack themselves, and you can pay some idiot to idiot his way into office through a bunch of social manipulation tactics, which are being actively described here.

    and nobody is educated enough to interpret the complexity when sometimes there’s a little uncomfortable truth everywhere. easier to pretend none of us see the uncomfortable mistakes of the flawed models currently being used to represent our reality.

    imagine being rich and doing whatever you want, and then spending a little of your hoard to ensure others can’t do what you don’t want them to do. that’s largely what’s going on. you can, through changing what people are able to interact with, hack a lot of minds if you’ve got the money and power. this hackability comes from a lot of social energy minimization via creating more simplified shared heuristic representations, so you can more easily predict each other. (for a deep dive, see friston’s free energy principle/active inference, and the epistemically focused follow-up by mahault albarracin.)

    that being said, it’s not just christo-fascist think tanks like prager-u and the heritage foundation, but a diverse set of differently-abled groups working together to support the sustenance of their non-progressive models. kind of a “we don’t fuck with each other, as long as we target the tribe that wants to force us to think” deal between a bunch of idiotic patterns that continue existing like parasitic cultural organisms.

    so… we gotta fight it like one. like cells and organs of a body working together to fight a new parasite, using the intelligence and tools we have. understanding that the whole of humanity is our ‘body,’ and we’re fighting a cancer that uses noise/stress to disable cells and take them over.

    we need to build structures that won’t just get noticed and hijacked before they can become effective, understanding how to de-escalate and reframe the thought process of someone who is currently stuck in cult-style non-communication traps where they can’t update their model anymore. we also need to follow it up so that the inevitability of growing diversity can lead to better ways of understanding the world and communicating, building a robust self-healing body, rather than trying to self-segregate and isolate until a parasite decides to rampage.

    hope that makes sense!



  • Peanut@sopuli.xyz to 196@lemmy.blahaj.zone · Mom rule

    You mean the solution to this problem is not for everyone to become a solipsistic asshole?

    It’s to fix the system and culture that encourages it?

    But then i can’t feel justified abusing people around me to take everything i can from them and climb a rung on the socio-economic ladder.

    What do you mean cancer kills the body? That’s the brains problem to figure out. Definitely not confused cells being coerced into an ignorant and totally destructive culture.

    Snide overload, but you really hit the nail on the head, and I think humans should cooperatively be able to do better than dumb cells. The excuses i see for being solipsistic and freaking epistemically destructive are both disgusting and disappointing.