

One of my best friends doesn’t like nuts. He’s very sensitive to bitterness.
Maybe some yellow onion is in your future.
(I’m not a huge radish guy, so maybe there are nuances that would make you like one and not the other, but to me they seemed similar.)
There’s a lot of stuff it can do that’s useful, it’s just all malicious: anything that requires confidently lying to someone about stuff where the fine details don’t matter. So it’s a perfect tool for scammers.
I actually ate an onion like an apple just the other day. It reminded me a lot of raw radish.
(“…a computer doesn’t, like, know what an apple is maaan…”)
I think you’re misunderstanding and/or deliberately misrepresenting the point. The point isn’t some asinine assertion, it’s a very real fundamental problem with using LLMs for any actually useful task.
If you ask a person what an apple is, they think back to their previous experiences. They know what an apple looks like, what it tastes like, what it can be used for, how it feels to hold it. They have a wide variety of experiences that form a complete understanding of what an apple is. If they have never heard of an apple, they’ll tell you they’ve never heard of it.
If you ask an LLM what an apple is, it doesn’t pull from any kind of database of information, it doesn’t pull from experiences, and it doesn’t pull from any kind of logic. Rather, it generates an answer that sounds like what a person would say in response to the question, “What is an apple?” It generates this based on nothing more than language itself. To an LLM, the only difference between an apple and a strawberry and a banana and a gibbon is that these things tend to be mentioned in different types of sentences. It is, granted, unlikely to tell you that an apple is a type of ape, but if it did, it would say so confidently and with absolutely no doubt in its mind, because it doesn’t have a mind, doesn’t have doubt, and doesn’t have any actual way to compare an apple and a gibbon that doesn’t involve analyzing the sentences in which the words appear.
The problem is that most of the language-related tasks that would be useful to automate require not just text that sounds grammatically correct but text that makes sense: text written with an understanding of the context and the meanings of the words being used.
An LLM is a very convincing Chinese room. And a Chinese room is not useful.
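If you want to see how little is going on under the hood, here’s a minimal sketch, assuming Python with the Hugging Face transformers library and the public “gpt2” checkpoint (an illustrative model, not any particular product): the whole operation is scoring which token is likely to come next, and the “answer” is just whatever scores highest, chained together one token at a time.

```python
# Minimal sketch (assumes `torch` and `transformers` are installed and the
# public "gpt2" checkpoint is available). The model never looks anything up
# about apples; it only scores which token tends to come next after the prompt.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "An apple is a"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits        # one score per vocabulary token, per position

next_token_probs = torch.softmax(logits[0, -1], dim=-1)  # distribution over the NEXT token only
top = torch.topk(next_token_probs, k=5)

# The "answer" is whichever continuation scores highest, picked token by token.
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode([token_id.item()])!r}: {prob.item():.3f}")
```

Generating a whole paragraph is just running that in a loop. At no point does anything in there check whether an apple is actually a fruit; the numbers only say which words tend to follow which.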
Barely. The food sucks, the housing is worse, and the healthcare is a maybe, if you’re lucky.

Nah it’s good energy. If you’re still using Twitter in 2026 I don’t respect you.
And a rapist would similarly downplay their crimes.
Murder isn’t categorically wrong. Some things are. Rape is always wrong; there’s no situation where it’s ok to do it.
things arent black and white morally wrong or right,
Some things absolutely are black and white morally right or wrong.
Murder isn’t something that can only be done to people.
Then you’re vegan? Because the cheapest foods are rice, beans, potatoes, etc.


I’m not going to use some weird alternative URL to remove the AI crap, just like I’m not going to append -ai or whatever it was to every Google search. I’m just not going to use these services at all. Want me as a user? Remove the AI garbage. It’s that simple.


CDPR is the game dev studio. Their parent company, CD Projekt, was the one that owned GOG. CDPR had nothing to do with it.


You fucking wish. This smarmy bullshit isn’t going to save you when our turn comes.


incredibly recent,
I’ve been telling you that Trump is a fascist who wants to do this shit since 2015. Nothing recent about it; you’re just a fascist.
Next time you’re considering posting AI garbage, don’t.
Yes he does, because you’re not fucking doing anything about it. You can say “That’s illegal!” like it’s a magic spell all you want, but Trump doesn’t care what is legal. Either you go stop him, physically, or he will keep doing whatever he feels like.