• 0 Posts
  • 82 Comments
Joined 2 years ago
Cake day: June 9th, 2023

  • How do humans answer a question? I would argue, for many, the answer for most topics would be "I am repeating what I was taught/learned/read."

    Even children aren’t expected to just repeat verbatim what they were taught. When kids are being taught verbs, they’re shown the pattern: “I run, you run, he runs; I eat, you eat, he eats.” They’re told that there’s a pattern: the “he/she/it” version has an “s” at the end. They now understand some of how verbs work in English, and can try to apply that pattern. But, even when it’s just spotting a pattern and applying the right rule, there’s still an element of understanding involved. You have to recognize that this is a “verb” situation, and that you should apply that bit about “add an ‘s’ if it’s he/she/it”.

    An LLM, by contrast, never learns any rules. Instead, it ingests every verb that has ever been recorded in English and builds up a probability table for what token comes next.
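    That “probability table” idea can be sketched in miniature. This is a hypothetical toy bigram model, not how a real LLM works (real models use neural networks over sub-word tokens), but it shows the spirit of “predict the most likely next token from counts”:

```rust
use std::collections::HashMap;

// Toy "next token" predictor: count which token follows which in a
// corpus, then predict the most frequent successor. A hypothetical
// sketch only; real LLMs learn far richer statistics than bigrams.
fn predict_next(corpus: &str, word: &str) -> String {
    let tokens: Vec<&str> = corpus.split_whitespace().collect();
    let mut table: HashMap<&str, HashMap<&str, u32>> = HashMap::new();
    for pair in tokens.windows(2) {
        *table.entry(pair[0]).or_default().entry(pair[1]).or_insert(0) += 1;
    }
    table[word]
        .iter()
        .max_by_key(|(_, count)| **count)
        .map(|(next, _)| next.to_string())
        .unwrap()
}

fn main() {
    // "he runs" outnumbers "he eats" in this corpus, so the model
    // "predicts" runs -- without ever knowing what a verb is.
    let corpus = "i run . you run . he runs . he runs . he eats .";
    println!("after 'he': {}", predict_next(corpus, "he")); // prints "after 'he': runs"
}
```

    No rule about verbs exists anywhere in that code; it’s pure counting, which is the point.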

    “but most people are not taught WHY 2+2=4”

    Everybody is taught why 2+2=4. They normally use apples. They say, “If I have 2 apples and John has 2 apples, how many apples are there in total?” It’s not simply memorizing that when you see the token “2” followed by “+”, then “2”, then “=”, the next likely token is “4”.

    If you watch little kids doing that kind of math, they do understand what’s happening because they’re often counting on their fingers. That signals that there’s a level of understanding that’s different from simply pattern matching.
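    The counting-on-fingers idea can be sketched as addition by repeated increments: a procedure you can carry out and understand, rather than an answer you look up. (A hypothetical illustration, obviously not how brains work.)

```rust
// "Counting on your fingers": build the sum one increment at a time,
// instead of recalling a memorized fact.
fn add_by_counting(a: u32, b: u32) -> u32 {
    let mut total = a;
    for _ in 0..b {
        total += 1; // raise one more finger
    }
    total
}

fn main() {
    println!("2 + 2 = {}", add_by_counting(2, 2)); // prints "2 + 2 = 4"
}
```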

    Sure, there’s a lot of pattern matching in the way human brains work too. But, fundamentally there’s also at least some amount of “understanding”. One example where humans do pattern matching is idioms. A lot of people just repeat the idiom without understanding what it really means. But, they do it in order to convey a message. They don’t do it just because it sounds like it’s the most likely thing that will be said next in the current conversation.




  • From what I understand, it’s using an LLM for coding, but taken to an extreme. Like, a regular programmer might use an LLM to help them with something, but they’ll read through the code the LLM produces, make sure they understand it, tweak it wherever it’s necessary, etc. A vibe coder might not even be a programmer, they just get the LLM to generate some code and they run the code to see if it does what they want. If it doesn’t, they talk to the LLM some more and generate some more code. At no point do they actually read through the code and try to understand it. They just run the program and see if it does what they want.


  • Tests are probably both the best and worst things to use LLMs for.

    They’re the best because of all the boilerplate. Unit tests tend to have so much of it, setting things up and tearing them down. You want that to be as consistent as possible, so that someone looking at it immediately understands what they’re seeing.

    OTOH, tests are also where you figure out how to attack your code from multiple angles. You really need to understand your code to think of all the ways it could fail. LLMs don’t understand anything, so I’d never trust one to come up with a good set of things to test.
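    To illustrate the difference, here’s a hypothetical example (the function and its cases are mine, not from any real project). The assertion boilerplate is trivial to generate; choosing which cases actually attack the code, boundaries, just-outside values, extremes, is where the understanding lives:

```rust
// Hypothetical function under test: clamp a value into 0..=100.
fn clamp_percent(x: i64) -> i64 {
    x.max(0).min(100)
}

fn main() {
    assert_eq!(clamp_percent(50), 50);       // the obvious happy path
    assert_eq!(clamp_percent(0), 0);         // boundary, low
    assert_eq!(clamp_percent(100), 100);     // boundary, high
    assert_eq!(clamp_percent(-1), 0);        // just outside, low
    assert_eq!(clamp_percent(101), 100);     // just outside, high
    assert_eq!(clamp_percent(i64::MIN), 0);  // extreme inputs
    assert_eq!(clamp_percent(i64::MAX), 100);
    println!("all cases pass");
}
```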


    Also, LLMs are essentially designed to produce code that will pass a code review. The output is designed to look as realistic as possible. So, not only do you have to look through the code for flaws, but any error is basically “camouflaged”.

    With a junior dev, sometimes their lack of experience is visible in the code. You can tell what to look at more closely based on where it looks like they’re out of their comfort zone. Whereas an LLM is always 100% in its comfort zone, but has no clue what it’s actually doing.


  • I think storyboards are a great example of how it could be used properly.

    Storyboards are a great way for someone to communicate “this is how I want it to look” in a rough way. But, a storyboard will never show up in the final movie (except maybe as fun clips during the credits or something). It’s something that helps you on your way, but along the way 100% of it is replaced.

    Similarly, the way I think of generative AI is that it’s basically a really good props department.

    In the past, if a props / graphics / FX department had to generate some text on a computer screen that looked like someone was Hacking the Planet, they’d need to come up with something that looked completely realistic. It would either be hand-crafted, or they’d just grab some open-source file and spew it out on the screen. What generative AI does is digest vast amounts of data so that it can come up with something that looks realistic for the prompt it was given.

    For something like a hacking scene, an LLM can probably generate something that’s actually much better than what the humans would make, given the time and effort required. A hacking scene that a computer security professional would find realistic is normally way beyond the required scope. But, an LLM can probably produce one that’s plausible even to a computer security professional, because of what it has been trained on. Still, it’s a prop. If there are any IP addresses or email addresses in the LLM-generated output, they may or may not work. And, for a movie prop, it might actually be worse if they do work.

    When you’re asking an AI something like “What does a selection sort algorithm look like in Rust?”, what you’re really doing is asking “What does a realistic answer to that question look like?” You’re basically asking for a prop.
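    For contrast with the “prop” version, here’s what a working, not just realistic-looking, selection sort in Rust can look like (my own sketch, so treat it as one plausible implementation rather than the canonical one):

```rust
// Selection sort: repeatedly find the smallest remaining element
// and swap it into the next position. O(n^2), but simple and in-place.
fn selection_sort(xs: &mut [i32]) {
    for i in 0..xs.len() {
        let mut min = i;
        for j in (i + 1)..xs.len() {
            if xs[j] < xs[min] {
                min = j;
            }
        }
        xs.swap(i, min);
    }
}

fn main() {
    let mut v = [3, 1, 4, 1, 5, 9, 2, 6];
    selection_sort(&mut v);
    println!("{:?}", v); // prints "[1, 1, 2, 3, 4, 5, 6, 9]"
}
```

    The difference between this and a prop isn’t visible on the surface; both look like Rust. The difference is whether it actually compiles and sorts, which you only know by checking.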

    Now, some props can be extremely realistic looking. Think of the cockpit of an airplane in a serious aviation drama. The props people will probably either build a very realistic cockpit, or maybe even buy one from a junkyard and fix it up. The prop will be realistic enough that even a pilot will look at it and say that it’s correctly laid out and accurate. Similarly, if you ask an LLM to produce code for you, sometimes it will give you something that is realistic enough that it actually works.

    Having said that, fundamentally, there’s a difference between “What is the answer to this question?” and “What would a realistic answer to this question look like?” And that’s the fundamental flaw of LLMs. Answering a question requires understanding the question. Simulating an answer just requires pattern matching.




  • I think it’s a challenge to make Superman interesting because he’s both so powerful and so good. That’s why, when I read / watch DC stuff, I like it when he’s a side character, not the main character. I think Captain America is the Marvel universe’s equivalent of Superman in terms of personality / attitude. But, he doesn’t have anywhere near Superman’s power, which is why it’s easier to write interesting stories with him.

    You’re completely right that that’s the mindblowing part of Superman – that he has all this power, but he doesn’t use it to reshape the world. Instead, he saves people from dying, whenever he can. And, he stops people from hurting other people.

    I think a lot of the conservative mindset is that there are people who are supposed to be at the top, and people who are supposed to be at the bottom. If you’re at the top, it’s because you deserve to be there. If you deserve to be there, you have the right to impose your will on the people below you. Superman is undeniably at the top in terms of power, but he refuses to wield his power. That doesn’t make sense if your world view is that might is right.


  • Yeah, that’s exactly what I was thinking about. I think in the modern world we underestimate how much “power” was being used on a daily basis before the industrial revolution. The main thing the industrial revolution gave the world is the potential to have constant, predictable power in a location that was convenient.

    Windmills and water mills could be pretty powerful. But, as you said, location was everything. And, in the case of wind, it wasn’t always predictable. It would be really cool to see, in map form, where that power was being generated, and what effect that might have had on another kind of power: political power.




    Thanks. It’s hard because there’s what conservatives say vs. what they do. They always talk about being the financially responsible side and lowering the debt. But, when they actually govern, what they tend to do is lower taxes without touching spending, causing the debt to balloon. So, they identified an actual problem, but they never actually solve it.

    That, and trying to figure out what actually counts as a conservative view these days, makes it really hard to pin down anything they’re correct on. But, I tried.