

I’m not really sure I follow.
Just to be clear, I’m not justifying anything, and I’m not involved in those projects. But the examples I know involve LLMs customized/fine-tuned for specific client projects (so not used by anyone else). The clients ask for confidence scores; people on our side say it’s possible but that the score wouldn’t actually say anything about real confidence or certainty, since the models don’t have any confidence metric beyond “how likely is the next token given the previous tokens”; and the clients go “that’s fine, we want it anyway”.
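To make concrete what I mean by “how likely is the next token”: the only signal the model exposes is the softmax probability of each token it picked, and a “confidence score” is just some aggregate of those. Here’s a minimal sketch with completely made-up logits for a toy three-word vocabulary (the numbers and the aggregation choice are mine, purely for illustration) showing why this measures fluency, not truth:

```python
import math

def softmax(logits):
    # Turn raw model scores into a probability distribution over the vocab.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Toy "model" output: at each generation step we only ever get logits
# over the vocabulary. These numbers are invented for illustration.
step_logits = [
    [2.0, 0.5, -1.0],   # step 1: model strongly prefers token 0
    [0.1, 0.0, -0.2],   # step 2: model is nearly indifferent
]
chosen = [0, 0]  # the tokens the model happened to pick at each step

# The only available "confidence" signal: probability of each chosen token.
token_probs = [softmax(l)[t] for l, t in zip(step_logits, chosen)]

# A typical "confidence score" is just the sequence log-probability
# (sum of per-token log-probs). Nothing here knows whether the text is true.
seq_logprob = sum(math.log(p) for p in token_probs)

print(token_probs)
print(seq_logprob)
```

The score drops at step 2 only because the model was indifferent between tokens, not because anything it said was wrong. A confidently wrong completion gets a high score either way.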
And if you ask me, LLMs shouldn’t be used for any of the stuff they’re used for there. It just cracks me up when the solution to “the lying machine is lying to me” is to ask the lying machine how much it’s lying. And when you tell them “it’ll lie about that too”, they go “yeah, ok, that’s fine”.
And making shit up is the whole functionality of LLMs; there’s nothing there other than that. They just make shit up pretty well sometimes.
Wouldn’t that just lead to companies splitting off cheap subsidiaries, with pro-bono CEOs who get paid more by the parent company through side channels? I don’t think these kinds of laws can fix it; they’ll just find loopholes to circumvent them.
Maybe if companies were forced to be democratic, so figurehead CEOs could be ousted by the underpaid workers, but at that point it’s not capitalism, it’s socialism. And that’s how it usually goes, imo: the workable solution to capitalism turns out to be not-capitalism.