Doesn’t do that for me. I have to hold left click on a link for over a second to trigger it.
And yeah, pretty decent. It can produce a basic summary of a fair amount of text pretty quickly and generally accurately. It’s not an expert wordsmith, it won’t give a deep and thoughtful analysis of the poem you pointed it at or anything, but that’s not the use case. The use case is “give me the key bullet points of this article so I can decide if I should give it more attention,” and it does that job pretty well.
This is absolutely, demonstrably, documentedly not true. It is accurate sometimes, and sometimes it shows you absolute bullshit. When will it be accurate? Who knows. So you can use it only when you don’t care about the truth, in which case why even bother? Just imagine the article said what you wanted it to say and be done with it.
Depends on model tuning. Basically, you can tune a model to hallucinate less, or to write more human-like, but not really both at the same time, at least not for a model you could expect most users to run locally. For this sort of application (summarizing text), you’d tune heavily against hallucination, because ideally your bullet points are going to mostly be made up of direct paraphrase of article text, with a very limited need for fluid writing or anything even vaguely creative.
Basically, you can tune a model to hallucinate less
You can tune it to hallucinate more, but you can’t tune it to not hallucinate at all, and that’s what matters. You need it to be “not at all” if you want it to be useful, otherwise you can never be sure that it’s not lying, and you can’t check for lies other than reading the article, which defeats the whole purpose of it.
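To be fair, “read the whole article” isn’t the only possible check. If the bullet points really are supposed to be near-paraphrases of the source (as the comment above argues), you can mechanically flag bullets whose vocabulary barely appears in the article. This is a toy sketch of my own, not anything a real summarizer does, and word overlap is a crude proxy; a bullet can pass this check and still misrepresent the article:

```python
# Toy grounding check: flag summary bullets whose words mostly don't
# occur in the source article. All names here are my own illustration.
import re

def tokens(text):
    """Lowercase word tokens, ignoring punctuation."""
    return re.findall(r"[a-z0-9']+", text.lower())

def grounding_score(bullet, article_vocab):
    """Fraction of the bullet's words that occur anywhere in the article."""
    words = tokens(bullet)
    if not words:
        return 0.0
    return sum(w in article_vocab for w in words) / len(words)

def flag_ungrounded(bullets, article, threshold=0.6):
    """Return the bullets whose overlap with the article falls below threshold."""
    vocab = set(tokens(article))
    return [b for b in bullets if grounding_score(b, vocab) < threshold]
```

So a bullet like “The mayor resigned amid scandal” against an article about a bike-lane vote would score low and get flagged, while a close paraphrase of the article sails through. It catches bullets made of whole cloth, not subtle distortions, which is exactly the limitation being argued about here.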
Doesn’t do that for me. I have to hold left click on a link for over a second to trigger it.
I misunderstood the previous comments, actually, yeah, it’s not triggered the way I assumed.
The use case is “give me the key bullet points of this article so I can decide if I should give it more attention.”, and it does that job pretty well.
I’ll put aside all the other complaints I have on my mind, because we’ve both probably gone through similar discussions and I don’t want to get bogged down in yet another, and just say that I honestly can’t imagine this being such a useful or time-saving thing in the first place. Like, was it ever a frequent problem for you to start reading an article, realise you’re not interested, and give up on it?