Yeah, I was surprised when they said it could summarize the plot and talk about the characters. To my knowledge, an LLM's only memory is its prompt, so it shouldn't be able to analyze an entire novel unless the whole thing fits in the context window. I'm guessing that if an LLM could do something like this, it would only be because the plot was already summarized at the end of the novel.
chatbots also usually have a database of key facts to query, and modern context windows can get long enough to fit an entire novel (with the right chatbot). but yeah, the author probably imagined a lot of complexity and nuance and understanding that isn't there.
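to make the "database of key facts" idea concrete, here's a toy sketch of the retrieval step (roughly what retrieval-augmented generation does). everything here is made up for illustration: the FACTS list, the word-overlap score (real systems use vector embeddings), and the build_prompt helper are all hypothetical.

```python
# Toy sketch of querying a fact store before answering.
# Real systems rank facts with vector embeddings; word overlap
# stands in here so the example runs with no dependencies.

FACTS = [  # hypothetical stand-in for the chatbot's fact store
    "The novel's narrator is unreliable.",
    "The story is set in 1920s Vienna.",
    "The author published a sequel in 2003.",
]

def score(query: str, fact: str) -> int:
    """Crude relevance: count words shared between query and fact."""
    return len(set(query.lower().split()) & set(fact.lower().split()))

def build_prompt(question: str, k: int = 2) -> str:
    """Prepend the k most relevant facts to the user's question."""
    top = sorted(FACTS, key=lambda f: score(question, f), reverse=True)[:k]
    context = "\n".join(f"- {f}" for f in top)
    return f"Relevant facts:\n{context}\n\nQuestion: {question}"

print(build_prompt("Where is the story set?"))
```

the model never "knows" the facts; the chatbot just pastes the best-matching ones into the prompt before each call.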
I once asked ChatGPT for an opinion on my blog and gave it the web address. It summarized some older posts accurately enough; it was definitely making use of the content, not just my prompt. It flattered me by saying "the author shows a curious mind." ChatGPT is good at flattery (in fact, it seems to be trained specifically to do it, and I'd guess that's part of OpenAI's marketing strategy).
For the record, yes, this is a bit narcissistic, just like googling yourself. Except you do need to google yourself every once in a while to know what certain people, like employers, are going to see when they do it. Unfortunately, I think we’re going to have to start doing the same with ChatGPT and other popular models. No, I don’t like that, either.
Yes, but actually no. LLMs can be set up in such a way that they remember previous prompts; most, if not all, AI web services don't enable this by default, if they even offer it as an option.
All of that stuff is just added to their current prompt. That’s how that function works.
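Here's a minimal sketch of what that looks like. The call_model function is a hypothetical stand-in for a real LLM API; the point is that "memory" is just the full conversation history pasted into every new prompt, and the model itself retains nothing between calls.

```python
# Minimal sketch of chatbot "memory": every previous turn is
# re-sent as part of the next prompt.

history: list[str] = []

def call_model(prompt: str) -> str:
    # Hypothetical stand-in for a real LLM API call.
    return f"(model reply to a {len(prompt)}-character prompt)"

def chat(user_message: str) -> str:
    history.append(f"User: {user_message}")
    prompt = "\n".join(history)  # the entire conversation, every time
    reply = call_model(prompt)
    history.append(f"Assistant: {reply}")
    return reply

chat("Summarize chapter one.")
print(chat("Now tell me about the main character."))  # prompt includes turn one
```

So the second question "remembers" the first only because both are in the prompt, which is also why the conversation stops fitting once it outgrows the context window.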