• logicbomb@lemmy.world · 7 days ago

    A lot of writers just write for themselves, and don’t really think or care about what other people might think when they read it. That’s perfectly fine, by the way. Writing can be a worthwhile effort even if nobody ever reads it.

    But if you want other people to enjoy it, then you have to keep them in mind. And honestly, this sort of feedback should be invaluable to authors, assuming it’s not an AI hallucination.

      • logicbomb@lemmy.world · 7 days ago

        Yeah, I was surprised when they said it could summarize the plot and talk about the characters. To my knowledge, an LLM’s only memory is whatever fits in its prompt, so it shouldn’t be able to analyze an entire novel. I’m guessing that if an LLM could do something like this, it would only be because the plot was already summarized at the end of the novel.
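
        To put rough numbers on that, here’s a minimal sketch of the arithmetic; the ~0.75 words-per-token ratio and the window sizes are rough assumptions for illustration, not measurements:

        ```python
        # Back-of-the-envelope check: does a whole novel fit in a context window?
        # The words-per-token ratio and window sizes are rough assumptions.

        def estimated_tokens(word_count: int, words_per_token: float = 0.75) -> int:
            """Estimate token count from word count (English prose heuristic)."""
            return int(word_count / words_per_token)

        novel_words = 90_000  # a typical full-length novel

        for label, window in [("small context window", 8_192), ("long context window", 200_000)]:
            tokens = estimated_tokens(novel_words)
            verdict = "fits" if tokens <= window else "does NOT fit"
            print(f"{label}: ~{tokens:,} tokens vs {window:,} -> {verdict}")
        ```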

        • baguettefish@discuss.tchncs.de · 7 days ago

          Chatbots also usually have a database of key facts to query, and modern context windows can get very, very long (with the right chatbot). But yeah, the author probably imagined a lot of complexity and nuance and understanding that isn’t there.
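
          That “database of key facts” setup is usually retrieval-augmented generation (RAG): chunks of the text get indexed, the few most relevant to the current question are looked up, and only those get pasted into the prompt. A minimal sketch of the retrieval step, with a toy bag-of-words similarity standing in for a real embedding model (the facts and names are invented for illustration):

          ```python
          # Toy RAG-style lookup: only the retrieved facts enter the prompt.
          # The bag-of-words "embedding" is a stand-in for a real embedding model.
          import math
          import re
          from collections import Counter

          facts = [
              "Mara is the narrator's estranged sister.",
              "The locket was buried under the lighthouse in chapter 3.",
              "Chapter 12 reveals the captain faked his own death.",
          ]

          def embed(text: str) -> Counter:
              """Crude word-count vector; real systems use a neural embedding model."""
              return Counter(re.findall(r"[a-z']+", text.lower()))

          def cosine(a: Counter, b: Counter) -> float:
              dot = sum(a[w] * b[w] for w in a)
              norm = math.sqrt(sum(v * v for v in a.values()) * sum(v * v for v in b.values()))
              return dot / norm if norm else 0.0

          def retrieve(question: str, k: int = 1) -> list[str]:
              """Return the k facts most similar to the question."""
              q = embed(question)
              return sorted(facts, key=lambda f: cosine(q, embed(f)), reverse=True)[:k]

          print(retrieve("Who is Mara?"))  # only this fact would be pasted into the prompt
          ```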

        • Frezik@lemmy.blahaj.zone · edited · 7 days ago

          I once asked ChatGPT for an opinion on my blog and gave it the web address. It summarized some historical posts accurately enough, so it was definitely making use of the content and not just my prompt. It flattered me by saying “the author shows a curious mind”. ChatGPT is good at flattery (in fact, it seems to be trained specifically to do it, and this is part of OpenAI’s marketing strategy).

          For the record, yes, this is a bit narcissistic, just like googling yourself. Except you do need to google yourself every once in a while to know what certain people, like employers, are going to see when they do it. Unfortunately, I think we’re going to have to start doing the same with ChatGPT and other popular models. No, I don’t like that, either.

        • L0rdMathias@sh.itjust.works · 7 days ago

          Yes, but actually no. LLMs can be set up in such a way that they remember previous prompts; most if not all of the AI web services don’t enable this by default, if they even offer it as an option.

          • logicbomb@lemmy.world · 7 days ago

            LLMs can be set up in such a way that they remember previous prompts

            All of that stuff is just added to their current prompt. That’s how that function works.
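
            A minimal sketch of what that looks like in practice; the complete() function here is a hypothetical stand-in for a real model call:

            ```python
            # Chat "memory" is just prompt concatenation: every turn replays
            # the whole history into one big prompt for a stateless model.
            history: list[dict] = []

            def complete(prompt: str) -> str:
                """Hypothetical stand-in for a real LLM call."""
                return f"(reply generated from a {len(prompt)}-character prompt)"

            def chat(user_message: str) -> str:
                history.append({"role": "user", "content": user_message})
                prompt = "\n".join(f"{m['role']}: {m['content']}" for m in history)
                reply = complete(prompt)
                history.append({"role": "assistant", "content": reply})
                return reply

            chat("My novel opens on a lighthouse.")
            chat("What did my novel open on?")  # answerable only because turn 1 is re-sent
            ```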

    • ch00f@lemmy.world · 7 days ago

      “She listed three characters”

      AI does everything in threes. It likely picked three characters to dislike not because exactly three were bad, but because it defaults to three bullet points for everything.

    • Ech@lemmy.ca · 7 days ago

      assuming it’s not an AI hallucination.

      All output from an LLM is a “hallucination”. Generating plausible text is the core function of the algorithm; it has no separate mode for true statements.
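
      A tiny sketch of why that’s true: the model just samples the next token from a probability distribution, so a wrong answer comes out of exactly the same machinery as a right one. The distribution here is invented for illustration:

      ```python
      # Right and wrong answers come from the same sampling step.
      # This next-token distribution is invented for illustration.
      import random

      next_token_probs = {"Paris": 0.80, "Lyon": 0.15, "Atlantis": 0.05}

      def sample(probs: dict[str, float]) -> str:
          return random.choices(list(probs), weights=list(probs.values()))[0]

      # The algorithm does the identical thing whether the result is true or not:
      print("The capital of France is", sample(next_token_probs))
      ```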