• kakes@sh.itjust.works · 6 months ago

    Well, they actually can, at least to an extent. All you need to do is encode the worldstate in a way the LLM can understand, then decode the LLM’s response back into that worldstate (most examples I’ve seen use JSON to good effect).
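
    A minimal sketch of the loop I mean, in Python. Everything here is illustrative rather than from any specific game or paper; in particular, `call_llm` is a hypothetical stand-in for whatever LLM client you actually use:

    ```python
    import json

    def call_llm(prompt: str) -> str:
        # Hypothetical stand-in for whatever LLM API/client you use.
        raise NotImplementedError("wire up your LLM client here")

    def npc_decide(worldstate: dict) -> dict:
        """Encode worldstate as JSON, ask the LLM for an action, decode the reply."""
        prompt = (
            "You control an NPC in a game. Current worldstate as JSON:\n"
            + json.dumps(worldstate)
            + '\nReply with ONLY a JSON object: {"action": "<verb>", "target": "<name>"}'
        )
        reply = call_llm(prompt)
        try:
            # Decode the model's reply back into structured data the game can apply.
            return json.loads(reply)
        except json.JSONDecodeError:
            # Models sometimes break format; fall back to a safe no-op.
            return {"action": "idle", "target": ""}

    # Usage, e.g.:
    # npc_decide({"npc": {"hp": 12, "location": "tavern"}, "player_nearby": True})
    ```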

    That doesn’t seem to be the focus of most of these developers though, unfortunately.

    • bionicjoey@lemmy.ca · 6 months ago

      That assumes the model has been trained on a large corpus of that worldstate encoding, and that it understands what the worldstate means in the context of its actions and responses. That’s basically impossible with the language models we have now.

      • kakes@sh.itjust.works · 6 months ago

        I disagree. Take this paper for example, keeping in mind it’s already a year old (it used GPT-3.5-turbo).

        The basic idea is pretty solid, honestly. Representing worldstate for an LLM is essentially the same as representing it for something like a GOAP (Goal-Oriented Action Planning) system anyway, so it’s not a new idea by any stretch. A quick sketch of that parallel is below.
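
        For concreteness, here’s what I mean, assuming Python 3.9+. The fact names and the `Action` class are made up for illustration, not taken from the paper or any particular engine; the point is that a GOAP-style worldstate is just a flat set of facts, and that same structure serializes straight to JSON for an LLM:

        ```python
        import json
        from dataclasses import dataclass

        WorldState = dict[str, bool]

        @dataclass
        class Action:
            name: str
            preconditions: WorldState  # facts that must hold before the action runs
            effects: WorldState        # facts the action makes true/false afterwards

            def applicable(self, state: WorldState) -> bool:
                return all(state.get(k) == v for k, v in self.preconditions.items())

        state: WorldState = {"has_weapon": False, "enemy_visible": True}
        actions = [
            Action("pick_up_weapon", {"has_weapon": False}, {"has_weapon": True}),
            Action("attack", {"has_weapon": True, "enemy_visible": True},
                   {"enemy_visible": False}),
        ]

        # The same flat fact set a GOAP planner consumes...
        usable = [a.name for a in actions if a.applicable(state)]
        # ...drops straight into an LLM prompt as JSON, no new representation needed.
        prompt_fragment = json.dumps({"state": state, "available_actions": usable})
        print(prompt_fragment)
        ```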