• j4k3@lemmy.world · 11 months ago

Having good mechanisms for “how to safely stop when something weird happens” is critical here, and it makes a lot of the “what do we do about weird shit in the street” problems much easier.

I don’t know how vision models may be different, but how do they know what they don’t know any differently than LLMs do? That’s the main problem with small LLMs: the next most probable token is always the next most probable token, whether the model is confident or not. Sure, there is a bit more nuance available at lower levels, but the basic problem remains. The threshold for token choice is a chosen metric, and that choice is heavily influenced by cost (rough sketch of what I mean below).

If there were more cost-effective tensor hardware, I would have bought it. I’m sure an FPGA could be used, but if it were actually more cost effective than a GPU, I think we’d all be using one already. I know there was some chip announced by IBM, but since when has IBM done anything remotely relevant in the consumer space? I think of IBM as a subsidiary of Red Hat more than anything else now.
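To make the token-threshold point concrete, here’s a minimal sketch (not from the post, all names and values are illustrative) of temperature plus nucleus (top-p) sampling. The point it shows: even when the distribution over next tokens is nearly flat, i.e. the model is effectively unsure, a token still gets picked, so low confidence never surfaces as “I don’t know” on its own.

```python
# Minimal sketch, assuming numpy only. Function names and thresholds are illustrative.
import numpy as np

def next_token(logits, temperature=0.8, top_p=0.9, rng=None):
    """Pick the next token from raw logits using temperature + nucleus (top-p) sampling."""
    rng = rng or np.random.default_rng()
    scaled = logits / temperature
    probs = np.exp(scaled - np.max(scaled))  # softmax, shifted for numerical stability
    probs /= probs.sum()
    # Keep only the smallest set of tokens whose cumulative probability reaches top_p.
    order = np.argsort(probs)[::-1]
    cum = np.cumsum(probs[order])
    keep = order[: np.searchsorted(cum, top_p) + 1]
    kept = probs[keep] / probs[keep].sum()
    return rng.choice(keep, p=kept)

# A nearly flat distribution: the model has no real preference,
# but sampling still returns *some* token with no signal that confidence was low.
flat_logits = np.array([0.10, 0.09, 0.11, 0.10, 0.10])
print(next_token(flat_logits))
```

Detecting that kind of flat distribution (e.g. by entropy or the max-probability value) is exactly the “how do they know what they don’t know” problem, and where you set that threshold is the chosen, cost-driven metric.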