• 19 Posts
  • 2.58K Comments
Joined 2 years ago
cake
Cake day: June 29th, 2023

help-circle










  • You didn’t ask me to explain anything, you said I had the chance. You’re right, I could’ve kept trying, but you didn’t ask, and I don’t owe it to you.

    I have spent far too much energy in the past trying to explain to people who aren’t listening to bother with people who are functionally no different to a brick wall. It’s exhausting and pointless.

    And on a more simple, practical level, if you don’t tell me what you found confusing about what I said, then I don’t know what you need explained. As I said, the information is there if you want to investigate any of the terms you didn’t understand. If you want my help, you are going to need to express it.

    Which is why, when I detect this behaviour, like you showed when you baldly repeated:

    “Clickbate” is not a word.

    I always stop and ask the person to express literally any curiosity to understand. In my experience people who aren’t listening won’t do this. Like I said, it would cost you nothing to ask if you actually do want to know.

    You can express that you are curious to understand what I’m saying, or you can not. That is up to you, but it’s literally free to do, and it’s all I ask.

    Do what you want.









  • Yes, they try to prevent unwanted outputs with filters that prevent the LLM from seeing your input, not by teaching the LLM, because they can’t actually do that, it doesn’t truly learn.

    Hypotheticals and such work because the LLM has no capacity to understand context. The idea that “A is inside B”, on a conceptual level, is lost on them. So the idea that a recipe for napalm is the same whether it’s framed within a hypothetical or not is impossible for them to decode. To an LLM, just wrapping the same idea in a new context makes it seem like a different thing.

    They don’t have any capacity to self-censor, so telling them not to say something is like telling a human not to think of an elephant. It doesn’t work. You can tell a human not to speak about the elephant, because that’s guarded by our internal filter, but the LLM is more like our internal processes that operate before our filters go to work. There is no separation between “thought” and output (quotes around “thought” because they don’t actually think).

    Solving this problem I think means making a conscious agent, which means the people who make these things are incentivised to work towards something that might become conscious. People are already working on something called agentic AI which has an internal self-censor, and to my thinking that’s one of the steps towards a conscious model.