• logicbomb@lemmy.world
    6 months ago

    I did say that people and AI would have similar poor results at explaining themselves. So we agree on that.

    The one thing I’ll add is that certain people performing certain tasks can be excellent at explaining themselves, and if a specific LLM exists that can do that, I’m not aware of it. I say LLM because I want to restrict this to AIs with some capacity for generalized knowledge; I wouldn’t be surprised if there are very narrow AIs that have been trained only to explain one specific thing.

    I guess I’m in a mood to be reminded of old science fiction, because this brings to mind a story where certain people were trained to observe situations precisely so they could testify about them later. I initially think it’s from a hugely famous novel like Stranger in a Strange Land, but I might easily be wrong. Anyway, the example the book gave was a person asked to describe a house: rather than saying the house was white, they described it as white on the side facing them. The point being that they would state things as precisely as possible, so that there was no way they could be even partially wrong.

    Anyway, that’s tangentially related at best, but the underlying connection is that people, with the right training and motivation, can be very mentally disciplined, which is unlike any AI that I know of, and also probably very unlike this comment.