• lily33@lemmy.world

    It’s not that nobody took the time to understand. Researchers have been trying to “un-blackbox” neural networks pretty much since they have been around. It’s just an extremely complex problem.

    Logistic regression (which is like a neural network but with just one node) is pretty well understood - but even then it can sometimes learn pretty unintuitive coefficients, and it can be tricky to understand why (see the sketch below).
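
    A minimal sketch of that point, assuming scikit-learn and NumPy; the two-feature setup with correlated columns is made up purely for illustration, not taken from any particular study:

    ```python
    # Fit a one-"node" model -- logistic regression -- and inspect its learned
    # coefficients. With two strongly correlated features, the weights can come
    # out unintuitive even though the model itself is tiny and fully transparent.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 1000
    x1 = rng.normal(size=n)
    x2 = x1 + rng.normal(scale=0.1, size=n)  # nearly a copy of x1
    y = (x1 + rng.normal(scale=0.5, size=n) > 0).astype(int)  # only x1 drives y

    X = np.column_stack([x1, x2])
    model = LogisticRegression().fit(X, y)

    # The weight may be split between the two correlated columns (or one may
    # even flip sign), which is hard to guess from the data alone.
    print("coefficients:", model.coef_[0])
    print("intercept:   ", model.intercept_[0])
    ```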

    With LLMs - which are enormous by comparison - it’s simply not a tractable problem to understand in detail how they work.