  • I would argue that what’s going on is that they are compressing information, and it just so happens that the most compact way to represent a generative system (mathematical relations, for instance) is to model its generative structure. For example, it’s much more efficient to represent addition by figuring out how to add two numbers than by memorizing every possible pair of numbers and their sum (see the sketch below). So implicit in compression is the need to discover generalizations. But the network has limited capacity and limited “looping power”, and it doesn’t really know what a number is, so it has to figure all of this out by example, and as a result it often ends up with approximate versions of these generalizations. Thus it will often appear intelligent until it encounters something that doesn’t quite fit the approximation it came up with, and it will suddenly get something wrong that seems outside the pattern you thought it understood, because it’s hard to predict which concepts it has captured at a deep level and which it only grasps at the surface.

    In other words, I think it is “kind of” thinking, if thinking can be considered a kind of computation. It doesn’t always capture concepts completely, because it isn’t quite good enough at generalizing what it has learned, but it is good enough to appear really smart within a certain distribution of inputs.

    Which, in a way, isn’t so different from us, though it may not be the same as how we learn and naturally integrate information.
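
    To make the compression point concrete, here is a minimal sketch in Python (my own hypothetical illustration, not anything a network literally does): a lookup table for addition grows with the range it has to cover and fails outside it, while a general procedure stays the same size and works for any inputs.

      # Memorization vs. generalization for addition (hypothetical illustration).
      from itertools import product

      # "Memorization": store every (a, b) -> a + b pair up to some bound.
      # The table grows quadratically with the range it covers, and it simply
      # fails on anything it hasn't seen.
      N = 100
      lookup = {(a, b): a + b for a, b in product(range(N), repeat=2)}

      def add_memorized(a, b):
          return lookup[(a, b)]  # KeyError for pairs outside the "training" range

      # "Generalization": a fixed-size procedure (schoolbook digit-by-digit
      # addition with a carry) that captures the generative structure, so it
      # works for any non-negative inputs.
      def add_general(a, b):
          result, carry, place = 0, 0, 1
          while a or b or carry:
              d = a % 10 + b % 10 + carry
              result += (d % 10) * place
              carry, a, b, place = d // 10, a // 10, b // 10, place * 10
          return result

      print(add_memorized(12, 34))     # 46 -- inside the memorized range
      print(add_general(123456, 789))  # 124245 -- far outside it, still correct

    The second version is what “discovering the generalization” buys you: a constant-size description instead of a table that scales with the input space.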