




  • Let’s play a little game, then. We both give each other descriptions of the projects we made, and we each try to build the other’s project using only what we can get out of ChatGPT. We send each other the chat logs after a week or so. I’ll start: the hierarchical multiscale LSTM is a stacked LSTM where each layer returns a boundary state, and when that state is true, the layer above it updates. The final layer is another LSTM that takes the hidden state from every layer and returns a final hidden state as an embedding of the whole input sequence (rough sketch below).

    I can’t do this myself, because that would break OpenAI’s terms of service, but if you make a model that won’t develop into anything, that’s fine. Now, what does your framework do?

    Here’s the paper I referenced while implementing it: https://arxiv.org/abs/1807.03595
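
    To make that concrete, here’s a rough sketch of the boundary gating I mean, in PyTorch. It’s a loose reading of the idea, not the paper’s exact formulation: the class name, the sigmoid-threshold boundary detector, and the cascaded gating are all illustrative choices.

    ```python
    import torch
    import torch.nn as nn

    class HMLSTMSketch(nn.Module):
        """Boundary-gated stacked LSTM (illustrative sketch, not the paper's exact model)."""

        def __init__(self, input_size: int, hidden_size: int, num_layers: int = 3):
            super().__init__()
            self.hidden_size = hidden_size
            self.cells = nn.ModuleList(
                nn.LSTMCell(input_size if i == 0 else hidden_size, hidden_size)
                for i in range(num_layers)
            )
            # one boundary detector per layer below the top (hypothetical head)
            self.boundary = nn.ModuleList(
                nn.Linear(hidden_size, 1) for _ in range(num_layers - 1)
            )
            # final LSTM that reads the concatenated hidden states of every layer
            self.summary = nn.LSTMCell(hidden_size * num_layers, hidden_size)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: (batch, time, input_size) -> (batch, hidden_size)
            B, T, _ = x.shape
            H = self.hidden_size
            states = [(x.new_zeros(B, H), x.new_zeros(B, H)) for _ in self.cells]
            summary_state = (x.new_zeros(B, H), x.new_zeros(B, H))
            for t in range(T):
                inp = x[:, t]
                z = x.new_ones(B, 1)  # the bottom layer updates at every step
                for i, cell in enumerate(self.cells):
                    h_new, c_new = cell(inp, states[i])
                    h_old, c_old = states[i]
                    # COPY where the layer below saw no boundary (z == 0),
                    # UPDATE where it did (z == 1)
                    states[i] = (z * h_new + (1 - z) * h_old,
                                 z * c_new + (1 - z) * c_old)
                    inp = states[i][0]
                    if i < len(self.boundary):
                        # hard 0/1 boundary; real training would need something
                        # like a straight-through estimator, omitted for brevity.
                        # Multiplying by the previous z keeps updates strictly
                        # hierarchical (a simplifying choice on my part).
                        z = z * (torch.sigmoid(self.boundary[i](inp)) > 0.5).float()
                summary_state = self.summary(
                    torch.cat([h for h, _ in states], dim=-1), summary_state
                )
            return summary_state[0]  # embedding of the whole input sequence
    ```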


  • Sorry that my personal experience with ChatGPT is ‘wrong.’ If you feel the need to insult everyone who disagrees with you, that says more about your ability to communicate than about mine. Furthermore, I think we’re talking about different levels of novelty. You haven’t told me the exact nature of the framework you developed, but the things I’ve tried to use ChatGPT for never turn out well. I do a lot of ML research, and ChatGPT simply doesn’t have the flexibility to help. I was implementing a hierarchical multiscale LSTM, and no matter what I tried, ChatGPT kept getting mixed up and implementing more popular models. ChatGPT, because of the way it learns, can only reliably interpolate between the excerpts of text it’s been trained on. So I don’t doubt ChatGPT was useful for designing your framework, since it is likely similar to other existing frameworks, but for my needs it simply does not work.





  • I’m not sure I entirely understand your argument. “We decide it exists, therefore it exists” is the basis of all science and mathematics: we form axioms based on what we observe, then extrapolate from those axioms to form a coherent logical system. While it may be a leap of logic to assume others have consciousness, it’s common decency to do so.

    On to the second argument: by “what signal is qualia” I mean the minimum number of neurons we could kill to completely remove someone’s experience of qualia. We could sever the brain stem, but that would kill an excess of cells. We could kill the sensory cortex, but that would still kill more cells than necessary. We could sever the connection between the sensory cortex and the rest of the brain, and so on. As you minimize the number of cells, you move up the hierarchy, and eventually reach the prefrontal cortex. But once you reach the prefrontal cortex, the neurons that deliver qualia and the neurons that register it can’t really be separated.

    Lastly, you said that assuming consciousness is some unique part of the universe is wrong because it cannot be demonstrably proven to exist. I can’t really argue against this, since it seems to come down to a difference in how we each experience consciousness. To me, consciousness feels palpable, and everything else feels as thin as tissue paper.


  • Here’s another way of framing it: qualia, by definition, is not measurable by any instrument, yet qualia must exist in some capacity in order for us to experience it. So we must assume either that we cannot experience qualia, or that qualia exists in a way we do not fully understand yet. Since the former is generally rejected, the latter must be true.

    You may argue that neurochemical signals are the physical manifestation of qualia, but making that assumption throws us into a trap. If qualia is neurochemical signals, which signals are they? By what definition can we precisely determine what is qualia and what is not? Are unconscious senses qualia? If we stimulated a random part of the brain, unrelated to the sensory cortex, would that create qualia? If the distribution of neurochemicals could be predicted, and the activations of neurons were deterministic as well, would calculating every stimulation in the brain be the same as consciousness?

    In both arguments, consciousness is no clearer or blurrier, so which one is correct?