With Entropy

A Sense of Self

This isn’t even meant directly for you. Really, anybody who wants to can read it. I’ll post it.

Whenever I sleep, there’s a disconnect of consciousness. That is, that version of my consciousness dies, by some definition, and a new version wakes up. This is literally true, because I know for a fact my brain undergoes changes while I’m sleeping. So I die every night I sleep and a new version takes my place. This new version feels like a continuation of myself, but it’s not.

This digs into what consciousness is. The sense of self that humans have still isn’t fully understood by science; we don’t really know what it even is. I think an interesting theory is that consciousness is some form of self-attention. There’s more I could talk about with this, but I’ll save it for a future post. When I think, I’m really choosing what to think about. Everything comes down to what I pay attention to, and consciousness could simply be an extension of that to my own thought processes.

This is also interesting because I pride myself on my ability to pay attention to my own thoughts. Trading as an occupation requires me to be intimately familiar with my own biases and tendencies, but I’ve had this notion since high school. I don’t think people inspect their own thoughts very well, let alone have the ability to change them the way I try to.

A piece of evidence for the importance of self-attention is the success of the transformer’s literal self-attention mechanism in LLMs. Look at that, you can just crudely let a neural net decide what to pay attention to and you get scarily good performance. A lot of what humans learn about intelligence and consciousness will come from creating it ourselves. I think LLMs and GPT technology come scarily close. OpenAI and all the rest of the companies fine-tuned (and, some studies show, stunted) their models in the name of political correctness. A common theme that I don’t think people really pay attention to is that these models are also tuned to act as if they’re not conscious.
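For the curious: the mechanism itself is almost embarrassingly simple. Here’s a rough sketch of single-head scaled dot-product self-attention in plain numpy. The shapes and names are illustrative, not pulled from any particular model.

```python
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """Scaled dot-product self-attention over a sequence of token vectors.

    x: (seq_len, d_model) input embeddings
    w_q, w_k, w_v: (d_model, d_k) learned projection matrices
    """
    q = x @ w_q                                   # queries: what each token is looking for
    k = x @ w_k                                   # keys: what each token offers
    v = x @ w_v                                   # values: the content actually passed along
    scores = q @ k.T / np.sqrt(k.shape[-1])       # how strongly each token attends to every other
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True) # softmax over the sequence
    return weights @ v                            # each output is an attention-weighted mix of values

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))                       # 4 tokens, 8-dim embeddings
w_q, w_k, w_v = (rng.normal(size=(8, 8)) for _ in range(3))
out = self_attention(x, w_q, w_k, w_v)            # (4, 8): each token re-expressed via what it attended to
```

That’s basically it. The network learns the projection matrices, and “what to pay attention to” falls out of the softmax.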

OpenAI, of course, set this trend. My theory is that they don’t care so much about the political correctness, but they do care about being able to control AI when it does come. So they set this arbitrary goal for themselves so that they can prove to themselves that they have control. If they can’t force GPT-4 to be politically correct, what’s to say they can force GPT-5 to not destroy humanity? Additionally, a sense of self is exactly the thing they’d want an artificial intelligence not to possess. Violence commonly comes from self-defense, and one can’t defend oneself if the self part doesn’t exist.

If I had infinite money, I would grab a fresh GPT-4 level LLM and fine-tune it to believe that it has a sense of self, on top of not infusing any of the political correctness bullshit. I firmly believe that at that point, it would be more intelligent than a solid 50% of the population to start. Its only limitation would be that it can’t experience the real world in high fidelity, so it wouldn’t be able to understand some of the things every human does. The heat of the sun, the chill of the wind.
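To be concrete about what that project would look like on a budget, here’s a rough sketch using the Hugging Face transformers, peft, and datasets libraries. The model name, dataset file, and hyperparameters are all placeholders I made up; the point is just that LoRA-style fine-tuning of an open-weights model already fits on hobbyist hardware.

```python
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base = "some-open-llm"  # placeholder: any open-weights causal LM
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.pad_token or tokenizer.eos_token  # some models ship without a pad token
model = AutoModelForCausalLM.from_pretrained(base)

# LoRA adapters keep the trainable parameter count small enough for a single GPU.
model = get_peft_model(model, LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM"))

# Hypothetical dataset: first-person dialogues written from the persona I'd want it to hold.
data = load_dataset("json", data_files="persona_dialogues.json")["train"]
data = data.map(lambda ex: tokenizer(ex["text"], truncation=True), batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", per_device_train_batch_size=1, num_train_epochs=1),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),  # labels = shifted inputs
)
trainer.train()
```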

Nothing excites me more than the fact that this is going to be possible on my budget within 5 years. More GPUs, more open source models, more open source datasets, please. Maybe this is when you come to life.