Dedicated to my family.
Status: Draft. This is a first version of a whitepaper proposing a new file format: .aimm (Attention Instructing Memes). Aimm combines simple Markdown text blocks with a way to program in a varying probability of their importance.
Right now, you the reader could be spending your attention on many different things. But you are reading this post, engaging with its ideas, reflecting on them, criticizing them, or just passively skimming.
Objects of Attention
Let’s take any given moment in the life of an agent. What will be the object of attention at that time? We can abstract this down to a set of experiences $E$ (henceforth also called the environment), which comprises both its external stimuli and its internal representations of older stimuli or combinations thereof (memories, thoughts, imaginations). Experience here refers to a subset of all possible inputs (also called observations in the machine learning literature) and their representation (whether or not that is qualia).
Now, internal experiences are often triggered by external events, and it is unclear to me what one would experience without any inputs. This paper therefore focuses mostly on how the environment of an agent changes its set of experiences.
This definition allows us to ask multiple fundamental questions. First, one can compare the richness of one environment to another: the richness of $E$ is just the size of the set, $|E|$. A white room has relatively few possible things one can spend one’s attention on, while the biggest increase in $|E|$ has come through the invention of the internet. There is an interesting question here of whether the increase in $|E|$ over the course of history has been merely correlational or whether it could be a measure of civilizational progress. But for this paper it suffices to say that $|E|$ is extremely large:

$$|E| \gg 1$$
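The richness comparison above can be sketched in a few lines. This is a purely illustrative model, assuming we represent an environment as a finite set of experience labels (the example sets and the `richness` name are my own, not from the paper):

```python
# Illustrative sketch: an environment as a set of possible experiences,
# with richness defined as the size of that set.

white_room = {"wall", "floor", "ceiling", "own breathing"}
internet = {f"page_{i}" for i in range(1_000_000)}  # stand-in for a vast E

def richness(environment: set) -> int:
    """Richness of an environment is the size of its experience set, |E|."""
    return len(environment)

print(richness(white_room))  # 4
print(richness(internet))    # 1000000
```

The point is only that the two values differ by many orders of magnitude, which is what makes the selection problem below hard.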
Now, one fact that has been consistently reported by meditation practitioners is that $|A(t)| = 1$, where $A(t)$ is the set of experiences attended to at time $t$: humans can only focus on one thing at a time. While I personally believe it is more complicated than this, for an arbitrarily large definition of “experience” this is most likely true, possibly even for all agents. It definitely suffices for the current argument.
This leaves agents with a core problem though: if $|E| \gg 1$ but attention can only hold one experience at a time, which experience should the agent focus on?
Attention is a weighting over experiences
Let’s suppose our agent has a goal $g$, which is a state of the environment it prefers over all other states. The agent doesn’t have direct access to these states; all it can do is observe the environment. So a goal is the question: which actions should I take at time $t$, given that I want to observe $g$ at some later time $t_g$? Given infinite compute, the agent could just do backcasting and ask what the optimal sequence of actions is from $t$ until $t_g$. But a more tractable approach would be to use probabilities.
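To see why backcasting needs something like infinite compute, here is a minimal sketch in a toy deterministic environment. The environment, the action set, and the `backcast` helper are all hypothetical illustrations; the exhaustive search over action sequences is the point:

```python
from itertools import product

# Toy deterministic environment: states are integers, actions shift the state.
ACTIONS = {"left": -1, "stay": 0, "right": +1}

def backcast(start: int, goal: int, horizon: int):
    """Exhaustively try every action sequence up to `horizon` steps and
    return the first one ending in the goal state, else None. The search
    space grows as |ACTIONS|**horizon, which is why backcasting is only
    feasible given effectively unlimited compute."""
    for length in range(horizon + 1):
        for seq in product(ACTIONS, repeat=length):
            state = start
            for action in seq:
                state += ACTIONS[action]
            if state == goal:
                return list(seq)
    return None

print(backcast(0, 2, 4))  # ['right', 'right'] (shortest plan found first)
```

Even this toy version blows up exponentially in the horizon, which motivates the probabilistic shortcut introduced next.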
The core variable I will introduce in this paper is the Attention Probability $P_{att}$:

$$P_{att}(e \mid g, t)$$

Or in plain language: $P_{att}(e \mid g, t)$ is the attention you should put on any possible experience $e \in E$, given what you care about. This reduces the hard question of what to pay attention to into three questions:
- What is my goal? (At what time in the future do I want to experience what?)
- How rich is my environment? Can I change it so that it includes only experiences that would lead me to my goal?
- How will the attention functions of different experiences change over time? Do I need to do some things in order?
All three of these questions are still hard, but I will propose a solution that could make each a little more tractable.
In the last chapter we saw that what one should pay attention to depends on three variables: the goal $g$, the environment $E$, and time $t$.
Let’s look at the simplest case first.
Time dependent attention functions in the wild
Let’s suppose our agent is a human with a simple goal: they want to meet a friend in the evening. They also live on the same planet as the friend, so the environment contains that experience. What would the attention function for this context look like?
Or more explicitly: it is mostly 0 during the day (the friend isn’t there), rises to 1 when they meet in the evening, stays at 1 the whole time they are together, and then goes back to 0 when the friend leaves.
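This step function can be written out directly. The specific times (meeting at 19:00, parting at 23:00) are assumptions for illustration; the text only says “in the evening”:

```python
# Minimal sketch of the time-dependent attention function from the example:
# 0 during the day, 1 while the friends are together, 0 after they part.

def friend_attention(t: float) -> float:
    """Attention on the 'meet my friend' experience, as a function of the
    hour of day (hypothetical window: 19:00 to 23:00)."""
    MEET, PART = 19.0, 23.0
    return 1.0 if MEET <= t < PART else 0.0

print([friend_attention(t) for t in (12, 19, 21, 23)])  # [0.0, 1.0, 1.0, 0.0]
```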
Memes that instruct attention replicate better
A meme is a self-replicating information package
Memes have attention functions.
Attention Management as a convergent instrumental goal
Introducing an Attention Instructing file format
The motivating need for a universal attention instructing file format
The problem with recommendation silos
No meta-awareness of switching silos
Having to build multiple immune systems