# Things I Learned While Eating Cookies: Mouse-Neurons-Automatically-Know-the-Chinese-Remainder-Theorem Edition

February 3, 2011

I’ve decided to try a new feature on WLOG blog this semester. Every week the physics department hosts a colloquium where a guest speaker from some other university or research institution gives a talk to all the faculty and graduate students, as well as anyone else who’s nerdy enough to want to sit in on it. Of course, at Colorado they had something similar and I always liked to attend, in part because in addition to an interesting lecture, there were always free cookies and tea served beforehand (if you are, for some reason, needing to lure me into a trap someday, consider baking some peanut butter cookies). And I was delighted to discover when I got to Stony Brook last semester that the “cookies and colloquia” pairing seems to be a staple of the physics community at large.

So my new blogging goal for the semester, both to help ensure that I post things regularly and to give myself a chance to practice giving nontechnical explanations of science concepts, is to report to you one interesting thing I learned each week while eating the department-sponsored cookies. This week, Ila Fiete of UT-Austin was here telling us how she’s applying coding models and information theory to the neuroscience question of how mammal brains keep track of our location. If you’re a nerd with access to journal articles, you can start checking out her work for yourself here. But if you’re not, I offer the following cover version, complete with dorky pictures.

Dr. Fiete’s work covered a wide array of topics, but to me the most interesting takeaway point was this: it’s a well known fact that many animals know how to keep track of their location even independent of visual or olfactory cues. You probably know this yourself, because if you’re anything like me there were definitely times as a kid when you would go out in the yard, close your eyes, and see if you could walk over to your favorite tree just by “memory.” But the fact is you don’t have to have memorized a place to use this skill: you can be put in a room you’ve never been inside before and told to walk forward twenty feet and (assuming you’re fully sober, I suppose) you’ll probably do a good job of estimating how far you’ve walked. This ability, sometimes referred to as our vestibular sense, is likely not news to you– but what I’ll bet you *don’t* know is how we manage to do it.

If you’re like me, you would probably have guessed that our brains take in information about how fast we feel like we’re moving, keep track of how LONG we’ve been moving, and then use that information to guess how far we’ve gone. But this is only partially right. Previous research has shown that what we actually do (I guess by “we” I technically mean “lab rats,” but I believe this is widely assumed to be true of all mammals) is set up a grid of “benchmarks” in our heads and then use a snazzy trick to figure out our actual position.

To see how this works, it’s easiest to pretend for the moment that we’re only allowed to move back and forth on a straight line. What researchers have found by monitoring the firing of neurons as their mice move about is that there are clusters of neurons which will fire every time the mouse has travelled a certain distance. This sets up a sequence of markers in the mouse’s head at regular intervals along the line, like so:

The black line represents the space where the mouse is allowed to move, and the red lines indicate the points where a cluster of neurons in his head would fire.

Now, the mouse *could* guess where he’s at along the line just using these marks, by counting how many times the neurons had fired or something like that. But this wouldn’t allow him to make a very *accurate* guess about his location unless he were exactly on one of the benchmarks. So this is where our brains do something very ingenious (something that, ironically, my brain could never come up with if it were trying to design itself). In addition to this first set of benchmarks, there are several other sets of neurons which each fire at slightly different intervals, setting up *several* sets of benchmarks, all equally spaced relative to each other but all with different distances between them, like so:

Hmm, that’s a little visually confusing, so let’s separate it out into three different sets of benchmarks (but remember, they are all just different ways of partitioning the same line in space where the mouse is allowed to move).

There, that’s a lot clearer. See what I mean now? So now comes the clever point. Instead of having to record the entire distance travelled along the line, the mouse’s brain only records how far he is from the nearest blue benchmark, how far from the nearest green benchmark, etc. And this is enough for him to figure out *exactly* where he is, as long as he has enough benchmarking schemes in place. After all, if all he knows is that he’s, let’s say, two feet in front of the nearest benchmark, then he could be at any one of several places:

But, if, for example, he also knows that he’s one foot behind the nearest green benchmark, and only a few inches in front of the nearest blue marker, then that’s a different story:

At first glance it may seem as though this has only expanded the list of possible locations, but in fact, there’s now only *one* place along the line where all three conditions are satisfied, and all of a sudden, our mouse knows exactly where he is. Of course, in real life, he has to use an advanced version of this process that works in two or even three dimensions, but the underlying idea is exactly the same*.
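To make the trick concrete, here is a toy sketch in Python. The spacings of 3, 4, and 5 feet are made up for illustration (they are not the actual neural intervals), but the logic is the same: each scheme stores only the offset from its nearest benchmark, and the combination of offsets picks out a single position.

```python
# Toy sketch of the benchmark scheme (my illustration, not Dr. Fiete's
# actual model): each cluster of neurons only "remembers" the distance
# to its nearest benchmark, i.e. the position modulo that spacing.

SPACINGS = [3, 4, 5]  # hypothetical benchmark spacings, in feet


def encode(position):
    """Store only the offset from the last benchmark in each scheme."""
    return [position % s for s in SPACINGS]


def decode(offsets, max_range=60):
    """Brute-force search for every position whose offsets match all
    three schemes at once. Below 3*4*5 = 60 feet there is exactly one."""
    return [x for x in range(max_range) if encode(x) == offsets]


# An offset of 2 in a single scheme is ambiguous...
print([x for x in range(60) if x % 3 == 2])   # many candidate positions
# ...but all three offsets together pin the mouse down to one spot.
print(decode([2, 2, 4]))                      # -> [14]
```

The key fact (this is where the Chinese Remainder Theorem comes in) is that as long as the spacings share no common factor, the combined offsets identify a unique position anywhere within the product of the spacings.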

Now it may seem unusual that our brains choose to store the information this way. In fact, it’s even *more* surprising when you understand the mathematics required to calculate your position. To convert this kind of benchmark data into an exact location by hand, you have to use a clever, two-dimensional and continuous version of something called the Chinese Remainder Theorem. It’s a nasty mix of geometry and number theory that would take even a really bright mathematician quite a bit of time to work out. Nevertheless, our brains seem to have evolved to do this computation somewhat automatically, and in her latest paper, Dr. Fiete thinks she has a good idea why. First, data stored this way turns out to be much easier to error-check, meaning that if a neuron accidentally fires or fails to fire, there’s a process that can be used most of the time to determine whether a mistake was made. Dr. Fiete says that since her team predicted that this kind of error checking might occur, a number of researchers have found evidence of neural architecture that would do just that. Of course, error checking can be done with other techniques, but Fiete has shown using information theory that these alternatives all require substantially more brain cells.
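Here is one way the error-checking idea can be sketched (my own illustration of the general principle, not the construction from the paper): add one more benchmark spacing than you need to cover the range. A stray misfire then usually decodes to an “illegal” position outside the range the mouse could actually occupy, which flags the error.

```python
# Hedged sketch of redundancy-based error detection. The spacings 3, 4,
# and 5 already cover a 60-foot range; the spacing of 7 is pure redundancy.
# All values are made up for illustration.

SPACINGS = [3, 4, 5, 7]
LEGAL_RANGE = 60  # positions the mouse can actually occupy


def encode(position):
    return [position % s for s in SPACINGS]


def decode(offsets):
    # Unique solution below 3*4*5*7 = 420, by the Chinese Remainder Theorem.
    for x in range(3 * 4 * 5 * 7):
        if encode(x) == offsets:
            return x


good = encode(14)   # [2, 2, 4, 0]
bad = list(good)
bad[2] = 3          # one cluster "misfires": the mod-5 offset is corrupted

print(decode(good))  # -> 14, a legal position: data looks clean
print(decode(bad))   # -> 98, outside the 60-foot range: error detected!
```

With clean data the decoded position lands inside the legal range; a single corrupted offset almost always throws it outside, so the mistake is caught.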

Secondly, the data points stored in a process like this are all of the same *size*. If you wanted to record your location just by storing, say, the number of feet you’ve travelled, then you need to be prepared to store a number that may be as small as just one or two or as large as many millions. This takes a lot of flexibility in your neurons, and it means that if you don’t have enough neurons set aside to store data and you travel a long distance, you might run out of room. On the other hand, if you only ever record the distance from the nearest benchmark, the numbers you store are always predictably small, and have a well-defined maximum size.

Finally, and perhaps most confusingly, when you store this kind of information, it is easy to manipulate. It’s hard to say exactly what I mean by this without being quite mathematical, but suffice it to say that if you’re not using this sneaky method and you want to do some kind of calculation, you may have to do some kind of borrowing or carrying, which takes extra neurons and extra time. You may know, for example, that you just walked three feet backwards, and then forwards nine, and then forward another eight. You can’t just add those numbers together without keeping track of a negative sign and then carrying a one. On the other hand, if you’re keeping track of distances to benchmarks, you can make the benchmarks close enough together that this never happens. Note also that this is distinct from keeping the numbers small– in the first case, you’re only manipulating one-digit numbers, but because the answer can turn out to be a two-digit number, it takes an extra step and an extra neuron. Because of some tricks of modular arithmetic, this never happens in the second method. Another way to say this is that the symbols encoded in your brain don’t have to be converted into numbers in order to be “added” together– they simply have to be placed next to each other. It is as if adding two and seven were as simple as writing “27,” and you can see how much faster we could do math if this were always the case.

Fiete argued that for all these reasons, our brains have evolved to use this counterintuitive benchmarking system in order to maximize the speed and efficiency with which we can determine our location, and I think she makes a pretty convincing case. It was a cool talk, and representative of exactly why I like going to the weekly colloquia: you always find yourself thinking about things you never would have thought about otherwise. Neuroscience? From a physicist? You never know. It’s almost always a pleasant surprise, and I’m looking forward to passing on a few more of them throughout the semester.

---

* Interestingly, instead of using a square grid to keep track of benchmarks in 2-d space, Fiete says research has shown that we use a system of triangular tiles instead (draw yourself an equilateral triangle, then draw another equilateral triangle off of each side, and then repeat). I asked her whether this was because the symmetry of the system somehow allowed for further simplification when the algorithm is extended to two dimensions, but she seemed to think it had more to do with the fact that it takes advantage of other bits of neural architecture which had already evolved for other reasons.

That was a really cool read. If temporal interference is the correct encoding mechanism I would be curious to see the impacts of n-dimensional space on those models (for example getting a rat to somehow play “Portal”). I wonder if that would demonstrate any implicit constraints on the model or reveal any underlying mechanisms in the encoding process.

haha, mouse portal, I love it! I will get started on writing the grant proposal this afternoon. And I think that’s a genuinely interesting question; aside from telling us more about the mechanism it could even help address the questions of whether we are uniquely adapted to live in a 3-dimensional world, and whether life as we know it could exist at all in higher dimensions. From what I understand of the mathematics of the coding mechanism, there is nothing that inherently prevents it from being generalized to n dimensions; however, the complexity of the encoding process would, as far as I can tell, go exponentially with n, so it wouldn’t take long before the number of neurons required outstripped the amount of space available, even in an n-dimensional brain!

That would be very cool. Just let me know if you get the grant! ;)

I could be wrong but I believe that in back-propagation models an insufficient number of neurons only prevents successful derivation of the underlying mechanism. Implicitly this means that the rat would have an idea of how far it had gone in each linear dimension, but the effects of each action on relative location would be misunderstood. Functionally this would force another mechanism to intercede. The rat would eventually learn a mechanism or pattern of behavior that would allow successful navigation of the maze, but it would be a procedural mechanism rather than one based on spatial reasoning. Consequently the training times would be several orders of magnitude longer than the suggested mechanisms.