Thursday 26 November 2015

Information on the brain

blog

I normally nod along enthusiastically when reading the Neuroskeptic over at Discover Magazine. From critiquing brain-to-brain communication[1] to his crusade against p-hacking,[2] he rightly questions many aspects of neuroscience. However, it is tough being a skeptic, particularly when skepticism can be used to enforce entrenched ideas and block out the new. I feel the Neuroskeptic may have crossed the line into supporting possible dogma (over insightful questioning) with his discussion of new hydrocephalus (water on the brain) findings. Of interest are the cases in which the condition develops early in life and is treated (by draining the excess fluid), but is then mostly forgotten about as the person grows up. What can happen is spectacular: later in life such individuals can be found to still have excess fluid built up in the cranial cavity. This fluid literally takes up the space which would normally be occupied by the brain, meaning the affected individual has a much smaller brain. See the Neuroskeptic’s article or this academic paper for an image of what this looks like. These images clearly convey the space inside the cranium, with the outline of white being the much reduced brain matter.


What is astounding is that these under-the-radar cases arise because the person shows neither symptoms (swelling or sustained headaches) nor signs of mental deficiency. The paper linked above has the title Brain of a white-collar worker to show how this person was functioning normally in society even with this spectacularly different brain. It is here that the discussion really starts: if a person can operate in pretty much the same way while actually having a much smaller brain, what work is the brain doing?


The Neuroskeptic critiques some interesting claims made by Donald R. Forsdyke in response to this line of questioning. In particular, Forsdyke suggests that we need a radically new understanding of how the brain stores information. The specific idea he wants to overthrow is that the brain scales with the amount of information it contains, or with intelligent capability. Forsdyke then brings up the brain-size debate, isolating a pointed consideration: if brain size equated to intelligence, then men should simply be smarter than women, since men have, on average, larger brains. However, there is no evidence to support this (rather the opposite), and the point is further strengthened by research showing that those with savant capabilities do not have larger brains to match their increased ability (see Forsdyke pp. 4-5). This leads to a new approach from Forsdyke, who asks: why doesn’t size matter when it comes to the brain? This line of questioning leads to a possibly radical claim: that the information the brain uses may be stored outside of the brain.


Forsdyke presents three possible theories for storing information in the brain. The first is the traditional account, that of “chemical or physical form,” in which, presumably, the firing of neurons plays an important part. It is this view that Forsdyke sees as being challenged by the hydrocephalus cases. The second is that information is stored in some subatomic form we do not yet understand. In fact, the physicist Sir Roger Penrose and the anesthesiologist Stuart Hameroff have suggested just that: it is at the quantum level that information is stored.[3] However, even if we accept this second suggestion, it still equates more mass with more information storage. If you are missing up to 90% of brain mass, as some of the cases suggest, that is 90% fewer quantum particles, or whatever it is that is meant to store the information.


This takes us to a third option, that the information is not stored in the brain. Instead it may be a form of cloud computing in which:

“The brain is seen as a receptor/transmitter of some form of electromagnetic wave/particle for which no obvious external structure (e.g., an eye) would be needed.” (Forsdyke 2015, p. 5)

In other words, the work of thinking is done outside the matter of the brain, and the brain’s role is to act as a medium for this electromagnetic wave/particle interaction. If this new role for the brain is correct, then you only need as much brain matter as the ‘antenna’ requires, which would supposedly allow for reduced brain matter without any loss of function.


Faced with such a radical new take on the brain, it is not surprising that the Neuroskeptic is, well, skeptical. After all, it involves all these unseen interactions floating around our heads. There are other concerns too, like: if only a small part of the brain is needed to act as an antenna, why not just have that much brain matter in the first place?


However, being skeptical doesn’t answer the question: how do we explain the cases of people acting normally with so little brain matter? The Neuroskeptic’s suggestion is that the brain matter these people do have is denser, something that has not been tested for. If the brain matter were somehow forced to compress by the liquid in the cranial cavity, then there may simply be a lot more brain than the images suggest.


While this is a possible explanation, it rings a little hollow. One of the points stressed in the Neuroskeptic’s article is the myth that we ‘only use 10% of the brain.’ While there may be some redundancy in the brain, it is not on a huge scale. This means that if the brain gets denser, it needs to get a lot denser. Losing half of the brain matter means the density must double, and if the cases of 90% depletion in brain matter are correct, this would mean a tenfold increase. It is possible that the plasticity of the brain allows for this, but such a claim should equally be treated with caution.
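This compression arithmetic can be sketched in a few lines of Python. This is purely my own illustration of the back-of-the-envelope reasoning above (the function name is mine, not anything from either article): if the original brain mass must fit into a reduced volume, the required density multiplier is simply one over the remaining volume fraction.

```python
def density_multiplier(remaining_fraction: float) -> float:
    """Factor by which brain tissue would need to compress
    to preserve its original mass in a reduced volume."""
    if not 0 < remaining_fraction <= 1:
        raise ValueError("remaining_fraction must be in (0, 1]")
    return 1 / remaining_fraction

# Half the volume left -> density doubles; 10% left -> tenfold increase.
for remaining in (0.5, 0.25, 0.1):
    print(f"{remaining:.0%} of volume left -> density x{density_multiplier(remaining):.0f}")
```

Nothing deep here, but it makes vivid just how steep the density claim becomes as the reported depletion approaches 90%.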


A high level of caution is needed, then, when discussing anything related to this unusual phenomenon! But something familiar[4] I discovered while researching this topic was the motivation for the cloud-computing account of the brain. The core idea behind it is not the desire to explain missing brain matter but rather the problem of information storage. One of Forsdyke’s main sources here is the, at best, patchy work of Simon Berkovich (2014, 1993[5]). His work on the cloud brain and on DNA has one simple focus: many of the physical structures in biology are meant to be holding more information than they physically seem capable of holding. Being an engineer and computer scientist, he is worried there simply aren’t enough ‘bits’ to store the information required.


If we take such concerns seriously (and we should if we can’t explain the reduced-brain-matter cases), then there seem to be two paths we can take. One is to propose new and potentially radical ideas about information storage. The other, and my preferred option, is to question whether it is information storage (in the traditional sense) that the brain is performing at all.


My issue with the Neuroskeptic is this: he wants to both toe the line of the brain being an information-storage machine and the traditional line of ‘chemical or physical forms’ for storing that information. I can’t help but feel that something has to give here, and that holding on to traditional ideas for tradition’s sake is hampering our potential to explain this fascinating phenomenon.



[1] For those interested, a signal is ‘taken’ from the EEG readout of the sender, while the receiver of this brain-to-brain communication feels a pulse on their finger. The receiver is taught to associate different pulses with different actions in a video game the two are playing together. There is no direct inserting of thoughts or commands into the receiver’s brain.

In short, the only interesting thing is the EEG part; the rest is good old-fashioned communication.

[2] The ‘innocent’ manipulation of data to give your research a better outcome.

[3] There has even been some support for this idea recently.

[4] To those who have read Dreyfus at least.

[5] I was only able to find this article through nefarious means. The reference is: Berkovich SY (1993) On the information processing capabilities of the brain: shifting the paradigm. Nanobiology

Tuesday 10 November 2015

Face it, whether your brain is a computer or not, no one has a clue

blog

There is nothing like disagreement to motivate you to action. Several articles have given me the motivation to reactivate this blog. The first comes from the New York Times and bears the bold title “Face it, your brain is a computer.” The author, Professor Gary Marcus, is concerned that people are losing sight of the big picture when considering the brain: neuroscientists are too focussed on “narrow measurable phenomena” instead of addressing the larger picture. That the author is discouraging scientists from focusing on measurable phenomena is worrying, even if you acknowledge that the emphasis is on the phenomena being narrow rather than measurable.[1]


Putting this concern aside, Marcus is deploying a common strategy used by those who advocate a computational theory of mind. It is summarised with the, almost rhetorical, question: “If the brain is not a serial algorithm-crunching machine, though, what is it?” This Fodorian, only-game-in-town-like challenge has become all too familiar to those who have studied this debate. The answer, unsurprisingly, is that of course the brain must be a computer; why else would Marcus raise the question? However, as I will show, what was once meant to be a rallying cry for a reasoned conclusion has become more and more like the 3am challenge of “wanna fight?” issued by a freshly ejected drunk.


This is not to say that Marcus doesn’t put forward a case; he does, in part. His argument revolves around dismissing three common reasons for rejecting the mind as a computer. The first is captured under the slogan ‘brains are parallel, but computers are serial.’ Marcus rightly points out that modern conceptions of computers are no longer of simple serial machines; such a view is “woefully out of date.” Instead we have parallel processing and modular components, and this seems to be just the start of such developments. I readily accept the point that being computer-like does not demand a single CPU through which everything is processed.


The second argument he dismisses comes with another slogan: ‘brains are analog, while computers are digital.’ Once again I am happy to agree with Marcus’ conclusion: the brain could be operating in a digital format, or an analog one, or a combination of both. What is not explicitly said, but underpins the argument, is the claim that, provided the brain is an algorithm-processing machine, it doesn’t matter whether the processing is done digitally or not. Just as long as algorithms are being processed, the computational criteria are met. Expressed this way, the issue is a little hazier. For example, how can you have algorithm processing that isn’t discrete and therefore digital? But I’ll put that aside for now.


This takes us to the third argument: that computers are unable to generate emotions while the brain can. This is an argument I have encountered more and more in the university undergraduate classroom, and one that seems to be a fallback position for many who encounter artificial intelligence through pop culture. Once again, I agree with Marcus: the parts of the brain that have been identified as core areas of emotional processing are no different from other parts of the brain, so it is very likely that they operate in a similar way. This means that if we can reproduce those other parts in a computer, for example a chess-playing one, then we should be able to reproduce the emotional parts too. Again, as I have argued many times in the classroom, emotions seem no more special than other parts of our behaviour/mind, so they do not make the mind some special, potentially non-physical thing. It is quite rewarding each year to change a few students’ minds on this point.


So with all this agreement, why did I include the above comparison to a drunk trying to pick a fight? The reason is that, just like the drunk who only challenges those she thinks she can beat, Marcus has only presented easy pickings. Note the disparity between the central claim, that the brain is an algorithm-crunching machine, and the arguments presented: none of them challenge this idea in the slightest, or even discuss it. The anti-computationalist approach driven by the parallel-processing ideas of the 1980s and 1990s is one in which this core idea is questioned. Through puzzles like the frame problem, people like Hubert Dreyfus (1992) claim that algorithm processing simply fails to explain human intelligence. Alternatively, the roboticist Rodney Brooks has just moved on, turning intelligence into an engineering problem and leaving the high-level picture behind.


While Marcus most likely had a particular audience in mind when he wrote the piece, I cannot help but feel it is aimed too low (e.g. at the philosophy undergrad) and not at those actively arguing and reasoning for an anti-computationalist account. In doing so he is not strengthening his case.


To be fair, there is a positive proposal from Marcus, and I agree that the “real payoff in subscribing to the idea of a brain as a computer would come from using that idea to profitably guide research.” So what is the payoff? Marcus and his colleagues talk of the brain being made up of something like field-programmable gate arrays: flexible clumps of logic gates that can be configured and reconfigured to perform a range of tasks. The move here seems to be away from an idea of central processing to a conception more consistent with neuroscience, in which different parts of the brain perform different functions. In short, rather than the brain being a serial algorithm-crunching machine, it is a parallel algorithm-crunching machine, one with several areas all performing their own tasks.
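To make the gate-array analogy concrete, here is a toy sketch of the idea in Python. This is my own illustration of how a lookup-table ‘gate’ works in general, not anything from Marcus’s paper (the class name LUT2 is mine): the same piece of ‘hardware’ computes different logical functions depending purely on its configuration bits, which is the sense in which it can be reconfigured to perform a range of tasks.

```python
class LUT2:
    """Two-input lookup-table gate: the configuration bits fully
    determine which Boolean function the gate computes."""

    INPUTS = [(0, 0), (0, 1), (1, 0), (1, 1)]

    def __init__(self, config_bits):
        # config_bits: output for each input pair, in the order above.
        self.table = dict(zip(self.INPUTS, config_bits))

    def __call__(self, a, b):
        return self.table[(a, b)]

gate = LUT2([0, 0, 0, 1])   # configured as AND
print(gate(1, 1))           # -> 1
gate = LUT2([0, 1, 1, 0])   # reconfigured as XOR: same structure, new function
print(gate(1, 1))           # -> 0
```

The design point is that function is separated from structure: nothing about the gate itself changes between AND and XOR, only its configuration, which is roughly the flexibility Marcus and colleagues are attributing to patches of cortex.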


However, there is a real problem here, especially if we adopt the high-level analysis suggested by Marcus. Are he and his colleagues simply suggesting that the brain might be modular, with parts capable of performing different tasks as the need arises? If so, their allegiance to Fodor is clear, though I suspect they may want to catch up on some reading which made this point a little while ago (Fodor 1983).


If they are making the stronger claim, that the modularity of mind is best explained by separate parts of the brain being individual algorithm-crunching machines, then we have a more interesting claim.[2] However, on closer inspection, there is little evidence to back it up. The academic paper referred to in the New York Times piece is noticeably short and lacking in clear justification for its claims. Disappointingly, the argument boils down to: the old conception of the mind as a computer is wrong, so why not try out this parallel conception? There is no support other than a few studies that the theory happens to sit alongside, and the authors conclude before anyone has time to ask for “measurable phenomena.” They seem to be trying the old throw-it-against-the-wall-and-see-if-it-sticks method of explaining the brain.


While it is true that I may be taking out my frustration with this topic a little heavily on Marcus and his own attempt to add something small to a long-standing debate, to me this article epitomises the problem with the discussion in general. Too much is assumed, beating the easy challenges is seen as a win, and attempts at being different land about as far from the orthodoxy as a gentle breeze carries an apple falling from a tree. For me, the mysteries of the brain are going to require more creativity and new ideas than many are ready to allow for. Perhaps, ultimately, it is something much simpler that I pine for: not having to argue with so many drunks at conferences as I try (mostly in vain) to present a genuine alternative to the only game in town.


[1] For full disclosure of the author’s intentions, here is the entire passage:

“A lot of neuroscientists are inclined to disregard the big picture, focusing instead on understanding narrow, measurable phenomena (like the mechanics of how calcium ions are trafficked through a single neuron), without addressing the larger conceptual question of what it is that the brain does.”

[2] Though once again not that different from claims made by others.