Thursday 26 November 2015

Information on the brain


I normally nod along enthusiastically when reading the Neuroskeptic over at Discover Magazine. From critiquing brain-to-brain communication[1] to his crusade against p-hacking,[2] he rightly questions many aspects of neuroscience. However, it is tough being a skeptic, particularly when skepticism can be used to enforce entrenched ideas and block out the new. I feel the Neuroskeptic may have crossed the line into supporting possible dogma (over insightful questioning) with his discussion of new hydrocephalus (water on the brain) findings. Of interest are the cases in which the condition develops early in life and is treated (by draining the excess fluid), but is then mostly forgotten about as the person develops. What can happen is spectacular: later in life, individuals can be found to still have excess water built up in the cranial cavity. This fluid literally takes up the space which would normally be occupied by the brain, meaning the affected individual has a much smaller brain. See the Neuroskeptic’s article or this academic paper for an image of what this looks like. These images clearly convey the space inside the cranium, with the outline of white being the much-reduced brain matter.


What is astounding is that these under-the-radar cases arise because the person shows neither symptoms (swelling or sustained headaches) nor signs of mental deficiency. The paper linked above is titled Brain of a white-collar worker, underscoring that this person was functioning normally in society even with this spectacularly different brain. It is here that the discussion really starts: if a person can operate in pretty much the same way but actually have a much smaller brain, what work is the brain doing?


The Neuroskeptic critiques some interesting claims by Donald R. Forsdyke in response to this line of questioning. In particular, Forsdyke suggests that we need a radical new understanding of how the brain stores information. The specific idea he wants to overthrow is that the brain scales with the amount of information it contains, or with intellectual capability. Forsdyke then brings up the brain-size debate, isolating a pointed consideration: if brain size equates to intelligence, then men should simply be smarter than women, since men have, on average, larger brains. However, there is no evidence to support this (rather the opposite), and this is further strengthened by research showing that those with savant capabilities do not have larger brains to match their increased abilities (see Forsdyke pp. 4-5). This leads to a new approach from Forsdyke, who asks: why doesn’t size matter when it comes to the brain? This line of questioning leads to a possibly radical claim: that the information the brain uses may be stored outside of the brain.


Forsdyke presents three possible theories of how information is stored in the brain. The first is the traditional account, that of “chemical or physical form,” in which, presumably, the firing of neurons plays an important part. It is this view that Forsdyke sees as being challenged by the hydrocephalus cases. The second is that information is stored in some subatomic form we do not yet understand. In fact, the physicist Sir Roger Penrose and anesthesiologist Stuart Hameroff have proposed just such a view, suggesting that it is at the quantum level that information is stored.[3] However, even if we accept this second suggestion, it still equates more mass with more information storage. If you are missing up to 90% of brain mass, as some of the cases suggest, that is 90% fewer quantum particles, or whatever it is that is meant to store the information.


This takes us to a third option: that the information is not stored in the brain at all. Instead, it may be a form of cloud computing in which:

“The brain is seen as a receptor/transmitter of some form of electromagnetic wave/particle for which no obvious external structure (e.g., an eye) would be needed.” (Forsdyke 2015, p. 5)

In other words, the work of thinking is done outside the matter of the brain, and the brain’s role is to act as a medium for this electromagnetic wave/particle interaction. If this new role for the brain is correct, then you only need as much brain matter as the ‘antenna’ requires, which would supposedly allow for reduced brain matter without loss of function.


Faced with such a radical new take on the brain, it is not surprising that the Neuroskeptic is, well, skeptical. After all, it involves there being all these unseen interactions floating around our heads. There are other concerns too, such as: if only a small part of the brain is needed to act as an antenna, why not just have that much brain matter in the first place?


However, being skeptical doesn’t answer the question: how do we explain the cases of people acting normally with so little brain matter? The Neuroskeptic’s suggestion is that the brain matter these people do have is denser, something that has not been tested for. If the brain matter were somehow forced to compress by the fluid in the cranial cavity, then there may simply be a lot more brain than the images suggest.


While this is a possible explanation, it rings a little hollow. One of the points stressed in the Neuroskeptic’s article is the myth that we ‘only use 10% of the brain.’ While there may be some redundancy in the brain, it is not on a huge scale. This means that if the brain gets denser, it needs to get a lot denser. Losing half of the brain matter means the density must double, and if the cases of 90% depletion in brain matter are correct, this would mean a ten-fold increase. It is possible that the plasticity of the brain may allow for this, but such a claim should equally be treated with caution.
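To make the arithmetic explicit, here is a minimal sketch. It assumes the compression story in its simplest form, namely that the total amount of brain tissue is conserved while the volume shrinks:

```python
# If brain tissue is conserved while volume shrinks, density must scale
# inversely with the remaining volume fraction.
def implied_density_multiplier(remaining_volume_fraction: float) -> float:
    """Factor by which tissue density must increase to pack the same
    amount of brain into the remaining volume."""
    return 1.0 / remaining_volume_fraction

for fraction in (0.5, 0.1):  # 50% remaining, and the reported 10% remaining
    print(f"{1 - fraction:.0%} volume lost -> "
          f"density x{implied_density_multiplier(fraction):.0f}")
# 50% volume lost -> density x2
# 90% volume lost -> density x10
```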


All this means a high level of caution is needed when discussing anything related to this unusual phenomenon! But something familiar[4] I discovered while researching this topic was the motivation for the cloud-computing account of the brain. The core idea behind it is not the desire to explain missing brain matter but rather the problem of information storage. One of Forsdyke’s main sources for this is the, at best, patchy work of Simon Berkovich (2014, 1993[5]). His work on the cloud brain and on DNA has one simple focus: many of the physical structures in biology are supposed to hold more information than they seem physically capable of holding. Being an engineer and computer scientist, he is worried there simply aren’t enough ‘bits’ to store the information required.
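To give a feel for the accounting behind this worry, here is a minimal back-of-envelope sketch. The neuron and synapse counts are commonly cited round figures, and ‘a few bits per synapse’ is purely an illustrative assumption of mine, not Berkovich’s actual model:

```python
# Crude storage estimate under the traditional 'synapses store bits' picture.
neurons = 8.6e10           # commonly cited human neuron count (round figure)
synapses_per_neuron = 1e4  # rough average, also a round figure
bits_per_synapse = 4       # illustrative assumption only

total_bits = neurons * synapses_per_neuron * bits_per_synapse
print(f"~{total_bits:.1e} bits (~{total_bits / 8 / 1e12:.0f} TB)")
# ~3.4e+15 bits (~430 TB)
```

Whether a figure like that is ‘enough’ depends entirely on what you think the brain needs to store, which is precisely where Berkovich and the traditional account part ways.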


If we take such concerns seriously (and we should if we can’t explain the reduced-brain-matter cases), then there seem to be two paths we can take. One is to propose new and potentially radical ideas about information storage. The other, and my preferred option, is to question whether it is information storage (in the traditional sense) that the brain is performing.


My issue with the Neuroskeptic is this: he wants to both toe the line of the brain being an information-storage machine and the traditional line of ‘chemical or physical forms’ for storing that information. I can’t help but feel that something has to give in this respect, and that holding on to traditional ideas for tradition’s sake is hampering the potential to explain this fascinating phenomenon.



[1] For those interested, a signal is ‘taken’ from the EEG readings of the sender, while the receiver of this brain-to-brain communication feels pulses on their finger. The receiver is taught to associate different pulses with different actions in a video game the two are playing together. There is no direct inserting of thoughts or commands into the receiver’s brain.

In short, the only interesting thing is the EEG part; the rest is good old-fashioned communication.

[2] The ‘innocent’ manipulation of data to give your research a better outcome.

[3] There has even been some support for this idea recently.

[4] To those who have read Dreyfus at least.

[5] I was only able to find this article through nefarious means. The reference is: Berkovich SY (1993) On the information processing capabilities of the brain: shifting the paradigm. Nanobiology

Tuesday 10 November 2015

Face it, whether your brain is a computer or not, no one has a clue


There is nothing like disagreement to motivate you to action. Several articles have given me the motivation to reactivate this blog. The first comes from the New York Times and has the bold title of “Face it, your brain is a computer.” The author, Professor Gary Marcus, is concerned that people are losing sight of the big picture when considering the brain. Neuroscientists are too focussed on “narrow measurable phenomena” instead of addressing the larger picture. That the author is discouraging scientists from focusing on measurable phenomena is worrying even if you acknowledge that the emphasis is on the phenomena being narrow rather than measurable.[1]


Putting this concern aside, Marcus is deploying a common strategy used by those who advocate a computational theory of mind. It is summarised in the almost rhetorical question: “If the brain is not a serial algorithm-crunching machine, though, what is it?” This Fodorian, only-game-in-town-style challenge has become all too familiar to those who have studied this debate. The answer, unsurprisingly, is that of course the brain must be a computer; why else would Marcus raise the question? However, as I will show, what was once meant to be a rallying cry for a reasoned conclusion has become more and more like the 3 a.m. challenge of “wanna fight?” issued by a freshly ejected drunk.


This is not to say that Marcus doesn’t put forward a case; he does, in part. His argument revolves around dismissing three common reasons for rejecting the mind as a computer. The first is captured under the slogan ‘brains are parallel, but computers are serial.’ Marcus rightly points out that modern conceptions of computers are no longer of simple serial machines; such a view is “woefully out of date.” Instead we have multi-parallel processing and modular components, and this seems to be just the start of such developments. I readily accept the point that being computer-like does not demand a single CPU through which everything is processed.


The second argument he dismisses comes with another slogan: ‘brains are analog, while computers are digital.’ Once again I am happy to agree with Marcus’s conclusion: the brain could be operating in a digital format, or it could not, or it could be a combination of both. What is not explicitly said, but underpins the argument, is the claim that, provided the brain is an algorithm-processing machine, it doesn’t matter whether the processing is done digitally or not. As long as algorithms are being processed, the computational criteria are met. Expressed in this way, the issue is a bit hazier. For example, how can you have algorithm processing that isn’t discrete and therefore digital? But I’ll put that aside for now.


This takes us to the third argument: that computers are unable to generate emotions while the brain can. This is an argument I have encountered more and more in the university undergraduate classroom, and one that does seem to be a fallback position for many who encounter artificial intelligence through pop culture. Once again, I agree with Marcus: the parts of the brain that have been identified as core areas of emotional processing are no different from other parts of the brain, so it is very likely that they operate in a similar way. This means that if we can reproduce those other parts in a computer, for example a chess-playing one, then we should be able to reproduce the emotional parts too. As I have argued many times in the classroom, emotions seem no more special than other parts of our behaviour/mind, so they do not make the mind some special, potentially non-physical thing. It is quite rewarding each year to change a few students’ minds on this point.


So with all this agreement, why did I include the above comparison to a drunk trying to pick a fight? The reason is that, just like the drunk only challenging those she thinks she can beat, Marcus has only presented easy pickings. Note the disparity between the central claim (that the brain is an algorithm-crunching machine) and the arguments presented: none of them challenge this idea in the slightest, or even discuss it. The anti-computationalist approach, driven by the parallel-processing ideas of the 1980s and 1990s, is one in which this core idea is questioned. Through puzzles like the frame problem, people like Hubert Dreyfus (1992) claim that algorithm processing simply fails to explain human intelligence. Alternatively, roboticist Rodney Brooks has just moved on, turning intelligence into an engineering problem and leaving the high-end picture behind.


While Marcus most likely had a particular audience in mind when he wrote the piece, I cannot help but feel it is aimed too low (e.g. the philosophy undergrad) and not at those actively arguing and reasoning for an anti-computationalist account. In doing so he is not strengthening his case.


To be fair, there is a positive proposal presented by Marcus, and I agree that the “real payoff in subscribing to the idea of a brain as a computer would come from using that idea to profitably guide research.” So what is the payoff? Marcus and his colleagues talk of the brain being made up of something like field-programmable gate arrays (FPGAs): flexible clumps of logic gates that can be configured and reconfigured to perform a range of tasks. The move here seems to be away from an idea of central processing towards a conception more consistent with neuroscience, in which different parts of the brain perform different functions. In short, rather than the brain being a serial algorithm-crunching machine, it is a parallel algorithm-crunching machine, one with several areas all performing their own tasks.
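For those unfamiliar with FPGAs, the reconfigurable-logic idea can be sketched in a few lines. This toy model is my own illustration (the class and names are hypothetical, not from Marcus’s paper): the same structure, a lookup table, behaves as different logic gates depending on how it is configured:

```python
# Toy model of a reconfigurable logic element, loosely in the spirit of an
# FPGA lookup table (LUT): one structure, many possible functions.
from itertools import product

class LogicElement:
    def __init__(self, truth_table):
        self.truth_table = truth_table  # the 'configuration': inputs -> output

    def __call__(self, a, b):
        return self.truth_table[(a, b)]

# Configure one element as AND, then reconfigure the same element as XOR.
AND = {(a, b): a & b for a, b in product((0, 1), repeat=2)}
XOR = {(a, b): a ^ b for a, b in product((0, 1), repeat=2)}

element = LogicElement(AND)
print(element(1, 1))        # 1
element.truth_table = XOR   # reconfiguration: same element, new function
print(element(1, 1))        # 0
```

The analogy Marcus seems to want is that patches of brain tissue might be repurposable in something like this way, rather than each being hard-wired for a single task.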


However, there is a real problem here, especially if we adopt the high-level analysis suggested by Marcus. Are he and his colleagues simply suggesting that the brain might be modular, with parts capable of performing different tasks as the need arises? If so, then their allegiance to Fodor is clear, though I suspect they may want to catch up on some reading that made this point a little while ago (Fodor 1983).


If they are making the stronger claim, that the modularity of mind is best explained by separate parts of the brain acting as individual algorithm-crunching machines, then we have a more interesting proposal.[2] However, on closer inspection, there is little evidence to back this up. The academic paper referred to in the New York Times piece is noticeably short and lacks clear justification for its claims. Disappointingly, the argument boils down to: the old conception of the mind as a computer is wrong, so why not try out this parallel conception? There is no support other than a few studies that the theory just happens to sit alongside, and the authors conclude before anyone has time to ask for “measurable phenomena.” They seem to be trying the old throw-it-against-the-wall-and-see-if-it-sticks method of explaining the brain.


While it is true that I may be taking my frustration with this topic out a little heavily on Marcus and his attempt to add something small to a long-standing debate, to me this article epitomises the problem with the discussion in general. Too much is assumed, beating the easy challenges is seen as a win, and attempts at being different land as far away as a gentle breeze carries an apple falling from a tree. For me, the mysteries of the brain are going to require more creativity and new ideas than many are ready to allow for. Perhaps, ultimately, it is something much simpler that I pine for: not having to argue with so many drunks at conferences as I try (mostly in vain) to present a genuine alternative to the only game in town.


[1] For full disclosure of the author’s intentions, here is the entire passage:

“A lot of neuroscientists are inclined to disregard the big picture, focusing instead on understanding narrow, measurable phenomena (like the mechanics of how calcium ions are trafficked through a single neuron), without addressing the larger conceptual question of what it is that the brain does.”

[2] Though once again not that different from claims made by others.

Saturday 26 July 2014

Intro to Philosophy of Mind

Intro to Philosophy of Mind (Minds, Brains, Machines)

Finally getting to teach my first philosophy of mind course at uni, which is both exciting and a bit daunting. I will try to track my progress in this blog as I go. The first step, already completed, was getting the reading list together. Here I tried to balance my own desire to show the problems with representational theories of mind against presenting a fairly standard picture. I also wanted to include Merleau-Ponty, who I think is an essential phenomenologist for understanding the current debate (there was also a course dedicated to Heidegger the previous semester). The mid-semester break was quite late in the year, which also helped shape the course, making the last few weeks a concentrated look at the phenomenological approach. I’ve included the full reading list below for those interested, but I’ll mention some notable inclusions.

My biggest mark on this course is that I leave out the discussion of dualism you normally have in such a course, or at the start of a philosophy of mind textbook (e.g. Kim, Braddon-Mitchell and Jackson). For me, what is interesting about dualism now is its resurgence, which stems from the argument that by now we should have a naturalised account of the mind; because we do not, there may be problems with naturalism/physicalism. This new-found doubt is more compelling to me than the historical reasons that drove Cartesian dualism. To this end I didn’t start with a passage from Descartes, but rather with Ryle’s “Descartes’ Myth”. To me this piece introduces the core ideas of the modern debate while critiquing them all at the same time. It also starts the course with impressive source material; it is fairly easy to read and sets a good philosophical tone.

Another notable quirk is my reliance on John Haugeland’s work. For me, his development from a traditional, computational account of the mind to a more Heideggerian approach is indicative of the development of Western philosophy of mind in general. Of particular interest is how, in his textbook from the 80s, he talks, as many did, of the mind as almost accounted for, as if there were just a few kinks to be ironed out. He later critiques this position, in particular in the paper “Mind Embodied and Embedded”, where he pushes his more Heideggerian approach. Additionally, Haugeland’s work reveals not only the moves away from computationalism but also how similar the theories are. If you track what is kept the same across his work, you see the base-level assumptions that I think are problematic. That said, I won’t be delving too much into that aspect, rather just showing the transition and how it demonstrates that the debate around the mind is still active and developing.

Now it’s time to go write some lectures!


Reading list
- Gilbert Ryle (1949) “Descartes’ Myth”, Chapter 1 of The Concept of Mind
- John Haugeland (1985) “The Saga of the Modern Mind”, Chapter 1 of Artificial Intelligence
- Jaegwon Kim (2010) Chapters 3 and 4 of Philosophy of Mind (3rd edition)
- David Braddon-Mitchell and Frank Jackson (2007) “Common-sense functionalism”, Chapter 3 of Philosophy of Mind and Cognition (2nd edition)
- John Heil (2012) “The Representational Theory of Mind”, Chapter 4, pp. 104-120 in Philosophy of Mind
- Ned Block (1980) “Troubles with functionalism”, Chapter 22 in Readings in Philosophy of Psychology, ed. Ned Block
- Susan Blackmore (2007) “David Chalmers” in Conversations on consciousness
- Frank Jackson (1982) “Epiphenomenal Qualia” Philosophical Quarterly 32, pp. 127–136
- William Ramsey (2013) Eliminative Materialism http://plato.stanford.edu/entries/materialism-eliminative/
- John Haugeland (1998) “Mind Embodied and Embedded” in Having Thought: Essays in the Metaphysics of Mind
- Gallagher and Zahavi (2012) “Introduction”, pp. 2-11, of The Phenomenological Mind (2nd edition)
- Maurice Merleau-Ponty Phenomenology of Perception TBD
- Gallagher and Zahavi (2012) “Perception”, Chapter 5 in The Phenomenological Mind (2nd edition)
- Taylor Carman (2005) “Sensation, Judgement, and the Phenomenal Field”, Chapter 2 in The Cambridge Companion to Merleau-Ponty

Thursday 4 April 2013

Google Nexus 7 academic review

After using the Nexus 7 since December, I thought I would do a quick write-up of how it has served my studies.

Firstly, the model I have is the 32 GB, wifi-only version (so no mobile internet), and for its price it is a powerful machine. Compared to some of the Android-based phones I have tried, it is very responsive and has an excellent quality screen. When buying it I was comparing it to the iPad mini, which is the same size though more expensive. In the end I thought the price difference was just too great and went for the Nexus, mainly because it was actually cheaper than the 16 GB iPad mini and I knew I would need the extra space.

The primary reason I got a tablet was to read journal articles. For this the screen is great: it is a nice size to both hold and read. I was worried the screen would be too small, but with the ease with which you can zoom in and out, this has never been a problem. I bought one book from the Google Play book store and it is a pleasure to read, especially with the inbuilt dictionary. This, I hear, is an old feature for e-readers, but it is one I very much appreciated.

However, when it comes to academic reading, while it is still easy to do, what the Nexus lacks is good annotation software. I have tried several apps now; the two worth mentioning are ezPDF Reader and Adobe Reader. I bought ezPDF Reader after checking out some reviews, and it does have a lot of useful features. That said, it also has some severely annoying problems. The first is that it does not support continuous scrolling of documents; it operates on a page-turning system. This is fine for reading a book, but if I am annotating articles it really is a pain. I often want to go back a page, check what I just read, highlight the last line of one page and the first line of the next, erase an annotation, and so on. For such document annotation, the lack of scrolling suddenly becomes very noticeable.

It also makes it extremely hard to select annotations you have put on the page. The texts I read are often scanned, so there is no easy option to highlight the text; you literally have to draw a straight line under what you want to emphasise. With my shaky hand this can lead to mistakes which I want to easily remove or change. For some reason this is not easy to do with ezPDF Reader: when you try to tap on an annotation to edit it, it rarely gets selected. I ended up going to a menu where all the annotations are listed and selecting the one I wanted from there. However, this process breaks your flow and is just too slow to be considered functional.

Instead, I found that Adobe Reader is in fact the best annotation software for me, and it is free! It lets me scroll continuously and lets me select annotations easily for editing. It doesn't have the full range of annotation tools that ezPDF has, but it does the basics well. I would really like something that works as well as Adobe but with more features. Something I hope will be created in the future...

I should mention that I do use a stylus when annotating texts. I just bought one from Officeworks, nothing fancy. It works well enough, though as with all capacitive screens, the stylus works, at best, most of the time. After playing with a Wii U, I much prefer the Wii U's resistive screen, but no tablets I know of use this technology.

This brings us to the other requirement of academia: putting text into the tablet. Text entry is sloooow with the Nexus, at least for me. Even with the swipe technology, it really takes a while to enter text. I am not the fastest SMS texter, so it is no surprise that I am slow, but it is at a much slower pace than I had hoped for. Don't get rid of your laptops and desktops just yet. The tablet is fine for annotations on PDFs or quick notes (Evernote is a great note-taking app), but it is just that little bit too slow for note-taking in a seminar or lecture.

I do have one final gripe to add: once the battery is low, plugging the tablet into the wall does not supply enough power to let you keep using it while it charges. This means you have to stop using the tablet even when it is plugged in. I have never had any other device require this, and I think it is a consequence of the low price. This is definitely one area where the iPad mini outclasses the Nexus 7.

Overall I find using the Nexus fun, and it is great for checking email, Facebook, Twitter, etc. It is also good for reading texts, though I have yet to find the ultimate annotation software. The main advantage for me is the move from paper to electronic copies of books and journal articles. It means I have my papers with me everywhere I go (thanks to the excellent Dropbox Android app). This has proven to be the biggest help, as I can read something on the go, then have all the highlights on that text ready when I sit down to write. As a creation tool, however, it is not quite there yet. It really should be considered part of your academic technology toolbox, rather than a do-it-all device.

Monday 4 March 2013

Cross reference

I have been busy updating my Academia.edu profile rather than this blog, so I thought I would cross-post to show I am alive.

One issue I am thinking about is whether to put up a "Gonzo Journalism" section on my Academia profile or not. It would contain the pieces I have written, and will write, for the student newspaper. These are generally playful or outright silly in nature.

Can't work out whether it would hinder or help me....

Anyway, my profile is at:
http://latrobe.academia.edu/NikAlksnis

Wednesday 9 January 2013

A trip to Barbados

In November (2012) I attended the Cavehill Chips philosophy conference at the University of the West Indies, Barbados. While everyone still insists that I only attended for a junket, it was actually a very productive trip.

The primary reason I attended was that it was a small conference specifically in my area, and it matched up with La Trobe's funding cycle. How rarely that happens!

But it was a joy to attend a conference where everyone was on the same page. Added to that, most people were like me, trying to put new ideas out there in a hostile environment. Reassurance like that goes a long way.

The second reason was that Shaun Gallagher was there and, due to the small size of the conference, I hoped I could corner him for some serious questioning! I really wanted to see his responses to some issues I had with his work. This was mainly driven by some feedback I received saying I was creating a straw man out of the phenomenological embodied position. I did get my cornering chances, and I left feeling quite assured that this was not the case. I think the tough questions I am asking are valid and something the phenomenological cogsci movement must address (namely offline cognition).

So the general feeling after the conference was: keep calm and carry on!

That said, due to the location, most attendees were from America, and I was once again reminded of just how political and tough their system is. This is not a critical comment, more one of sympathy for what people must endure to eke out even a modicum of success or recognition. Unfortunately, Australia and many European universities are headed that way.

Next post I'll say something on the actual philosophy discussed there!

Sunday 30 December 2012

New year

Just wishing all a happy new year and adding the typical pledge to update the blog more. I just got myself a Nexus 7 to help with this and to be active in the online philosophy community in general.

Will also post some thoughts on using the tablet for academia. So far the biggest lesson is that you need a stylus to annotate PDFs and other texts. Really makes the experience much more enjoyable.

But I will post shortly about the philosophical adventures I've been up to in the last few months.