My thoughts on the first BRAIN initiative meeting

The first of a series of planning meetings for the BRAIN initiative took place early this week in Arlington, VA. Officially called the NSF Workshop on the Physical Principles of Brain Structure and Function, it brought many of the nation’s most prominent neuroscientists together in a stuffy hotel conference room to try to flesh out some sort of consensus about what the goals of the fuzzily defined BRAIN initiative should be. They were kind enough to provide a webcast of the event (link) and, even better, Joshua Vogelstein (of Open Connectome fame, among other things) live-tweeted a play-by-play of most of the important discussions. Watch the webcast and you’ll be able to almost taste the excitement in the air… there is a real sense that, with emerging technologies and new approaches, neuroscience is on the verge of significant leaps in our understanding of brain function. There may be some disagreement about how to get there, but the optimism is nearly universal, and likely justified.

What I would like to talk about here is what I think went right and what went wrong with the meeting, and to do so in the context of the “Synthesis and Discussion” at the end. First the good… there was a clear understanding that the scale of the data that can be collected with newer technologies far exceeds the ability of individual labs to handle it. There was also an emphasis on the need for openness of data, and for standards that permit the industrialization of data collection and facilitate the sharing of data in useful ways between people approaching neuroscience from very different disciplines. There was some agreement that data need to be collected with multiple modalities at multiple scales, and that behavior, as the primary output of a nervous system, must be quantified just as rigorously. There needs to be strong interaction between exploratory, theoretical, and experimental approaches. As a one-sentence summary, there is a clear consensus that much can be gained by collecting neuroscience data Bigger, Better, and Faster, and that technologies ripe for improvement can make this happen.

Everyone is a critic, and I am not immune to this disease, so I offer here my criticism. The biggest suggestion I would have had is that the expectations of what could be done should more often be expressed as broad questions that can be answered, rather than focusing on the toys and gadgets required to answer them. The way the question was formulated by the White House is as follows: “The brain initiative will accelerate the development of new technologies that enable researchers to produce dynamic pictures of the brain that show how individual brain cells and complex neural circuits interact at the speed of thought.” What I would have liked to have seen are some ideas for what questions can be answered in the short term as early goals of the initiative. Toward the end, I think Bob Laughlin tried to bring up this point: express things in a way that resonates with the public and that would help ensure long-term support and funding. His major point was that technology itself is not going to resonate with the public, and with this I agree. In reality, there are hundreds of questions we want to be answering, but selling the initiative to the public to procure sustained funding in a difficult environment requires the community to coalesce around more specific questions… ones the public cares about and, importantly, ones for which we can realistically deliver results. So, if anyone is listening, present the public with clearly defined goals and questions rather than simply expressing justified excitement about the technologies and the investment of funds into our favorite topics.

Another, lesser criticism is more specific to my own passion: anatomical, synapse-level maps of connectivity. Inherent in any project of this scale is a tradeoff between quality and quantity, and the general feeling of the community seems to be that the tradeoff should lean further toward quantity than is to my liking. For example, there was talk of 20 or 30 nm resolution for electron microscopy reconstructions. While this is technically sufficient for the identification of synapses, it will almost certainly result in a much higher error rate than higher-resolution imaging, and that error rate could kill our ability to make meaningful abstractions (see the sketch below). Another item on the quality-quantity tradeoff is the subject of variation between individuals. The temptation to minimize sample size is understandable… it is 10 times as hard to collect 10 times as many specimens. But these are complex systems and complex questions. Not only must variation be understood in order to correct for errors in our mapping endeavors, but the complex nature of the system makes it likely that there is signal in the structure of the variation itself. There are currently zero systems for which we have adequately described variation, C. elegans included (many don’t realize that the existing data come from composite animals; there is not a single complete dataset from a single animal!). If you have the resources, do it right!
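
To make the quality-quantity concern concrete, here is a back-of-the-envelope sketch in Python. The error rates are entirely hypothetical illustrations (not measured values from any imaging pipeline); the point is only how per-synapse detection errors compound along multi-synapse pathways in a reconstructed connectome, assuming errors are independent:

```python
# Toy illustration: how per-synapse detection errors compound along
# multi-synapse pathways. Error rates below are hypothetical, chosen
# only to show the shape of the tradeoff.

def pathway_accuracy(per_synapse_error: float, path_length: int) -> float:
    """Probability that every synapse along a pathway is correctly
    identified, assuming independent errors at each synapse."""
    return (1.0 - per_synapse_error) ** path_length

for label, err in [("lower-resolution (hypothetical 5% error)", 0.05),
                   ("higher-resolution (hypothetical 1% error)", 0.01)]:
    for n in (1, 5, 10, 20):
        print(f"{label}: {n}-synapse pathway fully correct "
              f"{pathway_accuracy(err, n):.1%} of the time")
```

The toy numbers aside, the takeaway is that error compounds multiplicatively with pathway length, so a seemingly modest difference in per-synapse accuracy can decide whether circuit-level abstractions built on the map are trustworthy.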

I’d like to end on a note of hope… there seems to be no doubt that the scale of the initiative requires a strong community, and the meeting left me optimistic that the community will be able to come together in a way that meets the task. Exciting times indeed!

 

Edit:  for those interested, here is a go-to link, regularly updated, for tons of info on the BRAIN initiative: http://empiricalplanet.blogspot.de/2013/03/bam-links.html?spref=tw&m=1
