An obvious goal of connectomics is to use it as a tool for better understanding complex network function. As new data become available, it is not entirely clear what the best methods are for extracting useful information out of connectivity matrices. Part of the problem lies in the fact that while a wiring diagram is necessary for understanding nervous system function, it is hardly sufficient. The painful reality is that in complex systems like networks of neurons, small details of neuron and synapse function can dramatically alter system behavior. A recent PLoS Computational Biology paper by Eduardo Izquierdo and Randall Beer at Indiana University makes what I think is a useful contribution to the problem by incorporating as much as possible of the additional physiological data we do have into our understanding of the connectome.
They make the same observation near the beginning of the paper that I tried to emphasize in my recent paper: most graph-theoretical treatments of “connectomes” have documented global properties of the network. But biologists very often want to know about specific details of function, and current graph theory seems ill-prepared to do the kind of things we want it to do. There are two aspects to their approach. The first is to simplify the network by making a series of assumptions. The second is to use so-called evolutionary algorithms, applied to this “minimal” network, to estimate parameters that could produce the behavior. Their model of worm behavior incorporated known details about physiology and connectivity… details that could be updated as new information becomes available.
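For readers unfamiliar with the technique, the evolutionary-algorithm step amounts to a heuristic search over model parameters, scored by how well the simulated behavior matches the real thing. Here is a minimal sketch of that idea, not the authors’ actual implementation: the fitness function below is a toy stand-in (distance to a known target vector), and all names and settings are my own illustration.

```python
import random

def evolve(fitness, n_params, pop_size=20, generations=50,
           mutation_sigma=0.1, seed=0):
    """Minimal elitist evolutionary search over real-valued parameter
    vectors. `fitness` maps a parameter list to a score (higher is better)."""
    rng = random.Random(seed)
    # Start from random parameter vectors in [-1, 1].
    pop = [[rng.uniform(-1, 1) for _ in range(n_params)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Keep the best half of the population as parents.
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]
        # Refill the population with Gaussian-mutated copies of parents.
        children = [[g + rng.gauss(0, mutation_sigma)
                     for g in rng.choice(parents)]
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

# Toy fitness: in the paper's setting this would be a score of how well
# the simulated circuit reproduces chemotaxis; here it is just closeness
# to a hypothetical target parameter vector.
target = [0.5, -0.3, 0.8]
best = evolve(lambda p: -sum((a - b) ** 2 for a, b in zip(p, target)),
              n_params=3)
```

The appeal of this kind of search is exactly what the post describes: it makes no commitment to a single parameter set, and re-running it yields an ensemble of solutions consistent with the data.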
One criticism of the style of the paper… it is a difficult one to digest. It calls on experimental biologists to test its results, yet it seems to have been written mostly to appease computational biologists… I wonder if they chose the audience for the paper very wisely. From a computational perspective I can’t evaluate how novel it is, but I suspect that this is not where the primary value of the paper lies. There is a lot of vocabulary that is not really explained, and it will not be very clear to the experimental geneticist who lightly reads the paper exactly what they have done here. It is difficult to communicate this kind of work, but I think the paper could have done more to reach out to the audience that ultimately needs to be convinced of its utility. Some groups are, unfortunately, rather critical of these kinds of computational approaches to extracting information out of connectivity, and those groups should somehow be the target audience. For myself, I found the paper well worth the time it took to digest.
The first part in some ways attempts something similar to the focused centrality analysis that I used in my pharynx connectivity paper… I had attempted to find a way of objectively ranking the nervous system according to how important each node was to a particular node of interest. In principle, I could have examined just information flow between specific sensory and motor neurons, and I would have arrived at something very much like their “minimal” network. Their approach was maybe a little less objective in that they applied a series of assumptions to trim down the network. It is probably not the case, but it almost felt as though they were adding assumptions until the network size became computationally convenient. There are several problems with this approach, I think… one is that we don’t have a good understanding of how violating any of these assumptions might impact the end results. For example, they excluded synapses with a multiplicity of one… but if those were left in, would the networks have evolved differently? A larger assumption they made was that the SMD motorneurons were the only ones involved in this behavior. This assumption may or may not be reasonable. The SMDs are probably the best understood of the worm’s head/neck motorneurons… but they are not the only ones. What about SIA, SIB, RMD or SMB? If those had been included, would more interneurons (maybe AIB or RIA) have been included in the “minimal” circuit? Exclusion of these nodes may not have been warranted, and it rules out any explanations that involve them. Since we know less about their function, their exclusion prevents the approach from finding what could be something less expected and possibly more interesting. They do, however, mention in their discussion that examining some of these assumptions is one of the things worth doing in the future.
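To make the "information flow between specific sensory and motor neurons" idea concrete, here is one crude way to rank nodes by their importance to a particular source–target pair: count how many shortest directed paths from a sensory neuron to a motorneuron each interneuron sits on. This is my own sketch, not the method from either paper, and the toy wiring below is purely illustrative (the neuron names are real, but the connections are invented for the example).

```python
from collections import deque

def shortest_paths(adj, source, target):
    """All shortest directed paths from source to target (BFS + backtracking)."""
    dist = {source: 0}
    q = deque([source])
    while q:
        u = q.popleft()
        for v in adj.get(u, []):
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    if target not in dist:
        return []
    paths = []
    def walk(path):
        u = path[-1]
        if u == target:
            paths.append(path)
            return
        for v in adj.get(u, []):
            # Only step along edges that stay on a shortest path.
            if dist.get(v) == dist[u] + 1:
                walk(path + [v])
    walk([source])
    return paths

def focused_centrality(adj, source, target):
    """Fraction of shortest source->target paths each intermediate node lies on."""
    paths = shortest_paths(adj, source, target)
    counts = {}
    for p in paths:
        for node in p[1:-1]:
            counts[node] = counts.get(node, 0) + 1
    return {n: c / len(paths) for n, c in counts.items()}

# Hypothetical toy wiring from a sensory neuron to a motorneuron.
adj = {
    "ASE": ["AIY", "AIZ"],
    "AIY": ["AIZ", "RIA"],
    "AIZ": ["RIA", "SMD"],
    "RIA": ["SMD"],
}
scores = focused_centrality(adj, "ASE", "SMD")
```

A node that appears on every shortest path (here AIZ) would be an obvious member of a "minimal" circuit for that source–target pair; the point is that such a ranking can be computed objectively rather than by a series of hand-picked assumptions.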
The second part of the paper is, I think, the more important. Using evolutionary models to do a heuristic search of how the behavior might be implemented by the circuit, they have shown, it seems to me, that this approach can come up with interesting and precise observations that may help us understand the circuit better with some targeted experiments. Some of these predictions call for experiments that have not been conducted yet. I hope that someone gets around to testing them, as it would be a terrific validation for this kind of methodology.
They made five major predictions. The first is that the magnitude of turns should be larger in negative gradients than in positive gradients. This is interesting in that it points toward the need to more richly annotate behavior as a phenotype… published studies haven’t yet recorded this. The increasing use of automated worm trackers should make it easy to generate such data… it is a little surprising that they didn’t do it themselves. Two of their predictions suggest that single-neuron ablations (the nervous system is generally bilaterally symmetrical) could be insightful. Specifically, ablation of a single AIY should introduce more variance into the behavior than ablation of a single AIZ, and such ablations could also yield insight into separate predictions about the details of AIY function. Their models also predict that AIZ gap junctions are functionally more important than AIY gap junctions.
The important thing about their approach is that they don’t see their analysis as the endpoint… it is really intended to define a range of possibilities that are consistent with current data and to suggest experiments that can distinguish between them. All of the assumptions are explicit, and their models of neuron function can be modified as new data come in. By using a heuristic approach, they explore a wide parameter space… and if their model doesn’t closely match actual behavior, they can re-examine all of their assumptions. Only experimental interrogation of these hypotheses will tell whether the approach has much utility (a criticism I would level at my own paper as well)… but I for one am pretty excited about the prospect! Anyone else read the paper? What did you think?