The February 15, 2007 issue of the journal Neuron included an article on the "Mechanism for a Sliding Synaptic Modification Threshold," marking the 25th anniversary of the theory. It features the cover of our book, which describes the theory of synaptic plasticity with a sliding threshold.
The sliding threshold was first proposed by Bienenstock, Cooper, and Munro in 1982. It was later modified following a mathematical analysis of its consequences by Intrator and Cooper. Early experimental evidence for the sliding threshold was reported by Kirkwood et al. in 1996.
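For concreteness, here is a minimal sketch of a BCM-style update with a sliding modification threshold. It assumes the standard quadratic modification function phi(c, theta) = c(c - theta) and a threshold that tracks a running average of the squared postsynaptic activity; the rates and patterns below are illustrative choices, not values from the publications.

    import numpy as np

    def bcm_step(w, x, theta, eta=0.01, tau=100.0):
        """One BCM update; theta is the sliding modification threshold."""
        c = float(np.dot(w, x))                 # postsynaptic activity
        phi = c * (c - theta)                   # depression below theta, potentiation above
        w = w + eta * phi * x                   # Hebbian-style weight change
        theta = theta + (c ** 2 - theta) / tau  # threshold slides toward E[c^2]
        return w, theta

    # Toy run: repeated pattern presentations drive both the weights and the
    # threshold; for suitable rates the neuron becomes selective to one pattern.
    rng = np.random.default_rng(0)
    w, theta = 0.1 + 0.05 * rng.random(4), 0.05
    patterns = np.eye(4)                        # four orthogonal toy inputs
    for _ in range(10000):
        w, theta = bcm_step(w, patterns[rng.integers(4)], theta)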
A Sampling of Top-Ranking Papers in Artificial Intelligence / Machine Learning / Neural Networks
It has been argued that today's supercomputers are able to process information at a rate comparable to that of simple invertebrates. And yet, even ignoring physical constraints, no existing algorithm running on the fastest supercomputer could enable a robot to fly around a room, avoid obstacles, land upside down on the ceiling, feed, reproduce, and perform many of the other simple tasks that a housefly learns to perform without external training or supervision. The apparent ease with which flies and even much simpler biological organisms manage to survive in a constantly changing environment suggests that current machine learning and information processing could still benefit greatly from an understanding of neural computation. (See a related Conference.)
My interests include theoretical studies of learning and memory in the visual cortex, statistical aspects of learning, and their connection and application to feature extraction and pattern recognition. The study of statistical aspects of neural computation includes projection pursuit methods and learning and generalization in many-parameter models, in particular the introduction of prior knowledge during network training. Neural computation attempts to bridge the gap between neural learning on the one hand and traditional statistics and machine learning theory on the other. One would like both to contribute statistically grounded methods to neural computation and to improve and introduce new computational and statistical algorithms based on experience and knowledge gained in neural computation. Previous work has revealed the connection between a biologically motivated synaptic learning procedure and information extraction in high-dimensional spaces, and has applied related techniques to speech and object recognition. Further work on network estimation has resulted in robust algorithms for network model estimation and interpretation.
Research Projects

Novel 3D object recognition (Heinrich Bulthoff, Shimon Edelman, Nathan Intrator)
Previous work concentrated on the ability of a small network receiving 2D snapshots of 3D objects to perform rotation-invariant recognition. It was based on a sophisticated psychophysical experiment designed to distinguish between theories of object recognition. Relevant publications include:
Sensor and Expert Fusion
Books on the subject
Competitive learning. We have some new thoughts about the relevance of competitive learning and resource management in the brain. These are described in a short paper that discusses recent work on place cells in the hippocampus in the context of resource allocation. We note the need for a global learning signal and ask where it might come from. One possibility is the striatum, which integrates a variety of sensory and motor information. (This is also relevant to the more general study of the basal ganglia.) The dynamic properties of the striatum have been discussed by Gobbel.
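For readers unfamiliar with the basic mechanism, the following is a minimal sketch of standard hard winner-take-all competitive learning; it is the textbook algorithm rather than the specific model of the paper, and all parameter values are illustrative.

    import numpy as np

    def competitive_learning(data, n_units=3, eta=0.1, epochs=20, seed=0):
        """Hard winner-take-all: only the best-matching unit moves toward each
        input, so the units compete for, and end up dividing, the input space."""
        rng = np.random.default_rng(seed)
        w = data[rng.choice(len(data), n_units, replace=False)].copy()
        for _ in range(epochs):
            for x in rng.permutation(data):
                winner = np.argmin(((w - x) ** 2).sum(axis=1))
                w[winner] += eta * (x - w[winner])   # move the winner toward x
        return w

    # Toy run: the three units settle near the three clusters in the data.
    rng = np.random.default_rng(1)
    data = np.vstack([rng.normal(m, 0.1, size=(50, 2)) for m in (0.0, 1.0, 2.0)])
    centers = competitive_learning(data)

Note that the update is purely local to the winning unit; the resource-allocation question raised above is precisely where a global signal could modulate such local competition.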
An attempt to recognize faces using an interest point detector to locate the eyes and mouth, then applying the normalized image to an ANN classifier trained with BCM constraints. Relevant publication:
The focus is on methods for introducing bias (prior knowledge) into NN training. We presented a framework for introducing exploratory projection pursuit constraints into supervised network training.
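As a sketch of the general structure (not the published framework itself): the supervised cost is augmented with an exploratory projection pursuit style penalty on the hidden representation. Here a simple excess-kurtosis index stands in for the actual EPP/BCM projection measures, and the weight lam is illustrative.

    import numpy as np

    def hybrid_cost(y_true, y_pred, hidden, lam=0.1):
        """Supervised error plus an unsupervised penalty that rewards hidden
        units whose projections look non-Gaussian (i.e. "interesting")."""
        mse = np.mean((y_true - y_pred) ** 2)
        z = (hidden - hidden.mean(axis=0)) / (hidden.std(axis=0) + 1e-8)
        excess_kurtosis = np.mean(z ** 4, axis=0) - 3.0
        epp_index = -np.mean(np.abs(excess_kurtosis))  # lower = more non-Gaussian
        return mse + lam * epp_index

Minimizing such a cost trades prediction error against the "interestingness" of the hidden projections, which is one way of biasing the representation the network learns.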
There is a new buzzword, "transfer" (see the on-line bibliography), which is related to the idea of using prior knowledge during training. I have studied a related idea, incorporating additional knowledge from more difficult tasks, and have found it useful.
The use of ensemble averaging, which is a simple case of combining estimators, in NN prediction is appealing. We study specific methods for training networks optimally for ensemble averaging. I recently commented to the discussion group that it is not only important how to combine; it is crucial what to combine, and in particular to combine estimators whose errors are independent, so that the variance of the combination is reduced. One method for achieving this is injecting noise during training.
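A minimal sketch of the point, with linear least-squares members standing in for the networks of the actual studies and an illustrative noise level:

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))
    y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + 0.5 * rng.normal(size=200)

    def train_member(X, y, noise=0.3):
        """Fit one ensemble member on a noise-injected copy of the inputs, so
        that different members make partly independent errors."""
        Xn = X + noise * rng.normal(size=X.shape)
        w, *_ = np.linalg.lstsq(Xn, y, rcond=None)
        return w

    members = np.array([train_member(X, y) for _ in range(10)])
    w_ens = members.mean(axis=0)   # the ensemble-averaged estimator

    # If the members' errors were fully independent with variance s^2, the
    # averaged estimator's error variance would drop to s^2 / len(members).

The comment at the end is the whole argument: averaging only helps to the extent that the combined estimators err independently.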
We study the interpretability properties of neural networks for the purpose of model inference. This is especially relevant in clinical data analysis. A simulation study by Orna describes some recent results.

Applications

An Integrated Approach to Multi-Sensor Active Vision for Tracking and Recognition
Current projects open for new students

Neuronal coding
"Sunshine is the best disinfectant" Lewis Brandeis