Research Interests (see also Sagol School of Neuroscience)

Areas of Interest
  • Neural Computation & Cortical Plasticity
  • High-Dimensional Statistics & Pattern Recognition
  • Computer Vision
  • Sonar & Ultrasound Systems
  • Biomedical Signal Processing

Quick Links:
  • 2014 Workshop: Smart Agents & Physiological Monitoring
  • 2014 Undergraduate Seminar in EEG Brain Scanning
    (See projects for new students on the NCSP Lab page: New Projects)
  • Media coverage: On Elephants, Earthquakes and Brain Research | Reading Dogs' Minds | EEG in Education | ApplySci Discoveries

    Relevant Courses for Brain Sciences Students: Neural Computation | Workshop (Sadna) | Advanced Seminar

    Theory of Cortical Plasticity: The Sliding Threshold

    The February 15, 2007 issue of the journal Neuron included an article, "Mechanism for a Sliding Synaptic Modification Threshold," marking the 25th anniversary of this theory; the issue's cover featured our book, which describes the theory of synaptic plasticity with a sliding threshold. The sliding threshold was first proposed by Bienenstock, Cooper, and Munro in 1982 and was modified following a mathematical analysis of its consequences by Intrator and Cooper in 1992. Early experimental evidence for the sliding threshold was reported by Kirkwood et al. in 1996.
    It was a long journey before the neuroscience community began to accept the possibility that the threshold slides and placed it on the cover of one of its prominent journals. Full details of the theory can be found in the book by Leon Cooper, Nathan Intrator, Harel Shouval, and Brian Blais: Theory of Cortical Plasticity (2004).
    A short review of the BCM theory appears in Scholarpedia.
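As a minimal sketch of the sliding-threshold idea (a hypothetical toy simulation; the patterns, learning rate, and time constant below are chosen for illustration, not taken from the published work), a single linear BCM neuron driven by two alternating input patterns becomes selective to one of them, with the threshold tracking a running average of the squared response:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two orthogonal input patterns presented in random order.
patterns = np.array([[1.0, 0.0],
                     [0.0, 1.0]])
w = rng.uniform(0.1, 0.2, size=2)    # initial synaptic weights
theta = 0.0                          # sliding modification threshold
eta, tau = 0.01, 50.0                # learning rate; threshold time constant

for _ in range(20000):
    x = patterns[rng.integers(2)]
    y = w @ x                        # linear neuron response
    w += eta * y * (y - theta) * x   # BCM update: depression below theta, potentiation above
    theta += (y**2 - theta) / tau    # theta tracks E[y^2] (the sliding threshold)

responses = patterns @ w
# One pattern's response is driven toward 0, the other toward a positive
# fixed point: the neuron has become selective.
print(sorted(responses))
```

Because the threshold grows with the square of the response, it rises faster than the response itself; this is what checks runaway potentiation without hard weight bounds and produces selectivity.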


  • Object recognition
  • Face Recognition
  • Single Cell and Network Theory
  • Network Training, Algorithms, Architectures and Statistics Theory
  • Ensemble averaging
  • Interpreting Neural Network Models
  • An Integrated Approach to Multi-Sensor Active Vision for Tracking and Recognition
  • Current projects open for new students
  • NIPS 1996 Workshop: Object Features for Visual Shape Representation
  • NIPS 2000 Workshop
  • On-line publications
  • Survey on statistical pattern recognition and some connections to neural networks
    A sampling of top-ranking papers in Artificial Intelligence / Machine Learning / Neural Networks

    It has been argued that today's supercomputers process information at a rate comparable to that of simple invertebrates. And yet, even ignoring physical constraints, no existing algorithm running on the fastest supercomputer could enable a robot to fly around a room, avoid obstacles, land upside down on the ceiling, feed, reproduce, and perform many of the other simple tasks that a housefly learns to perform without external training or supervision. The apparent ease with which flies, and even much simpler biological organisms, survive in a constantly changing environment suggests that current machine learning and information processing could still benefit greatly from an understanding of neural computation. (See a related Conference.)

    My interests include theoretical studies of learning and memory in visual cortex, statistical aspects of learning, and their connection and application to feature extraction and pattern recognition. The study of statistical aspects of neural computation includes projection pursuit methods and learning and generalization in many-parameter models, in particular the introduction of prior knowledge during network training. Neural computation attempts to bridge the gap between neural learning and traditional statistics and machine learning theory: on the one hand contributing statistically grounded methods to neural computation, and on the other improving and introducing new computational and statistical algorithms based on experience gained in neural computation. Previous work revealed the connection between a biologically motivated synaptic learning procedure and information extraction in high-dimensional spaces, and applied related techniques to speech and object recognition. Further work on aspects of network estimation has produced robust algorithms for network model estimation and interpretation.

    Research Projects

    Novel 3D Object Recognition. (Heinrich Bulthoff, Shimon Edelman, Nathan Intrator)

    Previous work concentrated on the ability of a small network receiving 2D snapshots of 3D objects to perform rotation-invariant recognition. It was based on a sophisticated psychophysical experiment aimed at distinguishing between theories of object recognition. Relevant publications include:

    Sensor and Expert Fusion. Books on the subject.

    Competitive learning. We have some new thoughts about the relevance of competitive learning and resource management in the brain. These are described in a short paper that discusses recent work on place cells in the hippocampus in the context of resource allocation. We note the need for a global learning signal and ask where it might come from; one possibility is the striatum, which integrates a variety of sensory and motor information. (This is also relevant to learning about the more general area of the basal ganglia.) The dynamic properties of the striatum have been discussed by Gobbel.

    Complex cells

    Complex cells were originally proposed as detectors of binocular disparity. We argue that they are also very useful for 3D object recognition.

    Face Recognition. (Nathan Intrator, Danny Reisfeld, Hezy Yeshurun)

    An attempt to recognize faces using an interest-point detector to locate the eyes and mouth, then feeding the normalized image to an ANN classifier trained with BCM constraints. Relevant publication:

    Single Cell and Network Theory. (Leon N. Cooper, Nathan Intrator, Harel Shouval)

    Network Training, Algorithms, Architectures and Statistics Theory

    The focus is on methods for introducing bias (prior knowledge) into NN training; we presented a framework for incorporating exploratory projection pursuit constraints into a supervised network.
    One extension of this is Localized Exploratory Projection Pursuit.
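The exploratory projection pursuit idea can be sketched as a search for a unit-norm projection whose marginal distribution deviates most from Gaussian. Below is a toy example (data set, projection index, and step sizes all chosen here for illustration) using kurtosis as the index on sphered data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: direction 0 carries bimodal ("interesting") structure;
# the remaining directions are plain Gaussian noise.
n, d = 4000, 5
X = rng.normal(size=(n, d))
X[:, 0] = rng.choice([-2.0, 2.0], size=n) + 0.3 * rng.normal(size=n)
X = (X - X.mean(0)) / X.std(0)       # sphere the data, as EPP assumes

w = rng.normal(size=d)
w /= np.linalg.norm(w)
for _ in range(500):
    y = X @ w
    # Bimodal projections have kurtosis below the Gaussian value of 3,
    # so minimize E[y^4] over the unit sphere by projected gradient descent.
    grad = 4.0 * ((y ** 3) @ X) / n
    w -= 0.05 * grad
    w /= np.linalg.norm(w)           # project back onto the unit sphere

print(abs(w[0]))   # close to 1: the bimodal direction has been recovered
```

Any measure of departure from Gaussianity can serve as the projection index; kurtosis is merely the simplest differentiable choice for a sketch like this.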

    There is a new buzzword, transfer (on-line bibliography), which is related to the idea of using prior knowledge during training. I have studied a related idea, incorporating additional knowledge from more difficult tasks, and found it useful.

    The use of ensemble averaging, which is a simple case of combining estimators, is appealing in NN prediction. We study specific methods to train optimally for ensemble averaging. I recently commented to the discussion group that it is not only important how to combine but crucial what to combine: in particular, one should combine estimators whose errors are independent, so that the variance of the combination is reduced. One method for achieving this is injecting noise during training.
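The variance argument can be checked with a toy simulation (assuming zero-mean, independent, equal-variance errors; all numbers here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
true_value = 1.0
n_estimators, n_trials = 10, 5000

# Each trial draws K estimators whose errors are independent, zero-mean,
# unit-variance, e.g. as produced by injecting independent noise in training.
errors = rng.normal(0.0, 1.0, size=(n_trials, n_estimators))
single = true_value + errors[:, 0]            # one estimator alone
ensemble = true_value + errors.mean(axis=1)   # ensemble average of K

# With independent errors, averaging K estimators divides the variance by K.
print(np.var(single))    # about 1.0
print(np.var(ensemble))  # about 1/K = 0.1
```

If the errors are correlated instead, the reduction saturates at the average error covariance, which is why what to combine matters as much as how to combine.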

    Relevant publications:

    Interpreting Neural Network Models

    We study the interpretability properties of neural networks for the purpose of model inference. This is especially relevant in clinical data analysis. A simulation study by Orna describes some recent results.

    Applications

    An Integrated Approach to Multi-Sensor Active Vision for Tracking and Recognition

    Fredy Brukstein, Ron Kimel, Michael Lindenbaum, David Mendelovic, Ehud Rivlin

    Current projects open for new students

    Neuronal coding
    A fundamental question in neural computation and computer vision concerns the nature of object representations and of the representation of relationships between objects. In particular, we depend on the ability to adjust our expectations according to past context. This suggests that neurons should, in addition to detecting features in their input, transmit some information about the a-priori probability of occurrence of those features. See (Intrator, 1996) for additional information. A relevant page, which also discusses precisely timed spike trains and noise, is Noise and Natural Scene Statistics.
  • Detailed study of the statistics of natural scenes
    It is by now widely accepted that the second-order statistics of natural images are not sufficient for the analysis of learning models (visual cortical plasticity). Recently, David Mumford proposed a model that characterizes some properties of natural images, involving the exponent of an exponential family of distributions. We intend to study learning models based on this new model of natural scenes.
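A small surrogate-data experiment illustrates why second-order statistics are insufficient (a toy sketch, not Mumford's model): randomizing Fourier phases preserves the entire power spectrum, i.e. all second-order statistics, yet destroys the heavy tails characteristic of natural-image filter responses.

```python
import numpy as np

rng = np.random.default_rng(0)

# A heavy-tailed signal standing in for natural-image filter responses,
# whose marginals are famously Laplacian-like rather than Gaussian.
x = rng.laplace(0.0, 1.0, size=65536)

def kurtosis(v):
    v = v - v.mean()
    return np.mean(v**4) / np.mean(v**2) ** 2   # Gaussian value: 3

# Keep the Fourier magnitudes (hence the power spectrum) but scramble
# the phases; only statistics beyond second order are altered.
X = np.fft.rfft(x)
random_phases = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, size=X.shape))
x_surrogate = np.fft.irfft(np.abs(X) * random_phases, n=x.size)

print(kurtosis(x))            # about 6: heavy-tailed
print(kurtosis(x_surrogate))  # about 3: Gaussian, despite an identical spectrum
```

Two signals with identical second-order statistics can thus have very different higher-order structure, which is exactly what a learning model sensitive only to correlations would miss.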

    Selected Abstracts

    On-line Publications

    Copyright Notice
    The documents distributed here have been provided as a means to ensure timely dissemination of scholarly and technical work on a noncommercial basis. Copyright and all rights therein are maintained by the authors or by other copyright holders, notwithstanding that they have offered their works here electronically. It is understood that all persons copying this information will adhere to the terms and constraints invoked by each author's copyright. These works may not be reposted without the explicit permission of the copyright holder.

    • O. Pasternak, N. Sochen, N. Intrator and Y. Assaf. Mapping Neuronal Fibers Through Partial Volume Voxels. Proceedings of the 14th meeting of the Organisation for Human Brain Mapping (HBM), Melbourne, Australia (2008).
    • O. Pasternak, N. Sochen, N. Intrator and Y. Assaf. Free Water Extraction from Diffusion Images. Proceedings of the 16th International Society for Magnetic Resonance in Medicine (ISMRM), Toronto, Canada (2008).
    • O. Pasternak, N. Sochen, N. Intrator and Y. Assaf. Variational Multiple-Tensor Fitting of Fiber-Ambiguous Diffusion-Weighted Magnetic Resonance Imaging Voxels. MRI 26(8), pp. 1133-1144 (2008).
    • G. Amit, J. Lessick, N. Gavriely and N. Intrator. Acoustic Indices of Cardiac Functionality. International Conference on Bio-inspired Systems and Signal Processing (BIOSIGNALS), Vol. 2, pp. 77-83 (2008).
    • K. Kim, N. Neretti and N. Intrator. MAP Fusion Method for Super-Resolution of Images with Locally Varying Pixel Quality. Int. J. of Imaging Systems and Technology 18(4), pp. 242-250 (2008).

    • A. Apartsin, L.N. Cooper and N. Intrator. Energy-Efficient Time-of-Flight Estimation in the Presence of Outliers: A Machine Learning Approach. IEEE J. of Selected Topics in Applied Earth Observations and Remote Sensing, 7(4), 1306-1313 (2014).
    • A. Apartsin, L.N. Cooper and N. Intrator. IEEE Transactions on Geoscience and Remote Sensing, 52(6), 3382-3392 (2014).
    • A. Apartsin, L.N. Cooper and N. Intrator. Time-of-Flight Estimation in the Presence of Outliers. Part II—Multiple Echo Processing. IEEE Transactions on Geoscience and Remote Sensing, 52(7), 3843-3850 (2014).
    • Y. Meir-Hasson, S. Kinreich, I. Podlipsky, T. Hendler and N. Intrator. An EEG Finger-Print of fMRI Deep Regional Activation. NeuroImage 102, 128-141 (2014).
    • S. Kinreich, I. Podlipsky, S. Jamshy, N. Intrator and T. Hendler. Neural Dynamics Necessary and Sufficient for Transition into Pre-Sleep Induced by EEG NeuroFeedback. NeuroImage (2014).
    • G. Castellani, N. Intrator and D. Remondini. Systems Biology and Brain Activity in Neuronal Pathways by Smart Device and Advanced Signal Processing. Frontiers in Genetics (2014).
    • M. Bleich-Cohen, S. Jamshy, H. Sharon, R. Weizman, N. Intrator, M. Poyurovsky and T. Hendler. Machine Learning fMRI Classifier Delineates Subgroups of Schizophrenia Patients. Schizophrenia Research, 160(1), 196-200 (2014). Available online.



    See also the list of Patents from the USPTO | Useful online resource | Mail to Nathan Intrator. Note: recent conference papers and book chapters appear in brown font.
    "Sunshine is the best disinfectant"  Louis Brandeis
    Copyright © 1997-2013 Nathan Intrator. All rights reserved.