Abstracts and Deadlines (Other conferences)

Check NNet Events   ARVO ICPR CVPR ICCV CNS NIPS   CompVision
Information Fusion


2001  

Feb 27-Mar 3 Capri May 01


2000  

Feb 27-Mar 3 COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE Jan 00
... NIPS Workshop Dec 00
Apr 4-7 Snowbird  Ski - Lodge   submission Jan 28 (Regis)
Apr 17-19 DARPA PI Meeting
Apr 24-28 SPIE   Jan 00
May ... ARVO   sub.ShimonEdelman Dec 1, 99
May 31 - Jun 2 Computational Finance, London Jan 6, 00
Jun 13-15 CVPR South Carolina
Jun 20-23 Palermo Comp Vision   Now
Jun 21-23 Multiple Classifier Systems   sub.ShimonCohen  Feb 1, 00
Jul 16-20 CNS Brugge, Belgium. Jan 26, 00
Jul 23-26 Systemics, Cybernetics, Informatics (Been asked to organize session) Dec 16, 99
Sep 3-8 Barcelona ICPR Hotel Barcelo Sants    sub.Inna  sub.Nicola  sub.Pedja  Dec 1, 99
Sep 11-13 Workshop on Frontiers in Handwriting Recognition IWFHR-7  Skeleton Jan 21, 00

Neural Coding  Optics 

1999
  • Jan 26, 99, Center for Neural Computation - Hebrew University
    Robust ANN modeling: Specific control of variance and bias 
  • Apr 6-9, 99 Snowbird: Ski - Lodge
  • Jul 18-22 99, CNS99 Pittsburgh, Deadline Jan 26, 99
  • Aug 23-25 99, NNSP
  • Sep 20-22 99, Document Analysis and Recognition  Bangalore, India
  • Oct 10-13 99, Neural Computation in Science and Technology
    Blurred Face Recognition via a Hybrid Network Architecture
  • From mine detection to seismic data analysis:
    Wavelet representations and Neural Network ensembles

    Mine detection in shallow water is difficult due to the high variability of the background -- the sea bottom. We have introduced an integrated mine detection system that receives image inputs from a moving sonar sensor.

    We shall discuss the essential blocks of this scheme: image denoising using wavelets, detection based on a multi-scale representation, and fusion of several detectors using neural networks.

    A similar scheme is being constructed for calcification detection in breast MRI, aimed at detecting early stages of cancer.
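
    The pipeline can be pictured with a minimal sketch (Python, assuming the PyWavelets package; the thresholding recipe, the energy-based detectors, and the logistic fusion unit below are our illustration, not the system's actual blocks):

        import numpy as np
        import pywt

        def denoise(signal, wavelet="db4", level=3):
            """Wavelet denoising by soft-thresholding the detail coefficients."""
            coeffs = pywt.wavedec(signal, wavelet, level=level)
            sigma = np.median(np.abs(coeffs[-1])) / 0.6745     # noise scale estimate
            thr = sigma * np.sqrt(2 * np.log(len(signal)))     # universal threshold
            coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft")
                                    for c in coeffs[1:]]
            return pywt.waverec(coeffs, wavelet)

        def detector_scores(signal, wavelet="db4", level=3):
            """Crude multi-scale detectors: energy at each decomposition scale."""
            coeffs = pywt.wavedec(signal, wavelet, level=level)
            return np.array([np.mean(c ** 2) for c in coeffs])

        def fuse(scores, w, b):
            """Fusion of the detector outputs by a single logistic ('neural') unit."""
            return 1.0 / (1.0 + np.exp(-(scores @ w + b)))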


    1998
  • Apr 7-10, 98 Snowbird (Passover 5-17)
  • Apr 13-17, 98 SPIE - Aerospace/Defense Sensing, Simulation, and Controls: Orlando  Image enhancement and feature extraction for pattern recognition (SESSION 8 1:40 to 3:20 pm Apr 14) Q. Q. Huynh, N. Intrator, G. J. Dobeck, N. Neretti [3392-39]
  • Apr 14-16, 98, Face Recog   Nara Japan
  • May 11, 98 Mixture of Dumb Experts: What can they do?
  • Jun 29-Jul 3, 98, Non-linear Time Series Technion
  • Jul 8-10, 98,   Time Series Competition http://www-svr.eng.cam.ac.uk/nnsp98
  • Jul 24-26, 98 ICML-98 Madison Wisc (Machine Learning) Sub. Deadline March 2.
  • Jul 26-30, 98 CNS98 Santa Barbara, Deadline Jan 26.
  • Aug 1, 98 Face Submission deadline
  • Aug 23-28, 98 Seismology Conference (Haifa) Deadline: Apr 15 (4 original copies of Abstracts/3 original copies of Papers should be submitted to the Assembly Secretariat and 1 copy to the first convener of the selected session)
  • Nov 30-Dec 5, 98 NIPS:
    Statistical Theories of Cortical Functions
    Combining supervised and unsupervised learning
  • Blurred Face Recognition via a Hybrid Network Architecture
  • Dec 4-6, 98 Health Policy Research
  • Dec 7-10, 98, Workshop in HongKong Yau Shu Wong. yaushu.wong@ualberta.ca

    1997
  • Jan 4-7, 97 AI and Stat Florida: Robust Interpretation of Neural-Network Models, Full paper
  • Jan 7-9, 97 EC-IS Vision Grenoble (Program) (Map) Natural goals for learning object representations
  • March 12-14, 97 AUDIO- AND VIDEO-BASED BIOMETRIC PERSON AUTHENTICATION Crans-Montana, Switzerland

  • Apr 1-4, 97 Snowbird: Feature extraction from Wavelet and Local Cosine Dictionaries
  • Apr 24-25, 97 DARPA meeting   Dr. A. Tsao,   Dr. Dennis Healy, Jr. Applied and Computational Mathematics Program DARPA/DSO
  • May, 97 Training grant to NIH
  • May, 97 Educational grant to NSF
  • May 6-8, 97 ONR Meeting NUARK

  • Jun 3-6, 97 The 1997 Canadian Workshop on Information Theory Yaushu Wong
  • Jul 14-18, 97 Third International Conference on Theoretical and Computational Acoustics, Holiday Inn, Newark, New Jersey Christopher DeAngelis, Kenneth Lima
    Non-linear Discrimination based on Time Frequency Dictionaries

  • Aug 7-10, 97 Cognitive Science, Stanford
    Learning as formation of low dimensional representation spaces  Talk
  • Aug 10-14, 97 ASA in Anaheim LA:
    On the Utility of Projection Pursuit to Classification Problems (Aug 13 8:35am)
  • Sep 11-14, 97 Statistics of natural images (Pamela Reinagel pam@v1.med.harvard.edu)
    Jiminy Peak Resort in Hancock, MA (Berkshires)
  • Oct 3-23, 97 Newton Inst   Programme  (Barlow) 
  • Oct 8, 97 Sheffield University   Many parameter models: Specific control of variance and bias
  • Oct 9, 97 Imperial College   Many parameter models: Specific control of variance and bias
  • Oct 27, 97 Symposium
  • Nov 28-30, 97 Similarity and Categorization Edinburgh, Scotland, What makes things similar? How should 'similarity' and/or 'categorisation' be defined? What principles govern similarity and/or categorisation? What is the relationship between similarity and categorisation?
    The maximum length for papers will be seven pages, conforming to the format laid down at http://www.dai.ed.ac.uk/misc/simcat/formats.html. (Deadline August 22nd 1997)
  • Dec 3-7, 97 NIPS:   Workshop   Improving Recognition via Reconstruction
  • Dec 15-17, 97 Computational Finance

  • Ensembles of "what+where" cells can support the representation of object structure

    S. EDELMAN, B. HILES, I. STAINVAS, N. INTRATOR

    PURPOSE: Neurons responding selectively both to shapes and to their locations within a large portion of the visual field have been found in inferotemporal, parietal, and prefrontal cortex areas in the monkey. We investigated the ability of a computational model of an ensemble of such "what+where" cells to make explicit the spatial structure of the stimulus, a difficult task commonly considered to require representations based on decomposition into generic parts and on "symbolic" binding.

    METHODS: Two separate simulations were conducted, one involving animal-like shapes, and the other objects consisting of a pair of primitive shapes such as a cube, a sphere, and a cylinder. The system in both cases contained four modules, each trained (1) to discriminate among multiple objects, (2) to tolerate translation within a receptive field roughly corresponding to one of the four quadrants of the image, and (3) to provide an estimate of the reliability of its output, through a separate autoassociation mechanism aimed at reconstructing the stimulus.

    RESULTS: The outputs of the four modules provided a consistent coarse coding of novel objects belonging to the familiar category, which was useful for translation-tolerant recognition (i.e., a system trained on lion, goat, and cheetah could be used to tell apart cow from horse). The reliability estimates carried information about category, allowing outputs for objects other than quadrupeds to be squelched. Most importantly, due to the spatial localization of the modules' receptive fields, the system could distinguish between different configurations of the same shapes (e.g., sphere over cube vs. cube over sphere) while noting the component-wise similarities. In a simulation of our earlier psychophysical experiments, this model exhibited "priming" by conjunction of shape and location, but not by shape alone, just as our subjects had.

    CONCLUSIONS: Our results indicate that both the contingent of shapes comprising an object and their spatial arrangement can be adequately represented by a system of "what+where" cells, without recourse to generic parts or symbolic binding.
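
    The reliability gating can be conveyed by a schematic sketch in Python (hypothetical code: the per-module classifier and autoencoder are stand-in callables, and the error threshold is invented for illustration):

        import numpy as np

        def coarse_code(image, modules, err_threshold=0.1):
            """Concatenate the class scores of four quadrant-tuned modules,
            squelching any module whose reconstruction error is too high."""
            h, w = image.shape[0] // 2, image.shape[1] // 2
            quadrants = [image[:h, :w], image[:h, w:], image[h:, :w], image[h:, w:]]
            code = []
            for quad, (classify, reconstruct) in zip(quadrants, modules):
                scores = classify(quad)                  # "what" within this "where"
                err = np.mean((reconstruct(quad) - quad) ** 2)
                if err > err_threshold:                  # unreliable output: squelch
                    scores = np.zeros_like(scores)
                code.append(scores)
            return np.concatenate(code)                  # shape-and-location code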


    Robust ANN modeling: Specific control of Variance and Bias

    Nathan Intrator
    CS Dept.   Tel-Aviv University

    The utility of drawing decisions and predictions from an ensemble of predictors has been widely recognized. However, training methods for optimal performance of ensembles of estimators are just emerging. Several issues will be discussed in the context of controlling the variance and bias portions of the error: the effect of noise injection vs. smoothing, the importance of stabilizing ensembles, and specific bias constraints that enhance the internal representation of network models.


    1996

  • Jul 14-17, 96 CNS Boston
  • Nov 3-4, 96 Wavelet South Carolina
  • Dec 3-7, 96 NIPS

  • Deadlines
  • Aug 96 Book chapter on stat and nn to Jim Kay
  • Dec 15, 96 Special issue, wavelets
  • March 20, 1997 Book of tricks
  • May 97 Trends in Cog Sci with Edelman Competitive Learning
  • Aug 97 Book chapter on Projection Pursuit and Neural Networks C. R. Rao

  • 2/14/96 Paper on Ensembles of nets to Connection Science (A. Sharkey)

  • 2/22/96 Paper on transfer to Connection Science (Lorien Pratt)
  • 3/17-20/96 Vision workshop ECIS Zichron
    Training methodologies for model-free object recognition
  • 4/9-12/96 Snowbird Ski - Lodge
    Minimal description length neurons and coincidence detection
  • 4/17/96 MIT CBCL Talk
    Recognition of partially occluded and distorted images
  • 4/24-26/96 Montreal
    General methods for training ensembles of regressors (Info)
  • 5/2/96 Brown, Neuroscience
    Desired information properties of cortical coding
  • 5/7/96 Brown Engineering vision talk at noon.
  • 6/18-21/96 IEEE Signal Proc TIME-FREQUENCY and TIME-SCALE ANALYSIS
    Classification using Feature Extraction Based on Time-Frequency Analysis and BCM Theory

  • 6/96 Machine learning paper to Thrun
  • 6/96 Book chapter with Edelman
  • July 12, 96 (Deadline) Up to one page, to be sent to Baddeley and to itb2@psy.ox.ac.uk, for the 9/20-21/96 Info Theory workshop (Baddeley)

  • 7/10-12/96 ONR Contractors meeting at Boston (Howard Eichenbaum)
  • 7/13-15/96 CNS @ Boston (Spike Data workshop Laubach)


  • 9/First Week/96 ONR Contractors meeting in Los Angeles (Dick Lau)
  • 9/23-27/96 ICONIP96 Hong Kong

  • Nov 17-20, 96 Orna, GSA Washington

  • Dec 1, 96
    Special Issue of IEEE Transactions on Signal Processing:
    Applications of Neural Networks to Signal Processing
    Expected Publication Date: November 1997 Issue
    Submission Deadline: December 1, 1996
    Guest Editors: A. G. Constantinides, Simon Haykin, Yu Hen Hu,
    Jenq-Neng Hwang, Shigeru Katagiri, Sun-Yuan Kung, T. A. Poggio
    Prospective authors are encouraged to SUBMIT MANUSCRIPTS BY 12/1/96 to:
    Professor Yu-Hen Hu   E-mail: hu@engr.wisc.edu
    Univ. of Wisconsin - Madison   Phone: (608) 262-6724
    Dept. of Electrical and Computer Engineering   Fax: (608) 262-1267
    1415 Engineering Drive, Madison, WI 53706-1691
    On the cover letter, indicate that the manuscript is submitted to the special issue on neural networks for signal processing. All manuscripts should conform to the submission guidelines detailed in the "Information for Authors" printed in each issue of the IEEE Transactions on Signal Processing. Specifically, the length of each manuscript should not exceed 30 double-spaced pages.
    SCHEDULE
    Manuscript received by: December 1, 1996
    Completion of initial review: March 31, 1997
    Final manuscript received by: June 30, 1997
    
  • Dec 1, 96 (Deadline) 6/9-11/97 Vision Finland; camera ready Mar 31, 97
    1995
  • 4/95 Snowbird
  • Paper to the Paris conference on Wavelets
  • 12/1/95 NIPS 95: Object Features Workshop

  • Shimshoni NNSP96 (Sep Japan), NICROSP'96 (August Venice), Oregon (4-8 Aug)
  • Aug 4-5, 96 INTEGRATING MULTIPLE LEARNED MODELS FOR IMPROVING AND SCALING MACHINE LEARNING ALGORITHMS (with Shimsh)
  • 9/28/96 Japan with shimsh

  • Mixture of Dumb Experts: What can they do?

    Nathan Intrator
    Tel-Aviv University

    Bootstrap samples with noise are shown to be an effective smoothness and capacity control technique for training feed-forward networks and for other statistical methods such as generalized additive models. It is shown that the noisy bootstrap performs best in conjunction with regularization methods and ensemble averaging. The two-spiral problem, a highly non-linear, noise-free dataset, is used to demonstrate these findings. The combination of noisy bootstrap and ensemble averaging is also shown to be useful for generalized additive modeling.
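
    The procedure reduces to a few lines; the sketch below is a plain restatement in Python (train_fn and the noise level are placeholders, not the exact experimental code):

        import numpy as np

        def noisy_bootstrap_ensemble(X, y, train_fn, n_models=10, noise_std=0.1):
            """Train each member on a bootstrap resample with added input noise."""
            rng = np.random.default_rng(0)
            models = []
            for _ in range(n_models):
                idx = rng.integers(0, len(X), size=len(X))   # bootstrap resample
                X_noisy = X[idx] + rng.normal(0.0, noise_std, X[idx].shape)
                models.append(train_fn(X_noisy, y[idx]))
            return models

        def ensemble_predict(models, X):
            """Ensemble averaging of the member predictions."""
            return np.mean([model(X) for model in models], axis=0)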


    Natural goals for learning object representations

    Nathan Intrator

    It is conceivable that the internal representation of objects is strongly related to the tasks usually performed with these objects and to the more frequent views from which the objects are seen. However, it is not entirely clear what makes a useful representation and, in particular, what kind of goals can be used to facilitate learning. More specifically, one may ask whether a good internal representation requires learning a discrimination task or a generalization task, or whether a reconstruction goal is sufficient. We argue that building a good internal representation requires a combination of discrimination and generalization tasks. Furthermore, neither task can be "too specific"; namely, there cannot be too many class labels, as the internal representation is very limited in its dimensionality and complexity.


    Smoothing Error Surfaces

    Nathan Intrator

    This work discusses the optimization problem in high-dimensional parameter space. We emphasize the difference between various constraints during parameter estimation, and argue that while smoothness constraints on the predictor may lead to a better predictor in the sense of generalization, they do not simplify the search for such a predictor. Thus, a good solution can be found only if one gets very close to the vicinity of the predictor, where the error surface becomes convex.

    This observation motivates smoothing the error surface during training, in an attempt to overcome local minima and reach the vicinity of a good (hopefully global) minimum. Such smoothing has to be done locally in order to be computationally efficient, and has to be reduced gradually in order not to miss the minimum.

    Key words: Synaptic Noise, Deterministic Annealing, Optimization
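
    One concrete way to realize such smoothing, sketched under our own assumptions (Gaussian smoothing of the loss approximated by Monte-Carlo averaging over parameter noise, with a geometric annealing schedule):

        import numpy as np

        def smoothed_grad(loss_grad, w, sigma, n_samples=20, rng=None):
            """Gradient of the loss convolved with N(0, sigma^2 I), estimated by
            averaging the gradient at randomly perturbed parameter vectors."""
            rng = rng or np.random.default_rng(0)
            return np.mean([loss_grad(w + rng.normal(0.0, sigma, w.shape))
                            for _ in range(n_samples)], axis=0)

        def annealed_descent(loss_grad, w0, sigma0=1.0, decay=0.95, lr=0.05, steps=200):
            """Descend on a heavily smoothed surface first, then sharpen it
            gradually so that the good minimum is not skipped over."""
            w, sigma = w0.copy(), sigma0
            for _ in range(steps):
                w = w - lr * smoothed_grad(loss_grad, w, sigma)
                sigma *= decay
            return w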


    Neuronal mechanisms for feature detection

    Nathan Intrator

    Barlow's theory regarding suspicious-coincidence detectors calls for a neuronal mechanism for prior probability estimation. Such a mechanism will be described, including some implications for sensory representation.


    Minimal Entropy Neurons and Coincidence Detection

    Nathan Intrator

    Barlow's seminal work on minimal entropy codes and unsupervised learning is reiterated. In particular, the need to transmit the probability of events is put in a practical neuronal framework for detecting suspicious events. A variant of the BCM learning rule (Intrator and Cooper, 1992) is presented, together with some mathematical results suggesting optimal minimal entropy coding. The resulting unsupervised learning rule may be useful for continuous Helmholtz machines (Dayan et al., 1995), the compositional vision proposal (Geman and Bienenstock, 1995), etc.

    A comparison with recent suggestions for high-kurtosis detection will be presented, as well as some applications.
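
    For concreteness, a minimal sketch of a BCM-style update with a sliding modification threshold (the quadratic form phi(c, theta) = c(c - theta) and the moving-average threshold follow Intrator and Cooper, 1992; the rate constants are invented):

        import numpy as np

        def bcm_step(w, x, theta, eta=0.01, tau=100.0):
            """One BCM update: Hebbian when activity exceeds the threshold,
            anti-Hebbian below it; theta tracks E[c^2] as a moving average."""
            c = np.dot(w, x)                        # postsynaptic activity
            phi = c * (c - theta)                   # modification function
            w = w + eta * phi * x
            theta = theta + (c ** 2 - theta) / tau  # sliding threshold
            return w, theta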


    Feature extraction from Wavelet and Local Cosine Dictionaries

    Nathan Intrator

    Wavelet dictionaries can provide a very efficient signal representation, leading to good compression. Recently, wavelet bases and wavelet packets have been shown to be useful for classification as well (Coifman and Saito, 1994; Buckheit and Donoho, 1995). Issues of dimensionality reduction and feature extraction are less clear when the task is classification rather than compression. In particular, every wavelet basis has the property that its coordinates coincide with the principal components, namely, the covariance matrix in every such basis is diagonal, suggesting that linear feature extraction may not be effective. We introduce nonlinear feature extraction from wavelet packets and two new methods for choosing a basis for discrimination. We then compare these methods with the local discriminant basis of Coifman and Saito (1994) and with the discriminant analysis methods of Buckheit and Donoho (1995). Applications to acoustic data and images will be presented.

    Key words: Discrimination, Best Basis, Wavelet Basis Functions, Nonlinear Feature Extraction
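
    A rough sketch of such nonlinear (energy) features, in Python with the PyWavelets package; the Fisher-style node scoring below stands in for the basis-selection criteria and is our own simplification:

        import numpy as np
        import pywt

        def packet_energy_features(signal, wavelet="db4", level=3):
            """Energies of the terminal wavelet-packet nodes: a nonlinear
            feature vector, one coordinate per node."""
            wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
            return np.array([np.sum(node.data ** 2)
                             for node in wp.get_level(level, order="freq")])

        def node_discriminability(feats_a, feats_b):
            """Score each node by a Fisher-like ratio between two classes;
            the top-scoring nodes form a basis chosen for discrimination."""
            mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
            spread = feats_a.var(axis=0) + feats_b.var(axis=0) + 1e-12
            return (mu_a - mu_b) ** 2 / spread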


    General methods for training ensembles of regressors

    Nathan Intrator
    Tel-Aviv University and Brown University
    http://www.physics.brown.edu/people/nin

    The utility of drawing decisions and predictions from an ensemble of predictors has been widely recognized. However, training methods for optimal performance of ensembles of estimators are just emerging. Several issues will be discussed: the effect of noise injection vs. the effect of smoothing, the importance of stabilizing the ensemble predictors, optimal stopping rules for ensembles, and ways to alleviate the effect of error correlation between the estimators on ensemble performance. Some applications and the specific details of neural network implementations will be described.


    Recognition of partially occluded and distorted images

    Nathan Intrator
    Tel-Aviv University and Brown University
    http://www.physics.brown.edu/people/nin

    A novel feed-forward architecture for recognition of partially occluded, distorted or blurred images will be introduced.

    Some results on face recognition will be presented.


    On the Utility of Projection Pursuit to Classification Problems

    Nathan Intrator
    Tel-Aviv University and Brown University
    http://www.physics.brown.edu/people/nin

    Projection pursuit can be used in a data preprocessing stage for creating a reduced data representation, or projection indices can be used as penalty terms imposing bias on the classification (density estimation) scheme. In both cases it is important to be able to find several projections concurrently and efficiently. Various implementations using (neural) network architectures will be reviewed and compared with more classical projection pursuit methods.

    The utility of Projection Pursuit for classification and discrimination in very high dimensional spaces will be demonstrated on some applications.
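
    As an illustration of finding several projections, a toy Python sketch that ascends a kurtosis-based projection index with deflation (the choice of index, the step sizes, and the finite-difference gradient are our assumptions, not a specific method from the talk):

        import numpy as np

        def kurtosis_index(w, X):
            """Projection index: excess kurtosis of the projection X @ w."""
            z = X @ w
            z = (z - z.mean()) / (z.std() + 1e-12)
            return np.mean(z ** 4) - 3.0

        def projection_pursuit(X, n_dirs=3, steps=200, lr=0.01):
            """Find several high-kurtosis directions by gradient ascent,
            deflating against earlier directions to keep them distinct."""
            rng = np.random.default_rng(0)
            X = X - X.mean(axis=0)
            dirs = []
            for _ in range(n_dirs):
                w = rng.normal(size=X.shape[1])
                w /= np.linalg.norm(w)
                for _ in range(steps):
                    base = kurtosis_index(w, X)
                    grad = np.array([(kurtosis_index(w + 1e-5 * e, X) - base) / 1e-5
                                     for e in np.eye(len(w))])
                    w += lr * grad
                    for d in dirs:                   # deflation: stay orthogonal
                        w -= (w @ d) * d
                    w /= np.linalg.norm(w)
                dirs.append(w)
            return np.array(dirs)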


    Robust Interpretation of Neural-Network Models

    Orna Intrator and Nathan Intrator

    Artificial Neural Networks seem very promising for regression and classification, especially for large covariate spaces. These methods represent a non-linear function as a composition of low-dimensional ridge functions, and therefore appear to be less sensitive to the dimensionality of the covariate space. However, due to the non-uniqueness of the global minimum and the existence of (possibly) many local minima, the model revealed by the network is unstable. We introduce a method for interpreting neural network results which uses novel robustification techniques. This results in a robust interpretation of the model employed by the network. Simulated data from known models are used to demonstrate the interpretability results and to demonstrate the effects of different regularization methods on the robustness of the model. Graphical methods are introduced to present the interpretation results. We further demonstrate how interactions between covariates can be revealed. From this study we conclude that the interpretation method works well, but that NN models may sometimes be misinterpreted, especially if the approximations to the true model are less robust.
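
    The flavor of the robustification can be conveyed by a hypothetical sketch (partial-dependence-style effect curves summarized across bootstrap refits; the function names and the median/spread summary are our own illustration, not the paper's method):

        import numpy as np

        def covariate_effect(model, X, j, grid):
            """Average predicted response as covariate j sweeps a grid."""
            effects = []
            for v in grid:
                Xv = X.copy()
                Xv[:, j] = v
                effects.append(np.mean(model(Xv)))
            return np.array(effects)

        def robust_effect(train_fn, X, y, j, grid, n_reps=20):
            """Refit on bootstrap resamples (random restarts live inside
            train_fn); report the median effect curve and its spread."""
            rng = np.random.default_rng(0)
            curves = []
            for _ in range(n_reps):
                idx = rng.integers(0, len(X), size=len(X))
                curves.append(covariate_effect(train_fn(X[idx], y[idx]), X, j, grid))
            curves = np.asarray(curves)
            return np.median(curves, axis=0), curves.std(axis=0)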


    Non-linear Discrimination based on Time Frequency Dictionaries

    Nathan Intrator     Quyen Huynh     Leon N Cooper

    Discrimination between mine-like targets based on acoustic back-scattered data is of great importance to the Navy. We exploit the properties of representations based on time-frequency dictionaries (wavelet packets, local cosine bases, matching pursuit and basis pursuit) in connection with discrimination based on these acoustic signals. In particular, we study linear and nonlinear dimensionality reduction methods, such as the BCM neural network learning theory (Bienenstock, Cooper and Munro, 1982), applied to these representations, and their applicability to robust classification.


    Improving Recognition via Reconstruction

    Inna Stainvas   Nathan Intrator   Amiram Moshaiov

    Learning a many-parameter model is generally an under-constrained problem that requires additional regularization. We study reconstruction as well as several information theoretic constraints and show their relevance to recognition of corrupted inputs.

    Results are demonstrated on a well-known face recognition task under various resolutions and image degradations.


    Many parameter models: Specific control of variance and bias

    Nathan Intrator

    Real-world classification and regression problems lead to the estimation of many-parameter models. This is generally an under-constrained problem that requires various regularizations. We discuss several techniques to control the variance and bias portions of the error separately, and demonstrate their usefulness on synthetic and real-world problems.


    The effect of unsupervised constraints on many parameter models: Specific control of variance and bias

    Nathan Intrator

    Real-world classification and regression problems lead to the estimation of many-parameter models. This is generally an under-constrained problem that requires various regularizations. We discuss several techniques to control the variance and bias portions of the error separately, and demonstrate their usefulness on synthetic and real-world problems.


    Blurred Face Recognition via a Hybrid Network Architecture

    Inna Stainvas   Nathan Intrator   Amiram Moshaiov

    We demonstrate the effectiveness of a combination of supervised and unsupervised (reconstruction) training on a realistic high dimensional recognition task. We introduce an ensemble of hybrid networks where each optimizes concurrently reconstruction and recognition tasks with a different regularization parameter that controls the effect of reconstruction vs. recognition during training and testing.
    A network interpretation via minimum description length (MDL) [1,2] is given, where a scaled reconstruction error appears as a model cost and a scaled recognition error as an error cost. Under a blurred-image recognition task, the network performs better when it is trained to reconstruct the original (unblurred) images. From a Bayesian viewpoint, the hybrid network is trained to maximize the joint probability of the original or deblurred inputs and their class labels, given the observed image. The error scale factors are interpreted as hyper-parameters, and an additional integration over them is approximated by ensemble averaging. This constrained ensemble is compared with various unconstrained ensembles to gain more insight into the effect of the reconstruction constraints and of the integration over the regularization parameter.
    Results on two facial data sets [3,4] show a significant improvement in classification performance for blurred images, and are further enhanced when state-of-the-art (deblurring) techniques are also incorporated. The hybrid cost and the averaging over the regularization parameter are sketched in code after the references.

    References

    [1] Rissanen, J. (1985). Minimum description length principle. Encyclopedia of Statistical Sciences 5:523-527.
    [2] Zemel, R. and Hinton G. (1995). Developing Population Codes by Minimizing Description length. Neural Computation 7(3):549-564.
    [3] Turk, M. and Pentland, A. (1991). Eigenfaces for recognition. Journal of Cognitive Neuroscience 3:71-86.
    [4] Tankus, A., Yeshurun, H. and Intrator, N. (1997). Face detection by direct convexity estimation. Pattern Recognition Letters 18(9):913-922.
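
    A minimal sketch of the hybrid cost and of the averaging over the regularization parameter (Python; the cross-entropy and squared-error choices and the helper names are our assumptions, not the paper's exact formulation):

        import numpy as np

        def hybrid_loss(class_probs, labels_onehot, recon, targets, lam):
            """Scaled recognition error plus scaled reconstruction error: in the
            MDL reading, an error cost plus a model cost, traded off by lam."""
            recog = -np.mean(np.sum(labels_onehot * np.log(class_probs + 1e-12),
                                    axis=1))
            recon_err = np.mean((recon - targets) ** 2)  # targets: original images
            return (1.0 - lam) * recog + lam * recon_err

        def ensemble_over_lambda(networks, x):
            """Approximate the integration over the hyper-parameter lam by
            averaging networks trained with different lam values."""
            return np.mean([net(x) for net in networks], axis=0)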

