Evolving Minimax Functions via Stochastic Convergence Theory – We propose a general method for estimating the performance of a linear classifier using a single weighted, random-sample-based linear ensemble estimator. Our method has three advantages: (1) it is equivalent to a weighted Gaussian process; (2) it is robust to non-linearity in the data; and (3) it estimates the expected probability of learning a given class over the training set. We demonstrate these properties in a variety of experiments in which the expected probability of learning a given class over the training set is highly predictive, and in which the prediction error depends on the classifier's degree of belief, which differs between the estimator's predictions and the estimators themselves. We illustrate several such scenarios in a single graphical model.
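The abstract above does not specify the estimator, so the following is only a minimal sketch of the general idea of a weighted, random-sample-based ensemble estimate of a linear classifier's accuracy. The data, the classifier weights, and the size-proportional weighting scheme are all illustrative assumptions, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic, roughly linearly separable data (illustrative only).
X = rng.normal(size=(200, 2))
w_true = np.array([1.0, -1.0])
y = (X @ w_true > 0).astype(int)

# A fixed linear classifier whose performance we want to estimate.
w = np.array([0.9, -1.1])

def sample_accuracy(X, y, w, n, rng):
    """Accuracy of the linear classifier on one random subsample."""
    idx = rng.choice(len(X), size=n, replace=True)
    pred = (X[idx] @ w > 0).astype(int)
    return np.mean(pred == y[idx])

# Weighted ensemble of subsample estimates; larger subsamples get
# proportionally more weight (an assumed weighting, for illustration).
sizes = [20, 50, 100]
weights = np.array(sizes, dtype=float)
weights /= weights.sum()
estimates = np.array([sample_accuracy(X, y, w, n, rng) for n in sizes])
expected_accuracy = float(weights @ estimates)
print(f"estimated expected accuracy: {expected_accuracy:.3f}")
```

Averaging over subsamples of several sizes is one simple way to turn individual random-sample estimates into a single ensemble estimate; the weighting could equally be uniform or variance-based.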

We present a new approach to extracting semantic representations of images in a Bayesian network over a large number of images. This approach, termed a cross-covariant network (ICNN), is a fast and flexible method for image segmentation. A thorough evaluation on several benchmark datasets shows that the ICNN outperforms previous approaches by a significant margin and is a strong candidate for future large-scale applications.
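The abstract does not say which metric the benchmark evaluation uses, so as a hedged sketch, segmentation outputs are commonly scored with mean intersection-over-union (mIoU); the tiny masks below are invented for illustration.

```python
import numpy as np

def iou(pred, target, cls):
    """Intersection-over-union for one class between two label masks."""
    p, t = (pred == cls), (target == cls)
    union = np.logical_or(p, t).sum()
    return np.logical_and(p, t).sum() / union if union else 1.0

def mean_iou(pred, target, n_classes):
    """Mean IoU across all classes."""
    return float(np.mean([iou(pred, target, c) for c in range(n_classes)]))

# Tiny illustrative 2-class masks: predicted vs. ground-truth labels.
pred = np.array([[0, 0, 1],
                 [1, 1, 1]])
target = np.array([[0, 1, 1],
                   [1, 1, 1]])
print(mean_iou(pred, target, 2))
```

Here class 0 scores 1/2 and class 1 scores 4/5, so the mean IoU is 0.65; comparing this score across methods on the same benchmark is the usual way a "significant margin" is established.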

Efficient Large Scale Supervised Classification via Randomized Convex Optimization

Using Artificial Ant Colonies for Automated Ant Colonies


3D Scanning Network for Segmentation of Medical Images

Deep Semantic Ranking over the Manifold of Pedestrians for Unsupervised Image Segmentation