# Efficient Online Convex Optimization with a Non-Convex Cost Function

Convolutional neural networks (CNNs) have shown that they can achieve good predictive performance across different tasks. In this paper, we propose a novel algorithm for non-convex learning in CNNs. We build the CNN to efficiently learn the global sparse structure between two images in an online fashion, and then compute the loss function along with the underlying non-convex cost function in the CNN. The network can be trained in any state that preserves the sparsity of the image, which makes it suitable for many tasks. Our main contributions are: 1) we exploit the sparsity in CNNs to learn the loss function in a non-convex fashion; 2) we develop a general-domain CNN that learns the loss function efficiently; 3) we conduct extensive experiments showing that our CNN dramatically outperforms state-of-the-art CNN-based systems when considering the sparse representation of images.
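The abstract gives no implementation details, but the general idea of keeping a parameter vector sparse while minimizing a non-convex cost in an online fashion can be sketched with a standard proximal-gradient update. Everything below (the tanh-based loss, the `soft_threshold` operator, and all parameter values) is an illustrative assumption, not the paper's actual method.

```python
import math
import random

def soft_threshold(w, t):
    """Proximal operator of the L1 norm: shrinks a weight toward zero.

    Soft-thresholding is the standard way to keep learned parameters
    sparse; it stands in here for the paper's unspecified mechanism.
    """
    if w > t:
        return w - t
    if w < -t:
        return w + t
    return 0.0

def online_sparse_step(w, x, y, lr=0.05, lam=0.02):
    """One online update on a non-convex cost (squared error through tanh)."""
    z = sum(wi * xi for wi, xi in zip(w, x))
    pred = math.tanh(z)
    grad_z = (pred - y) * (1.0 - pred * pred)  # chain rule through tanh
    # Gradient step followed by the L1 proximal step (soft-thresholding).
    return [soft_threshold(wi - lr * grad_z * xi, lr * lam)
            for wi, xi in zip(w, x)]

# Toy online run: examples arrive one at a time, as in the online setting.
rng = random.Random(0)
w = [0.0] * 8
for _ in range(2000):
    x = [rng.uniform(-1.0, 1.0) for _ in range(8)]
    y = math.tanh(x[0] - x[1])  # only two input coordinates matter
    w = online_sparse_step(w, x, y)
```

Because the proximal step maps sufficiently small weights to exactly zero, the iterate stays sparse throughout training rather than only at convergence, which is what makes this family of updates natural for the online setting described above.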

Semi-supervised learning systems employ the nonlinearity of the inputs to train the network to process more observations per second. However, the optimal value of these representations as a function of the training set is generally unknown. We propose a non-linear learning rule to estimate the true values of the hidden representations, and show that this strategy, which we call learning the value of the noise through the nonlinearity, is accurate enough to achieve good results.
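The abstract does not specify the learning rule, but the underlying idea of recovering a hidden value that is observed only through a nonlinearity plus noise can be illustrated with a toy estimator. The tanh nonlinearity, the noise level, and the `estimate_hidden` helper below are all hypothetical stand-ins, not the proposed rule.

```python
import math
import random

def estimate_hidden(observations):
    """Estimate a true pre-activation value from noisy tanh outputs.

    Each observation is clipped so atanh stays defined, inverted through
    the nonlinearity, and the inverted values are averaged. This is an
    illustrative stand-in for a learned non-linear estimation rule.
    """
    clipped = [max(-0.999, min(0.999, o)) for o in observations]
    return sum(math.atanh(c) for c in clipped) / len(clipped)

rng = random.Random(0)
true_z = 0.5  # hidden value to recover
obs = [math.tanh(true_z) + rng.gauss(0.0, 0.05) for _ in range(1000)]
z_hat = estimate_hidden(obs)
```

With enough noisy observations, averaging through the inverted nonlinearity recovers the hidden value closely, which is the flavor of accuracy claim the abstract makes for its learned rule.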

Efficient Learning for Convex Programming via Randomization

Towards Optimal Vehicle Detection and Steering


Efficient Geodesic Regularization on Graphs and Applications to Deep Learning Neural Networks

Tuning for Semi-Supervised Learning via Clustering and Sparse Lifting