Neural Networks

Archived Papers: 1,047
Elsevier
Deep neural network for traffic sign recognition systems: An analysis of spatial transformers and stochastic optimisation methods
Álvaro Arcos-García; Juan A. Álvarez-García; Luis M. Soria-Morillo;
Abstract: This paper presents a Deep Learning approach for traffic sign recognition systems. Several classification experiments are conducted on publicly available traffic sign datasets from Germany and Belgium using a Deep Neural Network comprising convolutional layers and Spatial Transformer Networks. These experiments are designed to measure the impact of diverse factors, with the goal of designing a Convolutional Neural Network that improves the state of the art in traffic sign classification. First, different adaptive and non-adaptive stochastic gradient descent optimisation algorithms, such as SGD, SGD-Nesterov, RMSprop and Adam, are evaluated. Subsequently, multiple combinations of Spatial Transformer Networks placed at distinct positions within the main neural network are analysed. The proposed Convolutional Neural Network achieves an accuracy of 99.71% on the German Traffic Sign Recognition Benchmark, outperforming previous state-of-the-art methods while also being more efficient in terms of memory requirements.
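A minimal PyTorch sketch of the spatial-transformer idea described above. Layer sizes, the 32x32 input resolution and the localisation network are illustrative assumptions, not the architecture reported in the paper; only the 43-class GTSRB output and the list of compared optimisers come from the abstract.

```python
# Minimal sketch: a Spatial Transformer front end before a small CNN classifier.
# Layer sizes are illustrative, not the architecture reported in the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class STN(nn.Module):
    """Predicts a 2x3 affine matrix and resamples the input accordingly."""
    def __init__(self, in_ch):
        super().__init__()
        self.loc = nn.Sequential(
            nn.Conv2d(in_ch, 8, 7), nn.MaxPool2d(2), nn.ReLU(),
            nn.Conv2d(8, 10, 5), nn.MaxPool2d(2), nn.ReLU(),
        )
        self.fc = nn.Sequential(nn.Linear(10 * 4 * 4, 32), nn.ReLU(), nn.Linear(32, 6))
        # Initialise to the identity transform so training starts from "do nothing".
        self.fc[-1].weight.data.zero_()
        self.fc[-1].bias.data.copy_(torch.tensor([1, 0, 0, 0, 1, 0], dtype=torch.float))

    def forward(self, x):
        theta = self.fc(self.loc(x).flatten(1)).view(-1, 2, 3)
        grid = F.affine_grid(theta, x.size(), align_corners=False)
        return F.grid_sample(x, grid, align_corners=False)

class TrafficSignNet(nn.Module):
    def __init__(self, n_classes=43):              # GTSRB has 43 sign classes
        super().__init__()
        self.stn = STN(3)
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(64 * 8 * 8, n_classes)

    def forward(self, x):                           # x: (N, 3, 32, 32)
        x = self.features(self.stn(x))
        return self.classifier(x.flatten(1))

model = TrafficSignNet()
# The optimisers compared in the paper can be swapped in one line:
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
# opt = torch.optim.SGD(model.parameters(), lr=1e-2, momentum=0.9, nesterov=True)
# opt = torch.optim.RMSprop(model.parameters(), lr=1e-3)
```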
Design of nonlinear optimal control for chaotic synchronization of coupled stochastic neural networks via Hamilton–Jacobi–Bellman equation
Ziqian Liu;
Abstract: This paper presents a new theoretical design of nonlinear optimal control for achieving chaotic synchronization of coupled stochastic neural networks. To obtain an optimal control law, the proposed approach is developed rigorously using the Hamilton–Jacobi–Bellman (HJB) equation, the Lyapunov technique, and inverse optimality, and hence guarantees that the chaotic drive network synchronizes with the chaotic response network in the presence of uncertain noise signals. Furthermore, the paper provides four numerical examples to demonstrate the effectiveness of the proposed approach.
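A minimal numerical sketch of the drive–response setting only: two copies of a small recurrent network, noise on the response, and a plain linear error-feedback controller. The network weights, gains and noise level are illustrative assumptions, and the controller is a generic stand-in, not the HJB-derived optimal law designed in the paper.

```python
# Drive-response synchronization under simple linear error feedback
# (NOT the paper's HJB-derived optimal control law).
import numpy as np

rng = np.random.default_rng(0)
W = np.array([[ 2.0, -1.2,  0.0],
              [ 1.8,  1.7,  1.1],
              [-5.0,  0.0,  1.0]])        # illustrative recurrent weights

def f(x):
    return -x + W @ np.tanh(x)            # Hopfield-type network dynamics

dt, T, k, sigma = 1e-3, 20.0, 10.0, 0.05
x = np.array([0.1, 0.2, -0.1])            # drive state
y = np.array([1.0, -1.0, 0.5])            # response state
errors = []
for _ in range(int(T / dt)):
    e = y - x
    u = -k * e                             # feedback control applied to the response
    noise = sigma * rng.standard_normal(3) * np.sqrt(dt)   # Euler-Maruyama noise term
    x = x + f(x) * dt
    y = y + (f(y) + u) * dt + noise
    errors.append(np.linalg.norm(e))

print(f"initial error {errors[0]:.3f} -> final error {errors[-1]:.3f}")
```

With the gain k larger than the Lipschitz constant of the network dynamics, the synchronization error decays to a small noise-driven floor, which is the qualitative behaviour the paper's optimal controller also guarantees.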
A novel type of activation function in artificial neural networks: Trained activation function
Ömer Faruk Ertuğrul;
Abstract: Determining the optimal activation function in artificial neural networks is an important issue because it is directly linked to the achievable success rates. Unfortunately, there is no way to determine it analytically; the optimal activation function is generally found by trial and error or by tuning. This paper presents a simpler and more effective approach to determining the optimal activation function. In this approach, which can be called the trained activation function, an activation function is trained for each particular neuron by linear regression. This training is performed on a dataset consisting of the summed inputs of each neuron in the hidden layer and the desired outputs. In this way, a different activation function is generated for each neuron in the hidden layer. The approach was employed in a random weight artificial neural network (RWN) and validated on 50 benchmark datasets. The success rates achieved by the RWN with trained activation functions were higher than those obtained with traditional activation functions. The results show that the proposed approach is a simple, successful and effective way to determine the optimal activation function, instead of relying on trials or tuning, in both randomized single-layer and multilayer ANNs.
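A numpy sketch of the general idea on a toy regression task: fixed random input weights, a per-neuron activation fitted by linear regression on that neuron's summed input, and a least-squares output layer as in random-weight networks. The polynomial basis, the toy data and all sizes are assumptions; the abstract only specifies that each activation is trained by linear regression on the neuron's summed inputs and the desired outputs.

```python
# Sketch of a random-weight network whose per-neuron activations are fitted by
# linear regression (polynomial basis per neuron is an assumption).
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(200, 5))                           # toy inputs
y = np.sin(X.sum(axis=1)) + 0.1 * rng.standard_normal(200)      # toy targets

n_hidden, degree = 20, 3
W_in = rng.standard_normal((X.shape[1], n_hidden))              # fixed random input weights
b_in = rng.standard_normal(n_hidden)
Z = X @ W_in + b_in                                             # pre-activations, (N, n_hidden)

# Train one activation per hidden neuron: regress the target on powers of z_j.
H = np.empty_like(Z)
for j in range(n_hidden):
    Phi = np.vander(Z[:, j], degree + 1)                        # columns [z^3, z^2, z, 1]
    c, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    H[:, j] = Phi @ c                                           # neuron j's trained activation output

# Linear output layer fitted by least squares, as in random-weight networks.
W_out, *_ = np.linalg.lstsq(H, y, rcond=None)
pred = H @ W_out
print("train RMSE:", np.sqrt(np.mean((pred - y) ** 2)))
```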
A new approach to detect the coding rule of the cortical spiking model in the information transmission
Soheila Nazari; Karim Faez; Mahyar Janahmadi;
Abstract: The role of local field potential (LFP) fluctuations in encoding the sensory information received by the nervous system remains largely unknown. Likewise, how the translation rules between the structure of sensory stimuli and cortical oscillations can be transferred to bio-inspired artificial neural networks operating at the efficiency of the nervous system is still an open puzzle. To move towards this goal, we used computational neuroscience tools to simulate a large-scale network of excitatory and inhibitory spiking neurons with synaptic connections consisting of AMPA and GABA currents, as a model of cortical populations. The spiking network was equipped with spike-based unsupervised weight optimization, based on the dynamical behaviour of the excitatory (AMPA) and inhibitory (GABA) synapses, using Spike Timing Dependent Plasticity (STDP) on the MNIST benchmark, and we examined how the LFP generated by the network contains information about the input patterns. The main result of this article is that the prolate spheroidal wave function (PSWF) coefficients computed from the input pattern under a mean square error (MSE) criterion are equal to those computed from the LFP power spectrum under a maximum correntropy criterion (MCC). More importantly, 82.3% of the PSWF coefficients coincide with the weights connecting the cortical neurons to the classifying neurons after the training process is completed. This high level of agreement between coefficients and synaptic weights (82.3%) suggests that the coding rule may extend to biological systems. Finally, we introduce the cortical spiking network as an information channel that transmits the information of the input pattern, in the form of PSWF coefficients, to the power spectrum of the generated output LFP.
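A small sketch of the coefficient representation only, using the discrete analogue of the prolate spheroidal wave functions (Slepian/DPSS sequences from scipy) to project a 1-D pattern onto a few band-concentrated basis functions. The pattern, bandwidth parameter and number of coefficients are assumptions; the spiking network, STDP training and LFP analysis from the paper are not shown.

```python
# Represent a 1-D input pattern by a few coefficients on a discrete prolate
# spheroidal (Slepian) basis - the discrete analogue of the paper's PSWFs.
import numpy as np
from scipy.signal.windows import dpss

N, NW, K = 256, 4, 8
basis = dpss(N, NW, Kmax=K)                 # shape (K, N), orthonormal rows

t = np.linspace(0, 1, N)
pattern = np.sin(2 * np.pi * 2 * t) + 0.5 * np.cos(2 * np.pi * 3 * t)

coeffs = basis @ pattern                    # K projection coefficients (MSE-optimal)
reconstruction = basis.T @ coeffs
err = np.linalg.norm(pattern - reconstruction) / np.linalg.norm(pattern)
print(f"{K} coefficients, relative reconstruction error: {err:.3f}")
```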
Multiple types of synchronization analysis for discontinuous Cohen–Grossberg neural networks with time-varying delays
Jiarong Li; Haijun Jiang; Cheng Hu; Zhiyong Yu;
Abstract: This paper is devoted to the exponential synchronization, finite-time synchronization, and fixed-time synchronization of Cohen–Grossberg neural networks (CGNNs) with discontinuous activations and time-varying delays. A discontinuous feedback controller and a novel adaptive feedback controller are designed to realize global exponential synchronization, finite-time synchronization and fixed-time synchronization by adjusting the values of the parameter ω in the controller. Furthermore, the settling time of the fixed-time synchronization derived in this paper is less conservative and more accurate. Finally, some numerical examples are provided to show the effectiveness and flexibility of the derived results.
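An Euler-method sketch of the setting: a 2-neuron Cohen–Grossberg-type drive network with a delayed term and a discontinuous activation, and a response network driven by a generic discontinuous feedback u = -k1*e - k2*sign(e). The constant delay, network size, weights and gains are illustrative assumptions, and the controller is not one of the specific controllers designed in the paper.

```python
# Drive-response synchronization of a small Cohen-Grossberg-type network
# (constant delay, generic discontinuous feedback; parameters are illustrative).
import numpy as np

def act(x):                        # discontinuous activation
    return np.tanh(x) + 0.05 * np.sign(x)

def amp(x):                        # amplification function a(x) of the CGNN
    return 1.0 + 0.2 / (1.0 + x ** 2)

A = np.array([[ 2.0, -0.1], [-5.0,  3.0]])     # instantaneous weights
B = np.array([[-1.5, -0.1], [-0.2, -2.5]])     # delayed weights
b = 1.0                                         # behaved function b(x) = b*x
dt, tau, steps = 1e-3, 0.5, 20000
d = int(tau / dt)
k1, k2 = 15.0, 1.0

x_hist = np.full((d + 1, 2), 0.2)               # constant initial history (drive)
y_hist = np.full((d + 1, 2), -1.0)              # constant initial history (response)
for _ in range(steps):
    x, xd = x_hist[-1], x_hist[0]               # current and delayed drive states
    y, yd = y_hist[-1], y_hist[0]
    e = y - x
    u = -k1 * e - k2 * np.sign(e)               # generic discontinuous feedback
    dx = amp(x) * (-b * x + A @ act(x) + B @ act(xd))
    dy = amp(y) * (-b * y + A @ act(y) + B @ act(yd)) + u
    x_hist = np.vstack([x_hist[1:], x + dx * dt])
    y_hist = np.vstack([y_hist[1:], y + dy * dt])

print("final synchronization error:", np.linalg.norm(y_hist[-1] - x_hist[-1]))
```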
H∞ state estimation of stochastic memristor-based neural networks with time-varying delays
Haibo Bao; Jinde Cao; Jürgen Kurths; Ahmed Alsaedi; Bashir Ahmad;
Abstract: This paper addresses the problem of H∞ state estimation for a class of stochastic memristor-based neural networks with time-varying delays. Under the framework of the Filippov solution, the stochastic memristor-based neural networks are transformed into systems with interval parameters. This paper is the first to investigate the H∞ state estimation problem for continuous-time Itô-type stochastic memristor-based neural networks. By means of Lyapunov functionals and stochastic analysis techniques, sufficient conditions are derived to ensure that the estimation error system is asymptotically stable in the mean square with a prescribed H∞ performance. An explicit expression for the state estimator gain is given in terms of linear matrix inequalities (LMIs). Compared with other results, our results reduce the control gain and control cost effectively. Finally, numerical simulations are provided to demonstrate the efficiency of the theoretical results.
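A heavily simplified LMI sketch (deterministic, delay-free, no memristive interval parameters and no H∞ level) of the general estimator-design idea: search for P ≻ 0 and Y so that the error dynamics ė = (A − LC)e are stable, then recover the gain L = P⁻¹Y. The matrices are illustrative; the paper's LMIs are considerably richer.

```python
# Find an observer gain L for e_dot = (A - L*C) e by solving the LMI
#   P > 0,  A'P + P A - C'Y' - Y C < 0,   with Y = P L.
import cvxpy as cp
import numpy as np

A = np.array([[-1.0,  0.5],
              [ 0.3, -2.0]])           # illustrative system matrix
C = np.array([[1.0, 0.0]])             # illustrative output matrix
n, m = A.shape[0], C.shape[0]

P = cp.Variable((n, n), symmetric=True)
Y = cp.Variable((n, m))
eps = 1e-3
M = A.T @ P + P @ A - C.T @ Y.T - Y @ C
constraints = [P >> eps * np.eye(n), M << -eps * np.eye(n)]
cp.Problem(cp.Minimize(0), constraints).solve(solver=cp.SCS)

L = np.linalg.solve(P.value, Y.value)  # estimator gain
print("observer gain L:\n", L)
print("error-dynamics eigenvalues:", np.linalg.eigvals(A - L @ C))
```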
A border-ownership model based on computational electromagnetism
Zaem Arif Zainal; Shunji Satoh;
Abstract: The mathematical relation between a vector electric field and its corresponding scalar potential field is useful for formulating computational problems of lower/middle-order visual processing, specifically the assignment of borders to the side of the object: so-called border ownership (BO). BO coding is a key process for extracting objects from the background, allowing one to organize a cluttered scene. We propose that these problems can be solved simultaneously by applying a theorem of electromagnetism: conservative vector fields have zero rotation, or "curl". We hypothesize that (i) the BO signal is definable as a vector electric field with arrowheads pointing to the inner side of perceived objects, and (ii) its corresponding scalar field carries information related to the perceived depth order of occluding/occluded objects. A simple model was developed based on this computational theory. Model results qualitatively agree with the object-side selectivity of BO-coding neurons and with perceptions of object order. The model update rule can be reproduced as a plausible neural network that offers new interpretations of existing physiological results. The results of this study also suggest that T-junction detectors are unnecessary to calculate depth order.
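A numpy sketch of the underlying electrostatic relation only: treat an inward-pointing border vector field as F = -∇φ and recover a scalar field by solving Poisson's equation ∇²φ = -div F with Jacobi iterations. The square object, grid, boundary condition and iteration count are illustrative assumptions, not the paper's model or update rule.

```python
# Recover a scalar field from an inward-pointing border vector field by
# treating it as F = -grad(phi) and solving lap(phi) = -div(F).
import numpy as np

N = 64
Fx = np.zeros((N, N))
Fy = np.zeros((N, N))

# A square "object": border vectors point towards its interior.
lo, hi = 20, 44
Fy[lo, lo:hi] =  1.0    # top edge points down (into the object)
Fy[hi, lo:hi] = -1.0    # bottom edge points up
Fx[lo:hi, lo] =  1.0    # left edge points right
Fx[lo:hi, hi] = -1.0    # right edge points left

# Divergence of the border-ownership field (central differences).
div = np.gradient(Fx, axis=1) + np.gradient(Fy, axis=0)

# Jacobi iterations for lap(phi) = -div, with phi = 0 on the outer frame.
phi = np.zeros((N, N))
for _ in range(2000):
    phi[1:-1, 1:-1] = 0.25 * (phi[2:, 1:-1] + phi[:-2, 1:-1] +
                              phi[1:-1, 2:] + phi[1:-1, :-2] + div[1:-1, 1:-1])

print(f"scalar field inside object: {phi[32, 32]:.3f}, background: {phi[5, 5]:.3f}")
```

The recovered scalar field takes a roughly constant value inside the square that differs from the background value, which is the kind of figure/ground separation the abstract associates with perceived depth order.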
Unsupervised heart-rate estimation in wearables with Liquid states and a probabilistic readout
Anup Das; Paruthi Pradhapan; Willemijn Groenendaal; Prathyusha Adiraju; Raj Thilak Rajan; Francky Catthoor; Siebren Schaafsma; Jeffrey L. Krichmar; Nikil Dutt; Chris Van Hoof;
Abstract: Heart-rate estimation is a fundamental feature of modern wearable devices. In this paper we propose a machine learning technique to estimate heart rate from electrocardiogram (ECG) data collected using wearable devices. The novelty of our approach lies in (1) encoding spatio-temporal properties of ECG signals directly into spike trains and using these to excite recurrently connected spiking neurons in a Liquid State Machine computation model; (2) a novel learning algorithm; and (3) an intelligently designed unsupervised readout based on fuzzy c-means clustering of spike responses from a subset of neurons (liquid states), selected using particle swarm optimization. Our approach differs from existing work by learning directly from ECG signals (allowing personalization), without requiring costly data annotations. Additionally, our approach can be easily implemented on state-of-the-art spiking-based neuromorphic systems, offering high accuracy with a significantly lower energy footprint, leading to extended battery life of wearable devices. We validated our approach with CARLsim, a GPU-accelerated spiking neural network simulator modelling Izhikevich spiking neurons with Spike Timing Dependent Plasticity (STDP) and homeostatic scaling. A range of subjects is considered, from in-house clinical trials and public ECG databases. Results show high accuracy and a low energy footprint in heart-rate estimation across subjects with and without cardiac irregularities, signifying the strong potential of this approach to be integrated into future wearable devices.
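A sketch of the unsupervised readout step only: fuzzy c-means clustering of spike-count feature vectors ("liquid states"), implemented directly in numpy. The toy features, number of clusters and fuzzifier m are assumptions; the Liquid State Machine, the ECG-to-spike encoding and the PSO-based neuron selection are not shown.

```python
# Fuzzy c-means clustering of toy "liquid state" feature vectors.
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, n_iter=100, seed=0):
    """X: (n_samples, n_features). Returns (cluster centers, membership matrix U)."""
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))            # rows sum to 1
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2 / (m - 1)))                    # standard FCM membership update
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

# Toy liquid states: spike counts of 10 selected neurons over 300 windows,
# drawn from three different firing regimes.
rng = np.random.default_rng(1)
states = np.vstack([rng.poisson(lam, size=(100, 10))
                    for lam in (2.0, 6.0, 12.0)]).astype(float)

centers, U = fuzzy_c_means(states, c=3)
labels = U.argmax(axis=1)
print("cluster sizes:", np.bincount(labels))
```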
Smoothing inertial projection neural network for minimization Lp−q in sparse signal reconstruction
You Zhao; Xing He; Tingwen Huang; Junjian Huang;
Abstract: In this paper, we investigate a more general sparse signal recovery minimization model and a smoothing neural network optimization method for the compressed sensing problem. The objective function is an Lp−q minimization model which includes the nonsmooth, nonconvex, and non-Lipschitz Lp quasi-norms (1 ≥ p > 0) and the nonsmooth Lq norms (2 ≥ q > 1), and its feasible set is a closed convex subset of R^n. First, under the restricted isometry property (RIP) condition, the uniqueness of the solution of the minimization model with a given sparsity s is obtained through theoretical analysis. Under a mild condition, we show that, for fixed p, the larger q is, the more effective the sparse recovery model becomes when the sensing matrix satisfies the RIP condition. Second, using a smoothing approximation method, we propose the smoothing inertial projection neural network (SIPNN) algorithm for solving the proposed general model. Under certain conditions, the proposed algorithm converges to a stationary point. Finally, experiments on convergence behaviour and recovery performance, together with a comparison experiment, confirm the effectiveness of the proposed SIPNN algorithm.
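A sketch that makes the compressed-sensing setup concrete using an inertial (FISTA-style) soft-thresholding iteration for the convex L1 problem. This is a simpler stand-in chosen for illustration only; it is not the nonconvex Lp−q model or the SIPNN algorithm proposed in the paper, and the problem sizes and regularization weight are assumptions.

```python
# Sparse recovery from random Gaussian measurements with an inertial
# soft-thresholding (FISTA-style) iteration for the L1 problem.
import numpy as np

rng = np.random.default_rng(0)
n, m, s = 256, 80, 8                      # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
A = rng.standard_normal((m, n)) / np.sqrt(m)
b = A @ x_true

lam = 0.01
L = np.linalg.norm(A, 2) ** 2             # Lipschitz constant of the smooth part
x, y_k, t = np.zeros(n), np.zeros(n), 1.0
for _ in range(500):
    grad = A.T @ (A @ y_k - b)
    z = y_k - grad / L
    x_new = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
    t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
    y_k = x_new + (t - 1) / t_new * (x_new - x)                  # inertial step
    x, t = x_new, t_new

print("relative recovery error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```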
STDP-based spiking deep convolutional neural networks for object recognition
Saeed Reza Kheradpisheh; Mohammad Ganjtabesh; Simon J. Thorpe; Timothée Masquelier;
Abstract: Previous studies have shown that spike-timing-dependent plasticity (STDP) can be used in spiking neural networks (SNN) to extract visual features of low or intermediate complexity in an unsupervised manner. These studies, however, used relatively shallow architectures in which only one layer was trainable. Another line of research has demonstrated, using rate-based neural networks trained with back-propagation, that having many layers increases the recognition robustness, an approach known as deep learning. We thus designed a deep SNN comprising several convolutional (trainable with STDP) and pooling layers. We used a temporal coding scheme in which the most strongly activated neurons fire first, while less activated neurons fire later or not at all. The network was exposed to natural images. Thanks to STDP, neurons progressively learned features corresponding to prototypical patterns that were both salient and frequent. Only a few tens of examples per category were required and no labels were needed. After learning, the complexity of the extracted features increased along the hierarchy, from edge detectors in the first layer to object prototypes in the last layer. Coding was very sparse, with only a few thousand spikes per image, and in some cases the object category could be reasonably well inferred from the activity of a single higher-order neuron. More generally, the activity of a few hundred such neurons contained robust category information, as demonstrated using a classifier on the Caltech 101, ETH-80, and MNIST databases. We also demonstrate the superiority of STDP over other unsupervised techniques such as random crops (HMAX) or auto-encoders. Taken together, our results suggest that the combination of STDP with latency coding may be key to understanding how the primate visual system learns, as well as its remarkable processing speed and low energy consumption. These mechanisms are also of interest for artificial vision systems, particularly for hardware solutions.
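A single-neuron sketch of the two ingredients the abstract highlights: intensity-to-latency coding (stronger inputs fire earlier) and a simplified, order-based STDP update whose weight change depends only on whether a presynaptic spike precedes the postsynaptic one. The multiplicative form w(1−w), the parameter values and the one-neuron, 16-afferent setting are illustrative assumptions, not the exact network or parameters of the paper. Afferents that reliably fire early end up with much larger weights than the late, unreliable ones.

```python
# Latency coding plus a simplified order-based STDP rule, Dw = a * w * (1 - w).
import numpy as np

rng = np.random.default_rng(0)

def latency_encode(intensities):
    """Higher intensity -> earlier spike time (rank-order latency)."""
    return np.argsort(np.argsort(-intensities)).astype(float)

a_plus, a_minus = 0.03, 0.01
w = rng.uniform(0.2, 0.8, size=16)             # afferent weights in [0, 1]

for _ in range(300):                            # repeated pattern presentations
    pattern = rng.uniform(0, 1, size=16)
    pattern[:8] += 1.0                          # first 8 inputs are reliably strong
    t_pre = latency_encode(pattern)
    # Integrate-to-threshold: the neuron fires once the summed weight of
    # already-fired afferents crosses a threshold.
    order = np.argsort(t_pre)
    cumulative = np.cumsum(w[order])
    fired = np.searchsorted(cumulative, 2.0)
    t_post = t_pre[order[min(fired, 15)]]
    # Simplified STDP: potentiate afferents that fired no later than the
    # postsynaptic spike, depress the others.
    ltp = t_pre <= t_post
    w += np.where(ltp, a_plus, -a_minus) * w * (1 - w)

print("mean weight, strong (early) afferents:", round(w[:8].mean(), 3))
print("mean weight, weak (late) afferents:   ", round(w[8:].mean(), 3))
```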