Publications by authors named "Srinivasan Gopalakrishnan"

13 Publications


Combined two-level damage identification strategy using ultrasonic guided waves and physical knowledge assisted machine learning.

Ultrasonics 2021 Aug 2;115:106451. Epub 2021 May 2.

Department of Aerospace Engineering, Indian Institute of Science, Bangalore, Karnataka 560012, India.

Structural Health Monitoring of composite structures is one of the significant challenges faced by the aerospace industry. A combined two-level damage identification, viz., damage detection and localization, is performed in this paper for a composite panel using ultrasonic guided waves. A novel physical knowledge-assisted machine learning technique is proposed in which domain knowledge and expert supervision are utilized to assist the learning process. Two supervised learning-based convolutional neural networks are trained for damage detection (binary classification) and localization (multi-class classification) on an experimental benchmark dataset. The performance of the trained models is evaluated using the loss curve, accuracy, confusion matrix, and receiver-operating characteristic curve. It is observed that incorporating physical knowledge helps the networks perform better than a direct deep learning approach. In this work, a combined damage identification strategy is proposed for real-time application. In this strategy, the damage detection model works in an outer loop and predicts the state of the structure (undamaged or damaged), whereas an inner loop predicts the location of the damage only if the outer loop detects damage. The proposed technique offers advantages in terms of accuracy (above 99% for both detection and localization), computational time (prediction time per signal in milliseconds), sensor optimization, in-situ monitoring, and robustness to noise.
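The combined outer-loop/inner-loop strategy described above can be sketched as follows. This is a minimal stand-in, not the authors' code: the trained CNNs are replaced by placeholder decision rules, and the energy threshold, zone count, and signal shapes are assumptions.

```python
import numpy as np

def detect_damage(signal):
    """Outer loop: binary damaged/undamaged decision (placeholder rule)."""
    return float(np.sum(signal ** 2)) > 1.0  # assumed energy threshold

def localize_damage(signal, n_zones=4):
    """Inner loop: multi-class zone prediction (placeholder rule)."""
    bands = np.array_split(signal, n_zones)
    return int(np.argmax([np.sum(b ** 2) for b in bands]))

def two_level_identification(signal):
    """Run the localizer only when the outer loop flags damage."""
    if not detect_damage(signal):
        return {"damaged": False, "zone": None}
    return {"damaged": True, "zone": localize_damage(signal)}
```

The point of the structure is that a cheap detector gates the costlier localizer, which is what makes millisecond per-signal prediction plausible for in-situ monitoring.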
http://dx.doi.org/10.1016/j.ultras.2021.106451

Perovskite neural trees.

Nat Commun 2020 05 7;11(1):2245. Epub 2020 May 7.

School of Materials Engineering, Purdue University, West Lafayette, IN, 47907, USA.

Trees are used by animals, humans and machines to classify information and make decisions. The natural tree structures displayed by synapses of the brain involve potentiation and depression, are capable of branching, and are essential for survival and learning. Demonstrating such features in synthetic matter is challenging due to the need to host a complex energy landscape capable of learning, memory and electrical interrogation. We report the experimental realization of tree-like conductance states at room temperature in strongly correlated perovskite nickelates by modulating proton distribution under high-speed electric pulses. This demonstration represents a physical realization of ultrametric trees, a concept from number theory applied to the study of spin glasses in physics that inspired early neural network theory dating back almost forty years. We apply the tree-like memory features in spiking neural networks to demonstrate high-fidelity object recognition, which in the future can open new directions for neuromorphic computing and artificial intelligence.
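The "ultrametric tree" concept the paper invokes has a simple formal core: a distance obeying the strong triangle inequality d(x, z) <= max(d(x, y), d(y, z)). A small self-contained check, with a toy tree distance chosen purely for illustration (it is not the paper's conductance-state metric):

```python
def is_ultrametric(d, points, tol=1e-9):
    """Check the strong triangle inequality d(x,z) <= max(d(x,y), d(y,z))
    for all triples, which characterizes tree-like (ultrametric) distances."""
    for x in points:
        for y in points:
            for z in points:
                if d(x, z) > max(d(x, y), d(y, z)) + tol:
                    return False
    return True

def tree_distance(a, b):
    """Toy distance between bit-string leaves of a binary tree:
    2^(-depth of the deepest common ancestor)."""
    if a == b:
        return 0.0
    common = 0
    for ca, cb in zip(a, b):
        if ca != cb:
            break
        common += 1
    return 2.0 ** (-common)
```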
http://dx.doi.org/10.1038/s41467-020-16105-y
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7206050

Enabling Spike-Based Backpropagation for Training Deep Neural Network Architectures.

Front Neurosci 2020 28;14:119. Epub 2020 Feb 28.

Nanoelectronics Research Laboratory, School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN, United States.

Spiking Neural Networks (SNNs) have recently emerged as a prominent neural computing paradigm. However, the typical shallow SNN architectures have limited capacity for expressing complex representations, while training deep SNNs using input spikes has not been successful so far. Diverse methods have been proposed to get around this issue, such as converting off-the-shelf trained deep Artificial Neural Networks (ANNs) to SNNs. However, the ANN-SNN conversion scheme fails to capture the temporal dynamics of a spiking system. On the other hand, it is still a difficult problem to directly train deep SNNs using input spike events due to the discontinuous, non-differentiable nature of the spike generation function. To overcome this problem, we propose an approximate derivative method that accounts for the leaky behavior of LIF neurons. This method enables training deep convolutional SNNs directly (with input spike events) using spike-based backpropagation. Our experiments show the effectiveness of the proposed spike-based learning on deep networks (VGG and Residual architectures) by achieving the best classification accuracies on the MNIST, SVHN, and CIFAR-10 datasets compared to other SNNs trained with spike-based learning. Moreover, we analyze sparse event-based computations to demonstrate the efficacy of the proposed SNN training method for inference operation in the spiking domain.
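The core trick, replacing the undefined derivative of the spike (Heaviside) function with a smooth surrogate during backpropagation, can be sketched as below. The triangular surrogate shape and the leak scaling are illustrative assumptions; the paper's exact approximate derivative may differ.

```python
import numpy as np

def spike_fn(v_mem, v_th=1.0):
    """Forward pass: Heaviside spike generation at the membrane threshold."""
    return (v_mem >= v_th).astype(np.float64)

def surrogate_grad(v_mem, v_th=1.0, alpha=0.3, leak=0.99):
    """Backward pass: approximate derivative as a triangular window around
    the threshold, scaled by the LIF leak factor (assumed form)."""
    return leak * alpha * np.maximum(0.0, 1.0 - np.abs(v_mem - v_th))
```

During training, the forward pass uses `spike_fn` while the backward pass substitutes `surrogate_grad` wherever the chain rule calls for the spike function's derivative.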
http://dx.doi.org/10.3389/fnins.2020.00119
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7059737

Unsupervised learning using stochastic switching in magneto-electric magnetic tunnel junctions.

Philos Trans A Math Phys Eng Sci 2020 Feb 23;378(2164):20190157. Epub 2019 Dec 23.

School of Electrical and Computer Engineering, Purdue University, 465, Northwestern Ave, West Lafayette, IN 47906, USA.

Spiking neural networks (SNNs) offer a bio-plausible and potentially power-efficient alternative to conventional deep learning. Although there has been progress towards implementing SNN functionalities in custom CMOS-based hardware using beyond-Von Neumann architectures, the power efficiency of the human brain has remained elusive. This has necessitated investigations of novel material systems which can efficiently mimic the functional units of SNNs, such as neurons and synapses. In this paper, we present a magnetoelectric-magnetic tunnel junction (ME-MTJ) device as a synapse. We arrange these synapses in a crossbar fashion and perform unsupervised learning. We leverage the capacitive nature of write-ports in ME-MTJs, wherein by applying appropriately shaped voltage pulses across the write-port, the ME-MTJ can be switched in a probabilistic manner. We further exploit the sigmoidal switching characteristics of the ME-MTJ to tune the synapses to follow the well-known spike timing-dependent plasticity (STDP) rule in a stochastic fashion. Finally, we use the stochastic STDP rule in ME-MTJ synapses to simulate a two-layered SNN that performs image classification on a handwritten digit dataset. Thus, the capacitive write-port and the decoupled nature of the read-write paths of ME-MTJs allow us to construct a transistor-less crossbar, suitable for energy-efficient implementation of learning in SNNs. This article is part of the theme issue 'Harmonizing energy-autonomous computing and intelligence'.
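The two device behaviors the abstract leans on, a sigmoidal switching probability versus pulse amplitude and a stochastic STDP rule built on it, can be sketched as follows. All parameter values (`v_half`, `slope`, `tau`) are illustrative assumptions, not device-calibrated numbers.

```python
import math
import random

def switching_probability(v_pulse, v_half=0.5, slope=10.0):
    """Sigmoidal probability that the ME-MTJ flips for a given write pulse."""
    return 1.0 / (1.0 + math.exp(-slope * (v_pulse - v_half)))

def stochastic_stdp_update(state, dt, rng, tau=20.0):
    """Flip the binary synapse with an STDP-shaped probability:
    pre-before-post (dt > 0) potentiates, post-before-pre depresses."""
    p = math.exp(-abs(dt) / tau)
    if rng.random() < p:
        return 1 if dt > 0 else 0
    return state
```

Shaping the write pulse thus maps a timing difference `dt` onto a switching probability, which is what lets the device itself realize stochastic STDP without extra circuitry.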
http://dx.doi.org/10.1098/rsta.2019.0157
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6939242

Reinforcement Learning With Low-Complexity Liquid State Machines.

Front Neurosci 2019 27;13:883. Epub 2019 Aug 27.

Department of ECE, Purdue University, West Lafayette, IN, United States.

We propose reinforcement learning on simple networks consisting of random connections of spiking neurons (both recurrent and feed-forward) that can learn complex tasks with very few trainable parameters. Such sparse and randomly interconnected recurrent spiking networks exhibit highly non-linear dynamics that transform the inputs into rich high-dimensional representations based on the current and past context. The random input representations can be efficiently interpreted by an output (or readout) layer with trainable parameters. Systematic initialization of the random connections and training of the readout layer using the Q-learning algorithm enable such small random spiking networks to learn optimally and achieve the same learning efficiency as humans on complex reinforcement learning (RL) tasks like Atari games. In fact, the sparse recurrent connections cause these networks to retain fading memory of past inputs, thereby enabling them to perform temporal integration across successive RL time-steps and learn with partial state inputs. The spike-based approach using small random recurrent networks provides a computationally efficient alternative to state-of-the-art deep reinforcement learning networks with several layers of trainable parameters.
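Structurally, the architecture reduces to a fixed random recurrent projection plus a readout updated by Q-learning. A non-spiking stand-in (tanh units in place of spiking neurons, arbitrary sizes and learning rates, all assumptions) sketches the idea:

```python
import numpy as np

rng = np.random.default_rng(0)
W_in = rng.standard_normal((100, 4)) * 0.1       # fixed random input weights
W_rec = rng.standard_normal((100, 100)) * 0.05   # fixed random recurrence
W_out = np.zeros((2, 100))                       # trainable readout (2 actions)

def liquid_state(x, h):
    """One step of the untrained reservoir (tanh stand-in for spiking dynamics)."""
    return np.tanh(W_in @ x + W_rec @ h)

def q_update(h, action, reward, h_next, gamma=0.99, lr=0.01):
    """Q-learning applied to the readout weights only; the reservoir is never trained."""
    q = W_out @ h
    target = reward + gamma * np.max(W_out @ h_next)
    td_error = target - q[action]
    W_out[action] += lr * td_error * h
    return td_error
```

Only `W_out` changes during learning, which is where the "very few trainable parameters" claim comes from.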
http://dx.doi.org/10.3389/fnins.2019.00883
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6718696

Nutritional and functional roles of millets-A review.

J Food Biochem 2019 07 14;43(7):e12859. Epub 2019 Apr 14.

Department of Environmental Sciences, Bharathiar University, Coimbatore, India.

The available cultivable plant-based food resources in developing tropical countries are inadequate to supply protein for both humans and animals. This limitation of available plant food sources is due to shrinking agricultural land, rapid urbanization, climate change, and tough competition between the food and feed industries for existing food and feed crops. However, the cheapest food materials are those derived from plant sources, which, although they occur in abundance in nature, are still underutilized. At this juncture, the identification, evaluation, and introduction of underexploited millet crops, including crops of tribal utility, which are generally rich in protein, is one of the long-term viable solutions for a sustainable supply of food and feed materials. In view of the above, the present review endeavors to highlight the nutritional and functional potential of underexploited millet crops. PRACTICAL APPLICATIONS: Millets are an important food crop at the global level, with a significant economic impact on developing countries. Millets have advantageous characteristics as they are drought- and pest-resistant grains. Millets are considered high-energy-yielding nourishing foods which help in addressing malnutrition. Millet-based foods are considered potential prebiotics and probiotics with prospective health benefits. Grains of these millet species are widely consumed as a source of traditional medicines and important foods for preserving health.
http://dx.doi.org/10.1111/jfbc.12859

Analysis of Liquid Ensembles for Enhancing the Performance and Accuracy of Liquid State Machines.

Front Neurosci 2019 28;13:504. Epub 2019 May 28.

School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN, United States.

Liquid state machine (LSM), a bio-inspired computing model consisting of an input sparsely connected to a randomly interlinked reservoir (or liquid) of spiking neurons followed by a readout layer, finds utility in a range of applications varying from robot control and sequence generation to action, speech, and image recognition. LSMs stand out among other Recurrent Neural Network (RNN) architectures due to their simple structure and lower training complexity. A plethora of recent efforts has focused on mimicking certain characteristics of biological systems to enhance the performance of modern artificial neural networks. It has been shown that biological neurons are more likely to be connected to other neurons in close proximity, and tend to be disconnected when the neurons are spatially far apart. Inspired by this, we propose a group of locally connected neuron reservoirs, or an ensemble of liquids approach, for LSMs. We analyze how the segmentation of a single large liquid to create an ensemble of multiple smaller liquids affects the latency and accuracy of an LSM. In our analysis, we quantify the ability of the proposed ensemble approach to provide an improved representation of the input using the Separation Property (SP) and Approximation Property (AP). Our results illustrate that the ensemble approach enhances class discrimination (quantified as the ratio between the SP and AP), leading to better accuracy in speech and image recognition tasks, when compared to a single large liquid. Furthermore, we obtain performance benefits in terms of improved inference time and reduced memory requirements, due to the lower number of connections and the freedom to parallelize the liquid evaluation process.
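The Separation Property can be approximated, for illustration, as the mean distance between class-centroid liquid states. The paper's exact SP/AP definitions may differ, so treat this as a sketch of the kind of metric being compared across liquid configurations:

```python
import numpy as np

def separation(states, labels):
    """Mean pairwise distance between class-centroid liquid states:
    a larger value indicates better class discrimination by the liquid."""
    classes = sorted(set(labels))
    centroids = [np.mean([s for s, l in zip(states, labels) if l == c], axis=0)
                 for c in classes]
    dists = [np.linalg.norm(ci - cj)
             for i, ci in enumerate(centroids)
             for cj in centroids[i + 1:]]
    return float(np.mean(dists))
```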
http://dx.doi.org/10.3389/fnins.2019.00504
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6546930

ReStoCNet: Residual Stochastic Binary Convolutional Spiking Neural Network for Memory-Efficient Neuromorphic Computing.

Front Neurosci 2019 19;13:189. Epub 2019 Mar 19.

Department of ECE, Purdue University, West Lafayette, IN, United States.

In this work, we propose ReStoCNet, a residual stochastic multilayer convolutional Spiking Neural Network (SNN) composed of binary kernels, to reduce the synaptic memory footprint and enhance the computational efficiency of SNNs for complex pattern recognition tasks. ReStoCNet consists of an input layer followed by stacked convolutional layers for hierarchical input feature extraction, pooling layers for dimensionality reduction, and a fully connected layer for inference. In addition, we introduce residual connections between the stacked convolutional layers to improve the hierarchical feature learning capability of deep SNNs. We propose a Spike Timing Dependent Plasticity (STDP)-based probabilistic learning algorithm, referred to as Hybrid-STDP (HB-STDP), incorporating Hebbian and anti-Hebbian learning mechanisms, to train the binary kernels forming ReStoCNet in a layer-wise unsupervised manner. We demonstrate the efficacy of ReStoCNet and the presented HB-STDP based unsupervised training methodology on the MNIST and CIFAR-10 datasets. We show that residual connections enable the deeper convolutional layers to self-learn useful high-level input features and mitigate the accuracy loss observed in deep SNNs devoid of residual connections. The proposed ReStoCNet offers >20× kernel memory compression compared to a full-precision (32-bit) SNN while yielding sufficiently high classification accuracy on the chosen pattern recognition tasks.
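The headline memory saving follows from simple arithmetic: a 1-bit kernel versus a 32-bit one bounds the compression at 32×, with the reported >20× figure reflecting overheads. A sketch with assumed layer dimensions (the 64-kernel 3x3x32 layer below is an illustration, not a layer from the paper):

```python
def kernel_memory_bits(n_kernels, k, channels, bits_per_weight):
    """Total kernel storage in bits for n_kernels of size k x k x channels."""
    return n_kernels * k * k * channels * bits_per_weight

# Assumed example layer: 64 kernels of 3x3x32
full_precision = kernel_memory_bits(64, 3, 32, 32)
binary = kernel_memory_bits(64, 3, 32, 1)
compression = full_precision / binary  # 32x upper bound from binarization
```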
http://dx.doi.org/10.3389/fnins.2019.00189
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6434391

SpiLinC: Spiking Liquid-Ensemble Computing for Unsupervised Speech and Image Recognition.

Front Neurosci 2018 23;12:524. Epub 2018 Aug 23.

Department of ECE, Purdue University, West Lafayette, IN, United States.

In this work, we propose a Spiking Neural Network (SNN) consisting of input neurons sparsely connected by plastic synapses to a randomly interlinked liquid, referred to as Liquid-SNN, for unsupervised speech and image recognition. We adapt the strength of the synapses interconnecting the input and liquid using Spike Timing Dependent Plasticity (STDP), which enables the neurons to self-learn a general representation of unique classes of input patterns. The presented unsupervised learning methodology makes it possible to infer the class of a test input directly using the liquid neuronal spiking activity. This is in contrast to standard Liquid State Machines (LSMs) that have fixed synaptic connections between the input and liquid followed by a readout layer (trained in a supervised manner) to extract the liquid states and infer the class of the input patterns. Moreover, the utility of LSMs has primarily been demonstrated for speech recognition. We find that training such LSMs is challenging for complex pattern recognition tasks because of the information loss incurred by using fixed input to liquid synaptic connections. We show that our Liquid-SNN is capable of efficiently recognizing both speech and image patterns by learning the rich temporal information contained in the respective input patterns. However, the need to enlarge the liquid for improving the accuracy introduces scalability challenges and training inefficiencies. We propose SpiLinC that is composed of an ensemble of multiple liquids operating in parallel. We use a "divide and learn" strategy for SpiLinC, where each liquid is trained on a unique segment of the input patterns that causes the neurons to self-learn distinctive input features. SpiLinC effectively recognizes a test pattern by combining the spiking activity of the constituent liquids, each of which identifies characteristic input features. 
As a result, SpiLinC offers competitive classification accuracy compared to the Liquid-SNN with added sparsity in synaptic connectivity and faster training convergence, both of which lead to improved energy efficiency in neuromorphic hardware implementations. We validate the efficacy of the proposed Liquid-SNN and SpiLinC on the entire digit subset of the TI46 speech corpus and handwritten digits from the MNIST dataset.
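The "divide and learn" ensemble inference can be sketched as: split the input across liquids, let each vote via its per-class spiking activity, and sum the votes. Liquid internals are replaced here by arbitrary stand-in response functions, an assumption made purely to keep the sketch self-contained:

```python
import numpy as np

def split_input(x, n_liquids):
    """Assign each liquid a unique contiguous segment of the input pattern."""
    return np.array_split(x, n_liquids)

def ensemble_predict(segments, liquids):
    """Sum per-class spike counts across the constituent liquids and take
    the class with the highest combined activity."""
    total = None
    for seg, liquid in zip(segments, liquids):
        counts = liquid(seg)  # stand-in for per-class spike counts
        total = counts if total is None else total + counts
    return int(np.argmax(total))
```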
http://dx.doi.org/10.3389/fnins.2018.00524
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6116788

Training Deep Spiking Convolutional Neural Networks With STDP-Based Unsupervised Pre-training Followed by Supervised Fine-Tuning.

Front Neurosci 2018 3;12:435. Epub 2018 Aug 3.

Nanoelectronics Research Laboratory, School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN, United States.

Spiking Neural Networks (SNNs) are fast becoming a promising candidate for brain-inspired neuromorphic computing because of their inherent power efficiency and impressive inference accuracy across several cognitive tasks such as image classification and speech recognition. Recent efforts in SNNs have focused on implementing deeper networks with multiple hidden layers to incorporate exponentially more difficult functional representations. In this paper, we propose a pre-training scheme using biologically plausible unsupervised learning, namely Spike-Timing-Dependent-Plasticity (STDP), in order to better initialize the parameters in multi-layer systems prior to supervised optimization. The multi-layer SNN comprises alternating convolutional and pooling layers followed by fully-connected layers, which are populated with leaky integrate-and-fire spiking neurons. We train the deep SNNs in two phases wherein, first, convolutional kernels are pre-trained in a layer-wise manner with unsupervised learning, followed by fine-tuning the synaptic weights with spike-based supervised gradient descent backpropagation. Our experiments on digit recognition demonstrate that the STDP-based pre-training with gradient-based optimization provides improved robustness, faster (~2.5×) training time and better generalization compared with purely gradient-based training without pre-training.
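The two phases can be sketched as a pair of simplified update rules: an unsupervised trace-based STDP step for pre-training kernels, then plain gradient descent for fine-tuning. Both rules and all constants are stand-ins, not the paper's exact formulation:

```python
import numpy as np

def stdp_pretrain(kernel, pre_trace, post_spike, a_plus=0.01, a_minus=0.012):
    """Phase 1 (unsupervised): at a postsynaptic spike, potentiate weights
    whose presynaptic trace is high and depress the rest."""
    if post_spike:
        return kernel + a_plus * pre_trace - a_minus * (1.0 - pre_trace)
    return kernel

def fine_tune(weights, grad, lr=0.1):
    """Phase 2 (supervised): ordinary gradient descent on spike-based loss
    gradients, starting from the STDP-initialized weights."""
    return weights - lr * grad
```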
http://dx.doi.org/10.3389/fnins.2018.00435
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6085488

Magnetic Tunnel Junction Based Long-Term Short-Term Stochastic Synapse for a Spiking Neural Network with On-Chip STDP Learning.

Sci Rep 2016 07 13;6:29545. Epub 2016 Jul 13.

School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN, USA.

Spiking Neural Networks (SNNs) have emerged as a powerful neuromorphic computing paradigm to carry out classification and recognition tasks. Nevertheless, general-purpose computing platforms and custom hardware architectures implemented using standard CMOS technology have been unable to rival the power efficiency of the human brain. Hence, there is a need for novel nanoelectronic devices that can efficiently model the neurons and synapses constituting an SNN. In this work, we propose a heterostructure composed of a Magnetic Tunnel Junction (MTJ) and a heavy metal as a stochastic binary synapse. Synaptic plasticity is achieved by the stochastic switching of the MTJ conductance states, based on the temporal correlation between the spiking activities of the interconnecting neurons. Additionally, we present a significance-driven long-term short-term stochastic synapse comprising two unique binary synaptic elements, in order to improve the synaptic learning efficiency. We demonstrate the efficacy of the proposed synaptic configurations and the stochastic learning algorithm on an SNN trained to classify handwritten digits from the MNIST dataset, using a device-to-system-level simulation framework. The power efficiency of the proposed neuromorphic system stems from the ultra-low programming energy of the spintronic synapses.
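The long-term short-term idea, a volatile element that switches readily plus a stable element updated only by repeated ("significant") correlated activity, can be sketched deterministically. The actual device switching is stochastic and the promotion count below is an assumption:

```python
class LTSTSynapse:
    """Significance-driven pair of binary synaptic elements (sketch)."""

    def __init__(self, promote_after=3):
        self.short = 0                    # volatile short-term binary element
        self.long = 0                     # stable long-term binary element
        self.hits = 0                     # consecutive correlated events seen
        self.promote_after = promote_after

    def correlated_event(self):
        """Potentiating event: set the short-term bit immediately; promote
        the change to the long-term bit only once enough consecutive
        events accumulate (the 'significance' criterion)."""
        self.short = 1
        self.hits += 1
        if self.hits >= self.promote_after:
            self.long = 1

    def decay(self):
        """The short-term element forgets and the event counter resets;
        the long-term element retains its state."""
        self.short = 0
        self.hits = 0
```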
http://dx.doi.org/10.1038/srep29545
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4942786

Numerical analysis of lamb wave generation in piezoelectric composite IDT.

IEEE Trans Ultrason Ferroelectr Freq Control 2005 Oct;52(10):1851-60

Department of Aerospace Engineering, Indian Institute of Science, Bangalore 560012, India.

An equivalent single-layer model for Lamb wave generation by an interdigital transducer (IDT) on composite host structures is developed. The additional complexities generally encountered while launching a surface acoustic wave (SAW) on a composite structure, such as the coupling between Lamb wave modes and the complicated nature of the electromechanical actuation, are considered. The model of an infinite IDT is extended to deal with a finite IDT with edge discontinuities. The effect of electromechanical actuation on the wavelength shifts with respect to the passive case is investigated. The problem of electrically driven instability within the IDT is analyzed. Numerical results are reported by considering a model of the IDT integrated with the host structure, which show that there are significant deviations from the conventional design estimates while launching a targeted mode. The proposed approach may enable new designs in material systems and geometry that avoid mode mixing, or that introduce it by choice.
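For context, the "conventional design estimate" the results are compared against is the basic IDT relation: the finger pitch p sets the launched wavelength λ = 2p, so the target excitation frequency for a mode with phase velocity c_p is f = c_p / λ. A sketch with assumed values (the 1 mm pitch and ~5400 m/s S0-mode phase velocity are illustrative, not from the paper):

```python
def idt_excitation_frequency(finger_pitch_m, phase_velocity_m_s):
    """Conventional design-estimate centre frequency for a targeted Lamb
    mode: wavelength equals twice the IDT finger pitch."""
    wavelength = 2.0 * finger_pitch_m
    return phase_velocity_m_s / wavelength

# Example: 1 mm pitch with an assumed phase velocity of 5400 m/s
f0 = idt_excitation_frequency(1e-3, 5400.0)  # about 2.7 MHz
```

The paper's point is that electromechanical coupling and mode mixing shift the actual launched wavelength away from this simple estimate.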
http://dx.doi.org/10.1109/tuffc.2005.1561641