Keynote Talks

Title: The Super Neuron Model – A New Generation of ANN-based Machine Learning and Applications

Prof. Moncef Gabbouj

Tampere University

ABSTRACT: Operational Neural Networks (ONNs) are a new generation of network models that aim to address two major drawbacks of conventional Convolutional Neural Networks (CNNs): the homogeneous network configuration and the “linear” neuron model, which can only perform linear transformations over previous layer outputs. ONNs can perform any linear or non-linear transformation with a proper combination of “nodal” and “pool” operators. This is a great leap towards expanding the neuron’s learning capacity in CNNs, which thus far required the use of a single nodal operator for all synaptic connections of each neuron. This restriction has recently been lifted by introducing a superior neuron called the “generative neuron”, whose nodal operators can be customized during training in order to maximize learning. As a result, the network is able to self-organize the nodal operators of its neurons’ connections. Self-Organized ONNs (Self-ONNs) equipped with superior generative neurons can achieve diversity even with a compact configuration. We shall explore several signal processing applications of neural network models equipped with the superior neuron.
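As a minimal sketch of the generative-neuron idea, the nodal operator of each connection can be approximated by a truncated Maclaurin series with learnable coefficients, so every connection learns its own nonlinearity; with order Q = 1 the layer reduces to an ordinary convolution. The layer name SelfONNConv2d, the parameter Q, and the structure below are illustrative assumptions, not the speaker's reference implementation.

```python
# Hedged sketch of a self-organized operational (generative-neuron) layer:
# sum_{q=1..Q} Conv2d_q(x^q), i.e. a learnable per-connection polynomial.
import torch
import torch.nn as nn


class SelfONNConv2d(nn.Module):
    """Illustrative Self-ONN-style layer; Q=1 recovers a plain CNN layer."""

    def __init__(self, in_ch, out_ch, kernel_size, Q=3, padding=0):
        super().__init__()
        self.Q = Q  # order of the truncated Maclaurin approximation
        self.convs = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, kernel_size, padding=padding, bias=(q == 0))
            for q in range(Q)
        )

    def forward(self, x):
        # Each power of the input gets its own learnable kernel; their sum
        # realizes a non-linear transformation per synaptic connection.
        return sum(conv(x.pow(q + 1)) for q, conv in enumerate(self.convs))


if __name__ == "__main__":
    layer = SelfONNConv2d(3, 8, kernel_size=3, Q=3, padding=1)
    y = layer(torch.randn(1, 3, 32, 32))
    print(y.shape)  # torch.Size([1, 8, 32, 32])
```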

BIOGRAPHY: Moncef Gabbouj received his BS degree in 1985 from Oklahoma State University, and his MS and PhD degrees from Purdue University in 1986 and 1989, respectively, all in electrical engineering. Dr. Gabbouj is a Professor of Information Technology at the Department of Computing Sciences, Tampere University, Tampere, Finland. He was Academy of Finland Professor during 2011-2015. His research interests include Big Data analytics, multimedia content-based analysis, indexing and retrieval, artificial intelligence, machine learning, pattern recognition, nonlinear signal and image processing and analysis, voice conversion, and video processing and coding. Dr. Gabbouj is a Fellow of the IEEE and a member of the Academia Europaea and the Finnish Academy of Science and Letters. He is the past Chairman of the IEEE CAS TC on DSP and a committee member of the IEEE Fourier Award for Signal Processing. He has served as an associate editor and guest editor of many IEEE and international journals, and as a Distinguished Lecturer for the IEEE CASS. Dr. Gabbouj served as General Co-Chair of IEEE ISCAS 2019, ICIP 2020, ICIP 2024 and ICME 2021. He is the Finland Site Director of the USA NSF IUCRC-funded Center for Visual and Decision Informatics (CVDI) and led the Artificial Intelligence Research Task Force of the Research Alliance on Autonomous Systems (RAAS), funded by Finland’s Ministry of Economic Affairs and Employment.

Title: Neuromorphic Intelligence: Mixed-Signal Analog/Digital Implementations of Spiking Neural Networks for Real-Time Signal Processing

Prof. Giacomo Indiveri

University of Zurich and ETH Zurich

ABSTRACT: Artificial Intelligence (AI) neural networks and machine learning inference accelerators represent a successful technology for solving a wide range of complex tasks. However, for many practical purposes that involve fast real-time interactions with the environment, these systems still cannot match the performance and efficiency of their biological counterparts. One possible reason lies in the differences between the principles of computation used by nervous systems and those used by conventional time-multiplexed computing systems. In this talk I will present neuromorphic electronic circuits that directly emulate the physics of computation used in animal brains to build neural processing systems which use spike-based representations and brain-inspired adaptation and learning mechanisms. I will show how large-scale multi-core architectures can be built by combining these circuits with asynchronous digital logic, and I will present examples of chips that are ideally suited for real-world sensory-processing edge-computing applications.
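The spike-based representation mentioned above can be illustrated with a leaky integrate-and-fire (LIF) neuron, whose dynamics the mixed-signal circuits discussed in the talk realize directly in analog hardware rather than simulating step by step. The sketch below is a software analogue only; all parameter values (tau, v_th, r_m, the input current) are assumptions chosen for illustration.

```python
# Hedged sketch of a leaky integrate-and-fire (LIF) neuron driven by an
# input current, integrated with a simple Euler step.
import numpy as np


def lif_neuron(i_in, dt=1e-4, tau=20e-3, r_m=1e7, v_th=0.5, v_reset=0.0):
    """Simulate a LIF neuron; returns the membrane trace and spike times."""
    v = v_reset
    v_trace, spikes = [], []
    for t, i in enumerate(i_in):
        # Leaky integration of the input current.
        v += dt / tau * (-v + r_m * i)
        if v >= v_th:               # threshold crossing -> emit a spike
            spikes.append(t * dt)
            v = v_reset             # reset the membrane after the spike
        v_trace.append(v)
    return np.array(v_trace), spikes


if __name__ == "__main__":
    current = np.full(2000, 80e-9)          # 200 ms of constant 80 nA input
    v, spike_times = lif_neuron(current)
    print(f"{len(spike_times)} spikes, first at {spike_times[0] * 1e3:.1f} ms")
```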

BIOGRAPHY: Giacomo Indiveri is a dual professor at the University of Zurich and ETH Zurich, and the director of the Institute of Neuroinformatics, Zurich, Switzerland. He obtained an M.Sc. degree in electrical engineering in 1992 and a Ph.D. degree in computer science from the University of Genoa, Italy, in 2004. An engineer by training, Indiveri also has expertise in neuroscience, computer science, and machine learning. He has been combining these disciplines by studying natural and artificial intelligence in neural processing systems and in neuromorphic cognitive agents. His latest research interests lie in the study of spike-based learning mechanisms and recurrent networks of biologically plausible neurons, and in their integration in real-time closed-loop sensory-motor systems designed using analog/digital circuits and emerging memory technologies. Indiveri is a Senior Member of the IEEE and a recipient of the 2021 IEEE Biomedical Circuits and Systems Best Paper Award. He is also an ERC fellow and the recipient of three European Research Council grants.

Title: Integrated Memristor Networks for Higher-complexity Neuromorphic Computing

Prof. Yuchao Yang

Peking University

ABSTRACT: As Moore’s law slows down and memory-intensive tasks become prevalent, digital computing is increasingly capacity- and power-limited. In order to meet the requirements for increased computing capacity and efficiency in the post-Moore era, emerging computing architectures, such as in-memory computing and neuromorphic computing architectures based on memristors, have been extensively pursued and have become an important candidate for new-generation non-von Neumann computers. Since the theoretical memristor concept was connected with resistive switching devices in 2008, tremendous progress has been made in their applications in memory and computing systems. Here, we report an optoelectronic synapse that has controllable temporal dynamics under electrical and optical stimuli. Tight coupling between ferroelectric and optoelectronic processes in the synapse can be used to realize heterosynaptic plasticity, with relaxation timescales that are tunable via light intensity or back-gate voltage. We use the synapses to create a multimode reservoir computing system with adjustable nonlinear transformation and multisensory fusion, which is demonstrated using a multimode handwritten digit recognition task and a QR code recognition task. We also realize a multiscale reservoir computing system via the tunable relaxation timescale, which is tested using a temporal signal prediction task.
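The reservoir-computing scheme can be sketched in software with a bank of leaky nonlinear nodes whose relaxation timescale (the `leak` parameter below) is tunable, followed by a linear readout that is the only trained part. In the work described above, the nodes are physical optoelectronic synapses whose timescale is set by light intensity or back-gate voltage; the network sizes, parameters, and toy prediction task below are purely illustrative assumptions.

```python
# Hedged sketch of a leaky echo-state-style reservoir with a tunable
# relaxation timescale and a trained linear readout.
import numpy as np

rng = np.random.default_rng(0)


def run_reservoir(u, n_nodes=100, leak=0.3, rho=0.9):
    """Drive a leaky reservoir with a 1-D input sequence u; return states."""
    w_in = rng.uniform(-1, 1, n_nodes)
    w = rng.normal(0, 1, (n_nodes, n_nodes))
    w *= rho / np.max(np.abs(np.linalg.eigvals(w)))   # scale spectral radius
    x = np.zeros(n_nodes)
    states = []
    for u_t in u:
        # `leak` plays the role of the device relaxation timescale.
        x = (1 - leak) * x + leak * np.tanh(w @ x + w_in * u_t)
        states.append(x.copy())
    return np.array(states)


if __name__ == "__main__":
    # Toy temporal-prediction task: predict u(t+1) from the reservoir state.
    t = np.arange(600) * 0.05
    u = np.sin(t) * np.cos(0.3 * t)
    states = run_reservoir(u[:-1])
    w_out, *_ = np.linalg.lstsq(states, u[1:], rcond=None)  # linear readout
    pred = states @ w_out
    print("prediction MSE:", np.mean((pred - u[1:]) ** 2))
```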

BIOGRAPHY: Yuchao Yang is a Boya Distinguished Professor at the School of Integrated Circuits, Peking University. He serves as Deputy Dean of the School of Electronic and Computer Engineering and Director of the Center for Brain Inspired Chips. His research interests include memristors, neuromorphic computing, and in-memory computing. He has published over 130 papers in high-profile journals and conferences such as Nature Electronics, Nature Reviews Materials, Nature Communications, Nature Nanotechnology, Science Advances, Advanced Materials, Nano Letters, IEDM, etc., as well as 5 book chapters. He has been invited to give more than 40 keynote/invited talks at international conferences and serves as TPC chair or member for 9 international conferences. Yuchao Yang serves as an Associate Editor for 3 journals, including Microelectronic Engineering, APL Machine Learning and Nano Select, and as an editorial board member of National Science Review, Chip, Scientific Reports and Science China Information Sciences. He has been invited to guest-edit 5 special issues and to write 12 News & Views pieces, review articles, etc. He is a recipient of the National Outstanding Youth Science Fund, the Qiu Shi Outstanding Young Scholar Award, the Wiley Young Researcher Award, MIT Technology Review Innovators Under 35 in China, and the EXPLORER PRIZE. He was recognized as a Highly Cited Chinese Researcher by Elsevier in 2020 and 2021.