Brought by Zhejiang Lab, Science/AAAS, and Science Robotics
One-day event on November 16, 2021
Hybrid forum with free registration – webcast and on-site
On-site venue at Zhejiang Lab, Hangzhou, China
Seats limited to 300; registration deadline: November 10, 2021.
With the development of a Cyber-Physical-Social integrated digital society, we need more powerful, smart, ubiquitous, and transparent computing infrastructure. Current computing technologies face various limitations in supporting this new era. In this keynote, I will propose a new concept: intelligent computing. Different from supercomputing and cloud computing, intelligent computing has its own theoretical basis, system architecture, and technical capabilities. Intelligent computing comprises six key conceptual components: a data and knowledge base, computing resources, an algorithm library, terminal resources, a user interface, and a scheduling engine. I will also present Zhejiang Lab’s intelligent computing research and development plan. Most importantly, a new intelligent computing scientific facility, the “Intelligent Computing Digital Reactor”, will be introduced in detail.
Professor Shiqiang Zhu is a distinguished PhD advisor and a State Council Special Allowance Expert. He graduated from Zhejiang University with a PhD in Mechatronics Control. He currently leads Zhejiang Lab as its President, and serves as Vice President and Dean of the Robotics Institute at Zhejiang University. He has been recognized as a distinguished expert by the Zhejiang High-Level Talent Support Program. In addition, he is Vice Chairman of the China Artificial Intelligence Industry Alliance, President of the Zhejiang Robot Association, Deputy Director of the Zhejiang Manufacturing Strategy Advisory Committee, and a member of the general expert panel for a national key research and development program on intelligent robotics. Over many years of exploration in the field of intelligent science and technology, he has carried out in-depth research in areas such as intelligent home service robots, exoskeleton robots, machine vision, high-performance multi-axis motion controllers, and ocean electronics engineering and intelligent systems, and has achieved a number of significant outcomes.
Graph computing has been widely used to solve complex association problems in the real world. Owing to its extraordinarily random memory access patterns, graph computing faces growing challenges in meeting the demands of high throughput, high scalability, and application diversity. In this talk, I will present some technical advances toward high-performance, scalable graph computing. The talk will also look ahead to an ambitious project to build a general-purpose graph computer supporting not only conventional graph traversal but also graph mining, graph learning, and beyond.
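To make the abstract's point about memory access concrete, here is a minimal breadth-first traversal over a graph in compressed sparse row (CSR) form (an illustrative sketch, not a system from the talk): the neighbor lists of successive frontier vertices sit at unpredictable offsets, so reads of the index and distance arrays are effectively random, which is exactly what makes graph workloads hard for conventional memory hierarchies.

```python
# Minimal BFS over a CSR graph, illustrating irregular memory access:
# neighbors of each frontier vertex are scattered through `indices`.
from collections import deque

def bfs(indptr, indices, source):
    """Return BFS distance from `source` to every vertex (-1 if unreachable)."""
    n = len(indptr) - 1
    dist = [-1] * n
    dist[source] = 0
    frontier = deque([source])
    while frontier:
        u = frontier.popleft()
        # Random access: u's neighbor list starts at an arbitrary offset.
        for v in indices[indptr[u]:indptr[u + 1]]:
            if dist[v] == -1:
                dist[v] = dist[u] + 1
                frontier.append(v)
    return dist

# Tiny example: path 0-1-2 plus an isolated vertex 3.
indptr = [0, 1, 3, 4, 4]
indices = [1, 0, 2, 1]
print(bfs(indptr, indices, 0))  # [0, 1, 2, -1]
```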
Hai Jin is a Chair Professor of computer science and engineering at Huazhong University of Science and Technology (HUST) in China, and the Director of the HUST-ZJ Lab Joint Research Center for Graph Computing. Jin received his PhD in computer engineering from HUST in 1994. In 1996, he was awarded a German Academic Exchange Service fellowship to visit the Technical University of Chemnitz in Germany. Jin worked at The University of Hong Kong between 1998 and 2000, and as a visiting scholar at the University of Southern California between 1999 and 2000. He was awarded the Excellent Youth Award by the National Natural Science Foundation of China in 2001. Jin is a Fellow of IEEE, a Fellow of CCF, and a life member of the ACM. He has co-authored more than 20 books and published over 900 research papers. His research interests include computer architecture, parallel and distributed computing, big data processing, data storage, and system security.
Shiqiang Zhu, Zhejiang Lab
Hai Jin, HUST-ZJ Lab Joint Research Center for Graph Computing
Huajin Tang, Zhejiang University
Wei D. Lu, University of Michigan
Xin Liu, Zhejiang Lab
Significant historic events appear to be occurring more frequently as time goes on. Interestingly, it seems that successive intervals between these events are shrinking exponentially by a factor of four. This process looks like it should converge around the year 2040. The last of these major events can be said to have occurred around 1990, when the Cold War ended, the World Wide Web was born, mobile phones became mainstream, the first self-driving cars appeared, and modern AI with very deep neural networks came into being. In this talk, I will focus on the latter, with emphasis on Meta-learning since 1987 and what I call “the miraculous year of deep learning”, which saw the birth of — among other things — (1) very deep learning through unsupervised pre-training, (2) the vanishing gradient analysis that led to the LSTMs running on your smartphones and to the really deep Highway Nets/ResNets, (3) neural fast weight programmers that are formally equivalent to what’s now called linear Transformers, (4) artificial curiosity for agents that invent their own problems (familiar to many nowadays in the form of GANs), (5) the learning of sequential neural attention, (6) the distilling of teacher nets into student nets, and (7) reinforcement learning and planning with recurrent world models. I will discuss how in the 2000s much of this began to impact billions of human lives, how the timeline predicts the next big event to be around 2030, what the final decade until convergence might hold, and what will happen in the subsequent 40 billion years. Take all of this with a grain of salt though.
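The timeline's arithmetic can be checked with a geometric series (the 150-year interval below is an illustrative assumption, not a figure from the talk): if each inter-event gap is a quarter of the previous one, the remaining gaps sum to I/4 + I/16 + ... = I/3, so the dates converge to a finite year.

```python
# Back-of-the-envelope check: intervals shrink by a factor of four,
# so the event dates converge to last_event + prev_interval / 3.
def convergence_year(last_event, prev_interval):
    """Year the event dates converge to, given the interval ending at last_event."""
    return last_event + prev_interval / 3  # I/4 * 1/(1 - 1/4) = I/3

def next_event(last_event, prev_interval):
    """Date of the very next event in the shrinking-interval scheme."""
    return last_event + prev_interval / 4

# Assuming (for illustration) the interval ending in 1990 spanned ~150 years:
print(convergence_year(1990, 150))  # 2040.0
print(next_event(1990, 150))        # 2027.5 -- i.e. "around 2030"
```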
Since age 15 or so, the main goal of Professor Jürgen Schmidhuber has been to build a self-improving Artificial Intelligence (AI) smarter than himself, then retire. The media have often called him the “father of modern AI”. His lab’s Deep Learning Neural Networks, based on ideas published in the “Annus Mirabilis” 1990-1991, have revolutionized machine learning. By the mid-2010s, they were on 3 billion devices and used billions of times per day by users of the world’s most valuable public companies, e.g., for greatly improved (CTC-LSTM-based) speech recognition on all Android phones, greatly improved machine translation through Google Translate and Facebook (over 4 billion LSTM-based translations per day), Apple’s Siri and QuickType on all iPhones, the answers of Amazon’s Alexa, and numerous other applications. In 2011, his team was the first to win official computer vision contests through deep neural nets, with superhuman performance. In 2012, they had the first deep neural network to win a medical imaging contest (on cancer detection). All of this attracted enormous interest from industry. His research group also established the fields of mathematically rigorous universal AI and recursive self-improvement in meta-learning machines that learn to learn (since 1987). In 1990, he introduced unsupervised adversarial neural networks that fight each other in a minimax game to achieve artificial curiosity (GANs are a special case). In 1991, he introduced very deep learning through unsupervised pre-training, and neural fast weight programmers formally equivalent to what’s now called linear Transformers. His formal theory of creativity & curiosity & fun explains art, science, music, and humor. He also generalized algorithmic information theory and the many-worlds theory of physics, and introduced the concept of Low-Complexity Art, the information age’s extreme form of minimal art.
He is the recipient of numerous awards, the author of over 350 peer-reviewed papers, and Chief Scientist of the company NNAISENSE, which aims to build the first practical general-purpose AI. He is a frequent keynote speaker and advises various governments on AI strategy.
Artificial intelligence has always led to computationally demanding algorithms when scaled to relevant problems. Today’s neurally inspired wave is no exception, and to enable its application at scale in sensing, situation awareness, and robotics, we need to redefine computing from the bottom up. Neuromorphic computing is one approach to developing a novel computing paradigm for AI. It establishes a new type of hardware, algorithms, and software, drawing inspiration from biological neural systems – the most adaptive autonomous intelligent systems we know of so far. In this talk, I will introduce neuromorphic computing technology and Intel’s contributions to this field. I will give an overview of recent results, obtained by the Neuromorphic Computing Lab at Intel Labs as well as by dozens of researchers in Intel’s Neuromorphic Research Community (INRC), showing orders-of-magnitude advantages in computing time and energy in some use cases.
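The basic unit that neuromorphic chips execute is the spiking neuron. A minimal leaky integrate-and-fire model conveys the idea (an illustrative sketch, not the programming model of Loihi or any other chip): the membrane potential leaks, integrates input, and emits a binary spike only when it crosses a threshold, so computation is event-driven and sparse.

```python
# Minimal leaky integrate-and-fire (LIF) neuron: leak, integrate, fire, reset.
def lif_spikes(inputs, leak=0.9, threshold=1.0):
    """Return a 0/1 spike train for a sequence of input currents."""
    v, spikes = 0.0, []
    for i in inputs:
        v = leak * v + i           # potential decays, then integrates input
        if v >= threshold:
            spikes.append(1)       # emit a spike ...
            v = 0.0                # ... and reset the potential
        else:
            spikes.append(0)
    return spikes

print(lif_spikes([0.5, 0.5, 0.5, 0.0, 0.9, 0.9]))  # [0, 0, 1, 0, 0, 1]
```

No spike is emitted until enough input has accumulated; between spikes the neuron is silent, which is where the energy advantage of event-driven hardware comes from.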
Dr. Yulia Sandamirskaya leads the Applications Research team of the Neuromorphic Computing Lab at Intel. Her team in Munich develops spiking neural network-based algorithms for neuromorphic hardware to demonstrate the potential of neuromorphic computing in real-world applications. She has 15 years of research experience in the fields of neural dynamics, embodied cognition, and autonomous robotics. She led the research group “Neuromorphic Cognitive Robots” at the Institute of Neuroinformatics of the University of Zurich and ETH Zurich, Switzerland, and the “Autonomous Learning” group at the Institute for Neural Computation at the Ruhr-University Bochum, Germany. She chaired the European Society for Artificial Cognitive Systems (EUCog) and coordinated the networking project NEUROTECH, shaping and supporting the neuromorphic computing technology community in Europe.
This talk discusses the main limitations of the deep learning approach to artificial intelligence and proposes to overcome them by means of an evolutionary developmental approach. We first provide a brief introduction to the evolution and development of the human brain and nervous system. Then, computational models of neural and morphological evolution and development are presented. Our experimental results reveal that energy minimization is the main principle behind the organization of nervous systems and that there is close coupling between body and brain in evolution and development. Finally, we describe computational models of neural plasticity embedded in reservoir computing and discuss their influence on the learning performance of echo state networks and spiking neural networks.
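For readers unfamiliar with the reservoir computing setup mentioned above, here is a minimal echo state network (an illustrative sketch with made-up sizes and task, not a model from the talk): a fixed random recurrent reservoir whose spectral radius is scaled below 1, with only the linear readout trained, here by least squares.

```python
# Minimal echo state network: fixed random reservoir, trained linear readout.
import numpy as np

rng = np.random.default_rng(0)
n_res, n_in = 50, 1
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1 (echo state property)

def run_reservoir(u):
    """Collect reservoir states for an input sequence u of shape [T, n_in]."""
    x, states = np.zeros(n_res), []
    for u_t in u:
        x = np.tanh(W @ x + W_in @ u_t)
        states.append(x)
    return np.array(states)

# Toy task: one-step-ahead prediction of a sine wave.
u = np.sin(np.linspace(0, 8 * np.pi, 400))[:, None]
X, y = run_reservoir(u)[:-1], u[1:]
W_out = np.linalg.lstsq(X, y, rcond=None)[0]   # only the readout is trained
mse = float(np.mean((X @ W_out - y) ** 2))
print(mse)  # small training-fit error
```

Because the recurrent weights stay fixed, plasticity rules of the kind the talk studies can be added to the reservoir without touching the simple linear training of the readout.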
Dr. Yaochu Jin is an Alexander von Humboldt Professor for Artificial Intelligence, endowed by the German Federal Ministry of Education and Research, at the Faculty of Technology, Bielefeld University, Germany. He is also a Distinguished Chair in Computational Intelligence, Department of Computer Science, University of Surrey, Guildford, U.K. He was a “Finland Distinguished Professor” at the University of Jyväskylä, Finland, a “Changjiang Distinguished Visiting Professor” at Northeastern University, China, and a “Distinguished Visiting Scholar” at the University of Technology Sydney, Australia. His main research interests include evolutionary optimization, evolutionary learning, trustworthy machine learning, and evolutionary developmental systems. Prof. Jin is presently the Editor-in-Chief of the IEEE Transactions on Cognitive and Developmental Systems and the Editor-in-Chief of Complex & Intelligent Systems. He is a Member of Academia Europaea and a Fellow of IEEE.
The advances of Artificial Intelligence (AI) are increasingly limited by the capabilities of the hardware systems that run AI algorithms. Historically, the performance of processor chips has improved naturally over time through technical advances commonly known as Moore’s Law. However, the throughput and power consumption of modern workloads are now limited by memory access and data routing rather than by logic operations. As a result, addressing the AI hardware challenge requires fundamentally rethinking the computing architecture. In this talk, I will discuss memory-centric computing architectures that use in-memory computing and fine-grained parallelism to drastically improve throughput and energy efficiency when running AI algorithms. Beyond deep neural networks, the internal dynamics of memristive devices can also be used to natively process temporal data, e.g., performing time-series analysis, and can potentially allow artificial neural networks to be tightly integrated with biological neural networks for exciting new applications.
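The in-memory computing idea can be sketched numerically (a toy model, not a device simulator from the talk): weights are stored as conductances G in a crossbar, input voltages V drive the rows, and by Ohm's and Kirchhoff's laws each column current is I_j = sum_i V_i * G_ij, so an entire matrix-vector multiply happens in one analog step exactly where the data is stored.

```python
# Idealized memristive crossbar: matrix-vector multiply via Ohm's +
# Kirchhoff's laws, with no data movement between memory and logic.
import numpy as np

def crossbar_mvm(G, V):
    """Column currents of a crossbar with conductance matrix G and row voltages V."""
    return V @ G  # I_j = sum_i V_i * G[i, j]

G = np.array([[1.0, 2.0],
              [3.0, 4.0]])  # conductances (the stored weights)
V = np.array([0.5, 1.0])    # input voltages applied to the rows
print(crossbar_mvm(G, V))   # [3.5 5.]
```

A real array adds wire resistance, device nonlinearity, and ADC quantization on the column currents; the sketch only captures the ideal dot-product-in-memory behavior.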
Wei D. Lu is a Professor in the Electrical Engineering and Computer Science department at the University of Michigan. He received his B.S. in physics from Tsinghua University, Beijing, China, in 1996, and his Ph.D. in physics from Rice University, Houston, TX, in 2003. From 2003 to 2005, he was a postdoctoral research fellow at Harvard University, Cambridge, MA. He joined the faculty of the University of Michigan in 2005. His research interests include resistive random-access memory (RRAM), memristor-based logic circuits, neuromorphic computing systems, aggressively scaled transistor devices, and electrical transport in low-dimensional systems. To date, Prof. Lu has published over 200 journal and conference articles, with 30,000 citations and an h-index of 79. He is a recipient of the NSF CAREER award, an IEEE Fellow, and a co-founder of Crossbar Inc. and MemryX Inc.
Action-potential-like spikes of electrical potential are commonly attributed to neural cells. In experimental laboratory and numerical modelling studies, we show that neuron-like electrical activity is also observed in liquid marbles with Belousov-Zhabotinsky medium, in the slime mould Physarum polycephalum, and in various species of fungi. We demonstrate that sensing and computation can be implemented with these spikes, and we analyse potential architectures of neural networks made of liquid marbles, fungi, and slime mould.
Andrew Adamatzky is Professor of Unconventional Computing and Director of the Unconventional Computing Laboratory, Department of Computer Science, University of the West of England, Bristol, UK. He does research in molecular computing, fungal computing, reaction-diffusion computing, collision-based computing, cellular automata, slime mould computing, massive parallel computation, applied mathematics, complexity, nature-inspired optimisation, collective intelligence and robotics, bionics, computational psychology, non-linear science, novel hardware, and future and emergent computation. He has authored seven books, most notably ‘Reaction-Diffusion Computing’, ‘Dynamics of Crowd-Minds’, and ‘Physarum Machines’, and edited twenty-two books in computing, most notably ‘Collision-Based Computing’, ‘Game of Life Cellular Automata’, and ‘Memristor Networks’; he also produced a series of influential artworks published in the atlas ‘Silence of Slime Mould’. He is founding Editor-in-Chief of the ‘Journal of Cellular Automata’ and the ‘International Journal of Unconventional Computing’, and Editor-in-Chief of the ‘International Journal of Parallel, Emergent and Distributed Systems’ and ‘Parallel Processing Letters’.
Supercomputers are designed to solve large-scale computing tasks, and have created huge value for scientific computing applications such as climate change, weather forecasting, and physics simulations. In recent years, intelligent computing has become one of the most popular application fields, and it shows rapidly increasing demand for computing power, which will lead to a convergence of supercomputing and intelligent computing. The intelligent supercomputer will be one of the solutions. In this talk, we will discuss the target application scenarios and the challenges of designing intelligent supercomputers, and will present the related research that we are working on.
Professor Guangwen Yang is Director of the National Supercomputing Center in Wuxi, and head of the Intelligent Computing Center of Zhejiang Lab. His research interests include high-performance computing, parallel algorithms for numerical applications, and high-performance machine learning methods. He has published over 200 high-quality articles, and has been awarded the ACM Gordon Bell Prize (2016, 2017) and the IEEE FPL Most Significant Paper Award in 25 Years, among others.
This talk will demonstrate the long, ongoing, and fruitful journey of exploiting the potential of deep learning techniques in software engineering. It will show how to model code, and how such models can be leveraged to support software engineers in tasks that require proficient programming knowledge, such as code prediction and completion, code clone detection, and code commenting and summarization. This exploratory work shows that code embodies learnable knowledge, more precisely, learnable tacit knowledge. Although such knowledge is difficult to transfer among human beings, it can be transferred among automated programming tasks. The talk will conclude with a vision for future research in this area.
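The simplest instance of "code as a learnable sequence" is a statistical language model over code tokens. The toy bigram completer below (an illustration of the general idea, not a model from the talk) suggests the most likely next token given the current one, the most basic form of code prediction and completion.

```python
# Toy bigram model over code tokens: learn next-token frequencies from a
# small corpus, then suggest the most frequent continuation.
from collections import Counter, defaultdict

def train_bigram(token_streams):
    """Count how often each token is followed by each other token."""
    counts = defaultdict(Counter)
    for tokens in token_streams:
        for a, b in zip(tokens, tokens[1:]):
            counts[a][b] += 1
    return counts

def complete(counts, token):
    """Suggest the most frequent continuation of `token` seen in training."""
    return counts[token].most_common(1)[0][0] if counts[token] else None

corpus = [
    ["for", "i", "in", "range", "(", "n", ")", ":"],
    ["for", "x", "in", "items", ":"],
    ["if", "x", "in", "items", ":"],
]
model = train_bigram(corpus)
print(complete(model, "in"))  # "items" (seen twice, vs. "range" once)
```

Modern code models replace the bigram table with deep networks over much richer representations of code structure, but the underlying premise is the same: programming knowledge is statistically learnable from code.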
Zhi Jin is a Full Professor of Software Engineering at Peking University. She is the Vice Director of the Key Laboratory of High Confidence Software Technologies, Peking University, Ministry of Education of China. She is/was the principal investigator of more than 10 national competitive grants, including serving as chief scientist of a national basic research (973) project of the Ministry of Science and Technology of China. She frequently serves as a PC member for well-known conferences such as ICSE, FSE, and RE. She is Executive Editor-in-Chief of the Chinese Journal of Software (2013- ), and Associate Editor of IEEE TSE (2018- ) and IEEE TR (2019- ). She also serves on the editorial boards of JCST (2010- ), REJ (2014- ), and ESEM (2020- ). She is a standing board member of the China Computer Federation (CCF), chair of the CCF Technical Committee of System Software, and an elected CCF Fellow.
Materials are a fundamental driving force of human society. Traditionally, materials design and optimization have been driven by scientific intuition followed by experimental study. Increasing computational power and efficient computational approaches have made computation a rapid and efficient method for the understanding, prediction, and design of materials, termed computational materials science (CMS). Within the past decade, artificial intelligence (AI) has achieved breakthroughs in many fields; its application in materials science has been described as the “fourth paradigm”, with the first three being experiment, theory, and simulation. The combination of AI and CMS provides plentiful opportunities to enhance the capability of current computational approaches and to shorten the timeline of materials research and development. In this talk, I will give a brief introduction to hybrid applications of AI and CMS for materials design.
Jincang Zhang is a Distinguished Professor in condensed matter and materials physics, and Executive Dean of the Materials Genome Institute, Shanghai University. He was awarded a Ph.D. in Materials Science and Engineering from Toyama University, Japan. He currently serves as Director of the SHU-ZJ Lab Joint Research Center for Computational Materials Science. He has taken a special interest in materials design combining machine learning with experimental technology. He has developed high-throughput experimental methods, such as PLD, OMBE, and XRD, for functional thin-film and bulk materials, including magnetic, ferroelectric, multiferroic, spintronic, and superconducting materials and their device applications. So far, he has more than 500 publications and more than 20 patents.
Science magazine is a leading outlet for scientific news, commentary, and cutting-edge research. Through its print and online incarnations, Science reaches an estimated worldwide readership of more than one million. Science’s authorship is global too, and its articles consistently rank among the world's most cited research. This talk will detail the editorial process at Science Journals, in particular Science Robotics. Science Robotics covers new developments in robotics and related fields, with a dual focus on the science of robotics as well as introducing researchers more broadly to how robots can be used to accelerate scientific study.
Michael Lee is the Editor of Science Robotics. Prior to joining Science Robotics, Michael was a co-founding editor of Nature Electronics. He started his editorial career at Nature Communications as an associate editor handling applied physics and engineering research. Before becoming an editor Michael conducted research at the Paul Scherrer Institute (Switzerland) and University of Oxford (UK).