Presented by Zhejiang Lab, Science/AAAS, and Science Robotics
One-day event on November 16, 2021
Hybrid forum, free with registration – available via webcast and on-site
On-site venue at Zhejiang Lab, Hangzhou, China
On-site seating limited to 300; registration deadline: November 10, 2021.
Keynote Speakers

Shiqiang Zhu, Zhejiang Lab
Hai Jin, HUST-ZJ Lab Joint Research Center for Graph Computing

Invited Speakers

Huajin Tang, Zhejiang University
Wei D. Lu, University of Michigan
Xin Liu, Zhejiang Lab
As the digital society increasingly integrates cyber, physical, and social spaces, we need more powerful, smarter, ubiquitous, and transparent computing infrastructure. Current computing technologies face various limitations in supporting this new era. In this keynote, I will propose a new concept: intelligent computing. Unlike supercomputing and cloud computing, intelligent computing has its own theoretical basis, system architecture, and technical capabilities. Intelligent computing comprises six key conceptual components: a data and knowledge base, computing resources, an algorithm library, terminal resources, a user interface, and a scheduling engine. I will also present Zhejiang Lab's intelligent computing research and development plan. Most importantly, I will introduce in detail a new intelligent computing scientific facility, the "Intelligent Computing Digital Reactor."
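As a rough illustration of how the six components named above might fit together, here is a minimal Python sketch in which a toy scheduling engine places tasks drawn from an algorithm library onto computing resources. All class and field names are hypothetical, chosen only for this sketch; they are not Zhejiang Lab's actual design.

```python
# Illustrative sketch of a few of the six conceptual components:
# computing resources, an algorithm library, and a scheduling engine.
# Everything here is a hypothetical toy, not a real system design.
from dataclasses import dataclass

@dataclass
class ComputeResource:
    name: str
    free_cores: int

@dataclass
class Task:
    name: str
    algorithm: str      # looked up in the algorithm library
    cores_needed: int

class SchedulingEngine:
    """Toy scheduler: place each task on the first resource with enough cores."""
    def __init__(self, resources, algorithm_library):
        self.resources = resources
        self.algorithm_library = algorithm_library

    def schedule(self, task):
        if task.algorithm not in self.algorithm_library:
            raise KeyError(f"unknown algorithm: {task.algorithm}")
        for res in self.resources:
            if res.free_cores >= task.cores_needed:
                res.free_cores -= task.cores_needed
                return res.name
        return None  # no capacity anywhere

resources = [ComputeResource("edge-node", 4), ComputeResource("cluster", 128)]
engine = SchedulingEngine(resources, algorithm_library={"graph-bfs", "resnet-50"})
print(engine.schedule(Task("demo", "resnet-50", cores_needed=16)))  # -> "cluster"
```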
Graph computing is widely used to solve complex association problems in the real world. Owing to its extraordinarily random memory-access patterns, however, graph computing faces growing challenges in meeting the demands of high throughput, high scalability, and application diversity. In this talk, I will present technical advances toward high-performance, scalable graph computing. I will also look ahead to an ambitious project to build a general-purpose graph computer that supports not only conventional graph traversal but also graph mining, graph learning, and beyond.
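As a small illustration of why graph workloads stress the memory system, here is a minimal breadth-first search over an adjacency list: the neighbor lookups follow the graph's structure rather than the memory layout, which is the source of the random access patterns mentioned above. The toy graph is arbitrary.

```python
# Minimal BFS sketch: traversal order is dictated by graph structure,
# so memory accesses into the adjacency data are effectively random.
from collections import deque

def bfs_levels(adj, source):
    """Return the BFS level of every vertex reachable from `source`.
    `adj` maps a vertex to the list of its neighbors."""
    level = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:          # irregular, data-dependent access
            if v not in level:
                level[v] = level[u] + 1
                queue.append(v)
    return level

adj = {0: [1, 2], 1: [3], 2: [3], 3: [0]}
print(bfs_levels(adj, 0))  # {0: 0, 1: 1, 2: 1, 3: 2}
```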
Significant historic events appear to be occurring more frequently as time goes on; interestingly, the intervals between successive events seem to be shrinking exponentially by a factor of four, and the process looks like it should converge around the year 2040. The last of these major events can be said to have occurred around 1990, when the Cold War ended, the WWW was born, mobile phones became mainstream, the first self-driving cars appeared, and modern AI with very deep neural networks came into being. In this talk, I will focus on the latter, with emphasis on meta-learning since 1987 and on what I call "the miraculous year of deep learning," which saw the birth of, among other things: (1) very deep learning through unsupervised pre-training; (2) the vanishing gradient analysis that led to the LSTMs running on your smartphones and to the really deep Highway Nets/ResNets; (3) neural fast weight programmers that are formally equivalent to what is now called linear Transformers; (4) artificial curiosity for agents that invent their own problems, familiar to many nowadays in the form of GANs; (5) the learning of sequential neural attention; (6) the distilling of teacher nets into student nets; and (7) reinforcement learning and planning with recurrent world models. I will discuss how, in the 2000s, much of this began to impact billions of human lives; how the timeline predicts the next big event to be around 2030; what the final decade until convergence might hold; and what will happen in the subsequent 40 billion years. Take all of this with a grain of salt, though.
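For readers curious about item (3), the following is a minimal numpy sketch of the fast-weight/linear-attention correspondence as commonly stated: a sequence of key/value pairs additively programs a fast weight matrix via outer products, and reading it out with a query reproduces unnormalized linear attention. Dimensions and data are arbitrary toy choices, not from the talk itself.

```python
# Fast-weight programming: W accumulates outer products v k^T, so
# W @ q equals the "attention" sum over v_i * (k_i . q) exactly.
import numpy as np

rng = np.random.default_rng(0)
d = 4                                # key/value dimensionality
W = np.zeros((d, d))                 # the fast weights

keys = rng.normal(size=(5, d))       # one key/value pair per time step
values = rng.normal(size=(5, d))

for k, v in zip(keys, values):
    W += np.outer(v, k)              # fast-weight programming step

q = rng.normal(size=d)               # a query at read-out time
out_fast = W @ q

# Same result computed the attention way: sum_i v_i * (k_i . q)
out_attn = sum(v * (k @ q) for k, v in zip(keys, values))
print(np.allclose(out_fast, out_attn))  # True
```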
Artificial intelligence has always led to computationally demanding algorithms when scaled to relevant problems. Today's neuronally inspired wave is no exception, and to enable its application at scale in sensing, situation awareness, and robotics, we need to redefine computing from the bottom up. Neuromorphic computing is one approach to developing a novel computing paradigm for AI. It establishes a new type of hardware, algorithms, and software, drawing inspiration from biological neural systems, the most adaptive autonomous intelligent systems we know so far. In this talk, I will introduce neuromorphic computing technology and Intel's contribution to the field. I will review recent results obtained by the Neuromorphic Computing Lab at Intel Labs, as well as by dozens of researchers in Intel's Neuromorphic Research Community (INRC), showing orders-of-magnitude advantages in computing time and energy in some use cases.
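As a generic illustration of the spiking neurons that neuromorphic hardware implements, here is a textbook discrete-time leaky integrate-and-fire model. This is a minimal sketch, not Intel's Loihi neuron model, and all constants are illustrative.

```python
# Leaky integrate-and-fire (LIF) neuron: the membrane potential leaks,
# integrates input current, and fires a spike when it crosses threshold.
def lif(input_current, leak=0.9, threshold=1.0):
    """Simulate one LIF neuron; return the time steps at which it spikes."""
    v, spikes = 0.0, []
    for t, i_t in enumerate(input_current):
        v = leak * v + i_t        # leaky integration of the input
        if v >= threshold:        # fire when potential crosses threshold
            spikes.append(t)
            v = 0.0               # reset after the spike
    return spikes

print(lif([0.3] * 20))  # spikes at roughly regular intervals: [3, 7, 11, 15, 19]
```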
This talk discusses the main limitations of the deep learning approach to artificial intelligence and proposes to overcome them by means of an evolutionary developmental approach. We first provide a brief introduction to the evolution and development of the human brain and nervous system. Then, computational models of neural and morphological evolution and development are presented. Our experimental results reveal that energy minimization is the main principle behind the organization of nervous systems and that there is a close coupling between body and brain in evolution and development. Finally, we describe computational models of neural plasticity embedded in reservoir computing and discuss their influence on the learning performance of echo state networks and spiking neural networks.
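For readers unfamiliar with the reservoir-computing setting mentioned above, the following sketches a tiny echo state network: a fixed random recurrent reservoir whose states are read out by a linear map trained with ridge regression. The sizes, scaling, and toy task (predicting a shifted sine wave) are arbitrary illustrative choices, not the speaker's actual models.

```python
# Minimal echo state network: only the linear readout W_out is trained.
import numpy as np

rng = np.random.default_rng(1)
n_res, washout = 100, 50

# Fixed random weights; rescale the recurrent matrix toward the echo-state regime.
W_in = rng.uniform(-0.5, 0.5, size=(n_res, 1))
W = rng.uniform(-0.5, 0.5, size=(n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))     # spectral radius ~ 0.9

t = np.arange(500)
u = np.sin(0.1 * t)          # input signal
y = np.sin(0.1 * (t + 5))    # target: the same signal shifted ahead

# Run the reservoir and collect its states.
x, states = np.zeros(n_res), []
for u_t in u:
    x = np.tanh(W_in[:, 0] * u_t + W @ x)
    states.append(x.copy())
X = np.array(states)[washout:]           # discard the initial transient
Y = y[washout:]

# Ridge-regression readout: solve (X^T X + lam I) W_out = X^T Y
lam = 1e-6
W_out = np.linalg.solve(X.T @ X + lam * np.eye(n_res), X.T @ Y)
print("train MSE:", np.mean((X @ W_out - Y) ** 2))
```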
The advances of Artificial Intelligence (AI) are increasingly limited by the capabilities of the hardware systems that run AI algorithms. Historically, the performance of processor chips improved steadily over time through technical advances commonly known as Moore's Law. However, the throughput and power consumption of modern workloads are now limited by memory access and data routing rather than by logic operations. As a result, addressing the AI hardware challenge requires fundamentally rethinking the computing architecture. In this talk, I will discuss memory-centric computing architectures that allow in-memory computing and fine-grained parallelism to drastically improve throughput and energy efficiency when running AI algorithms. Beyond deep neural networks, the internal dynamics of memristive devices can also be used to natively process temporal data, e.g., performing time-series analysis, and can potentially allow artificial neural networks to be tightly integrated with biological neural networks for exciting new applications.
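To make the in-memory computing idea concrete, here is a tiny numerical sketch of the crossbar view of matrix-vector multiplication: stored conductances act as the weight matrix, applied voltages as the input vector, and the currents summed on each output line are the product, computed in place by Ohm's and Kirchhoff's laws rather than by shuttling data to a separate processor. The values are illustrative; real memristive arrays add noise, nonlinearity, and quantization.

```python
# Idealized memristive crossbar: i = G v in a single analog step.
import numpy as np

G = np.array([[1.0, 0.5, 0.0],     # conductances programmed into the array
              [0.2, 0.8, 0.3]])    # (one row per output line)
v = np.array([0.1, 0.4, 0.2])      # input voltages on the word lines

i = G @ v                          # currents summed on each bit line
print(i)                           # the matrix-vector product, "computed in memory"
```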
Action-potential-like spikes of electrical potential are commonly attributed to neural cells. In experimental laboratory and numerical modelling studies, we show that neuron-like electrical activity is also observed in liquid marbles containing Belousov-Zhabotinsky medium, in the slime mould Physarum polycephalum, and in various species of fungi. We demonstrate that sensing and computation can be implemented with these spikes, and we analyse potential architectures of neural networks made of liquid marbles, fungi, and slime mould.
Supercomputers are designed to solve large-scale computing tasks and have created huge value in scientific computing applications such as climate change, weather forecasting, and physical simulation. In recent years, intelligent computing has become one of the most popular application fields and shows a rapidly increasing demand for computing power, which will lead to a convergence of supercomputing and intelligent computing. The intelligent supercomputer will be one of the solutions. In this talk, we will discuss the target application scenarios and the challenges of designing intelligent supercomputers, and we will present the related research we are working on.
This talk will survey the long, ongoing, and fruitful journey of exploiting the potential of deep learning techniques in software engineering. It will show how to model code, and how such models can be leveraged to help software engineers perform tasks that require proficient programming knowledge, such as code prediction and completion, code clone detection, and code comment generation and summarization. This exploratory work shows that code embodies learnable knowledge, more precisely, learnable tacit knowledge. Although such knowledge is difficult to transfer among human beings, it can be transferred among automated programming tasks. The talk will conclude with a vision for future research in this area.
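As a toy illustration of what "modeling code" can mean, here is a minimal bigram model over code tokens that suggests the most likely next token. Production systems use large neural models; the training snippet below is an arbitrary example and is not drawn from the speaker's work.

```python
# Bigram code model: count which token follows which, then complete.
from collections import Counter, defaultdict

corpus = "for i in range ( n ) : total += i".split()

# Count next-token frequencies for every token in the corpus.
bigrams = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    bigrams[cur][nxt] += 1

def complete(token):
    """Suggest the most frequent continuation seen after `token`."""
    counts = bigrams.get(token)
    return counts.most_common(1)[0][0] if counts else None

print(complete("range"))  # '(' -- learned from the corpus above
```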
Materials are a fundamental driving force of human society. Traditionally, materials design and optimization have been driven by scientific intuition followed by experimental study. Increasing computational power and efficient computational approaches have made computation a rapid and effective method for understanding, predicting, and designing materials, a field termed computational materials science (CMS). Within the past decade, artificial intelligence (AI) has achieved breakthroughs in many fields; its application in materials science has been described as the "fourth paradigm," the first three being experiments, theory, and simulation. The combination of AI and CMS provides plentiful opportunities to enhance the capability of current computational approaches and to shorten the timeline of materials research and development. In this talk, I will give a brief introduction to hybrid applications of AI and CMS for materials design.
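As a schematic illustration of the AI + CMS combination, the following sketch fits a cheap surrogate model to a handful of results from a stand-in "expensive simulation" and then uses the surrogate to screen candidates densely. The simulator, data, and fitting choices are entirely synthetic toys, not the speaker's methods.

```python
# Surrogate-assisted screening: few expensive runs, many cheap predictions.
import numpy as np

def expensive_simulation(x):
    """Stand-in for a first-principles calculation of some property."""
    return 2.0 * x - 0.5 * x**2

x_train = np.linspace(0.0, 3.0, 6)              # few simulated compositions
y_train = expensive_simulation(x_train)

coeffs = np.polyfit(x_train, y_train, deg=2)    # quadratic surrogate model

x_candidates = np.linspace(0.0, 3.0, 301)       # cheap dense screening
y_pred = np.polyval(coeffs, x_candidates)
best = x_candidates[np.argmax(y_pred)]
print(f"predicted optimum near x = {best:.2f}")  # analytic optimum is x = 2
```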
Science magazine is a leading outlet for scientific news, commentary, and cutting-edge research. Through its print and online incarnations, Science reaches an estimated worldwide readership of more than one million. Science's authorship is global, too, and its articles consistently rank among the world's most cited research. This talk will detail the editorial process at the Science family of journals, in particular Science Robotics. Science Robotics covers new developments in robotics and related fields, with a dual focus on the science of robotics and on introducing researchers more broadly to how robots can be used to accelerate scientific study.