Brought to you by Zhejiang Lab, Science/AAAS, and Science Robotics
Two-day event, October 19–20, 2022
Hybrid forum, free registration – webcast and on-site
On-site venue at Zhejiang Lab, Hangzhou, China
Keynote Speakers
Featured Speakers
In this speech, recent progress in several relevant areas is first reviewed. A system model of intelligent computing is then proposed, composed of task, algorithm, computing power, data, and knowledge. For any intelligent-computing scenario, the system model forms a closed loop, meaning that the system can run continuously and be further optimized as more data and knowledge accumulate. The speech will also report on Zhejiang Lab's practice in intelligent computing, including the data reactor and several scientific research projects.
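To make the closed loop concrete, the sketch below shows one illustrative way the five elements could be wired together in code. The class and method names are hypothetical and only meant to visualize how data and knowledge feed back into the next iteration; this is not Zhejiang Lab's actual system.

```python
from dataclasses import dataclass, field

@dataclass
class IntelligentComputingLoop:
    """Illustrative closed loop over task, algorithm, computing power, data, and knowledge."""
    task: str
    algorithm: callable          # consumes data, knowledge, and compute; returns a result
    compute_budget: int
    data: list = field(default_factory=list)
    knowledge: dict = field(default_factory=dict)

    def run_once(self, new_observations):
        # New data arrives from the task environment.
        self.data.extend(new_observations)
        # The algorithm turns data + knowledge + compute into a result for the task.
        result = self.algorithm(self.data, self.knowledge, self.compute_budget)
        # The result is stored as knowledge, so the next iteration starts better informed.
        self.knowledge[self.task] = result
        return result
```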
The goal of neuromorphic computing is to bridge the orders-of-magnitude gap in efficiency, adaptability, and speed between biological brains and today's computing technology. The past several years have seen significant progress in neuromorphic computing research, with chips like Intel's Loihi demonstrating, for the first time, compelling quantitative gains over a range of workloads, from sensory perception to data-efficient learning to combinatorial optimization. This talk surveys recent developments in this endeavor to re-think computing from transistors to software informed by biological principles. It outlines some of the remaining challenges and describes new tools for tackling them, such as Intel's Loihi 2 chip and the Lava software framework.
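As a rough illustration of the kind of neuron dynamics such chips implement in silicon, here is a minimal discrete-time leaky integrate-and-fire (LIF) update in plain Python. All parameters are illustrative, and this is a generic sketch rather than Loihi's hardware model or the Lava API.

```python
import numpy as np

def lif_step(v, spikes_in, weights, decay=0.9, threshold=1.0):
    """One discrete-time step of a leaky integrate-and-fire (LIF) layer.

    v          : membrane potentials of this layer's neurons
    spikes_in  : binary spike vector from the previous layer
    weights    : synaptic weight matrix (out_dim x in_dim)
    decay      : leak factor applied to the membrane potential each step
    threshold  : firing threshold; neurons that cross it spike and reset
    """
    v = decay * v + weights @ spikes_in           # leak, then integrate weighted input spikes
    spikes_out = (v >= threshold).astype(float)   # fire where the threshold is crossed
    v = np.where(spikes_out > 0, 0.0, v)          # reset the neurons that fired
    return v, spikes_out

# Toy usage: 4 input channels driving 3 LIF neurons over 10 time steps.
rng = np.random.default_rng(0)
W = rng.normal(scale=0.5, size=(3, 4))
v = np.zeros(3)
for t in range(10):
    spikes_in = (rng.random(4) < 0.3).astype(float)  # random input spike train
    v, spikes_out = lif_step(v, spikes_in, W)
```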
When machine learning systems are trained and deployed in the real world, we face various types of uncertainty. For example, training data at hand may contain insufficient information, label noise, and bias. In this talk, I will give an overview of our recent advances in robust machine learning, including weakly supervised classification (positive-unlabeled classification, positive-confidence classification, complementary-label classification, etc.), noisy label learning (noise transition estimation, instance-dependent noise, clean sample selection, etc.), and domain adaptation (joint importance-predictor learning for covariate shift adaptation, dynamic importance-predictor learning for full distribution shift, etc.).
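As one concrete instance of the covariate-shift setting mentioned above, the sketch below estimates importance weights w(x) ≈ p_test(x)/p_train(x) with a crude kernel-density ratio and plugs them into an importance-weighted empirical risk. This two-step baseline is only for illustration; the joint and dynamic importance-predictor methods in the talk learn the weights and the predictor together.

```python
import numpy as np

def covariate_shift_weights(x_train, x_test, sigma=1.0):
    """Crude density-ratio estimate w(x) ~ p_test(x) / p_train(x) at the training points,
    using Gaussian kernel density estimates (normalization constants cancel in the ratio)."""
    def kde(points, queries):
        d2 = ((queries[:, None, :] - points[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2)).mean(axis=1)
    return kde(x_test, x_train) / (kde(x_train, x_train) + 1e-12)

def importance_weighted_risk(weights, losses):
    """Importance-weighted empirical risk: mean of w(x_i) * loss_i over training points,
    which approximates the test-distribution risk under covariate shift."""
    return float(np.mean(weights * losses))
```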
The rapid development of new-generation information technology, such as artificial intelligence, 5G/6G, and blockchain, brings great challenges to computing hardware. Conventional hardware platforms based on Boolean logic, CMOS devices, and the von Neumann architecture face a bottleneck in computing efficiency. Memristor-based computation-in-memory (CIM) technology opens a new way to address this efficiency issue by reducing data movement and changing the computing paradigm. In this talk, I will introduce recent progress in memristor-based CIM technology. We have developed several CMOS/memristor hybrid integration chips with advanced technology. The chips can execute various deep neural network algorithms with our software tool chain and demonstrate several orders of magnitude improvement in energy efficiency compared with conventional technology. We also built a complete CIM computer based on multiple memristor chips and a general-purpose CPU. In addition, I will introduce some new applications and demonstrations based on our memristor system.
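For readers unfamiliar with how a crossbar computes in place, the sketch below simulates a single analog matrix-vector multiply: weights are stored as conductances, input voltages drive the rows, and the column currents give the dot products by Ohm's and Kirchhoff's laws. The quantization and noise parameters are illustrative assumptions, not measurements from the chips described in the talk.

```python
import numpy as np

def memristor_mvm(voltages, conductances, g_levels=16, noise_std=0.01, rng=None):
    """Simulate one analog matrix-vector multiply on a memristor crossbar.

    voltages     : input vector applied to the crossbar rows
    conductances : weight matrix stored as device conductances (n_rows x n_cols)
    Each column current is the dot product of the voltages with that column's
    conductances, so the multiply happens where the weights are stored and no
    weight data has to move. Conductances are quantized to a few discrete levels
    and perturbed with read noise to mimic device non-idealities.
    """
    rng = rng or np.random.default_rng()
    g_min, g_max = conductances.min(), conductances.max()
    levels = np.linspace(g_min, g_max, g_levels)
    idx = np.abs(conductances[..., None] - levels).argmin(-1)
    g_q = levels[idx]                                          # quantized conductance states
    g_q = g_q + rng.normal(scale=noise_std * (g_max - g_min), size=g_q.shape)
    return g_q.T @ voltages                                    # column currents = analog dot products
```

In a real chip this array is paired with DACs, ADCs, and CMOS peripheral logic; the software tool chain maps neural-network layers onto many such crossbars.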
As humans, things, software and AI continue to become the entangled fabric of distributed systems, systems engineers and researchers are facing novel challenges. In this talk, we analyze the role of IoT, Edge, and Cloud, as well as AI in the co-evolution of distributed systems for the new decade. We identify challenges and discuss a roadmap that these new distributed systems have to address. We take a closer look at how a cyber-physical fabric will be complemented by AI operationalization to enable seamless end-to-end distributed systems.
Autonomous drones play a crucial role in search-and-rescue, delivery, and inspection missions. However, they are still far behind human pilots in terms of speed, versatility, and robustness. What does it take to make autonomous drones fly as agilely as, or even better than, human pilots? Autonomous, agile navigation through unknown, GPS-denied environments poses several challenges for robotics research in terms of perception, learning, planning, and control. In this talk, I will show how combining model-based and machine learning methods with the power of new, low-latency sensors, such as event cameras, can allow drones to achieve unprecedented speed and robustness while relying solely on onboard computing.
The plateauing of Moore's law points to increasing challenges in continuously scaling up the computing power and efficiency of electronic chips. Endowed with the merits of high-speed and low-loss propagation, light has shown great potential to revolutionize next-generation computing technology. Unfortunately, photonic computing faces various challenges on the way to ubiquitous, general-purpose computing, such as the lack of model reconfigurability, learning capability, and computing scalability. In this talk, I will start by introducing a large-scale reconfigurable optical processor (DPU) containing millions of parameters, which constructs various architectures of optical neural networks (ONNs) with orders-of-magnitude increases in computing efficiency. To learn an optoelectronic computing model directly on the physical system, the gradient descent algorithm is implemented physically to train the ONNs through optical error backpropagation at light speed. Finally, a multi-channel, interference-based architecture, MONET, will be introduced, enabling photonic computing to solve real-world advanced machine vision tasks such as moving-object detection and depth-map regression.
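To give a feel for how such optical layers can be modelled numerically, the sketch below simulates one diffractive layer as a learnable phase mask followed by angular-spectrum free-space propagation. The wavelength, pixel pitch, and propagation distance are illustrative, and the code is a generic simulation, not the DPU or MONET implementation.

```python
import numpy as np

def diffractive_layer(field, phase, wavelength=532e-9, pitch=8e-6, distance=0.05):
    """One simulated layer of a diffractive optical neural network:
    a learnable phase mask followed by angular-spectrum free-space propagation.

    field : complex optical field on an N x N grid
    phase : learned phase modulation of the same shape (the layer's 'weights')
    wavelength, pitch, distance : illustrative optical parameters
    """
    n = field.shape[0]
    modulated = field * np.exp(1j * phase)                  # phase modulation by the mask
    fx = np.fft.fftfreq(n, d=pitch)                         # spatial frequencies (cycles/m)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    propagating = arg > 0                                   # drop evanescent components
    kz = 2 * np.pi * np.sqrt(np.where(propagating, arg, 0.0))
    H = np.where(propagating, np.exp(1j * kz * distance), 0.0)  # transfer function
    return np.fft.ifft2(np.fft.fft2(modulated) * H)         # field arriving at the next layer

# A photodetector array at the output plane measures intensity |field|**2,
# which typically serves as the network's readout.
```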
Agent-based modelling is an old idea that has seen a resurgence of interest over the past decade. The basic idea of agent-based modelling is to model socio-technical systems at the level of individual decision-makers (agents). Agent-based models make it possible to capture aspects of systems (such as social network structures) that cannot be represented with conventional modelling techniques. In this talk, I will describe our work towards a robust engineering science of agent-based modelling, focusing on four key issues: How do we capture agent-based models transparently? How do we populate agent-based models with realistic agent behaviours? How do we calibrate agent-based models? And how do we verify such models? The talk will illustrate the key points with lessons learned from COVID models that were central to the UK's decision to enter the first lockdown in March 2020.
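As a toy illustration of modelling at the level of individual decision-makers, the sketch below runs a minimal agent-based SIR epidemic on an explicit contact network, exactly the kind of social-network structure that aggregate equation-based models cannot represent. The network, rates, and parameters are illustrative; this is not one of the COVID models discussed in the talk.

```python
import random

def abm_sir(contacts, p_infect=0.05, p_recover=0.1, seed_infected=5, steps=100, rng=None):
    """Minimal agent-based SIR epidemic.

    contacts : dict mapping each agent to a list of its contacts, e.g. built from
               survey or mobility data: {agent_id: [neighbour_ids, ...], ...}
    Each agent holds its own state in {S, I, R}; infection spreads only along
    explicit contacts, so network structure directly shapes the outbreak.
    """
    rng = rng or random.Random(0)
    state = {a: "S" for a in contacts}
    for a in rng.sample(list(contacts), seed_infected):
        state[a] = "I"
    history = []
    for _ in range(steps):
        new_state = dict(state)
        for a, s in state.items():
            if s == "I":
                for b in contacts[a]:                    # try to infect each susceptible contact
                    if state[b] == "S" and rng.random() < p_infect:
                        new_state[b] = "I"
                if rng.random() < p_recover:             # recover with fixed probability
                    new_state[a] = "R"
        state = new_state
        history.append(sum(1 for s in state.values() if s == "I"))
    return history  # number of infectious agents at each step
```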
Many problems in systems software can be recast as optimization problems that machine learning can solve. In this talk, I will share our experience in defining and solving such optimization problems in systems software using machine learning, resulting in a set of learned systems spanning operating systems, databases, web systems, and distributed systems. Based on these, I will describe the lessons learned in applying machine learning to systems, and discuss challenges and opportunities in fostering better synergy between systems and machine learning.
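As a hypothetical, self-contained example of this recasting, the sketch below fits a surrogate model on measured (configuration, latency) pairs and uses it to rank configurations that have not been run yet, a common pattern in learned systems tuning. The knobs, data, and model choice are all illustrative and unrelated to the specific learned systems in the talk.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Treat a systems tuning task (e.g., choosing database knob settings) as learning:
# fit a cheap surrogate on measured configurations, then query it instead of the
# expensive real system. All data below are synthetic.
rng = np.random.default_rng(42)
measured_configs = rng.random((200, 3))            # e.g., buffer size, parallelism, page size (normalized)
measured_latency = (
    1.0 + 0.5 * measured_configs[:, 0]
    - 0.3 * measured_configs[:, 1]
    + rng.normal(scale=0.05, size=200)
)                                                  # synthetic latency measurements

surrogate = RandomForestRegressor(n_estimators=100).fit(measured_configs, measured_latency)

candidate_configs = rng.random((1000, 3))          # configurations not yet run on the real system
predicted = surrogate.predict(candidate_configs)
best_config = candidate_configs[predicted.argmin()]  # configuration the model expects to be fastest
```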
The Five-hundred-meter Aperture Spherical radio Telescope (FAST) has the largest collecting area in the deca-centimeter bands. Since the unfortunate collapse of Arecibo, FAST's advantage over other facilities in terms of raw sensitivity is at least one order of magnitude. To better utilize this advantage and circumvent the engineering challenges of moving the giant dish, we developed proprietary technologies to realize an unprecedented simultaneous survey of pulsars, HI imaging, HI galaxies, and FRBs, namely the Commensal Radio Astronomy FasT Survey (CRAFTS). In two years of operation, CRAFTS has discovered more than 160 pulsars, including one double neutron star system, and 6 FRBs, including the first persistently active repeater (FRB 20190520B), with tens of thousands of new HI galaxies to be published soon. Astronomers are excellent at raising questions, though they seldom reach coherent answers. With its excellent depth and cadence, aided by novel tools of intelligent computing, CRAFTS aspires to be among the significant human efforts that will raise new questions about the cosmos.
Scientific computing, big data, cloud computing, and intelligent computing all require high-performance computing. This talk introduces a new technology proposed by our team, the Big Chip: a method to increase the number of transistors on a chip by increasing the chip area. Today, monolithic chip design faces the "area wall" challenge. Our approach to designing large-size chips is based on chiplet integration. A chiplet is a pre-fabricated die with specific functions that can be combined and integrated with others. This talk will introduce the challenges of parallelism, memory, interconnection, and architecture faced by the development of big chips, and introduce our current Zhijiang big chip project. Through a new multi-chiplet architecture, we aim to integrate ten thousand cores, and I will present the key techniques in architecture, circuit design, software, and beyond.
Transistor scaling is approaching its physical limit, hindering further growth in computing capability. In the post-Moore era, emerging logic and storage devices have become fundamental hardware for expanding the capability of intelligent computing. In this presentation, recent progress on ferroelectric devices for intelligent computing is reviewed. The material properties and electrical characteristics of ferroelectric devices are elucidated, followed by a discussion of novel ferroelectric materials and devices that can be used for intelligent computing. Ferroelectric capacitors, transistors, and tunneling junction devices used for low-power logic, high-performance memory, and neuromorphic applications are comprehensively reviewed and compared. In addition, to provide useful guidance for developing high-performance ferroelectric-based intelligent computing systems, the key challenges in realizing ultra-scaled ferroelectric devices for high-efficiency computing are discussed.
Science magazine is a leading outlet for scientific news, commentary, and cutting-edge research. Through its print and online incarnations, Science reaches an estimated worldwide readership of more than one million. Science’s authorship is global too, and its articles consistently rank among the world's most cited research. This talk will detail the editorial process at Science Journals, in particular Science Robotics. Science Robotics covers new developments in robotics and related fields, with a dual focus on the science of robotics as well as introducing researchers more broadly to how robots can be used to accelerate scientific study.
Robot swarms that are fully self-organized without any central coordinating entity have been widely demonstrated. A robot swarm with a fully decentralized architecture is highly redundant, and its collective behavior results indirectly from local interactions. These characteristics provide frequently cited advantages—including scalability, fault tolerance through redundancy, and having no single point of failure—but also result in fundamental problems, including lack of manageability. By contrast, centralized systems are easy to design, control, and manage, but have single points of failure and limited scalability. In the talk, I will present a novel robot swarm architecture for self-organized hierarchy, combining the advantageous features of self-organized and centralized control. Using a heterogeneous swarm of ground robots and aerial vehicles, I will demonstrate the ability of the proposed robot swarm architecture to self-organize a dynamic hierarchical control network using local asymmetric communication. I will show the results of experiments in which the architecture can split and merge independently controlled sub-swarms, replace faulty robots at any point in the hierarchical network, and dramatically modify the collective behavior of the swarm on the fly. I will also demonstrate that the proposed architecture maintains the fundamental advantages of using strict self-organization, including scalability of the swarm and interchangeability of individual robots.
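To make the idea of a self-organized hierarchy more tangible, here is a schematic data-structure sketch in which each robot keeps only local parent/child links, so sub-swarms can split off, merge back, and have faulty members replaced by rewiring a few links. The class and methods are hypothetical and greatly simplified relative to the architecture presented in the talk.

```python
class Robot:
    """Toy node in a self-organized hierarchical control network."""

    def __init__(self, rid):
        self.rid = rid
        self.parent = None      # local, asymmetric link to the robot above in the hierarchy
        self.children = []      # local links to the robots directly below

    def attach_to(self, parent):
        """Merge: join a (sub-)swarm by linking to a locally visible parent."""
        if self.parent is not None:
            self.parent.children.remove(self)
        self.parent = parent
        parent.children.append(self)

    def detach_subtree(self):
        """Split: this robot and everything below it becomes an independently controlled sub-swarm."""
        if self.parent is not None:
            self.parent.children.remove(self)
            self.parent = None
        return self

    def broadcast(self, command, handler):
        """Propagate a command down the hierarchy, so changing the root's behaviour
        modifies the collective behaviour of its whole sub-swarm on the fly."""
        handler(self, command)
        for child in self.children:
            child.broadcast(command, handler)

# Replacing a faulty robot amounts to detaching it and re-attaching its children
# to another nearby robot; no global coordinator is consulted.
```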
Neuromorphic sensing and processing hold great promise for creating autonomous tiny drones. Both promise to be lightweight and highly energy-efficient, while allowing for high-speed perception and control. For tiny drones, these characteristics are essential, as they are extremely restricted in terms of size, weight, and power, while at smaller scales drones become even more agile. In my talk, I will present our work on developing neuromorphic sensing and processing for tiny autonomous drones. I will delve into the approach we followed for having spiking neural networks learn visual tasks such as optical flow estimation. Furthermore, I will explain our ongoing effort to integrate these networks in autonomously flying drones.