Brought by Zhejiang Lab, Science/AAAS, and Science Robotics
Two-day event on October 19 – 20, 2023
Hybrid forum, free to register – webcast and on-site attendance
On-site venue at Zhejiang Lab, Hangzhou, China
Keynote Speakers
Featured Speakers
1. Bill Moran, Publisher of Science/AAAS, USA
2. Jian WANG, Academician of CAE & President of Zhejiang Lab, China
Density-functional theory (DFT) is a cornerstone of modern computational chemistry. Kohn-Sham DFT, in particular, greatly reduces computational cost by working with the real-space electron density rather than the many-body wavefunction in a higher-dimensional space. The catch, however, is that the exchange-correlation (XC) functional in the Kohn-Sham equations is unknown and must be approximated, which leads to errors too large for predictive work. We proposed and developed two approaches to address this problem: a Δ-learning technique to calibrate the results of conventional DFT, and a machine learning (ML)-based XC functional. Because an approximated XC functional produces systematic errors, a simple ML model with a little extra information can calibrate less precise results into more accurate ones. Δ-learning is thus designed to learn this error from a small amount of data. Our pioneering work back in 2003 proposed such a framework, showing that a network as simple as a one-hidden-layer neural network, together with several molecular descriptors, suffices to calibrate DFT-level results to experimental accuracy. Later, in 2022, we replaced the time-consuming DFT calculation with a graph neural network, enabling the prediction of experimental-level heats of formation almost instantaneously. The Δ-learning method has also been applied to calibrate photophysical properties, open-circuit voltages of lithium-ion batteries, and other quantities. Another route to improving the accuracy of DFT is to find a better XC functional. In 2004, we refined the three hybrid parameters of the B3LYP XC functional to make them depend on molecular descriptors, including the number of electrons, dipole moment, quadrupole moment, kinetic energy, and spin multiplicity. The resulting functional exhibits remarkable agreement with experimental data.
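The Δ-learning idea above can be sketched in a few lines: a one-hidden-layer network is trained on molecular descriptors to predict the correction (experimental value minus DFT value), and the calibrated result is the DFT value plus the learned correction. Everything here — the synthetic data, the five placeholder descriptors, and the network size — is illustrative, not the published model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic dataset: 5 placeholder descriptors per molecule (standing in for
# quantities like electron count or dipole moment), DFT-level energies, and
# "experimental" energies that differ by a descriptor-dependent systematic error.
n, d = 200, 5
X = rng.normal(size=(n, d))
e_dft = rng.normal(size=n)
e_exp = e_dft + 0.3 * X[:, 0] - 0.1 * X[:, 1] ** 2 + 0.05

# One-hidden-layer network predicting the correction: delta = W2.tanh(W1 x + b1) + b2
h = 16
W1 = rng.normal(scale=0.5, size=(d, h))
b1 = np.zeros(h)
W2 = np.zeros(h)   # zero-initialized readout: training starts from delta = 0
b2 = 0.0

y = e_exp - e_dft  # Delta-learning target: the error, not the energy itself
lr = 0.05
for _ in range(2000):
    z = np.tanh(X @ W1 + b1)          # hidden activations
    err = (z @ W2 + b2) - y           # residual of the predicted correction
    # Full-batch gradient descent on mean-squared error
    gz = np.outer(err, W2) * (1 - z ** 2)
    W2 -= lr * (z.T @ err) / n
    b2 -= lr * err.mean()
    W1 -= lr * (X.T @ gz) / n
    b1 -= lr * gz.mean(axis=0)

# Calibrated prediction = cheap DFT result + learned correction
e_calibrated = e_dft + np.tanh(X @ W1 + b1) @ W2 + b2
rmse_before = np.sqrt(np.mean((e_dft - e_exp) ** 2))
rmse_after = np.sqrt(np.mean((e_calibrated - e_exp) ** 2))
print(f"RMSE before calibration: {rmse_before:.3f}")
print(f"RMSE after  calibration: {rmse_after:.3f}")
```

Because only the systematic error is learned, the network needs far less data than it would to predict the energy itself — the point of the Δ-learning design.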
Current computers are the key to building the digital universe. Over the past half century, the driving force behind the development of computers with the von Neumann architecture has been scaling, following Moore's law. We are now approaching the physical limits of scaling. One of the most promising directions for the further development of computers is brain-inspired computing (BIC). Although great efforts have been put into the development of BIC, there is yet to be a commonly accepted technological solution. In this talk, recent progress in the development of BIC is reviewed, covering BIC theory, chips, software, systems, and applications. The main challenges and possible solutions for developing BIC systems are addressed, and the key issues in developing general-purpose BIC are also discussed.
This presentation briefly introduces the concepts of Materials Informatics and Materials-GPT. Materials informatics is growing extremely fast by integrating artificial intelligence (AI) and machine learning with materials science and engineering to accelerate innovation in materials science, engineering, and manufacturing. In particular, the birth of ChatGPT-4 has further fueled Materials Informatics and hastened the birth of Materials-GPT. Materials-GPT rests on two fundamentals: materials AI robots and AI labs (the “hard” fundamental), and materials AI computations and AI software (the “soft” fundamental). Both must be very strong and developed under the guidance of domain knowledge, resulting in a materials large multimodal model. The domain knowledge-guided machine learning strategy is the best way to create new knowledge, advance materials science and engineering, and speed up materials manufacturing. Case studies are given on the oxidation behaviours of ferritic-martensitic steels in supercritical water and of FeCrAlCoNi-based high-entropy alloys at high temperature. This strategy leads to formulas with high generalization and accurate predictive power, which are most desirable for science, technology, and engineering.
Humans are skilled in a wide range of physical and decision-making tasks, and also demonstrate impressive abilities to learn new ones. Most artificial systems today still struggle, especially in unstructured environments. The gap between humans and artificial systems in decision making and motor-skill learning may be explained not only by humans' highly versatile sensing and actuating capabilities, but also by the way the sensorimotor process is intertwined. In this talk, we survey our work and that of others from a systems perspective: integrating perception, interaction, and collaboration effectively for the development of more capable and safe artificial systems.
Light propagation in complex media, such as paint, clouds, or biological tissues, is a very challenging phenomenon, touching on fundamental aspects of mesoscopic and statistical physics. It is also of utmost applied interest, in particular for imaging in tissues, where control of the incident light (wavefront shaping) has enabled tremendous advances in biological imaging. I will discuss how we can, surprisingly, also leverage this complexity for various computing and machine learning tasks, and will show examples ranging from image classification to time-series prediction and feature detection.
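The computing-with-complexity idea can be illustrated with a toy random-feature model: a fixed random complex matrix stands in for the scattering medium, the camera's intensity measurement supplies a nonlinearity for free, and only a linear readout is trained. The data, sizes, and ridge readout below are assumptions for illustration, not the speaker's actual setup.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy two-class "images": each class is a distinct intensity ramp plus noise.
n, d = 400, 64
labels = rng.integers(0, 2, size=n)
ramps = np.stack([np.linspace(1.0, 0.0, d), np.linspace(0.0, 1.0, d)])
X = ramps[labels] + rng.normal(scale=0.3, size=(n, d))

# A fixed random complex transmission matrix plays the role of the medium;
# the detector records intensities |T x|^2, a free nonlinearity.
T = (rng.normal(size=(128, d)) + 1j * rng.normal(size=(128, d))) / np.sqrt(d)
features = np.abs(X @ T.T) ** 2

# Only the linear readout is trained: ridge regression on the random features.
A = np.hstack([features, np.ones((n, 1))])  # add a bias column
y = 2.0 * labels - 1.0                      # labels as +/-1
n_train = 300
At, yt = A[:n_train], y[:n_train]
w = np.linalg.solve(At.T @ At + 1e-3 * np.eye(A.shape[1]), At.T @ yt)

test_acc = (np.sign(A[n_train:] @ w) == y[n_train:]).mean()
print(f"held-out accuracy with a fixed random medium + linear readout: {test_acc:.2f}")
```

The medium itself is never trained or even characterized beyond its input-output behavior — its randomness is what provides the rich feature space.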
As network and information technology extends into human society and the physical world, human society, cyberspace, and physical space continue to integrate, with software as the link, forming software-defined smart vehicles, smart parks, smart factories, and other application scenarios. Ubiquitous computing oriented toward human-cyber-physical systems is gradually becoming a reality. It realizes a new kind of hyper-automation for the internet of everything, supporting the programming and operation of human-cyber-physical applications in a software-defined way. In this talk, I will first introduce the concept of software-defined human-cyber-physical systems, and then present a framework for programming and operating them. We will also analyze the role of robots in such systems and introduce our initial exploration of LLM (large language model)-based human-cyber-physical programming.
There is a broad belief that emotions, thinking, and learning are linked, and Goleman's EQ perspective on emotional intelligence names empathy, effective communication or social skills, self-awareness, self-regulation, and motivation as core skills. So how about lending AI some EQ? To that end, this talk will present, in a nutshell, the state of play in Affective Computing – the science of computing and emotions or, more broadly speaking, affect. This includes the analysis and synthesis of emotions in human and general multimodal data such as audio, video, text, and physiological signals. Furthermore, a perspective on using emotions in machine learning will be given. As the field is also shaped by the advent of large “foundation models” and their emergent behaviour, a view on the future of Affective Computing and its role in Emotionally Intelligent Computing will conclude the talk. Computers can already “be” empathetic and simulate social skills – is the rest yet to come?
As robots become smaller, more affordable, and more capable, it becomes more realistic to envision deploying them in human spaces such as homes and schools. However, modern robots suffer from a lack of flexibility: they are generally programmed to repeat actions aimed at doing only one or a few specific things, whereas human environments are dynamic, with shifting settings, constraints, and tasks. One possible mechanism for allowing robots to interact gracefully with people is using natural language to let people teach robots about the environment, a problem referred to as language grounding. In this talk, I will discuss approaches and mechanisms for grounding language in robotic perception and how that plays out in interactions with actual people. I will also describe some of the promise and ethical concerns of language-using robots and some of the tools we use for data collection for supporting the robot learning process.
In a recent publication, we: (I) argue that contemporary approaches to AI alignment based in artificial empathy omit the crucial affective component of empathy, thus encouraging sociopath-like agents; (II) lay out guidelines for achieving ingrained harm aversion in artificial agents via vulnerability and proxies for affect; (III) suggest that artificially vulnerable, fully empathic agents may leverage AI's scalable cognitive complexity to devise compassionate solutions to large-scale problems, hopefully shifting the promise of AGI from civilization-level risk to invaluable ally. I will summarize these points and discuss follow-up work currently underway to enact these ideas via simulation, as well as philosophical problems emerging from our proposed solution.
Controlled nuclear fusion offers notable merits – cleanliness, safety, and a virtually limitless fuel supply – and has been widely regarded as a leading candidate for the generation of sustainable electric power (or even the "ultimate energy" of mankind). Nevertheless, whereas nuclear fission made the leap from experimental research to nuclear power plants in only 30 years, fusion research has stalled for nearly a hundred years, and many scientific and engineering issues remain unsolved before nuclear fusion energy can be produced in a controlled manner. The fundamental reasons stem from small reaction cross-sections, high input-energy requirements, high energy density after ignition, and the difficulty of achieving stable confinement. In critical areas such as plasma simulation, control, and diagnostics, the time and resource costs of traditional research methods are increasingly limiting. Fortunately, as a "Priority Research Opportunity" for fusion control, the integration of artificial intelligence (AI) with controlled nuclear fusion has already shown promising successes. For example, AI facilitates the reconstruction of plasma-shape parameters, accelerates simulations using surrogate models, and more capably detects impending plasma disruptions. Furthermore, the striking success of a deep reinforcement learning-based plasma control solution over conventional PID control signals the arrival of novel, game-changing techniques. This Featured Talk covers two aspects: the background and current status of controlled nuclear fusion research, and the exploration of digital and intelligent technology applications.
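For readers unfamiliar with the conventional baseline that the learned controllers are compared against, a discrete PID loop looks like the following. The plant here is a generic first-order system and the gains are arbitrary illustrative choices — it is not a plasma model, only a sketch of the control law.

```python
# Minimal discrete PID loop on a toy first-order plant dx/dt = -x + u,
# regulating the state toward a setpoint. Gains and plant are illustrative.
dt = 0.01
kp, ki, kd = 2.0, 1.0, 0.05   # proportional, integral, derivative gains
setpoint = 1.0

x = 0.0                        # plant state (e.g. a shape parameter to regulate)
integral = 0.0
prev_err = setpoint - x
history = []
for _ in range(2000):
    err = setpoint - x
    integral += err * dt
    deriv = (err - prev_err) / dt
    u = kp * err + ki * integral + kd * deriv   # PID control law
    prev_err = err
    x += dt * (-x + u)                          # Euler step of the plant
    history.append(x)

final_error = abs(setpoint - history[-1])
print(f"final tracking error: {final_error:.4f}")
```

A PID controller reacts only to the scalar tracking error; the appeal of the reinforcement-learning approach mentioned above is that a learned policy can act on a much richer state description than this single error signal.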
The Zhijiang Zhuque Intelligent Graph Computing Platform is a cutting-edge solution designed to advance AI-enhanced scientific computing. The platform addresses the increasing complexity and scale of scientific computing tasks where traditional methods fall short, providing a versatile environment suitable for a variety of applications including, but not limited to, the pharmaceutical industry, bioinformatics, social network analysis, and financial risk management. The Zhuque platform specializes in handling large-scale, heterogeneous graph structures, offering high compatibility with domestic hardware and superior performance in graph learning. It is equipped with innovative algorithms and supports multiple Graph Neural Network frameworks, providing functionalities such as graph visualization, analysis, task scheduling, and model deployment – a pioneering one-stop solution for graph development in scientific computing. Moreover, the platform's innovative algorithms, such as PSG, have set new records in graph learning challenges, highlighting its capability to drive advances in scientific research and AI. In addition, the presentation will touch upon the platform's future work on multi-model advancements, aiming to further explore and optimize the integration of various models to enhance the capabilities of scientific computing.
Information technology has shown tremendous potential for advancing biotechnology. The development of various biotechnologies currently faces common challenges, such as difficulty of regulation and lack of precision. Chemical-induced dimerization (CID) is a gene regulation method that uses chemical substances to induce specific protein dimerization, thereby controlling gene expression. However, CID tools still have limitations in quantity, type, diversity, and in vivo applicability. To address these issues, researchers have developed an expandable CID platform called Protein Targeting Chimeras for CID (PROTAC-CID), based on protein modification. Using computational algorithms and data analysis, researchers have engineered existing PROTAC systems for gene regulation and editing. By constructing orthogonal PROTAC-CID systems, gene expression can be fine-tuned and biological signals multiplexed, combining biological understanding with computational modeling. Coupled with gene circuits, researchers have achieved digital induction of DNA recombinases and gene-editing enzymes, harnessing the advantages of integrating biology and information technology. By introducing compact PROTAC-CID systems into viral vectors, reversible gene activation can be achieved in vivo, merging molecular biology with information technology-driven delivery systems. These studies expand the application of chemical-induced gene regulation in human cells and mice, demonstrating the potential of integrating biology and information technology to drive biomedical research.
As perception, planning, and coordination technologies mature, people place ever higher expectations on autonomous robots. Aerial robots, and their swarms, are now required not only to fly out of the laboratory but also to complete more complicated tasks. To this end, building smarter drones with sophisticated functionalities, in which perception and planning modules are coordinated or even coupled, is an attractive research topic. In this talk, I will introduce new methods for aerial robots developed in my group, through which we aim to broaden the application range and intelligence level of drones. Then, based on real-world requirements, I will present systematic solutions for specific tasks, explaining the architecture, algorithms, engineering considerations, and closed-loop performance. Finally, I will turn to some of our most recent research, in which we are working towards a perception-planning coupled, flexibly coordinated, and large-scale aerial swarm system.
Enabling simulated and real-world embodied agents to think and move like animals and humans is one of the shared goals of AI researchers, roboticists and the computer graphics community. In my talk I will bring together results from several studies and explain how large scale RL, imitation learning, hierarchical skill representations and multi-agent training algorithms can work together to this end. I will show examples of our work to build intelligent simulated humanoid characters that possess locomotion and manipulation skills, can see and remember, and can interact with each other. I will also illustrate how these results can be brought to bear on real bipedal and quadrupedal robots, for instance to create humanoid robots that play soccer with each other.
1. Tianzi JIANG, Senior Research Professor, Zhejiang Lab, China
2. Feng LIN, Senior Research Expert, Zhejiang Lab, Singapore
3. Xindong WU, Senior Research Expert, Zhejiang Lab, Australia
4. Xiao YU, Associate Research Fellow, Zhejiang Lab, China
5. Aydogan Ozcan, Chancellor's Professor, UCLA, USA