ZHEJIANG LAB
Tianshu V1.2 Brings New Features and Optimized Performance
Date: 2021-01-29

Since Zhejiang Lab launched the Tianshu Artificial Intelligence Open Source Platform last year, the development team has released a new version roughly every three months, continuously optimizing performance to make the platform stronger, faster, and simpler. Recently, the latest version, Tianshu V1.2, was officially released to the public. In addition to major new functional modules such as cloud serving and model optimization, the team also optimized existing systems such as the model amalgamation platform, the deep-learning training framework, and the data processing system, greatly improving the platform's performance.


As one of the platform's "secret weapons", model amalgamation makes algorithms highly adaptable. For instance, in video surveillance and control, a major service area of the Tianshu Platform, one model is currently needed to identify and track pedestrians during the day, and another model for surveillance at night. Model amalgamation can integrate the two models into a single model applicable in both scenarios, effectively extending the application scenarios of different algorithms and models.
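As a rough, hypothetical sketch of the idea (not the Tianshu implementation): a "student" model can be distilled from two scenario-specific "teacher" models by supervising each training sample with the soft predictions of whichever teacher covers that sample's scenario.

```python
import math

def softmax(logits):
    """Convert raw logits into a probability distribution."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def amalgamation_targets(day_logits, night_logits, is_day):
    """Soft targets for the student model: each sample is supervised by
    the teacher that covers its scenario (day or night)."""
    return [softmax(d) if flag else softmax(n)
            for d, n, flag in zip(day_logits, night_logits, is_day)]

# Toy batch: two samples, three classes; sample 0 is a daytime frame,
# sample 1 a nighttime frame.
targets = amalgamation_targets(
    [[2.0, 0.1, 0.1], [0.5, 0.5, 0.5]],   # day teacher's logits
    [[0.1, 0.1, 2.0], [0.2, 1.5, 0.2]],   # night teacher's logits
    [True, False],
)
```

A student trained against these combined soft targets learns to cover both scenarios with one set of weights.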


For this new version, the team has rebuilt the original model amalgamation platform, integrating its front-end and back-end management modules into a one-stop AI platform. Meanwhile, the model mapping and amalgamation engine remains independent, keeping the architecture of the Tianshu Platform as simple as possible.


In order to distill an integrated model from several AI models through amalgamation algorithms, it is first necessary to determine whether the original models are suitable for amalgamation. The metric management tool of the Tianshu Platform automatically identifies similarities between models, while other tools, such as the graph visualization and graph list, visualize the correlations among more than 10,000 AI models, helping developers see the connections more intuitively and therefore amalgamate the models more effectively.
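The article does not describe the platform's actual similarity metrics; as one simple, illustrative proxy, two models' flattened weight vectors can be compared with cosine similarity.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two flattened weight vectors — a crude
    proxy for deciding whether two models are close enough to amalgamate."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Two toy "models" with nearly identical weights score close to 1.0.
sim = cosine_similarity([1.0, 0.0, 1.0], [1.0, 0.1, 0.9])
```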


Trained AI models need to be deployed before they can be used like ordinary software. For this reason, the team has added a cloud serving module to the new Tianshu Platform. The module provides a complete environment and tools for deploying and releasing AI models. It supports mainstream deep-learning frameworks including OneFlow, TensorFlow, PyTorch, and Keras, as well as multiple communication modes, online services, and batch services.
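The request/response contract of an online inference service can be sketched as follows; the function names and JSON schema here are illustrative assumptions, not the Tianshu cloud serving API.

```python
import json

def predict(features):
    """Placeholder model: returns the mean of the inputs as a score.
    A real serving module would load and run a trained model here."""
    return {"score": sum(features) / len(features)}

def handle_request(body: bytes) -> bytes:
    """Decode a JSON inference request, run the model, and encode the
    response — the basic contract an online-serving endpoint implements."""
    payload = json.loads(body)
    result = predict(payload["features"])
    return json.dumps(result).encode()

resp = handle_request(b'{"features": [1.0, 2.0, 3.0]}')
```

A batch service follows the same pattern but accepts a list of such requests and returns a list of responses.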


Another highlight of this new version is the model optimization module, developed entirely by the team. To cope with more complex and difficult AI tasks, developers' pre-trained models are often very large and require enormous computing resources, placing high demands on deployment environments and hardware. With twelve model compression strategies, the model optimization module uses approaches such as weight quantization, network pruning, and model distillation to help developers "slim down" complex AI models into lightweight ones and reduce their dependence on computing resources. Complex AI models can therefore run on cell phones, cars, and even many IoT terminals, providing a technical solution for the wide application of lightweight AI.
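Of the compression approaches named above, magnitude-based network pruning is the simplest to illustrate. This toy sketch (not the platform's code) zeroes out the smallest-magnitude fraction of a layer's weights, which sparsifies the model while keeping its largest, most influential weights intact.

```python
def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude `sparsity` fraction of weights."""
    k = int(len(weights) * sparsity)
    if k == 0:
        return list(weights)
    # The k-th smallest absolute value becomes the pruning threshold.
    threshold = sorted(abs(w) for w in weights)[k - 1]
    return [w if abs(w) > threshold else 0.0 for w in weights]

# Prune half of a toy weight list: the two small weights are zeroed.
pruned = magnitude_prune([0.9, -0.05, 0.01, -0.7], 0.5)
```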


As a core component of the Tianshu Platform, the deep-learning framework has also achieved considerable improvements in performance and user experience. In terms of adaptability to hardware environments, the new version is the first deep-learning framework supporting CUDA 11.1 and Ampere-architecture GPUs. In addition, the team has optimized its memory allocation logic, saving a significant amount of GPU memory while maintaining training efficiency. It supports quantization-aware training, allowing quantization-compressed models to be deployed with less accuracy loss. It also achieves higher system messaging efficiency through its support for control flow graphs (CFG). With dozens of new or optimized operators, it supports more models and allows deep-learning models to compute faster.
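Quantization-aware training works by inserting "fake quantization" into the forward pass: values are rounded to the discrete levels an integer representation can hold, while training continues in floating point, so the model learns to tolerate the rounding it will face after deployment. A minimal, illustrative sketch (not the framework's implementation):

```python
def fake_quantize(xs, num_bits=8):
    """Simulate integer quantization in the forward pass, as in QAT:
    snap each value to the nearest representable level, keep float dtype."""
    qmax = 2 ** num_bits - 1
    lo, hi = min(xs), max(xs)
    scale = (hi - lo) / qmax          # float value per integer step
    zero_point = round(-lo / scale)   # integer that represents 0.0
    out = []
    for x in xs:
        q = min(max(round(x / scale) + zero_point, 0), qmax)
        out.append((q - zero_point) * scale)
    return out

# Values snap to the 8-bit grid; the error stays within one step (scale).
x = [-1.0, -0.5, 0.0, 0.5, 1.0]
xq = fake_quantize(x)
```

Because the rounding error is bounded by the step size, a model trained this way loses less accuracy when later deployed with true integer arithmetic.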


With continuous updating and performance optimization, the ecosystem of the Tianshu Platform has been expanding steadily. Currently, it has gathered more than 600 institutions and individuals from the industry, academia, and research communities, with the number of core partners reaching sixty-six.


Official Website: http://tianshu.org.cn/

Codelab: http://codelab.org.cn/