2023/05/27-28 Shanghai Zhangjiang Science Hall

Conference Introduction

The Global Open-source Technology Conference (GOTC) is a major open-source technology event for developers around the world. Co-initiated by the Open Atom Foundation, Shanghai Pudong Software Park, the Linux Foundation APAC (LF APAC), and OSCHINA together with Chinese open-source communities, it takes the form of an industry exhibition, keynotes, special forums, and sub-forums. Participants discuss and learn about advanced technology topics such as the metaverse, 3D and games, eBPF, Web 3.0, and blockchain, as well as themes including open-source community building, open-source commercialization, open-source education and talent development, and cloud native, in order to explore the future of open source and help it develop.

Honorable Guest Speakers

Forum Producers

Agenda
Updating
DAY1
May 27

09:00-09:20

Greetings

Greeting by special guests

09:20-09:30

Launching Ceremony for Cooperation

Launching Ceremony for Cooperation

09:30-09:40

Greetings from the conference guests

Greetings from the conference guests

09:40-10:00

Open Source Collaboration: Gathering Strength for Win-Win

Keynote

10:00-10:20

Evolution of AI Technology: From "Perception Intelligence" to "Cognitive Intelligence"

Evolution of AI Technology: From "Perception Intelligence" to "Cognitive Intelligence"

10:20-10:40

Large models open a new era of AI

Large models open a new era of AI

10:40-11:00

Ten years of hard work, a new mission for OSCHINA

Ten years of hard work, a new mission for OSCHINA

11:00-12:00

AIGC changes the world, starting with ChatGPT.

AIGC changes the world, starting with ChatGPT.
Round table host: Mark Shan | Chairman of the TARS Foundation and Chairman of the Tencent Open Source Alliance

13:30-13:45

The Road to Open Metaverse

The Road to Open Metaverse

13:45-14:00

Rust for a safe and sustainable future

Rust for a safe and sustainable future

14:00-14:30

Open source assistance, accelerating towards an intelligent world

Keynote

14:30-15:00

The Open Source Road of Tencent Operating System

The Open Source Road of Tencent Operating System

15:00-15:30

Shaping new momentum for open source development in open innovation

Shaping new momentum for open source development in open innovation

15:30-16:00

Embracing Open Source - ByteDance's Road to Open Source.

/

16:00-16:30

Open Source Thinking and Practice of China Mobile Cloud

Speech by China Mobile Leader

16:30-17:30

Strengthening Security and Trust in Software Supply Chain: Why SBOM, SLSA, and Sigstore are the Future of Software Engineering

The Software Bill of Materials (SBOM), Supply Chain Levels for Software Artifacts (SLSA), and Sigstore are open-source standards for securing software supply chains. An SBOM is a list of the software components used to build an application or system; it provides visibility into those components and their dependencies, which helps identify vulnerabilities and manage risk in the supply chain. SLSA is a framework for software supply chain security that defines trust levels for software artifacts, providing a way to assess the security of components throughout the entire supply chain, from development to deployment. Sigstore is a service that provides cryptographic signatures for software artifacts, including SBOMs and SLSA metadata; these signatures can be used to verify the authenticity and integrity of software throughout the supply chain. Together, SBOM, SLSA, and Sigstore help improve the security and trustworthiness of software supply chains: SBOM provides visibility into the components used to build software, SLSA defines trust levels for artifacts, and Sigstore provides a way to verify the authenticity and integrity of those artifacts.
Round table host: Hin Yang | VP, Linux Foundation APAC
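The session abstract above treats an SBOM as structured component data whose integrity a signature can attest to. As a rough sketch of that idea, here is a minimal CycloneDX-style component list built in Python; the package names and versions are invented, only a small subset of the CycloneDX fields is shown, and the sha256 digest merely stands in for the kind of payload a Sigstore signature would cover.

```python
import hashlib
import json

# Hypothetical dependency list for an application build.
components = [
    {"type": "library", "name": "libexample", "version": "1.4.2"},
    {"type": "library", "name": "netutils", "version": "0.9.1"},
]

# A minimal CycloneDX-style SBOM document (illustrative subset of the spec).
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.4",
    "components": components,
}

# Serialize deterministically and hash the payload; a signature over this
# digest is what lets consumers detect tampering with the SBOM itself.
payload = json.dumps(sbom, sort_keys=True).encode()
digest = hashlib.sha256(payload).hexdigest()
print(digest)
```

In a real pipeline the SBOM would be generated by build tooling, and the digest would be signed and verified with Sigstore rather than printed.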

Check Sessions

DAY2
Morning, May 28

09:15-09:30

Keynote: Developing Open Source Talent in China: The Role of the Linux Foundation

As the Senior Vice President and General Manager of Training and Certification at the Linux Foundation, Clyde will share the latest global trends in open source talent as well as the hottest open source technologies, and discuss how Chinese developers can effectively learn and master open source technology and start an open source career.

09:30-09:45

Keynote: LFOSSA: Dedicated to Open Source Talent Development in China

LFOSSA, backed by the world's largest open source software organization, has trained a large number of software talents for enterprises. It not only offers a rich catalog of online professional courses, but its course instructors are also senior industry experts, and the certificates it issues are globally recognized professional qualifications. As a member of China's open source ecosystem, LFOSSA is committed to promoting innovation and technology development through open source. This session will comprehensively introduce our courses and certification system, our development plans, and how we can become your partner in taking your career to the next level.

09:45-10:00

LF APAC Open Source Evangelist Education SIG Overview

Updating

10:00-10:20

Keynote: Ibrahim Haddad from the Linux Foundation in Conversation with Prof. Wang from East China Normal University - Global Open Source Talent Development

Ibrahim Haddad from the Linux Foundation in Conversation with Prof. Wang from East China Normal University - Global Open Source Talent Development

10:20-11:00

Panel Discussion: Open Source and Open Source Talent Development from different perspectives

According to the Linux Foundation's recent Open Source Jobs Report, cloud computing and digital transformation continue to be widely adopted across industries, and demand for open source talent is very high. Open source experts from different fields are invited to discuss the challenges of open source education and talent development and the trends in the most popular open source technologies, helping developers understand how to participate in open source and start an open source career.
Round table host: Xudong Guo | Senior Cloud Native Architect of Jihu GitLab Technology Limited, LFAPAC Open Source Evangelist

11:00-11:40

Panel Discussion: Women's Power in Open Source

There have long been more men than women in the IT industry and in the open source community, which makes diversity and women's participation all the more vital and valuable. For this panel discussion we have invited female professionals from the open source community to share their experiences in open source and discuss the difficulties and challenges women face in their work and life. Join us and help empower women in open source!
Round table host: Li Rui | Associate Researcher, Peng Cheng Laboratory

11:40-12:00

LFOSSA Open Source Education Annual Ceremony 2022

LFOSSA Open Source Education Annual Ceremony 2022

Check Sessions

09:10-09:15

Keynote

Keynote

09:15-09:20

Welcome and Opening

Welcome and Opening

09:20-09:35

LFAPAC OSPO SIG Overview

The goal of the LF APAC China Open Source Evangelist Team is to promote the development of the open source community in China. We look forward to working together with evangelists to drive more activities and support for the open source community in China! The LF APAC Open Source Evangelist OSPO SIG is an OSPO group under the LFAPAC evangelist team, mainly promoting OSPO (Open Source Program Office) and making more individuals and enterprises aware of open source and OSPO. We will share our past work and plans for this year, and we look forward to everyone's participation.

09:35-09:50

Practice of Red Hat OSPO

The Red Hat Open Source Program Office (OSPO) is an internal team at Red Hat dedicated to promoting and advancing open source software and communities. The OSPO is responsible for guiding Red Hat's contributions to open source projects, coordinating with other open source organizations, and supporting the use of open source software by the wider community of users and partners, including how to contribute code, documentation, and other resources upstream, as well as providing feedback and guidance for these projects. In addition, it provides resources and support for external organizations and communities, including consulting services, legal and licensing guidance, and tools and resources for managing and contributing to open source projects. The OSPO plays a key role in Red Hat's commitment to open source software and its community, helping ensure that open source continues to thrive and innovate. This topic will delve into the history, operation, organization, functions, and practices of Red Hat's OSPO.

09:50-10:35

Roundtable - Gaining Trust

1. How can an OSPO gain the trust of developers? 2. What positive roles can an OSPO play in building open source communities or technology ecosystems for commercial companies? 3. What role does an OSPO play in open source productization and community building? This is a challenging question we need to face directly: many teams assume an OSPO is just an office and are unwilling to join the open source working group, yet as a center of expertise it can provide many benefits, while others see it merely as a source of funding or channels. 4. How can an OSPO become the "center of expertise" linking ecosystems and projects?
Round table host: Zhiqiang Yu | Co-Chair of LFAPAC OSPO SIG

10:35-10:55

LFAPAC Open Source Evangelist Translation SIG

The LFAPAC Open Source Evangelist Translation SIG, established on June 1st, 2022, hereby introduces the achievements of the translation team over the past year to the public and calls on interested friends to join the ranks of translators and contribute to open source development.

10:55-11:15

Risk analysis of "out of supply" of open source software

With changes in the international situation, many fields face a so-called "out-of-supply" risk. There are many such concerns in the open source field as well, although many people believe open source software carries no "out-of-supply" risk. This controversy has affected the attitudes of organizations and enterprises toward using and contributing to open source software. This talk attempts to analyze whether open source software has an "out-of-supply" risk and which aspects need to be seriously considered and addressed.

11:15-11:35

Looking at enterprise open source compliance from the first successful domestic GPL defense case.

In the case of FutureSoft v. YQTSOFT and Liu, which involves infringement of computer software copyright, the court denied the protectability of the plaintiff's entire software on the grounds that only a small part of it unintentionally used GPL software. This is the first time that a domestic court has rejected almost all of the plaintiff's claims based on a defendant's GPL defense. The "no compliance, no protection" standard established in this case imposes higher compliance requirements on high-tech companies in terms of following open source licenses and protecting software copyrights. At the same time, this case boldly recognizes and supports the effectiveness of open source licenses, providing new ideas for computer software rights protection and defense. As someone involved in handling this case, I plan to share key points and difficulties related to enterprise open source compliance and governance from a lawyer's perspective, helping enterprises use open source technology safely while promoting their own open source strategies.

11:35-11:55

OpenChain General Manager, The Linux Foundation

Shane Coughlan is an expert in communication, security and business development. His professional accomplishments include building the largest open source governance community in the world through the OpenChain Project, spearheading the licensing team that elevated Open Invention Network into the largest patent non-aggression community in history and establishing the first global network for open source legal experts. He is a founder of both the first law journal and the first law book dedicated to open source. He currently leads the OpenChain Project and is a General Assembly Member of OpenForum Europe.

Check Sessions

09:00-09:30

Software Engineering in the Era of Intelligence: the correct posture of embracing large models

The large model technology represented by ChatGPT has had a tremendous impact on many fields, including software engineering, and has also caused widespread anxiety. To find some direction in the fog, we have been discussing and thinking about "software engineering in the era of large models" based on technical literature, practice reports, and our own preliminary exploration. Embracing large models is a correct and even necessary direction for both the academic and industrial software engineering communities, but achieving systematic, comprehensive, intelligent software development still requires calm thinking, and much foundational work remains to be done. This report shares our preliminary understanding and our outlook on future directions.

09:30-10:00

From code-specific AI to code-integrated general AI

ChatGPT and GPT-4 have opened the door to general artificial intelligence, and code plays a crucial role in these foundation models. This report starts with AI models designed for code tasks, moves on to foundation models that integrate code data, and finally introduces our proposed TaskMatrix architecture for building AGI. We have open-sourced all the work covered in this report, hoping to promote the development of AI for Code and to explore key AGI technologies with code.

10:00-10:30

New Paradigm of Software Development under Software Engineering 3.0

1. From Software Engineering 1.0 to Software Engineering 3.0 (SE3.0)
2. The new form of SE3.0
3. The new development paradigm of SE3.0
4. How will programming unfold under the new paradigm?
5. How can enterprises better utilize the new paradigm?
6. Future prospects and challenges

10:30-11:00

Solution and Application of Huawei's Large Model

1. Industry Insights of AIGC for SE
2. Huawei's Large-scale Code Model Solution and Application
3. Key Issues and Technical Challenges of Large-scale Code Models
4. Opportunities and Prospects of AIGC for SE

11:00-11:30

Intelligent Code: Exploring the Path from Task-Specific Models to General Large Models.

In this sharing session, we will explore the exploration process and future trends of code intelligence-related tasks in the field of software engineering, including three main parts: (1) The exploration path on task-specific models: In this part, we will review several works on task-specific models, including their core ideas and achievements in solving specific programming tasks. (2) The exploration path of general models in the software engineering field: This section will discuss the requirements and preliminary explorations from dedicated models to general models. (3) The road to future exploration: In the final part, we will explore the possibility of combining both and look forward to future development trends in code intelligence research.

11:30-12:00

How does AI programming improve productivity

How does AI programming improve productivity
Round table host: Qianxiang Wang | Chief Expert of Huawei Cloud Intelligent Software Development

Check Sessions

09:15-09:30

Ecological collaboration between LF CHAOSS community and OSS-Compass community

The LF CHAOSS community is committed to helping people understand the health status of the open source projects they rely on. As a community that spans multiple open source projects and organizations, CHAOSS is dedicated to defining common metrics and methods for evaluating and improving the health of open source communities. We work closely with OSS-Compass to provide more comprehensive and in-depth assessments of the health status of open source communities. In this presentation, we will introduce the collaboration background and benefits between these two communities.

09:30-10:10

New Forms of Delivery in the Digital Wave

/

10:10-10:50

Interpretation of the OSS Compass Model and Release of New Features

This talk will introduce the OSS-Compass community and the SaaS service it provides to the public, helping people understand the ecological health of open source communities. We will use evaluation models to gain a deeper understanding of community health and give detailed introductions to new features such as the insight report subscription system, the developer milestone model, and Compass Lab. The report subscription system lets users easily and promptly obtain evaluation reports for the communities they manage or care about; the developer milestone model helps users understand whether the various stages of an open source community offer smooth experiences and promotion channels, and how software version iterations and operational activities affect developers; finally, Compass Lab helps community decision-makers customize evaluation dashboards suited to their communities to better understand community health. We hope to provide valuable information and ideas for those who care about the health of open source communities.

10:50-11:30

Roundtable - Open Source Community Health Assessment, Illuminating Lights for Your Operations and Governance.

1. Definition and Goals of Open Source Community Health Assessment
a. What is open source community health assessment?
b. Why do we need to assess the health of open source communities?
c. The main goals and indicators of open source community health assessment
2. Core Elements of Open Source Communities
a. Community member participation
b. Communication and collaboration
c. Project management and governance
d. Community diversity and inclusivity
e. Sustainable development and innovation capabilities
3. Methods and Tools for Assessing the Health of Open Source Communities
a. Measurement and evaluation of quantitative indicators
b. Qualitative analysis with case studies
c. Common tools and resources for assessing the health of open source communities
d. How to develop improvement strategies based on evaluation results
4. Sharing Successful Cases, Lessons Learned, and Key Factor Analysis
a. Successful cases in healthy open source communities
b. Key factors in those successful cases
c. Challenges that may arise when evaluating the health of an open source community, and solutions to them
5. Future Development of Open Source Community Health Assessment
a. The impact of technological change on open source communities
b. New trends and innovations in assessing community health
c. How to continuously improve the health of an open source community
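One concrete indicator of the kind this roundtable covers is the "bus factor" popularized by the CHAOSS community: the smallest number of contributors who together account for more than half of all contributions. A minimal sketch, with invented commit counts:

```python
def bus_factor(commits_by_author, threshold=0.5):
    """Smallest number of top contributors whose combined commits
    exceed `threshold` of the total (one CHAOSS-style health metric)."""
    total = sum(commits_by_author.values())
    covered = 0
    ranked = sorted(commits_by_author.items(), key=lambda kv: -kv[1])
    for i, (_, n) in enumerate(ranked, start=1):
        covered += n
        if covered > total * threshold:
            return i
    return len(commits_by_author)

# Invented commit counts for one release cycle.
commits = {"alice": 120, "bob": 30, "carol": 25, "dan": 25}
print(bus_factor(commits))  # alice alone exceeds 50%, so this prints 1
```

A low bus factor signals concentration risk: the project's continuity depends on very few people, which is exactly the kind of early warning a health dashboard is meant to surface.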

Check Sessions

09:00-09:20

Capital Promotes Innovation: The Value of Open Source in the Wave of AI

/

09:20-09:40

From domestic alternatives to full self-reliance: investment opportunities in hard technology amid great-power competition

/

09:40-10:00

Investing in the Open Source Ecosystem of the AI 2.0 Era

/

10:00-10:20

From Industry to Industry - Technology Investment from an Industrial Perspective.

/

10:20-10:40

Be a good technology watchtower for Lenovo

/

10:40-11:00

Automated software development based on large models

The recent launch of large model technology represented by GPT-4 has had a profound impact on software development. What is the program generation ability of large models? What are the problems in program analysis and generation? What changes will happen in the future software development? The Peking University Program Understanding and Generation Research Team is an early pioneer and continuous contributor to deep learning-based program understanding and generation. Based on their own research experience, the speaker briefly elaborates on the research process and development status of large-model-based program understanding and generation methods, focusing on exploring the problems currently existing in large-model-based program design automation.

11:00-12:00

Roundtable: National Capital Supports the Development of China's Open Source Ecology, Industry and Enterprises.

/

Check Sessions

DAY2
Afternoon, May 28

13:30-14:05

Achieve zero-intrusive cloud-native observability with eBPF

The driving forces of microservices and cloud native have brought about a huge transformation in application architecture. While the number of services has increased and the complexity of individual services has decreased, the overall complexity of distributed applications has risen sharply. In a cloud-native environment, achieving application observability and keeping the business controllable has become a major challenge for developers. Leveraging the new kernel programmability offered by eBPF, DeepFlow innovatively implements AutoMetrics, AutoTracing, and AutoLogging capabilities without requiring developers to insert code or instrumentation manually, enabling full-stack observability for cloud-native applications.
Outline:
● From cBPF to eBPF: AutoMetrics capability
● From InProcess to Distributed: AutoTracing capability
● From kprobe to uprobe: AutoLogging capability
Target audience and benefits:
● Best practices for eBPF in the observability field
● An understanding of DeepFlow's cloud-native observability platform

14:05-14:40

BPF cold upgrade - Let low version kernels use new features

As one of the most active areas of the kernel in recent years, eBPF has been developing rapidly upstream. In production environments, however, kernel stability is paramount, and business users often want to keep a stable, older kernel while still using newer BPF features. Based on the plugsched scheduler hot-upgrade technology, we have developed a modularized BPF subsystem that adapts to flexible development needs on a stable, older kernel, achieving both goals. The BPF cold upgrade (plugbpf) inherits the advantages of plugsched: no machine restart and millisecond-level downtime. By replacing internal syscall and interface functions, the upgrade is transparent to users; they simply run their own BPF programs as if on a newer kernel. Plugbpf works as a module and currently supports Linux 4.19 and 5.10 on x86. Users can load the module after ensuring there are no active BPF programs on the original system.

14:40-15:15

The combination of eBPF and the confidential computing ecosystem

In this topic, we will discuss the basics of eBPF and Confidential Computing, some ecological combinations based on open source practices in these two fields, as well as thoughts on the future development of eBPF and Confidential Computing.

15:15-15:50

Operation and maintenance North Star Metric system based on eBPF Trace Profiling

Despite trace, log, and metrics technologies, observability still struggles with root cause analysis. Internet companies face significant challenges because current root-cause-analysis techniques are immature, with most relying on the experience of technical staff. Continuous profiling based on eBPF is a hot topic abroad because it promises to find root causes. According to our research, however, continuous profiling can only solve single-dimensional CPU problems, is difficult to bring down to the trace level, and is hard to use in production environments with many concurrent user requests. Kindling has pioneered trace_profiling technology based on eBPF by reducing profiling granularity to the trace level. This lets users locate a single request and, through eBPF, turn the execution of trace code into a resource-consumption view at the trace level, providing a standardized approach to root cause analysis. This session discusses how Kindling builds trace_profiling and its applicable scenarios.

15:50-16:25

Use eBPF instead of iptables to accelerate service mesh

In the service mesh scenario, to use a sidecar for traffic management without modifying the application, both inbound and outbound Pod traffic must be forwarded to the sidecar. The most common solution is the redirection capability of iptables (netfilter). The disadvantage of this approach is increased network latency, because iptables intercepts both outbound and inbound traffic. For example, inbound traffic that previously flowed directly to the application must first be forwarded by iptables to the sidecar and then by the sidecar to the actual application: what used to take two kernel-level processing passes now takes four, resulting in a significant performance loss. This presentation introduces the implementation principles of the Merbridge project and explains how it uses eBPF to accelerate networking in service meshes such as Istio, Kuma, and Linkerd2.

16:25-17:00

Practice of eBPF technology in the cloud native field

1. Introduction to BPF technology 2. Application of BPF technology in cloud-native field 3. Baidu's cloud-native practice with BPF technology

Check Sessions

13:30-14:00

Alibaba Cloud PolarDB Architecture and Technical Evolution

The main content includes: 1. Progress of Alibaba Cloud PolarDB open source; 2. Architecture and technical evolution of PolarDB, introducing the PolarDB open source project, its storage-compute separation architecture, high availability, high performance, HTAP, and core code; 3. The Alibaba Cloud open source community's active promotion of PG kernel technology, with kernel courses such as "PG Kernel Interpretation" and "Database Kernel from Entry to Mastery".

14:00-14:30

Taking HTAP as an example, look at the evolution of modern data stack application architecture and scenarios

The modern data stack is a new-generation, cloud-based Data Middle Platform architecture concept, but its definition does not include integrated database systems. Using HTAP, one of the hottest categories of integrated databases, this talk analyzes the application architecture and evolution trends of the modern data stack on the cloud, then shares MatrixOne's current design and future evolution to meet this trend. Outline: a. Understanding the modern data stack b. Best practices for HTAP architectures on the cloud c. Analysis of the MatrixOne architecture

14:30-15:00

ByConity: Innovation and Openness in Analytical Database Technology

1. Key technology selection in ByConity 2. The story behind ByConity's open source and openness

15:00-15:30

From HTAP to Serverless, the technological evolution of TiDB

How does PingCAP continue to stay at the forefront of database technology development? How will TiDB HTAP develop in today's database cloud-native, serverless, and DB microservices environment? What are the most important technological or application trends in the database field or the entire Infra field? This topic will introduce the evolution of TiDB from cloud-native to HTAP and then to serverless, as well as thoughts and gains during its growth process.

15:30-16:00

GreatSQL Open Source Community - Creating a popular open source database in China.

Introduction to the GreatSQL Community
Explanation of GreatSQL Advantages and Features
Optimization and Bug Fixes for MySQL in the Project
GreatSQL vs MySQL
Recent Plans and Future Prospects for GreatSQL

16:00-16:30

Build a database change workflow based on GitOps

With the growth of business complexity and the number of database instances, the risk of database changes has increased exponentially when responding to rapidly changing business needs; problems such as schema drift, misoperation, and low-quality statements occur frequently. Bytebase introduces the GitOps concept into database change management. By integrating with platforms such as GitLab and GitHub, it embeds database change management deep in the development process, using the code management platform to achieve a more standardized, automated, and efficient workflow, significantly improving work efficiency and achieving fast yet orderly database changes.
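Setting Bytebase's actual implementation aside, the core of a GitOps-style database change flow can be sketched as version-controlled migration scripts plus a table recording which of them have already been applied. The sqlite3 database and the migration scripts below are illustrative assumptions, not Bytebase internals:

```python
import sqlite3

# Hypothetical migration scripts as they might live in a Git repository,
# ordered by version number.
MIGRATIONS = [
    ("001", "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)"),
    ("002", "ALTER TABLE users ADD COLUMN email TEXT"),
]

def apply_migrations(conn):
    """Apply each not-yet-applied migration in order, recording applied
    versions so every change is rolled out exactly once."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS schema_migrations (version TEXT PRIMARY KEY)"
    )
    applied = {row[0] for row in conn.execute("SELECT version FROM schema_migrations")}
    for version, sql in MIGRATIONS:
        if version in applied:
            continue  # already rolled out by an earlier run
        conn.execute(sql)
        conn.execute("INSERT INTO schema_migrations VALUES (?)", (version,))
    conn.commit()

conn = sqlite3.connect(":memory:")
apply_migrations(conn)
apply_migrations(conn)  # second run is a no-op: both versions are recorded
cols = [row[1] for row in conn.execute("PRAGMA table_info(users)")]
print(cols)  # ['id', 'name', 'email']
```

Driving an applier like this from a merge hook on the code platform is what makes the flow GitOps: the reviewed migration files in the repository become the single source of truth for schema state.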

16:30-17:00

AI native database built for large models

Updating

17:00-17:30

Build a domestic distributed database based on openGauss for industrial scenarios

Taking JD's business as an example, this talk introduces how to innovate and build a domestically developed distributed database based on openGauss.

Check Sessions

13:00-13:35

Opening

Opening

13:35-13:45

Greetings

Gitee's 10th anniversary summary, user data summary, full module product architecture, and user case summary.

13:45-14:15

Build an efficient digital organization driven by business value.

Adopting the DevOps philosophy and tools improves a development team's engineering efficiency, but it cannot solve the disconnect between engineering and business. Drawing on practical cases, this talk shares how large organizations use BizDevOps concepts and tools to break down functional barriers between teams and achieve efficient collaboration centered on enterprise strategy and customer value.

14:15-14:45

Gitee: Ten years of hard work, the domestic leader in code asset management.

Panorama of Gitee code service functions
How Gitee's code service serves customers in China
Typical customer practice cases of the Gitee code service
Future plans for the Gitee code service

14:45-15:10

Solving the last mile of delivery with Delivery As Code - a new form of delivery in the digital age.

In the mobile internet era, during the golden age of ToC products, enterprises relied on DevOps practices to accelerate product iteration, greatly improving R&D efficiency and winning time and competitiveness. Now that ToC traffic growth has stagnated and the market has shifted to competing for existing users, enterprises are turning their investment toward ToB. ToB product iteration faces problems that ToC businesses never encountered: multi-branch version management, "last mile" delivery, and customer upgrades. Integrated application release is precisely the key measure of ToB product quality. Based on the problems our business currently faces and the latest DevOps practices, we propose "Delivery As Code" as a core concept for building an industrialized, end-to-end ToB delivery workflow that solves integrated release and last-mile delivery for ToB businesses.

15:10-15:30

Construction and Practice Sharing of Agile R&D Platform for Large Financial Enterprises

The construction and full implementation of an agile R&D platform for large financial enterprises, as well as subsequent platform planning.

15:30-15:45

Working with open source to build an innovative technology ecosystem

/

15:30-15:45

OSCHINA and Inspur jointly release DevOps hyper-converged integrated machine

/

15:45-16:00

Afternoon Tea Time

Afternoon Tea Time

16:00-16:20

Discussion on the Value of Knowledge Base Construction in DevOps and Release of Gitee Intelligent Knowledge Base Solution for YINXIANG

/


16:20-16:45

Open Source Governance Solution for Enterprise Software Supply Chain Security

Covers OSCHINA's practical experience in the field of software supply chain security and ideas for building a trusted open source component library within enterprises.

16:45-17:10

A Sharing about the DevOps Integration and Delivery Practice

By combining the CI/CD and integrated-delivery capabilities of a DevOps platform, establish a dual-mode (agile plus stable) R&D system that improves the efficiency, quality, and control of R&D delivery across the board, and genuinely puts continuous delivery into practice for financial enterprises.

17:10-17:35

iFLYTEK Huoshi Platform - the base platform for iFLYTEK Spark

Introduces the technical challenges of large-scale model training and inference compared with traditional workloads, the approach taken for the iFLYTEK Spark large model, and how the iFLYTEK Huoshi Platform was built for large-model workloads.

17:35-18:00

/

/


13:30-13:40

Welcome and Introduction

Welcome and Introduction

13:40-14:00

Keynote

Keynote

14:00-14:20

Enterprise-level open source supply chain security solution based on cyber resilience legislation.

Due to the widespread use of open source components, network security attacks and data breaches caused by vulnerabilities and code quality issues in open source components have become frequent, leading to a crisis of trust in the security of the open source supply chain. Various countries and regions have introduced regulations and provisions to enhance the security of the open source supply chain and improve the security of digital products. This topic discusses an enterprise-level open source supply chain security solution based on network resilience legislation.

14:20-14:40

Sigstore helps implement SLSA, the open source software supply chain security framework

LFAPAC open source evangelist, CDF ambassador, deputy leader of the OpenSSF China working group, and a member of the Cloud Native Community Steering Committee. He focuses on cloud native and DevSecOps fields. He has been a speaker at DevOps Community Summit, TiD Quality Competitiveness Conference, QECon, GOTC and other conferences. Currently actively promoting open source software supply chain security.

15:00-15:20

Open source risk management practice based on SBOM.

1. Overview of the challenges in using open source software
2. The foundation of open source risk management: SBOM
3. Selecting reliable and appropriate software for the SBOM: open source software selection
4. Integrating open source governance into existing enterprise development and delivery processes (SBOM generation, updates, circulation, and archiving)
5. Digitizing and automating enterprise open source risk management (automatic tracking and issue handling based on the SBOM)
6. Beyond the SBOM, what other capabilities do enterprises need to raise their level of open source governance?
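The automatic tracking mentioned above (flagging issues based on the SBOM) can be sketched as matching an SBOM's component list against a vulnerability advisory feed. A minimal sketch, assuming nothing about any specific tool's schema; the struct and field names are illustrative:

```rust
// Illustrative SBOM-based risk tracking: flag SBOM components that
// appear in a vulnerability advisory feed. Not any real tool's schema.

struct Component {
    name: &'static str,
    version: &'static str,
}

struct Advisory {
    component: &'static str,
    affected_version: &'static str,
    id: &'static str,
}

/// Return the advisory IDs matching components listed in the SBOM.
fn flag_risks(sbom: &[Component], advisories: &[Advisory]) -> Vec<&'static str> {
    advisories
        .iter()
        .filter(|a| {
            sbom.iter()
                .any(|c| c.name == a.component && c.version == a.affected_version)
        })
        .map(|a| a.id)
        .collect()
}

fn main() {
    let sbom = [
        Component { name: "log4j-core", version: "2.14.1" },
        Component { name: "openssl", version: "3.0.8" },
    ];
    let advisories = [
        Advisory { component: "log4j-core", affected_version: "2.14.1", id: "CVE-2021-44228" },
        Advisory { component: "zlib", affected_version: "1.2.11", id: "CVE-2018-25032" },
    ];
    // Only the log4j component matches an advisory here.
    println!("{:?}", flag_risks(&sbom, &advisories));
}
```

Real tools work the same way at scale: the SBOM (e.g. SPDX or CycloneDX) supplies the component inventory, and an advisory feed supplies the match targets, so the check can rerun automatically whenever either side updates.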

15:00-15:20

Prevent small risks from becoming big ones and build an open-source security defense system for enterprises

The current open source development is thriving, but it also brings software supply chain security threats. Huawei embraces open source and actively invests resources in open source security tools and governance. This topic includes the following parts: 1. Industry trends and practical insights on software supply chain security. 2. Huawei's analysis and practice of software supply chain security, including sharing practices based on SBOM, as well as other security measures. 3. Some suggestions for future open source security.

15:20-15:40

Looking at open source security from Amazon's unique culture

For Amazon, security is always the top priority and guiding principle, and this culture of security profoundly shapes Amazon's engagement with open source. We will explore in depth the design concepts and lessons of Firecracker, an open source project, and share Amazon Web Services' best practices in choosing Rust for open source projects and using it extensively. This will help builders better understand how Amazon pursues and implements security in every dimension and detail of open source.

15:40-16:00

Using SBOM to enhance software supply chain security.

This speech will introduce the background of SBOM, the global status and direction of SBOM adoption, methods and standards for building an SBOM, and how to use an SBOM to enhance software supply chain security. An SBOM (software bill of materials) reveals the composition of software components to software users. With the development of software technology, mixed-source development has become mainstream: more than 90% of system and application software contains open source code. On one hand, China's information technology innovation industry depends on open source software, from operating systems to databases to upper-layer applications; on the other hand, open source software greatly promotes the open source ecosystem and provides a good foundation for China's supply chain. China has become the world's second-largest contributor to open source software and an important force in the field. However, the popularity of open source code brings security and compliance issues that need attention. To secure the software supply chain, industries are promoting SBOM adoption in their respective fields: relevant laws have been introduced in the United States, Europe is following suit, and corresponding standards are being formulated in China. The main SBOM analysis methods are code snippet analysis and dependency analysis, which can produce license lists and vulnerability lists; with these two lists, users can understand the compliance and hidden security risks in their code, resolve potential problems with technical means, and make their own supply chains more secure.

16:00-16:20

Best Practices for Secure Construction of Multi-Workload in Production Environment

With the evolution of traditional physical and virtual machines to containers and container clusters, the security risks of enterprise production environment workloads have also changed. This presentation will combine experience in production environments to share with everyone the security challenges and corresponding measures for multiple workloads.

16:20-16:40

Open source software supply chain security governance based on code vaccine technology.

In the context of mixed-source development and agile delivery, open source software has become an important part of the software supply chain, and its security has become a key link in software supply chain security governance. For known open source risks, SCA tools can conduct a comprehensive asset inventory of third-party components involved in software and applications, while understanding the open source vulnerabilities introduced by related components to facilitate insight into and monitoring of open source risks. When new security vulnerabilities are discovered and there are no new version components available for replacement yet, RASP technology can identify and block attacks and malicious requests through hot patching without modifying the source code, achieving timely governance of unknown open source risks and buying time for vulnerability repairs. Through the combination of SCA and RASP, scenarios with known vulnerabilities as well as unknown ones can be covered to achieve closed-loop management of open-source software supply chain security from development to operation, empowering enterprise developers' code safety.

16:40-17:00

The Challenges and Practices of Open Source Software Supply Chain Security.

Amid increasingly severe network security threats and attacks worldwide, the accelerating digitization of enterprises requires overall planning. Mr. Wang Yu will explain, in plain terms, how software supply chain risks are introduced and the key points of governing the open source software supply chain. The talk aims to be practical, informative, and forward-looking, comprehensively empowering enterprises, and will cover the following key points:
- Traditional software supply chain vs. open source software supply chain
- In-depth analysis and interpretation of software supply chain security incidents
- Impact and harm of open source vulnerabilities
- Software supply chain composition and how security risks are introduced
- Open source security challenges from a technical perspective
- Key issues in software supply chain security and OSS governance
- SCA tools for multiple application scenarios
- Trusted open source management and operation


13:30-14:15

What else does WebAssembly need to become a first-class citizen of Rust runtime?

Unlike most "modern programming languages", one of Rust's highlights is that it can be directly compiled into machine code without needing an intermediate "runtime". However, when Rust is used in scenarios such as browsers, cloud-native environments, and edge devices, running machine code directly is not allowed. In this case, we need a runtime to run Rust code. In practice, WebAssembly has become one of the preferred runtimes for Rust. The Rust compiler has also added a target for the Wasm platform. However, many common Rust crates still have difficulties running in WebAssembly. In this talk, I will introduce the current status, limitations, solutions and future directions of the Rust WebAssembly compiler and standard/common libraries to provide advice for developers who want to develop Rust-Wasm applications.
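The workflow the talk describes can be seen in miniature: a Rust function restricted to Wasm-friendly scalar types compiles unchanged for both native and Wasm targets. The function below is a made-up example; `wasm32-unknown-unknown` is a real rustc target name, while richer types (String, Vec) need glue such as wasm-bindgen:

```rust
// A minimal function exported for a WebAssembly host. Built natively it
// runs as ordinary Rust; built with
//   cargo build --target wasm32-unknown-unknown --release
// the same code becomes a .wasm module whose export a host can call.

#[no_mangle]
pub extern "C" fn checksum(a: u32, b: u32) -> u32 {
    // Keep the signature to Wasm-friendly scalar types (i32/i64/f32/f64);
    // rich types like String or Vec need explicit glue (e.g. wasm-bindgen).
    a.wrapping_add(b).rotate_left(7)
}

fn main() {
    // The exported function is also callable natively.
    println!("{}", checksum(1, 2)); // 3 rotated left by 7 bits = 384
}
```

This is also where the limitations the speaker mentions bite: crates that assume threads, files, or sockets compile for the native target but fail or no-op on the Wasm target, which is why library support still lags.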

14:15-15:00

Challenges and Breakthroughs in Rust Parallel Compilation

The problem of slow compilation efficiency is a challenge that Rust language must face as it gradually moves towards large-scale development. Nowadays, the optimization of single-threaded compilation efficiency in Rust has reached a bottleneck, and parallel compilation has become the key technology to break through this bottleneck. As a core developer of Rust's parallel compilation feature, the speaker will introduce to you the series of challenges and breakthroughs faced by this feature.

15:00-15:45

Pilota: Why is a code generation tool so complicated?

For a Rust RPC framework, IDL-based code generation exists to make the framework more convenient to use, and the quality of the generated code and its surrounding capabilities directly shape the developer experience. We therefore built Pilota to generate good code for users; some special requirements inside ByteDance also posed great challenges for our code generation framework. This session introduces Pilota's design principles and the challenges faced:
1. What problem does Pilota solve?
2. Pilota's design structure in detail
3. The type system in Pilota
4. Experience optimizations Pilota makes for large IDLs

15:45-16:30

Rspack: Next-generation front-end toolchain

Rspack is a high-performance build engine based on Rust that can interoperate with the Webpack ecosystem and provide better build performance. For monolithic applications with complex build configurations, Rspack can deliver 5-10x faster compilation. This time we will share:
1. How to choose native technology for a front-end toolchain
2. Performance optimization:
a. Transforming existing single-threaded algorithms into multi-threaded ones to improve parallelism
b. Optimizing lock contention in core libraries to improve application performance
c. Using profiling tools to track down memory and IO bottlenecks
d. With the methods above, we doubled our initial performance improvement
3. Improving Rust and JS interop and the tool's plugin capabilities

16:30-17:15

Advanced SQL Parser and Efficient Expression Execution Framework implemented in Rust - Design and Implementation of Databend Database Expression Framework.

This talk presents a complete expression execution framework covering SQL parsing, type system construction, and efficient vectorized evaluation. It analyzes in depth Rust's unique advantages for implementing efficient SQL parsers and building complex type systems, and demonstrates how Rust's type system enables a high-performance vectorized evaluation engine, helping the Databend database deliver faster and more powerful query execution in practice.
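As a rough illustration of what vectorized evaluation means here (this is not Databend's actual code, just a sketch of the idea), an expression tree is evaluated column-at-a-time over whole vectors rather than row-by-row:

```rust
// Illustrative sketch of a vectorized expression evaluator: each node
// of the expression tree produces an entire column at once.

enum Expr {
    Col(usize),                // reference to an input column by index
    Lit(i64),                  // constant literal
    Add(Box<Expr>, Box<Expr>), // element-wise addition
    Mul(Box<Expr>, Box<Expr>), // element-wise multiplication
}

/// Evaluate an expression over columnar input (cols must be non-empty),
/// producing a new column of the same length.
fn eval(expr: &Expr, cols: &[Vec<i64>]) -> Vec<i64> {
    match expr {
        Expr::Col(i) => cols[*i].clone(),
        Expr::Lit(v) => vec![*v; cols[0].len()],
        Expr::Add(l, r) => {
            let (l, r) = (eval(l, cols), eval(r, cols));
            l.iter().zip(&r).map(|(a, b)| a + b).collect()
        }
        Expr::Mul(l, r) => {
            let (l, r) = (eval(l, cols), eval(r, cols));
            l.iter().zip(&r).map(|(a, b)| a * b).collect()
        }
    }
}

fn main() {
    // Equivalent of: SELECT col0 * 2 + col1, over a batch of three rows.
    let cols = vec![vec![1, 2, 3], vec![10, 20, 30]];
    let expr = Expr::Add(
        Box::new(Expr::Mul(Box::new(Expr::Col(0)), Box::new(Expr::Lit(2)))),
        Box::new(Expr::Col(1)),
    );
    println!("{:?}", eval(&expr, &cols)); // [12, 24, 36]
}
```

The talk's point about the type system is that an enum like `Expr` can be made generic over column types, letting the compiler rule out ill-typed plans before execution; the sketch above fixes everything to `i64` for brevity.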

17:15-18:00

Research and analysis of Rust programming practice issues and automated testing technology

Rust promises memory safety and concurrency safety, so ensuring the security and reliability of Rust libraries is extremely important. Although Rust effectively guarantees memory safety, that does not mean Rust programs are free of bugs; for example, the unsafe mechanism Rust provides for low-level systems programming can still introduce risks such as dangling pointers. A small number of studies on Rust security risks have manually summarized code patterns that may cause memory problems, but there is no systematic summary of the common bug patterns that appear in real Rust projects. We therefore conducted an empirical study using code mining to summarize common bug-fix patterns from real-world Rust projects and to explore bugs related to Rust language features. To further ensure the safety of Rust libraries, we also propose a method for generating fuzz targets based on the existing Rust ecosystem: MIR analysis finds API calls and inter-API dependencies in projects within the ecosystem of the library under test, extracts API sequences for testing, and generates fuzz targets for the library accordingly. We implemented a tool that generates these fuzz targets and drives them with AFL. It greatly reduces the cost of writing targets by hand, generates API call sequences closer to human programming habits, and more easily detects the bugs commonly encountered in real development, giving it good practicality.
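A generated fuzz target of the kind described can be pictured as a harness that decodes raw fuzzer bytes into arguments and drives an API sequence on the library under test. The sketch below is illustrative only: with cargo-fuzz the body would sit inside the `fuzz_target!` macro, and here the standard `Vec` stands in for the library under test:

```rust
// Illustrative fuzz harness: decode fuzzer-provided bytes into
// arguments, then drive an API sequence (new -> push -> pop) and check
// an invariant. A real generated target would call the library under
// test instead of Vec, and AFL/libFuzzer would supply `data`.

fn harness(data: &[u8]) -> Option<u8> {
    // Derive structured inputs from the raw byte stream.
    if data.len() < 2 {
        return None; // not enough bytes to form an interesting input
    }
    let capacity = data[0] as usize;

    // API sequence under test.
    let mut stack: Vec<u8> = Vec::with_capacity(capacity);
    for &b in &data[1..] {
        stack.push(b);
    }
    let popped = stack.pop();

    // Invariant: pop must return the last pushed value.
    assert_eq!(popped, data.last().copied());
    popped
}

fn main() {
    // One concrete input; a fuzzer would supply millions of mutations.
    println!("{:?}", harness(&[4, 1, 2, 3]));
}
```

The value of generating such harnesses automatically, as the abstract argues, is that the API sequence (construct, then mutate, then observe) is mined from real call sites, so the fuzzer exercises the library the way actual programs do.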


13:30-13:50

The future of JavaScript is coming: accelerated computing, security and portability in the latest standards

JS Containers enable developers to create portable and lightweight applications, WebGPU API accelerates the rendering, and WebAssembly (Wasm) offers powerful capabilities for executing high-performance, low-level code directly in the browser. Join Natalia Venditto on this exciting journey into the future of JavaScript as she explores the importance of these technologies in the fast-paced, ever-changing software development landscape, discusses the opportunities and challenges they present and demonstrates how they can be used to build the next generation of web applications, accelerate your development workflow, build more secure applications, and unlock unprecedented levels of performance.

13:50-14:30

From alt-JS to var-TS - A review and outlook of variant languages in the JS/TS ecosystem

/

14:30-15:10

Milestone in open source 3D engine: data-oriented custom rendering pipeline.

What is Render Graph? How does it differ from other game engine technologies? What is its significance for open source games? The two speakers will focus on revealing how Render Graph achieves better scalability, demonstrating the architecture of this method and the benefits it brings to rendering pipeline developers. They will also showcase Cocos' latest research results in graphics rendering, representing Cocos' continuous efforts in pursuing key technological breakthroughs.

15:10-15:50

Thinking and exploring the application framework under the concept of "Internet of Everything".

The application framework is key infrastructure that connects the operating system with the developer ecosystem and enriches the user experience. Development efficiency and runtime experience are eternal demands, and the industry keeps evolving around them. This topic focuses on mobile application frameworks: it sorts out their key development trends, analyzes the underlying technical evolution and its current limitations, and, combining the new scenarios and ecosystems of IoT, shares the design and evolution of corresponding application frameworks along with thoughts, practices, and next steps in this field.

15:50-16:30

Thinking and Practice of Front-end Full-Chain Tracing Based on Otel

Log Service SLS is a cloud-native observability and analysis platform that provides large-scale, low-cost, real-time platform services for data such as logs, metrics, traces, etc. It can comprehensively enhance the digital capabilities of development, operation, management, security and other scenarios. As front-end and back-end system architectures become increasingly complex, monitoring methods for front-end and back-end separation are no longer sufficient to meet business needs. In recent years, the OpenTelemetry standard has developed rapidly with the aim of providing standardized solutions in the field of observability to solve problems related to data models, collection processing and exportation of observation data. This topic mainly introduces how SLS builds an integrated monitoring system for front-end and back-end based on OpenTelemetry: 1. Construction of an integrated front-end and back-end OpenTelemetry protocol, all-end platform probes based on JS: web applications, mini-programs, mini-games, and potential platforms. 2. Construction of tracing services for front-ends and back-ends along with problem diagnosis.

16:30-17:10

Rspack is a high-performance web building tool based on Rust.

Rspack is a high-performance build engine based on Rust that can interoperate with the Webpack ecosystem and provide better build performance. For monolithic applications with complex build configurations, Rspack can deliver 5-10x faster compilation. This session will cover:
1. What is Rspack, and what problems does it solve?
2. How fast is Rspack, and what features does it have?
3. Compatibility with Webpack: how to migrate from Webpack?
4. Rspack's architecture design and future

17:10-17:50

More than just improving efficiency - the open source journey of the front-end component library HaloE.

The product interface is users' first, most intuitive impression of the Mobile Cloud brand. Mobile Cloud has a huge product system, and cross-departmental, cross-team collaboration requires a front-end component library suited to Mobile Cloud's own business. HaloE is a Vue-based component library with full business coverage: it targets the particularities of Mobile Cloud's mobile scenarios, increases design reuse, reduces interaction conflicts, and improves the experience; standardized components with a unified development language reduce front-end development and testing costs; and unified components improve the consistency of the whole public cloud product line, enhance the user experience, reduce users' learning costs, form a unified brand image, and raise product quality. This talk shares technical innovations and practices, such as cloud native and low-code, from the construction of Mobile Cloud's front-end component library.


DAY2
All Day, May 28

09:00-09:40

Teaclave Java: Building a Secure Shield for Java Applications

Confidential computing ensures data security through hardware-level isolation, but the trusted execution environment (TEE) that provides it only runs native programs and cannot directly run Java programs. Deploying a Java application together with the entire JVM in the TEE creates too large a Trusted Computing Base (TCB), which weakens the TEE's security. We use Java static compilation to automatically partition a Java application into security-sensitive and non-sensitive parts, statically compile the sensitive parts into native libraries, and deploy those in the TEE, where they interact with the non-sensitive parts outside the TEE and gain the security guarantees provided by the hardware. The technique requires minimal modification to existing Java programs, is highly automated, and yields a small TCB. It is being incubated as open source in the Apache community and won an Outstanding Paper Award at ICSE 2023.

09:40-10:20

The current status of open-source RISC-V cloud computing software and China Telecom's exploration

1) RISC-V adaptation and support for open source software in cloud computing, including open source operating systems, compilers, virtual machines, cloud native applications, cloud storage, cloud networking, databases and trusted computing. 2) China Telecom Research Institute has released the first RISC-V cloud computing open source software supply chain directory. The directory can be found at https://gitee.com/risc-v-cloud/rvchain. It provides a classification summary of open source cloud computing software that supports the RISC-V instruction set and continuously solicits participation from developers and suppliers of RISC-V cloud computing open source software to promote the development of the RISC-V cloud computing ecosystem. 3) China Telecom Research Institute has opened its work on adapting TeleVM - a lightweight virtual machine for RISC-V architecture - which includes BootLoader, CPU virtualization, memory virtualization and interrupt virtualization.

10:20-11:00

From the development of OpenCloudOS, observe the advancement and progression of open-source operating systems

Against the backdrop of the sweeping trend of cloud-native technology in various industries and the rapid iteration of various business architectures, containerization, microservices, and serverless computing have posed new challenges and requirements for underlying infrastructure (including core OS). Simply adapting or optimizing operating systems for cloud scenarios is no longer sufficient to meet new business needs. So how can domestic operating systems be redesigned to cater to cloud-native scenarios and demands, fully embracing cloud-native technology? This session will take OpenCloudOS as a case study to introduce it to you.

11:00-11:40

Apache RocketMQ Event-Driven Engine

Starting from RocketMQ 5.0, we have introduced a new concept: events. What are the differences between messages and events? How does RocketMQ bring changes with this introduction? When should we use events instead of messages? Let's explore these questions together. Outline: ● Messages vs Events: Based on previous summit sharing, we will emphasize and deepen the relationship between messages and events. ● Application Scenarios Analysis: Different from previous sharing, we will focus on the core scenario "event push" for detailed explanation. We will also explain the difference between "pull" and "push". Although they do not distinguish between "messages" and "events", why is event more suitable for pushing? In what scenarios is it appropriate to use event push? What are its advantages and limitations? Why is "push" harder to implement than "pull"? (Introducing EventBridge implementation in part 3) ● EventBridge Solution: Besides introducing EB's basic framework, we will highlight how open-source EB achieves good performance in pushing, including exception handling, retry strategy, dead letter queue (DLQ), end-to-end tracing, API management etc. (related to our recently developed built-in Runtime). ● Future Plans for Open-Source EB: Including flow control, backpressure mechanism, monitoring & alerting etc.
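The push-side mechanics the EventBridge section lists (retry strategy, dead-letter queue) can be sketched in a few lines. This is an illustrative model in Rust, not RocketMQ or EventBridge APIs; the names are made up:

```rust
// Illustrative sketch of event push with bounded retry and a
// dead-letter queue (DLQ). Not RocketMQ/EventBridge code.

struct PushResult {
    delivered: Vec<String>,
    dead_letter: Vec<String>,
}

/// Attempt each event once plus up to `max_retries` retries; events
/// that still fail are routed to the dead-letter queue instead of
/// blocking the stream.
fn push_events<F>(events: Vec<String>, max_retries: u32, mut deliver: F) -> PushResult
where
    F: FnMut(&str) -> bool, // true = endpoint accepted the event
{
    let mut out = PushResult { delivered: vec![], dead_letter: vec![] };
    for event in events {
        let ok = (0..=max_retries).any(|_| deliver(&event));
        if ok {
            out.delivered.push(event);
        } else {
            out.dead_letter.push(event); // would go to the DLQ topic
        }
    }
    out
}

fn main() {
    // An endpoint that rejects any event containing "bad".
    let result = push_events(
        vec!["order-created".into(), "bad-payload".into()],
        2,
        |e| !e.contains("bad"),
    );
    println!("{:?} {:?}", result.delivered, result.dead_letter);
}
```

This also hints at why push is harder than pull, as the abstract asks: with pull, the consumer controls pacing, while a push service must itself handle slow or failing endpoints, which is exactly what the retry, backpressure, and DLQ machinery is for.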

13:30-14:10

Starting from the implementation and practice of new features in C++, discusses the development trend of the C++ ecosystem and its impact

The industrial programming language C++ was once praised as a dragon-slaying sword: unfathomable, even though there were no dragons to slay. In recent years, however, with hardware speedups slowing while demand for computing power keeps growing, and with C++'s steady development over the years, many who had declared C++ outdated were surprised to see it become popular again; last year it even won the TIOBE Programming Language of the Year award. In fact, both the C++ language and its compilers keep evolving, introducing many exciting new features such as the Coroutine and Module language features and the AutoFDO and ThinLTO compiler features. Many more are still incubating, such as SIMD, Network, and Static Reflection, not to mention numerous small changes that improve runtime efficiency and programming productivity. Yet many developers and managers of industrial-grade C++ projects understand that upgrading the language standard or compiler would bring benefits but cannot commit, given the uncertainty about potential problems and risks in the upgrade process; as a result, many industrial-grade C++ projects stay on older compiler versions and language standards and never enjoy those dividends. At the same time, Rust, known for safety and high performance, is maturing, and Carbon, billed as the next C++, is on the rise. In this session, we will discuss the implementation of new C++ language features in the Clang/LLVM open source community, the large-scale application of new C++ features in the enterprise, and our experience in the upstream and downstream evolution of the C++ ecosystem.

14:10-14:50

Apache HugeGraph's Open Source Evolution of Distributed Storage and Computing

After joining the Apache community for a year, HugeGraph released its official version 1.0. This year, we continue to evolve towards the brand new version 2.0 and promote the integration of internal and open source versions. In this sharing session, we will introduce the design and implementation of distributed storage and computing parts, as well as how to better participate in open source communities. Finally, we will discuss our future plans.

14:50-15:30

Real-time data integration architecture evolution: from ESB to Kafka to DaaS

In early system design, data interoperability was not a consideration: traditional ERP, OA, CRM and other systems are independent, with a natural hierarchy between architectures and mostly monolithic databases. Today, with data growing exponentially, these systems can no longer scale their performance, and the pain of data silos keeps increasing for enterprises. Meanwhile, traditional big data platforms cannot effectively support interactive apps or operational analytics because of their limited real-time capability. Yet interactive business scenarios with high real-time requirements (OLTP or operational applications), such as unified product or order queries in e-commerce, real-time risk control in finance, and customer CDPs in the service industry, are often mission-critical for enterprises. In addition, a new generation of operational analytics is becoming a mainstream use of data, and it too needs the latest real-time data from business systems to help customers respond more promptly. Combining the two pain points of non-real-time data and data silos: how do we solve them? Common real-time data integration architectures include ESB and Kafka ETL, plus the recently emerging DaaS architecture. This topic analyzes these architectural solutions in depth and tries to conclude which can better "break through" when real-time requirements meet data silos. Key sharing points:
- Current situation and pain points of data silos
- Business scenarios for real-time data integration
- Common real-time data integration architectures: ESB; Kafka
- Technical key points of Kafka ETL
- What is the DaaS architecture?
- Architecture characteristics and advantages of DaaS
- Comparing implementations: Kafka ETL vs. DaaS (code volume, development time, debuggability)
- Conclusions and suggestions

15:30-16:10

File storage in the AI era: Practice and evolution

In the era of large-scale models and big data, large-scale distributed training has become a necessary condition for accelerating model training. However, with the increasing use of enterprise GPUs and the rapid growth in demand for file capacity, improving the performance and efficiency of underlying storage has become a challenge. File systems were born in the 1980s, accompanying explosive growth in data demand and evolving from single-machine to distributed systems. At the same time, cloud computing is also driving storage development as more and more enterprises begin using it for backup and archiving. Some traditional high-performance computing scenarios that were previously conducted locally are also migrating to the cloud along with many AI applications. Therefore, file systems are also evolving towards cloud-native architectures. JuiceFS is an open-source distributed file system designed specifically for cloud environments that integrates with object storage. Currently, JuiceFS has been applied in AI applications across multiple industries including life sciences, autonomous driving, quantitative investment etc. This sharing session will introduce JuiceFS's design and practice in AI storage fields while sharing case studies on managing billions of small files in autonomous driving scenarios as well as high-throughput model training scenarios in quantitative finance.

16:10-16:50

Build an all-in-one Go microservice ecosystem based on Dubbo

As the most popular cloud-native language, Go has surged in recent years, and many enterprises have moved their technology stacks to Go. Yet even as the Go ecosystem flourishes, its completeness still lags behind other ecosystems; small and medium-sized enterprises still need a Go framework, similar to Spring, to support daily business development, with the ease of use and stability of the Dubbo ecosystem. The Dubbo-go service framework was born in response to such demands. From monolithic to cloud-native architecture, Dubbo-go works to decouple business code from middleware step by step and to provide unified programming interfaces as much as possible, abstracting service calls with AOP thinking, standardizing interfaces, and sinking infrastructure implementations below them. While guaranteeing highly available and stable network communication and integrating a batch of commonly used open source components behind a consistent programming interface for extension and invocation, Dubbo-go does not stop at existing usage scenarios or basic framework capabilities: it pursues high availability, multi-language support, and cross-ecosystem integration to build a new generation of microservice infrastructure that bridges X and Go, simplifying Go microservice development while providing rich service governance capabilities.

16:50-17:30

Technical innovation and practice of openKylin

The open-source operating system, openKylin, was founded in 2022 and has already gained hundreds of thousands of active users worldwide. This speech will be based on the open-source practice history of openKylin and introduce the innovative achievements of the team in areas such as kernel, desktop environment, key applications, etc. It will also share practical experience from the team in terms of open-source technology research and development, building an open-source community, and cultivating talents for open source.

Check Sessions

09:00-09:05

Greeting + Ceremony

Greeting + Ceremony

09:05-09:20

Building the Zhangjiang Metaverse Innovation Ecological System

At the 2022 Shanghai Global Investment Promotion Conference, the action plan for building a new metaverse was officially released. At the same time, "Zhangjiang Digital Chain", a metaverse-themed park anchored in Pudong Software Park, was included in Shanghai's third batch of city-level characteristic industrial parks. Zhangjiang aims to create a world-class metaverse innovation community, working with everyone to lead innovation, build the metaverse, and contribute important strength to breaking through an industry scale of 350 billion yuan.

09:20-09:40

Keynote

Keynote Speaker

09:40-10:00

Prospects and Open Source Opportunities Analysis of the Ultimate Form of Metaverse

/

10:00-10:20

Technical Analysis of EasyAR Metaverse Spatial Computing Platform

Cloud-based spatial computing is the future development direction. The EasyAR team has developed the EasyAR Mega Metaverse Spatial Computing Platform, which provides a complete toolchain to help developers create metaverse applications. At the same time, this platform has four major advantages. This presentation will provide a detailed explanation of the product system, toolchain, and typical applications.

10:20-10:40

Building new infrastructure for the development of the metaverse

The topic will revolve around a piece of new infrastructure in the metaverse field, the "Metaverse Patent Pool", discussing it from aspects such as its technology, history, origins, significance, and development trends. The Metaverse Patent Pool is committed to promoting technological adoption and innovation in the metaverse field and to driving closer technical cooperation along the industry chain, letting innovative vitality flow fully while protecting small and medium-sized enterprises and other participants in the metaverse ecosystem. It also helps governments play their full role in attracting investment and talent.

10:40-11:40

Panel Discussion: Games and 3D rendering engines

/
Round table host: Keith Chan | CNCF China Director, LF Asia Pacific Strategic Director, Hyperledger China Strategic Director

13:30-14:00

The integration of open source technology and patented technology in the metaverse.

Metaverse technology involves a huge amount of open-source software, as well as many patented technologies that are legal monopolies. How to handle the conflict between open source and patents in the metaverse world, and how to balance the relationship between them, is of great significance for the future. This presentation will analyze the legal characteristics and restrictions of open-source software, and introduce patent avoidance and patent layout, in the hope of providing inspiration and assistance to practitioners in the field of metaverse technology.

14:00-14:30

Application of AIGC and large language models based on open source technology

This speech will focus on the application of AI generation and large language models based on open source technology. In the speech, we will introduce DATA GRAND's current technological exploration and application practice. The purpose of the speech is to help the audience gain a deeper understanding of the application of open source technology and large language models in natural language processing.

14:30-15:00

Keynote

Keynote Speaker

15:00-15:30

As the metaverse returns offline, Space AIGC becomes a technological singularity

The metaverse itself is an innovation in the paradigm of interaction between people and space, involving upgrades to interactive means as well as content carriers and presentation formats. Moving the interactive interface into the real-world metaverse space first means bringing people's experiences back offline, but the content carrier becomes a three-dimensional space that blends reality with virtuality. By overlaying digital information in this space, the forms of content that people can perceive will become three-dimensional, diverse, and customizable. Secondly, the interactive tools available to people are no longer limited to traditional electronic devices such as PCs and mobile phones; they also include new-generation intelligent devices such as XR. These devices are fundamentally windows through which people observe spaces. Based on visual digitization, people can quickly obtain diverse information contained within buildings and various objects in real-world spaces - for example, service contents available at specific locations, prices required for corresponding services or store style classifications. This is expected to reduce offline search costs and improve offline interaction efficiency. The metaverse based on spatial computing and spatial AIGC breaks down physical space's uniqueness and monotony while endowing it with infinite imagination and incremental value.

15:30-16:00

The Ecological Advantages of Web3D in the Metaverse and Opportunities for WebGPU Engines.

The web ecosystem has great advantages for the metaverse, and the recent WebGPU standard will bring revolutionary changes to Web3D engines. Combined with the current AGI trend, 3D engines that integrate AI capabilities face huge new opportunities and room for development.

16:00-16:30

Open Source Community: Accelerator for Iterating Metaverse Experience

/

16:30-17:00

Cloud-based digital twin system and AIGC innovative application

Digital China is an important engine for promoting Chinese-style modernization in the digital age. Digital twins promote the digital transformation of various industries and drive changes in production, life, and governance through digitization, making them an indispensable booster for achieving Digital China. But facing an era of comprehensive cloudification, with generative AI changing content creation, how can digital twins empower the digital transformation of various industries? In this lecture, DGene will share their experience in developing cloud-native digital twin products and using AIGC for content creation, offering fresh ideas for implementing digital twins and showing more people the possibilities that cloud-native technology and AIGC can bring.

Check Sessions

09:00-09:30

The Practice of Open Source and Open Platform for Automotive Software

Under the trend of software-defined vehicles, the automotive industry is increasingly embracing open source, and open and open-source organizations for the automotive field are emerging internationally. The OpenSDV Automotive Software Open Source Alliance, a professional open-source organization originating in China and targeting the automotive industry, is committed to building a globally oriented open-source community with code at its core and incubating internationally influential open-source projects. The aim is to collaborate with domestic and foreign industrial, academic, and research forces to strengthen the vertical depth, professionalism, and openness of the open-source platform, and to make bold explorations, beneficial practices, and positive contributions to the development of open-source automotive software and its ecosystem.

09:30-10:00

Photon-Linux OS

Photon-Linux is a series of operating systems launched by KERNELSOFT, including an RTOS, a Hypervisor, and the Linux/Auto operating system. It empowers basic software in the era of software-defined vehicles and generates long-term sustainable value.

10:00-10:30

Sharing of open-source governance practices in the automotive industry

With the advancement of technologies such as Internet of Vehicles and intelligent driving, the automotive industry is entering a new era of software-defined vehicles. Behind the electrification, intelligence, networking, and sharing of automobiles is an increasing amount of code. It is estimated that by 2025, the number of lines of source code used in automobiles will exceed 200 million, and with continuous optimization of intelligent driving technology, it is expected to reach one billion lines. Volkswagen expects that by around 2030, software development costs will account for more than half of vehicle development costs. It can be foreseen that future cars will be software-defined vehicles and open source will define the software. As a new energy vehicle company, Zeekr has launched its own open-source governance project less than a year after its establishment and established its own OSPO (Open Source Program Office), which has been successfully carried out for over a year.

10:30-11:45

Roundtable "SDV2025"

This discussion will focus on the anticipated technological advancements in 2025 and strategies for planning, designing, and implementing them to achieve these goals.
Round table host: Zhaozhi Teng | Technical and Ecological Director of the OpenSDV Automotive Software Open Source Alliance

13:30-14:00

The Open Source Road of Red Hat in the Automobile Industry

Based on the technological development of the automotive industry's E/E architecture and ADAS/AD, we can see the demand for technology in the intelligent automotive industry. We will also share Red Hat's open source approach to automobiles and how community efforts can drive the development of automotive operating systems.

14:00-14:30

Container-based management solution for in-vehicle operating system applications

updating

14:30-15:00

KubeEdge Vehicle-Cloud Collaboration Platform Innovation Practice

With the rapid development of intelligence and networking, the connection between automotive applications and the cloud is becoming increasingly close. With the help of cloud-native technology, electronic architecture can quickly evolve from a single-vehicle architecture to a vehicle-cloud collaborative architecture. This topic will share how KubeEdge combines cloud-native technology with automotive applications to achieve an innovative architecture for vehicle-cloud collaboration platform, and build a cloud-native technology ecosystem that meets the needs of the automotive industry.

15:00-15:30

Practice of RT-Thread Smart Operating System

System adaptation and optimization of MPU

15:30-16:00

Rethinking safety for Linux-based automotive systems

Linux is widely used in automotive systems, especially in electric vehicles. It handles in-vehicle infotainment (IVI) very well, for example the digital cockpit, but IVI is only one part of the whole automotive software system. Automotive engineers question whether Linux is safe enough for critical systems such as the digital dashboard, ADAS, and automated driving, yet Linux has a rich ecosystem for AI, graphics, and middleware. Many companies, including Huawei, are therefore trying to find new ways to make Linux satisfy both the safety requirements and the ecosystem needs. I will share some interesting experience and thoughts on the future of using Linux in automotive systems.

16:00-16:30

A vehicle safety inspection tool based on Clang static analysis and symbolic execution - zchecker

Clang static analysis is a source code analysis tool based on the LLVM project, used to detect errors and potential issues in programs written in C and C++. It analyzes the code during compilation without executing the program, thereby improving code quality and security. Static analysis can identify many types of problems, such as memory leaks, null pointer dereferences, and array out-of-bounds access. Symbolic execution is a program analysis technique that explores all possible execution paths of a program by reasoning about symbolic inputs (rather than concrete values). Combining Clang symbolic execution with static analysis allows deeper discovery of potential issues in the code without executing the program, and this approach can find more complex errors such as race conditions and deadlocks. Our zchecker vehicle safety inspection tool implements MISRA C/C++, HIS, and other vehicle safety inspection standards on top of the Clang static analysis and symbolic execution architecture. In this presentation we will focus on the flow-sensitive and path-sensitive analyses, abstract syntax trees, control flow graphs (CFG), exploded graphs, and other analytical techniques used in our vehicle safety inspection tool, along with implementation challenges.
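The path-sensitive reasoning described above can be illustrated with a toy sketch (not zchecker's or Clang's actual machinery): each execution path carries a condition, and a bug is reported only if some input satisfies the path condition and triggers the fault. Real symbolic executors ask an SMT solver that satisfiability question; this sketch simply scans a small input range.

```python
# Toy path-sensitive analysis of:
#     y = x - 3
#     if x > 3: z = 10 // y        # path A
#     else:     z = 10 // (x - 3)  # path B (divides by zero when x == 3)
#
# Each path is (name, path condition, divisor expression). A path is buggy
# if some input satisfies the condition AND makes the divisor zero.

paths = [
    ("A", lambda x: x > 3,     lambda x: x - 3),
    ("B", lambda x: not x > 3, lambda x: x - 3),
]

def find_bugs(lo=-100, hi=100):
    bugs = []
    for name, cond, divisor in paths:
        # Stand-in for an SMT query: look for a concrete witness input.
        witness = next(
            (x for x in range(lo, hi + 1) if cond(x) and divisor(x) == 0), None
        )
        if witness is not None:
            bugs.append((name, witness))
    return bugs

print(find_bugs())  # [('B', 3)]
```

Path A can never divide by zero, because its condition `x > 3` contradicts `x - 3 == 0`; only path B is flagged. That suppression of infeasible paths is exactly what path sensitivity buys over cruder checks.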

16:30-17:00

Digital platform for automobiles based on a unified open architecture

In the era of software-defined vehicles, there is a shorter time requirement from concept to delivery for car models. The use of digital means for high-speed iteration throughout the entire lifecycle has become a necessity. Through the iterative evolution of digital prototypes 1.0, 2.0, and 3.0, it helps to break down barriers between multiple departments and reduce overall vehicle development time and costs. By using the MWORKS digital platform based on a unified open architecture to create a digital delivery process, product construction is carried out based on unified data standards and interface standards which provide visible decision-making basis for all departments and help various business units achieve rapid iteration.

Check Sessions

09:00-09:20

Welcome and Introduction

/

09:20-09:40

Keynote

Keynote

09:40-10:00

AI & Data: pain points and the future.

Currently, the large-scale language model revolution ignited by ChatGPT is having a profound impact. As one of the scarcest resources in the intelligent era, the importance of data is beyond doubt and often becomes a bottleneck for model development and tuning in major enterprises and research institutions. This topic focuses on discussing the data pain points behind large models and future-oriented solutions.

10:00-10:20

Application of Large-scale Language Models in Intelligent Document QA: A Solution Based on Langchain and Langchain-serve.

The task of a document question-answering system is to search for answers related to user questions from document data. As the number of documents continues to increase, traditional search methods can no longer meet people's needs. With the development of deep learning models, document question-answering systems have migrated from character matching-based methods to vector representation-based methods. However, they still can only return paragraphs relevant to the question and cannot directly provide answers, especially for yes/no questions. Recently, the ability of large-scale language models has been continuously improving, providing a solution for generating answers in document question-answering systems. The next generation of document question-answering systems will integrate traditional models, deep learning question-answering models and large-scale language model technologies together to provide users with more comprehensive document question-answering services. This presentation will introduce how to use Langchain development framework and Langchain-Serve deployment tool to develop intelligent document question-answering systems.
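The migration from character matching to vector-representation retrieval can be sketched in a few lines of plain Python. This is a toy stand-in, not LangChain's actual API: real systems use learned dense embeddings rather than the bag-of-words counts used here.

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding"; production systems use learned dense vectors.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question, paragraphs, k=1):
    # Rank paragraphs by similarity to the question in vector space.
    q = embed(question)
    scored = sorted(paragraphs, key=lambda p: cosine(q, embed(p)), reverse=True)
    return scored[:k]

docs = [
    "JuiceFS is a distributed file system built on object storage.",
    "Vector search ranks paragraphs by embedding similarity.",
    "Kubernetes schedules containers across a cluster.",
]
top = retrieve("how does vector search rank paragraphs", docs)
print(top[0])
```

A generation step would then feed the retrieved paragraph plus the question to a large language model, which is what lets the next-generation systems return a direct answer instead of just a relevant passage.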

10:20-10:40

One-stop Easy-to-use Practice for MindSpore Large Model

Artificial intelligence has gradually moved from "refining models" to "refining large models". Compared with traditional models trained for specific application scenarios, large models have strong generalization ability and are no longer limited to a single scenario. They therefore require larger and broader data input and stronger computing power for training, at costs most developers cannot afford. How to lower the threshold for training and applying large models has become a new challenge. In this topic, we will share practical experience with MindSpore's one-stop, easy-to-use large model platform, which integrates model selection, online inference, and online training. It supports online experience and fine-tuning of large models, so that developers can get hands-on with applications such as text-to-text generation, text-to-image generation, and remote-sensing detection based on large models.

10:40-11:00

Application and Practice of AI Database OpenMLDB

AI has become an indispensable part of the computer infrastructure, and databases optimized for AI scenarios have emerged. AI databases not only need to meet the requirements of feature engineering and machine learning model deployment in terms of functionality, but also have higher requirements for offline and online performance. This sharing will take the OpenMLDB project as an example to introduce in-depth the application scenarios and performance optimization of AI databases, achieving rapid implementation of specific AI scenarios and several times or even dozens of times performance improvement.
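One core difficulty behind "offline and online" requirements is keeping training-time and serving-time feature values consistent. A minimal sketch in plain Python (hypothetical, not OpenMLDB's API) shows the idea: the same windowed-aggregation logic computes a feature over a live stream, and replaying it over historical rows yields identical offline values.

```python
from collections import deque

class SlidingAvg:
    """Online computation of a sliding-window average feature; replaying the
    same logic over historical rows gives the consistent offline value."""

    def __init__(self, window):
        self.window, self.buf, self.total = window, deque(), 0.0

    def update(self, value):
        # Incremental update: O(1) per row, as an online feature store needs.
        self.buf.append(value)
        self.total += value
        if len(self.buf) > self.window:
            self.total -= self.buf.popleft()
        return self.total / len(self.buf)

online = SlidingAvg(window=3)
stream = [10.0, 20.0, 60.0, 40.0]   # e.g. transaction amounts arriving in order
features = [online.update(v) for v in stream]
print(features)  # [10.0, 15.0, 30.0, 40.0]
```

Because offline feature backfill and online serving share one definition, the model sees the same feature distribution in training and in production, which is the consistency guarantee an AI database aims to provide.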

11:00-11:20

Vector database: Massive memory for AIGC

During the era of AIGC, vector databases are playing an increasingly important role in processing massive unstructured data. This sharing will focus on how vector databases empower AI in the wave of AIGC.

11:20-11:55

PyTorch 2.0: the journey of bringing compiler technologies to the core of PyTorch

PyTorch 2.0 uses compilers to deliver faster training and inference without sacrificing the flexibility and ease of use of PyTorch. This talk will provide an overview of the technology stack behind the new torch.compile() API, discussing the key features of PyTorch 2.0, including its full backward compatibility and 43% faster model training. We will introduce the various stack components, such as TorchDynamo, AOTAutograd, PrimTorch, and TorchInductor, and how they work together to streamline the model development process. Attendees will gain a deeper understanding of the PyTorch 2.0 architecture and the benefits of incorporating compiler technologies into deep learning frameworks.

13:30-14:00

Deep learning platform + large models to solidify the foundation of industrial intelligence

This speech combines the latest trends in generative AI and Baidu's practice, introducing the progress of Baidu's deep learning platform + large model core technology research and development, product innovation, and ecological construction. The speech also shares thoughts on the development of an industrial-grade open-source platform for deep learning based on PaddlePaddle and the integration of industry and education to build an ecological system under new trends.

14:00-14:20

When Federated Learning Meets Large Language Models

Federated learning enables the collaborative training of a model by multiple data sources without the need to share their data. In recent years, large language models based on transformers have become increasingly popular. However, these models present challenges due to their high computational resource requirements and complex algorithms. In this presentation, we will introduce FATE’s latest efforts in applying federated learning to large language models such as GPT-J, ChatGLM-6B, GLM, and LLaMA in financial use cases. FATE combines the distributed training mechanism of federated learning with large models to keep sensitive data from all parties within their local domains while allowing for computational investment based on each party’s actual data volume. This enables joint training of large models and mutual benefit. We will also discuss technical and practical considerations, real-world use cases, and the need for privacy-preserving mechanisms.
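The weighted aggregation at the heart of federated training can be shown in a few lines. This is a generic FedAvg-style sketch, not FATE's actual API: each party trains locally, then only parameters (never raw data) are combined, weighted by local sample counts.

```python
def fed_avg(party_weights, party_sizes):
    """Weighted average of per-party model parameters (FedAvg-style).

    party_weights: list of parameter vectors, one per data holder
    party_sizes:   number of local samples each party trained on
    """
    total = sum(party_sizes)
    dim = len(party_weights[0])
    return [
        sum(w[i] * n for w, n in zip(party_weights, party_sizes)) / total
        for i in range(dim)
    ]

# Two parties: the one with more data pulls the average toward its parameters,
# matching "computational investment based on each party's actual data volume".
a = [1.0, 0.0]   # party A's parameters, trained on 100 local samples
b = [0.0, 1.0]   # party B's parameters, trained on 300 local samples
print(fed_avg([a, b], [100, 300]))  # [0.25, 0.75]
```

For large language models the same idea is typically applied only to small adapter or fine-tuning parameters rather than all weights, since shipping full model parameters each round would be prohibitively expensive.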

14:20-14:40

Model inference optimization, exploring the potential of AI implementation

The trend of large models is unstoppable, and how to improve model inference efficiency has become an urgent problem. This sharing will introduce the current status and trends of model inference optimization technology, and share Adlik's practice in this field.

14:40-15:00

Xtreme1: the next-generation multimodal open-source training data platform

UBS Global research report found that 70%-90% of AI engineers' time is spent on training data. Many algorithms are already very good in practice, and data has become a new bottleneck for developing AI models. Based on this situation, the BasicFinder team developed the Xtreme1 training data platform, dedicated to building the easiest-to-reach open-source Data-Centric MLOps infrastructure to connect people, models and data. Xtreme1 is the world's first open-source tool that supports multi-modal data annotation and introduces ontology to penetrate different AI clients' problem abstractions. It fully follows cloud-native architecture principles to ensure service performance scalability, deployment flexibility, and service resilience in case of failures.

15:00-15:20

OPPO's exploration and practice in the field of mobile graphics technology - O3DE Mobile WG and shaderNN

In recent years, with the continuous improvement of mobile computing power and the rapid development of deep learning research, especially the increasing demand for data security and the maturity of small network models, more and more inference that was originally executed in the cloud has been transferred to mobile devices. The deep learning inference on mobile platforms involves hardware platforms, drivers, compilation optimization, model compression, operator algorithm optimization and deployment. Efficient inference frameworks suitable for system business development have become an urgent need and development focus in the industry. Based on efficient AI inference requirements for graphic image post-processing on mobile devices to reduce business integration costs and improve efficiency, we have developed ShaderNN - an efficient inference engine based on GPU shader. It directly performs efficient inference based on GPU textures to save I/O time without relying on third-party libraries. It is compatible across different hardware platforms, supports mainstream deep learning training frameworks, convenient for optimization, integration, deployment and upgrade.

15:20-15:40

Next-generation knowledge tools: user-centered personalized language models and hybrid deployment strategies.

/

15:40-16:00

Intel’s Journey with PyTorch: Democratizing AI with ubiquitous hardware and open software

PyTorch is one of the most popular frameworks for deep learning and machine learning. Intel has been a long-term contributor and evangelist in the PyTorch community. In this talk, we will share our experiences in contributing to PyTorch, both in the core framework and in its ecosystem libraries. We will elaborate on our optimizations in torch.compile(), the flagship new feature of PyTorch 2.0, and showcase its benefit on CPUs. We will demonstrate the value of open software and ubiquitous hardware by showcasing generative AI applications powered by diffusion and large language models running with PyTorch on Intel CPUs and GPUs. We will also touch on some of the PyTorch ecosystem projects we have contributed to, such as HuggingFace, DeepSpeed, and PyG. Finally, we will discuss our future plans and vision for continuing our partnership with the PyTorch Foundation and advancing the state of the art in deep learning and machine learning.

16:00-16:20

DeepRec: High-performance deep learning framework for recommendation scenarios

DeepRec is a high-performance deep learning framework for recommendation scenarios, open-sourced by Alibaba Cloud's machine learning platform PAI. It has deeply optimized the performance of sparse models in distributed computing, graph optimization, operators, runtime, and other aspects. At the same time, it provides functions such as dynamic elastic features, dynamic elastic dimensions, adaptive EmbeddingVariable, and incremental model export and loading. DeepRec is applied internally in Alibaba Group's core businesses such as Taobao, Tmall, Alimama, AMap, ITao, AliExpress, and Lazada to support large-scale sparse training with billions of features and trillions of samples. Since its open-source release over a year ago, DeepRec has been widely used in search, advertising, and recommendation business scenarios by dozens of companies, bringing significant business value.

16:20-16:40

Building a production ecosystem around MegEngine's algorithms.

Currently, the application of AI technology has been validated in various fields, where it delivers higher productivity than traditional algorithms. However, with demand growing for large numbers of AI algorithms, the traditional production method, which focuses on data collection, annotation, model training, validation, and delivery for one specific scenario at a time, has become a bottleneck for AI implementation. The MegEngine team proposes a standardized algorithm production method built around the MegEngine training framework at each stage to lower the threshold for AI implementation. To achieve this, MegEngine has developed a series of components that together form the ecosystem of MegEngine's algorithm production and are gradually being open-sourced.

16:40-17:00

Primus - Universal Distributed Training Scheduling Framework

In recent years, machine learning technology has taken root in various application fields and brought significant improvements. To cope with ever-increasing training data volumes and model sizes, distributed training has emerged for more efficient model training. As a general-purpose distributed training scheduling framework, Primus provides a universal interface that bridges distributed training tasks and physical computing resources, allowing data scientists to focus on designing learning algorithms, and letting training tasks run on different types of computing clusters such as Kubernetes and YARN. On this foundation, Primus also provides the fault tolerance and data scheduling capabilities required for distributed training, further enhancing its usability. Outline: 1. Overview of distributed training 2. Structure and functionality of Primus (a. data scheduling capability, b. Primus UI, c. Primus API) 3. Current status and future development plans for Primus. You will gain: insight into ByteDance's current status and practices with Primus, and the related challenges and future prospects in the field of distributed training.
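The data-scheduling and fault-tolerance duties of a training scheduler can be sketched generically (hypothetical helper names, not Primus's real interface): assign input splits to workers, then rebalance a failed worker's splits onto the survivors.

```python
def assign_splits(splits, workers):
    """Round-robin assignment of input data splits to workers, the basic
    job of a training data scheduler."""
    plan = {w: [] for w in workers}
    for i, s in enumerate(splits):
        plan[workers[i % len(workers)]].append(s)
    return plan

def reassign_on_failure(plan, failed):
    """Fault tolerance: move a failed worker's splits to the survivors so
    the training job can continue without losing data coverage."""
    orphaned = plan.pop(failed)
    survivors = list(plan)
    for i, s in enumerate(orphaned):
        plan[survivors[i % len(survivors)]].append(s)
    return plan

plan = assign_splits([f"part-{i:02d}" for i in range(6)], ["w0", "w1", "w2"])
plan = reassign_on_failure(plan, "w1")
print(plan)
```

A real scheduler layers much more on top (locality-aware placement, checkpointed progress per split, cluster-specific resource negotiation with Kubernetes or YARN), but the split-to-worker mapping is the core abstraction.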

17:00-17:20

Boost ML Upstream Frameworks with transparent backend graph compilers seamlessly

As an emerging trend observed from cloud to edge, AI workloads tend to be managed and orchestrated by top ML frameworks like Ray. At the same time, AI acceleration is enabled by diverse vendors' AI accelerators, such as the Nvidia GPU series, Intel Movidius VPU, and Google TPU, among many ASIC-based accelerators. On the other hand, a variety of graph compilers, such as TVM, Intel OpenVINO, and TensorRT, exist to improve ML performance, but the landscape is fragmented. Users therefore face challenges in empowering these heterogeneous AI accelerators with different software accelerations in the real world, because no general, unified framework supports them naturally. Here we review whether and how our transparent backend acceleration technologies can boost ML performance automatically on heterogeneous AI accelerators, combining mainstream ML graph compilers seamlessly with popular upstream frameworks such as TensorFlow, PyTorch, TorchServe, and TensorFlow Serving. With our zero-code-change approach to mainstream ML frameworks, users can see their ML/AI performance boosted in their original AI applications.

17:20-17:40

Challenges and Attempts in Developing Multi-modal AI Applications

Compared to traditional single-modal AI applications, there are still many technical issues that need to be solved in the development of multi-modal AI applications. In this context, Jina AI explores this new application scenario and technological challenges in depth, providing developers with a one-stop MLOps platform. Jina AI empowers all developers to implement super cool multi-modal AI ideas.

Check Sessions

09:00-09:20

Istio Ambient Mesh Present and Future

Recently, developers around the world witnessed the release of a new mode for service mesh Istio called Ambient Mesh, which is completely different from Sidecar. Since its open source release in 2017, Sidecar has been regarded as a revolutionary innovation for zero-intrusion agents. However, after five years, users have found that there are many side effects that are difficult to solve through Sidecar. In September of this year, in addition to Sidecar, the Istio community announced another data plane mode called Ambient Mesh, which aims to simplify operations, increase application compatibility and reduce infrastructure costs. In this presentation, I will give an overall introduction to the Ambient mode and demonstrate how it works. Then compare it with the Sidecar mode. Finally, I will share the official views of the Istio community from the perspective of core contributors: how Ambient Mesh will evolve in the future and why the Istio community is redesigning a new lightweight proxy "ztunnel" using Rust.

09:20-09:40

KubeSkoop: An automated diagnostic system for container network issues

Kubernetes itself is relatively complex, with a high threshold for use, and users often encounter various problems when starting container migration. Due to the lack of skills and tools for fault diagnosis, users often feel frustrated and even give up on containerizing their business. Network issues are particularly severe, as Kubernetes network virtualization makes network problems difficult to troubleshoot. KubeSkoop is designed to reduce the difficulty of troubleshooting network issues and to enable people without networking knowledge to locate them automatically through self-service tooling. Given a source and destination address, KubeSkoop automatically builds the access path through the container network, collects and analyzes the configuration of each network node on the link, and combines eBPF kernel monitoring with IaaS-level network configuration checks to identify the root causes that make the network unavailable, greatly reducing the time required to locate network problems so that even users without any networking skills can use it. KubeSkoop is currently deployed in Alibaba Cloud Container Service environments as a self-service tool and has solved large-scale Kubernetes cluster networking issues for many customers. Alibaba Cloud has recently open-sourced KubeSkoop, which supports diagnostics for mainstream networking plugins and cloud vendors' Kubernetes clusters worldwide. This topic will introduce KubeSkoop's diagnostic system architecture design, as well as some technical details of how its diagnostic capabilities are implemented.

09:40-10:00

Cloud-native microservice practice based on Kitex Proxyless and Istio

With the increasing popularity of Istio, the classic sidecar model is well known. Its biggest highlight is that it is non-intrusive to business code, and it is precisely this advantage that has made the concept of service mesh take hold and meet most scenario requirements. However, in performance-sensitive scenarios, the sidecar mode inevitably brings problems such as application protocol binding, performance loss, resource overhead, and increased operational complexity. CloudWeGo-Kitex is an RPC framework that supports multiple protocols; ByteDance mainly uses the Thrift protocol internally and has heavily optimized it, and Kitex hopes to help other enterprises quickly build microservices. However, the Kitex-gRPC with Istio-Sidecar solution runs into the problems above, and at the same time we hope that users of the Thrift protocol can implement service governance based on Istio. Therefore, for multi-protocol support, Kitex supports a Proxyless mode based on Istio. Compared with gRPC accessing Istio directly, some issues remain, and this sharing will introduce them along with how to solve them. We expect Kitex Proxyless to meet performance-sensitive business demands while enriching deployment forms under a unified governance plane with heterogeneous data planes. This session will walk through everything from the implementation principles of Kitex Proxyless to landing full-chain traffic lanes based on it.

10:00-10:20

Use container tools to build and manage WebAssembly applications

Wasm has emerged as a secure, portable, lightweight, high-performance runtime sandbox suitable for cloud-native workloads such as microservices and serverless functions. Docker Desktop recently integrated WasmEdge and now supports Wasm containers. There is already a large body of battle-tested tooling for creating, managing, and deploying Linux container applications in development and production, and developers want to use those same tools to manage their Wasm applications, reducing learning curves and operational risk. More importantly, using the same tools lets Wasm containers run side by side with Linux containers. This provides architectural flexibility: some workloads (lightweight, stateless, transactional, scalable) can run in Wasm containers while others (long-running, heavyweight ones) run in Linux containers. In this talk I will show how to use Docker Desktop, Podman, containerd, and various versions of Kubernetes to create, publish, share, and deploy real-world Wasm applications. The examples mix container types, demonstrating how Wasm containers work alongside existing Linux container applications.

10:20-10:40

OpenKruise: Comprehensive Enhancement of Cloud-Native Application Management Capability

Kubernetes' native workloads (Deployment, StatefulSet) are the best-known way to manage cloud-native applications, yet from small and medium-sized startups to large Internet companies, the larger the application scenario, the less these native workloads can meet complex deployment demands. Many companies have therefore developed custom workloads suited to their own scenarios; among them, only OpenKruise, open-sourced by Alibaba Cloud and now a CNCF incubating project, has matured in terms of generality, completeness and stability. This session will start from Kubernetes' native workloads to introduce the responsibilities and implementation basics of cloud-native application workloads, analyze the real demands on workloads in ultra-large-scale business scenarios, and discuss how OpenKruise meets those needs and how it will evolve in the open-source ecosystem:
1. Problems and challenges in cloud-native application deployment
2. How OpenKruise meets deployment demands in large-scale business scenarios
3. Practical application management with OpenKruise, using Alibaba's scenarios as an example

10:40-11:00

When FinOps Meets Cloud Native - How Tencent Optimizes Cloud Costs Based on Crane

User research shows that more and more companies are migrating their businesses to Kubernetes. However, the packing rate and utilization of cloud resources are far lower than expected, resulting in significant waste of cloud spending. Tencent Cloud follows the "cloud financial management" method of FinOps and practices resource optimization and cost optimization based on Kubernetes. We have summarized these cloud optimization experiences and open-sourced them as Crane: Cloud Resource Analytics and Economics. I will share Tencent's experience in implementing application profiling, cost monitoring, and hybrid deployment in large-scale cluster scenarios based on Crane.

11:00-11:20

Cloud-native technology helps to reduce energy consumption and emissions in data centers

Green computing has become a goal pursued across industries. In the digital economy era, "computing power is productivity" has become an important industry consensus, but behind the growth of computing power, the energy consumption of data centers also grows. In the context of carbon-peaking and carbon-neutrality strategies, improving efficiency while reducing energy consumption is a grand proposition. When it comes to green computing, outside attention generally focuses on reducing data center PUE, but it also includes using computing resources sensibly: under the premise of guaranteed service stability, reasonable allocation of computing resources can raise utilization and reduce the number of servers in use, thereby cutting carbon emissions. Through efficient use of computational resources, cloud-native technology has significant energy advantages over traditional cloud computing; it has gradually become the mainstream technical foundation for cloud services and offers more advanced paths toward green computing. The topic will be shared from seven aspects:
- comparison of runtime resource utilization;
- comparison of static service consumption;
- comparison of microservice frameworks;
- efficiency comparison of cloud management platforms;
- analysis of energy savings in R&D services;
- analysis of energy savings from related cloud-native technologies;
- other non-obvious key points when comparing energy-saving measures.

11:20-11:40

Kubernetes: Cross-Cluster Traffic Management Practice

In today's cloud-native environments, it is increasingly common for companies to run multiple Kubernetes clusters to support their applications. As applications grow more complex and must sustain high levels of user traffic, effective traffic management between clusters becomes crucial: done well, it keeps applications running smoothly and reliably and gives users the best possible experience. Understanding the best practices and tools for managing cross-cluster traffic therefore helps organizations improve the performance and reliability of their applications. Starting from the driving factors behind multi-cluster environments, this session introduces how to achieve cross-cluster application communication for high availability, disaster recovery, and global load balancing.
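
The high-availability and global-load-balancing goals above reduce to one decision: which healthy cluster should receive the next request. Here is a minimal weighted round-robin sketch where failover falls out of the health filter; the cluster names and weights are invented, and this is not any specific product's algorithm:

```go
package main

import (
	"fmt"
	"sort"
)

// pickCluster is a minimal weighted round-robin over healthy clusters.
// Unhealthy clusters are filtered out first, so disaster-recovery
// failover to the remaining clusters happens automatically.
func pickCluster(weights map[string]int, healthy map[string]bool, tick int) string {
	names := make([]string, 0, len(weights))
	for n := range weights {
		if healthy[n] {
			names = append(names, n)
		}
	}
	sort.Strings(names) // deterministic ring order
	var ring []string
	for _, n := range names {
		for i := 0; i < weights[n]; i++ {
			ring = append(ring, n)
		}
	}
	if len(ring) == 0 {
		return "" // no healthy cluster left
	}
	return ring[tick%len(ring)]
}

func main() {
	weights := map[string]int{"cluster-a": 2, "cluster-b": 1}
	healthy := map[string]bool{"cluster-a": true, "cluster-b": true}
	for tick := 0; tick < 3; tick++ {
		fmt.Println(pickCluster(weights, healthy, tick))
	}
}
```

Real global load balancers layer latency, capacity, and locality on top of this, but the weight-plus-health core is the same.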

11:40-12:00

Private key protection for workloads in service mesh

HSM SDS Server is open-source software, available at https://github.com/istio-ecosystem/hsm-sds-server. The project builds on the service mesh project Istio, follows Envoy's SDS extension standard, and implements an external SDS server for the service mesh backed by a Hardware Security Module (HSM). With it, users can keep credentials managed by Istio/Envoy in a more secure environment through the external SDS server. Besides managing newly created workload credentials, it also lets users upload existing workload credentials and manage them at a higher security level, including functions such as certificate rotation. The project covers two scenarios for storing workload credentials: cloud-native service mesh workloads and service mesh gateways. It uses Intel® SGX technology to protect workload private keys inside the mesh's data plane: private keys are created and stored in SGX enclave memory and can only be accessed by applications authorized with the SGX key handle. User private keys are therefore never stored anywhere on the system in plaintext, achieving a higher level of security.

13:30-13:50

A workflow orchestration engine called JobFlow based on the cloud-native batch computing platform Volcano

Workflow orchestration engines are widely used in high-performance computing, AI, biomedicine, image processing, beauty enhancement, game AGI, scientific computing and other scenarios, helping users simplify the management of parallelism and dependencies between multiple tasks and significantly improving overall computational efficiency. JobFlow is a lightweight task-flow orchestration engine focused on job scheduling for the cloud-native batch computing platform Volcano. It provides Volcano with various job dependency types, such as completion dependencies, probe dependencies, and job-failure-rate tolerance dependencies, and supports complex control primitives such as serial or parallel execution, if-then-else statements, selection statements, and loops. In fields such as HPC, AI, and big data analysis, users can define concise task-processing templates with JobFlow to reduce waiting time and save substantial manpower and time costs. JobFlow has been applied at a well-known research institute in China, where task-flow orchestration solved problems such as user data preheating/recovery, business resource limits, and node crashes caused by excessive IO, improving task computation efficiency on equivalent hardware. In this session, Wang Yang and Zhou Mingcheng will introduce:
1. The main challenges Volcano faces in workflow orchestration scenarios
2. The design concept and application scenarios of JobFlow
3. Application practice and benefits of JobFlow in production environments
Ecosystem background: Volcano is the industry's first cloud-native batch computing project, donated by Huawei Cloud to the Cloud Native Computing Foundation (CNCF) in 2019. It is currently in the incubation stage, with participating companies including Huawei, AWS, Baidu, Tencent, Jingdong, and Xiaohongshu. JobFlow is a sub-project incubated within the Volcano community, led by Boyun together with contributions from community developers. We believe this session will show you a different way of scheduling Volcano jobs. The audience will also learn:
1. Boyun's management practice for task orchestration such as AI and big data analysis
2. The design background of JobFlow, the difficulties encountered, and their solutions

13:50-14:10

ByteDance's Large-Scale Cluster Federation Technology Practice Based on Kubernetes

With the evolution of cloud-native within various business systems in ByteDance, the number and scale of k8s clusters have grown rapidly, leading to increasing maintenance costs. Additionally, the numerous and diverse cluster types also bring cognitive burden for users when selecting a deployment cluster. To solve these problems, we have independently developed a large-scale cluster federation system called KubeAdmiral to provide users with a unified service deployment entrance that facilitates task load transfer between multiple clusters. This lays the foundation for creating a unified resource pool and improving resource utilization efficiency.

14:10-14:30

Cloud-native edge intelligent device management framework: KubeEdge DMI

Edge device management is an important application scenario in edge computing, facing many problems such as edge device lifecycle management, mapping cloud-native digital twin models for edge devices, lightweight edge frameworks, and how to store, distribute and consume data collected from massive edge devices. KubeEdge is a cloud-native open source platform for edge computing built on Kubernetes and has become a CNCF incubation project. KubeEdge supports the collaboration of cloud-edge applications in complex edge-cloud network environments and provides an Edge Device Management Framework (DMI) that supports various protocols for managing edge devices in the form of cloud-native digital twin models. This topic introduces the DMI device management framework of KubeEdge. Under the design of the DMI framework, devices are no longer just data sources but are abstracted as microservices that provide data services to device data consumers in a cloud-native way. The device data access under the DMI framework supports multiple scenarios and is very flexible. The DMI framework can provide strong support for managing cloud-native intelligent devices based on KubeEdge. This topic is a joint presentation with Liu Chenlin, R&D engineer at Shanghai Daoke Network Technology Co., Ltd. and member of the KubeEdge community.

14:30-14:50

Best practices for machine learning platform storage based on CubeFS

To meet the company's growing AI training needs, OPPO has built a one-stop machine learning platform. With rapid business growth, the diversity and surge of training tasks pose challenges to storage scalability, cost, and performance, and the storage systems used in the early days could no longer keep up. The speakers will share how they used the cloud-native distributed file system CubeFS to build storage for 50 PB of data and tens of billions of small files, unify storage for the machine learning platform across a hybrid cloud, and support daily AI training for 200 teams with 10,000+ training tasks per day. They will focus on CubeFS's solutions and practical experience with metadata management for tens of billions of small files, storage management and cache acceleration under the hybrid cloud architecture, and flexible hot/cold data placement throughout the data lifecycle.
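
Why is metadata for tens of billions of small files hard? No single node can hold it, so the namespace must be spread over many metadata partitions with a deterministic placement rule. The sketch below uses simple hashing to illustrate the principle; it is purely illustrative — CubeFS itself uses splittable metadata partitions rather than a fixed hash, and the paths shown are invented:

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// shardFor maps a file path to one of n metadata partitions by hashing —
// a generic way to spread billions of small-file inodes across many
// metadata nodes so no single node holds the whole namespace.
func shardFor(path string, n uint32) uint32 {
	h := fnv.New32a()
	h.Write([]byte(path))
	return h.Sum32() % n
}

func main() {
	for _, p := range []string{"/train/img_0001.jpg", "/train/img_0002.jpg"} {
		fmt.Printf("%s -> metadata partition %d of 8\n", p, shardFor(p, 8))
	}
}
```

The essential property is determinism: every client computes the same partition for the same path, so lookups need no central directory.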

14:50-15:10

From load balancing to a cloud-native traffic management platform

The topic will analyze the current situation and problems of load balancing, explore the demand and development trend of traffic management platform. Through the BFE open source project, it will analyze the advanced features of application load balancing and its support for Kubernetes. It will also introduce a new generation of security architecture and explain how to integrate security functions into BFE.

15:10-15:30

Exploration and Practice of Multi-cluster HPA Based on Karmada by Ctrip

With the rapid development of Ctrip's business, the Kubernetes cluster has rapidly expanded to support online businesses and offline businesses including big data, machine learning and other scenarios. In order to improve resource utilization, enhance platform reliability and reduce cluster operation and maintenance costs, Ctrip has built a new generation of multi-cloud and multi-cluster architecture platforms based on Karmada, and extended key capabilities for cross-cluster elastic scaling of applications. This sharing mainly involves Ctrip's multi-cluster architecture as well as exploration and practice of cross-cluster application elastic scaling.
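The cross-cluster elastic scaling mentioned above builds on the standard Kubernetes HPA formula, desired = ceil(current × currentMetric / targetMetric); a multi-cluster HPA must then split the result across member clusters. The weighted split below is an illustrative policy only, not Karmada's implementation, and the numbers are invented:

```go
package main

import (
	"fmt"
	"math"
)

// desiredReplicas is the standard Kubernetes HPA formula:
// desired = ceil(current * currentMetric / targetMetric).
func desiredReplicas(current int32, currentMetric, targetMetric float64) int32 {
	return int32(math.Ceil(float64(current) * currentMetric / targetMetric))
}

// splitByWeight divides the total across two member clusters by static
// weights — a toy distribution policy for illustration.
func splitByWeight(total, weightA, weightB int32) (int32, int32) {
	a := total * weightA / (weightA + weightB)
	return a, total - a
}

func main() {
	// 10 replicas running at 90% CPU against a 60% target.
	total := desiredReplicas(10, 90, 60)
	a, b := splitByWeight(total, 60, 40)
	fmt.Println(total, a, b)
}
```

Keeping the metric aggregation global but the replica placement per-cluster is what lets scaling decisions survive the loss of any single member cluster.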

15:30-15:50

Practices and thoughts of China Telecom Network Cloud Native

This talk addresses the problems of closed network elements, cloudization in form only, and low resource utilization in the cloudization of virtualized network elements, focusing on making the network functions of the network elements themselves cloud-native. It abstracts the commonality of CNF network functions, fully considers the flexibility and elastic scalability of the cloud, proposes a target architecture for cloud-native network elements, and puts forward a general Framework of CNF based on that architecture. It also presents an implementation plan combining open-source products, along with a feasibility verification. The plan opens up the black box of network elements and changes the form in which they provide services externally, giving them external observability.

15:50-16:10

Clusterpedia - Aggregated Retrieval of Resources in Multi-Cluster Scenarios

The multi-cluster field is currently developing rapidly. Many projects and tools can already distribute and deploy resources across multiple clusters, but viewing those resources across clusters at the same time can be difficult. Clusterpedia solves this: it lets users view resources from multiple clusters simultaneously and supports complex search conditions. Clusterpedia is also compatible with Kubernetes OpenAPI's list/get methods, so even without a UI, existing tools such as kubectl can still retrieve the data. For the many management platforms in the multi-cloud ecosystem (such as Karmada, Clusternet, Cluster API, or self-built cloud management platforms), Clusterpedia provides cluster auto-discovery to stay compatible with them and reduce the extra operational burden of running Clusterpedia.

16:10-16:30

FluidTable: Data Table Abstraction and Elastic Cache System in Cloud-Native Environment

Data-intensive applications (such as deep learning and big data queries) face multiple data-access challenges on cloud-native platforms. To address them, Fluid, the CNCF open-source cloud-native elastic data acceleration system, has introduced technologies such as cloud-native data abstraction, elastic cache scaling, and collaborative orchestration of data and applications. This report introduces the newly released cloud-native data table abstraction in Fluid and its elastic cache scaling design, along with a performance evaluation.

16:30-17:10

Kubeflow Chart: One IDE for MLOps

Outline:
- Introduction to the kubeflow-chart project
- An MLOps IDE based on JupyterLab
- Distributed training with workflow scheduling
- How enterprises can quickly adopt kubeflow-chart
Audience: developers and enterprises with MLOps and AI platform requirements; professionals in the AI field.

Check Sessions

09:30-09:45

A responsive front-end development framework.

A front-end development framework that uses virtual DOM and provides API features consistent with React, achieving seamless compatibility with related ecosystems. At the same time, it also provides another set of fine-grained responsive APIs based on different rendering methods to improve page performance.

09:45-10:00

Overview of Baidu Intelligent Edge and MQTT Message Middleware Open Source Projects

By introducing Baidu's intelligent edge open source framework Baetyl and the soon-to-be-open-source MQTT message middleware BifroMQ, this presentation aims to explain Baidu's technical layout and open source thinking in the Internet of Things field, and attract more developers to use and maintain related open source projects.

10:30-10:45

Build an open source, independent and controllable IoT operating system OneOS

China Mobile IoT has launched a lightweight open-source operating system called OneOS for the Internet of Things (IoT) field. The system uses the Apache 2.0 open source license and is the first in China to pass IEC 61508 SIL3 functional safety certification and CCRC EAL4+ information security certification, with a kernel self-reliance rate of 100%. It helps developers achieve one-stop rapid development. Currently, it has been widely used in various scenarios such as industrial control, consumer electronics, smart cities, and smart security. It has accumulated more than 360 partners and installed over 32 million devices.

10:45-11:00

The Evolution of Cloud-Native Microservice Governance Technology towards Proxyless Architecture

The evolution of cloud-native has never stopped in the industry, but in general, microservices and containerization remain its two perennial topics. This presentation focuses on the evolution of microservice architecture, analyzes the limitations and advantages of the various architectures, and introduces the current state and future evolution of the cloud-native proxyless microservice architecture and its representative open-source project, Sermant.

14:00-14:15

openGemini: The Core Technology Behind High Performance

With the rapid adoption of IoT across industries, big data storage and analysis are gradually becoming new business requirements. Traditional databases can no longer meet the performance demands of storing and analyzing massive telemetry data, so time-series databases with high concurrency, high throughput, low cost, and low latency are drawing more and more enterprise attention. openGemini is an open-source, domestically developed time-series database with excellent performance. This presentation will reveal the core technology behind it.

14:15-14:30

openBrain open source vulnerability intelligence-aware technology

openBrain is an automated vulnerability intelligence-aware system for the open source community. By aggregating hundreds of vulnerability intelligence data sources, it achieves real-time and efficient tracking and reporting of vulnerabilities, improves the efficiency of community vulnerability handling, and reduces manpower costs. This report will introduce the security best practices of openBrain in China's top open source communities.

15:00-15:15

Zhilu OS - an open source intelligent networked roadside unit operating system

Zhilu OS is an open-source "vehicle-road-cloud integration" software platform developed under the guidance of the Ministry of Industry and Information Technology, with high-level autonomous driving as its aim. This sharing will introduce the version evolution of Zhilu OS, the architecture of roadside edge perception system, V2X framework for autonomous vehicles, and the latest features and developer guide for Zhilu OS 1.0 version.

15:15-15:30

Enterprise-level open source front-end component library OpenTiny

This topic introduces OpenTiny, Huawei Cloud's open-source brand, which packages the front-end application development infrastructure Huawei has accumulated over many years. TinyVue, the Vue version of OpenTiny's component library, has three core strengths. First, it is configuration-driven, supporting both the traditional declarative usage of component libraries and attribute-based configuration. Second, it offers more than 150 components covering over 90% of Huawei's internal IT application scenarios, with more than 3,000 internal users at present. Third, it works across platforms and themes: a single codebase adapts to both PC and mobile, and it is the industry's first component library to support Vue 2 and Vue 3 simultaneously.

Check Sessions

Speakers
Harris Hui
SPONSORS
DIAMOND SPONSOR
GOLD SPONSOR
SILVER SPONSOR
Partners
Updating
Cooperative Community
Updating
Support Media
Venue & Travel
Shenzhen Convention & Exhibition Center
  • Address: Fuhua 3rd Rd, Futian CBD, Futian District, Shenzhen, Guangdong Province, China
World Expo Center
  • Address: 1500 Shibo Ave, Pudong, Shanghai, China

Contact Us

General Enquiry

Contact: 栾春岩

Email Address: luanchunyan@oschina.cn

WeChat (Tel): 15801253940

Business Partnership

Contact: Hao Ping

Email Address: haoping@oschina.cn

WeChat (Tel): 13520780247

Media Partnership

Contact: 栾春岩

Email Address: luanchunyan@oschina.cn

WeChat (Tel): 15801253940

Subscribe to our WeChat official account to get the latest conference information.

© OSChina (开源中国, OSChina.NET). All rights reserved by 深圳市奥思网络科技有限公司. ICP filing: 粤ICP备12009483号