IETF and IRTF Deep-Dive Training in Beijing
I gave a talk about the Internet Research Task Force (IRTF) at an IETF Standards Culture and Process Deep-Dive Training that took place in Beijing on May 8th, 2025. The training was hosted by the China Internet Network Information Center (CNNIC). My talk explained what the IRTF is, how it works, and how to best contribute to its work.
NetSenseML accepted at Euro-Par
Our paper on NetSenseML: Network-Adaptive Compression for Efficient Distributed Machine Learning has been accepted at the 31st International European Conference on Parallel and Distributed Computing (Euro-Par 2025).
Abstract:
Training large-scale distributed machine learning models imposes considerable demands on network infrastructure, often resulting in sudden traffic spikes that lead to congestion, increased latency, and reduced throughput, ultimately affecting convergence times and overall training performance. While gradient compression techniques are commonly employed to alleviate network load, they frequently compromise model accuracy due to the loss of gradient information.
This paper introduces NetSenseML, a novel network adaptive distributed deep learning framework that dynamically adjusts quantization, pruning, and compression strategies in response to real-time network conditions. By actively monitoring network conditions, NetSenseML applies gradient compression only when network congestion negatively impacts convergence speed, thus effectively balancing data payload reduction and model accuracy preservation.
Our approach ensures efficient resource usage by adapting reduction techniques based on current network conditions, leading to shorter convergence times and improved training efficiency. We present the design of the NetSenseML adaptive data reduction function and experimental evaluations show that NetSenseML can improve training throughput by a factor of 1.55x to 9.84x compared to state-of-the-art compression-enabled systems for representative DDL training jobs in bandwidth-constrained conditions.
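To illustrate the general idea (not the actual NetSenseML implementation), here is a minimal Python sketch of network-adaptive gradient compression: gradients are sent uncompressed while the measured bandwidth looks healthy, and top-k sparsification kicks in only under congestion. The class name, bandwidth threshold, and top-k ratio are hypothetical placeholders.

```python
import numpy as np

class AdaptiveCompressor:
    """Illustrative sketch: compress gradients only under congestion.

    The bandwidth threshold and top-k ratio are hypothetical parameters,
    not values taken from the NetSenseML paper.
    """

    def __init__(self, bandwidth_threshold_mbps=1000.0, topk_ratio=0.01):
        self.bandwidth_threshold_mbps = bandwidth_threshold_mbps
        self.topk_ratio = topk_ratio

    def compress(self, grad: np.ndarray, measured_bandwidth_mbps: float):
        """Return (payload, metadata); uncompressed when the network is healthy."""
        if measured_bandwidth_mbps >= self.bandwidth_threshold_mbps:
            return grad, None  # no congestion: send the full gradient
        # Congestion: keep only the largest-magnitude entries (top-k sparsification).
        flat = grad.ravel()
        k = max(1, int(self.topk_ratio * flat.size))
        idx = np.argpartition(np.abs(flat), -k)[-k:]
        return flat[idx], (idx, grad.shape)

    def decompress(self, payload, metadata):
        """Reconstruct a dense gradient from the (possibly sparse) payload."""
        if metadata is None:
            return payload
        idx, shape = metadata
        dense = np.zeros(int(np.prod(shape)), dtype=payload.dtype)
        dense[idx] = payload
        return dense.reshape(shape)

# Example: a congested link triggers sparsification, a healthy link does not.
comp = AdaptiveCompressor()
grad = np.random.randn(4, 256).astype(np.float32)
payload, meta = comp.compress(grad, measured_bandwidth_mbps=200.0)  # congested
restored = comp.decompress(payload, meta)
print(payload.size, "values sent instead of", grad.size)
```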
References
Yisu Wang, Xinjiao Li, Ruilong Wu, Huangxun Chen, Dirk Kutscher; NetSenseML: Network-Adaptive Compression for Efficient Distributed Machine Learning; 31st International European Conference on Parallel and Distributed Computing (Euro-Par 2025); August 2025; accepted for publication
Trochilus accepted at USENIX ATC
Our paper on Trochilus, titled Learning-Enhanced High-Throughput Pattern Matching Based on Programmable Data Plane, has been accepted at USENIX ATC 2025. This is joint work with Qing Li's group at Peng Cheng Lab; the first author is Guanglin Duan.
Abstract:
Pattern matching is critical in various network security applications. However, existing pattern matching solutions struggle to maintain high throughput and low cost in the face of growing network traffic and increasingly complex patterns. Moreover, managing and updating these systems is labor-intensive, requiring expert intervention to adapt to new patterns and threats. In this paper, we propose Trochilus, a novel framework that enables high-throughput and accurate pattern matching directly on programmable data planes, making it highly relevant to modern large-scale network systems. Trochilus innovates by combining the learning ability of model inference with the high-throughput and cost-effective advantages of data plane processing. It leverages a byte-level recurrent neural network (BRNN) to model complex patterns, preserving expert knowledge while enabling automated updates for sustained accuracy. To address the challenge of limited labeled data, Trochilus proposes a semi-supervised knowledge distillation (SSKD) mechanism, converting the BRNN into a lightweight, data-plane-friendly soft multi-view forest (SMF), which can be efficiently deployed as match-action tables. Trochilus minimizes the need for expensive TCAM through a novel entry cluster algorithm, making it scalable to large network environments. Our evaluations show that Trochilus achieves multi-Tbps throughput, supports various pattern sets, and maintains high accuracy through automatic updates.
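As a rough illustration of the semi-supervised distillation step (not Trochilus's actual BRNN or SMF construction), the following Python sketch has a toy "teacher" produce soft labels on unlabeled byte windows and fits a shallow forest "student" on those pseudo-labels, weighted by teacher confidence. All names, thresholds, and parameters are illustrative assumptions.

```python
# Illustrative semi-supervised distillation sketch (not the paper's BRNN/SMF):
# a "teacher" scores unlabeled byte windows, and a lightweight forest "student"
# is fit on the teacher's pseudo-labels, weighted by teacher confidence.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def teacher_predict_proba(byte_windows: np.ndarray) -> np.ndarray:
    """Stand-in for a trained byte-level RNN: returns P(no-match), P(match).
    Here: a toy rule that flags windows containing the byte 0x90."""
    contains = (byte_windows == 0x90).any(axis=1)
    p_match = np.where(contains, 0.95, 0.05)
    return np.column_stack([1.0 - p_match, p_match])

# Unlabeled traffic: 5,000 windows of 16 bytes each.
windows = rng.integers(0, 256, size=(5000, 16), dtype=np.uint8)

soft = teacher_predict_proba(windows)   # teacher soft labels
pseudo = soft.argmax(axis=1)            # hard pseudo-labels
confidence = soft.max(axis=1)           # weight confident samples more

# Shallow forest: small enough to be translated into match-action-style rules.
student = RandomForestClassifier(n_estimators=8, max_depth=4, random_state=0)
student.fit(windows, pseudo, sample_weight=confidence)

print("agreement with teacher:", (student.predict(windows) == pseudo).mean())
```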
References
- Guanglin Duan, Yucheng Huang, Zhengxin Zhang, Qing Li, Dan Zhao, Zili Meng, Dirk Kutscher, Ruoyu Li, Yong Jiang, and Mingwei Xu. Learning-Enhanced High-Throughput Pattern Matching Based on Programmable Data Plane. USENIX ATC 2025; accepted for publication
- Extended Summary by Peng Cheng Lab
Rethinking Dynamic Networks and Heterogeneous Computing with Automatic Parallelization accepted at ACM APNET
Our paper on Rethinking Dynamic Networks and Heterogeneous Computing with Automatic Parallelization has been accepted by the 9th Asia-Pacific Workshop on Networking (APNET'25).
Abstract:
Hybrid parallelism techniques are crucial for the efficient training of large language models (LLMs). However, these techniques often introduce differentiated computational and communication tasks across nodes. Existing automatic parallel planning frameworks typically fail to consider both node heterogeneity and dynamic changes in network topology simultaneously, limiting their practical performance. In this paper, we address this issue by positioning heterogeneous nodes within dynamic network environments and employing a simulator to identify optimal parallel strategies. Our approach achieves fine-grained workload distribution in scenarios featuring node heterogeneity and complex networks, while also matching state-of-the-art performance in regular topologies and stable network conditions. Moreover, to mitigate the excessively long search times caused by large search spaces in existing frameworks, we propose a strategy pruning technique to rapidly eliminate infeasible parallel configurations. We further accelerate the search process by executing search tasks in parallel within the simulator. Preliminary evaluation results demonstrate that our method significantly improves training performance on heterogeneous nodes, and the proposed dynamic network design offers enhanced adaptability for complex scenarios such as cloud computing environments.
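The following hypothetical Python sketch illustrates the two ideas of strategy pruning and parallel simulation: infeasible (dp, tp, pp) configurations are discarded up front, and the remaining candidates are evaluated in parallel against a toy cost model. The memory budget, layer count, and cost formula are placeholders, not the paper's simulator.

```python
# Hypothetical sketch of strategy pruning plus parallel simulated search.
from concurrent.futures import ProcessPoolExecutor
from itertools import product

NUM_GPUS = 16
MEM_PER_GPU_GB = 24
MODEL_MEM_GB = 160          # total parameter + activation memory (toy number)
LAYERS = 48

def feasible(dp, tp, pp):
    """Prune configurations that cannot run at all."""
    if dp * tp * pp != NUM_GPUS:
        return False
    if LAYERS % pp != 0:     # layers must split evenly across pipeline stages
        return False
    return MODEL_MEM_GB / (tp * pp) <= MEM_PER_GPU_GB   # toy memory check

def simulate(strategy):
    """Toy cost model: compute shrinks with parallelism, communication grows."""
    dp, tp, pp = strategy
    compute = 100.0 / (dp * tp * pp)
    comm = 2.0 * (dp - 1) + 5.0 * (tp - 1) + 1.0 * (pp - 1)
    return strategy, compute + comm

if __name__ == "__main__":
    candidates = [(dp, tp, pp)
                  for dp, tp, pp in product([1, 2, 4, 8, 16], repeat=3)
                  if feasible(dp, tp, pp)]      # pruning step
    with ProcessPoolExecutor() as pool:         # parallel simulation step
        results = list(pool.map(simulate, candidates))
    best, cost = min(results, key=lambda r: r[1])
    print(f"best (dp, tp, pp) = {best}, simulated step time = {cost:.2f}")
```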
References
Ruilong Wu, Xinjiao Li, Yisu Wang, Xinyu Chen, Dirk Kutscher; Rethinking Dynamic Networks and Heterogeneous Computing with Automatic Parallelization; The 9th Asia-Pacific Workshop on Networking (APNET'25); August 2025; accepted for publication
ViFusion accepted at ACM ICMR
Our paper on ViFusion: In-Network Tensor Fusion for Scalable Video Feature Indexing has been accepted at the ACM International Conference on Multimedia Retrieval 2025 (CCF-B).
Abstract:
Large-scale video feature indexing in datacenters is critically dependent on efficient data transfer. Although in-network computation has emerged as a compelling strategy for accelerating feature extraction and reducing overhead in distributed multimedia systems, harnessing advanced networking resources at both the switch and host levels remains a formidable challenge. These difficulties are compounded by heterogeneous hardware, diverse application requirements, and complex multipath topologies. Existing methods focus primarily on optimizing inference for large neural network models using specialized collective communication libraries, which often face performance degradation in network congestion scenarios.
To overcome these limitations, we present ViFusion, a communication-aware tensor fusion framework that streamlines distributed video indexing by merging numerous small feature tensors into consolidated and more manageable units. By integrating an in-network computation module and a dedicated tensor fusion mechanism within datacenter environments, ViFusion substantially improves the efficiency of video feature indexing workflows. The deployment results show that ViFusion improves the throughput of the video retrieval system by 8–22x with the same level of latency as state-of-the-art systems.
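As a simplified illustration of tensor fusion (not ViFusion's in-network implementation), the Python sketch below packs many small feature tensors into one contiguous buffer plus shape metadata, so they can be shipped as a single transfer and unpacked at the receiver. Function names and sizes are illustrative assumptions.

```python
# Generic tensor fusion sketch: one large transfer instead of many small ones.
import numpy as np

def fuse(tensors):
    """Pack a list of float32 tensors into one flat buffer with shape metadata."""
    shapes = [t.shape for t in tensors]
    flat = np.concatenate([t.ravel() for t in tensors]).astype(np.float32)
    return flat, shapes

def unfuse(flat, shapes):
    """Split the fused buffer back into the original tensors."""
    out, offset = [], 0
    for shape in shapes:
        size = int(np.prod(shape))
        out.append(flat[offset:offset + size].reshape(shape))
        offset += size
    return out

# Example: 1,000 small per-frame feature vectors become one network payload.
features = [np.random.randn(512).astype(np.float32) for _ in range(1000)]
payload, shapes = fuse(features)      # single buffer to send over the network
recovered = unfuse(payload, shapes)   # receiver-side unpacking
assert all(np.array_equal(a, b) for a, b in zip(features, recovered))
print("fused payload:", payload.nbytes, "bytes in one transfer")
```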
Stay tuned for the pre-print.
References
Yisu Wang, Yixiang Zhu, Dirk Kutscher; ViFusion: In-Network Tensor Fusion for Scalable Video Feature Indexing; The 15th ACM International Conference on Multimedia Retrieval; June 2025; accepted for publication.
Interview on the IETF Blog
The IETF has recently published an interview with me on the IETF Blog.
Networked Metaverse Systems: Among the Most Popular IEEE OJCOMS Papers 2024–2025
Our 2024 paper on Networked Metaverse Systems: Foundations, Gaps, Research Directions has been recognized as one of the most popular and impactful papers of the IEEE Open Journal of the Communications Society (OJCOMS) in 2024–2025.
References
- https://dirk-kutscher.info/publications/networked-metaverse-systems/
- Y. Zhang, D. Kutscher and Y. Cui, "Networked Metaverse Systems: Foundations, Gaps, Research Directions," in IEEE Open Journal of the Communications Society, vol. 5, pp. 5488-5539, 2024, doi: 10.1109/OJCOMS.2024.3426098.
Report Published: Greening Networking: Toward a Net Zero Internet (Dagstuhl Seminar 24402)
We have published the report of the Dagstuhl Seminar 24402 on Greening Networking: Toward a Net Zero Internet that took place from September 29th to October 2nd 2024. The seminar discussed the most impactful networking improvements for reducing carbon emissions in three different areas: 1) applications, systems, and stakeholders; 2) network technologies; and 3) lifecycle and control loops. As a major result of the seminar, the following problems and topics for future research were identified: 1) characterizing the Internet footprint on carbon emissions accurately; 2) understanding attributional and consequential accounting of carbon emissions in networked systems; and 3) identifying potential solutions to give network systems more flexibility in better supporting energy grids and connecting to renewable energy sources. One of the concrete results of this seminar is a list of technologies and research opportunities for which we estimated the potential impact and time horizons.
References
- https://dirk-kutscher.info/events/dagstuhl-greening-networking/
- Alexander Clemm, Dirk Kutscher, Michael Welzl, Cedric Westphal, Noa Zilberman, and Simone Ferlin-Reiter. Greening Networking: Toward a Net Zero Internet (Dagstuhl Seminar 24402). In Dagstuhl Reports, Volume 14, Issue 9, pp. 167-192, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025) https://doi.org/10.4230/DagRep.14.9.167
PacTrain accepted at DAC-2025
Our paper on PacTrain: Pruning and Adaptive Sparse Gradient Compression for Efficient Collective Communication in Distributed Deep Learning has been accepted at the Design Automation Conference (DAC) 2025 (CCF-A).
Abstract:
Large-scale deep neural networks (DNN) exhibit excellent performance for various tasks. As DNNs and datasets grow, distributed training becomes extremely time-consuming and demands larger clusters. A main bottleneck is the resulting gradient aggregation overhead. While gradient compression and sparse collective communication techniques are commonly employed to alleviate network load, many gradient compression schemes do not achieve acceleration of the training process while also preserving accuracy. This paper introduces PacTrain, a novel framework that accelerates distributed training by combining pruning with sparse gradient compression. Active pruning of the neural network makes the model weights and gradients sparse.
By ensuring global knowledge of the gradient sparsity among all distributed training workers, we can perform lightweight compressed communication without harming accuracy. We show that the PacTrain compression scheme achieves a near-optimal compression strategy while remaining compatible with the all-reduce primitive. Experimental evaluations show that PacTrain improves training throughput by 1.25× to 8.72× compared to state-of-the-art compression-enabled systems for representative vision and language model training tasks under bandwidth-constrained conditions.
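To make the all-reduce compatibility argument concrete, here is a minimal Python sketch with hypothetical names (not PacTrain's code): because pruning gives every worker the same sparsity mask, each worker packs the surviving gradient positions into a short dense vector, and a plain sum-based all-reduce over those vectors aggregates correctly without exchanging indices.

```python
# Sketch: a globally shared pruning mask makes sparse gradients all-reduce-friendly.
import numpy as np

rng = np.random.default_rng(0)

def pack(grad, mask):
    """Keep only the positions that survived pruning (same mask on all workers)."""
    return grad[mask]

def unpack(packed, mask, shape):
    dense = np.zeros(shape, dtype=packed.dtype)
    dense[mask] = packed
    return dense

# Pruning leaves ~10% of the positions; all workers share this mask.
shape = (4096,)
mask = rng.random(shape) < 0.10

# Each worker's local gradient (only masked positions remain after pruning).
workers = [rng.standard_normal(shape).astype(np.float32) * mask for _ in range(4)]

# Compact all-reduce: sum the short packed vectors instead of full gradients
# (here simulated locally with a NumPy sum over the four workers).
packed_sum = np.sum([pack(g, mask) for g in workers], axis=0)
aggregated = unpack(packed_sum / len(workers), mask, shape)

print(f"exchanged {mask.sum()} values per worker instead of {mask.size}")
```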
Stay tuned for the pre-print.
References
Yisu Wang, Ruilong Wu, Xinjiao Li, Dirk Kutscher; PacTrain: Pruning and Adaptive Sparse Gradient Compression for Efficient Collective Communication in Distributed Deep Learning; Design Automation Conference (DAC) 2025
HKUST Internet Research Workshop (HKIRW) 2025
We are organizing the 2025 HKUST Internet Research Workshop (HKIRW) in the week before the IETF-122 meeting in Bangkok. The workshop aims to bring together researchers in computer networking and systems from around the globe for a live forum to discuss innovative ideas at their early stages. The mission of the workshop is to give promising but not-yet-mature ideas timely feedback from the community and from experienced researchers, potentially leading to future IRTF work, Internet-Drafts, or IETF working groups.
The workshop will operate like a “one-day Dagstuhl seminar”, focusing on discussion and the exchange of ideas rather than on conference-style presentations. The objective is to identify topics and connect like-minded people for potential future collaboration.
Please see https://hkirw.github.io/2025/ for details.