ViFusion accepted at ACM ICMR
Our paper "ViFusion: In-Network Tensor Fusion for Scalable Video Feature Indexing" has been accepted at the ACM International Conference on Multimedia Retrieval (ICMR) 2025 (CCF-B).
Abstract:
Large-scale video feature indexing in datacenters depends critically on efficient data transfer. Although in-network computation has emerged as a compelling strategy for accelerating feature extraction and reducing overhead in distributed multimedia systems, harnessing advanced networking resources at both the switch and host levels remains a formidable challenge. These difficulties are compounded by heterogeneous hardware, diverse application requirements, and complex multipath topologies. Existing methods focus primarily on optimizing inference for large neural network models using specialized collective communication libraries, which often suffer performance degradation under network congestion.
To overcome these limitations, we present ViFusion, a communication-aware tensor fusion framework that streamlines distributed video indexing by merging numerous small feature tensors into consolidated and more manageable units. By integrating an in-network computation module and a dedicated tensor fusion mechanism within datacenter environments, ViFusion substantially improves the efficiency of video feature indexing workflows. Deployment results show that ViFusion improves the throughput of the video retrieval system by 8–22× while maintaining the same level of latency as state-of-the-art systems.
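To give a rough feel for the tensor fusion idea, here is a minimal toy sketch in Python. It is our own illustration, not the ViFusion implementation, and it omits the in-network computation entirely; the function names, shapes, and workload are assumptions made for the example. It simply packs many small feature tensors into one contiguous buffer so they can be moved in a single transfer and unpacked by offset on the receiving side.

# Illustrative sketch only (not the ViFusion code): fuse many small feature
# tensors into one consolidated buffer before a single transfer, then recover
# the individual tensors from recorded offsets on the receiver side.
import numpy as np

def fuse_tensors(tensors):
    """Concatenate small feature tensors into one flat buffer plus metadata."""
    shapes = [t.shape for t in tensors]
    sizes = [t.size for t in tensors]
    offsets = np.cumsum([0] + sizes[:-1])          # start offset of each tensor
    fused = np.concatenate([t.ravel() for t in tensors])
    return fused, list(zip(offsets, sizes, shapes))

def unfuse_tensors(fused, metadata):
    """Reconstruct the original tensors from the fused buffer."""
    return [fused[off:off + size].reshape(shape) for off, size, shape in metadata]

if __name__ == "__main__":
    # Hypothetical workload: many small per-frame feature vectors.
    features = [np.random.rand(64).astype(np.float32) for _ in range(1000)]
    fused, meta = fuse_tensors(features)            # one large transfer unit
    restored = unfuse_tensors(fused, meta)          # receiver-side unpacking
    assert all(np.array_equal(a, b) for a, b in zip(features, restored))
    print(f"fused {len(features)} tensors into one buffer of {fused.size} floats")

The point of the sketch is only the packing/unpacking bookkeeping: sending one large buffer amortizes per-message overhead that would otherwise be paid once per small tensor.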
Stay tuned for the pre-print.
References
Yisu Wang, Yixiang Zhu, Dirk Kutscher; ViFusion: In-Network Tensor Fusion for Scalable Video Feature Indexing; The 15th ACM International Conference on Multimedia Retrieval; June 2025; accepted for publication.
PacTrain accepted at DAC-2025
Our paper "PacTrain: Pruning and Adaptive Sparse Gradient Compression for Efficient Collective Communication in Distributed Deep Learning" has been accepted at the Design Automation Conference (DAC) 2025 (CCF-A).
Abstract:
Large-scale deep neural networks (DNNs) exhibit excellent performance on a wide range of tasks. As DNNs and datasets grow, distributed training becomes extremely time-consuming and demands larger clusters; a main bottleneck is the resulting gradient aggregation overhead. While gradient compression and sparse collective communication techniques are commonly employed to alleviate network load, many gradient compression schemes fail to accelerate training while preserving accuracy. This paper introduces PacTrain, a novel framework that accelerates distributed training by combining pruning with sparse gradient compression. Active pruning of the neural network makes the model weights and gradients sparse.
By ensuring global knowledge of the gradient sparsity among all distributed training workers, we can perform lightweight compressed communication without harming accuracy. We show that the PacTrain compression scheme achieves a near-optimal compression strategy while remaining compatible with the all-reduce primitive. Experimental evaluations show that PacTrain improves training throughput by 1.25–8.72× compared to state-of-the-art compression-enabled systems for representative vision and language model training tasks under bandwidth-constrained conditions.
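To make the all-reduce compatibility concrete, here is a minimal sketch in Python. It is our own illustration, not the PacTrain code: the worker count, parameter count, and keep ratio are made-up values, and the all-reduce is simulated by a plain sum. The idea it shows is that once every worker agrees on the pruning mask, gradients can be compacted to the surviving coordinates and summed with an ordinary dense all-reduce, because each slot refers to the same parameter on every worker.

# Illustrative sketch only (not the PacTrain code): a globally shared pruning
# mask lets workers compact their gradients to the surviving positions and
# reduce them with a plain dense all-reduce, then scatter the result back.
import numpy as np

rng = np.random.default_rng(0)
num_workers, num_params, keep_ratio = 4, 10_000, 0.1   # hypothetical setup

# A globally agreed sparsity mask (e.g., from magnitude pruning of the weights).
mask_idx = rng.choice(num_params, size=int(num_params * keep_ratio), replace=False)
mask_idx.sort()

# After pruning, each worker's gradient is zero outside the shared mask.
dense_grads = []
for _ in range(num_workers):
    g = np.zeros(num_params, dtype=np.float32)
    g[mask_idx] = rng.standard_normal(mask_idx.size).astype(np.float32)
    dense_grads.append(g)

# Compress: every worker keeps only the masked entries, in the same order.
compressed = [g[mask_idx] for g in dense_grads]

# "All-reduce" on the compacted buffers, simulated here by a simple sum.
reduced = np.sum(compressed, axis=0)

# Decompress: scatter the reduced values back into a full-size gradient.
full = np.zeros(num_params, dtype=np.float32)
full[mask_idx] = reduced

assert np.allclose(full, np.sum(dense_grads, axis=0))
print(f"all-reduced {mask_idx.size} values instead of {num_params}")

Because every worker compacts the same positions in the same order, the reduction stays a dense, fixed-size all-reduce rather than requiring index exchange or gather-based sparse collectives.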
Stay tuned for the pre-print.
References
Yisu Wang, Ruilong Wu, Xinjiao Li, Dirk Kutscher; PacTrain: Pruning and Adaptive Sparse Gradient Compression for Efficient Collective Communication in Distributed Deep Learning; Design Automation Conference (DAC); 2025; accepted for publication.