Networked Metaverse Systems
The term ‘Metaverse’ often denotes a wide range of existing and fictional applications. Nevertheless, there are actual systems today that can be studied and analyzed. However, whereas a considerable body of work has been published on applications and application ideas, there is less work on the technical implementation of such systems, especially from a networked systems perspective.
In a recently published open-access journal article, we share insights into the technical design of Metaverse systems, their key technologies, and their shortcomings, predominantly from a networked systems perspective. For the scope of this study, we define the ‘Metaverse’ as follows: it encompasses various current and emerging technologies, and the term is used to describe different applications, ranging from Augmented Reality (AR), Virtual Reality (VR), and Extended Reality (XR) to a new form of the Internet or Web. A key feature distinguishing the Metaverse from simple AR/VR is its inherently collaborative and shared nature, enabling interaction and collaboration among users in a virtual environment.
Building on Existing Platforms and Network Stacks
Most current Metaverse systems and designs are built on existing technologies and networks. For example, massively multiplayer online games such as Fortnite use a generalized client-server model. In this model, the server authoritatively manages the game state, while the client maintains a local subset of this state and can predict game flow by executing the same game code as the server on approximately the same data. Servers send information about the game world to clients by replicating relevant actors and their properties. Commercial social VR platforms such as Horizon Worlds and AltspaceVR use HTTPS to report client-side information and synchronize in-game clocks across users.
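The authoritative client-server model described above can be sketched as follows (a minimal toy model with hypothetical names, not Fortnite's actual code): the server authoritatively applies inputs, while the client predicts ahead by running the same simulation code on its local state and reconciles against replicated snapshots.

```python
def simulate(state: dict, inp: int) -> dict:
    """Shared simulation step, run identically on server and client."""
    return {"pos": state["pos"] + inp, "tick": state["tick"] + 1}

class Server:
    def __init__(self):
        self.state = {"pos": 0, "tick": 0}   # authoritative game state

    def apply_input(self, inp: int) -> dict:
        self.state = simulate(self.state, inp)
        return dict(self.state)              # snapshot replicated to clients

class Client:
    def __init__(self):
        self.predicted = {"pos": 0, "tick": 0}
        self.pending = []                    # inputs not yet acknowledged

    def send_input(self, inp: int):
        self.pending.append(inp)
        self.predicted = simulate(self.predicted, inp)  # predict locally

    def on_snapshot(self, snapshot: dict):
        # Reconcile: adopt the authoritative state, then re-apply any
        # inputs the server has not processed yet (tick counts inputs).
        self.predicted = dict(snapshot)
        for inp in self.pending[snapshot["tick"]:]:
            self.predicted = simulate(self.predicted, inp)

server, client = Server(), Client()
client.send_input(3)              # client predicts the move immediately
snapshot = server.apply_input(3)  # server authoritatively applies it
client.on_snapshot(snapshot)      # client and server states converge
```

The point of the sketch is that prediction hides round-trip latency: the client sees the effect of its input immediately, and the reconciliation step corrects any divergence once the authoritative snapshot arrives.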
Mozilla Hubs, built with A-Frame (a web framework for building virtual reality experiences), uses WebRTC communication with a Selective Forwarding Unit (SFU). The SFU receives multiple audio and video data streams from its peers, then determines and forwards relevant data streams to connected peers. Blockchain or Non-Fungible Token (NFT)-based online games, such as Decentraland, run exclusively on the client side but allow for various data flow models, ranging from local effects and traditional client-server architectures to peer-to-peer (P2P) interactions based on state channels; Upland is built on EOSIO, an open-source blockchain protocol for scalable decentralized applications, and transports data through HTTPS. Connections between peers in Upland are established using TLS or VPN tunnels.
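The SFU pattern can be sketched as follows (an illustrative toy model, not Mozilla Hubs' actual implementation): the unit receives each peer's media stream once and forwards copies to the other connected peers, without decoding or mixing.

```python
class SFU:
    """Toy Selective Forwarding Unit: relays packets between peers."""

    def __init__(self):
        self.peers = {}                      # peer_id -> inbox of packets

    def connect(self, peer_id: str):
        self.peers[peer_id] = []

    def publish(self, sender: str, packet: bytes):
        # Selectively forward: every peer except the sender gets a copy.
        for peer_id, inbox in self.peers.items():
            if peer_id != sender:
                inbox.append((sender, packet))

sfu = SFU()
for p in ("alice", "bob", "carol"):
    sfu.connect(p)
sfu.publish("alice", b"audio-frame-1")       # bob and carol receive it
```

A real SFU would additionally apply relevance filters (e.g., spatial audio range or simulcast layer selection) in the forwarding loop, but the key design choice is the same: each sender uploads its stream once, and fan-out happens at the forwarding unit rather than at the client.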
Many studies have focused on improving various aspects of Metaverse systems. For example, EdgeXAR is a mobile AR framework that uses edge offloading to enable lightweight tracking with six degrees of freedom (DOF) while reducing the offloading delay from the user’s perspective; SORAS is an optimal resource allocation scheme for the edge-enabled Metaverse, using stochastic integer programming to minimize the total network cost; Ibrahim et al. explore partial computation offloading for multiple subtasks in an in-network computing environment, aiming to minimize energy consumption and delay. However, these ideas for offloading computation and rendering tasks to edge platforms often conflict with the existing end-to-end transport protocols and overlay deployment models. Recently, a Deep Reinforcement Learning (DRL)-based multipath network orchestration framework for remote healthcare services was presented, automating subflow management to handle multipath networks. However, proposals for scalable multi-party communication would require interdomain multicast services, which are unavailable on today’s Internet.
Disconnect Between High-Level Concepts and Actual Systems
In practice, there is a significant disconnect between high-level Metaverse concepts, ideas for technical improvements, and the systems that are actually developed and partially deployed. A 2022 ACM IMC paper titled Are we ready for metaverse?: a measurement study of social virtual reality platforms analyzes the performance of various social VR systems, pinpointing numerous issues related to performance, communication overhead, and scalability. These issues arise primarily because current systems leverage existing platforms, protocols, and system architectures, which cannot tap into any of the proposed architectural and technical enhancements, such as scalable multi-party communication or the offloading of computation and rendering tasks.
Rather than merely layering ‘the Metaverse’ on top of legacy and not always ideal foundations, we consider the Metaverse a driver for future network and web applications and actively develop new designs to that end. In our article, we take a comprehensive systems approach and technically describe current Metaverse systems, focusing on their networking aspects. We document the requirements and challenges of Metaverse systems and propose a principled approach to system design based on a thorough understanding of the needs of Metaverse systems, the current constraints and limitations, and the potential solutions offered by Internet technologies.
Article Overview
- We present a technical description of the ‘Metaverse’ based on existing and emerging systems, including a discussion of its fundamental properties, applications, and architectural models.
- We comprehensively study relevant enabling technologies for Metaverse systems, including HCI/XR technologies, networking, communications, media encoding, simulation, real-time rendering and AI. We also discuss current Metaverse system architectures and the integration of these technologies into actual applications.
- We conduct a detailed requirements analysis for constructing Metaverse systems. We analyze application-specific requirements and identify existing gaps in four key aspects: communication performance, mobility, large-scale operation, and end-system architecture. For each area, we propose candidate technologies to address these gaps.
- We propose a research agenda for future Metaverse systems, based on our gap analysis and candidate technologies discussion. We re-assess the fundamental goals and requirements, without necessarily being constrained by existing system architectures and protocols. Based on a comprehensive understanding of what Metaverse systems need and what end-systems, devices, networks and communication services can theoretically provide, we propose specific design ideas and future research directions to realize Metaverse systems that can meet the expectations often articulated in the literature.
References
- Y. Zhang, D. Kutscher and Y. Cui; Networked Metaverse Systems: Foundations, Gaps, Research Directions; in IEEE Open Journal of the Communications Society, doi: 10.1109/OJCOMS.2024.3426098.
- Tianyuan Yu, Xinyu Ma, Varun Patil, Yekta Kocaogullar, Yulong Zhang, Jeff Burke, Dirk Kutscher, Lixia Zhang; Secure Web Objects: Building Blocks for Metaverse Interoperability and Decentralization; IEEE MetaCom 2024; August 12-14 2024; Hong Kong, China
- Dirk Kutscher, Jeff Burke, Giuseppe Fioccola, Paulo Mendes; Statement: The Metaverse as an Information-Centric Network; 10th ACM Conference on Information-Centric Networking (ACM ICN '23); October 9–10, 2023; Reykjavik, Iceland
- Giuseppe Fioccola, Paulo Mendes, Jeff Burke, Dirk Kutscher; Information-Centric Metaverse; Internet Draft draft-fmbk-icnrg-metaverse-01; Work in Progress; July 2023
Networked Systems for Distributed Machine Learning at Scale
On July 3rd, 2024, I gave a talk at the UCL/Huawei Joint Lab Workshop on "Building Better Protocols for Future Smart Networks" that took place on UCL's campus in London.
Talk Abstract
Large-scale distributed machine learning training networks are increasingly facing scaling problems with respect to FLOPS per deployed compute node. Communication bottlenecks can inhibit the effective utilization of expensive GPU resources. The root cause of these performance problems is not insufficient transmission speed or slow servers; it is the structure of the distributed computation and the communication characteristics it incurs. Large machine learning workloads typically exhibit relatively asymmetric, and sometimes centralized, communication structures, such as gradient aggregation and model update distribution. Even when training networks are less centralized, the amount of data that needs to be sent to aggregate several thousand input values through collective communication functions such as AllReduce can lead to incast problems that overload network resources and servers. This talk discusses challenges and opportunities for developing in-network aggregation systems from a distributed computing and networked systems perspective.
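The incast problem can be illustrated with a toy aggregation model (names and topology are illustrative, not from the talk): with server-side aggregation, one node receives a full-size gradient flow from every worker simultaneously, whereas in-network aggregation sums gradients hop by hop so that only a pre-aggregated vector reaches the root.

```python
def incast_aggregate(gradients):
    """All workers send to one server: N full-size flows arrive at once."""
    flows_at_server = len(gradients)
    total = [sum(vals) for vals in zip(*gradients)]
    return total, flows_at_server

def tree_aggregate(gradients):
    """Switches sum pairs of incoming flows hop by hop; the root receives
    a single pre-aggregated vector instead of N full-size flows."""
    level = list(gradients)
    while len(level) > 1:
        nxt = []
        for i in range(0, len(level), 2):
            if i + 1 < len(level):
                nxt.append([a + b for a, b in zip(level[i], level[i + 1])])
            else:
                nxt.append(level[i])       # odd flow passes through
        level = nxt
    return level[0]

# Four workers, each contributing a two-element gradient vector.
grads = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0], [7.0, 8.0]]
```

Both strategies compute the same sum; the difference is where the fan-in happens. In the incast case the server absorbs all N flows at once, while the tree spreads the reduction across intermediate hops, which is the opportunity in-network aggregation exploits.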
Network Abstractions for Continuous Innovation
In a joint panel at ACM ICN-2023 and IEEE ICNP-2023 in Reykjavik, Ken Calvert, Jim Kurose, Lixia Zhang, and myself discussed future network abstractions. The panel was moderated by Dave Oran. This was one of the more interesting and interactive panel sessions I participated in, so I am providing a summary here.
Since the Internet's initial rollout roughly 40 years ago, not only has its global connectivity brought fundamental changes to society and daily life, but its protocol suite and implementations have also gone through many iterations of change, with SDN, NFV, and programmability among the developments of the last decade. The panel looked into the next decade of network research by asking a set of questions about where the future directions lie for enabling continued innovation.
Opportunities and Challenges for Future Network Innovations
Lixia Zhang: Rethinking Internet Architecture Fundamentals
Lixia Zhang (UCLA), quoting Einstein, said that the formulation of the problem is often more essential than the solution, and pointed at the complexity of today's protocol stacks that is apparently needed to achieve the desired functionality. For example, Lixia mentioned RFC 9298 on proxying UDP in HTTP, specifically on tunneling UDP to a server acting as a UDP-specific proxy over HTTP. UDP over IP was once conceived as a minimal message-oriented communication service intended for DNS and interactive real-time communication. Due to its push-based communication model, it can be used with minimal effort for useful but also harmful applications, including large-scale DDoS attacks. Proxying UDP over HTTP addresses this and other concerns by providing a secure channel to a server in a web context, so that the server can authorize tunnel endpoints and the UDP communication is congestion-controlled by the underlying transport protocol (TCP or QUIC). This specification can be seen as a work-around: sending unsolicited (and unauthenticated) messages is a major problem in today's Internet. There is no general approach for authenticating such messages and no concept for trust in peer identities. Instead of analyzing the root cause of such problems, the Internet communities (and the dominant players in that space) prefer to come up with (highly inefficient) workarounds.
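The core idea – only authorized, tunnel-scoped datagrams get relayed – can be sketched as a toy model (this is a conceptual illustration, not an RFC 9298 implementation; all class and method names are invented):

```python
class UdpProxy:
    """Toy model of a UDP-in-HTTP proxy: datagrams are only relayed
    through tunnels that were explicitly authorized at setup time."""

    def __init__(self, authorized_tokens):
        self.authorized = set(authorized_tokens)
        self.tunnels = {}        # tunnel_id -> target address
        self.delivered = []      # (target, payload) pairs actually relayed

    def connect_udp(self, token: str, target: str):
        """Tunnel setup, analogous to the extended CONNECT request."""
        if token not in self.authorized:
            return None          # reject unauthenticated tunnel attempts
        tunnel_id = len(self.tunnels) + 1
        self.tunnels[tunnel_id] = target
        return tunnel_id

    def send_datagram(self, tunnel_id: int, payload: bytes) -> bool:
        """Relay a datagram; anything outside a known tunnel is dropped."""
        if tunnel_id not in self.tunnels:
            return False
        self.delivered.append((self.tunnels[tunnel_id], payload))
        return True

proxy = UdpProxy({"secret-token"})
tid = proxy.connect_udp("secret-token", "dns.example:53")
proxy.send_datagram(tid, b"query")
```

This captures the point made above: the proxy converts UDP's push-based model into a solicited one, at the cost of an extra layer of tunneling and the (TCP or QUIC) congestion control of the carrying connection.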
This problem was discussed more generally by Oliver Spatscheck of AT&T Labs in his 2013 article titled Layers of Success, where he discussed the (actually deployed) excessive layering in production networks, for example mobile communication networks, where regular Internet traffic is routinely tunneled over GTP/UDP/IP/MPLS:
The main issue with layering is that layers hide information from each other. We could see this as a benefit, because it reduces the complexities involved in adding more layers, thus reducing the cost of introducing more services. However, hiding information can lead to complex and dynamic layer interactions that hamper the end-to-end system’s reliability and are extremely difficult if not impossible to debug and operate. So, much of the savings achieved when introducing new services is being spent operating them reliably.
According to Lixia, the excessive layering stems from more fundamental problems with today's network architecture, notably the lack of identity and trust in the core Internet protocols and the lack of functionality in the forwarding system – leading to significant problems today, as exemplified by recent DDoS attacks. Quoting Einstein again, she said that we cannot solve problems by using the same kind of thinking we used when we created them, calling for a more fundamental redesign based on information-centric networking principles.
Ken Calvert: Domain-specific Networking
Ken Calvert (University of Kentucky) provided a retrospective of networking research and looked at selected papers published at the first IEEE ICNP conference in 1993. According to Ken, the dominant theme at that time was How to design, build, and analyze protocols, for example as discussed in his 1993 ICNP paper titled Beyond layering: modularity considerations for protocol architectures.
Ken offered a set of challenges and opportunities for future networking research, such as:
- Domain-specific networking à la Ex uno pluria, a 2018 CCR editorial discussing:
- infrastructure ossification;
- lack of service innovation; and
- a fragmentation into "ManyNets" that could re-create a service-infrastructure innovation cycle.
- Incentives and "money flow"
- Can we escape from the advertising-driven Internet app ecosystem? Should we?
- Wide-area multicast (many-many) service
- Building block for building distributed applications?
- Inter-AS trust relationships
- Ossification of the Inter-AS interface – cannot be solved by a protocol!
- Impact ⇐ Applications ⇐ Business opportunities ($)
- What user problem cannot be solved today?
- "The core challenge of CS ... is a conceptual one, viz., what (abstract) mechanisms we can conceive without getting lost in the complexities of our own making." - Dijkstra
For his vision for networking in 30 years, Ken suggested that:
- IP addresses will still be in use
- but visible only at interfaces between different owners' infrastructures
- Network infrastructure might consist of access ASes + separate core networks operated by the "Big Five".
- Users might communicate via direct brain interfaces with AI systems.
Dirk Kutscher: Principled Approach to Network Programmability
I offered the perspective of introducing a principled approach to programmability that could provide better programmability (for humans and AI), based on more powerful network abstractions.
Previous work in SDN with protocols such as OpenFlow and dataplane programming languages such as P4 has only scratched the surface of what could be possible. OpenFlow was a great first idea, but it was fundamentally constrained by the IP and Ethernet-based abstractions built into it. It can be used for programming some applications in that domain, such as firewalls and virtual networking, but the idea of continuous innovation has not really materialized.
Similarly, P4 was advertised as an enabler for new levels of dataplane programmability, but even simple systems such as NetCache have to go to great lengths to achieve minimal functionality for a proof of concept. Another frequently reported P4 problem is hardware heterogeneity, which makes universal programmability difficult. In my opinion, this raises questions about the applicability of current dataplane programming to in-network computing. A good example of a more productive application of P4 is the recent SIGCOMM paper on NetClone, which describes fast, scalable, and dynamic request cloning for microsecond-scale RPCs. Here, P4 is used as an accelerator for relatively simple functionality (protocol parsing, forwarding).
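The request-cloning idea behind NetClone can be sketched conceptually (a hypothetical toy model; the real system implements the parsing and forwarding step in P4 on switch hardware): an RPC is duplicated to several replicas and the first reply wins, cutting tail latency.

```python
def clone_request(request, replicas):
    """Forward a copy of the request to each replica and return the
    fastest reply; latency is simulated by per-replica time values."""
    replies = [(latency, handler(request)) for handler, latency in replicas]
    latency, reply = min(replies)        # first response wins
    return reply, latency

# Two replicas computing the same result with different latencies
# (hypothetical handlers; units are arbitrary).
fast = (lambda req: req.upper(), 3)
slow = (lambda req: req.upper(), 9)
reply, latency = clone_request("get:k1", [slow, fast])
```

The switch-side work here is deliberately trivial – duplicate, forward, pick the first reply – which is why it maps well onto dataplane hardware, in contrast to more ambitious in-network computing proposals.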
This may not be enough for future universal programmability, though. During the panel discussion, I drew an analogy to computer programming languages: we are now seeing the first programming languages and IDEs designed from the ground up for better AI support. What would that mean for network programmability? What abstractions and APIs would we need?
In my opinion, we would have to take a step back and think about the intended functionality and the required observability for future (automated) network programmability that is really protocol-independent. This would then entail more work on:
- the fundamental forwarding service (informed by hardware constraints);
- the telemetry approach;
- suitable protocol semantics;
- APIs for applications and management; and
- new network emulation & debugging approaches (along the lines of "network digital twin" concepts).
Overall, I am expecting exciting new research in the direction of principled approaches to network programmability.
Jim Kurose: Open Research Infrastructures and Softwarization
Jim reminded us that the key reason Internet research flourished was the availability of open infrastructure with no incumbent providers initially. The infrastructure was owned by researchers, labs, and universities and allowed for a lot of experimentation.
This open infrastructure has recently been challenged by ossification, with the rise of production ISP services at scale and the emergence of closed ISPs, cellular carriers, and hyperscalers operating large portions of the network.
As an example of emerging environments that offer interesting opportunities for experimentation and new developments, Jim mentioned private 4G/5G networks, i.e., licensed spectrum that creates closed ecosystems which are nevertheless open to researchers, creating opportunities for:
- innovation in private 5G networks such as Citizens Broadband Radio Service (CBRS), which could enable innovation in open, deployed systems and a democratization of 5G+ networks and edge applications;
- testbeds, such as Platforms for Advanced Wireless Research (PAWR); and
- the integration of WiFi, 5G as link-layer edge RANs.
Jim also suggested further opportunities in softwarization and programmability, such as (formal) methods for logical correctness and configuration management, as well as programmability to add services beyond the "minimal viable service", such as closed-loop automatic control and management.
Finally, Jim mentioned opportunities in emerging new networks such as LEO constellations, IoT, and home networks.
IEEE MetaCom Workshop on Decentralized, Data-Oriented Networking for the Metaverse (DORM)
Organizers
- Jeff Burke, UCLA
- Dirk Kutscher, HKUST(GZ)
- Dave Oran, Network Systems Research & Design
- Lixia Zhang, UCLA
Workshop Description
The DORM workshop is a forum to explore new directions and early research results on Metaverse system architecture, protocols, and security, along a data-oriented design direction that can encourage and facilitate decentralized realizations. Here we broadly interpret the phrase “Metaverse” as a new phase of networking with multi-dimensional shared views in open realms.
Most prototype implementations of such systems today replicate the social media platform model: they run on cloud servers offered by a small number of providers, and have identities and trust management anchored at these servers. Consequently, all communications are mediated through such servers, together with extensive CDN overlay infrastructures or the equivalent.
Although cloud services may be extended to edges to address performance and delay issues, the centralization of control power that stems from this cloud-centric approach can be problematic from a societal perspective. It also reflects a significant semantic mismatch between the existing address-based network support and many aspirations for open-realm applications and interoperability: the applications, by and large, operate on named data principles at the application layer, but need to deploy multiple layers of provider-specific middleware services to bridge the gap. These added complexities prohibit new ways of interacting (leveraging new data formats such as USD and glTF) and are not conducive to flexible distributed computing in the edge-to-cloud continuum.
This workshop solicits efforts that explore new directions in metaverse realization and work that takes a principled approach to key topics in the areas of 1) Networking as the Platform, 2) Objects and Experiences, and 3) Trust and Transactions without being constrained by inherited platforms.
Networking as the Platform
Metaverse systems will rely on a variety of communication patterns such as client-server RPC, massively scalable multi-destination communication, and publish-subscribe. In systems designed with a cloud-based, centralized architecture in mind, such interactions are typically mediated by central servers and supported by overlay CDN infrastructure, which is operationally inflexible and lacks optimization mechanisms, for example to leverage specific link-layer capabilities such as broadcast/multicast. The underlying reliance on existing stacks also introduces familiar complications in providing disruption-tolerant, mobile-friendly extended reality applications, limiting their viability for eventual use in critical infrastructure and requiring significant engineering support in demanding entertainment applications, such as large-scale live events.
This workshop seeks research on new strategies for Metaverse system design that can promote innovation by lowering barriers to entry for new applications that perform robustly under a variety of conditions. We solicit research on Metaverse system design that addresses architectural and protocol-level issues without the reliance on a centralized cloud-based architecture. Instead, we expect the DORM workshop submissions to start with a distributed system assumption, focusing on individual protocol and security elements that enable decentralized Metaverse realizations.
Many Metaverse-relevant interactions such as video streaming and distribution of event data today inherently rely on abstractions for accessing named data objects such as video chunks, for example in DASH-based video streaming. The DORM workshop will therefore particularly invite contributions that explore new systems and protocol designs that leverage that principle, thus exploring new opportunities to re-imagine the relationship between application/network and link/physical layer protocols in order to better support Metaverse system implementations. This could include work on new hypermedia concepts based on the named data principle and cross-layer designs for simplifying and optimizing the implementation and operation of such protocols.
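As a small illustration of this named-data access pattern (classes and names below are hypothetical), a consumer requests an immutable, named chunk – much like a DASH segment – and any node holding a copy can answer, which makes on-path caching and multi-destination reuse natural:

```python
class ChunkStore:
    """A node (origin, cache, or peer) holding named data objects."""

    def __init__(self, chunks=None):
        self.chunks = dict(chunks or {})

    def get(self, name: str):
        return self.chunks.get(name)

def fetch(name: str, nodes):
    """Location-independent fetch: return the first copy found along the
    path, populating the caches that were asked before the hit."""
    missed = []
    for node in nodes:
        chunk = node.get(name)
        if chunk is not None:
            for m in missed:
                m.chunks[name] = chunk   # fill on-path caches
            return chunk
        missed.append(node)
    return None

origin = ChunkStore({"/video/ep1/seg3": b"\x00\x01"})
cache = ChunkStore()                     # empty edge cache
data = fetch("/video/ep1/seg3", [cache, origin])
```

The consumer's request names the data, not a host; after the first fetch, the edge cache can satisfy subsequent requests for the same chunk without contacting the origin, which is the property the cross-layer designs mentioned above aim to exploit.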
We expect such systems to also be better suited to an elegant, efficient integration of computing into the network, thus providing more flexible and adaptive platforms for offloading computation and supporting more elaborate data dissemination strategies.
From Objects to Experiences
In our perceived Metaverse/open realm systems, there are different existing and emerging media representations and encodings such as current video encodings as well as scene and 3D object description and transmission formats such as USD and glTF. Similar to previous developments in the networked audio/video area, it is interesting to investigate opportunities for new scene and 3D object representation formats that are suitable not only for efficient creation and file-like unidirectional transmission but also for streaming, granular composition and access, de-structuring, efficient multi-destination transmission, possibly using network coding techniques.
The workshop is therefore soliciting contributions that explore a holistic approach to media/object representation within network/distributed computing, enabling better performance, composability and robustness of future distributed Metaverse systems. Submissions that explore cross-layer approaches to supporting emerging media types such as volumetric video and neural network codecs are encouraged, as are considerations of how code implementing object behaviors and interactions can be supported - providing a path to the interoperable experiences expressed in various Metaverse visions.
Trust and Transactions
Finally, distributed open realm systems need innovative solutions in identity management and security support that enable interoperation among multiple systems including a diverse population of users. We note that mechanisms to support trust are inherently coupled with various identities, from "real world" identities to application-specific identities that users may adopt in different contexts. Proposed solutions need to consider not just media asset exchange but also the interactions among objects, and the data flows needed to support it.
The workshop solicits contributions that identify specific technical challenges, for example system bootstrapping, trust establishment, and authenticated information discovery, and that propose new approaches to the identified challenges. Researchers are encouraged to consider cross-layer designs that address the disconnects between layers of trust in many current systems – e.g., the reliance on third-party certificate authorities for authentication and the inherent trust in connections rather than in the objects themselves, which tends to make even local communication brittle if connectivity to the global network is compromised.
Call for Papers
The Decentralized Data-Oriented Networking for the Metaverse (DORM) workshop is intended as a forum to explore new directions and early research results on the system architecture, protocols, and security to support Metaverse applications, focusing on data-oriented, decentralized system designs. We view the Metaverse as a new phase of networking with multi-dimensional shared views in open realms.
Most Metaverse systems today replicate the social media platform model, i.e., they assume a cloud platform provider-based system architecture where identities and the trust among them are anchored in a centralized administrative structure and where communication is mediated through servers and an extensive CDN overlay infrastructure operated by that administration. The centralization that stems from this approach can be problematic both from a control and from a performance & efficiency perspective. Despite operating on named data principles conceptually, such systems typically exhibit traditional layering approaches that prohibit new ways of interacting (leveraging new data formats such as USD and glTF) and that are not conducive to flexible distributed computing in the edge-to-cloud continuum.
This workshop solicits work that takes a principled approach to key research topics in the areas of 1) Networking as the Platform, 2) Objects and Experiences, and 3) Trust and Transactions without being constrained by inherited platform designs, including but not limited to:
- Distributed Metaverse architectures
- Computing in the network as an integral component for better communication and interaction support
- Application-layer protocols for a rich set of interaction styles in open realms
- Supporting Metaverse via data-oriented techniques
- Security, Privacy and Identity Management in Metaverse systems
- New concepts for improved network support for Metaverse systems, e.g., through facilitating ubiquitous multipath forwarding and multi-destination delivery
- Cross-layer designs
- Emerging scene description and media formats
- Quality of Experience for Metaverse applications
- Distributed consensus and state synchronization
Given the breadth and emerging nature of the field, all papers should include the articulation of a specific vision of Metaverse that provides clarifying assumptions for the technical content.
Submissions and Formatting
The workshop invites submission of manuscripts with early and original research results that have not been previously published or posted on public websites and that are not currently under review by another conference or journal. Submitted manuscripts must be prepared according to the IEEE Computer Society Proceedings Format (double column, 10pt font, letter paper) and submitted in PDF format. Manuscripts submitted for review should be no longer than 6 pages, excluding references. Reviewing will be double-blind: submissions must not reveal the authors’ names or affiliations and must avoid obvious self-references. Accepted and presented papers will be published in the IEEE MetaCom 2023 Conference Proceedings and included in IEEE Xplore.
Manuscript templates can be found here. All submissions to IEEE MetaCom 2023 must be uploaded to EasyChair at https://easychair.org/conferences/?conf=metacom2023.
Organization Committee
- Jeff Burke, UCLA
- Dirk Kutscher, HKUST(GZ)
- Dave Oran, Network Systems Research & Design
- Lixia Zhang, UCLA
Technical Program Committee
- Alex Afanasyev, Florida International University
- Hitoshi Asaeda, NICT
- Ali Begen, Ozyegin University
- Taejoong Chung, Virginia Tech
- Serge Fdida, Sorbonne University Paris
- Carlos Guimarães, ZettaScale Technology SARL
- Peter Gusev, UCLA
- Toru Hasegawa, Osaka University
- Jungha Hong, ETRI
- Kenji Kanai, Waseda University
- Ruidong Li, Kanazawa University
- Spyridon Mastorakis, University of Nebraska Omaha
- Kazuhisa Matsuzono, NICT
- Marie-Jose Montpetit, Concordia University Montreal
- Jörg Ott, Technical University Munich
- Yiannis Psaras, Protocol Labs
- Eve Schooler, Intel
- Tian Song, Beijing Institute of Technology
- Kazuaki Ueda, KDDI Research
- Cedric Westphal, Futurewei
- Edmund Yeh, Northeastern University
- Jiadong Yu, HKUST(GZ)
- Yu Zhang, Harbin Institute of Technology
Important Dates
- March 20, 2023, Paper submission deadline
- April 20, 2023 Notification of paper acceptance
- May 10, 2023, Camera-ready paper submissions