The Metaverse as an Information-Centric Network
This is an introduction to our paper:
- Dirk Kutscher, Jeff Burke, Giuseppe Fioccola, Paulo Mendes; Statement: The Metaverse as an Information-Centric Network; 10th ACM Conference on Information-Centric Networking (ACM ICN '23); October 9-10, 2023, Reykjavik, Iceland; https://dl.acm.org/doi/10.1145/3623565.3623761; pre-print available at http://arxiv.org/abs/2309.09147
The Web Today
The Web today has a specific technical definition: it includes presentation layer technologies, protocols, agreed-upon ways of achieving certain semantics such as Representational State Transfer (REST), and security infrastructure. However, from a user perspective, it can be viewed as a universe of consistently navigable content and (occasionally) interoperable services. The user experience and architectural underpinnings have evolved in parallel and have influenced each other: for many end users, the Web and the network are synonymous. Rather than building up "Metaverse" as an application domain based on IP, we aim to explore "the Metaverse" as strongly intertwined with ICN, just as the modern concept of the Web and its technology stack are inseparable for a broad set of applications.
As a placeholder name for a range of new technologies and experiences, "the Metaverse" is even less well-defined than the Web. We adopt the commonly used concept of a shared, interoperable, and persistent XR. Some descriptions and early prototypes for social AR/VR systems suggest leveraging existing Internet and Web protocols to provide Metaverse services, without addressing the technical complexity and centralization of control required to provide the underlying cloud service infrastructure.
Metaverse as an Information-Centric Concept
Here, we do not take as given current designs and deployment models that consider the Metaverse as an overlay application with corresponding infrastructure dependencies, as this exacerbates the current gaps (and the resulting costs and technical complexity) between distributed applications and the underlying network architecture. Instead, we assume a fundamentally information-centric system in which most applications participate in granular 3D content exchange, context-aware integration with the physical world, and other Metaverse-relevant services.
"The Metaverse" is an information-centric concept that likely will become synonymous with the network itself. We argue that reciprocal design of the network and applications will open new opportunities for the deployment of Metaverse-suggestive experiences even today.
Experientially, this Metaverse is an extension of the Web into immersive XR modalities that are often aligned with physical space, as in augmented reality (AR). We conceive of the Metaverse not only as a shared XR environment, but as the next generation of the Web, extending into 3D interaction and immersion and optionally overlaid on physical spaces. Instead of rendering data objects into a 2D page (within a tab within a window) on a device, we envision such objects being rendered into a shared 3D space, interacting with each other and with end users.
Architecturally, leveraging ICN concepts provides support for decentralized publishing, content interoperability and co-existence, based on general building blocks and not within separated application silos as today's initial prototypes. We claim that such properties are required to achieve the generally circulated visions of Metaverse systems, but are not achievable today because of the host- and connection-centric way in which the web operates and is presented to users in browsers.
ICN Capabilities
We point out four ICN capabilities critical to Metaverse concepts:
- scalable and robust multi-destination communication, overcoming IP multicast challenges, such as inter-domain routing, scalability, and routing communication overhead;
- leveraging wireless broadcast to support shared local views and low-latency interactivity without application-awareness in edge routers;
- privacy, selective attention, content filtering, and autonomous interactions, as well as ownership and control on the publishing side; and
- supporting in-network processing for object replication and transformation.
Interactive Holographic Communication
For example, imagine interactive holographic communication consisting of participants' 3D video, spatial audio, and shared 3D documents. In ICN, such an application can represent virtual content as secure data objects and share them efficiently in a larger group of peers, fetching only the data necessary to reconstruct a suitable representation while being aware of the constraints of user devices and access networks.
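As a deliberately simplified illustration of this fetching model, the sketch below uses plain Python with entirely hypothetical names and helper functions to show how such an application might name per-participant media objects hierarchically and fetch only the representation layers a constrained device can actually render:

```python
# Hypothetical naming sketch: each participant publishes 3D video,
# spatial audio, and document updates as individually named,
# signed data objects. Names below are illustrative only.

def media_name(session, participant, stream, frame, layer):
    """Build a hierarchical ICN-style name for one media object."""
    return f"/meet/{session}/{participant}/{stream}/frame={frame}/layer={layer}"

def layers_for_device(max_layers, available_layers=3):
    """A consumer on a constrained device fetches only the base
    layer plus as many enhancement layers as it can render."""
    return list(range(min(max_layers, available_layers)))

# An HMD on a good access network fetches all layers; a phone
# on a lossy link fetches only the base layer of each frame.
for layer in layers_for_device(max_layers=1):
    name = media_name("demo-42", "alice", "3dvideo", frame=1007, layer=layer)
    print("would express Interest for", name)
```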
Furthermore, while experiencing 3D objects shared by the group, each participant may also interact in the same XR environment with personal services such as wayfinding, messaging, and Internet of Things (IoT) device status. Interactions between private and shared 3D objects would be simplified if these objects followed similar conventions but with different security properties. This concept is semantically well-aligned with ICN properties, particularly for security, as it revolves around object-level data exchange rather than hosts or channels. Integration and interoperability within a shared XR environment, without centralization, is challenging if one has to negotiate not only data interactions but also the underlying service connections and security relationships using host-centric paradigms. It also exacerbates the impact of intermittent connectivity on interactivity when the global network is required for functions such as rendezvous that are handled locally in ICN.
Creating Shared Environments
As a second example, consider creating a shared environment -- e.g., to pre-visualize engineering models of an aircraft -- from a collection of collaboratively edited 3D documents. Imagine component documents interacting in a simulation. Documents can be modularized, linked, and overlaid in a web-like manner. Today, such cross-platform interoperability and visualization without centralized hubs is impractical, and it is difficult to create the secure, granular data flows required for interaction between co-existing 3D elements to "bring them to life" in a virtual world. In an ICN approach, such modules could be independently authored and published, and shared between applications, becoming building blocks of a richer, interacting system of user- and machine-generated content.
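A minimal sketch of what such web-like modularization could look like, with names and fields invented purely for illustration:

```python
# Illustrative only: a shared scene assembled from independently
# published, named 3D documents. Each entry links a module by name;
# names and fields are hypothetical.

scene = {
    "name": "/acme/aircraft/a350/previs/scene/v7",
    "modules": [
        {"link": "/acme/wing/model/v12",      "pose": (0.0, 1.2, 0.0)},
        {"link": "/supplier-b/engine/sim/v3", "pose": (4.5, 0.8, 0.0)},
        {"link": "/acme/cabin/layout/v2",     "pose": (0.0, 0.0, 2.1)},
    ],
}

# A renderer resolves each link by name, wherever a copy happens to
# be available (peer, repo, or cache), and verifies the signature on
# each object rather than trusting the delivery channel.
for module in scene["modules"]:
    print("fetch + verify:", module["link"])
```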
We introduce some technical challenges and research directions in our paper (link below).
Further Reading
The Metaverse as an Information-Centric Network
- Dirk Kutscher, Jeff Burke, Giuseppe Fioccola, Paulo Mendes; Statement: The Metaverse as an Information-Centric Network; 10th ACM Conference on Information-Centric Networking (ACM ICN '23); October 9-10, 2023, Reykjavik, Iceland; https://dl.acm.org/doi/10.1145/3623565.3623761; pre-print available at http://arxiv.org/abs/2309.09147
- Giuseppe Fioccola, Paulo Mendes, Jeff Burke, Dirk Kutscher; Information-Centric Metaverse; Internet Draft draft-fmbk-icnrg-metaverse-01; Work in Progress; July 2023
- Jeff Burke, Lixia Zhang, Dirk Kutscher; Named Data Microverse project
- Dirk Kutscher, Jeff Burke, Paulo Mendes, Michelle Munson, Todd Hodes; Named Data Metaverse Panel at NDNComm-2023
- Dirk Kutscher, Lixia Zhang, Jeff Burke, Dave Oran; IEEE MetaCom Workshop on Decentralized, Data-Oriented Networking for the Metaverse (DORM); IEEE Metacom-2023
- Dirk Kutscher, Dave Oran; Statement: RESTful Information-Centric Networking; ACM Conference on Information-Centric Networking (ICN 2022); Osaka, Japan; September 2022; https://dirk-kutscher.info/publications/icn-rest/
References
- Cheng, R., Wu, N., Varvello, M., Chen, S., and Han, B.; Are we ready for metaverse? A measurement study of social virtual reality platforms; In Proceedings of the 22nd ACM Internet Measurement Conference, IMC 2022, Nice, France, October 25-27, 2022; https://dl.acm.org/doi/10.1145/3517745.3561417
- Erickson, L.; Interoperability in the immersive web - part 1; https://hubs.mozilla.com/labs/interoperability-in-the-immersive-web/, Feb 2023
- Fielding, R. T.; Architectural Styles and the Design of Network-based Software Architectures; PhD thesis, University of California, Irvine, 2000; http://www.ics.uci.edu/~fielding/pubs/dissertation/top.htm
- Gruessing, J., and Dawkins, S.; Media over QUIC - Use Cases and Requirements for Media Transport Protocol Design; Internet-Draft https://datatracker.ietf.org/doc/draft-ietf-moq-requirements/, version 01; IETF Secretariat; July 2023
- Jennings, C. F., Nandakumar, S., and Huitema, C.; QuicR - Media Delivery Protocol over QUIC; Internet-Draft https://datatracker.ietf.org/doc/draft-jennings-moq-quicr-proto/, version 01; IETF Secretariat; January 2023
- LAMINA1. Decentralized system services for the open metaverse; https://uploads-ssl.webflow.com/63fe332d7b9ae4159d741e55/64499d8f08bd5bdd1fe6bce1_MaaS_Whitepaper_v1.0.pdf
- Moll, P., Patil, V., Wang, L., and Zhang, L.; SoK: The Evolution of Distributed Dataset Synchronization Solutions in NDN; In 9th ACM Conference on Information-Centric Networking, ICN 2022, Osaka, Japan, September 19-21, 2022; https://dl.acm.org/doi/10.1145/3517212.3558092
- Beck, M., and Moore, T.; How We Ruined the Internet; CoRR abs/2306.01101 (2023); https://arxiv.org/abs/2306.01101
- NVIDIA; What is Universal Scene Description?; https://developer.nvidia.com/usd
- Oran, D. R.; Considerations in the Development of a QoS Architecture for CCNx-Like Information-Centric Networking Protocols; RFC 9064; June 2021; https://datatracker.ietf.org/doc/rfc9064/
- Patil, V., Desai, H., and Zhang, L.; Kua: A Distributed Object Store over Named Data Networking; In 9th ACM Conference on Information-Centric Networking, ICN 2022, Osaka, Japan, September 19-21, 2022; https://dl.acm.org/doi/10.1145/3517212.3558083
- Radoff, J.; Metaverse Interoperability, Part 1: Challenges; https://medium.com/building-the-metaverse/metaverse-interoperability-part-1-challenges-716455ca439e, Apr 2022
- Khronos Group; glTF Runtime 3D Asset Delivery; https://www.khronos.org/gltf/
- Yu, Y., Afanasyev, A., Clark, D., claffy, k., Jacobson, V., and Zhang, L.; Schematizing Trust in Named Data Networking; In Proceedings of the 2nd ACM Conference on Information-Centric Networking (New York, NY, USA, 2015), ACM ICN '15, Association for Computing Machinery; https://dl.acm.org/doi/10.1145/2810156.2810170
Distributed Computing in Information-Centric Networking
This is an introduction to our paper:
- Wei Geng, Yulong Zhang, Dirk Kutscher, Abhishek Kumar, Sasu Tarkoma, Pan Hui; SoK: Distributed Computing in ICN; 10th ACM Conference on Information-Centric Networking (ACM ICN '23); October 9-10, 2023, Reykjavik, Iceland; https://doi.org/10.1145/3623565.3623712; pre-print available at https://arxiv.org/abs/2309.08973.
Distributed computing is the basis for all relevant applications on the Internet. Based on well-established principles, different mechanisms, implementations, and applications have been developed that form the foundation of the modern Web.
The Internet, with its stateless forwarding service and end-to-end communication model, promotes certain types of communication for distributed computing. For example, IP addresses and/or DNS names provide different means for identifying computing components. Reliable transport protocols (e.g., TCP, QUIC) promote interconnecting modules. Communication patterns such as REST and protocol implementations such as HTTP enable certain types of distributed computing interactions, and security frameworks such as TLS and the web PKI constrain the use of public-key cryptography for different security functions.
From Distributed Computing...
Distributed computing has different facets, for example, client-server computing, web services, stream processing, distributed consensus systems, and Turing-complete distributed computing platforms. There are also different perspectives on how distributed computing should be implemented on servers and network platforms, a research area that we refer to as Computing in the Network. Active Networking, one of the earliest works on computing in the network, intended to inject programmability into the network itself through customized processing of data packets; however, security and complexity considerations proved to be major limiting factors, preventing its wider deployment.
Dataplane programmability refers to the ability to program behavior, including application logic, on network elements and SmartNICs, thus enabling some form of in-network computing. Alternatively, different types of server platforms and lightweight execution environments enable other forms of distributing computation in networked systems, for example through architectural patterns such as edge computing.
... To Computing in the Network
With currently available Internet technologies, we can observe a relatively strict layering of networking and distributed computing, i.e., distributed computing is typically implemented in overlays, with Content Distribution Networks (CDNs) being a prominent and ubiquitous example. Recently, there has been growing interest in revisiting this relationship, for example by the IRTF Computing in the Network Research Group (COINRG) -- motivated by advances in network and server platforms, e.g., the development of programmable data plane platforms and of different types of distributed computing frameworks, such as stream processing and microservice frameworks.
This interest is also motivated by the recent development of new distributed computing applications such as distributed machine learning (ML); emerging applications such as the Metaverse suggest new levels of scale, both in the data volume handled by distributed computing and in the pervasiveness of distributed computing tasks in such systems. Two research questions stem from these developments:
- How can we build distributed computing systems in the network that can leverage the on-path location of compute functions, e.g., optimally aligning stream processing topologies with networked computing platform topologies?
- How can the network support distributed computing in general, so that the design and operation of such systems can be simplified, but also so that different optimizations can be achieved to improve performance and robustness?
Issues in Legacy Distributed Computing
Although there are many distributed computing applications, it is also worth noting that there are many limitations and performance issues. Factors such as network latency, data skew, checkpoint overhead, back pressure, garbage collection overhead, memory management, and serialization/deserialization overhead can all influence efficiency. Various optimization techniques can be applied to alleviate these issues, including memory tuning, refining the checkpointing process, and adopting efficient data structures and algorithms.
Some performance problems and complexity issues stem from the overlay nature of current systems and their way of realizing the above-mentioned mechanisms with makeshift solutions based on TCP/IP and associated protocols such as DNS. For example, Network Service Mesh has been characterized as architecturally complex because of the so-called sidecar approaches and their implementation problems.
In systems that are layered on top of HTTP or TCP (or QUIC), compute nodes typically cannot assess network performance directly -- only indirectly through observed throughput and buffer under-runs. Information-centric data-flow systems, such as IceFlow, intend to provide better visibility, and thus better joint optimization potential, through more direct access to data-oriented communication resources. Moreover, some coordination tasks that are based on exchanging updates of shared application state can be elegantly mapped to named data publications in a hierarchical namespace, as the different dataset synchronization (Sync) protocols in NDN have demonstrated.
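The following toy example illustrates the Sync idea under our own simplifying assumptions (an in-memory dictionary stands in for the network, and the naming scheme is invented): participants publish versioned state updates under a hierarchical namespace, and reconciliation reduces to comparing version vectors and fetching the missing names.

```python
# A toy illustration of the Sync idea: participants announce state
# updates as named, versioned publications; learning the latest
# version per producer is enough to fetch any missed updates.

published = {}  # name -> content, standing in for the network/caches

def publish(producer, seq, content):
    name = f"/app/shared-state/{producer}/seq={seq}"
    published[name] = content
    return name

def reconcile(known_versions, advertised_versions):
    """Compare version vectors and return the names still to fetch."""
    missing = []
    for producer, latest in advertised_versions.items():
        for seq in range(known_versions.get(producer, 0) + 1, latest + 1):
            missing.append(f"/app/shared-state/{producer}/seq={seq}")
    return missing

publish("alice", 1, "op:add-cube")
publish("alice", 2, "op:move-cube")
publish("bob", 1, "op:set-color")

print(reconcile({"alice": 1}, {"alice": 2, "bob": 1}))
# -> ['/app/shared-state/alice/seq=2', '/app/shared-state/bob/seq=1']
```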
Information-Centric Distributed Computing
In our paper on Distributed Computing in ICN at ACM ICN-2023, we focus on distributed computing and on how information-centricity in the network and application layers can support the development and operation of such systems. The rich set of distributed computing systems in ICN suggests that ICN can offer benefits for distributed computing, such as better performance, security, and productivity when building corresponding applications.
ICN, with its data-oriented operation and generally more powerful forwarding layer, provides an attractive platform for distributed computing. Several distributed computing protocols and systems have been proposed for ICN, with different feature sets and technical approaches, including Remote Method Invocation (RMI) as an interaction model as well as more comprehensive distributed computing platforms. RMI systems such as RICE leverage the fundamental name-based forwarding service in ICN and map requests to Interest messages and method names to content names (although the actual implementation is more intricate). Method parameters and results are also represented as content objects, which provides an elegant platform for such interactions.
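The sketch below shows only the basic name-mapping idea, not the actual RICE protocol (which handles large parameters, client authorization, and long-running computations quite differently); service and method names are hypothetical:

```python
# Hedged sketch of the name mapping only: a method call becomes a
# request for a named result. Parameters are folded into the name
# via a digest for brevity, which only works for small arguments.

import hashlib

def rmi_name(service, method, args):
    digest = hashlib.sha256(repr(args).encode()).hexdigest()[:8]
    return f"/{service}/rmi/{method}/params-sha256={digest}"

# The same call by any consumer yields the same name, so a cached
# result of an idempotent method can satisfy later callers.
print(rmi_name("render-farm", "raytrace",
               {"scene": "/acme/scene/v7", "frame": 1007}))
```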
ICN generally attempts to provide a more useful service to data-oriented applications but can also be leveraged to support distributed computing specifically.
Names
Accessing named data in the network as a native service can remove the need for mapping application logic identifiers such as function names to network and process identifiers (IP addresses, port numbers), thus simplifying implementation and run-time operation, as demonstrated by systems such as Named Function Networking (NFN), RICE, and IceFlow. It is worth noting that, although ICN does not generally require an explicit mapping of names to other domain identifiers, such networks require suitable forwarding state, e.g., obtained from configuration, dynamic learning, or routing.
Data-orientedness
ICN's notion of immutable data with strong name-content binding through cryptographic signatures and hashes is conducive to many distributed computing scenarios, as both static data objects and dynamic computation results, such as input parameters and result values, can be sent directly as ICN data objects. NFN first demonstrated this.
Securing distributed computing could be supported better insofar as ICN does not require additional dependencies on public-key or connection-securing infrastructure: keys and certificates are simply named data objects, and centralized trust anchors are not necessarily needed. Larger data collections can be aggregated and re-purposed via manifests (FLIC), enabling "small" and "big data" computing in one single framework that is congruent to the packet-level communication in a network. IceFlow uses such an aggregation approach to share identical stream processing result objects in multiple consumer contexts.
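A toy manifest in the spirit of FLIC, with invented field names, illustrates how one signed, named object can aggregate hash-named segments so that the same segments can be shared across consumer contexts:

```python
# Illustrative manifest: one named, signed object that lists the
# (hash-named) segments of a large result. Field names are
# hypothetical simplifications of the real FLIC format.

import hashlib

def segment_hash(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

segments = [b"chunk-0 of a big result", b"chunk-1", b"chunk-2"]

manifest = {
    "name": "/iceflow/stage2/result/window=17/_manifest",
    "segments": [segment_hash(s) for s in segments],
}

# A consumer fetches the manifest by name, then each segment by
# hash; the hash both names and authenticates the segment.
for h in manifest["segments"]:
    print("fetch segment by hash:", h[:16], "...")
```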
Data-orientedness eliminates the need for connections; even reliable communication in ICN is completely data-oriented. If higher-layer (distributed computing) transactions can be mapped to network-layer data retrieval, then server complexity can be reduced (no need to maintain several connections), and consumers get direct visibility into network performance. This can enable performance optimizations, such as linking network and computing flow control loops (one realization of joint optimization), as shown by IceFlow.
Location independence and data sharing
Embracing the principle of accessing named and authenticated data also enables location independence, i.e., corresponding data can be obtained from any place in the network, such as replication points (repos) and caches. This fundamentally enables better multi-source/path capabilities as well as data sharing, i.e., multiple data retrieval operations for one named data object by different consumers can potentially be completed by a cache, repo, or peer in the network.
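A toy content store illustrates the data-sharing effect: a second request for the same name can be satisfied locally rather than traveling to the producer (plain Python, hypothetical names):

```python
# Toy content store: a repeated request for the same name is
# answered from the cache instead of reaching the producer.

cache = {}

def fetch(name, produce):
    if name in cache:
        return cache[name], "cache hit"
    data = produce(name)      # stands in for forwarding upstream
    cache[name] = data
    return data, "fetched from producer"

print(fetch("/videos/talk/seg=3", lambda n: b"...")[1])  # fetched from producer
print(fetch("/videos/talk/seg=3", lambda n: b"...")[1])  # cache hit
```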
Stateful Forwarding
ICN provides stateful, symmetric forwarding, which enables general performance optimizations such as in-network retransmissions, more control over multipath forwarding, and load balancing. This concept could be extended to support distributed computing specifically, for example, if load balancing is performed based on RTT observations for idempotent remote-method invocations.
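As a sketch of this idea (not an implementation of any real forwarder's strategy), the following toy strategy picks the next hop for idempotent invocations based on smoothed RTT observations:

```python
# A toy forwarding strategy: for idempotent invocations, pick the
# next hop with the lowest smoothed RTT. Purely illustrative; real
# forwarders keep such state per name prefix in measurement tables.

class RttStrategy:
    def __init__(self, faces):
        self.srtt = {face: 0.1 for face in faces}  # seconds, optimistic start

    def choose_face(self):
        return min(self.srtt, key=self.srtt.get)

    def record_rtt(self, face, rtt, alpha=0.125):
        # Exponentially weighted moving average, as in classic SRTT.
        self.srtt[face] = (1 - alpha) * self.srtt[face] + alpha * rtt

strategy = RttStrategy(["eth0", "eth1"])
strategy.record_rtt("eth0", 0.300)
strategy.record_rtt("eth1", 0.040)
print("forward next Interest via", strategy.choose_face())  # -> eth1
```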
More Networking, Less Management
The combination of data-oriented, connection-less operation and stateful (more powerful) forwarding in ICN shifts functionality from management and orchestration layers (back) to the network layer, which can reduce complexity; this effect can be especially pronounced in distributed computing. For example, legacy stream processing and service mesh platforms typically must manage connectivity between deployment units (pods in Kubernetes). In Apache Flink, a central orchestrator manages the connections between task managers (node agents). Systems such as IceFlow have demonstrated a more self-organized and decentralized stream-processing approach, and the presented principles are applicable to other forms of distributed computing.
In summary, we can observe that ICN's general approach of having the network provide a more natural (data retrieval) platform for applications benefits distributed computing in similar ways as it benefits other applications. One particularly promising approach is the elimination of layer barriers, which enables certain optimizations.
In addition to NFN, there are other approaches that jointly optimize the utilization of network and computing resources to provide network service mesh-like platforms, such as edge intelligence using federated learning, advanced CDNs where nodes can dynamically adapt to user demands according to content popularity, such as iCDN and OpenCDN, and general computing systems, such as Compute-First Networking, IceFlow, and ICedge.

Our paper on Distributed Computing in ICN at ACM ICN-2023 provides a comprehensive analysis and understanding of distributed computing systems in ICN, based on a survey of more than 50 papers. Naturally, these different efforts cannot be directly compared because of their different nature. We categorized the different ICN distributed computing systems and individual approaches, and highlighted their specific properties.
The scope of this study is technologies for ICN-enabled distributed computing. Specifically, we divide the different approaches into four categories, as shown in the figure above: enablers, protocols, orchestration, and applications. The contributions of this study are as follows:
- A discussion of the benefits and challenges of distributed computing in ICN.
- A categorization of different proposed distributed computing systems in ICN.
- A discussion of lessons learned from these systems.
- A discussion of existing challenges and promising directions for future work.
Recent Research on Distributed Computing in ICN
I am providing some pointers to my previous research on distributed computing in ICN below.
The paper that has led to this article:
- Wei Geng, Yulong Zhang, Dirk Kutscher, Abhishek Kumar, Sasu Tarkoma, Pan Hui; SoK: Distributed Computing in ICN; 10th ACM Conference on Information-Centric Networking (ACM ICN '23); October 9-10, 2023, Reykjavik, Iceland; https://doi.org/10.1145/3623565.3623712; pre-print available at https://arxiv.org/abs/2309.08973.
Current work in the Computing in the Network Research Group of the IRTF:
- Dirk Kutscher, Teemu Kärkkäinen, Jörg Ott; Directions for Computing in the Network; Internet Draft draft-irtf-coinrg-dir-00, Work in Progress; August 2023
Reflexive Forwarding and Remote Method Invocation
Providing a unified remote computation capability in ICN presents some unique challenges, among which are timer management, client authorization, and binding to state held by servers, while maintaining the advantages of ICN protocol designs like CCN and NDN. In the RICE work, we developed a unified approach to remote function invocation in ICN that exploits the attractive ICN properties of name-based routing, receiver-driven flow and congestion control, flow balance, and object-oriented security, while presenting a natural programming model to the application developer. The RICE protocol leverages an ICN extension called Reflexive Forwarding that provides ICN-idiomatic method parameter transmission.
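The following schematic trace, with illustrative message and name formats that simplify the actual specification, shows the shape of a reflexive exchange: the producer pulls the method parameters from the consumer over the reverse path before returning the result.

```python
# A schematic trace of a reflexive exchange (simplified; message
# and name formats are illustrative, not the draft's wire format).
# Instead of the consumer stuffing large parameters into its
# Interest, the producer fetches them over the reverse path.

flow = [
    ("consumer -> producer", "Interest /svc/f  (+ reflexive prefix /rnp/7)"),
    ("producer -> consumer", "Interest /rnp/7/params/seg=0"),
    ("consumer -> producer", "Data     /rnp/7/params/seg=0"),
    ("producer -> consumer", "Data     /svc/f  (result, once computed)"),
]

for direction, message in flow:
    print(f"{direction:22} {message}")
```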
- RICE: Remote Method Invocation in ICN (best paper award at ACM ICN-2018)
- Reflexive Forwarding in ICN
Distributed Computing Frameworks
Leveraging RICE as a mechanism, we have developed Compute-First Networking (CFN) in ICN, a Turing-complete distributed computing platform. IceFlow is a proposal for decentralized dataflow processing in ICN.
- Compute-First Networking (CFN): Distributed Computing Meets ICN
- IceFlow: Information-Centric Dataflow: Re-Imagining Reactive Distributed Computing
Applications
Based on Reflexive Forwarding, we have developed a concept for RESTful ICN that leverages CCNx key exchange for setting up security contexts and keys that could then be used for secure, data-oriented REST-like communication.
Delay-Tolerant LoRa leveraged Reflexive Forwarding to enable constrained LoRa nodes to "phone home" when they want to transmit data, thus enabling new ways (without central network and application servers) for connecting LoRa networks to the Internet.
Reflexive Forwarding in Named Data Networking
Current Information-Centric Networking protocols such as CCNx and NDN have a wide range of useful applications in content retrieval and other scenarios that depend only on a robust two-way exchange in the form of a request and response (represented by an Interest-Data exchange in the case of the two protocols noted above). A number of important applications, however, require placing large amounts of data in the Interest message, and/or more than one two-way handshake.
While these can be accomplished using independent Interest-Data exchanges by reversing the roles of consumer and producer, such approaches can be both clumsy for applications and problematic from a state management, congestion control, or security standpoint. Reflexive Forwarding is a proposed extension to the CCNx and NDN protocol architectures that eliminates the problems inherent in using independent Interest-Data exchanges for such applications.
The protocol is specified in draft-oran-icnrg-reflexive-forwarding and has been used in a few of our research projects such as:
- RICE: Remote Method Invocation in ICN (best paper award at ACM ICN-2018)
- Compute-First Networking (CFN): Distributed Computing Meets ICN
- RESTful ICN
- Delay-Tolerant LoRa ICN Networking
My student intern Xinchen Jin from ShanghaiTech has implemented the Reflexive Forwarding specification in NDN (with modifications to ndn-cxx and NFD) and set up a testbed in Mini-NDN for experiments over multiple forwarders.
Platforms, Economics, Minimal Global Broadcast
Decentralization of the Internet Research Group at IETF-117
The Decentralization of the Internet Research Group (DINRG) of the Internet Research Task Force (IRTF) met on 2023-07-25 at the 117th meeting of the Internet Engineering Task Force (IETF). DINRG aims to provide, for the research and engineering community, both an open forum to discuss the Internet centralization phenomena and associated potential threats, and a platform to facilitate the coordination of efforts in identifying the causes of observed consolidations and the mitigation solutions.
For context, we recently published a workshop report that discusses some fundamental problems: ACM SIGCOMM CCR: Report of 2021 DINRG Workshop on Centralization in the Internet
The DINRG meeting at IETF-117 featured three highly interesting talks by Cory Doctorow, Volker Stocker & William Lehr, and Christian Tschudin that attracted considerable attention and led to lively discussions during and after the meeting. There is a full meeting recording on YouTube, and we have published meeting minutes. Special thanks to Ryo Yanagida, A.J. Stein, and Eve Schooler for taking notes at the meeting.
Cory Doctorow: Let The Platforms Burn: Bringing Back the Good Fire of the Old Internet

Cory Doctorow, a science fiction author, activist, and journalist, talked about a trend in platform evolution that he calls enshittification: platforms first grow their user bases quickly, following platform economics and market-domination strategies, and then lock users in through technical and economic barriers. In advertising (and digital online market) platforms, the platform operator sits between the users and other companies (so-called "two-sided market" scenarios), where the user base, the obtained personal information, and behavioral surveillance results become assets that attract such companies.
For example, in order to make advertising more effective, social media platforms would increase control over users' timelines, i.e., the content that is presented to them, and make it harder for users to leave the platform. Overall, this results in a negative user experience. To increase advertisement revenue, platforms would then sell attention more directly, i.e., exploit their position in the advertisement market. See Cory's posting on enshittification for details.
This process, and the difficulties in effectively controlling and regulating platform companies, has led to a permanent crisis that Cory compares to a fire hazard situation. Platforms were rocked by scandals: private data theft, accidental leaks, intended data sharing with other institutions, etc.
While the computer and networking world has seen platforms (operating systems, PC companies, online services) constantly emerge and vanish before, the current concentrated tech market makes it impossible to let harmful (or not very user-friendly) platforms dissolve. This is due to network effects (Metcalfe's law) and switching costs, for example when leaving a dominant social media platform means losing connections to friends. This monopoly situation is enabled by a legal environment with ineffective antitrust laws, which has allowed dominating platforms to constantly acquire competing companies and potentially disruptive businesses.
With new laws for content moderation and censorship, platforms get even more control over their users (in the name of preventing harassment), without making it any easier to leave platforms. In his article (and podcast episode) called "Let the Platforms Burn", Cory concluded:
Platforms collapse "slowly, then all at once." The only way to prevent sudden platform collapse syndrome is to block interoperability so users can't escape the harms of your walled garden without giving up the benefits they give to each other.
We should stop trying to make the platforms good. We should make them gone. We should restore the "good fire" that ended with the growth of financialized Big Tech empires. We should aim for soft landings for users, and stop pretending that there's any safe way to live in the fire zone.
We should let the platforms burn.
With respect to the (de-)centralization discussion in DINRG and the Internet community, this raises some important questions:
- What is the role of open interfaces, standards, etc. today in reality? Are we still using them to build interoperable, possibly federated systems?
- How should technology development, standards setting, and regulation evolve to effectively enable user choice (migration, platform selection)?
Discussion
There was a question whether the real issue was that platforms have a remarkable grip because, in buying each other, they are effectively buying the users, i.e., whether the primary concern is the size these platforms reach with this method or the method itself. Cory replied that size certainly promotes distortions and that scale is a problem for two reasons. First, the contract enforcement function is overwhelmed: when the referee is less powerful than the team, it allows teams to cheat.
Secondly, even if we stipulate that companies are well run by smart people, they all make errors, and at that scale the mistakes are much more consequential.
Another question was who is willing to implement interoperability standards and how companies can be convinced to do so. Cory talked about companies' motivations: companies want walled gardens, or APIs whose terms favor them. What they really seek (over competitive interoperability) are legal remedies against those who reverse engineer to competitively enter the market. Restoring a mandate and permission for interoperators, if that were possible, would help to avoid unquantifiable risk.
Some of these strategies are discussed in Cory's upcoming book "How to Seize the Means of Computation".
Volker Stocker and William Lehr: Ecosystem Evolution and Digital Infrastructure Policy Challenges: Insights & Reflections from an Economics Perspective
Volker Stocker of the Weizenbaum Institute for the Networked Society and William Lehr of the Advanced Networking Architecture Group in CSAIL at MIT presented their research on ecosystem evolution and policy challenges from an economics perspective. Volker is an economist with broad experience in interdisciplinary research, and William is a telecommunications and Internet economist and consultant.
Volker talked about the convergence of digital and non-digital worlds and mentioned a few trends that needed attention:
- The shift to the edge and the localization of traffic.
- Ownership and management have shifted in the Internet ecosystem: sometimes hyper-giant content providers with proprietary networks, sometimes edge clouds or roving resources.
- Potential consequences: value chain constellations are more complex, diverse, and dynamic, resulting in changing ownership and governance structures and industry structures, as well as competitive and innovation dynamics.
Volker made three points in his reflections on ecosystem evolution:
- Essential digital infrastructure is about more than connectivity, not just connectivity providers like IPX and ISPs.
- The majority of the requisite investment will be private! E.g., access ISPs, CAPs, CDNs, upstream ISPs, and end users are all investing.
- More and new forms of resource sharing will be needed. More network sharing agreements: active & passive sharing arrangements and optimal models are evolving.
William highlighted that the legacy Internet is not the Internet of today, and the economics of yesterday are not those of today. One key question is how to restore meaningful competition.
He mentioned the following challenges and paths forward:
- Multidisciplinary Engagement & Feedback
- Asymmetric Info & Measurements: Metrics and data (and their provenance)
- Capacity to Detect and Act
Discussion
There was a discussion about how the private sector is expected to profit from the infrastructure development needed by society (assuming investments from the private sector). William replied that
government built or subsidized most infrastructure in most places, with small investments needed initially. Some say significant investment should come from the utilities, which we should not dismiss. But we likely will need a strong argument on how to get there. Either we say there is a lot of money coming from the public sector (for example, through taxes) or we have to find a way to manage private actors. Thus policy issues are important. Some of these questions are discussed in William's paper on "Getting to the Broadband Future Efficiently with BEAD Funding".
Another question alluded to policy lagging behind technical development, i.e., the mismatch between the speed of innovation and the speed of regulation (which is really hard at the national and international levels). William said that the best hope is standards and architectures that provide options, and he mentioned the importance of open source software.
Christian Tschudin: Minimal Global Broadcast
Christian Tschudin of the University of Basel presented a research idea called "Minimal Global Broadcast" (abstract). Christian is a computer science professor with a track record of research in Information-Centric Networking, distributed computing, and decentralized systems.

Christian started out from the observation that contacting peers in a decentralized environment is challenging. The key question is: how do you learn about a peer's current coordinates and their preferences? The platforms themselves often offer directories, but these are logically centralized rendezvous servers with a partial view, and they require trust in the platforms. Instead of conceptualizing an uber-directory service, Christian proposed a global information dissemination system that focuses on the data, asserting an allowance of "200 bytes of novelty per month and citizen".
This global broadcast channel can (and should) be implemented in many ways, starting from sneakernets to shortwave communication and including Internet-based online-services. Christian explained how such a service could be used to facilitate user migration and user discovery on their current preferred platform(s).
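A back-of-the-envelope calculation suggests why such an allowance is "minimal"; the population figure below is our own assumption, not a number from the talk:

```python
# Rough check of the "200 bytes of novelty per month and citizen"
# allowance, assuming roughly 8 billion participants (assumption).

citizens = 8_000_000_000
bytes_per_month = 200
seconds_per_month = 30 * 24 * 3600

total = citizens * bytes_per_month           # 1.6e12 bytes
print(f"{total / 1e12:.1f} TB/month globally")
print(f"{total / seconds_per_month / 1e6:.2f} MB/s sustained")
# -> 1.6 TB/month, about 0.62 MB/s sustained: modest enough that
#    sneakernets, shortwave, or simple online services could
#    plausibly carry it.
```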
Discussion
There were some questions on trust in user identities. Christian said that trust roots would be external to MGB, and that there would be different levels of trust, e.g., for interpersonal relationships vs. business relationships.
References
- DINRG charter
- DINRG@IETF-117 Meeting Material
- DINRG@IETF-117 Meeting Minutes
- DINRG@IETF-117 Video Recording
- Volker Stocker and William Lehr; Ecosystem Evolution and Digital Infrastructure Policy Challenges: Insights & Reflections from an Economics Perspective; presentation at IRTF DINRG; 2023-07-25; San Francisco
- ACM SIGCOMM CCR: Report of 2021 DINRG Workshop on Centralization in the Internet
- Cory Doctorow; The 'Enshittification' of TikTok – Or how, exactly, platforms die; Wired; 2023-01-23
- Cory Doctorow; Let the Platforms Burn; 2023-07-10
- Lehr, W. (2023), “Getting to the Broadband Future Efficiently with BEAD funding,” white paper prepared with support from WISPA, January 2023, available at https://www.wispa.org/docs/Lehr_White_Paper_Final.pdf
- Lehr, W. (2022), “5G and AI Convergence, and the Challenge of Regulating Smart Contracts,” in Europe’s Future Connected: Policies and Challenges for 5G and 6G Networks, edited by E. Bohlin and F. Cappelletti, European Liberal Forum (ELF), pages 72-80, available at https://liberalforum.eu/publication/europes-future-connected-policies-and-challenges-for-5g-and-6g-networks/.
- Lehr, W. and V. Stocker (2023), "Next-generation Networks: Necessity of Edge Sharing," forthcoming Frontiers in Computer Science: Networks and Communications, Summer 2023
- Oughton, E., W. Lehr, K. Katsaros, I. Selinis, D. Bubley, and J. Kusuma (2021), "Revisiting Wireless Internet Connectivity: 5G vs. Wi-Fi 6," Telecommunications Policy, 45 (2021) 102127, available at https://authors.elsevier.com/sd/article/S0308-5961(21)00032-X.
- Stocker, V. and W. Lehr (2022), “Regulatory Policy for Broadband: A Response to the ‘ETNO Report’s’ Proposal for Intervention in Europe’s Internet Ecosystem,” white paper, available at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4263096.
- Stocker, V., Smaragdakis, G., and Lehr, W. (2020). The state of network neutrality regulation. SIGCOMM Comput. Commun. Rev. 50, 1 (January 2020), 45–59. https://doi.org/10.1145/3390251.3390258
Work in progress:
- Frias, Z., Mendo, L., Lehr, W. and Stocker, V. (2023), "Measuring NextGen Mobile Broadband: Challenges and Research Agenda for Policymaking", 32nd European International Telecommunications Society Conference (EuroITS2023), June 19-20, Madrid, Spain.
- Lehr, W., D. Sicker, D. Raychaudhuri, V. Singh (2023), “Edge Computing: digital infrastructure beyond broadband connectivity,” TPRC51: Annual Research Conference on Communications, Information and Internet Policy, September 22-23, 2023, American University, Washington DC
- Stocker, V., Bauer, J., and Pourdamghani, A. (2023), "Innovation Dynamics in the Internet Ecosystem & Digital Economy Policy", 32nd European International Telecommunications Society Conference (EuroITS2023), June 19-20, Madrid, Spain.
- Christian Tschudin; Minimal Global Broadcast (MGB); presentation and abstract at IRTF DINRG meeting at IETF-117, 2023-07-25
Directions for Computing in the Network
We have updated our Internet Draft on Directions for Computing in the Network.
In-network computing can be conceived in many different ways -- from active networking, data plane programmability, running virtualized functions, and service chaining to distributed computing.
This memo proposes a particular direction for Computing in the Network (COIN) research and lists suggested research challenges.
This is now an adopted COINRG work item.
Link to draft: draft-irtf-coinrg-dir.
ACM SIGCOMM CCR: Report of 2021 DINRG Workshop on Centralization in the Internet
ACM SIGCOMM CCR just published the report of our 2021 DINRG meeting on Centralization in the Internet.
Executive Summary
There is a consensus within the networking community that the Internet consolidation and centralization trend has progressed rapidly over recent years, as measured by the structural changes to the data delivery infrastructure, the control power over system platforms, application development and deployment, and even standards development efforts. This trend has brought impactful technical, societal, and economic consequences.
When the Internet was first conceived as a decentralized system 40+ years ago, few people, if any, could have foreseen how it looks today. How has the Internet evolved from there to here? What have been the driving forces behind the observed consolidation? From a retrospective view, was there anything that might have been done differently to influence the course the Internet has taken? And most importantly, what should and can be done now to mitigate the trend of centralization? Although there is significant interest in these topics, there has not been much structured discussion on how to answer these important questions.
The IRTF Research Group on Decentralizing the Internet (DINRG) organized a workshop on “Centralization in the Internet” on June 3, 2021, with the objective of starting an organized open discussion on the above questions. Although there seems to be an urgent need for effective countermeasures to the centralization problem, this workshop took a step back: before jumping into solution development to steer the Internet away from centralization, we wanted to discuss how the Internet has evolved and changed, and what have been the driving forces and enablers for those changes. The organizers and part of the community believe that a sound and evidence-based understanding is the key towards devising effective remedy and action plans. In particular, we would like to deepen our understanding of the relationship between the architectural properties and economic developments.
This workshop consisted of two panels; each panel started with an opening presentation, followed by panel discussions and then open-floor discussions. There was also an all-hands discussion at the end. The three hours of workshop presentations and discussions showed that the Internet centralization problem space is highly complex and filled with intrinsic interplays between technical and economic factors.
This report aims to summarize the workshop outcome with a broad-brush picture of the problem space. We hope that this big-picture view can help the research group, as well as the broader IETF community, reach a clearer and shared high-level understanding of the problem, and from there identify what actions are needed, which of them require technical solutions, and which of them are regulatory issues that require the technical community to provide input to regulatory sectors to develop effective regulation policies.
You can find the report in the ACM Digital Library. We also have a pre-print version.
IRTF Decentralization of the Internet Research Group at IETF-117
Recent years have witnessed the consolidation of Internet applications and services, as well as the infrastructure. The Decentralization of the Internet Research Group (DINRG) aims to provide, for the research and engineering community, both an open forum to discuss the Internet centralization phenomena and associated potential threats, and a platform to facilitate the coordination of efforts in identifying the causes of observed consolidations and the mitigation solutions.
Our upcoming DINRG meeting at IETF-117 will feature three talks – by Cory Doctorow, Volker Stocker & William Lehr, and Christian Tschudin.
| # | Topic | Presenter | Time |
|---|-------|-----------|------|
| 1 | DINRG Chairs' Presentation: Status, Updates | Chairs | 05 min |
| 2 | Let The Platforms Burn: Bringing Back the Good Fire of the Old Internet | Cory Doctorow | 30 min |
| 3 | Ecosystem Evolution and Digital Infrastructure Policy Challenges: Insights & Reflections from an Economics Perspective | Volker Stocker & William Lehr | 20 min |
| 4 | Minimal Global Broadcast (MGB) | Christian Tschudin | 20 min |
| 5 | Wrap-up & Buffer | All | 15 min |
Logistics
DINRG Meeting at IETF-117 – 2023-07-25, 20:00 to 21:30 UTC
Internet Centralization on The Hedge
Lixia Zhang and I discussed Internet centralization together with Russ White, Alvaro Retana, and Tom Ammon on The Hedge podcast.
Recent years have witnessed the consolidation of Internet applications and services, as well as the infrastructure. The Decentralization of the Internet Research Group (DINRG) aims to provide, for the IRTF/IETF community, both an open forum to discuss the Internet centralization phenomena and associated potential threats, and a platform to facilitate the coordination of efforts in identifying the causes of observed consolidations and the mitigation solutions.
DINRG's main objectives include the following:
- Measurement of Internet centralization and the consequential societal impacts;
- Characterization and assessment of observed Internet centralization;
- Investigation of the root causes of Internet centralization, and articulation of the impacts from market economy, architecture and protocol designs, as well as government regulations;
- Exploration of new research topics and technical solutions for decentralized system and application development;
- Documentation of the outcome from the above efforts; and
- Recommendations that may help steer the Internet away from further consolidation.
Named Data Microverse
Our project proposal on Named Data Microverse was selected as a winner of the Future of Data Challenge.
The Named Data Microverse project explores how Information-Centric Networking (ICN) can enable a free, open, and decentralized approach to "the metaverse". The project aims to balance scalability and market-based innovation with democratization, trustworthiness, and equitable empowerment of individuals. ICN provides an architectural foundation for secure, distributed applications to be created more easily and provides resilience in natural disasters, better mobility support, cloud-optional local communication, improved privacy, and other benefits that are not addressed solely by "Web3" technologies.
This is a joint project with Jeff Burke and Lixia Zhang at UCLA.
Named Data Metaverse
I had the pleasure of chairing a really interesting panel discussion at the NDN Community Meeting (NDNComm 2023) on March 3, 2023.

The panel discussed opportunities and challenges for building Metaverse systems with a Named Data Networking approach. Specific discussion questions included:
- What are architectural, security-related, and performance-related issues in Metaverse systems today?
- What communication patterns could be supported by NDN platforms?
- How can the data-oriented model and decentralized trust establishment help in developing better Metaverse systems, and at what layer would NDN technologies help?
- What are the gaps, challenges, and research opportunities for NDN evolution to address Metaverse system requirements?

The panelists were:
- Paulo Mendes (Airbus Research)
- Michelle Munson (Eluvio)
- Todd Hodes (Eluvio)
- Jeff Burke (UCLA REMAP)
The panel discussed scenarios for Named Data in the Metaverse such as AR in live performance, real-time ML for transformed reality, architectures for emerging arts, media, and entertainment, commercial content distribution and experience delivery, as well as Metaverse VR experiences in challenged networks.
Jeff Burke introduced exciting ideas for re-imagining VR-enhanced live performances and shared some ideas and insights from building such applications. In his class of applications, there is a lot of local interaction (for example, in a theater), creating interesting challenges and opportunities for local, decentralized Metaverses. On the application layer, Metaverse VR applications would likely use scene and model descriptions such as USD and glTF, so the question arises which opportunities exist for mapping the corresponding names to "network layer" names.
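As a purely hypothetical sketch of that mapping question, one could derive network-layer name trees from application-layer asset paths, making individual scene assets independently fetchable; all names and components below are invented for illustration:

```python
# Hypothetical sketch: derive NDN-style names from glTF asset paths
# so that individual scene assets become independently fetchable,
# versioned objects. Names and components are illustrative only.

def asset_names(stage_prefix, gltf_asset_path, version):
    """Map one glTF asset path to a versioned name tree."""
    base = f"{stage_prefix}/{gltf_asset_path.strip('/')}/v{version}"
    return [f"{base}/_manifest", f"{base}/mesh", f"{base}/textures/0"]

for name in asset_names("/performance/stage-left", "props/lantern.gltf", 12):
    print(name)
```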
Michelle Munson and Todd Hodes introduced Eluvio's Content Fabric Protocol (CFP), a platform aimed at commercial-grade decentralized content distribution, providing content-native addressability and programmability mechanisms for storage, distribution, and built-in streaming and content processing. CFP uses blockchain governance for versioning, access control, and on-chain/cross-chain monetization. An example use case is the Warner Movieverse.
The panel discussed different approaches to dealing with named data as a fundamental building block and some specific use cases for networked Metaverse systems, such as (secure) in-network content transformation. Overall, the panel was a great initial discussion of these ideas that should definitely be continued. Check out the list of related events below for possible venues.
Related Events
- Metaverse-focused ICN Research Group meeting at the upcoming IETF-116 meeting (ICNRG meets on March 28, 09:30 to 11:00 JST; online participation possible).
- Metaverse side meeting at IETF-116 on March 30th at 11:30. See IETF Metaverse mailing list for agenda and details.
- IEEE MetaCom Workshop on Decentralized, Data-Oriented Networking for the Metaverse (DORM)



