Dirk Kutscher

Personal web page

ACM CoNEXT 2024 Workshop on the Decentralization of the Internet


Our ACM CoNEXT-2024 workshop on the decentralization of the Internet on Monday, December 9th 2024 in LA has an exciting agenda – don't miss it! Check out the workshop homepage for up-to-date information.

09:00 Session 1: Keynotes

  1. Keynote by Cory Doctorow: DISENSHITTIFY OR DIE! How computer scientists can halt enshittification to make a new, good internet and condemn today's enshitternet to the scrapheap of history.
  2. Keynote by Michael Karanicolas: The Fediverse Papers: Constitutional, Governance, and Policy Questions for a New Paradigm of Networking

11:00 Session 2: Decentralized Systems

  1. Martin Kleppmann et al.; Bluesky and the AT Protocol: Usable Decentralized Social Media
  2. Benjamin Schichtholz et al.; ReP2P Matrix: Decentralized Relays to Improve Reliability and Performance of Peer-to-Peer Matrix
  3. Tianyuan Yu et al.; On Empowering End Users in Future Networking

14:00 Session 3: Technologies for Decentralization

  1. Huaixi Lu et al.; Atomicity and Abstraction for Multi-Blockchain Interactions
  2. David Guzman et al.; Communication Cost for Permissionless Distributed Consensus at Internet Scale
  3. Yekta Kocaogullar et al.; Towards a Decentralized Internet Namespace

15:00 Session 4: Decentralization of the Internet – Quo Vadis?

  • Organizers: Lixia Zhang & Dirk Kutscher
  • Interactive panel discussion with Cory Doctorow, Michael Karanicolas, and paper authors

Written by dkutscher

October 30th, 2024 at 7:25 am

IRTF DINRG Meeting at IETF-121


The IRTF DINRG Meeting at IETF-121 takes place on 2024-11-06 from 13:00 to 14:30 UTC.

  1. DINRG Chairs’ Presentation: Status, Updates – Chairs (5 min)
  2. Distributing DDoS Analytics among ASes – Daniel Wagner (20 min)
  3. The Role of DNS names in Internet Decentralization – Tianyuan Yu (20 min)
  4. Taxonomy of Internet Consolidation & Effects of Internet Consolidation – Marc McFadden (15 min)
  5. DINRG – Next Steps – Chairs & Panelists (30 min)
  6. Wrap-up & Buffer – Chairs (0 min)

Documents and Links to Resources

  1. United We Stand: Collaborative Detection and Mitigation of Amplification DDoS Attacks at Scale
  2. https://datatracker.ietf.org/doc/draft-mcfadden-consolidation-taxonomy/
  3. https://datatracker.ietf.org/doc/draft-mcfadden-cnsldtn-effects/

Notes

Please remember that all sessions are being recorded.

Written by dkutscher

October 30th, 2024 at 7:16 am

Posted in Events, IRTF


IRTF ICNRG Meeting at IETF-121


The ICNRG Meeting at IETF-121 takes place on 2024-11-05 from 13:00 to 14:30 UTC.

ICNRG Agenda

  1. ICNRG Chairs’ Presentation: Status, Updates – Chairs (5 min)
  2. FLIC Update – Marc Mosko (15 min)
  3. CCNx Content Object Chunking – Marc Mosko (15 min)
  4. Reflexive Forwarding Update – Hitoshi Asaeda (20 min)
  5. ICN Challenges for Metaverse Platform Interoperability – Jungha Hong (15 min)
  6. Distributed Micro Service Communication – Aijun Wang (15 min)
  7. Buffer, Wrap Up and Next Steps – Chairs (5 min)

Please remember that all sessions are being recorded.

Material

  1. https://datatracker.ietf.org/doc/draft-irtf-icnrg-flic/
  2. https://datatracker.ietf.org/doc/draft-mosko-icnrg-ccnxchunking/
  3. https://github.com/mmosko/ccnpy
  4. https://datatracker.ietf.org/doc/draft-irtf-icnrg-reflexive-forwarding/
  5. https://datatracker.ietf.org/doc/draft-hong-icn-metaverse-interoperability/
  6. https://datatracker.ietf.org/doc/draft-li-icnrg-damc/

Written by dkutscher

October 30th, 2024 at 7:13 am

Posted in Events, IRTF


New Internet Draft draft-irtf-icnrg-reflexive-forwarding-00


We updated our Internet Draft draft-irtf-icnrg-reflexive-forwarding-00 on Reflexive Forwarding for CCNx and NDN Protocols.

Current Information-Centric Networking protocols such as CCNx and NDN have a wide range of useful applications in content retrieval and other scenarios that depend only on a robust two-way exchange in the form of a request and response (represented by an Interest-Data exchange in the case of the two protocols noted above). A number of important applications, however, require placing large amounts of data in the Interest message and/or more than one two-way handshake. While these can be accomplished using independent Interest-Data exchanges by reversing the roles of consumer and producer, such approaches can be both clumsy for applications and problematic from a state management, congestion control, or security standpoint. This specification proposes a Reflexive Forwarding extension to the CCNx and NDN protocol architectures that eliminates the problems inherent in using independent Interest-Data exchanges for such applications. It updates RFC 8569 and RFC 8609.

The recent update includes a generalization of the main protocol specification, so that Reflexive Forwarding can be used in both CCNx and NDN.
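To illustrate the mechanism, here is a minimal, non-normative sketch of the message flow: the consumer announces a temporary reflexive name prefix in its initial Interest, and the producer pulls the (potentially large) request parameters from the consumer under that prefix instead of receiving them inline. All class and field names are invented for this example, and the forwarder state that the draft specifies is elided; see the draft for the actual encodings and procedures.

```python
# Illustrative sketch only (invented names, forwarders elided): models the
# message flow of a reflexive Interest-Data exchange in the spirit of
# draft-irtf-icnrg-reflexive-forwarding.

from dataclasses import dataclass

@dataclass
class Interest:
    name: str
    reflexive_prefix: str | None = None  # temporary prefix announced by the consumer

@dataclass
class Data:
    name: str
    payload: bytes

class Consumer:
    """Keeps its request parameters local instead of inlining them in the Interest."""
    def __init__(self, params: bytes):
        self.params = params
        self.rnp = "/reflexive/af49"  # non-routable; forwarders would set up reverse-path state

    def initial_interest(self) -> Interest:
        return Interest(name="/service/compute", reflexive_prefix=self.rnp)

    def serve_reflexive(self, interest: Interest) -> Data:
        # Answers the producer's Interests arriving under the reflexive prefix.
        assert interest.name.startswith(self.rnp)
        return Data(name=interest.name, payload=self.params)

class Producer:
    def handle(self, interest: Interest, fetch) -> Data:
        # Instead of a huge initial Interest, pull the parameters reflexively.
        params = fetch(Interest(name=interest.reflexive_prefix + "/params")).payload
        return Data(name=interest.name, payload=b"result-for-" + params)

consumer = Consumer(params=b"large-request-parameters")
producer = Producer()
result = producer.handle(consumer.initial_interest(), fetch=consumer.serve_reflexive)
print(result.payload)  # b'result-for-large-request-parameters'
```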

Written by dkutscher

October 19th, 2024 at 7:52 am

Invited Talk at Airbus Workshop on Networking Systems


On October 10th, 2024, I was invited to give a talk at the 2nd Airbus Workshop on Networking Systems. The workshop largely discussed connected aircraft scenarios and technologies and featured talks on security and reliability, IoT sensor fusion, and future space and 6G network architectures.

My talk was on Connected Aircraft – Network Architectures and Technologies. I discussed relevant scenarios from my perspective, such as passenger services and new aircraft management applications. For the technology discussion, I focused on large-scale, low-latency multimedia communication over the expected heterogeneous and dynamic aircraft connectivity networks and discussed current and emerging technologies such as Media over QUIC (MoQ) and Information-Centric Networking (ICN).

I also introduced the recently established Low-Altitude Systems and Economy Research Institute at HKUST(GZ), a cross-disciplinary research institute for the low-altitude domain (which has similar, but not identical, requirements), and some of our recent projects, such as Named Data Microverse.

Written by dkutscher

October 19th, 2024 at 5:20 am

Dagstuhl Seminar on Greening Networking: Toward a Net Zero Internet



We (Alexander Clemm, Michael Welzl, Cedric Westphal, Noa Zilberman, and I) organized a Dagstuhl seminar on Green Networking: Toward a Net Zero Internet.

Making Networks Greener

As climate change triggered by CO2 emissions dramatically impacts our environment and our everyday life, the Internet has proved a fertile ground for solutions, such as teleworking and teleconferencing that reduce travel emissions. At the same time, the Internet is itself a significant contributor to greenhouse gas emissions, e.g., through its own substantial power consumption. It is thus very important to make networks themselves "greener" and to devise less carbon-intensive solutions while continuing to meet increasing network traffic demands and service requirements.

Computer scientists and engineers from world-leading universities and international companies such as Ericsson, NEC, Netflix, Red Hat, and Telefonica came together for a seminar on Green Networking (Toward a Net Zero Internet) at Schloss Dagstuhl – Leibniz Center for Informatics from September 29th to October 2nd, 2024. Organized by Internet researchers from the Hong Kong University of Science and Technology (Guangzhou), the University of Oxford, the University of Oslo, and the University of California, Santa Cruz, the seminar set out to identify and prioritize the most impactful networking improvements for reducing carbon emissions, to define action items for a carbon-aware networking research agenda, and to foster and facilitate research collaboration that reduces carbon emissions and positively impacts climate change.

Interactions between the Power Grid, Larger Systems, and the Network

In addition to pure networking issues, the seminar also analyzed the impact of larger systems that are built with Internet technologies, such as AI, multimedia streaming, and mobile communication networks. For example, the seminar discussed energy proportionality in networked systems, which would allow systems to adapt their energy consumption to actual changes in utilization, so that savings can be achieved in idle times. Such behavior would require better adaptiveness of applications and network protocols to cost information (such as carbon impact).
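As a toy illustration of what energy proportionality means (all numbers invented), consider a linear power model: the closer a device's idle power floor is to zero, the more of its consumption actually tracks utilization, and the more can be saved in idle times.

```python
# Toy model of energy proportionality (illustrative numbers only): power draw
# scales linearly between an idle floor and a peak.

def power_watts(utilization: float, p_idle: float, p_max: float) -> float:
    """Linear power model: P(u) = P_idle + u * (P_max - P_idle), with 0 <= u <= 1."""
    return p_idle + utilization * (p_max - p_idle)

# Legacy network gear often draws almost as much power when idle as when busy...
legacy = [power_watts(u, p_idle=90.0, p_max=100.0) for u in (0.0, 0.5, 1.0)]
# ...whereas an energy-proportional design would save most of that at idle.
proportional = [power_watts(u, p_idle=10.0, p_max=100.0) for u in (0.0, 0.5, 1.0)]

print(legacy)        # [90.0, 95.0, 100.0]
print(proportional)  # [10.0, 55.0, 100.0]
```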

Moreover, networked systems can interact with the power grid in different ways, for example by adapting energy consumption to the current availability and cost of renewable energy. This can enable joint planning of the grid and networks, networked systems, and clouds, achieving maximum efficiency and savings.

The seminar attendees work with international research and standardization organizations such as the Internet Engineering Task Force (IETF) and ETSI, and the seminar is expected to contribute to future research and standardization agendas in such organizations that help bring the Internet to Net Zero emissions.

Organizers

  • Alexander Clemm (Los Gatos, US)
  • Dirk Kutscher (HKUST - Guangzhou, CN)
  • Michael Welzl (University of Oslo, NO)
  • Cedric Westphal (University of California, Santa Cruz, US)
  • Noa Zilberman (University of Oxford, GB)


Written by dkutscher

October 2nd, 2024 at 11:30 am

Networked Metaverse Systems


The term ‘Metaverse’ often denotes a wide range of existing and fictional applications. Nevertheless, there are actual systems today that can be studied and analyzed. Whereas a considerable body of work has been published on applications and application ideas, however, there is less work on the technical implementation of such systems, especially from a networked systems perspective.

In a recently published open access journal article, we share some insights into the technical design of Metaverse systems, their key technologies, and their shortcomings, predominantly from a networked systems perspective. For the scope of this study, we define the ‘Metaverse’ as follows: the ‘Metaverse’ encompasses various current and emerging technologies, and the term is used to describe different applications, ranging from Augmented Reality (AR), Virtual Reality (VR), and Extended Reality (XR) to a new form of the Internet or Web. A key feature distinguishing the Metaverse from simple AR/VR is its inherently collaborative and shared nature, enabling interaction and collaboration among users in a virtual environment.

Building on Existing Platforms and Network Stacks

Most current Metaverse systems and designs are built on existing technologies and networks. For example, massively multiplayer online games such as Fortnite use a generalized client-server model. In this model, the server authoritatively manages the game state, while the client maintains a local subset of this state and can predict game flow by executing the same game code as the server on approximately the same data. Servers send information about the game world to clients by replicating relevant actors and their properties. Commercial social VR platforms such as Horizon Worlds and AltspaceVR use HTTPS to report client-side information and synchronize in-game clocks across users.
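A rough sketch of the server-authoritative replication model described above (all names and the reconciliation logic are invented for illustration; real engines are far more elaborate):

```python
# Minimal sketch of server-authoritative state replication with client-side
# prediction. The state, inputs, and reconciliation rule are invented.

class Server:
    def __init__(self):
        self.state = {"player_x": 0}      # authoritative game state

    def apply_input(self, dx: int) -> dict:
        self.state["player_x"] += dx      # the server owns the truth
        return dict(self.state)           # replicate relevant state to clients

class Client:
    def __init__(self):
        self.predicted = {"player_x": 0}  # local subset of the game state

    def send_input(self, server: Server, dx: int):
        self.predicted["player_x"] += dx  # predict by running the same game code
        authoritative = server.apply_input(dx)
        if authoritative != self.predicted:
            self.predicted = authoritative  # reconcile on mismatch

client, server = Client(), Server()
client.send_input(server, dx=3)
print(client.predicted)  # {'player_x': 3}
```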

Mozilla Hubs, built with A-Frame (a web framework for building virtual reality experiences), uses WebRTC communication with a Selective Forwarding Unit (SFU). The SFU receives multiple audio and video data streams from its peers, then determines and forwards relevant data streams to connected peers. Blockchain or Non-Fungible Token (NFT)-based online games, such as Decentraland, run exclusively on the client side but allow for various data flow models, ranging from local effects and traditional client-server architectures to peer-to-peer (P2P) interactions based on state channels; Upland is built on EOSIO, an open-source blockchain protocol for scalable decentralized applications, and transports data through HTTPS. Connections between peers in Upland are established using TLS or VPN tunnels.
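The SFU pattern mentioned above is simple at its core: receive each peer's streams, decide which are currently relevant, and fan those out to every other peer, rather than mixing media or fully meshing all peers. A minimal sketch (the selection policy here is invented for illustration):

```python
# Sketch of the Selective Forwarding Unit (SFU) pattern: receive each peer's
# media, select the relevant streams, and forward them to all other peers.

class SFU:
    def __init__(self):
        self.peers: dict[str, list] = {}   # peer id -> inbox of forwarded packets

    def join(self, peer_id: str):
        self.peers[peer_id] = []

    def on_media(self, sender: str, packet: bytes, priority: float):
        # Determine relevance (here: a simple priority threshold), then fan
        # the stream out to all connected peers except the sender.
        if priority < 0.1:
            return
        for peer_id, inbox in self.peers.items():
            if peer_id != sender:
                inbox.append((sender, packet))

sfu = SFU()
for p in ("alice", "bob", "carol"):
    sfu.join(p)
sfu.on_media("alice", b"opus-frame", priority=0.9)
print([k for k, v in sfu.peers.items() if v])  # ['bob', 'carol']
```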

Many studies have focused on improving various aspects of Metaverse systems. For example, EdgeXAR is a mobile AR framework using edge offloading to enable lightweight tracking with six degrees of freedom (DOF) while reducing offloading delay from the user’s view; SORAS is an optimal resource allocation scheme for the edge-enabled Metaverse, using stochastic integer programming to minimize the total network cost; Ibrahim et al. explore the issue of partial computation offloading for multiple subtasks in an in-network computing environment, aiming to minimize energy consumption and delay. However, these ideas for offloading computation and rendering tasks to edge platforms often conflict with the existing end-to-end transport protocols and overlay deployment models. Recently, a Deep Reinforcement Learning (DRL)-based multipath network orchestration framework for remote healthcare services was presented, automating subflow management to handle multipath networks. However, proposals for scalable multi-party communication would require interdomain multicast services, which are unavailable on today’s Internet.

Disconnect Between High-Level Concepts and Actual Systems

In practice, there is a significant disconnect between high-level Metaverse concepts, ideas for technical improvements, and systems that are actually developed and partially deployed. A 2022 ACM IMC paper titled "Are we ready for metaverse?: a measurement study of social virtual reality platforms" analyzes the performance of various social VR systems, pinpointing numerous issues related to performance, communication overhead, and scalability. These issues arise primarily because current systems leverage existing platforms, protocols, and system architectures, and thus cannot tap into any of the proposed architectural and technical enhancements, such as scalable multi-party communication or the offloading of computation and rendering tasks.

Rather than merely layering ‘the Metaverse’ on top of legacy and not always ideal foundations, we consider the Metaverse a driver for future network and web applications and actively develop new designs to that end. In our article, we take a comprehensive systems approach and technically describe current Metaverse systems, focusing on their networking aspects. We document the requirements and challenges of Metaverse systems and propose a principled approach to system design based on a thorough understanding of the needs of Metaverse systems, the current constraints and limitations, and the potential solutions offered by Internet technologies.

Article Overview

  1. We present a technical description of the ‘Metaverse’ based on existing and emerging systems, including a discussion of its fundamental properties, applications, and architectural models.
  2. We comprehensively study relevant enabling technologies for Metaverse systems, including HCI/XR technologies, networking, communications, media encoding, simulation, real-time rendering and AI. We also discuss current Metaverse system architectures and the integration of these technologies into actual applications.
  3. We conduct a detailed requirements analysis for constructing Metaverse systems. We analyze application-specific requirements and identify existing gaps in four key aspects: communication performance, mobility, large-scale operation, and end-system architecture. For each area, we propose candidate technologies to address these gaps.
  4. We propose a research agenda for future Metaverse systems, based on our gap analysis and candidate technologies discussion. We re-assess the fundamental goals and requirements, without necessarily being constrained by existing system architectures and protocols. Based on a comprehensive understanding of what Metaverse systems need and what end-systems, devices, networks and communication services can theoretically provide, we propose specific design ideas and future research directions to realize Metaverse systems that can meet the expectations often articulated in the literature.


Written by dkutscher

September 8th, 2024 at 7:47 am

Posted in Publications


Affordable HPC: Leveraging Small Clusters for Big Data and Graph Computing


In our paper at PCDS-2024, we explore strategies for academic researchers to optimize computational resources within limited budgets, focusing on building small, efficient computing clusters. We analyze the comparative costs of purchasing versus renting servers, guided by market research and economic theories on tiered pricing. The paper offers detailed insights into the selection and assembly of hardware components such as CPUs, GPUs, and motherboards, tailored to specific research needs. It introduces methods to mitigate the performance issues caused by PCIe switch bandwidth limitations in order to enhance GPU task scheduling. Furthermore, a Graph Neural Network (GNN) framework is proposed to analyze and optimize parallelism in computing networks.

Growing Resource Demands for Large-Scale Machine Learning

Large machine learning (ML) models, such as large language models (LLMs), are becoming increasingly powerful and gradually accessible to end users. However, the growth in the capabilities of these models has led to memory and inference computation demands that exceed those of personal computers and individual servers. To enable users, research teams, and others to utilize and experiment with these models, a distributed architecture is essential.

In recent years, scientific research has shifted from a "wisdom paradigm" to a "resource paradigm." As the number of researchers and the depth of scientific exploration increase, a significant portion of research computing tasks has moved to servers. This shift has been facilitated by the development of computing frameworks and the widespread use of computers, leading to an increased demand for computer procurement.

Despite the abundance of online tutorials for assembling personal computers, information on the establishment of large clusters is relatively scarce. Large Internet companies and multinational corporations usually employ professional architects and engineers or work closely with vendors to optimize their cluster performance. However, researchers often do not have access to these technical details and must rely on packaged solutions from service providers to build small clusters.

Towards Affordable HPC

In our paper "Affordable HPC: Leveraging Small Clusters for Big Data and Graph Computing", we aim to bridge this gap by providing opportunities for researchers with limited funds to build small clusters from scratch. We compiled the necessary technical details and guidelines to enable researchers to assemble clusters independently. In addition, we propose a method to mitigate the performance degradation caused by the bandwidth limitations of PCIe switches, which can help researchers prioritize GPU training tasks effectively.

The paper discusses:

  1. How to build cost-effective clusters: We provide a comprehensive guide for researchers with limited funds, helping them to independently build small clusters and contribute to the development of large models.
  2. Performance Optimization: We propose a method to address the performance degradation caused by PCIe switch bandwidth limitations. This method allows researchers to prioritize GPU training tasks effectively, thereby improving the overall cluster performance.
  3. GNN for Network and Neural network parallelism: We propose a GNN (Graph Neural Network) framework that combines neural networks with parallel network flows in distributed systems. Our aim is to integrate different types of data flows, communication patterns, and computational tasks, thereby providing a novel perspective for evaluating the performance of distributed systems.


Written by dkutscher

September 2nd, 2024 at 5:25 am

Next Steps for Content Syndication


This is a follow-up on Mark Nottingham's blog post on What RSS Needs, which I read with some interest.

RSS and Atom enable non-mediated feeds of website updates; such feeds are very useful and were once quite popular, until the Web took a different direction. Mark discusses some areas that should be addressed to revitalize such feeds, based on what we know today: Community, User Agency, Interoperability Tests, Best Practices for Feeds, Browser Integration, Authenticated Feeds, and Publisher Engagement. Check out his blog post for details.

I would like to offer some additional thoughts:

Features that should be maintained from RSS/Atom

Receiver-driven operation

The user device ("client") should generally be in control and fetch updates based on its own schedule and requirements. This fits well with typical web interactions, i.e., HTTP GET. See the section "Protocol independence" below for additional ideas.
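A minimal sketch of such receiver-driven polling, using HTTP conditional GET so that unchanged feeds cost only a 304 response (the feed URL is a placeholder):

```python
# Receiver-driven feed polling sketch: the client fetches on its own schedule
# and sends cache validators so unchanged feeds are not re-transferred.

import urllib.error
import urllib.request

def poll(url: str, etag: str | None = None, last_modified: str | None = None):
    req = urllib.request.Request(url)
    if etag:
        req.add_header("If-None-Match", etag)
    if last_modified:
        req.add_header("If-Modified-Since", last_modified)
    try:
        with urllib.request.urlopen(req) as resp:
            # 200: new content; remember the validators for the next poll.
            return resp.read(), resp.headers.get("ETag"), resp.headers.get("Last-Modified")
    except urllib.error.HTTPError as e:
        if e.code == 304:
            return None, etag, last_modified  # feed unchanged, nothing fetched
        raise

# body, etag, last_mod = poll("https://example.net/feed.xml")  # placeholder URL
```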

Aggregation

Aggregation, i.e., combining different input feeds to form a new feed, is a feature of both RSS and Atom. It should obviously be maintained. It may need some additional security (authentication) mechanisms – see below under "Data-oriented security".

User-controlled interaction with feed content

Mark mentioned some features such as feedback from feed readers to content providers, e.g., using so-called "privacy-preserving measurement". This should be made clearly optional, and users should be offered an opt-in, i.e., it should not be enabled by default.

New Ideas

Learn from ActivityPub

In general, it would be good to study ActivityPub and see what features and design elements would be useful. ActivityPub is a decentralized social networking protocol based on the ActivityStreams JSON data format. It does a lot more than one would need for syndication (notably, it is designed for bi-directional updates), but some of its properties are, in my opinion, useful for syndication, too.

Modularization

In RSS, a feed is typically a single XML document that contains a channel with items for the individual updates. When a feed is updated, the entire document is regenerated, and the receiver then has to filter out updates that it has received before. Atom has a feed-paging concept that allows clients to navigate through paginated feed entries, but each page is still a standalone document.

To enable better sharing, re-use of feed updates in different contexts, and more scalable distribution, feed updates could be given a more modular structure, similar to what ActivityPub does.
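As a sketch of what such a modular update could look like (a suggestion, not an existing specification), each update could be a self-contained, individually addressable object in the style of ActivityStreams, referencing the feed it belongs to:

```python
# Sketch of a modular, self-contained feed update in the style of an
# ActivityStreams object: each update is an addressable document of its own
# rather than a fragment of one big regenerated XML file. All URLs invented.

import json

update = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Create",
    "id": "https://example.net/feed/items/2024-08-25-syndication",
    "actor": "https://example.net/feed",
    "published": "2024-08-25T15:24:00Z",
    "object": {
        "type": "Article",
        "name": "Next Steps for Content Syndication",
        "url": "https://example.net/posts/syndication-next-steps",
    },
}

print(json.dumps(update, indent=2))
```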

Protocol independence

RSS and Atom are technically not bound to HTTP, although that is, of course, the dominant way of using them. However, it is theoretically possible to disseminate feed updates through other means, e.g., e-mail, and I think a future syndication system should support this as well.

More specifically, push-based operation should be enabled (beyond e-mail). For example, it should be possible to receive feed updates via broadcast/multicast channels.

Another example may be publish/subscribe-based updates. There is a W3C Recommendation called WebSub that specifies an HTTP-based pub/sub framework for feed updates. I am suggesting this as an example, but not necessarily as the only way to do pub/sub and pushed updates.
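For illustration, a WebSub subscription is just a form-encoded POST to the feed's hub, which then verifies the callback and pushes future updates to it (all URLs below are placeholders):

```python
# Sketch of a WebSub subscription request per the W3C Recommendation: the
# subscriber POSTs hub.mode/hub.topic/hub.callback to the feed's hub.

import urllib.parse
import urllib.request

def subscribe(hub: str, topic: str, callback: str) -> int:
    form = urllib.parse.urlencode({
        "hub.mode": "subscribe",
        "hub.topic": topic,        # the feed URL we want pushed updates for
        "hub.callback": callback,  # our endpoint; the hub verifies it via GET
    }).encode()
    req = urllib.request.Request(hub, data=form, method="POST")
    with urllib.request.urlopen(req) as resp:
        return resp.status         # 202 Accepted: verification is pending

# subscribe("https://hub.example.net/", "https://example.net/feed.xml",
#           "https://subscriber.example.org/websub-callback")
```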

Moreover, it should be possible to use the syndication framework in "local-first" environments, i.e., with non-public-facing servers.

Data-oriented security

These use cases have some security implications. It must be possible to authenticate feed updates independently of the communication channel.
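A minimal sketch of such channel-independent, data-oriented authentication, assuming Ed25519 signatures via the Python cryptography package (key distribution and wire encoding are deliberately omitted):

```python
# Sign each feed update with the publisher's key so receivers can verify it
# no matter how it arrived (HTTP, e-mail, multicast, a third-party aggregator).

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

publisher_key = Ed25519PrivateKey.generate()
update = b'{"id": "https://example.net/feed/items/42", "name": "..."}'

signature = publisher_key.sign(update)  # carried alongside the update

# Any receiver holding the publisher's public key can verify the update,
# regardless of which channel or aggregator delivered it; verify() raises
# InvalidSignature if the update was tampered with.
publisher_key.public_key().verify(signature, update)
```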

Written by dkutscher

August 25th, 2024 at 3:24 pm

Posted in Posts


Nordwest-IX Internet Exchange Point


DE-CIX and EWE TEL opened the new Nordwest-IX Internet exchange point in Oldenburg, Germany on 2024-08-15.

DE-CIX, the largest Internet Exchange in Europe and the second-largest in the world, now has eight locations in Germany: Oldenburg, Berlin, Düsseldorf, Frankfurt, Hamburg, Leipzig, Munich, and the Ruhr region. DE-CIX has recently begun to decentralize its German IXPs by opening new locations in addition to its main site in Frankfurt.

Can IXPs help with Internet Decentralization?

In the IRTF Research Group on the Decentralization of the Internet (DINRG), we are investigating root causes for and potential counter-measures against Internet centralization. There are two aspects of centralization/decentralization with respect to IXPs:

  1. Internet peering happens mostly at public IXPs, i.e., at locally centralized exchange points in an otherwise logically decentralized network of Autonomous Systems. Big application service providers ("hyperscalers") are also engaging in so-called "Direct Peering" (or "Private Peering"), where they connect their networks directly to, typically, Internet Service Providers that provide Internet access and can benefit from a direct connection to dominant content/service providers. Often, it is the hyperscaler who benefits most in terms of cost savings. Decentralizing IXPs can provide incentives for such networks to connect at IXPs instead of doing direct peering, which is often seen as beneficial because it increases connectivity options and reduces cost and latency.
  2. IP connectivity alone is not a sufficient condition for low latency and decentralization, though, as most hyperscaler applications rely on some form of CDN overlay network. Even with potentially local IP forwarding, CDN proxies may be hosted at central locations. To counter that, it is important to create co-location and local edge service hosting opportunities at or close to IXPs, which can be a business opportunity for the connected ISPs, such as EWE TEL for Nordwest-IX.

The Internet is evolving, and new technologies might change the role of overlays in the future. For example, technologies such as Media-over-QUIC (MoQ) might lead to massive caching and replication overlay structures that may or may not be shared across applications and hyperscalers. IXPs and co-location data centers can be natural places for operating MoQ relays.

Written by dkutscher

August 15th, 2024 at 6:09 pm

Posted in Posts
