Dirk Kutscher

Personal web page

Archive for the ‘IETF’ Category

Towards a Unified Transport Protocol for In-Network Computing in Support of RPC-based Applications

without comments

The emerging term In-Network Computing (INC) [inc] refers in particular to applying on-path programmable networking devices (e.g., switches and routers between clients and servers) as accelerators or function offloaders to boost throughput, reduce server load, or improve latency, typically in a well-controlled data center network environment.

Some INC implementations evolved from programmable data plane systems and align with the broader trend of network programmability. In recent years, INC has been shown to support many promising applications (e.g., caching, aggregation, and agreement). For example, in distributed machine learning (DML), training nodes produce data (gradients) that needs to be aggregated or reduced, and the result can be distributed to one or multiple consumers. As another example, the NetClone system [netclone] uses in-network forwarders to replicate RPC invocation messages and to perform more informed forwarding based on observed latencies, thereby accelerating RPC communication.

While it is possible to achieve this kind of operation purely with end-to-end communication between worker nodes, performance can be improved dramatically by offloading both the operation processing and the data dissemination to nodes in the network. These in-network processors are often conceived as semi-transparent, performance-enhancing on-path elements, i.e., they are not the actual endpoints of transport protocol sessions; they intercept packets carrying application data and potentially generate new data that they then have to transmit.
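To make the offloaded operation concrete, here is a minimal sketch (in Python, not taken from the draft) of the element-wise gradient aggregation that such an in-network node could apply to fragments received from several workers before forwarding a single combined result downstream; the function and variable names are illustrative assumptions.

    from typing import Dict, List

    def aggregate_fragments(fragments: Dict[int, List[float]]) -> List[float]:
        """Element-wise sum of gradient fragments, keyed by worker id."""
        vectors = list(fragments.values())
        length = len(vectors[0])
        return [sum(v[i] for v in vectors) for i in range(length)]

    # Three workers contribute gradients for the same chunk; an in-network
    # aggregator forwards one combined result instead of three separate ones.
    result = aggregate_fragments({
        1: [0.1, 0.2, 0.3],
        2: [0.4, 0.5, 0.6],
        3: [0.7, 0.8, 0.9],
    })
    print(result)  # approximately [1.2, 1.5, 1.8]

In a real deployment, such a reduction would run on a programmable switch or smart NIC rather than in Python, which is exactly why the transport-layer questions below arise.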

In our Internet Draft draft-song-inc-transport-protocol-req-01.txt, we discuss this problem and formulate requirements for the design of future transport protocols in this space.


Written by dkutscher

January 25th, 2024 at 7:02 am


DINRG Meeting at IETF-118

without comments

We have posted the agenda for our DINRG meeting at IETF-118:



DINRG Meeting at IETF-118 – 2023-11-06, 08:30 to 10:30 UTC

Written by dkutscher

November 1st, 2023 at 9:21 am

Posted in Events,IETF,IRTF


Collective Communication: Better Network Abstractions for AI

without comments

We have submitted two new Internet Drafts on Collective Communication:

  1. Kehan Yao, Xu Shiping, Yizhou Li, Hongyi Huang, Dirk Kutscher; Collective Communication Optimization: Problem Statement and Use cases; Internet Draft draft-yao-tsvwg-cco-problem-statement-and-usecases-00; work in progress; October 2023

  2. Kehan Yao, Xu Shiping, Yizhou Li, Hongyi Huang, Dirk Kutscher; Collective Communication Optimization: Requirement and Analysis; Internet Draft draft-yao-tsvwg-cco-requirement-and-analysis-00; work in progress; October 2023

Collective Communication refers to communication between a group of processes in distributed computing contexts, involving interaction types such as broadcast, reduce, and all-reduce. This data-oriented communication model is employed by distributed machine learning and other data processing systems, such as stream processing. Current Internet network and transport protocols (and the corresponding transport layer security) make it difficult to support these interactions in the network, e.g., for aggregating data on topologically optimal nodes for performance enhancements. These two drafts discuss use cases, problems, and initial ideas for requirements for future system and protocol design for Collective Communication. They will be discussed at IETF-118.
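For readers less familiar with these primitives, here is a minimal Python sketch (purely illustrative, not from the drafts) of the semantics of broadcast, reduce, and all-reduce over in-memory values; production systems implement them over MPI, NCCL, or similar collective libraries.

    from typing import List

    def broadcast(root_value: float, num_ranks: int) -> List[float]:
        """Every rank receives the root's value."""
        return [root_value] * num_ranks

    def reduce_sum(values: List[float]) -> float:
        """All ranks' values are combined (here: summed) at a single root."""
        return sum(values)

    def allreduce_sum(values: List[float]) -> List[float]:
        """Like reduce, but every rank receives the combined result."""
        total = sum(values)
        return [total] * len(values)

    print(broadcast(3.0, 4))            # [3.0, 3.0, 3.0, 3.0]
    print(reduce_sum([1, 2, 3, 4]))     # 10
    print(allreduce_sum([1, 2, 3, 4]))  # [10, 10, 10, 10]

The drafts ask how such group interactions can be supported in the network itself, e.g., by performing the reduction on topologically well-placed nodes, without breaking transport and security semantics.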

Written by dkutscher

October 30th, 2023 at 8:03 am

Platforms, Economics, Minimal Global Broadcast

without comments

Decentralization of the Internet Research Group at IETF-117

The Decentralization of the Internet Research Group (DINRG) of the Internet Research Task Force (IRTF) met on 2023-07-27 at the 117th meeting of the Internet Engineering Task Force (IETF). DINRG aims to provide the research and engineering community with both an open forum to discuss Internet centralization phenomena and associated potential threats, and a platform to facilitate the coordination of efforts in identifying the causes of observed consolidation and possible mitigations.

For context, we recently published a workshop report that discusses some fundamental problems: ACM SIGCOMM CCR: Report of 2021 DINRG Workshop on Centralization in the Internet

The DINRG meeting at IETF-117 featured three highly interesting talks by Cory Doctorow, Volker Stocker & William Lehr, and Christian Tschudin that attracted considerable attention and led to lively discussions during and after the meeting. There is a full meeting recording on YouTube, and we have published meeting minutes. Special thanks to Ryo Yanagida, A.J. Stein, and Eve Schooler for taking notes at the meeting.

Cory Doctorow: Let The Platforms Burn: Bringing Back the Good Fire of the Old Internet

Cory Doctorow, a science fiction author, activist, and journalist, talked about a trend in platform evolution that he calls enshittification: platforms go through different phases, first growing their user bases quickly, as per platform economics and market domination strategies, and then locking users in through technical and economic barriers. In advertisement (and digital online market) platforms, the platform operator sits between the users and other companies (so-called "two-sided market" scenarios), where the user base, the obtained personal information, and behavioral surveillance results become assets to attract such companies.

For example, in order to make advertising more effective, social media platforms would increase control of users' timelines, i.e., the content that is presented to them, and make it harder for users to leave the platform. Overall, this results in a negative user experience. To increase advertising revenue, platforms would then sell attention more directly, i.e., exploit their position in the advertisement market. See Cory's posting on enshittification for details.

This process, and the difficulties in effectively controlling and regulating platform companies, has led to a permanent crisis that Cory compares to a fire hazard situation. Platforms were rocked by scandals: private data theft, accidental leaks, intended data sharing with other institutions, etc.

While the computer and networking world had seen platforms (operating systems, PC companies, online services) constantly emerge and vanish before, the current concentrated tech market makes it impossible to let harmful (or not very user-friendly) platforms dissolve. This is due to network effects (Metcalfe's law) and switching costs, for example when trying to leave a dominant social media platform and thereby losing connections to friends. This monopoly situation is enabled by a legal environment with ineffective antitrust laws, which has allowed dominating platforms to constantly acquire competing companies and potentially disruptive businesses.

With new laws for content moderation and censorship, platforms get even more control over their users (in the name of preventing harassment), without making it any easier to leave platforms. In his article (and podcast episode) called "Let the Platforms Burn", Cory concluded:

Platforms collapse "slowly, then all at once." The only way to prevent sudden platform collapse syndrome is to block interoperability so users can't escape the harms of your walled garden without giving up the benefits they give to each other.

We should stop trying to make the platforms good. We should make them gone. We should restore the "good fire" that ended with the growth of financialized Big Tech empires. We should aim for soft landings for users, and stop pretending that there's any safe way to live in the fire zone.

We should let the platforms burn.

With respect to the (de-)centralization discussion in DINRG and the Internet community, this raises some important questions:

  • What is the role of open interfaces, standards, etc. today in reality? Are we still using them to build interoperable, possibly federated systems?
  • How should technology development, standards setting, and regulation evolve to effectively enable user choice (migration, platform selection)?


There was a question whether the real issue is that platforms have obtained a remarkable grip by buying each other and thereby effectively buying the users, i.e., whether the primary concern is the size these platforms reach with this method or the method itself. Cory replied that size certainly promotes distortions. Scale is a problem for two reasons. First, the contract enforcement function suffers: when the referee is less powerful than the team, teams can cheat. Second, even if we stipulated that companies are well run by smart people, they all make errors, and at that scale the mistakes are much more consequential.

Another question was who is willing to implement interoperability standards and how companies can be convinced to do that. Cory talked about companies' motivations: companies want walled gardens, or APIs whose terms work to their advantage rather than disadvantage. What they really seek, rather than competitive interoperability, are legal remedies against those who reverse engineer their products to enter the market competitively. If a mandate and permission for interoperators could be restored, that would help to avoid unquantifiable risk.

Some of these strategies are discussed in Cory's upcoming book "How to Seize the Means of Computation".

Volker Stocker and William Lehr: Ecosystem Evolution and Digital Infrastructure Policy Challenges: Insights & Reflections from an Economics Perspective

Volker Stocker of the Weizenbaum Institute for the Networked Society and William Lehr of the Advanced Networking Architecture Group in CSAIL at MIT presented their research on ecosystem evolution and policy challenges from an economics perspective. Volker is an economist with broad experience in interdisciplinary research, and William is a telecommunications and Internet economist and consultant.

Volker talked about the convergence of digital and non-digital worlds and mentioned a few trends that needed attention:

  • The shift to the edge and the localization of traffic.
  • Ownership and management have shifted in the Internet ecosystem: sometimes hyper-giant content providers with proprietary networks, sometimes edge clouds or roving resources.
  • Potential consequences: value chain constellations are more complex, diverse, and dynamic, resulting in changing ownership and governance structures, industry structures, as well as competitive and innovation dynamics.

Volker made three points in his reflections on ecosystem evolution:

  1. Essential digital infrastructure is about more than connectivity, i.e., not just connectivity providers such as IPX and ISPs.
  2. The majority of the requisite investment will be private! E.g., access ISPs, CAPs, CDNs, upstream ISPs, and end-users are all investing.
  3. More and new forms of resource sharing will be needed. More network sharing agreements: active & passive sharing arrangements and optimal models are evolving.

William highlighted that the legacy Internet is not the Internet of today, and the economics of yesterday are not those of today. One of the questions is how to restore meaningful competition.

He mentioned the following challenges and paths forward:

  1. Multidisciplinary Engagement & Feedback
  2. Asymmetric Info & Measurements: Metrics and data (and their provenance)
  3. Capacity to Detect and Act


There was a discussion about how the private sector is expected to profit from the infrastructure development needed by society (assuming investments from the private sector). William replied that governments built or subsidized most infrastructure in most places, with small investments needed initially. Some say significant investment should come from the utilities, which we should not dismiss, but we will likely need a strong argument on how to get there. Either we say there is a lot of money coming from the public sector (for example, through taxes), or we have to find a way to manage private actors. Thus, policy issues are important. Some of these questions are discussed in William's paper on "Getting to the Broadband Future Efficiently with BEAD funding".

Another question alluded to policy lagging behind technical development, i.e., the mismatch between the speed of innovation and the speed of regulation (which is really hard at the national and international levels). William said that the best hope is standards and architectures that provide options, and he mentioned the importance of open source software.

Christian Tschudin: Minimal Global Broadcast

Christian Tschudin of the University of Basel presented a research idea called "Minimal Global Broadcast" (abstract). Christian is a computer science professor with a track record of research in Information-Centric Networking, distributed computing, and decentralized systems.

Christian started out from the observation that contacting peers in a decentralized environment is challenging. The key question is: how do you learn about a peer's current coordinates and their preferences? The platforms themselves often offer directories, but these are logically centralized rendezvous servers with a partial view and require trust in these platforms. Instead of conceptualizing an uber directory service, Christian proposed a global information dissemination system that focuses on the data, asserting an allowance of "200 bytes of novelty per month and citizen".

This global broadcast channel can (and should) be implemented in many ways, ranging from sneakernets to shortwave communication and including Internet-based online services. Christian explained how such a service could be used to facilitate user migration and user discovery on their currently preferred platform(s).
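As a purely illustrative sketch (not part of Christian's proposal), the following Python snippet shows how a fixed per-announcement byte budget in the spirit of the 200-byte allowance might be enforced when encoding a small reachability notice; the record format, field names, and identifiers are all assumptions.

    import json

    MAX_NOVELTY_BYTES = 200  # assumed monthly allowance per citizen

    def encode_announcement(author_id: str, statement: str) -> bytes:
        """Encode a tiny announcement and reject it if it exceeds the budget."""
        record = json.dumps({"author": author_id, "msg": statement},
                            separators=(",", ":")).encode("utf-8")
        if len(record) > MAX_NOVELTY_BYTES:
            raise ValueError(f"announcement is {len(record)} bytes, "
                             f"over the {MAX_NOVELTY_BYTES}-byte budget")
        return record

    # Hypothetical example: announcing a new point of contact.
    print(encode_announcement("alice#4711",
                              "now reachable via example-platform, key=abc123"))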


There were some questions about trust in user identities. Christian said that trust roots would be external to MGB and that there would be different levels of trust, e.g., for interpersonal relationships vs. business relationships.


Written by dkutscher

August 21st, 2023 at 5:40 pm

Posted in IETF,IRTF


IRTF Decentralization of the Internet Research Group at IETF-117

without comments

Recent years have witnessed the consolidation of Internet applications, services, and infrastructure. The Decentralization of the Internet Research Group (DINRG) aims to provide the research and engineering community with both an open forum to discuss Internet centralization phenomena and associated potential threats, and a platform to facilitate the coordination of efforts in identifying the causes of observed consolidation and the mitigation solutions.

Our upcoming DINRG meeting at IETF-117 will feature three talks – by Cory Doctorow, Volker Stocker & William Lehr, and Christian Tschudin.

  1. DINRG Chairs' Presentation: Status, Updates (Chairs, 5 min)
  2. Let The Platforms Burn: Bringing Back the Good Fire of the Old Internet (Cory Doctorow, 30 min)
  3. Ecosystem Evolution and Digital Infrastructure Policy Challenges: Insights & Reflections from an Economics Perspective (Volker Stocker & William Lehr, 20 min)
  4. Minimal Global Broadcast (MGB) (Christian Tschudin, 20 min)
  5. Wrap-up & Buffer (All, 15 min)



DINRG Meeting at IETF-117 – 2023-07-25, 20:00 to 21:30 UTC

IETF-117 Agenda

Written by dkutscher

July 17th, 2023 at 5:44 pm

Posted in Events,IETF,IRTF


Addressing in the Internet

without comments

There was a side meeting on Internet Addressing at IETF-112 this week, discussing potential gaps in Internet Addressing and potential use cases that would suggest new addressing structures.

Looking at the realities of the Internet today, I do not think that the actually relevant use cases and current issues in the Internet are served well by just a new addressing approach for the Internet Protocol. Instead, I believe that there needs to be an architectural discussion first, and addressing might eventually fall out as a result.

My slides for the panel discussion.

Written by dkutscher

November 11th, 2021 at 2:22 pm

Posted in IETF,Posts

Tagged with ,