Dirk Kutscher

Personal web page

Archive for the ‘IRTF’ tag



We have posted the agenda for our DINRG meeting at IETF-118:



DINRG Meeting at IETF-118 – 2023-11-06, 08:30 to 10:30 UTC

Written by dkutscher

November 1st, 2023 at 9:21 am

Posted in Events,IETF,IRTF




Written by dkutscher

October 30th, 2023 at 1:10 pm

Posted in Events,ICN,IRTF


Collective Communication: Better Network Abstractions for AI


We have submitted two new Internet Drafts on Collective Communication:

  1. Kehan Yao, Xu Shiping, Yizhou Li, Hongyi Huang, Dirk Kutscher; Collective Communication Optimization: Problem Statement and Use cases; Internet Draft draft-yao-tsvwg-cco-problem-statement-and-usecases-00; work in progress; October 2023

  2. Kehan Yao, Xu Shiping, Yizhou Li, Hongyi Huang, Dirk Kutscher; Collective Communication Optimization: Requirement and Analysis; Internet Draft draft-yao-tsvwg-cco-requirement-and-analysis-00; work in progress; October 2023

Collective Communication refers to communication between a group of processes in distributed computing contexts, for example through interaction types such as broadcast, reduce, and all-reduce. This data-oriented communication model is employed by distributed machine learning and other data processing systems, such as stream processing. Current Internet network and transport protocols (and corresponding transport layer security) make it difficult to support these interactions in the network, e.g., for aggregating data on topologically optimal nodes for performance enhancements. These two drafts discuss use cases, problems, and initial ideas for requirements for future system and protocol design for Collective Communication. They will be discussed at IETF-118.
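To make the interaction types concrete, here is a minimal Python sketch of a ring all-reduce, the collective most commonly used in distributed ML training. The function name and the list-based modeling of processes are illustrative only and are not taken from the drafts:

```python
from typing import List

def ring_allreduce(values: List[float]) -> List[float]:
    """Simulate an all-reduce (sum) over a ring of n processes.

    values[i] is the local value held by process i. In each of the
    n - 1 rounds, every process forwards its running partial sum to
    its ring successor, which adds its own original value. After the
    last round, every process holds the global sum.
    """
    n = len(values)
    acc = values[:]  # acc[i]: partial sum currently held by process i
    for _ in range(n - 1):
        acc = [acc[(i - 1) % n] + values[i] for i in range(n)]
    return acc
```

In a real deployment, each list update corresponds to one network transfer per process per round; this is exactly the kind of traffic that aggregating data on topologically optimal nodes could reduce.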

Written by dkutscher

October 30th, 2023 at 8:03 am

Platforms, Economics, Minimal Global Broadcast


Decentralization of the Internet Research Group at IETF-117

The Decentralization of the Internet Research Group (DINRG) of the Internet Research Task Force (IRTF) had a meeting on 2023-07-27 at the 117th meeting of the Internet Engineering Task Force (IETF). DINRG aims to provide for the research and engineering community both an open forum to discuss the Internet centralization phenomena and associated potential threats, and a platform to facilitate the coordination of efforts in identifying the causes of observed consolidations and the mitigation solutions.

For context, we recently published a workshop report that discusses some fundamental problems: ACM SIGCOMM CCR: Report of 2021 DINRG Workshop on Centralization in the Internet

The DINRG meeting at IETF-117 featured three highly interesting talks by Cory Doctorow, Volker Stocker & William Lehr, and Christian Tschudin that attracted considerable attention and led to lively discussions during and after the meeting. There is a full meeting recording on YouTube, and we have published meeting minutes. Special thanks to Ryo Yanagida, A.J. Stein, and Eve Schooler for taking notes at the meeting.

Cory Doctorow: Let The Platforms Burn: Bringing Back the Good Fire of the Old Internet

Cory Doctorow, a science fiction author, activist, and journalist, talked about a trend in platform evolution that he calls enshittification: platforms go through different phases, first growing their user bases quickly as per platform economics and market domination strategies, then locking in users through technical and economic barriers. In advertisement (and digital online market) platforms, the platform operator sits between the users and other companies (so-called "two-sided market" scenarios), where the user base, the obtained personal information, and behavioral surveillance results become assets to attract such companies.

For example, in order to make advertising more effective, social media platforms would increase control of users' timelines, i.e., the content that is presented to them, and make it harder for users to leave the platform. Overall, this results in a negative user experience. To increase advertisement revenue, platforms would then sell attention more directly, i.e., exploit their position in the advertisement market. See Cory's posting on enshittification for details.

This process, and the difficulties in effectively controlling and regulating platform companies, has led to a permanent crisis that Cory compares to a fire hazard situation. Platforms were rocked by scandals: private data theft, accidental leaks, intended data sharing with other institutions, etc.

While the computer and networking world has seen a constant emerging and vanishing of "platforms" (operating systems, PC companies, online services) before, the current concentrated tech market makes it impossible to let harmful (or not very user-friendly) platforms dissolve. This is due to network effects (Metcalfe's law) and switching costs, for example when trying to leave a dominant social media platform and thereby losing connections to friends. This monopoly situation is enabled by a legal environment with ineffective antitrust laws, which has allowed dominating platforms to constantly acquire competing companies and potentially disruptive businesses.

With new laws for content moderation and censorship, platforms get even more control over their users (in the name of preventing harassment), without making it any easier to leave platforms. In his article (and podcast episode) called "Let the Platforms Burn", Cory concluded:

Platforms collapse "slowly, then all at once." The only way to prevent sudden platform collapse syndrome is to block interoperability so users can't escape the harms of your walled garden without giving up the benefits they give to each other.

We should stop trying to make the platforms good. We should make them gone. We should restore the "good fire" that ended with the growth of financialized Big Tech empires. We should aim for soft landings for users, and stop pretending that there's any safe way to life in the fire zone.

We should let the platforms burn.

With respect to the (de-)centralization discussion in DINRG and the Internet community, this raises some important questions:

  • What is the role of open interfaces, standards, etc. today in reality? Are we still using them to build interoperable, possibly federated systems?
  • How should technology development, standards setting, and regulation evolve to effectively enable user choice (migration, platform selection)?


There was a question about whether the real issue was the platforms' remarkable grip on the market: they are buying each other, but they are also effectively buying the users. In other words, is the primary concern the size of these platforms or the method itself? Cory replied that size certainly promotes distortions. Scale was a problem for two reasons. First, the contract enforcement function is overwhelmed: when the referee is less powerful than the team, it allows teams to cheat. Secondly, even if we stipulated that companies are well run by smart people, they all make errors, and at that scale the mistakes are much more consequential.

Another question was who would be willing to implement the interoperability standards and how companies can be convinced to do that. Cory talked about companies' motivations: companies wanted walled gardens, or APIs whose advantages (vs. disadvantages) accrue to them. What they really sought (over competitive interoperability) were legal remedies against those who reverse engineer to competitively enter the market. If it were possible to restore a mandate and permission for inter-operators, that would help to avoid unquantifiable risk.

Some of these strategies are discussed in Cory's upcoming book "How to Seize the Means of Computation".

Volker Stocker and William Lehr: Ecosystem Evolution and Digital Infrastructure Policy Challenges: Insights & Reflections from an Economics Perspective

Volker Stocker of the Weizenbaum Institute for the Networked Society and William Lehr of the Advanced Networking Architecture Group in CSAIL at MIT presented their research on ecosystem evolution and policy challenges from an economics perspective. Volker is an economist with broad experience in interdisciplinary research, and William is a telecommunications and Internet economist and consultant.

Volker talked about the convergence of digital and non-digital worlds and mentioned a few trends that needed attention:

  • The shift to the edge and the shift to the localization of traffic.
  • Ownership and management have shifted in the Internet ecosystem: sometimes hypergiant content providers with proprietary networks, sometimes edge clouds or roving resources.
  • Potential consequences: value chain constellations are more complex, diverse, and dynamic, resulting in changing ownership and governance structures, industry structures, as well as competitive and innovation dynamics.

Volker made three points in his reflections on ecosystem evolution:

  1. Essential digital infrastructure is about more than connectivity; it is not just about connectivity providers like IPX and ISPs.
  2. The majority of the requisite investment will be private! E.g., access ISPs, CAPs, CDNs, upstream ISPs, and end-users are all investing.
  3. More and new forms of resource sharing will be needed: more network sharing agreements, active & passive sharing arrangements, and evolving optimal models.

William highlighted that the legacy Internet is not the Internet of today, and the economics of yesterday are not those of today. One of the key questions is how to restore meaningful competition.

He mentioned the following challenges and paths forward:

  1. Multidisciplinary Engagement & Feedback
  2. Asymmetric Info & Measurements: Metrics and data (and their provenance)
  3. Capacity to Detect and Act


There was a discussion about how the private sector is expected to profit from the infrastructure development needed by society (assuming investments from the private sector). William replied that governments built or subsidized most infrastructure in most places, with small investments needed initially. Some say significant investment should come from the utilities, which we should not dismiss, but we will likely need a strong argument on how to get there. Either we say there is a lot of money coming from the public sector (for example, through taxes), or we have to find a way to manage private actors. Thus, policy issues are important. Some of these questions are discussed in William's paper on "Getting to the Broadband Future Efficiently with BEAD Funding".

Another question alluded to policy lagging behind technical development, i.e., the mismatch between the speed of innovation and the speed of regulation (which is really hard at the national and international levels). William said that the best hope is standards and architectures that provide options, and he mentioned the importance of open source software.

Christian Tschudin: Minimal Global Broadcast

Christian Tschudin of the University of Basel presented a research idea called "Minimal Global Broadcast" (abstract). Christian is a computer science professor with a track record of research in Information-Centric Networking, distributed computing, and decentralized systems.

Christian started out from the observation that contacting peers in a decentralized environment is challenging. The key question is: how do you learn about a peer's current coordinates and their preferences? The platforms themselves often offer directories, but these are logically centralized rendezvous servers with a partial view that require trust in these platforms. Instead of conceptualizing an uber directory service, Christian proposed a global information dissemination system that focuses on the data, asserting an allowance of "200 bytes of novelty per month and citizen".

This global broadcast channel can (and should) be implemented in many ways, starting from sneakernets to shortwave communication and including Internet-based online-services. Christian explained how such a service could be used to facilitate user migration and user discovery on their current preferred platform(s).
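The core constraint is small enough to sketch in a few lines. The following Python fragment models the per-citizen monthly budget; the class and method names are invented for illustration and are not part of Christian's proposal:

```python
from collections import defaultdict

class MinimalGlobalBroadcast:
    """Sketch of MGB's core constraint: each citizen may broadcast at
    most 200 bytes of novelty per month (budget figure from the talk;
    the API here is invented for illustration)."""
    BUDGET = 200

    def __init__(self):
        self.used = defaultdict(int)   # (citizen, month) -> bytes used
        self.log = []                  # the global broadcast channel

    def post(self, citizen: str, month: str, payload: bytes) -> bool:
        if self.used[(citizen, month)] + len(payload) > self.BUDGET:
            return False               # over budget: rejected
        self.used[(citizen, month)] += len(payload)
        self.log.append((citizen, payload))
        return True
```

The tiny budget is what makes implementations from sneakernets to shortwave plausible: the aggregate channel stays small even at global scale.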


There were some questions about trust in user identities. Christian said that trust roots would be external to MGB and that there would be different levels of trust, e.g., for inter-personal relationships vs. business relationships.


Written by dkutscher

August 21st, 2023 at 5:40 pm

Posted in IETF,IRTF


Directions for Computing in the Network


We have updated our Internet Draft on Directions for Computing in the Network.

In-network computing can be conceived in many different ways -- from active networking, data plane programmability, running virtualized functions, service chaining, to distributed computing.

This memo proposes a particular direction for Computing in the Network (COIN) research and lists suggested research challenges.

This is now an adopted COINRG work item.

Link to draft: draft-irtf-coin-dir.

Written by dkutscher

August 9th, 2023 at 9:45 am

Posted in IRTF


ACM SIGCOMM CCR: Report of 2021 DINRG Workshop on Centralization in the Internet


ACM SIGCOMM CCR just published the report of our 2021 DINRG meeting on Centralization in the Internet.

Executive Summary

There is a consensus within the networking community that the Internet consolidation and centralization trend has progressed rapidly over recent years, as measured by the structural changes to the data delivery infrastructure, the control power over system platforms, application development and deployment, and even in the standard development efforts. This trend has brought impactful technical, societal, and economical consequences.

When the Internet was first conceived as a decentralized system 40+ years back, few people, if any, could have foreseen how it looks today. How has the Internet evolved from there to here? What have been the driving forces for the observed consolidation? From a retrospective view, was there anything that might have been done differently to influence the course the Internet has taken? And most importantly, what should and can be done now to mitigate the trend of centralization? Although there are significant interests in these topics, there has not been much structured discussion on how to answer these important questions.

The IRTF Research Group on Decentralizing the Internet (DINRG) organized a workshop on “Centralization in the Internet” on June 3, 2021, with the objective of starting an organized open discussion on the above questions. Although there seems to be an urgent need for effective countermeasures to the centralization problem, this workshop took a step back: before jumping into solution development to steer the Internet away from centralization, we wanted to discuss how the Internet has evolved and changed, and what have been the driving forces and enablers for those changes. The organizers and part of the community believe that a sound and evidence-based understanding is the key towards devising effective remedy and action plans. In particular, we would like to deepen our understanding of the relationship between the architectural properties and economic developments.

This workshop consisted of two panels, each panel started with an opening presentation, followed by panel discussions, then open-floor discussions. There was also an all-hand discussion at the end. Three hours of the workshop presentations and discussions showed that this Internet centralization problem space is highly complex and filled with intrinsic interplays between technical and economic factors.

This report aims to summarize the workshop outcome with a broad-brush picture of the problem space. We hope that this big picture view could help the research group, as well as the broader IETF community, to reach a clearer and shared high-level understanding of the problem, and from there to identify what actions are needed, which of them require technical solutions, and which of them are regulatory issues which require technical community to provide inputs to regulatory sectors to develop effective regulation policies.

You can find the report in the ACM Digital Library. We also have a pre-print version.

Written by dkutscher

July 27th, 2023 at 4:35 pm

IRTF Decentralization of the Internet Research Group at IETF-117


Recent years have witnessed the consolidations of the Internet applications, services, as well as the infrastructure. The Decentralization of the Internet Research Group (DINRG) aims to provide for the research and engineering community, both an open forum to discuss the Internet centralization phenomena and associated potential threats, and a platform to facilitate the coordination of efforts in identifying the causes of observed consolidations and the mitigation solutions.

Our upcoming DINRG meeting at IETF-117 will feature three talks – by Cory Doctorow, Volker Stocker & William Lehr, and Christian Tschudin.

  1. DINRG Chairs’ Presentation: Status, Updates (Chairs, 05 min)
  2. Let The Platforms Burn: Bringing Back the Good Fire of the Old Internet (Cory Doctorow, 30 min)
  3. Ecosystem Evolution and Digital Infrastructure Policy Challenges: Insights & Reflections from an Economics Perspective (Volker Stocker & William Lehr, 20 min)
  4. Minimal Global Broadcast (MGB) (Christian Tschudin, 20 min)
  5. Wrap-up & Buffer (All, 15 min)



DINRG Meeting at IETF-117 – 2023-07-25, 20:00 to 21:30 UTC

IETF-117 Agenda

Written by dkutscher

July 17th, 2023 at 5:44 pm

Posted in Events,IETF,IRTF


Internet Centralization on The Hedge


Lixia Zhang and I discussed Internet centralization together with Russ White, Alvaro Retana, and Tom Ammon on The Hedge podcast.

Recent years have witnessed the consolidations of Internet applications, services, as well as the infrastructure. The Decentralization of Internet Research Group (DINRG) aims to provide for the IRTF/IETF community both an open forum to discuss the Internet centralization phenomena and associated potential threats, and a platform to facilitate the coordination of efforts in identifying the causes of observed consolidations and the mitigation solutions.

DINRG's main objectives include the following:

  • Measurement of Internet centralization and the consequential societal impacts;
  • Characterization and assessment of observed Internet centralization;
  • Investigation of the root causes of Internet centralization, and articulation of the impacts from market economy, architecture and protocol designs, as well as government regulations;
  • Exploration of new research topics and technical solutions for decentralized system and application development;
  • Documentation of the outcome from the above efforts; and
  • Recommendations that may help steer the Internet away from further consolidation.

Written by dkutscher

June 17th, 2023 at 6:36 am

Posted in IRTF


Information-Centric Networking Research Group at IETF-113 Summary


The Information-Centric Networking Research Group (ICNRG) of the Internet Research Task Force (IRTF) met during the 113th meeting of the Internet Engineering Task Force (IETF) that took place in Vienna from March 19th to March 25th 2022. IETF-113 was the IETF's first hybrid meeting with onsite and remote participants.

Presentation material and minutes are available online, and there is a full recording on YouTube. I am summarizing the meeting below.

Edmund Yeh: NDN for Data-Intensive Science Experiments

Edmund Yeh (Northeastern University) presented the NSF-funded project NDN for Data-Intensive Science Experiments (N-DISE), a two-year interdisciplinary project with participants from Northeastern, Caltech, UCLA, and Tennessee Tech that collaborates with the Large Hadron Collider (LHC) community, genomics researchers, and the NDN project team.

N-DISE is building a data-centric ecosystem to provide agile, integrated, interoperable, scalable, robust, and trustworthy solutions for heterogeneous data-intensive domains, in order to support very data-intensive science applications through an NDN-based communication and data sharing infrastructure. The LHC high-energy physics program represents the leading target use case, but the project is also looking at the BioGenome and other human genome projects as future use cases.

In many data-intensive science applications, data needs to be distributed in real time, archived, retrieved by multiple consumers, etc. Within one data center, but even more so in geographically distributed scenarios, this can lead to a significant amount of duplicated transmissions with legacy system architectures. N-DISE leverages general ICN features and concepts such as location-independent data naming, on-path caching, and explicit replication through data repos to dramatically improve the efficiency, but also to reduce the complexity, of such data management systems and their applications.

The general approach of the N-DISE project is to leverage recent results in high-speed NDN networking such as ndn-dpdk to build a data science support infrastructure for petascale distribution, which involves research in high-throughput forwarding/caching, the definition of container-based node architectures, FPGA acceleration subsystems, and SDN control. The goal is to deliver LHC data over wide area networks at throughputs of approximately 100 Gbps and to dramatically decrease download times by using optimized caching.

From an NDN perspective, the project provides several interesting lines of work:

  • Deployment architectures (how to build efficient container-based N-DISE nodes);
  • WAN Testbed creation and throughput testing;
  • Optimized caching and forwarding;
  • Congestion control and multi-path forwarding; and
  • FPGA acceleration.

There are several interesting ideas and connections to ongoing ICN research in N-DISE. For example, as people start building applications for high-speed data sharing but also distributed computing, the question of container-based ICN node architectures arises, i.e., how to enable easy cloud-native deployment of such systems without compromising too much on performance.

Another interesting aspect is congestion control in multi-path forwarding scenarios. Existing technologies such as Multipath TCP and Multipath QUIC are somewhat limited with respect to their ability to use multipath resources in the network efficiently. In ICN, with its different forwarding model, multipath forwarding decisions could be made hop-by-hop, and consumers (receiving endpoints) could be given greater control over path selection.
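As a toy illustration of such hop-by-hop decisions, the sketch below models a forwarder whose FIB offers several weighted next-hop faces per prefix; all names and the weighting scheme are invented for illustration:

```python
import random

class Forwarder:
    """Toy ICN forwarder: the FIB maps a name prefix to a set of
    next-hop faces, and a forwarding strategy picks one face per
    Interest, locally, without consumer involvement."""

    def __init__(self, fib):
        self.fib = fib  # prefix -> list of (face_id, weight) tuples

    def forward(self, name, rng=random):
        # simplified prefix match: first matching FIB entry wins
        faces = next(f for p, f in self.fib.items() if name.startswith(p))
        ids, weights = zip(*faces)
        # hop-by-hop decision: weighted random choice among faces
        return rng.choices(ids, weights=weights, k=1)[0]
```

Because each hop decides independently, the consumer cannot pin a path on its own, which is one motivation for mechanisms such as path steering.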

Cenk Gündoğan: Alternative Delta Time Encoding for CCNx Using Compact Floating-Point Arithmetic

Cenk Gündoğan of HAW Hamburg presented an update of draft-gundogan-icnrg-ccnx-timetlv, a proposal for an alternative logarithmic encoding of time values in ICN (CCNx) messages.

The motivation for this work lies in constrained networking, where header compression as per RFC 9139 (ICNLoWPAN) would be applied and a more compact time encoding would be desirable. The proposed approach would allow for a compact encoding with dynamic ranges (as in floating-point arithmetic) but imposes challenges with respect to backwards compatibility.
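To illustrate the idea of a logarithmic time encoding (this is an illustrative sketch, not the actual wire format from draft-gundogan-icnrg-ccnx-timetlv), the fragment below packs a millisecond value into one byte as a 4-bit exponent and a 4-bit mantissa, trading precision for range:

```python
def encode_time(ms: int) -> int:
    """Pack a millisecond value into one byte: 4-bit exponent e and
    4-bit mantissa m, decoding to m << e. Values below 16 are exact;
    larger values keep only their top four significant bits (rounding
    down). Valid for values below 2**19, so the exponent fits in 4 bits.
    """
    if ms < 16:
        return ms                      # exponent 0, exact
    exp = ms.bit_length() - 4          # shift so 4 significant bits remain
    mantissa = ms >> exp               # mantissa ends up in [8, 15]
    return (exp << 4) | mantissa

def decode_time(b: int) -> int:
    exp, mantissa = b >> 4, b & 0x0F
    return mantissa << exp
```

Small values round-trip exactly, while large values are represented with a bounded relative error, which is the dynamic-range behavior the draft is after.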

ICNRG is considering adopting this work as a research group item to find the best way for updating the current CCNx specifications in the light of these questions.

Dave Oran: Ping & Traceroute Update

Dave Oran presented the recent updates to the two specifications for ICN Ping and ICN Traceroute.

In IP, fundamental and very useful tools such as ping and traceroute were created years after the architecture and protocol definitions. In ICN there is an opportunity to leverage tooling at an earlier phase – but also to reason about needed tools and useful features.

ICN Ping provides the ability to ascertain reachability of names, which includes

  • to test the reachability and operational state of an ICN forwarder;
  • to test the reachability of a producer or a data repository;
  • to test whether a specific named object is cached in some on-path CS, and, if so, return the administrative name of the corresponding forwarder; and
  • to perform some simple network performance measurements.

ICN Traceroute provides the ability to ascertain characteristics (transit forwarders and delays) of at least one of the available routes to a name prefix, which includes

  • to trace one or more paths towards an ICN forwarder (for troubleshooting purposes);
  • to trace one or more paths along which a named data of an application can be reached;
  • to test whether a specific named object is cached in some on-path CS, and, if so, trace the path towards it and return the identity of the corresponding forwarder; and
  • to perform transit delay network measurements.

Both drafts completed Research Group Last Call in January 2022 and evoked some feedback that has now been addressed (see presentation for details). ICNRG will soon submit these drafts for IRSG review and the subsequent steps in the IRTF review and publication process.

Dave Oran: Path Steering Refresher

Dave Oran presented a refresher of a previously presented specification of Path Steering in ICN (draft-oran-icnrg-pathsteering). Path Steering is a mechanism to discover paths to the producers of ICN content objects and to steer subsequent Interest messages along a previously discovered path. It has various uses, including the operation of state-of-the-art multipath congestion control algorithms and network measurement and management.

In ICN, communication is inherently multi-path and potentially multi-destination. But so far there is no mechanism for consumers to direct Interest traffic onto a specific path, which leads to several problems:

  • Forwarding strategies in ICN forwarders can spray Interests onto various paths;
  • Consumers have a hard time interpreting failures and performance glitches; and
  • Troubleshooting and performance tools need path visibility and control to find problems and do simple measurements.

ICN Path Steering would enable:

  • Discovering, monitoring, and troubleshooting multipath network connectivity based on names and name prefixes:
    • Ping
    • Traceroute
  • Accurately measuring the performance of a specific network path.
  • Multipath congestion control, which needs to:
    • Estimate/count the number of available paths
    • Reliably identify a path
    • Allocate traffic to each path
  • Traffic engineering and SDN:
    • Externally programmable end-to-end paths for data center and service provider networks.

Briefly, Path Steering works by using a Path Label (as an extension to existing protocol formats, see figure) for discovering and for specifying selected paths.

The technology would give consumers much more visibility and greater control of multipath usage and could be useful for many applications, especially those that want to leverage path diversity, for example high-volume file transfers, robust communication in dynamically changing networks, and distributed computing.
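A minimal sketch of the mechanism (class names and the message representation are invented for illustration, not the draft's encoding): during discovery, each forwarder appends the chosen face to a path label returned with the Data; a steered Interest then carries that label, and each hop follows it instead of consulting its forwarding strategy:

```python
class PathForwarder:
    """Toy forwarder chain for path-label discovery and steering."""

    def __init__(self, name, faces):
        # faces: face_id -> next PathForwarder, or a producer object
        self.name, self.faces = name, faces

    def forward(self, interest, label_out):
        if interest.get("path_label"):
            face = interest["path_label"].pop(0)  # steered: obey the label
        else:
            face = next(iter(self.faces))         # discovery: strategy picks
        label_out.append(face)                    # record hop in the label
        nxt = self.faces[face]
        if isinstance(nxt, PathForwarder):
            return nxt.forward(interest, label_out)
        return nxt  # reached the producer
```

Sending a plain Interest returns the label of the discovered path; re-sending an Interest with a label pins subsequent traffic to that path, hop by hop.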

Dirk Kutscher: Reflexive Forwarding Re-Design

Dave Oran and I recently re-designed a scheme that we call Reflexive Forwarding, which is specified in draft-oran-icnrg-reflexive-forwarding.

Current Information-Centric Networking protocols such as CCNx and NDN have a wide range of useful applications in content retrieval and other scenarios that depend only on a robust two-way exchange in the form of a request and response (represented by an Interest-Data exchange in the case of the two protocols noted above).

A number of important applications however, require placing large amounts of data in the Interest message, and/or more than one two-way handshake. While these can be accomplished using independent Interest-Data exchanges by reversing the roles of consumer and producer, such approaches can be both clumsy for applications and problematic from a state management, congestion control, or security standpoint.

This specification proposes a Reflexive Forwarding extension to the CCNx and NDN protocol architectures that eliminates the problems inherent in using independent Interest-Data exchanges for such applications. It updates RFC8569 and RFC8609.

Example: RESTful communication over ICN

In today's HTTP deployments, requests such as HTTP GET requests are conceptually stateless, but in fact they carry a lot of information that allows servers to process these requests correctly. This includes regular header fields and cookies, but also input parameters (form data, etc.), so that requests can become very large (sometimes larger than the corresponding result messages).

It is generally not a good idea to build client-server systems that require servers to parse and process a lot of client-supplied input data, as this could easily be exploited by computational overload attacks.

In ICN, in addition, Interest messages should not be used to carry many "client" parameters, as this could lead to issues with respect to flow balance (congestion control schemes in ICN should work based on Data message volume and rate), but it would also force forwarders to store large Interest messages and could potentially even lead to Interest fragmentation, a highly undesirable consequence.

Reflexive Forwarding aims at providing a robust, ICN-idiomatic way to transfer "input parameters" by enabling the "server side" to fetch parameters using regular ICN communication (Interest/Data). When doing so, we do not want to give up important ICN properties such as not requiring consumers (i.e., the "clients") to reveal their source address, a useful feature for enabling easy consumer mobility and some form of privacy.

Reflexive Forwarding Design

Our Reflexive Forwarding scheme addresses this by letting the consumer announce a temporary, non-globally-routable prefix to the network and the producer, which allows the producer to get back to the consumer through Reflexive Interests for fetching the required input parameters at the producer's discretion. The figure above depicts the high-level protocol operation.

Our new design leverages temporary PIT (Pending Interest Table) state in forwarders and PIT Tokens (hop-by-hop protocol fields in NDN and CCNx) that allow forwarders to map Reflexive Interests to the PIT entries of the actual Interest and thus forward the Reflexive Interest correctly on the reverse path.
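The PIT-token mapping can be sketched as follows; this is an illustrative model, not the wire format or state machine from draft-oran-icnrg-reflexive-forwarding:

```python
import itertools

class ReflexiveForwarder:
    """Toy sketch of the PIT handling described above."""

    def __init__(self):
        self.pit = {}                   # token -> (name, downstream face)
        self._tokens = itertools.count(1)

    def on_interest(self, name, downstream_face):
        # Ordinary Interest: record the face it arrived on and hand a
        # PIT token upstream, so later Reflexive Interests can be
        # mapped back to this entry.
        token = next(self._tokens)
        self.pit[token] = (name, downstream_face)
        return token

    def on_reflexive_interest(self, token, reflexive_name):
        # The producer fetches parameters under the consumer's temporary
        # prefix; the token identifies which PIT entry's reverse path
        # the Reflexive Interest should follow.
        name, face = self.pit[token]
        return face  # forward on the reverse path toward the consumer
```

The key point is that the forwarder never needs a routable consumer address: the token plus existing PIT state is enough to send the Reflexive Interest back along the reverse path.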

Potential Applications

Potential applications include

  • RESTful communication, e.g., Web over ICN;
  • Remote Method Invocation;
  • Phone-home scenarios; and
  • Peer state synchronization.

For example, we have used a previous design of this scheme in our paper RICE: Remote Method Invocation in ICN that leveraged Reflexive Forwarding for the invocation and input parameter transmission as depicted in the figure above.

Reflexive Forwarding requires only relatively benign changes to ICN forwarder and endpoint behavior but could enable many relevant use cases in an ICN-idiomatic way, without requiring large layering overhead and without giving up important ICN properties.

Written by dkutscher

April 1st, 2022 at 2:36 pm

Posted in IRTF


Information-Centric Networking Research Group Update December 2021


The Information-Centric Networking Research Group (ICNRG) of the Internet Research Task Force (IRTF) has recently published two RFCs and held a research meeting on December 10th, 2021.

Recent RFC Publications

ICNRG published two RFCs recently:

RFC 9139: ICN Adaptation to Low-Power Wireless Personal Area Networks (LoWPANs)

RFC 9139 defines a convergence layer for Content-Centric Networking (CCNx) and Named Data Networking (NDN) over IEEE 802.15.4 Low-Power Wireless Personal Area Networks (LoWPANs). A new frame format is specified to adapt CCNx and NDN packets to the small MTU size of IEEE 802.15.4. For that, syntactic and semantic changes to the TLV-based header formats are described.

To support compatibility with other LoWPAN technologies that may coexist on a wireless medium, the dispatching scheme provided by IPv6 over LoWPAN (6LoWPAN) is extended to include new dispatch types for CCNx and NDN. Additionally, the fragmentation component of the 6LoWPAN dispatching framework is applied to Information-Centric Network (ICN) chunks.

In its second part, the document defines stateless and stateful compression schemes to improve efficiency on constrained links. Stateless compression reduces TLV expressions to static header fields for common use cases. Stateful compression schemes elide states local to the LoWPAN and replace names in Data packets by short local identifiers.
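The stateful scheme can be illustrated with a small sketch in which both LoWPAN peers keep a synchronized table mapping full names to one-byte identifiers. The framing used here (a 0x00 prefix for full names, high bit set for identifiers) is invented for illustration and is not RFC 9139's actual encoding:

```python
class NameCompressor:
    """Stateful compression sketch: replace names in Data packets with
    short local identifiers once both sides have seen them. Supports up
    to 128 names (identifiers 0..127 in the low seven bits)."""

    def __init__(self):
        self.table = {}    # name -> short id
        self.reverse = {}  # short id -> name

    def compress(self, name: str) -> bytes:
        if name not in self.table:
            ident = len(self.table)          # next free short id
            self.table[name] = ident
            self.reverse[ident] = name
            return b"\x00" + name.encode()   # first use: send full name
        return bytes([0x80 | self.table[name]])  # later: one-byte id

    def decompress(self, wire: bytes) -> str:
        if wire[0] & 0x80:
            return self.reverse[wire[0] & 0x7F]
        return wire[1:].decode()
```

After the first transmission establishes the mapping, every subsequent occurrence of the name costs a single byte on the constrained link, which is the effect the stateful schemes in RFC 9139 aim for.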

The ICN LoWPAN specification is a great platform for future experiments with ICN in constrained networking environments, including but not limited to LoWPAN networks.

RFC 9138: Design Considerations for Name Resolution Service in Information-Centric Networking (ICN)

RFC 9138 provides the functionalities and design considerations for a Name Resolution Service (NRS) in Information-Centric Networking (ICN). The purpose of an NRS in ICN is to translate an object name into some other information such as a locator, another name, etc. in order to forward the object request.

Since naming data independently of its current location (where it is stored) is a primary concept of ICN, how to find any Named Data Object (NDO) using a location-independent name is one of the most important design challenges in ICN. Such ICN routing may comprise three steps:

  1. Name resolution: matches/translates a content name to the locator of the content producer or source that can provide the content.
  2. Content request routing: routes the content request towards the content's location based either on its name or locator.
  3. Content delivery: transfers the content to the requester.

Among the three steps of ICN routing, this document investigates only the name resolution step, which translates a content name to the content locator. In addition, this document covers various possible types of name resolution in ICN such as one name to another name, name to locator, name to manifest, name to metadata, etc.
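The three routing steps above can be sketched with a dictionary-based NRS; all names, structures, and the network model here are illustrative, not from RFC 9138:

```python
class NameResolutionService:
    """Minimal NRS: translates an object name into a locator."""

    def __init__(self):
        self.mappings = {}  # object name -> locator (e.g., a node id)

    def register(self, name, locator):
        self.mappings[name] = locator

    def resolve(self, name):
        return self.mappings.get(name)

def fetch(nrs, name, network):
    locator = nrs.resolve(name)      # step 1: name resolution
    if locator is None:
        raise KeyError(f"no NRS mapping for {name}")
    producer = network[locator]      # step 2: route request to the locator
    return producer[name]            # step 3: content delivery
```

The same `resolve` step could just as well return another name, a manifest, or metadata, which is the generality the document discusses.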

ICNRG Meeting on December 10th 2021


  1. Chairs’ Presentation: Status, Updates (Chairs, 05 min)
  2. Zenoh - The Edge Data Fabric (Carlos Guimarães, 30 min)
  3. The SPAN Network Architecture (Rhett Sampson, 30 min)
  4. NDNts API Design (Junxiao Shi, 30 min)
  6. Wrap-Up, Next Steps (Chairs, 5 min)


Carlos Guimarães presented zenoh - The Edge Data Fabric. zenoh is an ICN-inspired data distribution and processing system that aims at unifying data in motion, data at rest, and computations. It blends traditional pub/sub with geo-distributed storage, queries, and computations, adopting a hierarchical naming scheme and other ICN properties. zenoh provides a high-level API for high-performance pub/sub and distributed queries, data representation transcoding, and an implementation of geo-distributed storage and distributed computed values.

SPAN Network Architecture

Rhett Sampson and Jaime Llorca presented GT Systems' SPAN-AI content distribution system (CDN as a service) that uses ICN/CCN/NDN for the implementation of their distributed content delivery system, leveraging name-based routing.

NDNts API Design

Junxiao Shi presented the NDNts API Design. NDNts is an NDN implementation in TypeScript, aiming to facilitate NDN application development in browsers and on the Node.js platform.

The development of NDNts led to some insights on NDN low-level API design (packet decoding, fragmentation, notion of "faces", retransmission logic etc.) that Junxiao shared in his presentation.

Written by dkutscher

December 21st, 2021 at 10:07 pm

Posted in IRTF
