Dirk Kutscher

Personal web page

Archive for the ‘Events’ Category

HKUST Internet Research Workshop (HKIRW) 2025

without comments

We are organizing the 2025 HKUST Internet Research Workshop (HKIRW) in the week before the IETF-122 meeting in Bangkok. This workshop aims to bring together researchers in computer networking and systems from around the globe in a live forum to discuss innovative ideas at their early stages. The mission of the workshop is to give promising but not-yet-mature ideas timely feedback from the community and from experienced researchers, potentially leading to future IRTF work, Internet-Drafts, or IETF working groups.

The workshop will operate like a “one-day Dagstuhl seminar” and will focus on discussion and the exchange of ideas rather than on conference-style presentations. The objective is to identify topics and connect like-minded people for potential future collaboration.

Please see https://hkirw.github.io/2025/ for details.


Written by dkutscher

December 23rd, 2024 at 3:35 pm

Report: ACM CoNEXT-2024 Workshop on the Decentralization of the Internet

without comments

On Monday, December 9th, 2024, we held our Decentralization of the Internet (DIN) workshop at ACM CoNEXT-2024. It brought together network researchers, law and policy experts, and digital rights activists to discuss the consolidation and centralization of Internet applications, services, and infrastructure observed in recent years. This trend has economic as well as technical implications for attributes commonly associated with the Internet, such as user-centricity and permissionless innovation.

The Decentralization of the Internet Research Group (DINRG) of the Internet Research Task Force (IRTF) has been working on identifying the root causes and consequences of Internet centralization at past IETF meetings and focused workshops, which has led to significant insights, especially with regard to the centralization of infrastructure and control power. This recent DIN workshop at ACM CoNEXT-2024, organized by my DINRG co-chair Lixia Zhang and myself, provided a forum for academic researchers to present and discuss ongoing efforts on this topic and to create greater awareness of this important issue in the broader network research community. The workshop attracted a diverse set of researchers working on Internet decentralization in fields such as Internet technologies, economics, and law-making. It featured two keynotes, two technical paper sessions, and an interactive panel discussion.

Keynotes

The keynotes were presented by two renowned experts:

Keynote: Cory Doctorow: DISENSHITTIFY OR DIE!

Cory Doctorow, member of the Electronic Frontier Foundation (EFF), gave a talk titled DISENSHITTIFY OR DIE! How computer scientists can halt enshittification to make a new, good internet and condemn today's enshitternet to the scrapheap of history. Cory’s talk vividly explained the historic development of a process he calls enshittification, in which the providers of online products and services changed their policies subtly and gradually over time, grabbing control of user data for profitability. Doctorow also discussed potential remedies and countermeasures, including removing the barriers for users to exit platforms and reinstating the end-to-end principle in future application development.



Keynote: Michael Karanicolas: The Fediverse Papers: Constitutional, Governance, and Policy Questions for a New Paradigm of Networking

Michael Karanicolas, the executive director of the UCLA Institute for Technology, Law and Policy, talked about the Fediverse Papers: Constitutional, Governance, and Policy Questions for a New Paradigm of Networking. Michael provided an overview of the history of digital speech and content governance. He highlighted the challenges in supporting effective content moderation in today’s Internet contexts, including issues around monetization, legislation, privacy, and the need for governance mechanisms to meet the expectations of users, content owners, and governments. He emphasized the importance of intentionality and a structured process to identify the essential policy questions and to evaluate various design choices for the future of decentralized platforms.

Decentralized Systems

Bluesky and the AT Protocol: Usable Decentralized Social Media

Authors: Martin Kleppmann, Paul Frazee, Jake Gold, Jay Graber, Daniel Holmgren, Devin Ivy, Jeromy Johnson, Bryan Newbold, Jaz Volpert

Abstract: Bluesky is a new social network built upon the AT Protocol, a decentralized foundation for public social media. It was launched in private beta in February 2023, and has grown to over 10 million registered users by October 2024. In this paper we introduce the architecture of Bluesky and the AT Protocol, and explain how the technical design of Bluesky is informed by our goals: to enable decentralization by having multiple interoperable providers for every part of the system; to make it easy for users to switch providers; to give users agency over the content they see; and to provide a simple user experience that does not burden users with complexity arising from the system’s decentralized nature. The system’s openness allows anybody to contribute to content moderation and community management, and we invite the research community to use Bluesky as a dataset and testing ground for new approaches in social media moderation.

ReP2P Matrix: Decentralized Relays to Improve Reliability and Performance of Peer-to-Peer Matrix

Authors: Benjamin Schichtholz, Roland Bless, Florian Jacob, Hannes Hartenstein, Martina Zitterbart

Abstract: Matrix is a decentralized middleware for low-latency group communication, most renowned for its use in the Element instant messenger. Proposals for peer-to-peer (P2P) Matrix architectures aim to decentralize the current architecture further, which is based on federated servers. These proposals require that the receiver and the originator, or another peer that already successfully received the message, are simultaneously online. We introduce relay-enhanced P2P Matrix (ReP2P Matrix) in order to improve message delivery between peers that are online at different times. The design maintains the advantages of P2P Matrix and integrates well into it, e.g., it reuses existing mechanisms for authentication and authorization. Using an extended real-world group messaging traffic dataset, we evaluate ReP2P Matrix by comparing it to P2P Matrix without relays. The results show that relays not only improve reliability in message delivery, but also increase the share of low delivery latencies by 50 percentage points in groups with up to 30 members.

On Empowering End Users in Future Networking

Authors: Tianyuan Yu, Xinyu Ma, Lixia Zhang

Abstract: In today's Internet, end users communicate largely via cloud-based apps, and user data are stored in cloud servers and controlled by cloud providers. Recent years have witnessed multiple efforts in developing decentralized social apps with various design approaches, although the community at large is yet to fully understand the effectiveness, viability, and limitations of these different designs. In this paper, we make a proposition that a necessary condition of moving towards Internet decentralization is enabling direct user-to-user (U2U) communications, and discuss the design choices in several decentralization efforts and identify their limitations. We then articulate why a DNS-derived namespace is the best choice in U2U app developments in general, and use a recently developed decentralized app, NDN Workspace (NWS), as an example to show how NWS' use of DNS-derived namespace enables secure U2U communications.

Technologies for Decentralization

Atomicity and Abstraction for Multi-Blockchain Interactions

Authors: Huaixi Lu, Akshay Jajoo, Kedar S. Namjoshi

Abstract: A blockchain enables secure, atomic transactions among untrusted parties. Atomicity is not guaranteed, however, for transactions whose operations span several blockchains; multi-chain atomicity must be enforced by a protocol. Such protocols are known only for special cases, such as cryptocurrency swaps, which are limited only to two chains. We propose a novel two-phase protocol that facilitates atomic executions of general multi-chain (>= 2) transactions. We formally analyze the protocol correctness and show that the proposed abstraction considerably simplifies the development of multi-chain applications. Our experiments with a prototype implementation show that the performance of the general atomicity protocol is comparable to that of custom-built implementations.

Communication Cost for Permissionless Distributed Consensus at Internet Scale

Authors: David Guzman, Dirk Trossen, Jörg Ott

Abstract: The diffusion of information that evolves a distributed computing state is a fundamental operation of a permissionless distributed consensus system (DCS). This permissionless participation decentralized the consensus over the distributed computing state, e.g., in cryptocurrencies and voting systems. For this, a permissionless DCS implements protocols to establish relationships among peers, which is then used to diffuse information. The relation establishment constitutes the control plane of the DCS, while the state diffusion is the data plane. The prevalent mechanism to realize both is a randomized peer-centric iterative diffusion. In this paper, we contrast this approach against a multicast-based design, focusing our comparison on the costs (bytes transmitted) for maintaining the relations, the control plane. We develop suitable models to account for those costs, parameterized through Internet-scale experimental insights we derived from existing DCS deployments. Our results show that the communication costs can be reduced by 30 times.

Towards a Decentralized Internet Namespace

Authors: Yekta Kocaogullar, Eric Osterweil, Lixia Zhang

Abstract: The Domain Name System (DNS) has been providing a decentralized global namespace to support all Internet applications and usages over the last few decades. In the recent years, a number of blockchain-based name systems have emerged with the claim of providing better namespace decentralization than DNS. The community at large seems uncertain with regard to which of these systems is the best in providing decentralized Internet namespace control. In this paper, we first deconstruct the design of DNS, identify its three essential components and explain who controls each of them. We then examine the Ethereum Name Service (ENS) as a representative example of blockchain-based naming systems, gauge the degree of its decentralization. Finally, we conduct a comparative analysis between DNS and ENS to assess the validity and affordability of each design and the (de)centralization in their namespace control and name system operations.

Panel Discussion: Decentralization of the Internet – Quo Vadis?

An interactive panel discussion with (from left to right) Michael Karanicolas (UCLA), Paul Mockapetris (ThreatSTOP), Dan Massey (USC ISI, NSF), and Cory Doctorow (EFF) articulated various next steps for countering Internet centralization. Among many things discussed, the panel and audience identified the importance of enabling direct user-to-user communication without reliance on third parties, and the functionality required to support it, such as user-owned identities and tools for mutual user authentication and secure communication.

These and additional related topics will be further discussed in the IRTF DIN Research Group, an open-participation forum for continued international collaborative research on Internet decentralization.


Written by dkutscher

December 18th, 2024 at 8:34 pm

ACM CoNEXT-2024 Workshop on the Decentralization of the Internet

without comments

Our ACM CoNEXT-2024 workshop on the decentralization of the Internet on Monday, December 9th, 2024, in Los Angeles has an exciting agenda – don't miss it! Check out the workshop homepage for up-to-date information.

09:00 Session 1: Keynotes

  1. Keynote by Cory Doctorow: DISENSHITTIFY OR DIE! How computer scientists can halt enshittification to make a new, good internet and condemn today's enshitternet to the scrapheap of history.
  2. Keynote by Michael Karanicolas: The Fediverse Papers: Constitutional, Governance, and Policy Questions for a New Paradigm of Networking

11:00 Session 2: Decentralized Systems

  1. Martin Kleppmann, et al.; Bluesky and the AT Protocol: Usable Decentralized Social Media
  2. Benjamin Schichtholz et al.; ReP2P Matrix: Decentralized Relays to Improve Reliability and Performance of Peer-to-Peer Matrix
  3. Tianyuan Yu et al.; On Empowering End Users in Future Networking

14:00 Session 3: Technologies for Decentralization

  1. Huaixi Lu et al.; Atomicity and Abstraction for Multi-Blockchain Interactions
  2. David Guzman et al.; Communication Cost for Permissionless Distributed Consensus at Internet Scale
  3. Yekta Kocaogullar et al.; Towards a Decentralized Internet Namespace

15:00 Session 4: Decentralization of the Internet – Quo Vadis?

  • Organizers: Lixia Zhang & Dirk Kutscher
  • Interactive panel discussion with Cory Doctorow, Michael Karanicolas, and paper authors

Written by dkutscher

October 30th, 2024 at 7:25 am

IRTF DINRG Meeting at IETF-121

without comments

The IRTF DINRG meeting at IETF-121 takes place on 2024-11-06 from 13:00 to 14:30 UTC.

1. DINRG Chairs’ Presentation: Status, Updates – Chairs (05 min)
2. Distributing DDoS Analytics among ASes – Daniel Wagner (20 min)
3. The Role of DNS Names in Internet Decentralization – Tianyuan Yu (20 min)
4. Taxonomy of Internet Consolidation & Effects of Internet Consolidation – Mark McFadden (15 min)
5. DINRG – Next Steps – Chairs & Panelists (30 min)
6. Wrap-up & Buffer – Chairs (00 min)

Documents and Links to Resources

  1. United We Stand: Collaborative Detection and Mitigation of Amplification DDoS Attacks at Scale
  2. https://datatracker.ietf.org/doc/draft-mcfadden-consolidation-taxonomy/
  3. https://datatracker.ietf.org/doc/draft-mcfadden-cnsldtn-effects/

Notes

Please remember that all sessions are being recorded.

Written by dkutscher

October 30th, 2024 at 7:16 am

Posted in Events,IRTF


IRTF ICNRG Meeting at IETF-121

without comments

The ICNRG Meeting at IETF-121 takes place on 2024-11-05, 13:00 to 14:30 UTC.

ICNRG Agenda

1. ICNRG Chairs’ Presentation: Status, Updates – Chairs (05 min)
2. FLIC Update – Marc Mosko (15 min)
3. CCNx Content Object Chunking – Marc Mosko (15 min)
4. Reflexive Forwarding Update – Hitoshi Asaeda (20 min)
5. ICN Challenges for Metaverse Platform Interoperability – Jungha Hong (15 min)
6. Distributed Micro Service Communication – Aijun Wang (15 min)
7. Buffer, Wrap Up and Next Steps – Chairs (05 min)

Please remember that all sessions are being recorded.

Material

  1. https://datatracker.ietf.org/doc/draft-irtf-icnrg-flic/
  2. https://datatracker.ietf.org/doc/draft-mosko-icnrg-ccnxchunking/
  3. https://github.com/mmosko/ccnpy
  4. https://datatracker.ietf.org/doc/draft-irtf-icnrg-reflexive-forwarding/
  5. https://datatracker.ietf.org/doc/draft-hong-icn-metaverse-interoperability/
  6. https://datatracker.ietf.org/doc/draft-li-icnrg-damc/

Written by dkutscher

October 30th, 2024 at 7:13 am

Posted in Events,IRTF


Invited Talk at Airbus Workshop on Networking Systems

without comments

On October 10th, 2024, I was invited to give a talk at the 2nd Airbus Workshop on Networking Systems. The workshop largely discussed connected-aircraft scenarios and technologies and featured talks on security and reliability, IoT sensor fusion, and future space and 6G network architectures.

My talk was on Connected Aircraft – Network Architectures and Technologies, and discussed relevant scenarios from my perspective, such as passenger services and new aircraft management applications. For the technology discussion, I focused on large-scale, low-latency multimedia communication over the expected heterogeneous and dynamic aircraft connectivity networks and discussed current and emerging technologies such as Media over QUIC and ICN.

I also introduced the recently established Low-Altitude Systems and Economy Research Institute at HKUST(GZ), a cross-disciplinary research institute for the low-altitude domain (with similar but not identical requirements) and some of our recent projects such as Named Data Microverse.

Written by dkutscher

October 19th, 2024 at 5:20 am

Dagstuhl Seminar on Greening Networking: Toward a Net Zero Internet

without comments


We (Alexander Clemm, Michael Welzl, Cedric Westphal, Noa Zilberman, and I) organized a Dagstuhl seminar on Green Networking: Toward a Net Zero Internet.

Making Networks Greener

As climate change triggered by CO2 emissions dramatically impacts our environment and our everyday life, the Internet has proved a fertile ground for solutions, such as teleworking and teleconferencing to reduce travel emissions. At the same time, the Internet is itself a significant contributor to greenhouse gas emissions, e.g., through its own substantial power consumption. It is thus very important to make networks themselves "greener" and devise less carbon-intensive solutions while continuing to meet increasing network traffic demands and service requirements.

Computer scientists and engineers from world-leading universities and international companies, such as Ericsson, NEC, Netflix, Red Hat, and Telefonica came together in a Seminar on Green Networking (Toward a Net Zero Internet) at Schloss Dagstuhl – Leibniz Center for Informatics, between September 29th and October 2nd, 2024. Organized by leading Internet researchers from the Hong Kong University of Science and Technology (Guangzhou), the University of Oxford, the University of Oslo and the University of California, Santa Cruz, they met to identify and prioritize the most impactful networking improvements to reduce carbon emission, define action items for a carbon-aware networking research agenda, and foster/facilitate research collaboration in order to reduce carbon emissions and to positively impact climate change.

Interactions between the Power Grid, Larger Systems, and the Network

In addition to pure networking issues, the seminar also analyzed the impact of larger systems that are built with Internet technologies, such as AI, multimedia streaming, and mobile communication networks. For example, the seminar discussed energy proportionality in networked systems, to allow systems to adapt their energy consumption to actual changes in utilization, so that savings can be achieved in idle times. Such a behavior would require better adaptiveness of applications and network protocols to cost information (such as carbon impact).

Moreover, networked systems can interact with the power grid in different ways, for example by adapting energy consumption to the current availability and cost of renewable energy. This can be helpful for jointly planning the grid and networks/networked systems/clouds to achieve maximum efficiency and savings.

The seminar attendees work with international research and standardization organizations such as the Internet Engineering Task Force (IETF) and ETSI, and the seminar is expected to contribute to future research and standardization agendas in these organizations to bring the Internet to Net Zero emissions.

Organizers

  • Alexander Clemm (Los Gatos, US)
  • Dirk Kutscher (HKUST - Guangzhou, CN)
  • Michael Welzl (University of Oslo, NO)
  • Cedric Westphal (University of California, Santa Cruz, US)
  • Noa Zilberman (University of Oxford, GB)

References

Written by dkutscher

October 2nd, 2024 at 11:30 am

ACM CoNEXT-2024 Workshop on the Decentralization of the Internet

without comments


Recent years have witnessed the consolidation and centralization of Internet applications, services, and infrastructure. This centralization has economic aspects and factors as well as technical ones. Its effects are often characterized as detrimental to the original goals of the Internet, such as permissionless innovation, as well as to society at large, due to the amount of (personal) data that is obtained and capitalized on by large platforms.

We are organizing a workshop at ACM CoNEXT-2024 to provide a forum for academic researchers to present and discuss ongoing work on this topic and to create greater awareness of it in the larger community. The workshop solicits work on specific topics including but not limited to:

  • investigation of the root causes of Internet centralization, and articulation of the impacts of the market economy, architecture and protocol designs, as well as government regulations;
  • measurement of the Internet centralization and the consequential societal impacts;
  • characterization and assessment of observed Internet centralization;
  • new research topics and technical solutions for decentralized system and application development;
  • decentralized (cloud-independent) distributed system design;
  • protocols and algorithms for decentralized distributed systems; and
  • decentralized security and trust architectures and protocols for real-world Internet systems.

Submission Instructions

Please see the workshop homepage for details.

Written by dkutscher

May 31st, 2024 at 2:11 pm

IRTF ICNRG@IETF-119

without comments

The Information-Centric Networking Research Group (ICNRG) of the Internet Research Task Force (IRTF) met at IETF-119 in Brisbane. Here is my quick summary of the meeting:

Agenda:

1. ICNRG Chairs’ Presentation: Status, Updates – Chairs
2. Secure Web Objects and Transactions – Dirk Kutscher
3. Transaction Manifests – Marc Mosko
4. Vanadium: Secure, Distributed Applications – Marc Mosko
5. Global vs. Scoped Namespaces – Marc Mosko



ICNRG Status

ICNRG recently published four new RFCs – a great achievement by all involved authors and the whole group!

See my blog posting for a more detailed description.

Secure Web Objects and Transactions

One focus of this meeting was transactions in ICN, i.e., interactions with the intention to achieve some durable state change at a remote peer – which imposes some challenges in a system that is designed around accessing named data.

In my presentation I talked about different ways to realize transactions in ICN:

  1. ICN as a network layer
    • Client-server communication between two nodes
    • Implement transaction semantics on top of an ICN messaging service
  2. Recording state changes in shared data structures
    • Shared namespace, potentially functioning as a transaction ledger
    • Still need to think about atomicity etc

For 1) transactions as messaging over ICN networks, the following considerations apply:

  • Client-server communication between two nodes
  • Implement transaction semantics on top of an ICN messaging service
  • Different approaches:
    • A: Traditional layering: using NDN-like systems as a messaging layer
      • Assign prefixes to clients and servers
      • Send messages back and forth, and implement reliability and transaction semantics on top
    • B: ICN-native communication: using Interest/Data as a request-response abstraction for transactions
      • Mapping transaction communication and state evolution more directly to ICN, e.g., Interest/Data in NDN
      • Collapsing traditional network, transport, and application-layer functions

I mainly talked about variant 1B, ICN-native communication: using Interest/Data as a request-response abstraction for transactions, and introduced the idea of "Secure Web Objects" (SWOs) for a data-oriented web as a motivation.

In such a system, not everything would be about accessing named data objects – there is also a need for "client/server" state evolution, e.g., for online banking and similar use cases.

I introduced some ideas on RESTful ICN that we published in an earlier paper. The RESTful ICN proposal leverages Reflexive Forwarding for robust client-server communication and integrates elements of CCNx key exchange for security-context setup and session resumption.

Summarizing, I wanted to initiate a discussion about how to realize transactions in information-centric systems. This discussion is not about mapping ICN to existing protocols, such as HTTP, but about actual distributed computing semantics, i.e., robust session setup and state evolution. Transactions with ICN-native communication are hard to provide with basic Interest/Data. Reflexive Forwarding + CCNx Key Exchange + transaction semantics are an attempt to provide such a service in a mostly ICN-idiomatic way, with the downside that Reflexive Forwarding needs extensions to forwarders. This raises questions about the minimal feature set of core ICN protocols and how to deal with extensions.
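To make the variant-1B idea concrete, here is a minimal toy sketch of a transaction modeled as an Interest/Data exchange with idempotent state evolution on the server side. This is not the RICE/Reflexive Forwarding protocol; all class and field names are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Interest:
    name: str     # e.g. a transaction request name like "/bank/transfer/tx-1"
    params: dict

@dataclass
class Data:
    name: str
    content: dict

class TransactionServer:
    """Holds durable state; each Interest requests a state change."""
    def __init__(self):
        self.balances = {"alice": 100, "bob": 0}
        self.seen = set()   # deduplicate retransmitted Interests by txid

    def on_interest(self, interest: Interest) -> Data:
        txid = interest.params["txid"]
        if txid not in self.seen:   # idempotent state evolution
            self.seen.add(txid)
            src = interest.params["src"]
            dst = interest.params["dst"]
            amt = interest.params["amount"]
            if self.balances.get(src, 0) >= amt:
                self.balances[src] -= amt
                self.balances[dst] = self.balances.get(dst, 0) + amt
        # the Data packet (named like the Interest) carries the result
        return Data(interest.name, {"balances": dict(self.balances)})

server = TransactionServer()
i = Interest("/bank/transfer/tx-1",
             {"txid": "tx-1", "src": "alice", "dst": "bob", "amount": 40})
d1 = server.on_interest(i)
d2 = server.on_interest(i)   # retransmission: no double spend
assert d1.content == d2.content == {"balances": {"alice": 60, "bob": 40}}
```

The deduplication set stands in for the session/transaction state that a real design would have to establish securely (e.g., via Reflexive Forwarding and key exchange) before committing resources.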

In the discussion, it was pointed out that much experience with distributed systems has shown that transactions or secure multi-step interactions will generally require more than a single two-way exchange.

Others suggested that in ICN/NDN, authentication is carried out when a signed Interest arrives, which directly proves authenticity, so that authentication would in fact be done beforehand.

However, authentication may not be enough. For example, client authorization in client-server communication is a critical function that needs to be carefully designed in real-world networks. Forcing a server to do signature verification on initial request arrival has been shown in prior systems (e.g., TCP+TLS) to represent a serious computational DoS attack risk. Reflexive Forwarding in RICE tries to avoid exactly that problem by enabling the server to iteratively authenticate and authorize clients before committing computing resources.

It was also said that a protocol's authentication needs to be analyzed in the context of specific examples; one cannot only look at the problem at an abstract level.

Transaction Manifests

Marc Mosko presented another approach to transactions in ICN, called Transaction Manifests. He explained that ICN can be transactional.

Typically, ICN is considered a publish/subscribe or pre-publishing-of-named-data approach. Outside ICN, distributed transactions do exist, especially in DLTs. For example, consider a permissioned DLT of size N with K << N bookkeepers, who base their decisions on the block hash history. In this talk, Marc discussed what an equivalent function in ICN would be, and introduced the notion of transaction manifests.

In ICN, there is a technology called FLIC (File-Like ICN Collections), i.e., manifests for static objects. FLIC describes a single object that is reconstructed by traversing the manifest in order. In Marc's proposal, a transaction manifest describes a set of names that must be considered together. The transaction manifest names likely point to FLIC root manifests.
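The FLIC idea of in-order manifest traversal can be sketched in a few lines. The names and the in-memory "store" below are illustrative assumptions, not the FLIC wire format.

```python
# A manifest is a list of child names (data chunks or nested manifests);
# the object is rebuilt by traversing the manifest in order.
store = {
    "/video/seg1": b"AB",
    "/video/seg2": b"CD",
    "/video/inner": ["/video/seg2"],                  # nested manifest
    "/video/root": ["/video/seg1", "/video/inner"],   # root manifest
}

def reconstruct(name: str) -> bytes:
    node = store[name]
    if isinstance(node, bytes):   # leaf: an actual data chunk
        return node
    # manifest: concatenate children in order (depth-first traversal)
    return b"".join(reconstruct(child) for child in node)

assert reconstruct("/video/root") == b"ABCD"
```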


In Marc's example, transaction manifest entries point directly to objects. For a complete system, you would also need a set of bookkeepers, e.g., systems like Hyperledger that offer global ordering via bespoke orderer nodes. Such bookkeepers would have to ensure that a transaction has current pre-conditions, current post-conditions, and no conflicts in post-conditions. Transaction manifests are a form of write-ahead log (WAL), as used in databases such as PostgreSQL.
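The bookkeeper role described above can be sketched as a pre-/post-condition check over a set of names. All class and field names here are illustrative assumptions about the proposal, not a specified protocol.

```python
from dataclasses import dataclass

@dataclass
class TransactionManifest:
    names: set   # names (e.g. FLIC root manifests) covered by this transaction
    pre: dict    # name -> expected current version/hash (pre-conditions)
    post: dict   # name -> new version/hash (post-conditions)

class Bookkeeper:
    def __init__(self):
        self.state = {}   # name -> current version/hash (the "ledger")

    def commit(self, tx: TransactionManifest) -> bool:
        # pre-conditions must match current state, like a write-ahead-log check
        if any(self.state.get(n) != v for n, v in tx.pre.items()):
            return False
        self.state.update(tx.post)   # apply post-conditions atomically
        return True

bk = Bookkeeper()
t1 = TransactionManifest({"/doc"}, pre={"/doc": None}, post={"/doc": "v1"})
t2 = TransactionManifest({"/doc"}, pre={"/doc": None}, post={"/doc": "v2"})
assert bk.commit(t1) is True
assert bk.commit(t2) is False   # stale pre-condition: conflicting tx rejected
assert bk.state == {"/doc": "v1"}
```

A multi-bookkeeper system would additionally need an ordering protocol among bookkeepers, which is exactly where the livelock/deadlock questions discussed at the meeting arise.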

Marc went on discussing a few challenges, such as interactions with repositories and caches, as well as distributed transaction manifests.

There was some discussion on the required ordering properties for this approach, i.e., whether, in a multi-bookkeeper system, livelocks and deadlocks could occur – and whether these could be resolved without requiring a total order.

Marc is continuing to work on this. One of the next steps would be to design client-to-bookkeeper and bookkeeper-to-bookkeeper protocols.

Vanadium: Secure, Distributed Applications

Marc Mosko introduced the Vanadium system, a secure, distributed RPC system based on distributed naming and discovery. Vanadium uses symmetrical authentication and encryption and may use private name discovery with Identity-Based-Encryption (IBE).

Vanadium has two parts:

  1. Principals and Blessings and Caveats (Security)
    • Use a hierarchical name, e.g. alice:home:tv.
    • Certificate based
    • Blessings are scoped delegations from one principal to another for a namespace (e.g. alice grants Bob “watch” permissions to the TV)
    • Caveats are restrictions on delegations (e.g. Bob can only watch 6pm – 9pm).
    • 3rd party caveats must be discharged before authorization
    • E.g. revocations or auditing
  2. The RPC mount tables (Object Naming)
    • These describe how to locate RPC namespaces
    • They provide relative naming

Vanadium is interesting because parts of its design resemble some ICN concepts, especially the security part:

  • It uses prefix matching and encryption
  • Namespaces work like groups
  • The colon : separates the blesser from the blessed
  • Authorizations match extensions.
    • If Alice authorized “read” to alice:hometv to alice:houseguests, and if Bob has a blessing for alice:houseguests:bob, then Bob has “read” to alice:hometv.
  • A special terminator :$ only matches the exact prefix.
    • A blessing to alice:houseguest:$ only matches that exact prefix.
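The prefix-matching rules above can be captured in a small function. This is a sketch of the matching logic as described, not Vanadium's actual API; the function name is an assumption.

```python
def blessing_matches(pattern: str, blessing: str) -> bool:
    """Check whether a blessing matches an ACL pattern.

    A plain pattern matches itself and all extensions (alice:houseguests
    matches alice:houseguests:bob); a pattern ending in the special
    terminator ':$' matches only the exact prefix.
    """
    if pattern.endswith(":$"):
        return blessing == pattern[:-2]
    return blessing == pattern or blessing.startswith(pattern + ":")

# Alice authorizes "read" on alice:hometv to alice:houseguests;
# Bob holds a blessing alice:houseguests:bob, so he gets "read":
assert blessing_matches("alice:houseguests", "alice:houseguests:bob")
# With the exact terminator, only the exact name matches:
assert blessing_matches("alice:houseguest:$", "alice:houseguest")
assert not blessing_matches("alice:houseguest:$", "alice:houseguest:bob")
```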

Marc then explained the object naming structure and the entity resolution in Vanadium.

More details can be found in Marc's presentation and on Vanadium's web page.

In summary, Vanadium is a permissioned RPC service. A Vanadium name encodes the endpoint plus a name suffix. The endpoint does not need to resolve to a single mount table server; it could be any server that possesses an appropriate blessing. Authentication is done via pair-wise key exchange and blessing validation. It can be private if using IBE; otherwise the server name leaks. Authorizations, Blessings, and Caveats use hierarchical, prefix-matching names.

From an ICN perspective, the security approach seems interesting: Blessings, Caveats, discharges, and namespaces as groups. One question is how this differs from SDSI co-signing. The Vanadium identity service provides an interesting mapping of OAuth2 app:email tokens to PKI and blessings. The RPC approach exhibits some differences from ICN, e.g., embedding the endpoint identifier in the name. Related ICN technologies are public-key-scoped names in CCNx and schematized trust anchors in NDN.

In the discussion, it was noted that it would be interesting to do an apples-to-apples comparison with the NDN trust schema approach; Vanadium's ability to create blessings and caveats on demand seems much more granular and dynamic.

Global vs. Scoped Namespaces

Marc Mosko discussed global vs. scoped namespaces: for example, how do you know that the key you are looking at is the key you should be looking at? IPFS punts that to out-of-band mechanisms. CCNx, on the other hand, uses public-key-scoped names; you can put a public key or publisher ID in an Interest and say you only want this name if it is signed with the associated key.

It was suggested to revisit some of the concepts in the RPC system of OSF distributed computing, where all namespaces were scoped and name discovery started out local. You could then "attach" a local namespace to a more global namespace via an explicit "graft" operation. The key here was that the authoritative pointers representing the namespace graph went from child to parent, as opposed to parent to child as in systems like DNS. Your local trust root identifier could become a name in a higher-layer space, yielding a trust root higher in the hierarchy that could be used instead of or in addition to your local trust root. Doing this can create progressively more global namespaces out of local ones.
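The grafting idea can be illustrated with a tiny model in which the authoritative pointer runs from child to parent, the reverse of DNS delegation. Class and method names are illustrative assumptions.

```python
class Namespace:
    def __init__(self, label: str):
        self.label = label
        self.parent = None    # authoritative pointer goes child -> parent

    def graft(self, parent: "Namespace") -> None:
        # the local root becomes a name in the (more global) parent space
        self.parent = parent

    def global_name(self) -> str:
        # walk child-to-parent pointers to derive a progressively more
        # global name for a locally-rooted namespace
        parts = []
        ns = self
        while ns is not None:
            parts.append(ns.label)
            ns = ns.parent
        return "/" + "/".join(reversed(parts))

lab = Namespace("lab")
org = Namespace("example-org")
assert lab.global_name() == "/lab"   # purely local before grafting
lab.graft(org)
assert lab.global_name() == "/example-org/lab"
```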

Please check out the meeting video for the complete discussion.

Written by dkutscher

April 7th, 2024 at 3:41 pm

Posted in Events,ICN,IETF,IRTF


HKUST Internet Research Workshop 2024

without comments

On March 15, 2024, in the week before the IETF-119 meeting in Brisbane, Zili Meng and I organized the 1st HKUST Internet Research Workshop, which brought together researchers in computer networking and systems from around the globe for a live forum discussing innovative ideas at their early stages. The workshop took place at HKUST's Clear Water Bay campus in Hong Kong.

We ran the workshop like a “one day Dagstuhl seminar” and focused on discussion and ideas exchange and less on conference-style presentations. The objective was to identify topics and connect like-minded people for potential future collaboration, which worked out really well.

The agenda was:

  1. Dirk Kutscher: Networking for Distributed ML
  2. Zili Meng: Overview of the Low-Latency Video Delivery Pipeline
  3. Jianfei He: The philosophy behind computer networking
  4. Carsten Bormann: Towards a device-infrastructure continuum in IoT and OT networks
  5. Zili Meng: Network Research – Academia, Industry, or Both?

Dirk Kutscher: Networking for Distributed ML

With the ever-increasing demand for compute power from large-scale machine learning training, we have started to realize that not only does Moore's Law no longer meet growing performance demands automatically, but also that the growth rate in training FLOPs for transformers and other large-scale machine learning models exhibits far larger exponential factors.

This has been well illustrated by presentations in an AI data center side meeting at IETF-118, for example by Omer Shabtai who talked about Distributed Training in data centers.

With increasing scale, communication over networks becomes a bottleneck, and the question arises what good system designs, protocols, and in-network support strategies could improve performance.

Current distributed machine learning systems typically use a technology called Collective Communication that was developed as a Message Passing Interface (MPI) abstraction for high-performance computing (HPC). Collective Communication is the combination of standardized aggregation and reduction functions with communication abstractions, e.g., for "broadcasting" or "unicasting" results.

Collective Communication is implemented in a few popular libraries such as OpenMPI and Nvidia's NCCL. When used in IP networks, the communication is usually mapped to iterations of peer-to-peer interactions, e.g., organizing nodes in a ring and sending data for aggregation within such rings. One potential way to achieve better performance would be to perform the aggregation "in the network", as in HPC systems, e.g., using the Scalable Hierarchical Aggregation Protocol (SHArP). Previous work has attempted to do this with P4-based dataplane programming; however, such approaches are typically limited by the mostly stateless operation of the corresponding network elements.
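To make the ring pattern concrete, here is a single-process simulation of ring all-reduce, the scheme that libraries such as OpenMPI and NCCL commonly map onto peer-to-peer transfers: in each step, every node sends one chunk to its ring neighbor; after N-1 reduce steps and N-1 gather steps, every node holds the full element-wise sum. This is an illustrative sketch, not a real MPI/NCCL program:

```python
# Single-process simulation of ring all-reduce (illustrative sketch,
# not a real MPI/NCCL program).

def ring_allreduce(vectors):
    n = len(vectors)
    assert all(len(v) % n == 0 for v in vectors), "length must be divisible by n"
    csize = len(vectors[0]) // n
    # chunks[i][c] is node i's current copy of chunk c
    chunks = [[list(v[c * csize:(c + 1) * csize]) for c in range(n)]
              for v in vectors]

    # Reduce-scatter: in step s, node i sends chunk (i - s) % n to its
    # right neighbor, which adds it into its own copy. After n-1 steps,
    # node i holds the fully reduced chunk (i + 1) % n.
    for s in range(n - 1):
        for i in range(n):
            c = (i - s) % n
            dst = (i + 1) % n
            chunks[dst][c] = [a + b for a, b in zip(chunks[dst][c], chunks[i][c])]

    # All-gather: each node forwards its completed chunk around the ring
    # until every node has all fully reduced chunks.
    for s in range(n - 1):
        for i in range(n):
            c = (i + 1 - s) % n
            chunks[(i + 1) % n][c] = list(chunks[i][c])

    return [[x for c in range(n) for x in chunks[i][c]] for i in range(n)]

print(ring_allreduce([[1, 2, 3], [4, 5, 6], [7, 8, 9]]))
# every node ends up with [12, 15, 18]
```

Each node sends and receives only 2(N-1)/N of the data volume, which is why the ring schedule is bandwidth-optimal; in-network aggregation aims to cut this further by summing inside the fabric.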

In large-scale training sessions running over shared infrastructure in multi-tenant data centers, communication needs to respond to congestion, packet loss, server overload, etc.; i.e., the features of typical transport protocols are needed.

I had previously discussed corresponding challenges and requirements in these Internet Drafts:

In my talk at HKIRW, I discussed ideas for corresponding transport protocols. There are interesting challenges in bringing together reliable communication, congestion control, flow control, single-destination as well as multi-destination communication, and in-network processing.

Zili Meng: Overview of the Low-Latency Video Delivery Pipeline

Zili talked about requirements for ultra-low latency in interactive streaming for the next generation of immersive applications. Some applications impose really stringent low-latency requirements, combined with a consistent service quality over many hours, and the talk suggested better coordination between all elements of the streaming and rendering pipeline.

There was a discussion as to how achievable these requirements are in the Internet and whether applications might be re-designed to provide an acceptable user experience even without a guaranteed high-bandwidth, low-latency service, for example by employing technologies such as semantic communication, prediction, and local control loops.

Jianfei He: The philosophy behind computer networking

In his talk, Jianfei He asked how the field of computer networking could be defined more precisely and how a more systematic approach could help with the understanding and design of future networked systems.

Specifically, he suggested basing design on a solid understanding of the potentials and absolute constraints in a given field, such as Shannon's theory and limits, and on the notion of tradeoffs, i.e., the consequences of certain design decisions, as represented by the CAP theorem in distributed systems. He mentioned two examples: 1) routing protocols and 2) transport protocols.

For routing protocols, there are well-known tradeoffs between convergence time, scaling limits, and required bandwidth. With changed network properties such as bandwidth, can we reason about options for shifting these tradeoffs?

For transport protocols, there are goals such as reliability and congestion control, and tradeoff relationships between packet loss, line utilization, delay, and buffer size. How would designs change if we changed the objective, e.g., to shortest flow completion time or shortest message completion time (or if we looked at collections of flows)? What if we added fairness to these objectives?

Jianfei asked whether it was possible to develop these tradeoffs and constraints into a more consistent theory.

Carsten Bormann: Towards a device-infrastructure continuum in IoT and OT networks

Carsten talked about requirements and available technologies for providing a secure management of IoT devices in a device-infrastructure continuum in IoT and OT networks, where scale demands high degrees of automation at run-time and only limited individual device configuration (at installation only). It is no longer possible to manually track each new "Thing" species.

Carsten mentioned technologies such as

  • RFC 8520: Manufacturer Usage Description (MUD);
  • W3C Web of Things description model; and
  • IETF Semantic Definition Format (SDF).
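A hypothetical sketch of what such "well-informed networking" could look like at enforcement time: the network derives a per-device-type allowlist from a manufacturer-provided, MUD-style description and default-denies everything else. Device types, hostnames, and rules below are invented examples, not real MUD file contents:

```python
# Hypothetical MUD-style enforcement sketch: the profile contents and
# device names are invented examples, not real MUD data.

MUD_PROFILES = {
    "example-thermostat": {
        "allowed_destinations": {"update.example.com", "cloud.example.com"},
        "allowed_ports": {443},
    },
}

def permitted(device_type: str, dst_host: str, dst_port: int) -> bool:
    """Allow a flow only if the device's profile explicitly permits it;
    unknown device species are denied by default."""
    profile = MUD_PROFILES.get(device_type)
    if profile is None:
        return False  # unknown "Thing" species: default-deny
    return (dst_host in profile["allowed_destinations"]
            and dst_port in profile["allowed_ports"])

print(permitted("example-thermostat", "update.example.com", 443))  # True
print(permitted("example-thermostat", "evil.example.net", 443))    # False
```

The point is that the profile is supplied by the manufacturer and resolved automatically at onboarding, so no per-device manual configuration is needed at run-time.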

In his talk, Carsten formulated the goal of "Well-Informed Networking", i.e., an approach where networks can obtain sufficient information about the existing devices, their legitimate communication requirements, and their current status (device health).

Zili Meng: Network Research – Academia, Industry, or Both?

Zili discussed the significance of the consistently high number of industry and industry-only papers at major networking conferences. Often such papers are based on operational experience that can only be obtained by companies actually operating the corresponding systems.

Sometimes papers seem to get accepted not necessarily on the basis of their technical merits but because they report on "large-scale deployments".

When academics get involved in such work, it is often not in a driving position, but rather through students who work as interns at the corresponding companies. Naturally, such papers do not question the status quo and are generally not critical of the systems they discuss.

At the workshop, we discussed the changes in the networking research field over the past years, as well as the challenges of successful collaborations between academia and industry.

Written by dkutscher

April 6th, 2024 at 10:55 am