Archive for the ‘internet’ tag
HKUST Internet Research Workshop (HKIRW) 2025
We are organizing the 2025 HKUST Internet Research Workshop (HKIRW) in the week before the IETF-122 meeting in Bangkok. This workshop aims to bring together researchers in computer networking and systems from around the globe for a live forum to discuss innovative ideas at their early stages. The mission of the workshop is to give promising but not-yet-mature ideas timely feedback from the community and experienced researchers, leading them into future IRTF work, Internet Drafts, or IETF working groups.
The workshop will operate like a “one-day Dagstuhl seminar” and will focus on discussion and the exchange of ideas rather than on conference-style presentations. The objective is to identify topics and connect like-minded people for potential future collaboration.
Please see https://hkirw.github.io/2025/ for details.
Appointed as IRTF Chair
I am delighted that I have been appointed as the next Chair of the Internet Research Task Force (IRTF) by the Internet Architecture Board (IAB).
I have been involved in the IRTF for many years. It is a unique organization that conducts research of importance to the evolution of the Internet protocols, applications, architecture and technology. It has initiated and supported many important technology developments for the Internet in the past, in fields such as network architecture, security and privacy, congestion control, and many more.
The IRTF focuses on longer term research issues, and its various research groups are enabling international collaboration for continuous research on critical topics for the Internet by working with academic and industry research communities.
My term starts in March 2025. I am sincerely grateful for all the support I have received, and I am looking forward to working with this community to help make the Internet work better through good research work.
Nordwest-IX Internet Exchange Point
DE-CIX and EWE TEL opened the new Nordwest-IX Internet exchange point in Oldenburg, Germany on 2024-08-15.
DE-CIX, the largest Internet Exchange in Europe and the second-largest in the world, now has eight locations in Germany: Oldenburg, Berlin, Düsseldorf, Frankfurt, Hamburg, Leipzig, Munich, and the Ruhr region. They have recently begun to decentralize their IXPs in Germany by opening new IXPs in addition to their main location in Frankfurt.
Can IXPs help with Internet Decentralization?
In the IRTF Research Group on the Decentralization of the Internet (DINRG), we are investigating root causes of, and potential counter-measures against, Internet centralization. There are two aspects to the relationship between centralization/decentralization and IXPs:
- Internet peering happens mostly at public IXPs: locally centralized exchange points in an otherwise logically decentralized network of Autonomous Systems. Big application service providers ("hyperscalers") also engage in so-called "Direct Peering" (or "Private Peering"), where they connect their network directly to, typically, Internet Service Providers that provide Internet access and can benefit from a direct connection to dominant content/service providers. Often, it is the hyperscaler who benefits most in terms of cost savings. Decentralizing IXPs can provide incentives for such networks to connect at IXPs instead of doing direct peering, which is often seen as beneficial because it increases connectivity options and reduces cost and latency.
- IP connectivity alone is not a sufficient condition for low latency and decentralization, though, as most hyperscaler applications rely on some form of CDN overlay network. Even with potential local IP forwarding, CDN proxies may be hosted at central locations. To counter that, it is important to create co-location and local edge service hosting opportunities at or close to IXPs, which can be a business opportunity for the connected ISPs, such as EWE TEL for Nordwest-IX.
The Internet is evolving, and new technologies might change the role of overlays in the future. For example, technologies such as Media-over-QUIC (MoQ) might lead to massive caching and replication overlay structures that may or may not be shared across applications and hyperscalers. IXPs and co-location data centers can be natural places for operating MoQ relays.
IRTF DINRG at IETF-120
We have an exciting agenda for our upcoming IRTF DINRG meeting (Wednesday, July 24th, 2024 at 09:30 in Vancouver) at IETF-120. If you do not attend the IETF-120 meeting locally, please consider attending online.
| # | Topic | Presenter | Time |
|---|---|---|---|
| 1 | DINRG Chairs’ Presentation: Status, Updates | Chairs | 05 min |
| 2 | Exploring Decentralized Digital Identity Protocols | Kaliya Young | 20 min |
| 3 | DNS-Bound Client and Sender Identities | Michael Richardson | 20 min |
| 4 | Internet Fragmentation | Sheetal Kumar | 20 min |
| 5 | SOLID: Your Data, Your Choice | Hadrian Zbarcea | 20 min |
| 6 | Panel discussion: Internet Decentralization – Next Steps | Chairs & Panelists | 30 min |
| 7 | Wrap-up & Buffer | Chairs | 05 min |
Documents and Links to Resources
- Policy Network on Internet Fragmentation
- https://datatracker.ietf.org/doc/draft-ietf-dance-architecture/06/
- https://datatracker.ietf.org/doc/rfc9518/
- SOLID Project
Panel Description
Internet Decentralization – Next Steps
The previous DINRG meetings all had lively open-mic discussions. However, we noticed that those spontaneous conversations, while interesting and insightful, tend to head off in diverse directions toward different issues. At this meeting, we will continue and extend the previous discussions by gathering a small group of panelists and starting the discussion with a list of questions collected from the previous meetings. We will have an open mic for the whole audience and will share the list of discussion questions on the DINRG list before the meeting. By gathering a panel and preparing a list of questions, we hope to make the discussions more effective and fruitful, moving towards our overarching goal of identifying an ordered list of issues that DINRG aims to address in the coming years.
ACM Conext-2024 Workshop on the Decentralization of the Internet
Recent years have witnessed the consolidation and centralization of Internet applications, services, and infrastructure. This centralization has economic as well as technical aspects and factors. The effects are often characterized as detrimental to the original goals of the Internet, such as permissionless innovation, as well as to society at large, due to the amount of (personal) data that is obtained and capitalized on by large platforms.
We are organizing a workshop at ACM CoNEXT-2024 to provide a forum for academic researchers to present and discuss ongoing work on this topic and to create greater awareness of it in the larger community. The workshop solicits work on specific topics including, but not limited to:
- investigation of the root causes of Internet centralization, and articulation of the impacts of the market economy, architecture and protocol designs, as well as government regulations;
- measurement of Internet centralization and the consequential societal impacts;
- characterization and assessment of observed Internet centralization;
- new research topics and technical solutions for decentralized system and application development;
- decentralized (cloud-independent) distributed system design;
- protocols and algorithms for decentralized distributed systems; and
- decentralized security and trust architectures and protocols for real-world Internet systems.
Submission Instructions
Please see the workshop homepage for details.
HKUST Internet Research Workshop 2024
On March 15, 2024, in the week before the IETF-119 meeting in Brisbane, Zili Meng and I organized the 1st HKUST Internet Research Workshop, which brought together researchers in computer networking and systems from around the globe for a live forum to discuss innovative ideas at their early stages. The workshop took place at HKUST's Clear Water Bay campus in Hong Kong.
We ran the workshop like a “one-day Dagstuhl seminar” and focused on discussion and the exchange of ideas rather than on conference-style presentations. The objective was to identify topics and connect like-minded people for potential future collaboration, which worked out really well.
The agenda was:
- Dirk Kutscher: Networking for Distributed ML
- Zili Meng: Overview of the Low-Latency Video Delivery Pipeline
- Jianfei He: The philosophy behind computer networking
- Carsten Bormann: Towards a device-infrastructure continuum in IoT and OT networks
- Zili Meng: Network Research – Academia, Industry, or Both?
Dirk Kutscher: Networking for Distributed ML
With the ever-increasing demand for compute power from large-scale machine learning training, we have started to realize not only that Moore's Law no longer automatically addresses increasing performance demands, but also that the growth in training FLOPs for transformers and other large-scale machine learning models exhibits far larger exponential factors.
This has been well illustrated by presentations in an AI data center side meeting at IETF-118, for example by Omer Shabtai who talked about Distributed Training in data centers.
With increasing scale, communication over networks becomes a bottleneck, and the question arises as to what would be good system designs, protocols, and in-network support strategies to improve performance.
Current distributed machine learning systems typically use a technology called Collective Communication that was developed as a Message Passing Interface (MPI) abstraction for high-performance computing (HPC). Collective Communication is the combination of standardized aggregation and reduction functions with communication abstractions, e.g., for "broadcasting" or "unicasting" results.
Collective Communication is implemented by a few popular libraries such as OpenMPI and Nvidia's NCCL. When used in IP networks, the communication is usually mapped to iterations of peer-to-peer interactions, e.g., organizing nodes in a ring and sending data for aggregation within such rings. One potential way to achieve better performance would be to perform the aggregation "in the network", as in HPC systems, e.g., using the Scalable Hierarchical Aggregation and Reduction Protocol (SHARP). Previous work has attempted to do this with P4-based dataplane programming; however, such approaches are typically limited due to the mostly stateless operation of the corresponding network elements.
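To illustrate the ring pattern mentioned above, here is a minimal single-process simulation of ring all-reduce — a sketch of the general technique, not how NCCL or OpenMPI actually implement it (the function name and the one-element-per-chunk layout are my own simplifications). Each of the n nodes contributes a vector of n elements; a reduce-scatter phase leaves each node with one fully summed chunk, and an all-gather phase circulates the reduced chunks so every node ends up with the complete sum:

```python
def ring_allreduce(node_data):
    """Sum-reduce equal-length vectors across nodes arranged in a ring.

    Simplification: each vector has exactly len(node_data) elements, so
    chunk i is just element i. Sends within one step are buffered first
    to mimic the simultaneous exchanges of a real ring.
    """
    n = len(node_data)
    chunks = [list(v) for v in node_data]  # working copy per node

    # Phase 1: reduce-scatter. In step s, node i sends chunk (i - s) mod n
    # to its right neighbor, which accumulates it. After n-1 steps, node i
    # holds the fully reduced chunk (i + 1) mod n.
    for s in range(n - 1):
        sends = [((i + 1) % n, (i - s) % n, chunks[i][(i - s) % n])
                 for i in range(n)]
        for dst, c, val in sends:
            chunks[dst][c] += val

    # Phase 2: all-gather. Each node forwards its already-reduced chunk
    # around the ring; receivers overwrite instead of accumulating.
    for s in range(n - 1):
        sends = [((i + 1) % n, (i + 1 - s) % n, chunks[i][(i + 1 - s) % n])
                 for i in range(n)]
        for dst, c, val in sends:
            chunks[dst][c] = val

    return chunks

# Three nodes, each contributing a 3-element gradient vector.
print(ring_allreduce([[1, 2, 3], [4, 5, 6], [7, 8, 9]]))
# → [[12, 15, 18], [12, 15, 18], [12, 15, 18]]
```

Each phase takes n−1 steps in which every node sends only 1/n of the data per step, which is why the ring layout keeps per-link traffic low but incurs latency that grows with the number of nodes — one of the costs that in-network aggregation aims to avoid.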
In large-scale training sessions running over shared infrastructure in multi-tenant data centers, communication needs to respond to congestion, packet loss, server overload, etc., i.e., the features of typical transport protocols are needed.
I had previously discussed corresponding challenges and requirements in these Internet Drafts:
- Collective Communication Optimization
- Towards a Unified Transport Protocol for In-Network Computing in Support of RPC-based Applications
In my talk at HKIRW, I discussed ideas for corresponding transport protocols. There are interesting challenges in bringing together reliable communication, congestion control, flow control, single-destination as well as multi-destination communication, and in-network processing.
Zili Meng: Overview of the Low-Latency Video Delivery Pipeline
Zili talked about requirements for ultra-low latency in interactive streaming for the next generation of immersive applications. Some applications have really stringent low-latency requirements, combined with a consistent service quality over many hours, and the talk suggested better coordination between all elements of the streaming and rendering pipeline.
There was a discussion as to how achievable these requirements are in the Internet and whether applications might be re-designed in terms of providing acceptable user experience even without guaranteed high-bandwidth low-latency service, for example by employing technologies such as semantic communication, prediction, local control loops etc.
Jianfei He: The philosophy behind computer networking
In his talk, Jianfei He asked how the field of computer networking can be more precisely defined and how a more systematic approach could help with the understanding and design of future networked systems.
Specifically, he suggested considering basing design on a solid understanding of potentials and absolute constraints in a certain field, such as Shannon's theory/limit and on the notion of tradeoffs, i.e., consequences of certain design decisions, as represented by the CAP theorem in distributed systems. He mentioned two examples: 1) routing protocols and 2) transport protocols.
For routing protocols, there are well-known tradeoffs between convergence time, scaling limits, and required bandwidth. With changed network properties (bandwidth), can we reason about options for shifting the tradeoffs?
For transport protocols, there are goals such as reliability, congestion control, etc., and tradeoff relationships between packet loss, link utilization, delay, and buffer size. How would designs change if we changed the objective, e.g., to shortest flow completion times or shortest message completion times (or if we looked at collections of flows)? What if we added fairness to these objectives?
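As a toy illustration of how the choice of objective reshapes the design space (my own example, not from the talk): on a single unit-rate link with all flows arriving at once, simply serving the shortest flow first already minimizes average flow completion time compared to FIFO order. The `avg_fct` helper below is a hypothetical sketch under these simplifying assumptions:

```python
# Toy model: flows of the given sizes all arrive at t=0 on one
# unit-capacity link and are served one after another (no preemption
# is needed because all flows are present from the start).
def avg_fct(sizes, policy="fifo"):
    """Average flow completion time under FIFO or shortest-job-first order."""
    order = sorted(sizes) if policy == "sjf" else list(sizes)
    t = 0.0      # current time on the link
    total = 0.0  # sum of completion times
    for size in order:
        t += size    # the flow finishes once all its bytes are sent
        total += t
    return total / len(sizes)

# One long flow and two short ones: finishing the short flows first
# cuts the average completion time from 11 to 5.
print(avg_fct([10, 1, 1], "fifo"))  # 11.0
print(avg_fct([10, 1, 1], "sjf"))   # 5.0
```

The long flow finishes at the same time either way; what changes is how long the short flows wait behind it — exactly the kind of objective-driven tradeoff the talk asked us to reason about systematically.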
Jianfei asked whether it was possible to develop these tradeoffs/constraints into a more consistent theory.
Carsten Bormann: Towards a device-infrastructure continuum in IoT and OT networks
Carsten talked about requirements and available technologies for providing a secure management of IoT devices in a device-infrastructure continuum in IoT and OT networks, where scale demands high degrees of automation at run-time and only limited individual device configuration (at installation only). It is no longer possible to manually track each new "Thing" species.
Carsten mentioned technologies such as
- RFC 8520: Manufacturer Usage Description (MUD);
- W3C Web of Things description model; and
- IETF Semantic Definition Format (SDF).
In his talk, Carsten formulated the goal of "Well-Informed Networking", i.e., an approach where networks can obtain sufficient information about the existing devices, their legitimate communication requirements, and their current status (device health).
Zili Meng: Network Research – Academia, Industry, or Both?
Zili discussed the significance of the consistently high numbers of industry and industry-only papers at major networking conferences. Often such papers are based on operational experience that can only be obtained by companies actually operating the corresponding systems.
Sometimes papers seem to get accepted not necessarily on the basis of their technical merits but because they report on "large-scale deployments".
When academics get involved in such work, it is often not in a driving position, but rather through students who work as interns at the corresponding companies. Naturally, such papers do not question the status quo and are generally not critical of the systems they discuss.
At the workshop, we discussed the changes in the networking research field over the past years, as well as the challenges of successful collaborations between academia and industry.
IRTF Decentralization of the Internet Research Group at IETF-117
Recent years have witnessed the consolidation of Internet applications, services, and infrastructure. The Decentralization of the Internet Research Group (DINRG) aims to provide for the research and engineering community both an open forum to discuss the Internet centralization phenomena and associated potential threats, and a platform to facilitate the coordination of efforts in identifying the causes of observed consolidation and the mitigation solutions.
Our upcoming DINRG meeting at IETF-117 will feature three talks – by Cory Doctorow, Volker Stocker & William Lehr, and Christian Tschudin.
| # | Topic | Presenter | Time |
|---|---|---|---|
| 1 | DINRG Chairs’ Presentation: Status, Updates | Chairs | 05 min |
| 2 | Let The Platforms Burn: Bringing Back the Good Fire of the Old Internet | Cory Doctorow | 30 min |
| 3 | Ecosystem Evolution and Digital Infrastructure Policy Challenges: Insights & Reflections from an Economics Perspective | Volker Stocker & William Lehr | 20 min |
| 4 | Minimal Global Broadcast (MGB) | Christian Tschudin | 20 min |
| 5 | Wrap-up & Buffer | All | 15 min |
Logistics
DINRG Meeting at IETF-117 – 2023-07-25, 20:00 to 21:30 UTC
Internet Centralization on The Hedge
Lixia Zhang and I discussed Internet centralization together with Russ White, Alvaro Retana, and Tom Ammon on The Hedge podcast.
Recent years have witnessed the consolidation of Internet applications, services, and infrastructure. The Decentralization of the Internet Research Group (DINRG) aims to provide for the IRTF/IETF community both an open forum to discuss the Internet centralization phenomena and associated potential threats, and a platform to facilitate the coordination of efforts in identifying the causes of observed consolidation and the mitigation solutions.
DINRG's main objectives include the following:
- Measurement of Internet centralization and the consequential societal impacts;
- Characterization and assessment of observed Internet centralization;
- Investigation of the root causes of Internet centralization, and articulation of the impacts from market economy, architecture and protocol designs, as well as government regulations;
- Exploration of new research topics and technical solutions for decentralized system and application development;
- Documentation of the outcome from the above efforts; and
- Recommendations that may help steer the Internet away from further consolidation.
Addressing in the Internet
There was a side meeting on Internet Addressing at IETF-112 this week, discussing potential gaps in Internet Addressing and potential use cases that would suggest new addressing structures.
Looking at the realities of the Internet today, I do not think that the actual relevant use cases and current issues in the Internet are served well by merely a new addressing approach for the Internet Protocol. Instead, I believe that there needs to be an architectural discussion first – and addressing might eventually fall out as a result.
Internet Censorship
In the new episode of our podcast Neulich im Netz, we turn to a somewhat delicate topic: censorship on the Internet.
In particular, we focus on the "Great Firewall of China" (GFW), which we analyze with respect to its technical implementation and its problems.
Based on publications and our own experiences, we analyze how the GFW roughly works, how it is continuously being developed further, and how effective different tools such as VPNs, shadowsocks, etc. are.
These and other aspects of Internet censorship are covered in the third episode of Neulich im Netz.