Dirk Kutscher

Personal web page

Archive for the ‘Events’ Category

DINRG @ IETF-118


We have posted the agenda for our DINRG meeting at IETF-118:

Documents

Logistics

DINRG Meeting at IETF-118 – 2023-11-06, 08:30 to 10:30 UTC

Written by dkutscher

November 1st, 2023 at 9:21 am

Posted in Events, IETF, IRTF


ICNRG @ IETF-118


Written by dkutscher

October 30th, 2023 at 1:10 pm

Posted in Events, ICN, IRTF


Seminar Talk: Accelerating Distributed Systems with In-Network Computation


In our invited talks series at HKUST(GZ), I am happy to be hosting Wenfei WU from Peking University on 2023-11-02, 14:00 CST, for his talk on Accelerating Distributed Systems with In-Network Computation.

Accelerating Distributed Systems with In-Network Computation

With Moore's Law slowing down, building distributed and heterogeneous systems has become a new trend to support large-scale applications, such as large model training and big data analytics. In-Network Computing (INC) is an effective approach to building such distributed systems. INC leverages programmable network devices to process traversing data packets and provides line-rate, low-latency data processing capabilities, which can compress traffic volume and improve overall transmission and job efficiency. In this talk, we will share the progress and development of INC technologies, including INC protocol design for machine learning and data analytics, and RDMA-compatible INC solutions. These works were published at NSDI '21 and ASPLOS '23.
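The traffic-compression effect of INC-based training aggregation can be illustrated with a toy model (plain Python, purely hypothetical; real systems such as ATP implement this on programmable switch ASICs): the switch sums gradient fragments from all workers and forwards a single aggregated packet instead of one packet per worker.

```python
# Hypothetical sketch of in-network gradient aggregation, modeled in
# plain Python. Real INC systems run this logic on programmable switches.

def switch_aggregate(packets_per_worker):
    """Sum element-wise gradient fragments from all workers; the switch
    forwards one aggregated packet instead of N worker packets."""
    aggregated = None
    for fragment in packets_per_worker:
        if aggregated is None:
            aggregated = list(fragment)
        else:
            aggregated = [a + b for a, b in zip(aggregated, fragment)]
    return aggregated

# Gradient fragments from 3 workers; one packet leaves the switch.
workers = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
print(switch_aggregate(workers))  # [9.0, 12.0]
```

Upstream traffic toward the parameter server drops from N packets to 1, which is the "compression" the abstract refers to.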

Wenfei WU

Wenfei Wu is an assistant professor in the School of Computer Science at Peking University. He obtained his Ph.D. from the University of Wisconsin-Madison in 2015. Dr. Wu's research covers computer networks and distributed systems, and he has published more than 50 papers in these areas. His recent research focus is building in-network computation (INC) methods for distributed systems: his work on the INC-empowered distributed machine learning system ATP won the Best Paper Award at NSDI 2021, and his work on the INC-empowered distributed data analytics system ASK won a Distinguished Paper Award at ASPLOS 2023. Dr. Wu has received further awards, including the IPCCC Best Paper Runner-up in 2019 and the SoCC Best Student Paper in 2013.

Online Participation

https://calendar.hkust.edu.hk/events/iot-thrust-seminar-accelerating-distributed-systems-network-computation

Written by dkutscher

October 23rd, 2023 at 8:14 am

Posted in Events


Network Abstractions for Continuous Innovation


In a joint panel at ACM ICN-2023 and IEEE ICNP-2023 in Reykjavik, Ken Calvert, Jim Kurose, Lixia Zhang, and I discussed future network abstractions. The panel was moderated by Dave Oran. This was one of the more interesting and interactive panel sessions I have participated in, so I am providing a summary here.

Since the Internet's initial rollout roughly 40 years ago, not only has its global connectivity brought fundamental changes to society and daily life, but its protocol suite and implementations have also gone through many iterations of change, with SDN, NFV, and programmability among the changes of the last decade. This panel looked into the next decade of network research by asking a set of questions about where the future directions lie to enable continued innovation.

Opportunities and Challenges for Future Network Innovations

Lixia Zhang: Rethinking Internet Architecture Fundamentals

Lixia Zhang (UCLA), quoting Einstein, said that the formulation of a problem is often more essential than its solution, and pointed at the complexity of today's protocol stacks that is apparently needed to achieve desired functionality. For example, Lixia mentioned RFC 9298 on proxying UDP in HTTP, specifically on tunneling UDP to a server acting as a UDP-specific proxy over HTTP. UDP over IP was once conceived as a minimal message-oriented communication service, intended for DNS and interactive real-time communication. Due to its push-based communication model, it can be used with minimal effort for useful but also harmful applications, including large-scale DDoS attacks. Proxying UDP over HTTP addresses this and other concerns by providing a secure channel to a server in a web context, so that the server can authorize tunnel endpoints and so that the UDP communication is congestion-controlled by the underlying transport protocol (TCP or QUIC). This specification can be seen as a work-around: sending unsolicited (and unauthenticated) messages is a major problem in today's Internet. There is no general approach for authenticating such messages and no concept of trust in peer identities. Instead of analyzing the root cause of such problems, the Internet communities (and the dominant players in that space) prefer to come up with (highly inefficient) workarounds.
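To make the layering point concrete: in RFC 9298, the client identifies the UDP proxying resource by expanding a URI template with the target host and port (the template variables are from the RFC; the proxy hostname below is a made-up example). A minimal sketch of that expansion:

```python
# Sketch of how an RFC 9298 client names a UDP proxying resource.
# {target_host} and {target_port} are the template variables defined in
# the RFC; "proxy.example" is a hypothetical proxy.

def connect_udp_uri(template, target_host, target_port):
    """Expand the proxy's URI template for a given UDP target.
    (Real clients must also percent-encode hosts such as IPv6 literals.)"""
    return (template
            .replace("{target_host}", target_host)
            .replace("{target_port}", str(target_port)))

template = "https://proxy.example/.well-known/masque/udp/{target_host}/{target_port}/"
print(connect_udp_uri(template, "192.0.2.1", 53))
# https://proxy.example/.well-known/masque/udp/192.0.2.1/53/
```

The client then issues an extended CONNECT request (`:protocol = "connect-udp"`) to this resource, and the UDP payloads ride inside the HTTP connection: several layers of machinery to carry what was once a bare datagram.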

This problem was discussed more generally by Oliver Spatscheck of AT&T Labs in his 2013 article titled Layers of Success, where he discussed the (actually deployed) excessive layering in production networks, for example in mobile communication networks, where regular Internet traffic is routinely tunneled over GTP/UDP/IP/MPLS:

The main issue with layering is that layers hide information from each other. We could see this as a benefit, because it reduces the complexities involved in adding more layers, thus reducing the cost of introducing more services. However, hiding information can lead to complex and dynamic layer interactions that hamper the end-to-end system’s reliability and are extremely difficult if not impossible to debug and operate. So, much of the savings achieved when introducing new services is being spent operating them reliably.

According to Lixia, the excessive layering stems from more fundamental problems with today's network architecture, notably the lack of identity and trust in the core Internet protocols and the lack of functionality in the forwarding system – leading to significant problems today, as exemplified by recent DDoS attacks. Quoting Einstein again, she said that we cannot solve problems by using the same kind of thinking we used when we created them, calling for a more fundamental redesign based on information-centric networking principles.

Ken Calvert: Domain-specific Networking

Ken Calvert (University of Kentucky) provided a retrospective of networking research and looked at selected papers published at the first IEEE ICNP conference in 1993. According to Ken, the dominant theme at that time was How to design, build, and analyze protocols, for example as discussed in his 1993 ICNP paper titled Beyond layering: modularity considerations for protocol architectures.

Ken offered a set of challenges and opportunities for future networking research, such as:

  • Domain-specific networking à la Ex uno pluria, a 2018 CCR editorial discussing:
    • infrastructure ossification;
    • lack of service innovation; and
    • a fragmentation into "ManyNets" that could re-create a service-infrastructure innovation cycle.
  • Incentives and "money flow"
    • Can we escape from the advertising-driven Internet app ecosystem? Should we?
  • Wide-area multicast (many-many) service
    • Building block for building distributed applications?
  • Inter-AS trust relationships
    • Ossification of the Inter-AS interface – cannot be solved by a protocol!
  • Impact ⇐ Applications ⇐ Business opportunities ($)
    • What user problem cannot be solved today?
  • "The core challenge of CS ... is a conceptual one, viz., what (abstract) mechanisms we can conceive without getting lost in the complexities of our own making." - Dijkstra

For his vision for networking in 30 years, Ken suggested that:

  • IP addresses will still be in use
    • but visible only at interfaces between different owners' infrastructures
  • Network infrastructure might consist of access ASes + separate core networks operated by the "Big Five".
  • Users might communicate via direct brain interfaces with AI systems.

Dirk Kutscher: Principled Approach to Network Programmability

I offered the perspective of introducing a principled approach to programmability that could provide better programmability (for humans and AI), based on more powerful network abstractions.

Previous work in SDN with protocols such as OpenFlow and dataplane programming languages such as P4 has only scratched the surface of what could be possible. OpenFlow was a great first idea, but it was fundamentally constrained by the IP and Ethernet-based abstractions that were built into it. It can be used for programming some applications in that domain, such as firewalls and virtual networking, but the idea of continuous innovation has not really materialized.

Similarly, P4 was advertised as an enabler for new levels of dataplane programmability, but even simple systems such as NetCache have to go to considerable lengths to achieve minimal functionality for a proof of concept. Another frequently reported P4 problem is hardware heterogeneity, which means that universal programmability is not really possible. In my opinion, this raises some questions with respect to the applicability of current dataplane programming to in-network computing. A good example of a more productive application of P4 is the recent SIGCOMM paper on NetClone, which describes fast, scalable, and dynamic request cloning for microsecond-scale RPCs. Here, P4 is used as an accelerator for relatively simple functionality (protocol parsing, forwarding).
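The kind of functionality NetCache targets can be modeled in a few lines of plain Python (an illustrative sketch only, not the paper's P4 implementation; all names are made up): a "switch" answers reads for a small set of hot keys directly on the forwarding path and forwards misses to the backend server.

```python
# Simplified model of a NetCache-style on-path key-value cache: the
# switch serves hot keys at "line rate" and forwards misses upstream.

class OnPathCache:
    def __init__(self, server_store, hot_keys):
        self.server = server_store
        self.cache = {k: server_store[k] for k in hot_keys}  # hot items on the switch
        self.hits = 0
        self.misses = 0

    def read(self, key):
        if key in self.cache:   # answered on the switch dataplane
            self.hits += 1
            return self.cache[key]
        self.misses += 1        # forwarded to the backend server
        return self.server[key]

store = {"a": 1, "b": 2, "c": 3}
cache = OnPathCache(store, hot_keys=["a"])
print(cache.read("a"), cache.read("b"))  # 1 2
print(cache.hits, cache.misses)          # 1 1
```

The contrast between these few lines and the effort needed to express the same behavior on real switch hardware (memory layout, match-action constraints, cache-coherence with the server) is exactly the programmability gap discussed above.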

This may not be enough for future universal programmability, though. During the panel discussion, I drew an analogy to computer programming languages. We are now seeing the first programming languages and IDEs that are designed from the ground up for better AI support. What would that mean for network programmability? What abstractions and APIs would we need?

In my opinion, we would have to take a step back and think about the intended functionality and the required observability for future (automated) network programmability that is really protocol-independent. This would then entail more work on:

  • the fundamental forwarding service (informed by hardware constraints);
  • the telemetry approach;
  • suitable protocol semantics;
  • APIs for applications and management; and
  • new network emulation & debugging approaches (along the lines of "network digital twin" concepts).

Overall, I am expecting exciting new research in the direction of principled approaches to network programmability.

Jim Kurose: Open Research Infrastructures and Softwarization

Jim reminded us that the key reason Internet research flourished was the availability of open infrastructure with no incumbent providers initially. The infrastructure was owned by researchers, labs, and universities and allowed for a lot of experimentation.

This open infrastructure has recently been challenged by ossification with the rise of production ISP services at scale and the emergence of closed ISPs, cellular carriers, and hyperscalers operating large portions of the network.

As an example of emerging environments that offer interesting opportunities for experiments and new developments, Jim mentioned 4G/5G private networks, i.e., licensed-spectrum closed ecosystems that are nevertheless open to researchers.

Jim also suggested further opportunities in softwarization and programmability, such as (formal) methods for logical correctness and configuration management, as well as programmability to add services beyond the "minimal viable service", such as closed-loop automatic control and management.

Finally, Jim mentioned opportunities in emerging new networks such as LEO constellations, IoT, and home networks.

Written by dkutscher

October 23rd, 2023 at 7:46 am

ACM SIGCOMM CCR: Report of 2021 DINRG Workshop on Centralization in the Internet


ACM SIGCOMM CCR just published the report of our 2021 DINRG workshop on Centralization in the Internet.

Executive Summary

There is a consensus within the networking community that Internet consolidation and centralization have progressed rapidly over recent years, as measured by structural changes to the data delivery infrastructure, the control power over system platforms, application development and deployment, and even standards development efforts. This trend has brought impactful technical, societal, and economic consequences.

When the Internet was first conceived as a decentralized system 40+ years ago, few people, if any, could have foreseen what it looks like today. How has the Internet evolved from there to here? What have been the driving forces behind the observed consolidation? From a retrospective view, was there anything that might have been done differently to influence the course the Internet has taken? And most importantly, what should and can be done now to mitigate the trend of centralization? Although there is significant interest in these topics, there has not been much structured discussion on how to answer these important questions.

The IRTF Research Group on Decentralizing the Internet (DINRG) organized a workshop on “Centralization in the Internet” on June 3, 2021, with the objective of starting an organized open discussion on the above questions. Although there seems to be an urgent need for effective countermeasures to the centralization problem, this workshop took a step back: before jumping into solution development to steer the Internet away from centralization, we wanted to discuss how the Internet has evolved and changed, and what the driving forces and enablers for those changes have been. The organizers and part of the community believe that a sound, evidence-based understanding is the key to devising effective remedies and action plans. In particular, we would like to deepen our understanding of the relationship between architectural properties and economic developments.

This workshop consisted of two panels; each panel started with an opening presentation, followed by panel discussion and then open-floor discussion. There was also an all-hands discussion at the end. Three hours of workshop presentations and discussions showed that the Internet centralization problem space is highly complex and filled with intrinsic interplay between technical and economic factors.

This report aims to summarize the workshop outcome with a broad-brush picture of the problem space. We hope that this big-picture view can help the research group, as well as the broader IETF community, reach a clearer and shared high-level understanding of the problem, and from there identify what actions are needed, which of them require technical solutions, and which of them are regulatory issues that require the technical community to provide input to regulatory sectors to develop effective regulation policies.

You can find the report in the ACM Digital Library. We also have a pre-print version.

Written by dkutscher

July 27th, 2023 at 4:35 pm

IRTF Decentralization of the Internet Research Group at IETF-117


Recent years have witnessed the consolidation of Internet applications, services, and infrastructure. The Decentralization of the Internet Research Group (DINRG) aims to provide the research and engineering community with both an open forum to discuss the Internet centralization phenomena and associated potential threats, and a platform to facilitate the coordination of efforts in identifying the causes of observed consolidation and in developing mitigation solutions.

Our upcoming DINRG meeting at IETF-117 will feature three talks – by Cory Doctorow, Volker Stocker & William Lehr, and Christian Tschudin.

  1. DINRG Chairs’ Presentation: Status, Updates (Chairs, 5 min)
  2. Let The Platforms Burn: Bringing Back the Good Fire of the Old Internet (Cory Doctorow, 30 min)
  3. Ecosystem Evolution and Digital Infrastructure Policy Challenges: Insights & Reflections from an Economics Perspective (Volker Stocker & William Lehr, 20 min)
  4. Minimal Global Broadcast (MGB) (Christian Tschudin, 20 min)
  5. Wrap-up & Buffer (All, 15 min)

Documents

Logistics

DINRG Meeting at IETF-117 – 2023-07-25, 20:00 to 21:30 UTC

IETF-117 Agenda

Written by dkutscher

July 17th, 2023 at 5:44 pm

Posted in Events, IETF, IRTF


Named Data Metaverse


I had the pleasure of chairing a really interesting panel discussion at the NDN Community meeting (NDNComm 2023) on March 3rd 2023.

The panel discussed opportunities and challenges for building Metaverse systems with a Named Data Networking approach. Specific discussion questions included:

  • What are architectural, security-related, and performance-related issues in Metaverse systems today?
  • What communication patterns could be supported by NDN platforms?
  • How can the data-oriented model and decentralized trust establishment help in developing better Metaverse systems and at what layer would NDN technologies help?
  • What are gaps, challenges and research opportunities for NDN evolution to address Metaverse system requirements?

The panelists were:

  • Paulo Mendes (Airbus Research)
  • Michelle Munson (Eluvio)
  • Todd Hodes (Eluvio)
  • Jeff Burke (UCLA REMAP)

The panel discussed scenarios for Named Data in the Metaverse such as AR in live performance, real-time ML for transformed reality, architectures for emerging arts, media, and entertainment, commercial content distribution and experience delivery, as well as Metaverse VR experiences in challenged networks.

Jeff Burke introduced exciting ideas for re-imagining VR-enhanced live performances and shared some insights from building such applications. In his class of applications, there is a lot of local interaction (for example, in a theater), creating interesting challenges and opportunities for local, decentralized Metaverses. On the application layer, Metaverse VR applications would likely use scene and model descriptions such as USD and glTF, so the question arises what opportunities exist for mapping the corresponding names to "network-layer" names.

Michelle Munson and Todd Hodes introduced Eluvio's Content Fabric Protocol (CFP), a platform aimed at commercial-grade decentralized content distribution, providing content-native addressability and programmability mechanisms for storage, distribution, and built-in streaming and content processing. CFP uses Blockchain governance for versioning, access control, and on-chain/cross-chain monetization. An example use case is the Warner Movieverse.

The panel discussed the different approaches to dealing with named data as a fundamental building block and some specific use cases for networked Metaverse systems, such as (secure) in-network content transformation. Overall, the panel was a great initial discussion of these ideas that should definitely be continued. Check out the list of related events below for possible venues.

Related Events

Written by dkutscher

March 11th, 2023 at 8:55 am

Posted in Events


IEEE MetaCom Workshop on Decentralized, Data-Oriented Networking for the Metaverse (DORM)



Workshop page at IEEE MetaCom

Organizers

  • Jeff Burke, UCLA
  • Dirk Kutscher, HKUST(GZ)
  • Dave Oran, Network Systems Research & Design
  • Lixia Zhang, UCLA

Workshop Description

The DORM workshop is a forum to explore new directions and early research results on Metaverse system architecture, protocols, and security, along a data-oriented design direction that can encourage and facilitate decentralized realizations. Here we broadly interpret the phrase “Metaverse” as a new phase of networking with multi-dimensional shared views in open realms.

Most prototype implementations of such systems today replicate the social media platform model: they run on cloud servers offered by a small number of providers, and have identities and trust management anchored at these servers. Consequently, all communications are mediated through such servers, together with extensive CDN overlay infrastructures or the equivalent.

Although the cloud services may be extended to edges to address performance and delay issues, the centralization of control power that stems from this cloud-centric approach can be problematic from a societal perspective. It also reflects a significant semantic mismatch between the existing address-based network support and many aspirations for open-realm applications and interoperability: the applications, by and large, operate on named data principles at the application layer, but need to deploy multiple layers of provider-specific middleware services to bridge the gap. These added complexities prohibit new ways of interacting (leveraging new data formats such as USD and glTF) and are not conducive to flexible distributed computing in the edge-to-cloud continuum.

This workshop solicits efforts that explore new directions in metaverse realization and work that takes a principled approach to key topics in the areas of 1) Networking as the Platform, 2) Objects and Experiences, and 3) Trust and Transactions without being constrained by inherited platforms.

Networking as the Platform

Metaverse systems will rely on a variety of communication patterns such as client-server RPC, massively scalable multi-destination communication, publish-subscribe, etc. In systems that are designed with a cloud-based, centralized architecture in mind, such interactions are typically mediated by central servers and supported by overlay CDN infrastructure, which is operationally inflexible and lacks optimization mechanisms, for example to leverage specific network link-layer capabilities such as broadcast/multicast features. The underlying reliance on existing stacks also introduces familiar complications in providing disruption-tolerant, mobile-friendly extended-reality applications, limiting their viability for eventual use in critical infrastructure and requiring significant engineering support in demanding entertainment applications, such as large-scale live events.

This workshop seeks research on new strategies for Metaverse system design that can promote innovation by lowering barriers to entry for new applications that perform robustly under a variety of conditions. We solicit research on Metaverse system design that addresses architectural and protocol-level issues without the reliance on a centralized cloud-based architecture. Instead, we expect the DORM workshop submissions to start with a distributed system assumption, focusing on individual protocol and security elements that enable decentralized Metaverse realizations.

Many Metaverse-relevant interactions such as video streaming and distribution of event data today inherently rely on abstractions for accessing named data objects such as video chunks, for example in DASH-based video streaming. The DORM workshop will therefore particularly invite contributions that explore new systems and protocol designs that leverage that principle, thus exploring new opportunities to re-imagine the relationship between application/network and link/physical layer protocols in order to better support Metaverse system implementations. This could include work on new hypermedia concepts based on the named data principle and cross-layer designs for simplifying and optimizing the implementation and operation of such protocols.

We expect such systems to also be better suited to elegant, efficient integration of computing into the network, thus providing more flexible and adaptive platforms for offloading computation and supporting more elaborate data dissemination strategies.
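The named-data-object access pattern mentioned above (e.g., DASH-style video chunks) can be sketched minimally as follows; the names and the in-memory "network" are hypothetical and stand in for whatever repository or forwarding fabric actually holds the objects:

```python
# Illustrative sketch of accessing named data objects (DASH-like chunk
# naming). A dict stands in for the network; real systems would resolve
# names via a CDN, repo, or ICN forwarding fabric.

def fetch_by_name(network, name):
    """Retrieve a data object by its name, regardless of where it is stored."""
    return network.get(name)

network = {
    "/video/talk1/720p/seg=1": b"chunk-1",
    "/video/talk1/720p/seg=2": b"chunk-2",
}
segments = [fetch_by_name(network, f"/video/talk1/720p/seg={i}") for i in (1, 2)]
print(b"".join(segments))  # b'chunk-1chunk-2'
```

The point is that the consumer's request is expressed entirely in terms of the object's name, which is what makes caching, multi-destination delivery, and in-network placement transparent to the application.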

From Objects to Experiences

In the Metaverse/open-realm systems we envision, there are different existing and emerging media representations and encodings, including current video encodings as well as scene and 3D-object description and transmission formats such as USD and glTF. Similar to previous developments in networked audio/video, it is interesting to investigate opportunities for new scene and 3D-object representation formats that are suitable not only for efficient creation and file-like unidirectional transmission but also for streaming, granular composition and access, de-structuring, and efficient multi-destination transmission, possibly using network coding techniques.

The workshop is therefore soliciting contributions that explore a holistic approach to media/object representation within network/distributed computing, enabling better performance, composability and robustness of future distributed Metaverse systems. Submissions that explore cross-layer approaches to supporting emerging media types such as volumetric video and neural network codecs are encouraged, as are considerations of how code implementing object behaviors and interactions can be supported - providing a path to the interoperable experiences expressed in various Metaverse visions.

Trust and Transactions

Finally, distributed open realm systems need innovative solutions in identity management and security support that enable interoperation among multiple systems including a diverse population of users. We note that mechanisms to support trust are inherently coupled with various identities, from "real world" identities to application-specific identities that users may adopt in different contexts. Proposed solutions need to consider not just media asset exchange but also the interactions among objects, and the data flows needed to support it.

The workshop solicits contributions that identify specific technical challenges, for example system bootstrapping, trust establishment, and authenticated information discovery, and that propose new approaches to the identified challenges. Researchers are encouraged to consider cross-layer designs that address disconnects between layers of trust in many current systems – e.g., the reliance on third-party certificate authorities for authentication, and the inherent trust in connections rather than in the objects themselves, which tends to generate brittleness even for local communications if connectivity to the global network is compromised.

Call for Papers

The Decentralized Data-Oriented Networking for the Metaverse (DORM) workshop is intended as a forum to explore new directions and early research results on the system architecture, protocols, and security to support Metaverse applications, focusing on data-oriented, decentralized system designs. We view Metaverse as a new phase of networking with multi-dimensional shared views in open realms.

Most Metaverse systems today replicate the social media platform model, i.e., they assume a cloud-platform-provider-based system architecture where identities and the trust among them are anchored via a centralized administrative structure and where communication is mediated through servers and an extensive CDN overlay infrastructure operated by that administration. The centralization that stems from this approach can be problematic both from a control and from a performance and efficiency perspective. Despite conceptually operating on named data principles, such systems typically exhibit traditional layering approaches that prohibit new ways of interacting (leveraging new data formats such as USD and glTF) and that are not conducive to flexible distributed computing in the edge-to-cloud continuum.

This workshop solicits work that takes a principled approach to key research topics in the areas of 1) Networking as the Platform, 2) Objects and Experiences, and 3) Trust and Transactions without being constrained by inherited platform designs, including but not limited to:

  • Distributed Metaverse architectures
  • Computing in the network as an integral component for better communication and interaction support
  • Application-layer protocols for a rich set of interaction styles in open realms
  • Supporting Metaverse via data-oriented techniques
  • Security, Privacy and Identity Management in Metaverse systems
  • New concepts for improved network support for Metaverse systems, e.g., through facilitating ubiquitous multipath forwarding and multi-destination delivery
  • Cross-layer designs
  • Emerging scene description and media formats
  • Quality of Experience for Metaverse applications
  • Distributed consensus and state synchronization

Given the breadth and emerging nature of the field, all papers should include the articulation of a specific vision of Metaverse that provides clarifying assumptions for the technical content.

Submissions and Formatting

The workshop invites submission of manuscripts with early and original research results that have not been previously published or posted on public websites and that are not currently under review by another conference or journal. Submitted manuscripts must be prepared according to the IEEE Computer Society Proceedings Format (double column, 10pt font, letter paper) and submitted in PDF format. The manuscript submitted for review should be no longer than 6 pages, excluding references. Reviewing will be double-blind: submissions must not reveal the authors’ names and affiliations and must avoid obvious self-references. Accepted and presented papers will be published in the IEEE MetaCom 2023 Conference Proceedings and included in IEEE Xplore.

Manuscript templates can be found here. All submissions to IEEE MetaCom 2023 must be uploaded to EasyChair at https://easychair.org/conferences/?conf=metacom2023.

Organization Committee

  • Jeff Burke, UCLA
  • Dirk Kutscher, HKUST(GZ)
  • Dave Oran, Network Systems Research & Design
  • Lixia Zhang, UCLA

Technical Program Committee

  • Alex Afanasyev, Florida International University
  • Hitoshi Asaeda, NICT
  • Ali Begen, Ozyegin University
  • Taejoong Chung, Virginia Tech
  • Serge Fdida, Sorbonne University Paris
  • Carlos Guimarães, ZettaScale Technology SARL
  • Peter Gusev, UCLA
  • Toru Hasegawa, Osaka University
  • Jungha Hong, ETRI
  • Kenji Kanai, Waseda University
  • Ruidong Li, Kanazawa University
  • Spyridon Mastorakis, University of Nebraska Omaha
  • Kazuhisa Matsuzono, NICT
  • Marie-Jose Montpetit, Concordia University Montreal
  • Jörg Ott, Technical University Munich
  • Yiannis Psaras, Protocol Labs
  • Eve Schooler, Intel
  • Tian Song, Beijing Institute of Technology
  • Kazuaki Ueda, KDDI Research
  • Cedric Westphal, Futurewei
  • Edmund Yeh, Northeastern University
  • Jiadong Yu, HKUST(GZ)
  • Yu Zhang, Harbin Institute of Technology

Important Dates

  • March 20, 2023, Paper submission deadline
  • April 20, 2023 Notification of paper acceptance
  • May 10, 2023, Camera-ready paper submissions

Submission Link

https://easychair.org/conferences/?conf=metacom2023

Written by dkutscher

January 16th, 2023 at 6:50 pm

Posted in Events


ACM ICN-2022 Highlights


The ACM Information-Centric Networking 2022 Conference took place in Osaka from September 19 to 21, 2022, hosted by Osaka University. It was a three-day conference with tutorials, one keynote, two panel sessions, and paper and poster/demo presentations. The highlights (with links to papers and presentations) from my perspective were the following:

Keynote by Dave Oran: Travels with ICN – The road traversed and the road ahead

Dave Oran presented an overview of his research experience over the last ten years, informed by many seminal research contributions on ICN and by his career in the network vendor sector as well as in standards and research bodies such as the IETF and IRTF.

The keynote's theme was disentangling the application-layer and network-layer aspects of ICN, which led to interesting perspectives on some of the previous design decisions in CCNx and NDN.

As illustrated in the figure below, the more networking-minded ICN topics are typically connected to features and challenges of building packet-forwarding networks based on the principle of accessing named data. The actual research questions are generally not different from those of IP networks (routing, mobility etc.), but ICN provides a significant potential to re-think and often improve over the specific approaches in IP networks due to its core properties such as object security and symmetric, stateful forwarding.

Information-centric applications development in contrast is often concerned with general naming concepts, namespace design, and security features that are enabled by namespace design and application layer object security such as trust schema and provenance.

The message in Dave's talk was not that these are completely disjunct areas that should best be investigated independent of each other, but rather that the ICN's fascination and disruptive potential is based on the potential for rethinking layer boundaries and contemplating a better function split between applications, network stacks on endpoints, and forwarding elements in the network. In his talk, Dave focused on

  • the Interaction of consumers & networking producers of data;
  • routing;
  • forwarding; and
  • congestion control.

He discussed many lessons learned as well as open research and new ideas for all of these topics – please refer to the presentation slides for details.

One particularly interesting current ICN research topic is distributed computing and ICN architectures & interaction models for that. ICN's name-based forwarding model and object security provide very interesting options for simplifying systems such as microservices, RESTful services and distributed application coordination. Alluding to our work on Reflexive Forwarding, Dave offered two main lessons learned from building corresponding communication abstractions:

  1. Content fetch with two-way handshakes is a poor match for doing distributed computations.

  2. Extensions to the base protocols can give a flexible underpinning for multiple interaction models.

This raises the question of the slim waist of ICN, i.e., as research progresses, what should be the minimal feature set and what is the right extensibility model?

Dave concluded his talk with a few interesting questions:

  • how can the networking insights we’ve gained from ICN protocols inform the construction of Information Centric systems and applications?

    • Whether and how to utilize name-based routing to achieve robustness and performance scaling for distributed applications?
    • Where does caching help or not help and how to best utilize caches?
    • Does pushing Names down to lower layers help latency? Resilience? Fairness?
  • How can the insights we’ve gained from applying Information Centricity in applications inform what we bother to change the network to do, and what not?

    • Do things like multipath forwarding, in-network retransmission, or reflexive forwarding actually enable applications that are hard or infeasible to do without them?
    • Is there a big win for wireless networks in terms of optimizing a scarce resource or having more robust and responsive mobility characteristics?

More details in the presentation slides

Panel: ICN and the Metaverse – Challenges and Opportunities

I had the pleasure of being in a panel with Jeff Burke (UCLA) and Geoff Huston (APNIC), moderated by Alexander Afanasyev (Florida International University), discussing Metaverse challenges and opportunities for ICN.

Questions on Metaverse and ICN

Large-scale interactive and networked AR/VR/XR systems are now referred to as Metaverse, and the general assumption is that corresponding applications will be hosted on platforms, similar to those that are employed for web and social media applications today.

In the web, the platform approach has led to an accelerated development and growth of a few popular mainstream systems. On the other hand, several problems have been observed such as ubiquitous surveillance, lock-in effects, centralization, innovation stagnation, and cost overhead for achieving the required performance.

While these phenomena may have both technical and economic root causes, we would like to discuss:

  • How should Metaverse systems be designed, and what would be important architectural pillars?
  • What is the potential for re-imagining Metaverse with information-centric concepts and protocols?
  • Would ICN enable or lead to profound architecturally unique approaches – or would protocols such as NDN be a drop-in replacement for QUIC, HTTP3 etc.?
  • What are the challenges for building ICN-based Metaverse systems, and what is missing in today's ICN platforms?

As input to the discussion, Jeff Burke and myself (together with Dave Oran) submitted two papers:

Research Directions

Jeff offered a list of really interesting research directions based on the notion that in the Metaverse, host-based identifiers and end-to-end connections between hosts would be abstracted even further away than in today's web. Client devices would fade into the background in favor of the data supplanting or augmenting the real world. Thus, a metaverse would consist of information not associated with the physical world unless it needs to describe or provide interaction with it. The experiential semantics are viscerally information-centric, which helps to motivate ICN research opportunities such as:

  • Persistence: The information forming a metaverse persists across sessions and users.

  • “Content” and Interoperability: Designing the relationships among metaverse-layer objects and the named packets that an ICN network moves and stores.

  • Naming and Spatial Organization: How to best integrate knowledge from research in databases and related fields where these challenges have been considered for decades.

  • Trust, Provenance, and Transactions: Using ICN to disentangle metaverse objects from the security provided by a source or a given channel of communication, with the named data representation secured at the time of publication instead.

RESTful ICN

In our paper on RESTful ICN, Dave Oran and I asked the question: given that most web applications are concerned with transferring named units of data (web resources, video chunks etc.), can the REST paradigm be married with the data-oriented, receiver-driven operation of Information-Centric Networking (ICN), leveraging attractive ICN benefits such as consumer anonymity, stateful and symmetric forwarding, flow balance, in-network caching, and implicit object security?

We argue that this is feasible given some of the recent advances in ICN protocol development, and that the resulting suite is simpler and potentially has better performance and robustness properties. Our sketch of an ICN-based protocol framework addresses secure and efficient establishment and continuation of REST communication sessions, without giving up key ICN properties such as consumer anonymity and flow balance.

Panel Discussion

The panel discussed the current socio-economic realities in the Internet and the Web and explored opportunities (and non-opportunities) for redesigns, and how ICN could be a potential enabler for that.

My personal view is that most of the potential dystopian outcomes of future Metaverse applications are independent of the enabling networking technology and the technology stack at large (security, naming etc.). It is really important to understand the actual objectives of a specific system, i.e., who operates it and to which ends, similar to so-called social networks today. If the main objective is to create a more powerful advertising and manipulation platform, then such a system will exhibit yet unimaginable surveillance and tracking mechanisms – independent of the underlying network stack.

With respect to the technical design, I agree with Jeff Burke's proposed research directions. One particularly interesting question will be how to design an Information-Centric communication stack and corresponding APIs. I argued that it is not necessary to replicate existing interaction styles and protocol stacks from the TCP/IP (or QUIC) world. Instead, it should be more interesting and productive to discuss the fundamentally needed interaction classes, such as

  • High-performance multi-destination transfer
  • Group communication and synchronization
  • High-performance session-oriented communication with servers and peers (for which we proposed RESTful ICN).

The panel then also discussed how likely non-mainstream Metaverse systems would be to gain adoption, and whether the current socio-economic environment actually allows for that level of permissionless innovation – considering the network effects that Metaverse systems would be subject to, much in the same way as so-called social networks.

Panel: Hard Lessons for ICN from IP Multicast?

Thomas Schmidt (HAW Hamburg) moderated a panel discussion with Jon Crowcroft (University of Cambridge), Dave Oran, and George Xylomenos (Athens University of Economics and Business) as panelists.

With the continued shift towards more and more live video streaming services over the Internet, scalable multi-destination delivery has become more relevant again. For example, the recently chartered IETF Working Group on Media over QUIC (MOQ) is addressing the need for scalable multi-destination delivery and the unavailability of IP multicast as a platform by developing a QUIC-based overlay system that essentially uses information-centric concepts, albeit in a QUIC overlay network. Such a system would consist of a network of QUIC proxies, connected via individual QUIC connections, to emulate request forwarding and chunk-based video data distribution. Considering the non-negligible overhead and complexity, one might ask whether live video streaming over the Internet could be served by a better approach. Questions like this are being asked by the network service provider community as well (ISPs have to bear a lot of the overhead and overlay complexity), for example in this APNIC blog posting by Jake Holland titled Why inter-domain multicast now makes sense.

This panel was inspired by a statement paper submitted by Jon Crowcroft titled Hard lessons for ICN from IP multicast (https://dl.acm.org/doi/10.1145/3517212.3558086). In this brief statement, Jon traced the line of thought from Internet multicast through to Information Centric Networking, and used this to outline what he thinks should have been the priorities in ICN work from the start.

The statement paper discusses a few problems with IP multicast that have been largely acknowledged, such as difficulties in creating viable business models, unsolved security problems such as IP multicast being used as a DDoS platform, and interdomain multicast, which has proven difficult to establish due to multicast routing scaling problems and the lack of robust pricing models. The second part of the paper then discusses ICN work that has been addressing some of these issues.

The paper gave rise to an interesting and controversial discussion at the panel. The most important point, in my opinion, is to characterize the ICN communication model correctly: it is correct that the combination of stateful forwarding, Interest aggregation, and caching enables an implicit multi-destination delivery service. It is implicit because consumers that ask for the same units of named data within a time frame on the order of the network RTT will send equivalent Interest messages, so that forwarders can multicast the data delivery to the faces they received such Interests from. In conjunction with opportunistic (or managed) caching by forwarders, this enables a very elegant multi-destination delivery service that can even cater to a wider variation of Interest sending times, as "late" Interests would be answered from caches.
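The aggregation-and-multicast behavior described above can be sketched in a few lines. The following is an illustrative model only – class layout, face identifiers, and the unbounded cache are assumptions for the sake of the example, not taken from any concrete forwarder implementation:

```python
# Illustrative sketch of implicit multi-destination delivery via a
# Pending Interest Table (PIT). Names, faces, and the cache policy are
# hypothetical; real forwarders add lifetimes, nonces, and eviction.

class Forwarder:
    def __init__(self):
        self.pit = {}      # name -> set of faces awaiting the Data
        self.cache = {}    # name -> Data (opportunistic content store)

    def on_interest(self, name, in_face):
        """Return cached Data, None if aggregated, or True if the
        Interest is the first one and must be forwarded upstream."""
        if name in self.cache:
            return self.cache[name]          # "late" Interest served from cache
        if name in self.pit:
            self.pit[name].add(in_face)      # aggregate: do not forward again
            return None
        self.pit[name] = {in_face}
        return True                          # first Interest: forward upstream

    def on_data(self, name, data):
        """Cache the Data and return the faces to multicast it to."""
        faces = self.pit.pop(name, set())
        self.cache[name] = data
        return faces
```

Two consumers asking for the same segment within one RTT trigger a single upstream Interest; the returning Data is replicated to both faces, and any later consumer is answered directly from the cache.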

This is a different service model compared to the push-based IP multicast model. ICN does not provide such a service explicitly; it simply applies its regular receiver-driven mode of operation, which works elegantly when multiple consumers ask for the same data. It is probably fair to say that the ICN model caters to media-delivery use cases (one stream delivered to multiple consumers) but does not try to provide the more general IP multicast service model (Any Source Multicast). However, by extension, the ICN approach could be applied to multi-source scenarios as well – the system would build implicit delivery trees from any source to current consumers, without requiring extra machinery.

With this, if you like, simpler service model, ICN fundamentally does not inherit many of the problems that prohibit IP multicast in the Internet: the system is receiver-driven, which simply eliminates DDoS threats (on the packet level). It is also not clear whether ICN would need anything special to provide this service in inter-domain settings (except for general ICN routing in the Internet, which is a general but different research question).

Acknowledging this conceptual and practical difference, there are obviously other interesting research questions that ICN multi-destination delivery entails, for example performance and jitter reduction in the presence of caching and other transport questions.

Overall, a good time to talk about multi-destination delivery and to keep thinking about missing pieces and potential future work in ICN.

Enabling Distributed Applications

One paper presentation session was focused on distributed applications – a very interesting and relevant ICN research area. It featured three great papers:

SoK: The evolution of distributed dataset synchronization solutions in NDN

This paper by Philipp Moll, Varun Patil, Lan Wang, and Lixia Zhang systematizes the knowledge about distributed dataset synchronization in ICN, or Sync for short, which, according to the authors, plays the role of a transport service in the Named Data Networking (NDN) architecture. A number of NDN Sync protocols have been developed over the last decade. For this paper, the authors conducted a systematic examination of NDN Sync protocol designs, identified common design patterns, revealed insights behind different design approaches, and collected lessons learned over the years.

Sync enables new ways of thinking about coordination and general communication in distributed ICN systems, and I encourage everyone to read this for a good overview of the different proposed systems and their properties.

There are also some open research questions around Sync, such as large-scale applicability, alternatives to using Interest multicast for discovery, and more – a good topic to work on!
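To give an intuition for how Sync protocols work, here is a minimal sketch of the state-vector idea used by protocols such as SVS: each node maintains a map from producer name to the latest known sequence number and merges received vectors entry-wise. The data layout and names are illustrative assumptions, not the wire format of any specific Sync protocol:

```python
# Hypothetical sketch of state-vector merging in an NDN Sync protocol.
# A state vector maps a producer's name prefix to its latest sequence
# number; merging reveals which data items the local node still lacks.

def merge(local, remote):
    """Merge two state vectors; return the merged vector and the set of
    (producer, seq) pairs the local node is missing and should fetch."""
    merged = dict(local)
    missing = set()
    for producer, seq in remote.items():
        known = merged.get(producer, 0)
        if seq > known:
            merged[producer] = seq
            # fetch every item between the last known and advertised seq
            missing.update((producer, s) for s in range(known + 1, seq + 1))
    return merged, missing
```

After the merge, the local node would express Interests for each missing (producer, seq) pair, so dataset namespace knowledge converges without any central coordinator.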

DICer: distributed coordination for in-network computations

This paper by Uthra Ambalavanan, Dennis Grewe, Naresh Nayak, Liming Liu, Nitinder Mohan, and Jörg Ott is a nice product of the Piccolo project that I had the pleasure of setting up and co-leading.

Application domains such as automotive and the Internet of Things may benefit from in-network computing to reduce the distance data travels through the network and the response time. Information Centric Networking (ICN) based compute frameworks such as Named Function Networking (NFN) are promising options due to their location independence and loosely-coupled communication model.

However, unlike current operations, such solutions may benefit from orchestration across the compute nodes to use the available resources in the network better. In this paper, the authors adopted State Vector Synchronization (SVS), an application dataset synchronization protocol in ICN, to enhance the neighborhood knowledge of in-network compute nodes in a distributed fashion. They designed DICer (distributed coordination for in-network computation), which assists service deployments by improving the resolution of compute requests.

Kua: a distributed object store over named data networking

This paper by Varun Patil, Hemil Desai, and Lixia Zhang describes a distributed object store in NDN.

Applications such as machine learning training systems or log collection generate and consume large amounts of data. Object storage systems provide a simple abstraction to store and access such large datasets. These datasets are typically larger than the capacities of individual storage servers, and require fault tolerance through replication. This paper presents Kua, a distributed object storage system built over Named Data Networking (NDN).

The data-centric nature of NDN helps Kua maintain a simple design while catering to requirements of storing large objects, providing fault tolerance, low latency and strong consistency guarantees, along with data-centric security.
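To illustrate the kind of data-centric design such a store enables, the following sketch names fixed-size segments of a large object and deterministically assigns each segment name to replicated storage buckets. The naming convention, bucket model, and replication factor are assumptions for illustration; Kua's actual design may differ:

```python
# Hedged sketch of name-based placement in a distributed object store:
# a large object is split into named segments, and each segment name is
# hashed to a set of distinct buckets for fault tolerance.

import hashlib

def segment_names(obj_name, size, seg_size):
    """Generate hierarchical names for each fixed-size segment."""
    n = (size + seg_size - 1) // seg_size   # ceiling division
    return [f"{obj_name}/seg={i}" for i in range(n)]

def place(seg_name, num_buckets, replicas=2):
    """Deterministically assign a segment to `replicas` distinct buckets."""
    h = int(hashlib.sha256(seg_name.encode()).hexdigest(), 16)
    return [(h + r) % num_buckets for r in range(replicas)]
```

Because placement is a pure function of the segment name, any client can locate (and verify, via the name-bound signature) a segment without consulting a central metadata service.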

ICN Applications and Wireless Networking

The session on ICN Applications and Wireless Networking featured four papers:

N-DISE: NDN-based data distribution for large-scale data-intensive science

This paper by Yuanhao Wu, Faruk Volkan Mutlu, et al. describes NDN for Data-Intensive Science Experiments (N-DISE).

To meet unprecedented challenges faced by the world's largest data- and network-intensive science programs, the authors designed and implemented a new, highly efficient and field-tested data distribution, caching, access and analysis system for the Large Hadron Collider (LHC) high energy physics (HEP) network and other major science programs. They developed a hierarchical Named Data Networking (NDN) naming scheme for HEP data, implemented new consumer and producer applications to interface with the high-performance NDN-DPDK forwarder, and built on recently developed high-throughput NDN caching and forwarding methods.

The experiments in this paper include delivering LHC data over the wide area network (WAN) testbed at throughputs exceeding 31 Gbps between Caltech and StarLight, with dramatically reduced download times.

Building a secure mHealth data sharing infrastructure over NDN

In this paper, Saurab Dulal, Nasir Ali, et al. describe an NDN-based mHealth system called mGuard.

Exploratory efforts in mobile health (mHealth) data collection and sharing have achieved promising results. However, fine-grained contextual access control and real-time data sharing are two of the remaining challenges in enabling temporally-precise mHealth intervention. The authors have developed an NDN based system called mGuard to address these challenges. mGuard provides a pub-sub API to let users subscribe to real-time mHealth data streams, and uses name-based access control policies and key-policy attribute-based encryption to grant fine-grained data access to authorized users based on contextual information.
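Name-based access control as described above can be illustrated with a simple prefix check: a user is granted access to a data item when its name falls under a prefix their policy authorizes. The policy shape below is a hypothetical simplification for illustration, not mGuard's actual API (which additionally employs key-policy attribute-based encryption):

```python
# Hedged illustration of name-based access control: authorization is a
# property of the data namespace, so access decisions reduce to name
# prefix matching rather than per-connection credentials.

def authorized(policy_prefixes, data_name):
    """Grant access when data_name equals or falls under any authorized prefix."""
    return any(
        data_name == p or data_name.startswith(p.rstrip("/") + "/")
        for p in policy_prefixes
    )
```

Note the trailing-slash handling: it prevents a prefix like `/alice/hr` from accidentally authorizing a sibling namespace such as `/alice/hrx`.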

Delay-tolerant ICN and its application to LoRa

I have co-authored this paper together with Peter Kietzmann, José Alamos, Thomas C. Schmidt, and Matthias Wählisch.

Connecting low-power long-range wireless networks, such as LoRa, to the Internet imposes significant challenges because of the vastly longer round-trip-times (RTTs) in these constrained networks. In our paper on "Delay-Tolerant ICN and Its Application to LoRa" we present an Information-Centric Networking (ICN) protocol framework that enables robust and efficient delay-tolerant communication to edge networks, including but not limited to LoRa. Our approach provides ICN-idiomatic communication between networks with vastly different RTTs for different use cases. We applied this framework to LoRa, enabling end-to-end consumer-to-LoRa-producer interaction over an ICN-Internet and asynchronous ("push") data production in the LoRa edge. Instead of using LoRaWAN, we implemented an IEEE 802.15.4e DSME MAC layer on top of the LoRa PHY layer and ICN protocol mechanisms in the RIOT operating system.

For our experiments, we connected constrained LoRa nodes and gateways on IoT hardware platforms to a regular, emulated ICN network and performed a series of measurements that demonstrate robustness and efficiency improvements compared to standard ICN.

iCast: dynamic information-centric cross-layer multicast for wireless edge network

This paper by Tianlong Li, Tian Song, Yating Yang, and Jike Yang presents iCast, short for dynamic information-centric multicast, to enable dynamic multicast in the link layer.

Native multicast support in Named Data Networking (NDN) is an attractive feature, as multicast content delivery can reduce redundant traffic and improve network performance, especially in wireless edge networks. With their visibility into Interest and Data names, NDN routers automatically aggregate the same requests from different end hosts and establish network-layer multicast. However, the current link-layer multicast based on host-centric MAC address management is inflexible. Consequently, supporting NDN dynamic multicast with the current link-layer architecture remains a challenge.

iCast enables dynamic multicast in the link layer based on three main contributions:

  1. iCast integrates NDN native multicast with the host-centric link layer while maintaining the host-centric properties of the current link layer.
  2. iCast achieves per-packet dynamic multicast in the link layer, and the authors further propose a hash-based iCast variant for dynamic connections.
  3. iCast has been implemented in a real testbed, and the evaluation results show that iCast reduces traffic by up to 59.53% compared with vanilla NDN. iCast bridges the gap between NDN multicast and host-centric link-layer multicast.
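One way to picture a hash-based mapping from names to link-layer multicast groups is sketched below. This is an illustrative construction, not iCast's actual algorithm: it derives a locally administered multicast MAC address from a Data name, so receivers interested in the same name can join the same link-layer group without per-host address management:

```python
# Illustrative (hypothetical) mapping of an NDN name to a link-layer
# multicast group address: hash the name and set the multicast (LSB of
# first octet) and locally-administered bits, as required for ad-hoc
# multicast MAC addresses.

import hashlib

def name_to_multicast_mac(name):
    digest = hashlib.sha256(name.encode()).digest()
    first = digest[0] | 0x01 | 0x02   # multicast + locally-administered bits
    octets = bytes([first]) + digest[1:6]
    return ":".join(f"{b:02x}" for b in octets)
```

Since the mapping is deterministic, every node interested in a given name computes the same group address independently, which is the property a dynamic per-packet link-layer multicast scheme needs.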

Written by dkutscher

September 27th, 2022 at 3:29 pm

Posted in Events

Tagged with , ,

Dagstuhl Seminar on Compute-First Networking

without comments

Eve Schooler, Jon Crowcroft, Phil Eardley, and myself organized an online Dagstuhl seminar on Compute-First Networking earlier in June that was attended by an excellent group of researchers from distributed computing, networking and data analytics communities.

Dagstuhl has now published the seminar report that discusses new perspectives on doing computing in the network as well as use cases, and includes many references to relevant literature and ongoing projects in the field.

Executive Summary

Edge- and more generally In-Network Computing are key elements in many traditional content distribution services today, typically connecting cloud-based computing to consumers. The advent of new programmable hardware platforms, research and wide deployment of distributed computing technologies for data processing, as well as new exciting use cases such as distributed Machine Learning and Metaverse-style ubiquitous computing are now inspiring research of more fine-granular and more principled approaches to distributed computing in the "Edge-To-Cloud Continuum".

The Compute-First Networking Dagstuhl seminar has brought together researchers and practitioners in the fields of distributed computing, network programmability, Internet of Things, and data analytics to explore the potential, possible technological components, as well as open research questions in an exciting new field that will likely induce a paradigm shift for networking and its relationship with computing.

Traditional overlay-based in-network computing is typically limited to quite specific purposes, for example CDN-style edge computing. At the same time, network programmability approaches such as Software-Defined Networking and corresponding languages such as P4 are often perceived as too limited for application-level programming. Compute-First Networking (CFN) views networking and computing holistically and aims at leveraging network programmability, server- and serverless in-network computing, and modern distributed computing abstractions to develop a new systems approach for an environment where computing is not merely an add-on to existing networks, but where networking is re-imagined with a broader and ubiquitous notion of programmability.

We expect this approach to enable several benefits: it can help to unlock distributed computing from the existing silos of individual cloud and CDN platforms – a necessary condition to enable Keiichi Matsuda's vision of Hyper-Reality and Metaverse concepts where the physical world, human users and different forms of analytics, and visual rendering services constantly engage in information exchanges, directly at the edges of the network. It can also help to provide reliable, scalable, privacy-preserving and universally available platforms for Distributed Machine Learning applications that will play a key role in future large-scale data collection and analytics.

CFN's integrated approach allows for several optimizations, for example a more informed and more adaptive resource optimization that can take into account dynamically changing network conditions, availability and utilization of compute platforms, as well as application requirements and adaptation boundaries, thus enabling more responsive and better-performing applications.

Several interesting research challenges have been identified that should be addressed in order to realize the CFN vision: How should the different levels of programmability in today's systems be integrated into a consistent approach? What would programming and communication abstractions look like? How do orchestration systems need to evolve in order to be usable in these potentially large-scale scenarios? How can we guarantee security and privacy properties of a distributed computing slice without having to rely on just location attributes? How would the special requirements and properties of relevant applications such as Distributed Machine Learning best be mapped to CFN – or should distributed data processing for federated or split Machine Learning play a more prominent role in designing CFN abstractions?

This seminar was an important first step in identifying the potential and a first set of interesting new research challenges for re-imagining distributed computing through CFN – an exciting new topic for networking and distributed computing research.

Written by dkutscher

December 1st, 2021 at 4:18 pm

Posted in Events,Posts

Tagged with ,