IRTF ICNRG Meeting at IETF-121
The ICNRG Meeting at IETF-121 takes place on 2024-11-05, 13:00 to 14:30 UTC.
ICNRG Agenda
| # | Topic | Presenter | Time |
|---|---|---|---|
| 1 | ICNRG Chairs’ Presentation: Status, Updates | Chairs | 05 min |
| 2 | FLIC Update | Marc Mosko | 15 min |
| 3 | CCNx Content Object Chunking | Marc Mosko | 15 min |
| 4 | Reflexive Forwarding Update | Hitoshi Asaeda | 20 min |
| 5 | ICN Challenges for Metaverse Platform Interoperability | Jungha Hong | 15 min |
| 6 | Distributed Micro Service Communication | Aijun Wang | 15 min |
| 7 | Buffer, Wrap Up and Next Steps | Chairs | 05 min |
Please remember that all sessions are being recorded.
Material
- https://datatracker.ietf.org/doc/draft-irtf-icnrg-flic/
- https://datatracker.ietf.org/doc/draft-mosko-icnrg-ccnxchunking/
- https://github.com/mmosko/ccnpy
- https://datatracker.ietf.org/doc/draft-irtf-icnrg-reflexive-forwarding/
- https://datatracker.ietf.org/doc/draft-hong-icn-metaverse-interoperability/
- https://datatracker.ietf.org/doc/draft-li-icnrg-damc/
New Internet Draft draft-irtf-icnrg-reflexive-forwarding-00
We updated our Internet Draft draft-irtf-icnrg-reflexive-forwarding-00 on Reflexive Forwarding for CCNx and NDN Protocols.
Current Information-Centric Networking protocols such as CCNx and NDN have a wide range of useful applications in content retrieval and other scenarios that depend only on a robust two-way exchange in the form of a request and response (represented by an Interest-Data exchange in the case of the two protocols noted above). A number of important applications, however, require placing large amounts of data in the Interest message, and/or more than one two-way handshake. While these can be accomplished using independent Interest-Data exchanges by reversing the roles of consumer and producer, such approaches can be both clumsy for applications and problematic from a state management, congestion control, or security standpoint. This specification proposes a Reflexive Forwarding extension to the CCNx and NDN protocol architectures that eliminates the problems inherent in using independent Interest-Data exchanges for such applications. It updates RFC8569 and RFC8609.
The recent update includes a generalization of the main protocol specification, so that Reflexive Forwarding can be used in both CCNx and NDN.
IRTF ICNRG@IETF-119
The Information-Centric Networking Research Group (ICNRG) of the Internet Research Task Force (IRTF) met at IETF-119 in Brisbane. Here is my quick summary of the meeting:
Agenda:
| # | Topic | Presenter |
|---|---|---|
| 1 | ICNRG Chairs’ Presentation: Status, Updates | Chairs |
| 2 | Secure Web Objects and Transactions | Dirk Kutscher |
| 3 | Transaction Manifests | Marc Mosko |
| 4 | Vanadium: Secure, Distributed Applications | Marc Mosko |
| 5 | Global vs. Scoped Namespaces | Marc Mosko |
Meeting material:
ICNRG Status
ICNRG recently published four new RFCs – a great achievement by all involved authors and the whole group!
- RFC 9510: Alternative Delta Time Encoding for Content-Centric Networking (CCNx) Using Compact Floating-Point Arithmetic
- RFC 9531: Path Steering in Content-Centric Networking (CCNx) and Named Data Networking (NDN)
- RFC 9508: Information-Centric Networking (ICN) Ping Protocol Specification
- RFC 9507: Information-Centric Networking (ICN) Traceroute Protocol Specification
See my blog posting for a more detailed description.
Secure Web Objects and Transactions
One focus of this meeting was transactions in ICN, i.e., interactions with the intention to achieve some durable state change at a remote peer – which imposes some challenges in a system that is designed around accessing named data.
In my presentation I talked about different ways to realize transactions in ICN:
1. ICN as a network layer
   - Client-server communication between two nodes
   - Implement transaction semantics on top of an ICN messaging service
2. Recording state changes in shared data structures
   - Shared namespace, potentially functioning as a transaction ledger
   - Still need to think about atomicity etc.
For 1) transactions as messaging over ICN networks, the following considerations apply:
- Client-server communication between two nodes
- Implement transaction semantics on top of an ICN messaging service
- Different approaches:
  - A: Traditional layering: using NDN-like systems as a messaging layer
    - Assign prefixes to clients & servers
    - Send messages back and forth, and implement reliability and transaction semantics on top
  - B: ICN-native communication: using Interest-Data as a request-response abstraction for transactions (a minimal sketch follows after this list)
    - Mapping transaction communication and state evolution more directly to ICN, e.g., Interest-Data in NDN
    - Collapsing traditional network, transport, and application layer functions
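To make variant B a bit more concrete, here is a minimal, purely illustrative sketch of how a transaction request could be mapped onto a single Interest/Data exchange. The `IcnClient` API, the naming convention, and the JSON encoding are my own assumptions for illustration, not part of the presented design.

```python
# Minimal sketch (not the presented design): variant B maps a transaction
# request onto a single Interest/Data exchange. The IcnClient API, the name
# layout, and the JSON encoding are hypothetical.
import hashlib
import json


class IcnClient:
    """Stand-in for an ICN consumer API (hypothetical)."""

    def express_interest(self, name: str, app_params: bytes) -> bytes:
        # In a real NDN/CCNx stack this would send an Interest carrying the
        # request in its application parameters and return the Data payload.
        raise NotImplementedError


def submit_transaction(client: IcnClient, service_prefix: str, tx: dict) -> dict:
    """Encode a transaction request as a parameterized Interest and treat
    the returned (signed) Data object as the transaction outcome."""
    payload = json.dumps(tx, sort_keys=True).encode()
    digest = hashlib.sha256(payload).hexdigest()
    # Naming the request by its parameter digest keeps the Interest idempotent
    # and cache-friendly; retransmissions ask for the same named result.
    name = f"{service_prefix}/tx/{digest}"
    data = client.express_interest(name, app_params=payload)
    return json.loads(data)
```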
I mainly talked about variant 1B (ICN-native communication: using Interest-Data as a request-response abstraction for transactions) and introduced the idea of "Secure Web Objects" (SWOs) for a data-oriented web as a motivation.
In such a system, not everything would be about accessing named data objects – there is also a need for "client/server" state evolution, e.g., for online banking and similar use cases.
I introduced some ideas on RESTful ICN that we published in an earlier paper. The RESTful ICN proposal leverages Reflexive Forwarding for robust client-server communication and integrates elements of CCNx key exchange for security context setup and session resumption.
Summarizing, I wanted to initiate a discussion about how to realize transactions in information-centric systems. This discussion is not about mapping ICN to existing protocols, such as HTTP, but about actual distributed computing semantics, i.e., robust session setup and state evolution. Transactions with ICN-native communication are hard to provide with basic Interest/Data alone. Reflexive Forwarding + CCNx Key Exchange + transaction semantics are an attempt to provide such a service in a mostly ICN-idiomatic way, with the downside that Reflexive Forwarding needs extensions to forwarders. This raises questions about the minimal feature set of core ICN protocols and how to deal with extensions.
In the discussion, it was pointed out that a lot of experience with distributed systems has shown that transactions or secure multi-interactions will generally require more than a single two-way exchange.
Others suggested that ICN and NDN have authentication carried out when the signed Interest arrives, which directly proves authentication, so that authentication would in fact be done beforehand.
However, authentication may not be enough. Client authorization in client-server communication is a critical function which needs to be carefully designed in real-world networks. For example, forcing a server to do signature verification on initial request arrival has been shown in prior systems (e.g., TCP+TLS) to represent a serious computational DoS attack risk. Reflexive Forwarding in RICE tries to avoid exactly that problem by enabling the server to iteratively authenticate and authorize clients before committing computing resources.
It was also said that whenever a protocol does authentication, it needs to be analyzed in the context of specific examples; one cannot only look at the problem at an abstract level.
Transaction Manifests
Marc Mosko presented another approach to transactions in ICN, called [Transaction Manifests](https://datatracker.ietf.org/meeting/119/materials/slides-119-icnrg-transaction-manifests-00 "Transaction Manifests"). He explained that ICN can be transactional.
Typically, ICN is considered a publish/subscribe or pre-publishing-of-named-data approach. Outside ICN, distributed transactions do exist, especially in DLTs. For example, consider a permissioned DLT of size N with K << N bookkeepers. In a DLT, the bookkeepers base their decisions on the block hash history. In this talk, Marc discussed what would be an equivalent function in ICN and introduced the notion of transaction manifests.
In ICN, there is a technology called FLIC (File-Like ICN Collections), i.e., manifests for static objects. FLIC describes a single object that is reconstructed by traversing the manifest in order. In Marc's proposal, a transaction manifest describes a set of names that must be considered together. The transaction manifest names would likely point to FLIC root manifests.
In the example above, transaction manifest entries point directly to objects. For a complete system, you would also need a set of bookkeepers, e.g., systems like Hyperledger offer global ordering via bespoke orderer nodes. Such bookkeepers would have to ensure that a transaction has current pre-conditions, current post-conditions, and no conflicts in post-conditions. Transaction manifests are a form of write-ahead log (WAL), as used in databases such as PostgreSQL.
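As a purely illustrative aside (not Marc's specification), the following sketch shows how a transaction manifest and the bookkeeper checks described above could be modeled; all field names and the conflict rule are hypothetical simplifications.

```python
# Illustrative sketch only: a transaction manifest as a set of names that
# must be accepted or rejected together, plus the checks a bookkeeper might
# perform. All field names are hypothetical.
from dataclasses import dataclass, field


@dataclass
class TransactionManifest:
    tx_name: str                          # name of this manifest
    members: list[str]                    # names (e.g. FLIC root manifests) bound together
    preconditions: dict[str, str] = field(default_factory=dict)   # name -> expected current version/hash
    postconditions: dict[str, str] = field(default_factory=dict)  # name -> version/hash after commit


def bookkeeper_accepts(tx: TransactionManifest, current_state: dict[str, str],
                       pending: list[TransactionManifest]) -> bool:
    """Accept a transaction only if its preconditions still hold and its
    postconditions do not conflict with already-accepted pending transactions."""
    # Preconditions must match the bookkeeper's current view of each name.
    if any(current_state.get(name) != version
           for name, version in tx.preconditions.items()):
        return False
    # No other pending transaction may write a conflicting postcondition.
    touched = {name for p in pending for name in p.postconditions}
    return not (touched & set(tx.postconditions))
```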
Marc went on to discuss a few challenges, such as interactions with repositories and caches, as well as distributed transaction manifests.
There was some discussion on the required ordering properties for this approach, i.e., whether, in a multi-bookkeeper system, livelocks and deadlocks could occur – and whether these could be resolved without requiring a total order.
Marc is continuing to work on this. One of the next steps would be to design client-to-bookkeeper and bookkeeper-to-bookkeeper protocols.
Vanadium: Secure, Distributed Applications
Marc Mosko introduced the Vanadium system, a secure, distributed RPC system based on distributed naming and discovery. Vanadium uses symmetrical authentication and encryption and may use private name discovery with Identity-Based-Encryption (IBE).
Vanadium has two parts:
- Principals and Blessings and Caveats (Security)
  - Use a hierarchical name, e.g. alice:home:tv.
  - Certificate based
  - Blessings are scoped delegations from one principal to another for a namespace (e.g. alice grants Bob “watch” permissions to the TV)
  - Caveats are restrictions on delegations (e.g. Bob can only watch 6pm – 9pm).
  - 3rd party caveats must be discharged before authorization
    - E.g. revocations or auditing
- The RPC mount tables (Object Naming)
  - These describe how to locate RPC namespaces
  - They provide relative naming
Vanadium is interesting because parts of its design resemble some ICN concepts, especially the security part:
- It uses prefix matching and encryption
- Namespaces work like groups
- The colon : separates the blesser from the blessed
- Authorizations match extensions.
  - If Alice authorizes “read” on alice:hometv to alice:houseguests, and Bob has a blessing for alice:houseguests:bob, then Bob has “read” on alice:hometv (see the matching sketch after this list).
- A special terminator :$ only matches the exact prefix.
  - A blessing to alice:houseguest:$ only matches that exact prefix.
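The matching rules above can be illustrated with a small sketch. This is a simplification of the described semantics; the real Vanadium libraries additionally validate caveats and discharges, which are omitted here.

```python
# Rough sketch of the matching rules described above; caveat validation and
# discharges are omitted.
def blessing_matches(pattern: str, blessing: str) -> bool:
    """Return True if `blessing` is authorized by `pattern`.

    A plain pattern like "alice:houseguests" matches itself and any
    extension such as "alice:houseguests:bob". A pattern ending in ":$"
    matches only the exact prefix, e.g. "alice:houseguests:$" matches
    "alice:houseguests" but not "alice:houseguests:bob".
    """
    if pattern.endswith(":$"):
        return blessing == pattern[:-2]
    return blessing == pattern or blessing.startswith(pattern + ":")


# The example from the text: Bob's blessing extends alice:houseguests,
# so an ACL entry for alice:houseguests grants him access.
assert blessing_matches("alice:houseguests", "alice:houseguests:bob")
assert not blessing_matches("alice:houseguests:$", "alice:houseguests:bob")
```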
Marc then explained the object naming structure and the entity resolution in Vanadium.
More details can be found in Marc's presentation and on Vanadium's web page.
In summary, Vanadium is a permissioned RPC service. A Vanadium name encodes the endpoint plus a name suffix. The endpoint does not need to resolve to a single mount table server; it could be any server that possesses an appropriate blessing. Authentication is done via pair-wise key exchange and blessing validation. It can be private if using IBE; otherwise the server name leaks. Authorizations, Blessings, and Caveats use hierarchical, prefix-matching names.
From an ICN perspective, the security approach seems interesting: Blessings, Caveats, discharges, and namespaces as groups. One question is how this differs from SDSI co-signings. The Vanadium identity service provides an interesting mapping of OAuth2 app:email tokens to PKI and blessings. The RPC approach exhibits some differences to ICN, e.g., embedding the endpoint identifier in the name. Related ICN technologies in this context are public-key scoped names in CCNx and schematized trust anchors in NDN.
In the discussion, it was noted that it would be interesting to do an apples-to-apples comparison to the NDN trust schema approach; Vanadium's approach, with the ability to create blessings and caveats on demand, seems to be much more granular and dynamic.
Global vs. Scoped Namespaces
Marc Mosko discussed global vs. scoped namespaces. For example, how do you know that the key you are looking at is the key that you should be looking at? IPFS punts that to out-of-band mechanisms. CCNx, on the other hand, uses public-key scoped names: you can put a public key or publisher ID in an Interest and say you only want this name if it is signed with the associated key.
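As an illustration of this idea (not the CCNx wire format), the following sketch shows how a consumer could restrict an Interest to a particular publisher key and how the matching check could look; the class and field names are hypothetical.

```python
# Sketch of public-key-scoped names: a consumer restricts an Interest to a
# publisher key, and Data is only accepted if its KeyId (and signature)
# match. Classes and checks are illustrative, not the CCNx wire format.
from dataclasses import dataclass


@dataclass
class Interest:
    name: str
    keyid_restriction: bytes | None = None   # hash of the expected signing key


@dataclass
class DataObject:
    name: str
    keyid: bytes          # hash of the key that signed this object
    payload: bytes
    signature: bytes


def satisfies(interest: Interest, data: DataObject, verify_signature) -> bool:
    """Data satisfies the Interest only if the names match and, when a KeyId
    restriction is present, the object is signed by exactly that key."""
    if data.name != interest.name:
        return False
    if interest.keyid_restriction is not None and data.keyid != interest.keyid_restriction:
        return False
    return verify_signature(data)
```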
It was suggested to re-visit some of the concepts in the RPC system of OSF distributed computing, where all namespaces were scoped and name discovery starts out as local. You could then "attach" a local namespace to a more global namespace via an explicit "graft" operation. The key here was that the authoritative pointers representing the namespace graph were from child to parent, as opposed to parent to child as in systems like DNS. Your local trust root identifier could become a name in a higher-layer space, yielding a trust root higher in the hierarchy that could be used instead of or in addition to your local trust root. Doing this can create progressively more global namespaces out of local ones.
Please check out the meeting video for the complete discussion at the meeting.
ICNRG @ IETF-118
We have posted the agenda for our ICNRG meeting at IETF-118:
Drafts
- https://datatracker.ietf.org/doc/draft-irtf-icnrg-flic/
- https://datatracker.ietf.org/doc/draft-yao-tsvwg-cco-problem-statement-and-usecases/00/
- https://datatracker.ietf.org/doc/draft-yao-tsvwg-cco-requirement-and-analysis/00/
- https://datatracker.ietf.org/doc/draft-li-icnrg-damc/
Logistics
ICNRG Meeting at IETF-118 – 2023-11-07, 08:30 to 10:30 UTC
ACM ICN-2022 Highlights
The ACM Information-Centric Networking 2022 Conference took place in Osaka from September 19 to 21, 2022, hosted by Osaka University. It was a three-day conference with tutorials, one keynote, two panel sessions, and paper and poster/demo presentations. The highlights (with links to papers and presentations) from my perspective were the following:
Keynote by Dave Oran: Travels with ICN – The road traversed and the road ahead
Dave Oran presented an overview of his research experience over the last ten years that was informed by many seminal research contributions on ICN and his career in the network vendor sector as well as in standards and research bodies such as the IETF and IRTF.
The keynote's theme was about disentangling the application and network layer aspects of ICN, which led to interesting perspectives on some of the previous design decisions in CCNx and NDN.
As illustrated in the figure below, the more networking-minded ICN topics are typically connected to features and challenges of building packet-forwarding networks based on the principle of accessing named data. The actual research questions are generally not different from those of IP networks (routing, mobility, etc.), but ICN provides significant potential to re-think and often improve over the specific approaches in IP networks due to its core properties such as object security and symmetric, stateful forwarding.
Information-centric application development, in contrast, is often concerned with general naming concepts, namespace design, and security features that are enabled by namespace design and application-layer object security such as trust schemas and provenance.
The message in Dave's talk was not that these are completely disjoint areas that should best be investigated independently of each other, but rather that ICN's fascination and disruptive potential are based on the potential for rethinking layer boundaries and contemplating a better function split between applications, network stacks on endpoints, and forwarding elements in the network. In his talk, Dave focused on
- the interaction of consumers and producers of data;
- routing;
- forwarding; and
- congestion control.
He discussed many lessons learned as well as open research and new ideas for all of these topics – please refer to the presentation slides for details.
One particularly interesting current ICN research topic is distributed computing and ICN architectures & interaction models for that. ICN's name-based forwarding model and object security provide very interesting options for simplifying systems such as microservices, RESTful services and distributed application coordination. Alluding to our work on Reflexive Forwarding, Dave offered two main lessons learned from building corresponding communication abstractions:
- Content fetch with two-way handshakes is a poor match for doing distributed computations.
- Extensions to the base protocols can give a flexible underpinning for multiple interaction models.
This raises the question of the slim waist of ICN, i.e., as research progresses, what should be the minimal feature set and what is the right extensibility model?
Dave concluded his talk with a few interesting questions:
- How can the networking insights we’ve gained from ICN protocols inform the construction of Information-Centric systems and applications?
  - Whether and how to utilize name-based routing to achieve robustness and performance scaling for distributed applications?
  - Where does caching help or not help, and how to best utilize caches?
  - Does pushing Names down to lower layers help latency? Resilience? Fairness?
- How can the insights we’ve gained from applying Information Centricity in applications inform what we bother to change the network to do, and what not?
  - Do things like multipath forwarding, in-network retransmission, or reflexive forwarding actually enable applications that are hard or infeasible to do without them?
  - Is there a big win for wireless networks in terms of optimizing a scarce resource or having more robust and responsive mobility characteristics?
More details can be found in the presentation slides.
Panel: ICN and the Metaverse – Challenges and Opportunities
I had the pleasure of being in a panel with Jeff Burke (UCLA) and Geoff Huston (APNIC), moderated by Alexander Afanasyev (Florida International University), discussing Metaverse challenges and opportunities for ICN.
Questions on Metaverse and ICN
Large-scale interactive and networked AR/VR/XR systems are now referred to as Metaverse, and the general assumption is that corresponding applications will be hosted on platforms, similar to those that are employed for web and social media applications today.
In the web, the platform approach has led to an accelerated development and growth of a few popular mainstream systems. On the other hand, several problems have been observed such as ubiquitous surveillance, lock-in effects, centralization, innovation stagnation, and cost overhead for achieving the required performance.
While these phenomena may have both technical and economic root causes, we would like to discuss:
- How should Metaverse systems be designed, and what would be important architectural pillars?
- What is the potential for re-imagining Metaverse with information-centric concepts and protocols?
- Would ICN enable or lead to profound architecturally unique approaches – or would protocols such as NDN be a drop-in replacement for QUIC, HTTP3 etc.?
- What are the challenges for building ICN-based Metaverse systems, and what is missing in today's ICN platforms?
As input to the discussion, Jeff Burke and I (together with Dave Oran) submitted two papers:
- Jeff Burke: Statement: As TCP/IP is to the web, ICN is to the…?
- Dirk Kutscher and David Oran: RESTful information-centric networking: statement
Research Directions
Jeff offered a list of really interesting research directions based on the notion that, in the Metaverse, host-based identifiers and end-to-end connections between hosts would be abstracted even further away than in today’s web. Client devices would fade into the background in favor of the data supplanting or augmenting the real world. Thus, a metaverse would consist of information not associated with the physical world unless it needs to describe or provide interaction with it. The experiential semantics are viscerally information-centric, which helps to motivate ICN research opportunities such as:
- Persistence: The information forming a metaverse persists across sessions and users.
- “Content” and Interoperability: Designing the relationships among metaverse-layer objects and the named packets that an ICN network moves and stores.
- Naming and Spatial Organization: How to best integrate knowledge from research in databases and related fields where these challenges have been considered for decades.
- Trust, Provenance, and Transactions: Using ICN to disentangle metaverse objects from the security provided by a source or a given channel of communication, with the named data representation secured at the time of publication instead.
RESTful ICN
In our paper on RESTful ICN, Dave Oran and I asked the question: given that most web applications are concerned with transferring named units of data (web resources, video chunks, etc.), can the REST paradigm be married with the data-oriented, receiver-driven operation of Information-Centric Networking (ICN), leveraging attractive ICN benefits such as consumer anonymity, stateful and symmetric forwarding, flow balance, in-network caching, and implicit object security?
We argue that this is feasible given some of the recent advances in ICN protocol development and that the resulting suite is simpler and potentially has better performance and robustness properties. Our sketch of an ICN-based protocol framework addresses secure and efficient establishment and continuation of REST communication sessions without giving up key ICN properties, such as consumer anonymity and flow balance.
Panel Discussion
The panel discussed the current socio-economic realities in the Internet and the Web and explored opportunities (and non-opportunities) for redesigns, and how ICN could be a potential enabler for that.
My personal view is that most of the potential dystopian outcomes of future Metaverse applications are independent of the enabling networking technology and the technology stack at large (security, naming, etc.). It is really important to understand the actual objectives of a specific system, i.e., who operates it and to which ends, similar to so-called social networks today. If the main objective is to create a more powerful advertising and manipulation platform, then such a system will exhibit yet unimaginable surveillance and tracking mechanisms – independent of the underlying network stack.
With respect to the technical design, I agree with Jeff Burke's proposed research directions. One particularly interesting question will be how to design an Information-Centric communication stack and corresponding APIs. I argued that it is not necessary to replicate existing interaction styles and protocol stacks from the TCP/IP (or QUIC) world. Instead, it should be more interesting and productive to discuss the fundamentally needed interaction classes such as
- High-performance multi-destination transfer
- Group communication and synchronization
- High-performance session-oriented communication with servers and peers (for which we proposed RESTful ICN).
The panel then also discussed how likely non-mainstream Metaverse systems would be adopted and whether the current socio-economic environment actually allows for that level of permissionless innovation – considering the network effects that Metaverse systems would be subjected to, much in the same way as so-called social networks.
Panel: Hard Lessons for ICN from IP Multicast?
Thomas Schmidt (HAW Hamburg) moderated a panel discussion with Jon Crowcroft (University of Cambridge), Dave Oran, and George Xylomenos (Athens University of Economics and Business) as panelists.
With the continued shift towards more and more live video streaming services over the Internet, scalable multi-destination delivery has become more relevant again. For example, the recently chartered IETF Working Group on Media over QUIC (MOQ) is addressing the need for scalable multi-destination delivery and the unavailability of IP multicast as a platform by developing a QUIC-based overlay system that essentially uses information-centric concepts, albeit in a QUIC overlay network. Such a system would consist of a network of QUIC proxies, connected via individual QUIC connections, to emulate request forwarding and chunk-based video data distribution. Considering the non-negligible overhead and complexity, one might ask whether live video streaming over the Internet could be served by a better approach. Questions like this are being asked by the network service provider community (ISPs have to bear a lot of the overhead and overlay complexity) as well, for example in this APNIC blog posting by Jake Holland titled Why inter-domain multicast now makes sense.
This panel was inspired by a statement paper submitted by Jon Crowcroft titled [Hard lessons for ICN from IP multicast](https://dl.acm.org/doi/10.1145/3517212.3558086). In this brief statement, Jon traced the line of thought from Internet multicast through to Information Centric Networking, and used this to outline what he thinks should have been the priorities in ICN work from the start.
The statement paper discusses a few problems with IP multicast that have been largely acknowledged, such as difficulties in creating viable business models, unsolved security problems such as IP multicast being used as a DDoS platform, and interdomain multicast that has proven difficult to establish due to multicast routing scaling problems and the lack of robust pricing models. The second part of the paper then discusses ICN work that has been addressing some of the mentioned issues.
The paper gave rise to an interesting and controversial discussion at the panel. The most important point, in my opinion, is to characterize the ICN communication model correctly: it is correct that the combination of stateful forwarding, Interest aggregation, and caching enables an implicit multi-destination delivery service. It is implicit because consumers that ask for the same units of named data within a time frame on the order of the network RTT will send equivalent Interest messages, so that forwarders can multicast the data delivery to the faces they received such Interests from. In conjunction with opportunistic (or managed) caching by forwarders, this enables a very elegant multi-destination delivery service that can even cater to a wider variation of Interest sending times, as "late" Interests would be answered from caches.
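To illustrate the implicit multi-destination behavior described above, here is a toy model of a forwarder that aggregates equivalent Interests in its PIT and fans the returning Data out to all recorded faces, answering "late" Interests from its content store. It is a sketch of the general mechanism, not an actual forwarder implementation; the `face` and `upstream` objects are assumed stand-ins.

```python
# Toy model of PIT-based Interest aggregation and implicit multicast.
class Forwarder:
    def __init__(self, upstream):
        self.pit = {}             # name -> set of downstream faces waiting for it
        self.content_store = {}   # name -> cached Data
        self.upstream = upstream  # callable: forward an Interest towards the producer

    def on_interest(self, name, face):
        if name in self.content_store:
            face.send_data(name, self.content_store[name])   # "late" Interest served from cache
            return
        if name in self.pit:
            self.pit[name].add(face)                          # aggregate, do not re-forward
            return
        self.pit[name] = {face}
        self.upstream(name)                                   # only the first Interest goes upstream

    def on_data(self, name, data):
        self.content_store[name] = data                       # opportunistic caching
        for face in self.pit.pop(name, set()):                # implicit multicast to all waiting faces
            face.send_data(name, data)
```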
This is a different service model compared to the push-based IP multicast model. ICN does not provide such a service in the first place, but just applies its regular receiver-driven mode of operation, which works elegantly in the case of multiple consumers asking for the same data. It is probably fair to say that the ICN model caters to media-delivery use cases (one stream delivered to multiple consumers) but does not try to provide the more general IP multicast service model (Any Source Multicast). However, by extension, the ICN approach could be applied to multi-source scenarios as well – the system would build implicit delivery trees from any source to current consumers, without requiring extra machinery.
With this, if you like, simpler service model, ICN fundamentally does not inherit many of the problems that prohibit IP multicast in the Internet: the system is receiver-driven, which simply eliminates DDoS threats (on the packet level). It is also not clear whether ICN would need anything special to provide this service in inter-domain settings (except for general ICN routing in the Internet, which is a general, but different, research question).
Acknowledging this conceptual and practical difference, there are obviously other interesting research questions that ICN multi-destination delivery entails, for example performance and jitter reduction in the presence of caching and other transport questions.
Overall, a good time to talk about multi-destination delivery and to keep thinking about missing pieces and potential future work in ICN.
Enabling Distributed Applications
One paper presentation session was focused on distributed applications – a very interesting and relevant ICN research area. It featured three great papers:
SoK: The evolution of distributed dataset synchronization solutions in NDN
This paper by Philipp Moll, Varun Patil, Lan Wang, and Lixia Zhang systematizes the knowledge about distributed dataset synchronization in ICN, or Sync for short, which, according to the authors, plays the role of a transport service in the Named Data Networking (NDN) architecture. A number of NDN Sync protocols have been developed over the last decade. For this paper, they conducted a systematic examination of NDN Sync protocol designs, identified common design patterns, revealed insights behind different design approaches, and collected lessons learned over the years.
Sync enables new ways of thinking about coordination and general communication in distributed ICN systems, and I encourage everyone to read this for a good overview of the different proposed systems and their properties.
There are also some open research questions around Sync, such as large-scale applicability, alternatives to using Interest multicast for discovery, and more – a good topic to work on!
DICer: distributed coordination for in-network computations
This paper by Uthra Ambalavanan, Dennis Grewe, Naresh Nayak, Liming Liu, Nitinder Mohan, and Jörg Ott is a nice product of the Piccolo project, which I had the pleasure to set up and co-lead.
Application domains such as automotive and the Internet of Things may benefit from in-network computing to reduce the distance data travels through the network and the response time. Information Centric Networking (ICN) based compute frameworks such as Named Function Networking (NFN) are promising options due to their location independence and loosely-coupled communication model.
However, unlike current operations, such solutions may benefit from orchestration across the compute nodes to use the available resources in the network better. In this paper, the authors adopted the State Vector Synchronization (SVS), an application dataset synchronization protocol in ICN, to enhance the neighborhood knowledge of in-network compute nodes in a distributed fashion. They designed distributed coordination for in-network computation (DICer) that assists the service deployments by improving the resolution of compute requests.
Kua: a distributed object store over named data networking
This paper by Varun Patil, Hemil Desai, and Lixia Zhang describes a distributed object store in NDN.
Applications such as machine learning training systems or log collection generate and consume large amounts of data. Object storage systems provide a simple abstraction to store and access such large datasets. These datasets are typically larger than the capacities of individual storage servers, and require fault tolerance through replication. This paper presents Kua, a distributed object storage system built over Named Data Networking (NDN).
The data-centric nature of NDN helps Kua maintain a simple design while catering to requirements of storing large objects, providing fault tolerance, low latency and strong consistency guarantees, along with data-centric security.
ICN Applications and Wireless Networking
The session on ICN Applications and Wireless Networking featured four papers:
N-DISE: NDN-based data distribution for large-scale data-intensive science
This paper by Yuanhao Wu, Faruk Volkan Mutlu, et al. describes N-DISE (NDN for Data-Intensive Science Experiments).
To meet unprecedented challenges faced by the world’s largest data- and network-intensive science programs, the authors designed and implemented a new, highly efficient and field-tested data distribution, caching, access and analysis system for the Large Hadron Collider (LHC) high energy physics (HEP) network and other major science programs. They developed a hierarchical Named Data Networking (NDN) naming scheme for HEP data, implemented new consumer and producer applications to interface with the high-performance NDN-DPDK forwarder, and built on recently developed high-throughput NDN caching and forwarding methods.
The experiments in this paper include delivering LHC data over the wide area network (WAN) testbed at throughputs exceeding 31 Gbps between Caltech and StarLight, with dramatically reduced download times.
Building a secure mHealth data sharing infrastructure over NDN
In this paper, Saurab Dulal, Nasir Ali, et al. describe an NDN-based mHealth system called mGuard.
Exploratory efforts in mobile health (mHealth) data collection and sharing have achieved promising results. However, fine-grained contextual access control and real-time data sharing are two of the remaining challenges in enabling temporally-precise mHealth intervention. The authors have developed an NDN based system called mGuard to address these challenges. mGuard provides a pub-sub API to let users subscribe to real-time mHealth data streams, and uses name-based access control policies and key-policy attribute-based encryption to grant fine-grained data access to authorized users based on contextual information.
Delay-tolerant ICN and its application to LoRa
I have co-authored this paper together with Peter Kietzmann, José Alamos, Thomas C. Schmidt, and Matthias Wählisch.
Connecting low-power long-range wireless networks, such as LoRa, to the Internet imposes significant challenges because of the vastly longer round-trip-times (RTTs) in these constrained networks. In our paper on "Delay-Tolerant ICN and Its Application to LoRa" we present an Information-Centric Networking (ICN) protocol framework that enables robust and efficient delay-tolerant communication to edge networks, including but not limited to LoRa. Our approach provides ICN-idiomatic communication between networks with vastly different RTTs for different use cases. We applied this framework to LoRa, enabling end-to-end consumer-to-LoRa-producer interaction over an ICN-Internet and asynchronous ("push") data production in the LoRa edge. Instead of using LoRaWAN, we implemented an IEEE 802.15.4e DSME MAC layer on top of the LoRa PHY layer and ICN protocol mechanisms in the RIOT operating system.
For our experiments, we connected constrained LoRa nodes and gateways on IoT hardware platforms to a regular, emulated ICN network and performed a series of measurements that demonstrate robustness and efficiency improvements compared to standard ICN.
iCast: dynamic information-centric cross-layer multicast for wireless edge network
This paper by Tianlong Li, Tian Song, Yating Yang, and Jike Yang presents iCast, short for dynamic information-centric multicast, to enable dynamic multicast in the link layer.
Native multicast support in Named Data Networking (NDN) is an attractive feature, as multicast content delivery can reduce the redundant traffic and improve the network performance, especially in wireless edge networks. With their visibility into Interest and Data names, NDN routers automatically aggregate the same requests from different end hosts and establish network-layer multicast. However, the current link-layer multicast based on host-centric MAC address management is inflexible. Consequently, supporting NDN dynamic multicast with the current link-layer architecture remains a challenge.
iCast enables dynamic multicast in the link layer based on three main contributions:
- iCast integrates NDN native multicast with the host-centric link layer while maintaining the host-centric properties of the current link layer.
- iCast achieves per-packet dynamic multicast in the link layer, and the authors further propose a hash-based iCast variant for dynamic connection.
- iCast has been implemented in a real testbed, and the evaluation results show that iCast reduces traffic by up to 59.53% compared with vanilla NDN. iCast bridges the gap between NDN multicast and the host-centric link-layer multicast.
ACM ICN-2020 Highlights
ACM ICN-2020 took place online from September 29th to October 1st 2020. This is a quick summary of the main technical highlights from my personal perspective. Overall, it was a high-quality event, and it was great to see the progress that is being made by different teams. Here, I am focusing specifically on Architecture, Content Distribution, Programmability, and Performance. If you are interested in the complete program, all papers, presentation material, and presentation videos are available on the conference website.
Architecture
The Information-Centric Networking concept can be implemented in different ways (and some people would argue that some overlay systems for content distribution and data processing are essentially information-centric). ICN systems have often been associated with clean-slate approaches, requiring a difficult-to-imagine fork-lift replacement of larger parts of the infrastructure. While this has never been the case (because you can always run ICN protocols over different underlays or directly map the semantics to IPv6), it is still interesting to learn about new approaches and to compare existing data-oriented frameworks to pure ICN systems.
Named-Data Transport
In their paper Named-Data Transport: An End-to-End Approach for an Information-Centric IP Internet (Presentation) Abdulazaz Albalawi and J. J. Garcia-Luna-Aceves have developed an alternative implementation of the accessing named data concept called Named-Data Transport (NDT) that can leverage existing Internet routing and DNS, while still providing the general properties (accessing named-data securely, in-network caching, receiver-driven operation).
The system is based on three components: 1) A connection-free reliable transport protocol, called Named Data Transport Protocol (NDTP), 2) a DNS extension (my-DNS) for manifest records that describe content items and their chunks, and 3) NDT Proxies that act as transparent caches and that track pending requests, similar to ICN forwarders, but at the transport layer.
In NDT, content names are based on DNS domain names, and each name is mapped to an individual manifest record (in the DNS). These records provide a mapping to a list of IP addresses hosting content replicas. When requesting such records, the idea is that the system would be able to apply similar traffic steering as today's CDNs, i.e., provide the requestor with a list of topologically close locations. Producers would be responsible for producing and publishing such manifests.
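As a rough illustration of the kind of information such a manifest record could carry (the actual my-DNS record format is defined in the paper and not reproduced here), consider the following hypothetical structure:

```python
# Hypothetical illustration of a my-DNS manifest record for NDT; all field
# names and the example content name are made up for illustration.
from dataclasses import dataclass


@dataclass
class ManifestRecord:
    content_name: str         # DNS-based content name, e.g. "video.example.com/intro"
    chunk_size: int           # bytes per chunk
    chunk_digests: list[str]  # integrity hashes, one per chunk
    replicas: list[str]       # IP addresses hosting replicas, ideally ordered
                              # by topological closeness to the requester


def pick_replica(record: ManifestRecord) -> str:
    # A CDN-like resolver would already have ordered the replicas for this
    # requester, so a simple choice is the first entry.
    return record.replicas[0]
```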
The Named Data Transport Protocol (NDTP) is a receiver-driven transport protocol (on top of UDP) used by consumers and NDT Proxies which behave logically like ICN forwarders. There is more to the whole approach (such as security, name privacy etc.).
In my view, NDT is an example of a resolution-based ICN system with interesting ideas for deployability. In principle, resolution-based ICN has been pursued by other approaches before (such as NetInf). In general, these systems have a better initial deployment story at the cost of requiring additional infrastructure (and resolution steps during operation).
RESTful Information-Centric Web of Things
In the Internet of Things, ICN has demonstrated many benefits in terms of reduced code complexity, better data availability, and reduced communication overhead compared to many vertically integrated IoT stacks and location/connection-based protocols.
In their paper Toward a RESTful Information-Centric Web of Things: A Deeper Look at Data Orientation in CoAP (presentation), Cenk Gündoğan, Christian Amsüss, Thomas C. Schmidt, and Matthias Wählisch compare a CoAP and OSCORE (Object Security for Constrained RESTFul Environments) based network of CoAP clients, servers, and proxies with a corresponding NDN setup.
The authors investigated the possibility of building a RESTful Web of Things that adheres to ICN first principles using the CoAP protocol suite (instead of a native ICN protocol framework). The results showed that, since CoAP is quite modular and can be used in different ways, this is indeed possible if one is willing to give up strict end-to-end semantics and to introduce proxies that mimic ICN forwarder behavior. (The paper reports on many other things, such as extensive performance measurements and comparisons.)
In my view, this is an interesting Gedankenexperiment, and there was a lively discussion at the conference. One of the discussion topics was the question of how accurate the comparison really is. For example, while it is possible to construct a CoAP proxy chain that mimics ICN behavior, real-world scenarios would require additional functionality in the CoAP network (routing, dealing with disruptions, etc.) that might lead to a different level of complexity (that would possibly be less pronounced in a native ICN environment).
Still, the important take-away of this paper is that some applications of CoAP & OSCORE exhibit information-centric properties, and it is an interesting question whether, for a green-field deployment, the user would not be better served by a native ICN approach.
Content Distribution
Content Distribution and ICN have a long history, sometimes challenged by some misunderstandings. Because one of the early ICN approaches was called Content-Centric Networking (CCN), it was often assumed that ICN would disrupt or replace Content Distribution Networks (CDNs) or that it was a CDN-like technology.
While ICN will certainly help with large-scale content distribution and potentially also change/simplify CDN operations, the core idea is actually about accessing named data securely as a principal network service -- for all applications (that's why Named Data Networking -- NDN -- is a better name).
Managed content distribution as such will continue to be important, even in an ICN world. Surely, it will enjoy better support from the network than today's CDNs can expect, thus enabling new exciting applications and simplifying operations, but I prefer avoiding the notion of ICN replacing CDN.
When looking at actual networks and applications today, it is fair to say that almost nothing works without CDNs. What we are seeing today is hyperscalers and essentially all the (so-called) OTT video providers extending their systems into ISP networks by simply shipping standalone edge caches, such as Netflix OCA servers, to ISPs.
Each of these providers has their own special requirements for how to map customers to edge caches, how to implement traffic steering, etc., which is painful enough for operators already. I expect this to become even more pressing as we shift more and more linear live TV to the Internet. Flash-crowd audiences such as viewers of UEFA Champions' League matches will require a massive extension of the already extensive edge caching infrastructure, requiring massive investments but also adding significant complexity with respect to traffic steering and guaranteeing a decent viewing experience.
In that context, it is no wonder that people try to resort to IP multicast for ensuring a more scalable last-mile distribution, such as this proposal by Akamai and others. Marrying IP multicast with a CDN overlay is (IMO) not exactly complexity reduction, so I think we are now at a tipping point where the Internet in terms of concepts and deployable physical infrastructure can provide many cool services, but where the limited features of the network layers require a prohibitive amount of complexity -- to an extent where people start looking for better solutions.
At ICN-2020, CDN was thus discussed quite extensively again -- with many interesting, complementary contributions.
Keynote by Bruce Maggs on The Economics of Content Distribution
We were extremely happy to have Bruce Maggs (Emerald Innovations, on leave from Duke University, ex NEC researcher, one of the founding employees of Akamai) delivering his keynote on the Economics of Content Delivery. In his talk Bruce explained different economic aspects (flow of payments, cost of goods sold) but also challenges for different CDN services such as live-streaming.
The take-aways for ICN were:
- Incentives and cost must be aligned
- Performance benefits from caching
  - Reducing latency is valuable to content providers
  - Reducing network traffic is valuable to ISPs
- If there was caching at the core (in addition to the edge):
  - What is the additional benefit?
  - Who pays for that?
- Protocol innovation is still possible
  - In the past, people thought that HTTP/TLS/TCP/IP would be difficult to overcome
  - QUIC demonstrates that new protocols can be introduced
The socio-economic discussion resonated quite well with me, as some of the earlier ICN projects in Europe tried to address these aspects relatively early, around 2008. I believe this was due to the operator and vendor influence at the time. In retrospect, I would say that the approaches at that time were possibly too top-down and premature (trying to revert value chains and find new business models). It is only now that we understand the economics of CDN, its complexity, and the real costs that (in my view) represent barriers to innovation -- and that we can start to imagine actually implementing different systems.
Far Cry: Will CDNs Hear NDN's Call?
In their paper Far Cry: Will CDNs Hear NDN's Call? (presentation), Chavoosh Ghasemi, Hamed Yousefi, and Beichuan Zhang tried to compare NDN with enterprise CDN (a particular variant of CDN) with respect to caching and retrieval of static content.
In their work, the authors deployed an adaptive video streaming service over three different networks: Akamai, Fastly, and the NDN testbed. They had users in four different continents and conducted a two-week experiment, comparing Quality of Experience, Origin workload, failure resiliency, and content security.
I cannot summarize all of the results here, but the conclusions by the authors were:
- CDNs outperform the current NDN testbed deployment in terms of QoE (achievable video resolution in a DASH-setting)
- Origin workload and failure resiliency are mainly the products of the network design -- and the NDN testbed outperforms current CDNs
- More as an interpretation: NDN can realize a resilient, secure, and scalable content network given appropriate software and protocol maturity and hardware resources.
The paper was discussed intensively at the conference; for example, it was debated how comparable the plain NDN testbed and its network service really are -- to a production-level CDN.
In my view, the value of this paper lies in the created experiment facilities and the attempt to establish some ground truth (based on current NDN maturity). I hope that this work can be leveraged by more experiments in the future.
iCDN: An NDN-based CDN
In their paper iCDN: An NDN-based CDN (presentation), Chavoosh Ghasemi, Hamed Yousefi, and Beichuan Zhang (i.e., the same authors), pursue a more forward-looking approach. In this paper, they develop a CDN service based on ICN mechanisms, i.e., trying to conceive a future CDN system that does not need to take the current network's limitations into account.
One of the interesting ICN properties is that the main service of accessing named data does not require any notion of location. Sometimes people assume that an Information-Centric system always needs to map names to locators such as IP addresses, but this is a really limited view. Instead, it is possible to build the network solely on forwarding INTERESTs for named data based on forwarding information for that same namespace. A forwarder may have more than one forwarding information base entry for the same name -- from a consumer (application) perspective these are completely equivalent.
Because of intrinsic object security, it does not matter from which particular host a content object is served. There can be several copies -- all equivalent. When creating copies of original content, e.g., by cloning a data repository, the new copy needs to be announced (by injecting routing information), and from that point on, it is reachable without any additional management, configuration, or other out-of-band mechanisms.
When applying this notion to CDN scenarios, it is easy to understand the simplification opportunities. In ICN, content repositories can be added to the network, and in-network name-based forwarding will find the closest copy automatically.
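The following toy sketch illustrates this point: a name-based FIB can hold several equally valid next hops for the same prefix (several copies of the content), and forwarding simply picks one of them, e.g., the cheapest. Nothing here is specific to iCDN; it only illustrates the general mechanism, and the names and cost metric are made up.

```python
# Toy name-based FIB with multiple equivalent entries per prefix.
class Fib:
    def __init__(self):
        self.entries = {}   # name prefix -> list of (next_hop, cost)

    def add_route(self, prefix, next_hop, cost):
        self.entries.setdefault(prefix, []).append((next_hop, cost))

    def lookup(self, name):
        """Longest-prefix match over '/'-separated name components,
        returning the cheapest next hop among all equivalent copies."""
        components = name.split("/")
        for i in range(len(components), 0, -1):
            prefix = "/".join(components[:i])
            if prefix in self.entries:
                return min(self.entries[prefix], key=lambda e: e[1])[0]
        return None


fib = Fib()
fib.add_route("/cdn/example/movie", "repo-A", cost=3)
fib.add_route("/cdn/example/movie", "repo-B", cost=1)   # a newly announced copy
assert fib.lookup("/cdn/example/movie/seg1") == "repo-B"  # closest copy wins
```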
For iCDN, the authors have leveraged this basic notion and built an ICN-based CDN that does not need any client-to-cache mapping and overlay routing mechanisms. Based on that, iCDN features logical partitions and cache hierarchies for content namespaces (for acknowledging that there may be different CDN providers, hosting different content services).
iCDNs employ cache hierarchies to exploit on-path and off-path caches without relying on application-layer routing functions. The idea is to provide a scalable, adaptive solution that can cope with dynamic network changes as well as dynamic changes in content popularity.
There are more details to this approach, and of course the debate on what is the best ICN-based CDN design has just started. Still, this paper is an interesting contribution in my view, because it illustrates the opportunities for rethinking CDN nicely.
Programmability
Programmability and ICN have two facets: 1) implementing distributed computing with ICN (for example as in CFN -- Compute-First Networking) and 2) implementing ICN with programmable infrastructure. ACM ICN-2020 saw contributions in both directions.
Result Provenance in Named Function Networking
In their paper Result Provenance in Named Function Networking (presentation), Claudio Marxer and Christian Tschudin have leveraged their previous work on Named Function Networking (NFN) and developed a result provenance framework for distributed computing in NFN.
In this work, the authors augmented NFN with a data structure that creates transparency about the genesis of every evaluation result so that entities in the system can ascertain result provenance. The main idea is the introduction of so-called provenance records that capture metadata about the genesis of the computation result. The paper discusses the integration of these records into NDN and procedures for provenance checks and trust computation.
In my view, the interesting contribution of this work is the illustration of how the general concept of provenance verification can be implemented in a data-oriented system such as the ICN-based Named Function Networking framework. The results may be transferable (to some extent) to other ICN-based in-network computing systems, so I hope this paper will start a thread of activities on this subject.
ENDN: An Enhanced NDN Architecture with a P4-programmable Data Plane
In their paper ENDN: An Enhanced NDN Architecture with a P4-programmable Data Plane (presentation), Ouassim Karrakchou, Nancy Samaan, and Ahmed Karmouch present an NDN system that is implemented in a P4-programmable data plane, i.e., a system in which applications can interact with a control plane that configures the data plane according to the required services.
The work in this paper is based on the notion that applications specify their content delivery requirements to the network, i.e., to the control plane of a network. The control plane provides a catalogue of content delivery services, which are then translated into data plane configurations that ultimately get installed on P4 switches.
Examples of such services include Content Delivery Pattern services (whether the system is based on INTEREST/DATA or some stateful data forwarding), Content Name Rewrite services (enabling the network to rewrite certain names in INTERESTs), Adaptive Forwarding services (next-hop selection) etc.
In my view, this paper is interesting because it provides a relatively advanced perspective of how applications specify required behavior to a programmable ICN network. Moreover, the authors implemented this successfully on P4 switches and described relevant lessons learned and achievements in the paper.
Performance
Performance has historically always been an interesting topic in ICN. On the one hand, ICN provides substantial performance increases in the network due to its forwarding and caching features. On the other hand, it has been shown that implementing an ICN forwarder that operates at modern network line-speeds is challenging.
NDN-DPDK: NDN Forwarding at 100 Gbps on Commodity Hardware
In their paper NDN-DPDK: NDN Forwarding at 100 Gbps on Commodity Hardware (presentation), Junxiao Shi, Davide Pesavento, and Lotfi Benmohamed present their design of a DPDK-based forwarder.
The authors have developed a complete NDN implementation that runs on real hardware and that supports the complete NDN protocol and name matching semantics.
This work is interesting because the authors describe the different optimization techniques, including better algorithms and more efficient data structures, as well as making use of the parallelism offered by modern multi-core CPUs and multiple hardware queues with user-space drivers for kernel bypass.
This work represents the first software forwarder implementation that is able to achieve 100 Gbps without compromises in NDN protocol semantics. The authors have published the source at https://github.com/usnistgov/ndn-dpdk.
Keynote at IEEE HotICN-2019
I had the pleasure of being invited for a keynote at IEEE HotICN-2019 in Chongqing. I talked about key ICN properties (from my perspective), about general research areas, and three specific topics: Quality of Service, Forwarding Plane Interaction with the Routing System and Applications, and In-Network Computing.
ACM ICN-2019 Highlights
ACM ICN-2019 took place in the week of September 23 in Macau, SAR China. The conference was co-located with Information-Centric-Networking-related side events: the TouchNDN Workshop on Creating Distributed Media Experiences with TouchDesigner and NDN before and an IRTF ICNRG meeting after the conference. In the following, I am providing a summary of some highlights of the whole week from my (naturally very subjective) perspective.
Applications
ICN with its accessing-named-data-in-the-network paradigm is supposed to provide a different, hopefully better, service to applications compared to the traditional stack of TCP/IP, DNS, and application-layer protocols. Research in this space often addresses one of two interesting research questions: 1) What is the potential for building or re-factoring applications that use ICN, and what is the impact on existing designs; and 2) what requirements can be learned for the evolution of ICN, what services are useful on top of an ICN network layer, and/or how should the ICN network layer be improved?
Network Management
The best paper at the conference, Lessons Learned Building a Secure Network Measurement Framework using Basic NDN by Kathleen Nichols, took the approach of investigating how a network measurement system can be implemented without inventing new features for the NDN network layer. Instead, Kathleen's work explored the features and usability support mechanisms that would be needed for implementing her Distributed Network Measurement Protocol (DNMP) in terms of frameworks and libraries leveraging existing NDN. DNMP is a secure, role-based framework for requesting, carrying out, and collecting measurements in NDN forwarders. As such it represents a class of applications where applications both send and receive data that is organized by hierarchical topics in a namespace, which implies a conceptual approach where applications do not (want to) talk to specific producers but really operate in an information-centric style.
Communication in such a system involves one-to-many, many-to-one, and any-to-any communication about information (not data objects hosted at named nodes). DNMP employs a publish/subscribe model inspired by protocols such as MQTT, where publishers and subscribers communicate through hierarchically structured topics. Instead of using existing frameworks for data set reconciliation, the DNMP work includes the development of a lightweight pub/sub sync protocol called syncps that uses Difference Digests, solving the multi-party set reconciliation problem with prior context.
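As a generic illustration of pub/sub over hierarchically structured topics (the topic syntax and matching rule below are made-up examples, not DNMP's actual naming scheme), a minimal sketch could look like this:

```python
# Generic sketch of topic-based pub/sub with hierarchical prefix matching;
# topic names are hypothetical examples.
from collections import defaultdict


class TopicBus:
    def __init__(self):
        self.subscribers = defaultdict(list)   # topic prefix -> callbacks

    def subscribe(self, prefix, callback):
        # A subscription to a prefix receives all publications at or below it.
        self.subscribers[prefix].append(callback)

    def publish(self, topic, message):
        for prefix, callbacks in self.subscribers.items():
            if topic == prefix or topic.startswith(prefix + "/"):
                for cb in callbacks:
                    cb(topic, message)


bus = TopicBus()
bus.subscribe("/measurements/rtt", lambda t, m: print(t, m))
bus.publish("/measurements/rtt/forwarder7", {"avg_ms": 12.3})
```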
In a role-based system such as DNMP that uses secure Named-Data-based communication, automating authentication and access control is typically a major challenge. DNMP leverages earlier work on Trust Schemas but extends it with a Versatile Security Toolkit (VerSec) that integrates with the transport framework to simplify the integration of trust rules. VerSec is about to be released under GPL.
I found this paper really interesting to read because it is a nice illustration of what kind of higher-layer services and APIs non-trivial applications require. Also, the approach of using the NDN network layer as is, but implementing additional functionality as libraries and frameworks, seems promising with respect to establishing a stable network layer platform where innovation can happen independently on top. Moreover, the paper embraces Information-Centric thinking nicely and demonstrates the concept with a relevant application. Finally, I am looking forward to seeing the VerSec software, which could make it easier for developers to implement rigorous security and validation in their applications.
Distributed Media Experiences
Jeff Burke and Peter Gusev organized the very cool TouchNDN workshop on Creating Distributed Media Experiences with TouchDesigner and NDN at the School of Creative Media at the City University of Hong Kong (summary presentation). The background is that video distribution and access have evolved significantly from linear TV broadcast to today's applications. Yet many systems still seem to be built in a way that optimizes for linear video streaming to consumer eyeballs, with a frame-sequence abstraction.
Creative media applications such as Live Show Control (example) exhibit a much richer interaction with digital video, often combining 3D modelling with flexible, non-sequential access to video based on (for example) semantics, specific time intervals, quality layers, or spatial coordinates.
Combine this with dynamic lighting, sound control, and instrumentation of theater effects, and you get an idea of an environment where various pieces of digital media are mixed together creatively and spontaneously. Incidentally, a famous venue for such an installation is The Spectacle at MGM Cotai, close to the venue of ACM ICN-2019 in Macau.
Derivative's TouchDesigner is a development platform for such realtime user experiences. It is frequently used for projection mapping, interactive visualization, and other applications. The Center for Research in Engineering, Media and Performance (REMAP) has developed an integration of NDN with TouchDesigner's realtime 3D engine via the NDN Common Name Library stack as a platform for experimenting with data-centric media. The objective is to provide a more natural networked media platform that does not have to deal with addresses (L2 or L3) but enables applications to publish and request media assets in namespaces that reflect the structure of the data. Combining this with other general ICN properties such as implicit multicast distribution and in-network caching results in a much more adequate platform for creating realtime multimedia experiences.
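To illustrate what publishing under a data-reflecting namespace could look like, here is a small sketch that constructs hierarchical names for video frames by time interval and quality layer. The namespace layout is entirely my own assumption for illustration, not the naming conventions used by TouchNDN or the NDN Common Name Library.

```python
# Sketch: building hierarchical names for media assets so that applications
# can request by semantics, time, or quality rather than by host address.
# The namespace layout below is invented for illustration only.

def frame_name(show: str, scene: str, t_ms: int, quality: str, frame: int) -> str:
    return f"/media/{show}/{scene}/t={t_ms}/{quality}/frame={frame}"

# A renderer can enumerate exactly the data it needs: a one-second slice of a
# low-latency quality layer for one scene, without knowing who produced it.
names = [frame_name("touchndn-demo", "intro", t, "low-latency", i)
         for i, t in enumerate(range(0, 1000, 40))]  # ~25 fps

print(names[0])    # /media/touchndn-demo/intro/t=0/low-latency/frame=0
print(len(names))  # 25 requests, each servable from any cache or producer on the path
```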
The TouchNDN workshop was one of REMAP's activities on converging their NDN research with artistic and cultural projects, trying to get NDN out of the lab and into the hands of creators in arts, culture, and entertainment. It is also an eye-opener for the ICN community, offering a look at trends and opportunities in real-time rendering and visual programming, which seem to bear lots of potential for innovation -- both from the artistic and the networking perspective.
Personally, I think it's a great, inspiring project that teaches us a lot about properties and metrics that are more interesting (flexible access, natural APIs, usability, utility for enabling innovation) than the usual quantitative performance metrics from the last century.
Inter-Server Game State Synchronization
Massive Multiplayer Online Role-Playing Games (MMORPGs) allow up to thousands of players to play in the same shared virtual world. These worlds are often distributed over multiple servers of a server cluster, because a single server would not be able to handle the computational load caused by the large number of players interacting in a huge virtual world. Distributing the world over a server cluster requires synchronizing relevant game state information among the servers: every server has to send updated game state information to the other servers in the cluster, resulting in redundant traffic when using current IP infrastructure.
In their paper Inter-Server Game State Synchronization using Named Data Networking, Philipp Moll, Sebastian Theuermann, Natascha Rauscher, Hermann Hellwagner, and Jeff Burke started from the assumption that ICN's implicit multicast support and the ability to decouple game state information from the producing server could reduce the amount of redundant traffic and also help with robustness and availability in the presence of server failures.
They built an ICN-ified version of Minecraft and developed protocols for synchronizing game state in a server cluster over NDN. Their evaluation results indicated the benefits of an ICN-based approach for inter-server game state synchronization despite larger packet overheads (compared to TCP/IP). The authors made all artefacts required for reproducing the results available on GitHub.
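A minimal sketch of the basic idea, assuming a made-up namespace for game-state deltas: each server publishes versioned updates for the world regions it owns, and any other server can fetch them by name, so that forwarders can aggregate identical requests and serve cached copies. This only illustrates the naming pattern; it is not the authors' protocol or implementation.

```python
# Illustration of name-based game-state synchronization: servers publish
# versioned state deltas per world region; peers fetch them by name.
# The namespace and the in-memory "network" below are illustrative only.

content_store = {}  # stands in for NDN forwarders' caches / producers

def publish_delta(region: str, version: int, delta: dict) -> str:
    name = f"/game/world/{region}/state/v={version}"
    content_store[name] = delta
    return name

def fetch_delta(region: str, version: int):
    # In NDN this would be an Interest; repeated requests for the same name
    # are satisfied from caches instead of hitting the producing server again.
    return content_store.get(f"/game/world/{region}/state/v={version}")

publish_delta("region-7", 42, {"entity": "creeper-13", "pos": (10, 64, -3)})
for peer in ("server-B", "server-C", "server-D"):
    print(peer, "got", fetch_delta("region-7", 42))
```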
Panel on Industry Applications of ICN
I had the pleasure of moderating a panel on industry applications of ICN, featuring Richard Chow (Intel), Kathleen Nichols (Pollere Inc.), and Kent Wu (Hong Kong Applied Science and Technology Research Institute). Recent ICN research has produced various platforms for experimentation and application development. One welcome development consists of initial ICN deployment mechanisms that do not require a forklift replacement of large parts of the Internet. At the same time, new technologies and use cases, such as edge computing, massively scalable multiparty communication, and linear video distribution, impose challenges on the existing infrastructure. This panel with experts from different application domains discussed pain points with current systems, opportunities and promising results for building specific applications with ICN, and challenges, shortcomings, and ideas for future evolution of ICN.
What was interesting to learn was how different groups pick up the results and available software to build prototypes for research and industry applications and what they perceive as challenges in applying ICN.
Decentralization
Growing concerns about centralization, surveillance, and loss of digital sovereignty are currently fuelling many activities around P2P-inspired communication and storage networks and decentralized web ("web3") efforts, as well as groups such as the IRTF Research Group on Decentralized Internet Infrastructure (DINRG). One particular concern is the almost universal dependency on central cloud platforms for anchoring trust in applications that are actually of a rather local nature, such as smart home platforms. Since such platforms often entail rent-seeking or surveillance-based business models, it is becoming increasingly important to investigate alternatives.
NDN/CCN-based ICN, with its built-in PKI system, provides some elements for an alternative design. In NDN/CCN it is possible to set up secure communication relationships without necessarily depending on third-party platforms, which could be leveraged for more decentralized designs of IoT systems, social media, and many other applications.
Decentralized and Secure Multimedia Sharing
A particularly important application domain is multimedia sharing where surveillance and manipulation campaigns by the dominant platforms have led to the development of alternative federated social media applications such as Mastodon and Diaspora. In their paper Decentralized and Secure Multimedia Sharing Application over Named Data Networking Ashlesh Gawande, Jeremy Clark, Damian Coomes, and Lan Wang described their design and implementation of npChat (NDN Photo Chat), a multimedia sharing application that provides similar functionality as today’s media-sharing based social networking applications without requiring any centralized service providers.
The major contributions of this work include identifying the specific requirements for a fully decentralized application, and designing and implementing NDN-based mechanisms that enable users to discover other users in the local network and through mutual friends, to build friendships via multi-modal trust establishment mirrored from the real world, to subscribe to friends' multimedia data updates via pub/sub, and to control access to their own published media.
This paper is interesting in my view because it illustrates the challenges and some design options nicely. It also suggests further research on namespace design, name privacy, and trust models. The authors developed an NDN-based prototype for Android that is expected to appear in the Google Play Store soon.
Exploring the Relationship of ICN and IPFS
We were happy to have David Dias, Adin Schmahmann, Cole Brown, and Evan Miyazono from Protocol Labs at the conference, where they gave a tutorial on IPFS that also touched upon the relationship between IPFS and some ICN approaches.
Protocol Labs' InterPlanetary File System (IPFS) is a peer-to-peer, content-addressable distributed file system that seeks to connect all computing devices with the same system of files. It is an open-source, community-driven project, with reference implementations in Go and JavaScript and a global community of millions of users. IPFS resembles past and present efforts to build and deploy Information-Centric Networking approaches to content storage, resolution, distribution, and delivery. IPFS and libp2p, the modular network stack of IPFS, are based on name-resolution-based routing. The resolution system is based on a Kademlia DHT, and content is addressed by flat hash-based names. IPFS already sees significant real-world usage and is projected to become one of the main decentralized storage platforms in the near future. The objective of the tutorial was to familiarize the audience with IPFS and enable them to use the tools provided by the project for research and development.
Interestingly, IPFS bears quite some similarities to earlier ICN systems such as NetInf but uses traditional transport and application-layer protocols for the actual data transfer. Interesting research questions in this space are how the IPFS system could be improved with today's ICN technology (as an underlay), and how the design of a future IPFS-like system could leverage additional ICN mechanisms such as Trust Schemas, data set reconciliation protocols, and remote method invocation. The paper Towards Peer-to-Peer Content Retrieval Markets: Enhancing IPFS with ICN by Onur Ascigil, Sergi Reñé, Michał Król et al. explored some of these options.
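As a reminder of how flat hash-based naming differs from ICN's hierarchical names, the sketch below derives a name directly from the content so that any peer can verify the data against its name. It mimics the idea of a content identifier but deliberately does not reproduce IPFS's actual CID encoding (multibase/multicodec/multihash).

```python
# Content addressing in the style of IPFS: the name is derived from the data
# itself, so anyone holding the data can verify it against the name.
# NOTE: this is a simplified illustration, not the real CID format.

import hashlib

def content_name(data: bytes) -> str:
    return "sha256:" + hashlib.sha256(data).hexdigest()

def verify(name: str, data: bytes) -> bool:
    return content_name(data) == name

blob = b"hello, interplanetary world"
name = content_name(blob)
print(name)                       # flat, self-certifying identifier
print(verify(name, blob))         # True: any peer can check integrity locally
print(verify(name, b"tampered"))  # False
```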
IoT
IoT is one of the interesting application areas for ICN, especially IoT in constrained environments, where the more powerful forwarding model (stateful forwarding and in-network caching) and the associated possibility of more fine-grained control of storage and communication resources offer significant optimization potential (which was also a topic at this year's conference).
QoS Management in Constrained NDN Networks
Quality of Service (QoS) in the IP world mainly manages forwarding resources, i.e., link capacities and buffer spaces. In addition, Information Centric Networking (ICN) offers resource dimensions such as in-network caches and forwarding state. In constrained wireless networks, these resources are scarce with a potentially high impact due to lossy radio transmission. In their paper Gain More for Less: The Surprising Benefits of QoS Management in Constrained NDN Networks Cenk Gündoğan, Jakob Pfender, Michael Frey, Thomas C. Schmidt, Felix Shzu-Juraschek, and Matthias Wählisch explored the two basic service qualities (i) prompt and (ii) reliable traffic forwarding for the case of NDN. The resources that were taken into account are forwarding and queuing priorities, as well as the utilization of caches and of forwarding state space. The authors treated QoS resources not only in isolation, but also correlated their use on local nodes and between network members. Network-wide coordination is based on simple, predefined QoS code points. The results indicate that coordinated QoS management in ICN is more than the sum of its parts and exceeds the impact QoS can have in the IP world.
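To illustrate one of these resource dimensions in isolation, here is a small sketch of forwarding prioritization by a predefined QoS code point. The code points and the queueing discipline are simplified assumptions of mine, not the coordination scheme evaluated in the paper.

```python
# Sketch: serving Interests according to predefined QoS code points.
# Two illustrative classes: 0 = prompt (latency-sensitive), 1 = reliable (bulk).
# This is a toy priority queue, not the scheme from the paper.

import heapq
import itertools

class InterestQueue:
    def __init__(self):
        self._heap = []
        self._counter = itertools.count()  # tie-breaker keeps FIFO order per class

    def push(self, codepoint: int, name: str):
        heapq.heappush(self._heap, (codepoint, next(self._counter), name))

    def pop(self):
        codepoint, _, name = heapq.heappop(self._heap)
        return codepoint, name

q = InterestQueue()
q.push(1, "/sensors/bulk/log/seq=17")
q.push(0, "/actuators/valve-3/cmd")   # prompt traffic overtakes bulk traffic
q.push(1, "/sensors/bulk/log/seq=18")
while True:
    try:
        print(q.pop())
    except IndexError:
        break
```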
What I found interesting about this paper is the validation in real-world experiments, which demonstrated impressive improvements based on the coordinated QoS management approach. This work is timely considering the current ICN QoS discussion in ICNRG, for example in draft-oran-icnrg-qosarch. Also, the authors made their artefacts available on GitHub to enable reproduction of their results.
How Much ICN Is Inside of Bluetooth Mesh?
Bluetooth mesh is a new mode of Bluetooth operation for low-energy devices that offers group-based publish-subscribe as a network service with additional caching capabilities. These features resemble concepts of Information-Centric Networking (ICN), and the analogy to ICN has been repeatedly drawn in the Bluetooth community. In their paper Bluetooth Mesh under the Microscope: How much ICN is Inside?, Hauke Petersen, Peter Kietzmann, Cenk Gündoğan, Thomas C. Schmidt, and Matthias Wählisch compared Bluetooth mesh with ICN both conceptually and in real-world experiments. They contrasted both architectures and their design decisions in detail, and conducted experiments on an IoT testbed using NDN/CCNx and Bluetooth mesh on constrained RIOT nodes.
Interestingly, the authors found that the implementation of ICN principles and mechanisms in Bluetooth Mesh is rather limited. In fact, Bluetooth Mesh performs flooding without content caching and merely uses the equivalent of multicast addresses as a surrogate for names. Based on these findings, the authors discuss options for how ICN support for Bluetooth could or should look, so the paper is interesting both for understanding the actual workings of Bluetooth Mesh and for ideas on improving it. The authors made their artefacts available on GitHub to enable reproduction of their results.
ICN and LoRa
LoRa is an interesting technology because of its use of license-free sub-gigahertz spectrum and its bi-directional communication capabilities. We were happy to have Kent Wu and Xiaoyu Zhao from ASTRI at the conference and the ICNRG meeting, where they talked about their LoRa prototype development for a smart metering system for water consumption in Hong Kong. In addition, the ICNRG discussed different options for integrating ICN and LoRa and got an update from Peter Kietzmann on the state of LoRa support in the RIOT OS. This is an exciting area for innovation, and we expect more work and interesting results in the future.
New Frontiers
Applying ICN to big data storage and processing and to distributed computing are really promising research directions that were explored by papers at the conference.
NDN and Hadoop
The Hadoop Distributed File System (HDFS) is a network file system used by multiple widely-used big data frameworks that can scale to run on large clusters. In their paper On the Power of In-Network Caching in the Hadoop Distributed File System, Eric Newberry and Beichuan Zhang evaluated the effectiveness of using in-network caching on switches in HDFS-supported clusters in order to reduce per-link bandwidth usage in the network.
They discovered that some applications featured large amounts of data requested by multiple clients and that, by caching read data in the network, the average per-link bandwidth usage of read operations in these applications could be reduced by more than half. They also found that the choice of cache replacement policy could have a significant impact on caching effectiveness in this environment, with LIRS and ARC generally performing the best for larger and smaller cache sizes, respectively. The authors also developed a mechanism to reduce the total per-link bandwidth usage of HDFS write operations by replacing write pipelining with multicast.
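The basic effect is easy to reproduce with a toy model: if a switch on the path caches blocks, repeated reads of popular blocks no longer cross the upstream link. The sketch below uses a plain LRU cache for simplicity (the paper evaluates policies such as LIRS and ARC); the workload, block size, and cache size are made up for illustration.

```python
# Toy model of in-network caching for HDFS-style reads: count bytes on the
# link behind a caching switch with and without an LRU cache.
# Policy (the paper studies LIRS/ARC) and workload are illustrative only.

from collections import OrderedDict
import random

BLOCK_SIZE = 128  # MB, HDFS-like block granularity

def upstream_traffic(requests, cache_blocks):
    cache = OrderedDict()
    traffic = 0
    for block in requests:
        if cache_blocks and block in cache:
            cache.move_to_end(block)          # LRU hit: served from the switch
            continue
        traffic += BLOCK_SIZE                 # miss: fetch over the upstream link
        if cache_blocks:
            cache[block] = True
            if len(cache) > cache_blocks:
                cache.popitem(last=False)     # evict least recently used
    return traffic

random.seed(1)
# Skewed workload: a few popular blocks are read by many clients.
requests = [random.choice(range(20)) if random.random() < 0.8
            else random.randrange(1000) for _ in range(5000)]

print("no cache    :", upstream_traffic(requests, 0), "MB")
print("50-block LRU:", upstream_traffic(requests, 50), "MB")
```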
Overall, the evaluation results are promising, and it will be interesting to see how the adoption of additional ICN concepts and mechanisms and caching could be useful for big data storage and processing.
Compute-First Networking
Although, as a co-author, I am clearly biased, I am quite convinced of the potential for distributed computing and ICN that we described in a paper co-authored by Michał Król, Spyridon Mastorakis, David Oran, and myself.
Edge- and, more generally, in-network computing is receiving a lot of attention in research and industry fora. What are the interesting research questions from a networking perspective? In-network computing can be conceived in many different ways – from active networking, data plane programmability, running virtualized functions, service chaining, to distributed computing. Modern distributed computing frameworks and domain-specific languages provide a convenient and robust way to structure large distributed applications and deploy them on either data center or edge computing environments. The current systems suffer, however, from the need for a complex underlay of services to allow them to run effectively on existing Internet protocols. These services include centralized schedulers, DNS-based name translation, stateful load balancers, and heavy-weight transport protocols.
Over the past years, we have been working on alternative approaches, trying to find ways for integrating networking and computing in new ways, so that distributed computing can leverage networking capabilities directly and optimize usage of networking and computing resources in a holistic fashion. Here is a summary of our latest paper.
Compute First Networking (CFN): Distributed Computing meets ICN
From Application-Layer Overlays to In-Network Computing
Domain-specific distributed computing languages like LASP have gained popularity for their ability to simply express complex distributed applications like replicated key-value stores and consensus algorithms. Associated with these languages are execution frameworks like Sapphire and Ray that deal with implementation and deployment issues such as execution scheduling, layering on the network protocol stack, and auto-scaling to match changing workloads. These systems, while elegant and generally exhibiting high performance, are hampered by the daunting complexity hidden in the underlay of services that allow them to run effectively on existing Internet protocols. These services include centralized schedulers, DNS-based name translation, stateful load balancers, and heavy-weight transport protocols.
We claim that, especially for compute functions in the network, it is beneficial to design distributed computing systems in a way that allows for a joint optimization of computing and networking resources by aiming for a tighter integration of computing and networking. For example, leveraging knowledge about data location, available network paths and dynamic network performance can improve system performance and resilience significantly, especially in the presence of dynamic, unpredictable workload changes.
The above goals, we believe, can be met through an alternative approach to network and transport protocols: adopting Information-Centric Networking as the paradigm. ICN was conceived as a networking architecture based on the principle of accessing named data. Specific systems such as NDN and CCNx have accommodated distributed computation by adding support for remote function invocation, for example in Named Function Networking (NFN) and RICE (Remote Method Invocation in ICN), and distributed data set synchronization schemes such as PSync.
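As a rough illustration of naming computations rather than hosts, the sketch below encodes a function invocation as a hierarchical name and resolves it against a table of named functions. The name layout and lookup are assumptions for illustration only and not the RICE or NFN protocol mechanics, which also cover parameter transfer, authorization, and long-running results.

```python
# Sketch: a function invocation expressed as a named request, resolved by
# whichever node holds the named function. Name layout is illustrative only;
# RICE/NFN additionally handle parameter push, authorization, deferred results.

import json

FUNCTIONS = {
    "/fn/image/thumbnail": lambda args: f"thumbnail({args['name']}, {args['size']}px)",
}

def invoke(name: str):
    # e.g. /fn/image/thumbnail/params=<json>  -- everything is just a name.
    prefix, _, params = name.partition("/params=")
    result = FUNCTIONS[prefix](json.loads(params))
    # The result becomes a named, cacheable data object in its own right.
    return {"name": name, "payload": result}

request = '/fn/image/thumbnail/params={"name": "/media/intro/frame=0", "size": 128}'
print(invoke(request))
```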
Introducing Compute First Networking (CFN)
We propose CFN, a distributed computing environment that provides a general-purpose programming platform with support for both stateless functions and stateful actors. CFN can lay out compute graphs over the available computing platforms in a network to perform flexible load management and performance optimizations, taking into account function/actor location and data location, as well as platform load and network performance.
We have published a paper about CFN at the ACM ICN-2019 Conference that is being presented in Macau today by Michał Król. The paper makes the following contributions:
- CFN marries a state-of-the-art distributed computing framework to an ICN underlay through RICE, Remote Method Invocation in ICN. This allows the framework to exploit important properties of ICN such as name-based routing and immutable objects with strong security properties.
- We adopted the rigorous computation graph approach to representing distributed computations, which allows all inputs, state, and outputs (including intermediate results) to be directly visible as named objects. This enables flexible and fine-grained scheduling of computations, caching of results, and tracking state evolution of the computation for logging and debugging.
- CFN maintains the computation graph using Conflict-free Replicated Data Types (CRDTs) and realizes them as named ICN objects. This enables the implementation of an efficient and failure-resilient, fully distributed scheduler (see the sketch after this list).
- Through evaluations using ndnSIM simulations, we demonstrate that CFN is applicable to a range of different distributed computing scenarios and network topologies.
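To give a flavor of why CRDTs fit here, the following sketch maintains a computation graph as a grow-only set of named nodes, edges, and results that two workers extend independently and then merge without coordination. It is a minimal G-Set illustration under invented names, not CFN's actual data structures or scheduler.

```python
# Minimal G-Set (grow-only set) CRDT holding a computation graph as named
# entries. Replicas add entries independently; merge is a commutative,
# idempotent union, so no coordination or locking is needed.
# Names and graph entries are invented for illustration; this is not CFN code.

class GSet:
    def __init__(self):
        self.entries = set()

    def add(self, entry: str):
        self.entries.add(entry)

    def merge(self, other: "GSet"):
        self.entries |= other.entries   # union: order of merges never matters

# Two workers extend the graph concurrently.
worker_a, worker_b = GSet(), GSet()
worker_a.add("node:/cfn/graph/task/fetch-input")
worker_a.add("edge:/cfn/graph/task/fetch-input->/cfn/graph/task/transform")
worker_b.add("node:/cfn/graph/task/transform")
worker_b.add("result:/cfn/graph/task/transform/output=v1")

# Each side merges the other's state; both converge to the same graph.
worker_a.merge(worker_b)
worker_b.merge(worker_a)
assert worker_a.entries == worker_b.entries
print(sorted(worker_a.entries))
```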
Resources and Links
- My keynote at IEEE CCNC-2019 on Compute-First Networking (CFN): New Perspectives on Integrating Computing and Networking
- Internet Draft draft-kutscher-coinrg-dir-00 with Jörg Ott and Teemu Kaerkkaeinen on Directions for Computing in the Network (COIN) and the corresponding presentation.
- The CFN paper at ACM ICN-2019: Compute First Networking: Distributed Computing meets ICN. The author team: Michał Król, Spyridon Mastorakis, David Oran, Dirk Kutscher
- CFN builds on our earlier work on RICE, Remote Method Invocation in ICN, authored by Michał Król, Karim Habak, David Oran, Dirk Kutscher, and Ioannis Psaras
- 1st ACM CoNEXT Workshop on Emerging in-Network Computing Paradigms (ENCP)
- IRTF Computing in the Network Proposed Research Group (COINRG)
ICN Update after IETF-99
Here is a quick (eclectic) summary of recent events in ICN at/around IETF-99 last week. ICNRG met twice: for a full-day meeting on Sunday and for a regular meeting on Wednesday. (Find a list of all past meetings, agendas, meeting materials, and minutes here.)
Edge Computing and ICN
We presented a summary of the recent Workshop on Information-Centric Fog Computing (ICFC) at IFIP Networking 2017, which featured a few papers on ICN edge computing in IoT and on Named Function Networking, one specific approach to marrying access to static data with dynamic computation in ICN.
Moreover, Eve Schooler from Intel announced the three selected projects of the recent Intel/NSF-sponsored call for proposals for projects on ICN in the wireless edge:
- ICN-Enabled Secure Edge Networking with Augmented Reality (ICE-AR) (UCLA, New Mexico State University);
- SPLICE: Secure Predictive Low-Latency Information Centric Edge for Next Generation Wireless Networks (Texas A&M, Washington University St. Louis, Purdue, Ohio State, University of Illinois Urbana-Champaign); and
- Light-Speed Networking (LSN): Refactoring the Wireless Network Stack to Dramatically reduce Information Response Time (U. Massachusetts-Amherst, U. Wisconsin-Madison).
Lixia Zhang presented an overview of the first project on Augmented Reality and described how the project conceives AR as one of several applications that can leverage a web of browsable named data, based on decentralized multiparty context-content exchange.
Finally, Yiannis Psaras presented his paper on Keyword-Based Mobile Application Sharing through Information-Centric Connectivity that won the Best Paper Award at ACM MobiArch 2016. In this paper, the authors describe a cloud-independent content and application sharing platform based on ICN.
ICN Demos
Luca Muscariello and Marcel Enguehard presented an overview of the Community ICN (CICN) activity in the Linux Foundation fd.io project and showed a demo of the software and their emulation environment.
CICN consists of several open-source ICN implementations, including an efficient VPP-based forwarder implementation. Cisco made this software available after acquiring PARC's implementation earlier this year.
ICN Specifications Moving Forward Towards Publication
ICNRG has completed its (research group) Last Calls on the two core specifications for the CCNx variant of ICN, CCNx Semantics and CCNx Messages in TLV Format.
The fd.io CICN implementations are based on these specifications (that are intended to be published as Experimental RFCs).
ICNRG also started the Last Call for an Internet Draft on Research Directions for Using ICN in Disaster Scenarios, which is intended to be published as an Informational RFC. A few additional documents are nearing completion -- see our Wiki for more information.
Upcoming Things
There are a few exciting events around ICN taking place this summer/fall.
The ACM SIGCOMM ICN Conference 2017 is embedded into a week of cool ICN and IoT events:
- IRTF Thing-to-Thing Research Group meeting on September 23/24 (Saturday/Sunday)
- RIOT Summit 2017 on September 25/26 (Monday/Tuesday)
- The ICN Conference itself from September 26 through 28 (Tuesday through Thursday)
- IRTF ICNRG meeting on September 29 (Friday)
Moreover, ICNRG plans to meet at IETF-100, most likely on Sunday, November 12, and during the following week.
If you are working on ICN security, there is a current Call for Papers for an IEEE Communications Magazine Feature Topic on Information-Centric Networking Security.