Dirk Kutscher

Personal web page

ACM CoNEXT Workshop on Emerging In-Network Computing Paradigms (ENCP)


Edge- and, more generally, in-network computing is receiving a lot of attention in research and industry fora. The ability to decentralize computing, to achieve low-latency communication to distributed application logic, and the potential for privacy-preserving analytics are just a few examples that motivate a new way of looking at computing and networking.

What are the interesting research questions from a networking and distributed computing perspective? In-network computing can be conceived in many different ways – from active networking, data plane programmability, running virtualized functions, and service chaining, to distributed computing. What abstractions do we need to program, optimize, and manage such systems? What is the relationship to cloud networking?

These questions will be discussed at the first workshop on Emerging In-Network Computing (ENCP) that takes place at ACM CoNEXT-2019 on December 9th in Orlando.

We have received many interesting submissions and were able to put together a really interesting program that covers both Network Programmability and In-Network Computing Architectures and Protocols. Check out the full program here.

Many thanks to my co-organizers Spyros Mastorakis and Abderrahmen Mtibaa, to our steering committee members Jon Crowcroft, Satyajayant (Jay) Misra, and Dave Oran, and to our great Technical Program Committee for putting this together.


Written by dkutscher

December 5th, 2019 at 8:24 am

ACM ICN-2019 Highlights


ACM ICN-2019 took place in the week of September 23 in Macau, SAR China. The conference was co-located with Information-Centric-Networking-related side events: the TouchNDN Workshop on Creating Distributed Media Experiences with TouchDesigner and NDN before and an IRTF ICNRG meeting after the conference. In the following, I am providing a summary of some highlights of the whole week from my (naturally very subjective) perspective.

University of Macau — the ICN-2019 Venue

Applications

ICN, with its paradigm of accessing named data in the network, is supposed to provide a different, hopefully better, service to applications compared to the traditional stack of TCP/IP, DNS, and application-layer protocols. Research in this space typically addresses one of two interesting questions: 1) what is the potential for building or re-factoring applications to use ICN, and what is the impact on existing designs; and 2) what requirements can be learned for the evolution of ICN, what services are useful on top of an ICN network layer, and/or how should the ICN network layer be improved?

Network Management

The best paper at the conference, Lessons Learned Building a Secure Network Measurement Framework using Basic NDN by Kathleen Nichols, took the approach of investigating how a network measurement system can be implemented without inventing new features for the NDN network layer. Instead, Kathleen’s work explored the features and usability support mechanisms that would be needed to implement her Distributed Network Measurement Protocol (DNMP) in terms of frameworks and libraries leveraging existing NDN. DNMP is a secure, role-based framework for requesting, carrying out, and collecting measurements in NDN forwarders. As such, it represents a class of applications in which applications both send and receive data organized by hierarchical topics in a namespace, which implies a conceptual approach where applications do not (want to) talk to specific producers but really operate in an information-centric style.

Communication in such a system involves one-to-many, many-to-one, and any-to-any exchanges about information (not data objects hosted at named nodes). DNMP employs a publish/subscribe model inspired by protocols such as MQTT, where publishers and subscribers communicate through hierarchically structured topics. Instead of relying on existing frameworks for data set reconciliation, the DNMP work includes the development of a lightweight pub/sub sync protocol called syncps that uses Difference Digests to solve the multi-party set reconciliation problem with prior context.
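As a rough illustration (not the actual syncps wire protocol), digest-based reconciliation can be sketched as two peers exchanging per-item hash summaries and pulling whatever the other side is missing; real Difference Digests achieve the same outcome far more compactly, without shipping full summaries. All names below are made up:

```python
import hashlib

def digest(item: bytes) -> str:
    return hashlib.sha256(item).hexdigest()

class Peer:
    def __init__(self, pubs):
        # publications keyed by their content digest
        self.pubs = {digest(p): p for p in pubs}

    def summary(self):
        return set(self.pubs)            # per-item hashes as a naive "digest"

    def missing(self, other_summary):
        return other_summary - set(self.pubs)

    def pull(self, other, wanted):
        for h in wanted:
            self.pubs[h] = other.pubs[h]

a = Peer([b"/dnmp/cmd/1", b"/dnmp/reply/1"])
b = Peer([b"/dnmp/cmd/1", b"/dnmp/reply/2"])
# one reconciliation round: exchange summaries, pull the difference
a.pull(b, a.missing(b.summary()))
b.pull(a, b.missing(a.summary()))
assert a.summary() == b.summary()        # both sides converged
```

The point of the sketch is only the structure of the problem: each party knows its own set, learns the symmetric difference, and fetches the missing publications.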

In a role-based system such as DNMP that uses secure Named-Data-based communication, automating authentication and access control is typically a major challenge. DNMP leverages earlier work on Trust Schemas but extends it with a Versatile Security Toolkit (VerSec) that integrates with the transport framework to simplify the integration of trust rules. VerSec is about to be released under the GPL.

I found this paper really interesting to read because it nicely illustrates what kind of higher-layer services and APIs non-trivial applications require. Also, the approach of using the NDN network layer as-is while implementing additional functionality as libraries and frameworks seems promising with respect to establishing a stable network-layer platform on top of which innovation can happen independently. Moreover, the paper embraces information-centric thinking nicely and demonstrates the concept with a relevant application. Finally, I am looking forward to seeing the VerSec software, which could make it easier for developers to implement rigorous security and validation in their applications.

Distributed Media Experiences

Jeff Burke and Peter Gusev organized the very cool TouchNDN workshop on Creating Distributed Media Experiences with TouchDesigner and NDN at the School of Creative Media at the City University of Hong Kong (summary presentation). The background is that video distribution and access have evolved significantly from linear TV broadcast to today's applications. Yet many systems still seem to be built in a way that optimizes for linear video streaming to consumer eyeballs, with a frame-sequence abstraction.

Creative media applications such as Live Show Control (example) exhibit a much richer interaction with digital video, often combining 3D modelling with flexible, non-sequential access to video based on (for example) semantics, specific time intervals, quality layers, or spatial coordinates.

TouchDesigner used for a sound-reactive 3D object and for mixing a video loop

Combine this with dynamic lighting, sound control, and the instrumentation of theater effects, and you get an idea of an environment where various pieces of digital media are mixed together creatively and spontaneously. Incidentally, a famous venue for such an installation is the Spectacle at MGM Cotai, close to the venue of ACM ICN-2019 in Macau.

The Spectacle at MGM Cotai – Creative Overview

Derivative’s TouchDesigner is a development platform for such realtime user experiences. It is frequently used for projection mapping, interactive visualization, and other applications. The Center for Research in Engineering, Media and Performance (REMAP) has developed an integration of NDN with TouchDesigner’s realtime 3D engine via the NDN-Common-Name-Library stack as a platform for experimenting with data-centric media. The objective is to provide a more natural networked media platform that does not have to deal with addresses (L2 or L3) but enables applications to publish and request media assets in namespaces that reflect the structure of the data. Combining this with other general ICN properties such as implicit multicast distribution and in-network caching results in a much more adequate platform for creating realtime multimedia experiences.

The TouchNDN workshop was one of REMAP’s activities on converging their NDN research with artistic and cultural projects, trying to get NDN out of the lab and into the hands of creators in arts, culture, and entertainment. It is also an eye-opener for the ICN community, offering lessons about trends and opportunities in real-time rendering and visual programming, which seem to bear lots of potential for innovation, both from the artistic and the networking perspective.

Personally, I think it’s a great, inspiring project that teaches us a lot about more interesting properties and metrics (flexible access, natural APIs, usability, utility for enabling innovation) than the usual quantitative performance metrics from the last century.

Inter-Server Game State Synchronization

Massive Multiplayer Online Role-Playing Games (MMORPGs) allow up to thousands of players to play in the same shared virtual world. Those worlds are often distributed over multiple servers of a cluster, because a single server could not handle the computational load caused by the large number of players interacting in a huge virtual world. This distribution of the world over a server cluster requires synchronizing relevant game state information among the servers: every server sends updated game state information to the other servers in the cluster, resulting in redundant traffic when running over current IP infrastructure.

In their paper Inter-Server Game State Synchronization using Named Data Networking Philipp Moll, Sebastian Theuermann, Natascha Rauscher, Hermann Hellwagner, and Jeff Burke started from the assumption that ICN’s implicit multicast support and the ability to decouple game state information from the producing server could reduce the amount of redundant traffic and also help with robustness and availability in the presence of server failures.

They built an ICNified version of Minecraft and developed protocols for synchronizing game state in a server cluster over NDN. Their evaluation results indicated the benefits of an ICN-based approach for inter-server game state synchronization despite larger packet overheads (compared to TCP/IP). The authors made all the artifacts required for reproducing their results available on GitHub.

Panel on Industry Applications of ICN

I had the pleasure of moderating a panel on industry applications of ICN, featuring Richard Chow (Intel), Kathleen Nichols (Pollere Inc.), and Kent Wu (Hong Kong Applied Science and Technology Research Institute). Recent ICN research has produced various platforms for experimentation and application development. One welcome development consists of initial ICN deployment mechanisms that do not require a forklift replacement of large parts of the Internet. At the same time, new technologies and use cases, such as edge computing, massively scalable multiparty communication, and linear video distribution, impose challenges on the existing infrastructure. This panel with experts from different application domains discussed pain points with current systems, opportunities and promising results for building specific applications with ICN, and challenges, shortcomings, and ideas for future evolution of ICN.

What was interesting to learn was how different groups pick up the results and available software to build prototypes for research and industry applications and what they perceive as challenges in applying ICN.

Decentralization

Growing concerns about centralization, surveillance, and loss of digital sovereignty are currently fuelling many activities around P2P-inspired communication and storage networks and decentralized web (“web3”) efforts, as well as groups such as the IRTF Research Group on Decentralized Internet Infrastructure (DINRG). One particular concern is the almost universal dependency on central cloud platforms for anchoring trust in applications that are actually of a rather local nature, such as smart home platforms. Since such platforms often entail rent-seeking or surveillance-based business models, it is becoming increasingly important to investigate alternatives.

NDN/CCN-based ICN, with its built-in PKI system, provides some elements for an alternative design. In NDN/CCN it is possible to set up secure communication relationships without necessarily depending on third-party platforms, which could be leveraged for more decentralized designs of IoT systems, social media, and many other applications.

Decentralized and Secure Multimedia Sharing

A particularly important application domain is multimedia sharing, where surveillance and manipulation campaigns by the dominant platforms have led to the development of alternative federated social media applications such as Mastodon and Diaspora. In their paper Decentralized and Secure Multimedia Sharing Application over Named Data Networking Ashlesh Gawande, Jeremy Clark, Damian Coomes, and Lan Wang described their design and implementation of npChat (NDN Photo Chat), a multimedia sharing application that provides functionality similar to today’s media-sharing social networking applications without requiring any centralized service providers.

The major contributions of this work include identifying the specific requirements for a fully decentralized application, and designing and implementing NDN-based mechanisms to enable users to discover other users in the local network and through mutual friends, build friendship via multi-modal trust establishment mirrored from the real world, subscribe to friends’ multimedia data updates via pub-sub, and control access to their own published media.

This paper is interesting in my view because it illustrates the challenges and some design options nicely. It also suggests further research in terms of namespace design, name privacy, and trust models. The authors developed an NDN-based prototype for Android that is supposed to appear in the Google Play Store soon.

Exploring the Relationship of ICN and IPFS

We were happy to have David Dias, Adin Schmahmann, Cole Brown, and Evan Miyazono from Protocol Labs at the conference, who held a tutorial on IPFS that also touched upon the relationship between IPFS and some ICN approaches.

Protocol Labs’ InterPlanetary File System (IPFS) is a peer-to-peer content-addressable distributed filesystem that seeks to connect all computing devices with the same system of files. It is an open-source, community-driven project, with reference implementations in Go and JavaScript and a global community of millions of users. IPFS resembles past and present efforts to build and deploy Information-Centric Networking approaches to content storage, resolution, distribution, and delivery. IPFS and libp2p, the modular network stack of IPFS, are based on name-resolution-based routing. The resolution system is based on a Kademlia DHT, and content is addressed by flat hash-based names. IPFS sees significant real-world usage already and is projected to become one of the main decentralized storage platforms in the near future. The objective of the tutorial was to make the audience familiar with IPFS and able to use the tools provided by the project for research and development.
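The core idea behind flat hash-based names can be sketched in a few lines. This is my own simplification: real IPFS content identifiers (CIDs) wrap the digest in multihash/multibase encodings and address chunked DAGs rather than whole blobs:

```python
import hashlib

def content_id(data: bytes) -> str:
    # Simplified stand-in for an IPFS CID: just a labeled SHA-256 digest
    return "sha256-" + hashlib.sha256(data).hexdigest()

store = {}  # toy stand-in for the DHT-backed block store

def put(data: bytes) -> str:
    cid = content_id(data)
    store[cid] = data
    return cid

def get(cid: str) -> bytes:
    data = store[cid]
    # self-certifying names: any retriever can verify data against its name
    assert content_id(data) == cid
    return data

cid = put(b"hello ICN")
assert get(cid) == b"hello ICN"
```

Because the name is derived from the content, integrity verification needs no trusted server, which is the property IPFS shares with hash-named ICN objects.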

Interestingly, IPFS bears quite some similarities to earlier ICN systems such as NetInf but uses traditional transport and application-layer protocols for the actual data transfer. Interesting research questions in this space are how the IPFS system could be improved with today’s ICN technology (as an underlay), but also how the design of a future IPFS-like system could leverage additional ICN mechanisms such as trust schemas, data set reconciliation protocols, and remote method invocation. The paper Towards Peer-to-Peer Content Retrieval Markets: Enhancing IPFS with ICN by Onur Ascigil, Sergi Reñé, Michał Król et al. explored some of these options.

IoT

IoT is one of the interesting application areas for ICN, especially IoT in constrained environments, where the more powerful forwarding model (stateful forwarding and in-network caching) and the associated possibility of more fine-grained control over storage and communication resources offer significant optimization potential (which was also a topic at this year’s conference).

QoS Management in Constrained NDN Networks

Quality of Service (QoS) in the IP world mainly manages forwarding resources, i.e., link capacities and buffer spaces. In addition, Information Centric Networking (ICN) offers resource dimensions such as in-network caches and forwarding state. In constrained wireless networks, these resources are scarce with a potentially high impact due to lossy radio transmission. In their paper Gain More for Less: The Surprising Benefits of QoS Management in Constrained NDN Networks Cenk Gündoğan, Jakob Pfender, Michael Frey, Thomas C. Schmidt, Felix Shzu-Juraschek, and Matthias Wählisch explored the two basic service qualities (i) prompt and (ii) reliable traffic forwarding for the case of NDN. The resources that were taken into account are forwarding and queuing priorities, as well as the utilization of caches and of forwarding state space. The authors treated QoS resources not only in isolation, but also correlated their use on local nodes and between network members. Network-wide coordination is based on simple, predefined QoS code points. The results indicate that coordinated QoS management in ICN is more than the sum of its parts and exceeds the impact QoS can have in the IP world.

What I found interesting about this paper is the validation in real-world experiments, which demonstrated impressive improvements based on the coordinated QoS management approach. This work comes timely considering the current ICN QoS discussion in ICNRG, for example in draft-oran-icnrg-qosarch. Also, the authors made their artifacts available on GitHub to enable reproduction of their results.

How Much ICN Is Inside of Bluetooth Mesh?

Bluetooth Mesh is a new mode of Bluetooth operation for low-energy devices that offers group-based publish-subscribe as a network service with additional caching capabilities. These features resemble concepts of information-centric networking (ICN), and the analogy to ICN has been repeatedly drawn in the Bluetooth community. In their paper Bluetooth Mesh under the Microscope: How much ICN is Inside?, Hauke Petersen, Peter Kietzmann, Cenk Gündoğan, Thomas C. Schmidt, and Matthias Wählisch compared Bluetooth Mesh with ICN both conceptually and in real-world experiments. They contrasted both architectures and their design decisions in detail and conducted experiments on an IoT testbed using NDN/CCNx and Bluetooth Mesh on constrained RIOT nodes.

Interestingly, the authors found that the implementation of ICN principles and mechanisms in Bluetooth Mesh is rather limited. In fact, Bluetooth Mesh performs flooding without content caching, merely using the equivalent of multicast addresses as a surrogate for names. Based on these findings, the authors discuss what ICN support for Bluetooth could or should look like, so the paper is interesting both for understanding the actual workings of Bluetooth Mesh and for ideas on improving it. The authors made their artifacts available on GitHub to enable reproduction of their results.

ICN and LoRa

LoRa is an interesting technology for its use of license-free sub-gigahertz spectrum and its bi-directional communication capabilities. We were happy to have Kent Wu and Xiaoyu Zhao from ASTRI at the conference and at the ICNRG meeting, where they talked about their LoRa prototype development for a smart metering system for water consumption in Hong Kong. In addition, ICNRG discussed different options for integrating ICN and LoRa and got an update from Peter Kietzmann on the state of LoRa support in the RIOT OS. This is an exciting area for innovation, and we expect more work and interesting results in the future.

New Frontiers

Applying ICN to big data storage and processing and to distributed computing are really promising research directions that were explored by papers at the conference.

NDN and Hadoop

The Hadoop Distributed File System (HDFS) is a network file system used to support multiple widely-used big data frameworks that can scale to run on large clusters. In their paper On the Power of In-Network Caching in the Hadoop Distributed File System Eric Newberry and Beichuan Zhang evaluated the effectiveness of using in-network caching on switches in HDFS-supported clusters in order to reduce per-link bandwidth usage in the network.

They discovered that some applications featured large amounts of data requested by multiple clients and that, by caching read data in the network, the average per-link bandwidth usage of read operations in these applications could be reduced by more than half. They also found that the choice of cache replacement policy could have a significant impact on caching effectiveness in this environment, with LIRS and ARC generally performing the best for larger and smaller cache sizes, respectively. The authors also developed a mechanism to reduce the total per-link bandwidth usage of HDFS write operations by replacing write pipelining with multicast.
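As a point of reference for the replacement-policy discussion, here is a minimal LRU cache sketch (my own illustration, not code from the paper); ARC and LIRS, which the paper found to perform best for smaller and larger cache sizes respectively, add history ("ghost") state on top of this kind of recency tracking:

```python
from collections import OrderedDict

class LRUCache:
    """Least-recently-used cache; eviction policy only, no freshness logic."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.entries = OrderedDict()

    def get(self, key):
        if key not in self.entries:
            return None                       # cache miss
        self.entries.move_to_end(key)         # mark as recently used
        return self.entries[key]

    def put(self, key, value):
        if key in self.entries:
            self.entries.move_to_end(key)
        self.entries[key] = value
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)  # evict least recently used

# hypothetical HDFS block names, purely for illustration
cache = LRUCache(2)
cache.put("/hdfs/block/1", b"a")
cache.put("/hdfs/block/2", b"b")
cache.get("/hdfs/block/1")                    # touch block 1
cache.put("/hdfs/block/3", b"c")              # evicts block 2, the LRU entry
```

The weakness the paper's findings hint at is visible even here: a single scan of many blocks flushes the whole cache, which is exactly what ARC- and LIRS-style policies resist.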

Overall, the evaluation results are promising, and it will be interesting to see how the adoption of additional ICN concepts and mechanisms and caching could be useful for big data storage and processing.

Compute-First Networking

Although, as a co-author, I am clearly biased, I am quite convinced of the potential for distributed computing and ICN that we described in a paper co-authored by Michał Król, Spyridon Mastorakis, David Oran, and myself.

Edge- and, more generally, in-network computing is receiving a lot of attention in research and industry fora. What are the interesting research questions from a networking perspective? In-network computing can be conceived in many different ways – from active networking, data plane programmability, running virtualized functions, and service chaining, to distributed computing. Modern distributed computing frameworks and domain-specific languages provide a convenient and robust way to structure large distributed applications and deploy them in either data center or edge computing environments. Current systems suffer, however, from the need for a complex underlay of services to allow them to run effectively on existing Internet protocols. These services include centralized schedulers, DNS-based name translation, stateful load balancers, and heavy-weight transport protocols.

Over the past years, we have been working on alternative approaches, trying to integrate networking and computing in new ways so that distributed computing can leverage networking capabilities directly and optimize the usage of networking and computing resources holistically. Here is a summary of our latest paper.

Written by dkutscher

October 4th, 2019 at 12:33 am

Posted in Events


Compute First Networking (CFN): Distributed Computing meets ICN


Edge- and, more generally, in-network computing is receiving a lot of attention in research and industry fora. What are the interesting research questions from a networking perspective? In-network computing can be conceived in many different ways – from active networking, data plane programmability, running virtualized functions, and service chaining, to distributed computing. Modern distributed computing frameworks and domain-specific languages provide a convenient and robust way to structure large distributed applications and deploy them in either data center or edge computing environments. Current systems suffer, however, from the need for a complex underlay of services to allow them to run effectively on existing Internet protocols. These services include centralized schedulers, DNS-based name translation, stateful load balancers, and heavy-weight transport protocols.

Over the past years, we have been working on alternative approaches, trying to integrate networking and computing in new ways so that distributed computing can leverage networking capabilities directly and optimize the usage of networking and computing resources holistically.

From Application-Layer Overlays to In-Network Computing

Domain-specific distributed computing languages like LASP have gained popularity for their ability to simply express complex distributed applications like replicated key-value stores and consensus algorithms. Associated with these languages are execution frameworks like Sapphire and Ray that deal with implementation and deployment issues such as execution scheduling, layering on the network protocol stack, and auto-scaling to match changing workloads. These systems, while elegant and generally exhibiting high performance, are hampered by the daunting complexity hidden in the underlay of services that allow them to run effectively on existing Internet protocols. These services include centralized schedulers, DNS-based name translation, stateful load balancers, and heavy-weight transport protocols.

We claim that, especially for compute functions in the network, it is beneficial to design distributed computing systems in a way that allows for a joint optimization of computing and networking resources by aiming for a tighter integration of computing and networking. For example, leveraging knowledge about data location, available network paths and dynamic network performance can improve system performance and resilience significantly, especially in the presence of dynamic, unpredictable workload changes.

The above goals, we believe, can be met through an alternative approach to network and transport protocols: adopting Information-Centric Networking as the paradigm. ICN is a networking architecture based on the principle of accessing named data, and specific systems such as NDN and CCNx have accommodated distributed computation through added support for remote function invocation (for example in Named Function Networking (NFN) and RICE, Remote Method Invocation in ICN) and through distributed data set synchronization schemes such as PSync.

Introducing Compute First Networking (CFN)

We propose CFN, a distributed computing environment that provides a general-purpose programming platform with support for both stateless functions and stateful actors. CFN can lay out compute graphs over the available computing platforms in a network to perform flexible load management and performance optimizations, taking into account function/actor location and data location, as well as platform load and network performance.

We have published a paper about CFN at the ACM ICN-2019 Conference that is being presented in Macau today by Michał Król. The paper makes the following contributions:

  1. CFN marries a state-of-the art distributed computing framework to an ICN underlay through RICE, Remote Method Invocation in ICN. This allows the framework to exploit important properties of ICN such as name-based routing and immutable objects with strong security properties.
  2. We adopted the rigorous computation graph approach to representing distributed computations, which allows all inputs, state, and outputs (including intermediate results) to be directly visible as named objects. This enables flexible and fine-grained scheduling of computations, caching of results, and tracking state evolution of the computation for logging and debugging.
  3. CFN maintains the computation graph using Conflict-free Replicated Data Types (CRDTs) and realizes them as named ICN objects. This enables the implementation of an efficient and failure-resilient fully-distributed scheduler.
  4. Through evaluations using ndnSIM simulations, we demonstrate that CFN is applicable to a range of different distributed computing scenarios and network topologies.
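The CRDT idea behind contribution 3 can be illustrated with a grow-only set, the simplest CRDT. The paper does not prescribe this exact type, so treat the following as a generic sketch of why CRDTs suit a fully-distributed scheduler: replicas converge without coordination.

```python
class GSet:
    """Grow-only set CRDT: merge is set union, so replicas converge
    regardless of message ordering, duplication, or delay."""
    def __init__(self):
        self.items = set()

    def add(self, item):
        self.items.add(item)

    def merge(self, other: "GSet"):
        self.items |= other.items

# two hypothetical scheduler replicas record computation-graph entries
# independently; the task names are made up
a, b = GSet(), GSet()
a.add("task:fib(10)")
b.add("task:fib(9)")
b.add("task:fib(10)")          # concurrent duplicate insertion is harmless
a.merge(b)
b.merge(a)
assert a.items == b.items      # converged view of the computation graph
```

Since merges are commutative, associative, and idempotent, no replica needs a lock or a leader to agree on the graph, which is what makes the scheduler failure-resilient.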


Written by dkutscher

September 25th, 2019 at 3:56 am

Posted in Publications


Information-Centric Networking RFCs on CCNx Published


The Internet Research Task Force (IRTF) has published two Experimental RFCs specifying the node behavior, message semantics, and message syntax of the CCNx protocol: RFC 8569 (Content-Centric Networking (CCNx) Semantics) and RFC 8609 (Content-Centric Networking (CCNx) Messages in TLV Format). CCNx is one particular variant of ICN protocols. These specifications document an available open-source implementation and are intended to encourage additional experiments with Information-Centric Networking technologies.

Background

Information-Centric Networking (ICN) is a class of architectures and protocols that provide “access to named data” as a first-order network service. Instead of host-to-host communication as in IP networks, ICNs typically use location-independent names to identify data objects, and the network provides the service of processing (answering) requests for named data, with the objective of ultimately delivering the requested data objects to a requesting consumer.

Such an approach has profound effects on various aspects of a networking system, including security (by enabling object-based security at the message/packet level) and forwarding behavior (name-based forwarding, caching), but also on more operational aspects such as bootstrapping, discovery, etc.

The CCNx and NDN variants of ICN are based on a request/response abstraction where consumers (hosts, applications requesting named data) send INTEREST messages into the network that are forwarded by network elements to a destination that can provide the requested named data object. Corresponding responses are sent as so-called DATA messages that follow the reverse INTEREST path.

Sometimes ICN has been mischaracterized as a solution for in-network caching, possibly replacing CDNs. While ICN’s location-independent access and its object-security approach do indeed enable opportunistic in-network data caching (e.g., for local retransmissions or data sharing), caching is not the main feature; it is rather a consequence of the more fundamental properties of 1) accessing named data, 2) object security and an integrated trust model, and 3) stateful forwarding.

Accessing Named Data

Each unique data object is named unambiguously in a hierarchical naming scheme and can be validated in a manner specified by the producer, i.e., the origin source. (Data objects can also optionally be encrypted in different ways.) The naming concept and the object-based validation approach lay the foundation for location-independent operation, because data validity can be ascertained by any node in the network, regardless of where the corresponding message was received from.
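A minimal sketch of producer-specified, location-independent validation follows. Note a loud assumption: it substitutes a keyed HMAC for the public-key signatures actually used in CCNx/NDN, purely to keep the example dependency-free; the name and content are hypothetical:

```python
import hashlib
import hmac

# ASSUMPTION: an HMAC with a shared key stands in for the producer's
# public-key signature; real CCNx/NDN objects carry asymmetric signatures.
PRODUCER_KEY = b"producer-secret"

def sign_object(name: str, content: bytes) -> dict:
    sig = hmac.new(PRODUCER_KEY, name.encode() + content,
                   hashlib.sha256).hexdigest()
    return {"name": name, "content": content, "signature": sig}

def validate(obj: dict) -> bool:
    # Any node can run this check, regardless of which face or cache the
    # object arrived from: validity is bound to the object, not the path.
    expected = hmac.new(PRODUCER_KEY, obj["name"].encode() + obj["content"],
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, obj["signature"])

obj = sign_object("/my-home/sensor/temp/1", b"21.5C")
assert validate(obj)
```

The structural point survives the simplification: because the signature covers name plus content, a cached or replicated copy is exactly as verifiable as one fetched from the producer.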

The network can generally operate without any notion of location, and nodes (consumers, forwarders) can forward requests for named data objects directly, i.e., without any additional address resolution. Location independence also enables additional features, for example the possibility to replicate and cache named data objects. Opportunistic on-path caching is thus a standard feature in many ICN systems, typically used to enhance reliability and performance.

Naming data and application-specific naming conventions are naturally important aspects of ICN. It is common for applications to define their own naming conventions (i.e., the semantics of elements in the name hierarchy). Such names can often be derived directly from application requirements; for example, a name like /my-home/living-room/light/switch/main could be relevant in a smart home setting, and corresponding devices and applications could use such a convention so that controllers can find sensors and actuators in the system with minimal user configuration.
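Forwarding on such hierarchical names typically uses longest-prefix matching on name components. A toy FIB lookup (the FIB entries and face names are made up for illustration) might look like this:

```python
def components(name: str):
    # split "/my-home/light" into ["my-home", "light"], dropping empties
    return [c for c in name.split("/") if c]

def longest_prefix_match(fib: dict, name: str):
    """Return the FIB entry with the longest matching name prefix."""
    comps = components(name)
    for i in range(len(comps), 0, -1):     # try longest prefix first
        prefix = "/" + "/".join(comps[:i])
        if prefix in fib:
            return fib[prefix]
    return None                            # no route for this name

fib = {
    "/my-home": "face-1",
    "/my-home/living-room/light": "face-2",
}
assert longest_prefix_match(fib, "/my-home/living-room/light/switch/main") == "face-2"
assert longest_prefix_match(fib, "/my-home/kitchen/sensor") == "face-1"
```

Real forwarders index this with tries or hash tables rather than a linear scan, but the name-component granularity of the match is the same.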

Object-Security and Integrated Trust Model

One of the object validation approaches is based on public-key cryptography, where publishers sign objects (parts of messages) and can name the public key in the message, so that a validator can retrieve the corresponding object (containing the public key and a certificate that binds the key to a naming hierarchy). The certificate would be an element of a typical trust hierarchy.

Public-key cryptography and PKI systems are also used in the Internet/Web today. In CCNx/NDN-based ICN, key/certificate retrieval is directly provided by the network itself, i.e., it uses the same INTEREST/DATA protocol, and the system is typically used in a way that allows every object/message to be linked to a trust anchor.

Where that trust anchor resides is defined by the application semantics and its naming conventions. Unlike the Internet/Web today, it is not required to link to centralized trust anchors (such as root Certificate Authorities) — instead it is possible to set up local, decentralized trustworthy networked systems in a permissionless manner.
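The key-locator chain described above can be sketched in a few lines. Real CCNx/NDN deployments use public-key signatures and certificates; in this self-contained sketch an HMAC with a shared secret stands in for the signature, and all names, secrets, and helper functions are hypothetical:

```python
# Illustrative sketch of object validation via a key-locator chain up to a
# local trust anchor. HMAC is a stand-in for real public-key signatures.
import hmac, hashlib

network = {}  # name -> (payload, key_name, signature); stand-in for retrieval by name

def publish(name, payload, key_name, key_secret):
    sig = hmac.new(key_secret, name.encode() + payload, hashlib.sha256).digest()
    network[name] = (payload, key_name, sig)

def validate(name, trust_anchor, keys):
    """Check the object's signature, then walk the key-locator chain to the anchor."""
    payload, key_name, sig = network[name]
    expected = hmac.new(keys[key_name], name.encode() + payload, hashlib.sha256).digest()
    if not hmac.compare_digest(sig, expected):
        return False
    # The named key object itself must chain to the (out-of-band trusted) anchor.
    return key_name == trust_anchor or validate(key_name, trust_anchor, keys)

# Hypothetical setup: a room key certified under a local trust anchor.
keys = {"/my-home/anchor-key": b"anchor-secret", "/my-home/room-key": b"room-secret"}
publish("/my-home/room-key", b"room-key-object", "/my-home/anchor-key", keys["/my-home/anchor-key"])
publish("/my-home/sensor/temp", b"21C", "/my-home/room-key", keys["/my-home/room-key"])
assert validate("/my-home/sensor/temp", "/my-home/anchor-key", keys)
```

Note how the trust anchor here is purely local to the /my-home namespace, matching the permissionless, decentralized model described above.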

Stateful Forwarding

In CCNx and NDN, forwarders are stateful, i.e., they keep track of forwarded INTERESTs to later match the received DATA messages. Stateful forwarding (in conjunction with the general name-based and location-independent operation) also empowers forwarders to execute individual forwarding strategies and perform optimizations such as in-network retransmissions, multicasting requests (in case there are several opportunities for accessing a particular named data object), etc.

Stateful forwarding enables nodes in the network to perform functions similar to those of endpoints (i.e., consumers), so that there is no strong distinction between these roles. For example, consumers and forwarders can control INTEREST sending rates to respond to observed network conditions. Adapting in-network transport behavior can thus be achieved naturally, i.e., without brittle, opaque middleboxes, TCP proxies, etc.
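The core of stateful forwarding is the Pending Interest Table (PIT): it records where each INTEREST came from, suppresses duplicates, and fans the matching DATA back out. The following minimal sketch (class and face representation are illustrative assumptions, not an actual NDN/CCNx implementation) captures that behavior, including the INTEREST aggregation used for implicit multicast later in this article:

```python
# Minimal sketch of a stateful ICN forwarder with a Pending Interest Table.
from collections import defaultdict

class Forwarder:
    def __init__(self, upstream):
        self.pit = defaultdict(list)   # name -> pending downstream faces
        self.upstream = upstream       # callable: forward an INTEREST upstream

    def on_interest(self, name, face):
        first = name not in self.pit
        self.pit[name].append(face)
        if first:                      # aggregate: only the first INTEREST goes upstream
            self.upstream(name)

    def on_data(self, name, data):
        for face in self.pit.pop(name, []):
            face(name, data)           # satisfy every pending downstream face

# Two consumers ask for the same object; only one INTEREST travels upstream.
sent, got = [], []
fwd = Forwarder(upstream=sent.append)
fwd.on_interest("/video/seg1", lambda n, d: got.append(("a", d)))
fwd.on_interest("/video/seg1", lambda n, d: got.append(("b", d)))
fwd.on_data("/video/seg1", b"payload")
assert sent == ["/video/seg1"] and len(got) == 2
```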

ICN Scenarios

ICN is a general-purpose networking technology and can thus be applied to many scenarios. I am highlighting a few particularly interesting ones in the following sections.

Scalable Media Distribution

The “Accessing Named Data” paradigm also implies that CCNx/NDN-based ICN is fundamentally connectionless. While there can be collections of Named Data Objects that are requested (and transmitted) in a flow-like manner (as a consecutive series, sharing paths), a server (producer) does not have to maintain any client or connection state — one factor for making servers more scalable.

ICN forwarders can aggregate INTERESTs received from different (for example, downstream) links for the same Named Data Object. Instead of forwarding the second, third, etc. INTEREST for the same object, a forwarder (as part of its forwarding strategy) could decide to just record those INTERESTs (and note the interfaces they were received from) and then later distribute the received object via all of these interfaces.

For live or near-live media distribution, this can enable an additional factor of scalability: 1) fewer INTERESTs are hitting the producers, and 2) fewer INTEREST and DATA messages are transmitted over the network. Effectively, this behavior implements an implicit multicast-like tree-based distribution — without any explicit signaling and (inter-domain) multicast routing.

Finally, in-network caching can further reduce upstream traffic, i.e., by answering requests for currently popular objects from a forwarder cache.
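A forwarder's content store is typically just a bounded cache of named data objects with some replacement policy. As a sketch (the class, capacity, and LRU policy are illustrative assumptions; real forwarders use various policies), an LRU content store could look like this:

```python
# Sketch of an opportunistic content store: a small LRU cache of named data
# objects that can answer repeat INTERESTs locally instead of going upstream.
from collections import OrderedDict

class ContentStore:
    def __init__(self, capacity=4):
        self.capacity = capacity
        self.store = OrderedDict()

    def insert(self, name, data):
        self.store[name] = data
        self.store.move_to_end(name)
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)   # evict the least recently used object

    def lookup(self, name):
        if name in self.store:
            self.store.move_to_end(name)
            return self.store[name]          # cache hit: no upstream INTEREST needed
        return None
```

Because ICN objects are individually validatable, any forwarder can serve them from such a cache without weakening the security model.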

The corresponding gains have been demonstrated in Proof-of-Concept implementations, for example in Cisco’s hICN DASH-like video distribution system.

Multi-Access & Multi-Path Networking

Multi-access networking is getting increasingly important as most mobile devices already provide at least two radio interfaces that can be used simultaneously. For example, Apple's Siri can use Multipath TCP to try to obtain better performance by combining mobile network and WLAN interfaces and by jointly managing the available resources.

ICN communication is inherently multipath in the sense that ICN is not connection-based and that any forwarder can make independent forwarding decisions for multipath INTEREST forwarding. ICN's location independence also enables a multidestination communication style: Named Data Objects can be replicated in the network, so that the network could provide different paths not only to one producer but to many producers, which can further increase network utilization and performance.

These properties, in conjunction with ICN's stateful forwarding model, enable several optimizations (for both window- and rate-based congestion-controlled multipath communication) over MPTCP's end-to-end control loop. An example of such an approach has been described by Mahdian et al.
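To make the multipath idea concrete, a forwarding strategy could split the INTERESTs of one collection across several faces in proportion to observed performance. The following sketch is a deliberately simplified, hypothetical strategy (face names, weights, and the deterministic split are all assumptions, not from any cited system):

```python
# Sketch of a weighted multipath INTEREST-splitting strategy.
def split_interests(names, faces):
    """faces: list of (face_id, weight) tuples. Returns {name: face_id}."""
    total = sum(w for _, w in faces)
    bounds, acc = [], 0.0
    for face_id, w in faces:
        acc += w / total
        bounds.append((acc, face_id))    # cumulative share boundaries
    assignment = {}
    for i, name in enumerate(names):
        frac = (i + 0.5) / len(names)    # spread names deterministically
        assignment[name] = next(f for b, f in bounds if frac <= b)
    return assignment

segments = [f"/video/seg{i}" for i in range(10)]
plan = split_interests(segments, [("wlan", 3), ("lte", 1)])
# roughly 3/4 of the INTERESTs go out via the wlan face
```

In a real forwarder the weights would be adapted continuously from observed RTTs and loss, which is where the localized control loops mentioned above come in.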

Internet of Things (IoT)

IoT is a broad field, but often refers to 1) networking constrained devices and 2) communicating in local networks (that are not, or should not be, connected to the Internet on a permanent basis).

In low-power wireless networks with challenged connectivity, frequent power-saving and potentially node mobility, ICN can typically outperform IP-based technology stacks with respect to implementation simplicity, data availability and performance. The implementation simplicity stems from the ICN model of accessing named data directly, i.e., with integrated security and without the need for any resolution infrastructure and application layer protocols (in some IoT scenarios).

The data availability and performance improvements are caused by the stateful forwarding and opportunistic caching features that are useful for multi-hop mesh networks with frequent connectivity changes due to sleep cycles and mobility. Stateful forwarding enables ICN to react more flexibly to changes, and in-network caching can keep data available in the network so that it can be retrieved at some time offset, for example when a sleeping node wakes up and resumes communication with a next-hop node. Gündoğan et al. have performed an extensive analysis comparing NDN with CoAP and MQTT on large-scale IoT testbeds that demonstrated these benefits.

Computing in the Network

Recent advances in platform virtualization, link layer technologies and data plane programmability have led to a growing set of use cases where computation near users or data consuming applications is needed — for example for addressing minimal latency requirements for compute intensive interactive applications (networked Augmented Reality, AR), for addressing privacy sensitivity (avoiding raw data copies outside a perimeter by processing data locally), and for speeding up distributed computation by putting computation at convenient places in a network topology.

Most application layer frameworks suffer from being conceived as overlays, i.e., they can enable certain forms of optimization (such as function placement and scaling) — but they typically require centralized orchestration. Running as an overlay means connecting compute functions through protocols such as TCP, requiring some form of resolution system that maps application-layer names to IP addresses, etc.

Approaches such as Named Function Networking (NFN) and Remote Method Invocation for ICN (RICE) have demonstrated how the ICN approach of accessing named data in the network can be extended to accessing dynamic computation results, maintaining all the ICN security and forwarding/caching properties.

In such systems, computing and networking can be integrated in new ways, for example by allowing compute nodes to include knowledge about the ICN network's routing information base and currently observed availability and performance data in their offloading and scaling decisions. Consequently, this enables a promising joint optimization of computing and networking resources that is especially attractive for fine-granular distributed system development.
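As a toy illustration of such a joint decision, an offloading choice could combine routing distance with currently observed load. The cost model below is purely an assumption for illustration; systems like NFN or RICE define their own strategies:

```python
# Sketch of a joint compute/network offloading decision: pick the execution
# point for a named function from candidate nodes, penalizing loaded nodes.
def pick_compute_node(candidates):
    """candidates: list of dicts with 'node', 'rtt_ms', and 'load' (0..1)."""
    def cost(c):
        return c["rtt_ms"] * (1.0 + c["load"])   # illustrative cost model
    return min(candidates, key=cost)["node"]

# A nearby but busy edge node can lose against a slightly more distant idle one.
choice = pick_compute_node([
    {"node": "edge1", "rtt_ms": 5.0, "load": 0.9},
    {"node": "edge2", "rtt_ms": 8.0, "load": 0.1},
])
```

The point is that the inputs (RTT, load) are exactly the kind of information an ICN forwarder already observes, so such decisions need not rely on a centralized orchestrator.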

Also see draft-kutscher-coinrg-dir for a general discussion of Computing in the Network.

The CCNx Specifications

The work on CCN started about 11 years ago in a project led by Van Jacobson at PARC — in parallel with many other research projects on ICN such as NetInf, PURSUIT, etc. The CCN work later split into two branches: NDN (maintained by the NSF-funded NDN projects) and CCNx (maintained by PARC).

In 2016, Cisco acquired the CCNx technology and the software implementations from PARC and continued working on them in research and proof-of-concepts, and trials. The software has been made available as a sub-project in the fd.io project and is now called CICN, featuring support for the VPP framework in fd.io.

This implementation largely follows the specification in the now published CCNx RFCs which are products of the IRTF ICN Research Group.

RFC 8569 describes the core concepts of the Content-Centric Networking (CCNx) architecture and presents a network protocol based on two messages: Interests and Content Objects. It specifies the set of mandatory and optional fields within those messages and describes their behavior and interpretation. This architecture and protocol specification is independent of a specific wire encoding.

RFC 8609 specifies the encoding of CCNx messages in a TLV packet format, including the TLV types used by each message element and the encoding of each value. The semantics of CCNx messages follow the encoding-independent CCNx Semantics specification.

Both of these RFCs have been authored by Marc Mosko, Nacho Solis, and Chris Wood.

More Information

The IRTF ICN Research Group is an international research forum that covers research and experimentation work across the different ICN approaches and projects. Its goal is to promote experimentation and validation activities with ICN technology.

There is also a yearly academic conference under the ACM SIGCOMM umbrella. The 2019 ICN conference takes place from September 24 to 26 in Macau. Previous editions of the conference:

Written by dkutscher

July 11th, 2019 at 3:02 pm

Posted in Blogroll,IRTF


Great Expectations

without comments

Protocol Design and Socioeconomic Realities


(PDF version)

The Internet & Web as a whole qualify as wildly successful technologies, each empowered by wildly successful protocols per RFC 5218's definition [1]. As the Internet & Web became critical infrastructure and business platforms, most of the originally articulated design goals and features such as global reach, permissionless innovation, accessibility, etc. [5] got overshadowed by the trade-offs that they incur. For example, global reach — intended as enabling global connectivity — can also imply global reach for infiltration, regime change, and infrastructure attacks by state actors. Permissionless innovation — motivated by the intention to overcome the lack of innovation options in traditional telephone networks — has also led us to permissionless surveillance and mass-manipulation-based business models that have been characterized as detrimental from a societal perspective.

Most of these developments cannot be directly ascribed to Internet technologies alone. For example, most user surveillance and data extraction technologies are actually based on web protocol mechanisms and particular web protocol design decisions. While it has been documented that some of these technology and standards developments have been motivated by particular economic interests [2], it is unclear whether different Internet design decisions could have led to a different, “better” outcome. Fundamentally, economic drivers in different societies (and on a global scale) cannot be controlled through technology and standards development alone.

This memo is thus focused on specific protocol design and evolution questions, specifically on the question of how technical design decisions relate to socio-economic effects, and it aims at providing input for future design discussions, leveraging experience from 50 years of Internet evolution, 30 years of Web evolution, observations from economic realities, and from years of Future Internet research.

IP Service Model

The IP service model was clearly designed to provide a minimal layer over different link layer technologies to enable inter-networking at low implementation cost [3]. Starting off as an experiment, looking for feasible initial deployment strategies, this was clearly a reasonable approach. The IP service model of packet-switched end-to-end best-effort communication between hosts (host interfaces) over a network of networks, was implemented by:

  • an addressing scheme that allows specifying source and destination host (interface) addresses in a topologically structured address space; and
  • minimal per-hop behavior (stateless forwarding of individual packets).

The minimal model implied punting many functions to other layers, encapsulation, and/or “management” services (transport, dealing with names, security). Multicast was not excluded by the architecture, but also not very well supported, so that IP Multicast (and the required inter-domain multicast routing protocols) did not find much deployment outside well-controlled local domains (for example, telco IP TV).
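The "minimal per-hop behavior" above amounts to stateless, per-packet longest-prefix-match forwarding, with no state kept between packets. A sketch using Python's standard ipaddress module (the FIB entries and interface names are made up) shows how little machinery that requires:

```python
# Sketch of IP's minimal per-hop behavior: stateless longest-prefix-match
# forwarding over a toy forwarding information base (FIB).
import ipaddress

fib = {
    ipaddress.ip_network("10.0.0.0/8"):  "if0",
    ipaddress.ip_network("10.1.0.0/16"): "if1",
    ipaddress.ip_network("0.0.0.0/0"):   "if-default",
}

def forward(dst: str) -> str:
    addr = ipaddress.ip_address(dst)
    matches = [n for n in fib if addr in n]
    best = max(matches, key=lambda n: n.prefixlen)   # longest prefix wins
    return fib[best]

assert forward("10.1.2.3") == "if1"        # most specific match
assert forward("10.2.0.1") == "if0"
assert forward("8.8.8.8") == "if-default"
```

Everything else — reliability, naming, security — is punted to other layers, which is precisely the minimality the text describes.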

The resulting system of end-to-end transport over a minimal packet forwarding service has served many applications and system implementations. However, over time, technical application as well as business requirements have led to additional infrastructure, extensions, and new ways of using Internet technologies, for example:

  • in-network transport performance optimization to provide better control loop localization in mobile networks;
  • massive CDN infrastructure to provide more scalable popular content distribution;
  • (need for) access control, authorization based on IP and transport layer identifiers;
  • user-tracking based on IP and transport layer identifiers; and
  • usage of DNS for localization, destination rewriting, and user tracking.

It can be argued that some of these approaches and developments have also led to some of the centralization/consolidation issues that are discussed today — especially with respect to CDNs, which are essentially inevitable for any large-scale content distribution (both static and live content). Looking at the original designs, the later-understood commercial needs, and the outcome today, one could ask: how would a different Internet service model and different network capabilities affect the tussle balance [5] between different actors and interests in the Internet?

For example, a more powerful forwarding service with more elaborate (and more complex) per-hop behavior could employ (soft-) stateful forwarding, enabling certain forms of in-network congestion control. Some form of caching could help make services such as local retransmissions and potential data sharing at the edge a network service function, removing the need for some middleboxes.

Other systems such as the NDN/CCNx variants of ICN employ the principle of accessing named-data in the network, where each packet must be requested by INTEREST messages that are visible to forwarders. Forwarders can aggregate INTERESTs for the same data, and in conjunction with in-network storage, this can implement an implicit multicast distribution service for near-simultaneous transmissions.

In ICN, receiver-driven operation could eliminate certain DoS attack vectors, and the lack of source addresses (due to stateful forwarding) could provide some form of anonymity. The use of expressive, possibly application-relevant names could enable better visibility for the network — potentially enabling both more robust access control and (on the negative side) more effective hooks for censoring communication and monitoring user traffic.

This short discussion alone illustrates how certain design decisions can play out in the real world later and that even small changes in the architecture and protocol mechanisms can shift the tussle balance between actors, possibly in unintended ways. As Clark argued in [3], it is important to understand the corresponding effects of architectural changes, let alone bigger redesign efforts.

The Internet design choices were motivated by certain requirements that were valid at the time — but may not all still hold today. Today's networking platforms are far more powerful and more programmable. The main applications are totally different, as are the business players and the governance structures. This process of change may continue in the future, which adds another level of difficulty for any change of architecture elements and core protocols. However, this does not mean that we should not try.

Network Address Translation

Network Address Translation (NAT) has been criticized for impeding transport layer innovation, adding brittleness, and delaying IPv6 adoption. At the same time, NAT was deemed necessary for growing the Internet ecosystem and for enabling local network extensions at the edge without administrative configuration. It also provides a limited form of protection against certain types of attacks. As such, it addressed shortcomings of the system.

The implicit client-initiated port forwarding (the technical reason for the limited attack protection mentioned above) obviously blocks both unwanted and wanted communication, which makes it difficult to run servers at homes, enterprise sites, etc. in a sound way (manual configuration of port forwarding still comes with limitations). This, however, could be seen as one of the drivers for the centralization of servers in data centers (“cloud”) that is a concern in some discussions today. [4]

What does this mean for assessing and potentially evolving previous design decisions? The NAT use cases and their technical realization are connected to several trade-offs that impose non-trivial challenges for potential architecture and protocol evolution: 1) Easy extensibility at the edge vs. scalable routing; 2) Threat protection vs. decentralized nature of the system; 3) Interoperability vs. transport innovation.

In a positive light, use cases such as local communication and dynamic Internet extension at the edge (with the associated security challenges) represent interesting requirements that can help finding the right balance in the design space for future network designs.

Encryption

Pervasive monitoring is an attack [7], and it is important to assess existing protocol and security frameworks with respect to changes in the way that the Internet is being used by corporations and state-level actors and to develop new protocols where needed. QUIC is encrypting transport headers in addition to application data, intending to make user tracking and other monitoring attacks harder to mount.

Economically however, the more important use case of user tracking today is the systematic surveillance of individuals on the web, i.e., through a massive network of tracking, aggregation and analytics entities [6]. Ubiquitous encryption of transport and application protocols does not prevent this at all — on the contrary, it makes it more difficult to detect, analyze, and, where needed, prevent user tracking. This does not render connection encryption useless (especially not because surveillance in the network and on web platforms complement each other through aggregation and commercial trading of personally identifying information (PII)) but it requires a careful consideration of the trade-offs.

For example, perfect protection against on-path monitoring is only effective if it covers the complete path between a user agent and the corresponding application server. This shifts the tussle balance between confidentiality and network control (enterprise firewalls, parental control, etc.) significantly. Specifically for QUIC, which is intended to run in user space, i.e., without the potential for OS control, users may end up in situations where they have to trust the application service providers (who typically control the client side as well, through apps or browsers, as well as parts of the CDN and network infrastructure) to transfer information without leaking PII irresponsibly.

If the Snowden revelations led to a better understanding of the nature and scope of pervasive monitoring and to best current practices for Internet protocol design, what is the adequate response to the continuous revelations of the workings and extent of the surveillance industry? What protocol mechanisms and API should we develop, and what should we rather avoid?

DNS encryption is another example that illustrates the trade-offs. Unencrypted DNS (especially with the EDNS0 client subnet option, depending on prefix length and network topology) can increase the risk of privacy violations by on-path/intermediary monitoring.

DNS encryption can counter certain on-path monitoring attacks — but it could effectively make the privacy situation for users worse if it is implemented by centralizing servers (so that application service providers, in addition to tracking user behaviour for one application, can now also monitor DNS communication for all applications). This has been recognized in current proposals, e.g., limiting the scope of DNS encryption to stub-to-resolver communication. While this can be enforced by architectural oversight in standards development, we do not yet know how we can enforce this in actual implementations, for example for DNS over QUIC.

Future Challenges: In-Network Computing

Recent advances in platform virtualization, link layer technologies and data plane programmability have led to a growing set of use cases where computation near users or data consuming applications is needed — for example for addressing minimal latency requirements for compute-intensive interactive applications (networked Augmented Reality, AR), for addressing privacy sensitivity (avoiding raw data copies outside a perimeter by processing data locally), and for speeding up distributed computation by putting computation at convenient places in a network topology.

In-network computing has mainly been perceived in four main variants so far: 1) Active Networking, adapting the per-hop-behavior of network elements with respect to packets in flows, 2) Edge Computing as an extension of virtual-machine (VM) based platform-as-a-service to access networks, 3) programming the data plane of SDN switches (leveraging powerful programmable switch CPUs and programming abstractions such as P4), and 4) application-layer data processing frameworks.

Active Networking has not found much deployment due to its problematic security properties and complexity. Programmable data planes can be used in data centers with uniform infrastructure, good control over the infrastructure, and the feasibility of centralized control over function placement and scheduling. Due to the still limited, packet-based programmability model, most applications today are point solutions that can demonstrate benefits for particular optimizations, however often without addressing transport protocol services or data security that would be required for most applications running in shared infrastructure today.

Edge Computing (just as traditional cloud computing) has a fairly coarse-grained (VM-based) computation model and hence typically deploys centralized positioning/scheduling through virtual infrastructure management (VIM) systems. Application-layer data processing frameworks such as Apache Flink, on the other hand, provide attractive dataflow programming models for event-based stream processing and lightweight fault-tolerance mechanisms — however, systems such as Flink are not designed for dynamic scheduling of compute functions.

Ongoing research efforts (for example in the proposed IRTF COIN RG) have started exploring this space and the potential role that future network and transport layer protocols can play. Is it feasible to integrate networking and computing beyond overlays? What would be a minimal service (like IP today) that has the potential for broad reach, permissionless innovation, and evolution paths to avoid early ossification?

Conclusions

Although the impact of Internet technology design decisions may be smaller than we would like to think, it is nevertheless important to assess the trade-offs in the past and the potential socio-economic effects that different decisions could have in the future. One challenge is the depth of the stack and the interactions across the stack (e.g., the perspective of CDN addressing shortcomings of the IP service layer, or the perspective of NAT and centralization). The applicability of new technology proposals therefore needs a far more thorough analysis — beyond proof-of-concepts and performance evaluations.

References

[1] D. Thaler, B. Aboba; What Makes for a Successful Protocol?; RFC 5218; July 2008

[2] S. Greenstein; How The Internet Became Commercial; Princeton University Press; 2017

[3] David Clark; Designing an Internet; MIT Press; October 2018

[4] Jari Arkko et al.; Considerations on Internet Consolidation and the Internet Architecture; Internet Draft https://tools.ietf.org/html/draft-arkko-iab-internet-consolidation-01; March 2019

[5] Internet Society; Internet Invariants: What Really Matters; https://www.internetsociety.org/internet-invariants-what-really-matters/; February 2012

[6] Shosanna Zuboff; The Age of Surveillance Capitalism; PublicAffairs; 2019

[7] Stephen Farrell, Hannes Tschofenig; Pervasive Monitoring is an Attack; RFC 7258; May 2014

Change Log

  • 2019-06-07: fixed several typos and added clarification regarding EDNS0 client subnet (thanks to Dave Plonka)

Written by dkutscher

June 4th, 2019 at 11:30 am

Posted in Blogroll,Posts


New Proposed Decentralized Internet Infrastructure Research Group

without comments

New Proposed Decentralized Internet Infrastructure Research Group

The Internet was designed as a distributed, decentralized system. For example, intra- and inter-domain routing, DNS, and so on were designed to operate in a distributed manner. However, over time the dominant deployment model for applications and some infrastructure services evolved to become more centralized and hierarchical. Some of the increase in centralization is due to business models that rely on centralized accounting and administration.

However, we are simultaneously seeing the evolution of use cases (e.g., certain IoT deployments) that cannot work (or which work poorly) in centralized deployment scenarios along with the evolution of decentralized technologies which leverage new cryptographic infrastructures, such as DNSSEC, or which use novel, cryptographically-based distributed consensus mechanisms, such as a number of different ledger technologies. For example, these use cases include identity/trust management leveraging reputation for authentication, authorization and decentralized management of shared resources.

The evolution of distributed ledger technologies and the platforms that leverage them has given rise to the development of decentralized communication and infrastructure systems, and experiments with the same. Some examples include name resolution (Namecoin, Ethereum Name Service), identity management (OneName), distributed storage (IPFS, MaidSafe), distributed applications, or DApps (Blockstack), and IP address allocation and delegation.

These systems differ with respect to the problem they are solving, the specific technologies that they apply, the consensus algorithms that are employed, and the incentives that are built into the system. Now is a good time to investigate these systems from an Internet technologies perspective, and to connect the domain expertise in the IRTF and IETF with the distributed systems and decentralized ledgers community.

Proposed IRTF DINRG

In the past months we have been working on a proposal for a new Research Group in the Internet Research Task Force. The Decentralized Internet Infrastructure Research Group (DINRG) will investigate open research issues in decentralizing infrastructure services such as trust management, identity management, name resolution, resource/asset ownership management, and resource discovery. The focus of DINRG is on infrastructure services that can benefit from decentralization or that are difficult to realize in local, potentially connectivity-constrained networks.

The objective of DINRG is to 1) investigate (understand, document, survey) use cases and their specific requirements with respect to implementing them in a distributed manner; 2) to discuss and assess solutions for specific use cases with a focus on Internet level deployment issues such as scalability, performance, and security; 3) to develop and document technical solutions and best practices; 4) to develop tools and metrics to identify scaling issues and to determine whether components are missing; and 5) to identify future work items for the IETF.

Other topics of interest are the investigation of economic drivers and incentives and the development and operation of experimental platforms. DINRG will operate in a technology- and solution-neutral manner, i.e., while the RG has an interest in distributed ledger technologies, it is not limited to specific technologies or implementation aspects. We expect DINRG to advance the state of the art with respect to fostering a better understanding of the merits and constraints of specific technologies for the DINRG use cases.

If you are interested in discussing these topics, please have a look at the complete charter text and subscribe to the mailing list.

Resources

 

Written by dkutscher

October 17th, 2017 at 5:06 pm

Posted in IRTF

Edgy with a Chance of RIOTs

without comments

Report from IRTF T2TRG Meeting, RIOT Summit, ACM ICN Conference, and IRTF ICNRG Meeting

 

 

Berlin saw a remarkable series of research, coding, demonstration and open discussion events on the Internet of Things and Information-Centric Networking last week. It brought together an interesting mix of researchers, developers, entrepreneurs and thought leaders, which facilitated making real progress and moving the needle in next-generation networking for IoT, edge computing and decentralized operations. In my view, the whole setup (although demanding in terms of commitment by organizers and participants) can likely serve as a prototype for future un-conference (and un-standards-meeting) events that want to put emphasis on constructive discussions and progress making instead of paper publication and marketing. For those who have been unlucky enough to miss it, I have written this (eclectic) summary (please refer to the respective events’ web pages for a complete view). Also note that I am not speaking for the organizers of the different events.

Introduction & Executive Summary

The Internet of Things, Edge Computing, and Virtual/Augmented/Mixed Reality are popular buzzwords in the networking industry and academic community. Unfortunately, the popularity and the associated revenue expectations often lead to proposed solutions that try to leverage (often failed) foundations from related domains (e.g., the telco area), that compromise on security and performance, and that lead to complex point solutions. For example, in IoT, past experiences in factory automation, home networking, etc. have led to the popular assumption that most IoT networks will be built with the notion of a gateway that connects controllers and sensors on different incompatible fieldbus networks to cloud backends, employing significant translation magic to enable connectivity and semantic interoperability. People often use the term convergence to describe the fact that a zoo of different technologies will be integrated in such frameworks.

Converting to Internet Technologies

However, the Internet research and technology development community has demonstrated before (when multi-media real-time communication made telephony just another service on the Internet) that conversion (not convergence) is what actually creates an interoperable and extensible set of technologies. In IoT, protocols such as 6lowpan (IPv6 over Low power WPAN) and CoAP (Constrained Application Protocol) are enabling an efficient, secure, end-to-end communication service for the Internet-of-Things, where the Internet does not necessarily terminate at a predefined gateway. Instead, the Internet communication semantics can be extended to constrained devices — providing one stable platform of communication, obsoleting a lot of cruft that current IoT “industry standards” represent.

Semantic Interoperability

Beyond the fundamental connectivity layer, it is important to agree on the way Things in the IoT actually interact with one another, i.e., request-response types of interaction, publish-subscribe, RESTfulness, group communication, etc. CoAP enables different interaction types on a Thing-to-Thing-based communication model. But when you compose/deploy/re-program IoT networks, how do you actually know how to communicate with your Things? How do you learn about available resources and the correct way to interact with them? How do Things and their users understand the physical-world effects, and, finally, how can you (reliably and securely) create larger applications that leverage Things in the IoT?

There are different approaches for describing and discovering resources. In the age of Service-Oriented Architectures, people came up with resource description frameworks etc., enabling a first level of semantic interoperability. In the IRTF Thing-to-Thing Research Group (T2TRG), we are trying to find a sweet spot between expressiveness, simplicity, and flexibility with respect to re-using and re-combining resources for new purposes. This work is leveraging ideas from the web (hypermedia in general) so that “simple things should be simple; complex things should be possible”. Information-Centric Networking (ICN) also has a relation to semantic interoperability — I will talk more about it when summarizing the ICN conference below.
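
A concrete, widely deployed building block in this space is the CoRE Link Format (RFC 6690), with which a device lists its resources and their types under /.well-known/core. Here is a deliberately simplified parser sketch to illustrate the idea (it ignores edge cases such as commas inside quoted strings):

```python
def parse_link_format(payload: str):
    """Parse a (simplified) CoRE Link Format document (RFC 6690).

    Simplification: assumes ',' and ';' do not occur inside quoted values.
    """
    links = []
    for link in payload.split(","):
        parts = link.split(";")
        target = parts[0].strip().lstrip("<").rstrip(">")
        attrs = {}
        for p in parts[1:]:
            key, _, value = p.partition("=")
            attrs[key.strip()] = value.strip('"')
        links.append({"target": target, **attrs})
    return links

# A device answering GET /.well-known/core might return:
doc = '</sensors/temp>;rt="temperature-c";if="sensor",</actuators/led>;rt="light"'
for link in parse_link_format(doc):
    print(link)
```

A client that understands the `rt` (resource type) and `if` (interface) attributes can discover and use resources it has never seen before — exactly the kind of loose coupling that hypermedia-style semantic interoperability is after.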

Data-Oriented Networking and Forwarding Abstractions

In IoT, most interactions are actually not about sending bits from host A to host B — most often, we are interested in accessing named resources such as sensor readings or the result of an actuation request, regardless of network and host addresses. Similar considerations apply to other applications, too — for example web applications, video streaming, and virtual reality. Realizing these applications today requires a stack of overlays for secure communication (server authentication and confidentiality through TLS), storage for resource sharing and latency reduction (CDN), and application-specific in-network processing (for example, routing IoT data to intended and authorized consumers).

In more advanced and/or challenging network scenarios such as multipath communication or data sharing in the IoT, the trade-offs that the traditional overlay approach requires are becoming increasingly painful. For example, TLS-based connection-oriented security may be a good approach for tele-banking, but it clearly gets in the way when we want to communicate in dynamic environments (with changing IP addresses etc.) or when we want to disseminate and consume data from multiple producers securely in the IoT.

Being able to access named data regardless of current node addresses is a concern in more traditional frameworks such as CoAP, too. ICN addresses this by providing access to named (and authenticated) data as a first-order service. The network provides named data access at the Internet layer, so that security (name-content binding, access control, confidentiality) does not depend on where a particular data object has been retrieved from. Obviously, this can facilitate communication in dynamic network topologies (mobility, disruptions) as well as enhance efficiency and reliability (caching) and is thus attractive for IoT but also for most other application domains.

The way that ICN implements the accessing-named-data service at the Internet layer enables peers and intermediary nodes to support forwarding and effective data dissemination in a network. For example, compared to IP, a router has slightly more visibility of request-response latency and data availability (potentially per name prefix), which can inform queue management, forwarding behavior, and caching strategies. This is the basis for better transport performance in more conventional networks. In IoT, such an enhanced forwarding layer can help to optimize data availability in the presence of disruptions and power-saving, and improve mesh network routing by leveraging information about data interest in certain parts of the network.
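
The core of this forwarding model can be sketched in a few lines: a Content Store (CS) for caching and a Pending Interest Table (PIT) for aggregating requests for the same name. This is a toy illustration of the NDN-style data structures only — FIB lookup, timeouts, and forwarding strategies are omitted:

```python
class Forwarder:
    """Toy sketch of NDN-style forwarding state: a Content Store (CS)
    that caches named data and a Pending Interest Table (PIT) that
    aggregates requests for the same name. Routing (FIB) is omitted."""

    def __init__(self):
        self.cs = {}    # name -> data
        self.pit = {}   # name -> set of requesting faces

    def on_interest(self, name, face):
        if name in self.cs:                      # cache hit: answer locally
            return ("data", name, self.cs[name])
        pending = name in self.pit               # aggregation: forward once
        self.pit.setdefault(name, set()).add(face)
        return None if pending else ("forward", name)

    def on_data(self, name, data):
        self.cs[name] = data                     # opportunistic caching
        return self.pit.pop(name, set())         # satisfy all pending faces

fwd = Forwarder()
print(fwd.on_interest("/iot/temp/1", face=1))   # ('forward', '/iot/temp/1')
print(fwd.on_interest("/iot/temp/1", face=2))   # None (aggregated in PIT)
print(fwd.on_data("/iot/temp/1", b"21.5"))      # {1, 2}
print(fwd.on_interest("/iot/temp/1", face=3))   # served from the cache
```

Even this toy version shows why a router has per-name visibility: it knows which names are pending, how many consumers are waiting, and which data it can answer locally.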

Because ICN can enable application-independent in-network caching directly at the Internet layer (as opposed to at the application layer, as CDNs do), you can also characterize ICN as a democratizing technology: it enables data production and efficient sharing over the network by everyone and for any application — without requiring permissions from ISPs or contracts with CDN providers.

Regardless of ICN or any other technology, the technical question is “what is an appropriate forwarding abstraction?” — for the new Internet that includes the IoT and other domains. From an Internet perspective, it would certainly be good if one could find a suitable compromise and arrive at a functionality set that is as powerful as needed — but not so powerful that it requires application-specific knowledge and functionality at too many places in the network to be useful. To that end, ICN is inspired by IP: it provides a minimal thin waist (in the Internet stack hourglass model) but offers more functionality for in-network forwarding and caching strategies.

The ICN Conference and the ICNRG meeting last week discussed technical aspects of applying this technology to different application domains such as IoT: how to automate trust management, how to map ICN protocols efficiently to lower-layer protocols such as IEEE 802.15.4, how to manage/bootstrap such networks securely, and how to use the ICN protocol semantics for IoT use cases, for example asynchronous data generation.

Edge Computing

Edge Computing is becoming increasingly popular these days, and there are many good reasons to rethink current cloud-centric compute service architectures. For example, in industrial IoT, there are strong trust-sensitivity reasons for not shoveling all data to the cloud by default for processing and redistribution. Instead, the data needs to be processed, potentially stored, and shared close to the producers and consumers in an industrial IoT network. Or, as another example, infrastructure support for Virtual Reality has low-latency requirements that mandate placing the compute function close to the display device.

There are different ways to do edge computing though — some approaches can be seen as extending today’s cloud infrastructure to the edge — to so-called edge gateways or to multi-tiered arrangements of compute platforms (fog computing). Also, popular CDN platforms provide some form of in-network computation already, so it seems attractive to extend these platforms to the edge.

From an Internet technology perspective, it is important to understand the implications of different architectures with respect to security and privacy (does edge computing mean we have to entrust unknown proxies to intercept our communication sessions?), permissionless innovation (can anyone run distributed computations in the network, or do you have to be a big content/service provider?), and generality (if edge computing means shipping VM images to edge gateways, what about constrained networks/platforms?).

In the Thing-to-Thing context, we are discussing options for light-weight in-network computing that does not necessarily have to rely on an ossified architecture of constrained IoT network, edge gateway, and cloud backend. Similarly to thing-to-thing communication, would it be possible to design IoT edge computing in a way that allows some nodes in the network to offer compute services for other (possibly more constrained) nodes, and can this be achieved without complicated, and in the worst case, manual orchestration?

In ICN, the combination of accessing static named data and dynamic computation results in the same framework seems to be a very elegant and powerful approach to edge computing. For that reason, Intel and the NSF have recently decided to fund three research projects on ICN in wireless edge networks. One interesting aspect in this context is the idea of not treating edge computing (and its applications) as a very special case in a distributed computing architecture. Instead, applications such as Virtual Reality could essentially just be web applications that leverage standardized protocols, media formats, and dynamic code execution.

One particular proposal blending static data access with dynamic in-network computation in ICN is called Named Function Networking (NFN). NFN applies functional programming concepts (expression reduction, code as data, memoization) to networking and thus provides a lightweight in-network computation platform that can ultimately offer similar features as stream processing and distributed databases under one single abstraction.

Going Cloudless

The Internet was designed as a distributed, decentralized system. For example, intra- and inter-domain routing, DNS, and so on were designed to operate in a distributed manner. However, over time the dominant deployment model for applications and some infrastructure services evolved to become more centralized and hierarchical. Some of the increase in centralization is due to business models that rely on centralized accounting and administration. However, we are simultaneously seeing the evolution of use cases (e.g., certain IoT deployments) that cannot work (or which work poorly) in centralized deployment scenarios along with the evolution of decentralized technologies which leverage new cryptographic infrastructures, such as DNSSEC, or which use novel, cryptographically-based distributed consensus mechanisms, such as a number of different ledger technologies.

One example that was mentioned at the T2TRG meeting on Sunday was the coordination of different wireless networks that compete for spectrum in a geographic context. For large-scale, managed spectrum sharing you could employ centralized databases for recording who is entitled to use what frequency band in a certain geographic location. In more dynamic settings like a multi-vendor, multi-radio technology IoT network deployment, this centralized approach may not work that well.

Decentralizing trust management, identity management, name resolution etc. could thus be another interesting factor towards democratizing network and application usage on the Internet. Fewer applications may have to depend on centralized cloud services in the future, and new players may be able to introduce innovative services. These ideas touch upon T2TRG work as well as ICN (which promotes decentralized operation by itself). We are therefore kicking off a new proposed Research Group on Decentralized Internet Infrastructure in the IRTF.

Open Source and Free Software

In IoT, one crucial element is the operating system platform for constrained devices. There are a few that are freely available, and some companies have developed their own OSes, sometimes also marketed as Open Source. Open Source IoT OS software is important for two reasons: 1) for providing a platform on which people can start new developments at minimal cost; and 2) for providing a platform that is reviewed and ideally governed by an open community process. If you think about security bugs/fixes, it has been demonstrated that the ability to review code and to propose changes improves the security and stability of software systems significantly compared to closed-source approaches, also with respect to agility when a quick response to a new security threat is required.

Unfortunately, Open Source has become a marketing term these days, and many people confuse the availability of free-of-charge software with Open Source. In addition to actually obtaining source code, two other important factors are the licensing model and the project governance: who actually decides about integrating proposed changes and about future directions?

The RIOT OS project has developed a modern, UNIX-like, very modular, very lightweight IoT OS that is licensed under the LGPL. The project is governed by a transparent and open community process, which has led to many useful extensions in the past, for example the addition of ICN support through the integration of CCN-Lite or the addition of CAN bus functionality. RIOT’s architecture, modularity, and flexibility have led to increasing popularity and its wide availability on many different target platforms, which was demonstrated at the RIOT summit last week.

TL;DR

There is lots of activity in making the Internet better and bringing it to new places. Last week’s series of research events on IoT and ICN demonstrated new approaches towards Internet-inspired, direct communication. The most important meta aspects (in my view) are disintermediated communication, semantic interoperability, data-oriented communication and edge computing, and democratizing network operation and innovation through decentralizing communication and network infrastructure. The following sections represent my eclectic summary of these meetings, focusing on these aspects.

IRTF Thing-to-Thing Research Group

The T2TRG meeting took place on Saturday/Sunday (September 23/24). One particular technology in T2TRG’s activities on semantic interoperability is the Constrained RESTful Application Language (CoRAL) by Klaus Hartke that “defines a data model and interaction model as well as two specialized serialization formats for the description of typed connections between resources on the Web (“links”), possible operations on such resources (“forms”), and simple resource metadata” (presentation slides from the meeting). CoRAL is essentially a constrained-environment-compatible hypermedia framework that can be used by IoT applications to discover node capabilities in a modern, flexible way.

On the topic of coordination and consensus using decentralized network infrastructure, Laura Feeney talked about “A role for higher layer protocols in mitigating wireless interference”, illustrating the use case of coordination between different (unknown) wireless networks that may compete with each other for spectrum (slides will become available here). Pekka Nikander introduced an upcoming EU H2020 project on Secure and Open Federation of IoT Systems (SOFIE) that is going to start in 2018. The project plans to investigate use cases and ledger federation approaches to connect different types of IoT applications and their ledger infrastructure. I gave a talk on decentralized network infrastructure and considerations for T2T edge computing (as described earlier).

RIOT Summit 2017

The RIOT summit 2017 took place on Monday/Tuesday (September 25/26). The keynote on Permutation-based Cryptography for the Internet of Things was presented by Gilles van Assche. The rest of the agenda was split up into topical sessions on IoT Security, Virtualization & Bootstrapping, Use Cases, and Networking. The second day featured different tutorials and coding sessions. In addition, there were many demos and posters on specific applications of RIOT, new ideas etc.

In the Virtualization and Bootstrapping session, Marcel Enguehard talked about Cisco’s “Large-scale experiments on virtual ICN-based IoT networks with vICN“, an automated emulation platform, allowing for connecting physical devices for experiments.

In the Use Cases session, Michael Frey gave a presentation titled “Cloudy with a chance of RIOTS — Towards an Open Industrial Internet“, describing the R&D work at MSA on RIOT-based IoT appliances. In the same session, Joern Alraun gave an introduction to the “Calliope mini“, a single-board computer for teaching. I am personally quite interested in the didactics of computer science (and deplore the sad state of computer science education at most schools…).

In the Networking session, Vincent Dupont talked about “RIOT and CAN” and reported on OTAkeys’ development of a CAN implementation for RIOT (that has been integrated into the project) and its application to a commercial product related to vehicle on-board diagnosis (OBD). This resonated well with me, because I know how limited closed-source commercial OBD-2 adapters typically are, so the availability of an open platform sounds great for working with cars that use proprietary extensions etc.

Overall, the RIOT summit exhibited a vibrant community, and it was great to see an increasing number of commercial applications.

ACM ICN Conference

The ACM ICN 2017 Conference took place from Tuesday through Thursday (September 26–28). The first day saw three tutorials on 1) NDN, CCN-Lite, RIOT, 2) FD.io/cicn, and 3) Umobile, all of which were really well attended. The conference itself was organized into six technical sessions on Security, Architecture, Forwarding, Caching & Mobility, Infrastructure, and miscellaneous topics. In addition, there was a panel discussion on ICN & Operating Systems.

Jon Crowcroft presented the keynote on Private Namespaces in ICN. In his talk, Jon connected earlier work on reliable multicast (PGM — Pragmatic General Multicast) to ICN — both technologies can achieve scalable data distribution, albeit in different ways. He also drew a connection between ICN and distributed ledger technologies (DLT), as both can be characterized as democratizing networking in their respective ways. ICN can provide a general-purpose multicast-like distribution infrastructure that can be used by anyone for any application without requiring prior contractual agreements, and DLT can be a basis for decentralized digital currencies and other ledger-based services in communication networks.

The best paper was titled “Jointly Optimal Routing and Caching for Arbitrary Network Topologies” (slides) by Stratis Ioannidis and Edmund Yeh. The paper presents polynomial time approximation algorithms for the (normally NP-hard) problem of jointly optimizing routing and caching for arbitrary topologies. This paper is noteworthy because the proposed solution can reduce routing cost in ICN dramatically, and furthermore, the work is applicable beyond ICN.

The Security session featured a paper titled “NDN DeLorean: An Authentication System for Data Archives in Named Data Networking” (slides) by Yingdi Yu, Alexander Afanasyev, Jan Seedorf, Zhiyi Zhang, and Lixia Zhang. NDN DeLorean is an authentication framework to ensure the long-term authenticity of long-lived data, inspired by Certificate Transparency. It uses a publicly auditable bookkeeping service to keep permanent proofs of data signatures and the times at which the signatures were generated. I found this work interesting and important because it can provide a basis for trust management and attestation services in ICNs, with a purely data-oriented security approach.

In the Architecture session, there was a presentation of a short paper titled “Improved Content Addressability Through Relational Data Modelling and In-Network Processing Elements” (slides) by Claudio Marxer and Christian Tschudin. This work presents new ideas on how relational database concepts can be applied to an ICN/NFN framework so that general-purpose processing of elements in ICN Named Data Objects becomes possible, which could be an interesting feature in NFN-based in-network computation, especially in application domains such as IoT. I found this work interesting and relevant because it can be seen as an ICN contribution to semantic interoperability, enabling application components to “talk” to each other across application silos.

The Forwarding session featured a paper titled “Path Switching in Content Centric and Named Data Networks” (slides) by Ilya Moiseenko and Dave Oran. The work described in this paper leverages the path symmetry in CCN/NDN for computing end-to-end label paths that can be used to steer the forwarding of subsequent requests through the network. Over time, a consumer potentially learns different available paths for a certain prefix or set of prefixes and can then provide hints to forwarding nodes as to which particular path to use. I found this work interesting and relevant because it provides MPLS-like functionality solely by leveraging data plane functions, i.e., unlike MPLS in IP, this approach would not need any label configuration and a corresponding control plane.
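
To make the mechanism concrete, here is a toy sketch of the two halves of such a scheme: returning Data packets accumulate a path label, and a later Interest can carry that label (reversed) as a hint that each node consults to pick its next hop. The node IDs and label encoding are my own illustrative assumptions, not the paper's wire format:

```python
def record_hop(label: tuple, node_id: int) -> tuple:
    """Each node traversed by a returning Data packet appends its ID."""
    return label + (node_id,)

def next_hop(node_id: int, hint: tuple):
    """On an Interest carrying a path hint, pick the hop after this node;
    return None (fall back to a normal FIB lookup) if the hint is unusable."""
    if node_id in hint:
        i = hint.index(node_id)
        if i + 1 < len(hint):
            return hint[i + 1]
    return None

# Data flows back from the producer over nodes 7 -> 4 -> 2, building the label:
label = ()
for node in (7, 4, 2):
    label = record_hop(label, node)

# The consumer reverses the label and attaches it to a subsequent Interest:
hint = tuple(reversed(label))        # (2, 4, 7)
print(next_hop(2, hint))             # 4: steered along the recorded path
print(next_hop(4, hint))             # 7
print(next_hop(9, hint))             # None: node not on the path, use FIB
```

Because the label is learned purely from the data plane, no control-plane label distribution is needed — which is the property that distinguishes this approach from MPLS.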

In the so-called Potpourri session, there was a presentation of a paper on ICN edge computing titled “NFaaS: Named Function as a Service” (slides) by Michael Krol and Ioannis Psaras, presenting an edge/fog computing extension to NDN that leverages very lightweight VMs, thus allowing dynamic code execution in a VM-based approach. Similarly to NFN, this work represents function names in Interest messages (which identify unikernel images). Some forwarding nodes provide additional VM execution capabilities and can decide whether they want to fetch, store, and execute the named images. NFaaS implements different forwarding strategies for delay-sensitive and for “bandwidth-hungry” services that can lead to different locations for the respective function execution. I found this work interesting and relevant because it proposes a framework for ICN in-network computation that enables certain useful optimizations with respect to function placement, without relying on centralized management with a global network view.

A particular highlight of this year’s conference was the demo and poster session, which featured 12 (!) demos and 13 posters and was praised by many attendees. The best-demo award went to Nikos Fotiou, George Xylomenos, George Polyzos, Hasan Islam, Dmitrij Lagutin, and Eero Hakala for their demo on “ICN enabling CoAP Extensions for IP based IoT devices“. Another demo that impressed me was on “Panoramic Streaming using Named Tiles” by Kazuaki Ueda, Yuma Ishigaki, Atsushi Tagami, and Toru Hasegawa. This demo showed how 360-degree video can be made more efficient through ICN by segmenting the video into named tiles that a consumer can request independently. A video renderer can thus request only the tiles required for a particular field of view at a time, thereby saving a significant amount of bandwidth. In conjunction with other ICN features such as caching and multipoint distribution, this approach can help to make 360-degree video much more viable in constrained networks.
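
The tiling idea is easy to quantify with a back-of-the-envelope sketch: split a 360x180 degree frame into a grid of named tiles and request only those overlapping the viewport. The grid size, naming scheme, and rectangular field-of-view model below are illustrative assumptions, not the demo's actual parameters:

```python
# Toy sketch of the named-tiles idea: a 360-degree frame is split into a
# grid of independently requestable tiles, and the client fetches only
# those overlapping its current field of view.
TILES_X, TILES_Y = 8, 4   # 8 columns x 4 rows over 360 x 180 degrees

def tiles_for_view(yaw_deg: float, pitch_deg: float, fov_deg: float = 90.0):
    """Return the names of tiles overlapping a square field of view.
    Simplification: ignores wrap-around at the 0/360 degree boundary."""
    vx0, vx1 = yaw_deg - fov_deg / 2, yaw_deg + fov_deg / 2
    vy0, vy1 = pitch_deg - fov_deg / 2, pitch_deg + fov_deg / 2
    names = []
    for ty in range(TILES_Y):
        for tx in range(TILES_X):
            x0, x1 = tx * 360 / TILES_X, (tx + 1) * 360 / TILES_X
            y0, y1 = ty * 180 / TILES_Y, (ty + 1) * 180 / TILES_Y
            if x0 < vx1 and x1 > vx0 and y0 < vy1 and y1 > vy0:
                names.append(f"/video/frame42/tile/{tx}/{ty}")
    return names

view = tiles_for_view(yaw_deg=180, pitch_deg=90)
print(len(view), "of", TILES_X * TILES_Y, "tiles requested")  # 4 of 32
```

With this (invented) grid, a 90-degree viewport needs only 4 of 32 tiles — and since each tile has its own name, ICN caching and multipoint distribution apply per tile for free.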

Overall, ACM ICN 2017 was a great research festival, and it was especially fascinating to see all the different demos that applied ICN to a wide range of application domains, including IoT, video, tactical networks, robotics etc. I am really looking forward to ACM ICN 2018, which will be held at Northeastern University in Boston.

IRTF ICN Research Group

Finally, ICNRG had an interim meeting on Friday (September 29) that was focused on new research work and allowed a good amount of time for in-depth discussion (which is not always possible in the more rigid framework of an academic conference).

Michael Frey presented thoughts “Towards an ICN-powered Industrial IoT” and described specific requirements for MSA’s mobile safety appliances. The talk also provided some insights on the particular approach towards ICN for Industrial IoT at MSA and reported some intermediate experimentation results, for example using pub/sub communication in NDN.

Mayutan Arumaithurai and Dennis Grewe presented “Information-Centric Mobile Edge Computing for Connected Vehicle Environments: Challenges and Research Directions“. The talk featured the description of a mixed reality use case called “Electronic Horizon” for cars and a discussion of how its specific edge computing requirements can be met by ICN, pointing at interesting directions for future research.

Michael Krol talked about “Adapting ICN to Function Execution for Edge Computing” and the different research challenges he encountered, such as PIT expiry (when computations take longer…), security, authorization (for function execution), and leveraging hardware-based cryptography and secure execution environments (SGX etc.).

This time, we tried a new interactive format at ICNRG which featured a panel-like discussion (with active participation from the rest of the group). The topic was “ICMP-like control-plane communication for ICN“, following up on an earlier discussion at the last meeting and on the mailing list. The discussion featured the following contributions:

  1. Non-Application Messages for ICN (Panel introduction by Dave Oran)
  2. Do we need an ICMP for NDN (Thomas Schmidt)
  3. Fraudulent Names (Christian Tschudin)

Full house at ICNRG when Dave Oran kicks off a discussion on ICN control plane communication

During the discussion we clarified what we mean by control messages and discussed several options for representing corresponding semantics in ICN (namespace, message types, header fields). Please consult our detailed meeting notes if you are interested in the discussion.

Bengt Ahlgren talked about “ICN Congestion Control — how to handle unknown and varying link capacity?” and kicked off a discussion on how ICN hop-by-hop congestion control should effectively work together with end-to-end (receiver-driven) congestion control.

Jacopo De Benedetto presented “Interconnection of testbeds to enable better testing” — proposing using the Geant Testbed Service (GTS) for future ICN testing.

Cenk Gündogan and Christopher Scherb provided an “update on CCN-lite and RIOT“. In 2017, the development of CCN-lite v2 was kicked off, with many improvements with respect to code modularity, functionality, and implementation specifics. One of the planned changes is the introduction of static memory allocation, which is deemed important on constrained platforms.

Cenk Gündogan also reported on his work on “CCN LoWPAN“, i.e., mapping the CCNx and NDN protocols to an IEEE 802.15.4 link layer, employing header compression for a more compact message format.

Finally, I provided a short summary of the IRTF T2TRG meeting earlier in the week (see above).

Disclaimer

I was not involved in the local meeting arrangements and general organization of these events. The heavy lifting was done by Matthias Wählisch, Thomas Schmidt, Emmanuel Baccelli, and many supporters at FU Berlin and HAW.

ChangeLog

  • 2017-10-12: Added correct link to ICNRG meeting minutes

Written by dkutscher

October 5th, 2017 at 12:13 am

Posted in Events

ICN Update after IETF-99

without comments

Here is a quick (eclectic) summary of recent events in ICN at/around IETF-99 last week. ICNRG met twice: for a full-day meeting on Sunday and for a regular meeting on Wednesday. (Find a list of all past meetings, agendas, meeting materials, and minutes here.)

Edge Computing and ICN

We presented a summary of the recent Workshop on Information-Centric Fog Computing (ICFC) at IFIP Networking 2017, which featured a few papers on ICN edge computing in IoT and on Named Function Networking, one specific approach to marry access to static data and dynamic computing in ICN.

Moreover, Eve Schooler from Intel announced the three selected projects of the recent Intel/NSF-sponsored call for proposals for projects on ICN in the wireless edge:

Lixia Zhang presented an overview of the first project on Augmented Reality and described how the project conceives AR as one of several applications that can leverage a web of browsable named data, based on decentralized multiparty context-content exchange.

Finally, Yiannis Psaras presented his paper on Keyword-Based Mobile Application Sharing through Information-Centric Connectivity that won the Best Paper Award at ACM MobiArch 2016. In this paper, the authors describe a cloud-independent content and application sharing platform based on ICN.

ICN Demos

Luca Muscariello and Marcel Enguehard presented an overview of the Community ICN (CICN) activity in the Linux Foundation fd.io project and showed a demo of the software and their emulation environment.

CICN consists of several Open Source ICN implementations, including an efficient VPP-based forwarder implementation. Cisco made this software available after acquiring PARC’s implementation earlier this year.

ICN Specifications Moving Forward Towards Publication

ICNRG has completed its (research group) last calls on the two core specifications for the CCNx variant of ICN: CCNx Semantics and CCNx Messages in TLV Format.

The fd.io CICN implementations are based on these specifications (that are intended to be published as Experimental RFCs).

ICNRG also started the Last Call for an Internet Draft on Research Directions for Using ICN in Disaster Scenarios that is intended to be published as an Informational RFC. There are a few additional documents that are nearing completion — see our Wiki for more information.

Upcoming Things

There are a few exciting events around ICN taking place this summer/fall.

The ACM SIGCOMM ICN Conference 2017 is embedded into a week of cool ICN and IoT events:

  1. IRTF Thing-to-Thing-Research-Group meeting on September 23/24 (Saturday/Sunday)
  2. RIOT Summit 2017 on September 25/26  (Monday/Tuesday)
  3. The ICN Conference itself from September 26 through 28 (Tuesday through Thursday)
  4. IRTF ICNRG meeting on September 29 (Friday)

Moreover, ICNRG plans to meet at IETF-100, most likely on Sunday, November 12 and during the following week.

If you are working on ICN Security, there is a current Call For Papers for an IEEE Communications Magazine Feature Topic on Information-Centric Networking Security.


Written by dkutscher

July 25th, 2017 at 11:52 am

Posted in Events

Affiliation Update

without comments

Hello everyone,

After 7.5 years at NEC Laboratories Europe, I decided to take on a new challenge and joined Huawei’s German Research Center in Munich as the CTO for Virtual Networking and IP on October 1st, 2016.

I am grateful for NEC’s support of my work in the past, especially for the collegiality at NEC Labs Europe and NEC Corporation, and for being given the opportunity to work with so many cool people.

I have now decided to move on and am looking forward to working with new colleagues (and esteemed collaborators) on topics that are close to my heart: evolving the Internet, embracing computer science, data centricity, distributed computing, programmability and automation.

With the Internet of Things, the next-generation of mobile networks being developed, the possibilities of experimenting with new concepts thanks to virtualization and programmability, redesigning network function platforms and overall architectures, and with applying recent work in machine learning, I am really excited to explore and experiment with new ideas and new tools in networking.

Are you interested in making a dent in networking? Come work with me in Munich. Do you want to collaborate in research and Open Source projects? Please feel free to contact me at Dirk.Kutscher@Huawei.com.

Written by dkutscher

October 11th, 2016 at 10:57 am

Posted in personal

RFC 7927: Information-Centric Networking (ICN) Research Challenges

without comments

We (ICNRG) published RFC 7927 on Information-Centric Networking (ICN) Research Challenges.

This memo describes research challenges for Information-Centric Networking (ICN), an approach to evolve the Internet infrastructure to directly support information distribution by introducing uniquely named data as a core Internet principle. Data becomes independent from location, application, storage, and means of transportation, enabling or enhancing a number of desirable features, such as security, user mobility, multicast, and in-network caching. Mechanisms for realizing these benefits is the subject of ongoing research in the IRTF and elsewhere. This document describes current research challenges in ICN, including naming, security, routing, system scalability, mobility management, wireless networking, transport services, in-network caching, and network management.

Information-Centric Networking (ICN) is an approach to evolve the Internet infrastructure to directly support accessing Named Data Objects (NDOs) as a first-order network service. Data objects become independent of location, application, storage, and means of transportation, allowing for inexpensive and ubiquitous in-network caching and replication. The expected benefits are improved efficiency and security, better scalability with respect to information/bandwidth demand, and better robustness in challenging communication scenarios.

ICN concepts can be deployed by retooling the protocol stack: name-based data access can be implemented on top of the existing IP infrastructure, e.g., by allowing for named data structures, ubiquitous caching, and corresponding transport services, or it can be seen as a packet-level internetworking technology that would cause fundamental changes to Internet routing and forwarding. In summary, ICN can evolve the Internet architecture towards a network model based on named data with different properties and different services.

This document presents the ICN research challenges that need to be addressed in order to achieve these goals. These research challenges are seen from a technical perspective, although business relationships between Internet players will also influence developments in this area. We leave business challenges for a separate document, however. The objective of this memo is to document the technical challenges and corresponding current approaches and to expose requirements that should be addressed by future research work.

Continue reading…

Written by dkutscher

August 9th, 2016 at 3:51 pm

Posted in IETF,Publications

Tagged with , ,