Dirk Kutscher

Personal web page

New Proposed Decentralized Internet Infrastructure Research Group


The Internet was designed as a distributed, decentralized system. For example, intra- and inter-domain routing, DNS, and so on were designed to operate in a distributed manner. However, over time the dominant deployment model for applications and some infrastructure services evolved to become more centralized and hierarchical. Some of the increase in centralization is due to business models that rely on centralized accounting and administration.

At the same time, we are seeing the evolution of use cases (e.g., certain IoT deployments) that cannot work, or work poorly, in centralized deployment scenarios, alongside the evolution of decentralized technologies that leverage new cryptographic infrastructures, such as DNSSEC, or that use novel, cryptographically-based distributed consensus mechanisms, such as a number of different ledger technologies. Such use cases include identity and trust management that leverages reputation for authentication and authorization, and the decentralized management of shared resources.

The evolution of distributed ledger technologies and the platforms that leverage them has given rise to the development of decentralized communication and infrastructure systems, and experiments with the same. Some examples include name resolution (Namecoin, Ethereum Name Service), identity management (OneName), distributed storage (IPFS, MaidSafe), distributed applications, or DApps (Blockstack), and IP address allocation and delegation.

These systems differ with respect to the problem they are solving, the specific technologies that they apply, the consensus algorithms that are employed, and the incentives that are built into the system. Now is a good time to investigate these systems from an Internet technologies perspective, and to connect the domain expertise in the IRTF and IETF with the distributed systems and decentralized ledgers community.

Proposed IRTF DINRG

In the past months we have been working on a proposal for a new Research Group in the Internet Research Task Force. The Decentralized Internet Infrastructure Research Group (DINRG) will investigate open research issues in decentralizing infrastructure services such as trust management, identity management, name resolution, resource/asset ownership management, and resource discovery. The focus of DINRG is on infrastructure services that can benefit from decentralization or that are difficult to realize in local, potentially connectivity-constrained networks.

The objective of DINRG is to 1) investigate (understand, document, survey) use cases and their specific requirements with respect to implementing them in a distributed manner; 2) discuss and assess solutions for specific use cases with a focus on Internet-level deployment issues such as scalability, performance, and security; 3) develop and document technical solutions and best practices; 4) develop tools and metrics to identify scaling issues and to determine whether components are missing; and 5) identify future work items for the IETF.

Other topics of interest are the investigation of economic drivers and incentives and the development and operation of experimental platforms. DINRG will operate in a technology- and solution-neutral manner, i.e., while the RG has an interest in distributed ledger technologies, it is not limited to specific technologies or implementation aspects. We expect DINRG to advance the state of the art by fostering a better understanding of the merits and constraints of specific technologies with respect to the DINRG use cases.

If you are interested in discussing these topics, please have a look at the complete charter text and subscribe to the mailing list.


Written by dkutscher

October 17th, 2017 at 5:06 pm

Posted in IRTF

Edgy with a Chance of RIOTs


Report from IRTF T2TRG Meeting, RIOT Summit, ACM ICN Conference, and IRTF ICNRG Meeting

Berlin saw a remarkable series of research, coding, demonstration, and open-discussion events on the Internet of Things and Information-Centric Networking last week. It brought together an interesting mix of researchers, developers, entrepreneurs, and thought leaders, which facilitated real progress and moved the needle in next-generation networking for IoT, edge computing, and decentralized operation. In my view, the whole setup (although demanding in terms of commitment by organizers and participants) can serve as a prototype for future un-conference (and un-standards-meeting) events that want to put the emphasis on constructive discussion and progress instead of paper publication and marketing. For those who were unlucky enough to miss it, I have written this (eclectic) summary (please refer to the respective events’ web pages for a complete view). Also note that I am not speaking for the organizers of the different events.

Introduction & Executive Summary

The Internet of Things, Edge Computing, and Virtual/Augmented/Mixed Reality are popular buzzwords in the networking industry and academic community. Unfortunately, the popularity and the associated revenue expectations often lead to proposed solutions that try to leverage (often failed) foundations from related domains (e.g., the telco area), that compromise on security and performance, and that lead to complex point solutions. For example, in IoT, past experience in factory automation, home networking, etc. has led to the popular assumption that most IoT networks will be built around a gateway that connects controllers and sensors on different, incompatible fieldbus networks to cloud backends, employing significant translation magic to enable connectivity and semantic interoperability. People often use the term convergence to describe the fact that a zoo of different technologies is integrated in such frameworks.

Converting to Internet Technologies

However, the Internet research and technology development community has demonstrated before (when multimedia real-time communication made telephony just another service on the Internet) that conversion (not convergence) is what actually creates an interoperable and extensible set of technologies. In IoT, protocols such as 6LoWPAN (IPv6 over Low-Power Wireless Personal Area Networks) and CoAP (Constrained Application Protocol) are enabling an efficient, secure, end-to-end communication service for the Internet of Things, where the Internet does not necessarily terminate at a predefined gateway. Instead, the Internet communication semantics can be extended to constrained devices — providing one stable communication platform and obsoleting a lot of the cruft that current IoT “industry standards” represent.
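
To make this concrete, here is a minimal Python sketch of how compact such an end-to-end request actually is: a confirmable CoAP GET (per RFC 7252) fits in a handful of bytes. The resource path is hypothetical, and the encoder only handles the short option form (path segments shorter than 13 bytes); it is an illustration, not a complete CoAP implementation.

import struct

def coap_get(message_id, uri_path):
    # Fixed 4-byte header (RFC 7252): Ver=1, Type=0 (confirmable, CON),
    # TKL=0 (no token), Code 0.01 = GET, 16-bit Message ID.
    header = struct.pack("!BBH", (1 << 6) | (0 << 4) | 0, 0x01, message_id)
    options = b""
    prev = 0
    for segment in uri_path.strip("/").split("/"):
        seg = segment.encode()
        delta = 11 - prev                 # Uri-Path is option number 11
        options += bytes([(delta << 4) | len(seg)]) + seg  # short form only
        prev = 11
    return header + options

# The complete request for GET /sensors/temp fits in 17 bytes:
print(coap_get(0x1234, "/sensors/temp").hex())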

Semantic Interoperability

Beyond the fundamental connectivity layer, it is important to agree on the way Things in the IoT actually interact with one another, i.e., request-response interaction, publish-subscribe, RESTfulness, group communication, etc. CoAP enables different interaction types on a Thing-to-Thing communication model. But when you compose/deploy/re-program IoT networks, how do you actually know how to communicate with your Things? How do you learn about available resources and the correct way to interact with them? How do Things and their users understand the physical-world effects, and, finally, how can you (reliably and securely) create larger applications that leverage Things in the IoT?

There are different approaches for describing and discovering resources. In the age of Service-Oriented Architectures, people came up with resource description frameworks etc., enabling a first level of semantic interoperability. In the IRTF Thing-to-Thing Research Group (T2TRG), we are trying to find a sweet spot between expressiveness, simplicity, and flexibility with respect to re-using and re-combining resources for new purposes. This work leverages ideas from the web (hypermedia in general) so that “simple things should be simple; complex things should be possible”. Information-Centric Networking (ICN) also has a relation to semantic interoperability — I will say more about it when summarizing the ICN conference below.

Data-Oriented Networking and Forwarding Abstractions

In IoT, most interactions are actually not about sending bits from host A to host B — most often, we are interested in accessing named resources such as sensor readings or the result of an actuation request, regardless of network and host addresses. Similar considerations apply to other applications, too — for example, web applications, video streaming, and virtual reality. Realizing these applications today requires a stack of overlays for secure communication (server authentication and confidentiality through TLS), storage for resource sharing and latency reduction (CDNs), and application-specific in-network processing (for example, routing IoT data to intended and authorized consumers).

In more advanced and/or challenging network scenarios such as multipath communication or data sharing in the IoT, the trade-offs that the traditional overlay approach requires are becoming increasingly painful. For example, TLS-based connection-oriented security may be a good approach for tele-banking, but it clearly gets in the way when we want to communicate in dynamic environments (with changing IP addresses etc.) or when we want to disseminate and consume data from multiple producers securely in the IoT.

Being able to access named data regardless of current node addresses is a concern in more traditional frameworks such as CoAP, too. ICN addresses this by providing access to named (and authenticated) data as a first-order service. The network provides named data access at the Internet layer, so that security (name-content binding, access control, confidentiality) does not depend on where a particular data object has been retrieved from. Obviously, this can facilitate communication in dynamic network topologies (mobility, disruptions) as well as enhance efficiency and reliability (caching), and it is thus attractive for IoT but also for most other application domains.

The way that ICN implements the accessing-named-data service at the Internet layer enables peers and intermediary nodes to support forwarding and effective data dissemination in a network. For example, compared to IP, a router has slightly more visibility of request-response latency and data availability (potentially per name prefix), which can inform queue management, forwarding behavior, and caching strategies. This is the basis for better transport performance in more conventional networks. In IoT, such an enhanced forwarding layer can help to optimize data availability in the presence of disruptions and power-saving, and it can improve mesh network routing by leveraging information about data interest in certain parts of the network.
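
The following toy Python sketch (my own illustration, not a spec-conformant forwarder; the face objects with send_interest/send_data methods are assumed) shows how the Content Store (CS), Pending Interest Table (PIT), and Forwarding Information Base (FIB) interact to provide caching, request aggregation, and name-based forwarding:

class Forwarder:
    def __init__(self):
        self.cs = {}    # Content Store: name -> data (opportunistic cache)
        self.pit = {}   # Pending Interest Table: name -> set of faces
        self.fib = {}   # FIB: name prefix -> upstream face

    def on_interest(self, name, face):
        if name in self.cs:                     # 1. cache hit: answer locally
            face.send_data(name, self.cs[name])
        elif name in self.pit:                  # 2. aggregate duplicate request
            self.pit[name].add(face)
        else:                                   # 3. longest-prefix match, forward
            self.pit[name] = {face}
            prefix = max((p for p in self.fib if name.startswith(p)),
                         key=len, default=None)
            if prefix is not None:
                self.fib[prefix].send_interest(name)

    def on_data(self, name, data):
        self.cs[name] = data                    # cache for later consumers
        for face in self.pit.pop(name, ()):     # satisfy all pending requests
            face.send_data(name, data)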

Because ICN can enable application-independent in-network caching directly at the Internet layer (as opposed to at the application layer, as CDNs do), you can also characterize ICN as a democratizing technology: it enables data production and efficient sharing over the network by everyone and for any application — without requiring permission from ISPs or contracts with CDN providers.

Regardless of ICN or any other technology, the technical question is “what is an appropriate forwarding abstraction?” for the new Internet that includes the IoT and other domains. From an Internet perspective, it would certainly be good if one could find a suitable compromise and arrive at a functionality set that is as powerful as needed — but not so powerful that it requires application-specific knowledge and functionality at too many places in the network to be useful. To that end, ICN is inspired by IP and provides a minimal thin waist (in the Internet stack hourglass model), but it provides more functionality for in-network forwarding and caching strategies.

The ICN Conference and the ICNRG meeting last week discussed technical aspects of applying this technology to different application domains such as IoT: how to automate trust management, how to map ICN protocols efficiently to lower-layer protocols such as IEEE 802.15.4, how to manage/bootstrap such networks securely, and how to use the ICN protocol semantics for IoT use cases, for example asynchronous data generation.

Edge Computing

Edge Computing is becoming increasingly popular these days, and there are many good reasons to rethink current cloud-centric compute service architectures. For example, in industrial IoT, there are strong trust-sensitivity reasons for not shoveling all data to the cloud by default for processing and redistribution. Instead, the data needs to be processed, potentially stored, and shared close to the producers and consumers in an industrial IoT network. Or, as another example, infrastructure support for Virtual Reality has low-latency requirements that mandate placing the compute function close to the display device.

There are different ways to do edge computing though — some approaches can be seen as extending today’s cloud infrastructure to the edge — to so-called edge gateways or to multi-tiered arrangements of compute platforms (fog computing). Also, popular CDN platforms provide some form of in-network computation already, so it seems attractive to extend these platforms to the edge.

From an Internet technology perspective, it is important to understand the implications of different architectures with respect to security and privacy (does edge computing mean we have to entrust unknown proxies with intercepting our communication sessions?), permissionless innovation (can anyone run distributed computations in the network, or do you have to be a big content/service provider?), and generality (if edge computing means shipping VM images to edge gateways, what about constrained networks/platforms?).

In the Thing-to-Thing context, we are discussing options for lightweight in-network computing that does not necessarily have to rely on an ossified architecture of constrained IoT network, edge gateway, and cloud backend. Similarly to thing-to-thing communication, would it be possible to design IoT edge computing in a way that allows some nodes in the network to offer compute services for other (possibly more constrained) nodes, and can this be achieved without complicated (and, in the worst case, manual) orchestration?

In ICN, the combination of accessing static named data and dynamic computation results within the same framework seems to be a very elegant and powerful approach to edge computing. For that reason, Intel and the NSF have recently decided to fund three research projects on ICN in wireless edge networks. One interesting aspect in this context is the idea of not treating edge computing (and its applications) as a very special case in a distributed computing architecture. Instead, applications such as Virtual Reality could essentially just be web applications that leverage standardized protocols, media formats, and dynamic code execution.

One particular proposal blending static data access with dynamic in-network computation in ICN is called Named Function Networking (NFN). NFN applies functional programming concepts (expression reduction, code as data, memoization) to networking and thus provides a lightweight in-network computation platform that can ultimately provide similar features as stream processing and distributed databases under one single abstraction.
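
A minimal sketch of the idea follows (illustrative names and a toy expression; real NFN resolves expressions across the network rather than in one process): a computation is itself a name, so its result can be memoized and served like any other named data object.

store = {"/data/temp": [21.0, 22.5, 19.8]}               # static named data
functions = {"/func/avg": lambda xs: sum(xs) / len(xs)}  # named function
memo = {}                                                # memoized results

def resolve(expr):
    # Reduce either a plain name or an expression (function_name, argument).
    if isinstance(expr, str):                    # plain name: fetch the data
        return store[expr]
    if expr in memo:                             # computation seen before
        return memo[expr]
    func_name, arg = expr
    result = functions[func_name](resolve(arg))  # reduce argument, then apply
    memo[expr] = result                          # cache under the expression name
    return result

print(resolve(("/func/avg", "/data/temp")))      # 21.1; repeat calls hit the memo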

Going Cloudless

The Internet was designed as a distributed, decentralized system. For example, intra- and inter-domain routing, DNS, and so on were designed to operate in a distributed manner. However, over time the dominant deployment model for applications and some infrastructure services evolved to become more centralized and hierarchical. Some of the increase in centralization is due to business models that rely on centralized accounting and administration. However, we are simultaneously seeing the evolution of use cases (e.g., certain IoT deployments) that cannot work (or which work poorly) in centralized deployment scenarios along with the evolution of decentralized technologies which leverage new cryptographic infrastructures, such as DNSSEC, or which use novel, cryptographically-based distributed consensus mechanisms, such as a number of different ledger technologies.

One example that was mentioned at the T2TRG meeting on Sunday was the coordination of different wireless networks that compete for spectrum in a geographic context. For large-scale, managed spectrum sharing, you could employ centralized databases for recording who is entitled to use what frequency band in a certain geographic location. In more dynamic settings like a multi-vendor, multi-radio technology IoT network deployment, this centralized approach may not work that well.
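
As a rough illustration of the decentralized alternative (all names and fields here are hypothetical, and replication/consensus among nodes is out of scope), spectrum claims could be recorded in an append-only, hash-chained log that every participating network can replicate and verify, rather than in one central database:

import hashlib, json

class SpectrumLog:
    def __init__(self):
        self.entries = []

    def claim(self, network_id, band_mhz, region):
        # Reject a claim that overlaps an existing one for this region.
        for e in self.entries:
            if e["band_mhz"] == band_mhz and e["region"] == region:
                raise ValueError("band already claimed in this region")
        entry = {"network_id": network_id, "band_mhz": band_mhz,
                 "region": region,
                 "prev": self.entries[-1]["hash"] if self.entries else ""}
        # Chaining each entry to its predecessor makes tampering detectable.
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self.entries.append(entry)

log = SpectrumLog()
log.claim("vendor-a-mesh", [2400, 2425], "cell-42")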

Decentralizing trust management, identity management, name resolution, etc. could thus be another interesting factor towards democratizing network and application usage on the Internet. Fewer applications may have to depend on centralized cloud services in the future, and new players may be able to introduce innovative services. These ideas touch upon T2TRG work as well as ICN (which promotes decentralized operation by itself). We are therefore kicking off a new proposed Research Group on Decentralized Internet Infrastructure in the IRTF.

Open Source and Free Software

In IoT, one crucial element is the operating system platform for constrained devices. There are a few that are freely available, and some companies have developed their own OSes, sometimes also marketed as Open Source. Open Source IoT OS software is important for two reasons: 1) it provides a platform on which people can start new developments at minimal cost; and 2) it provides a platform that is reviewed and ideally governed by an open community process. If you think about security bugs/fixes, it has been demonstrated that the ability to review code and to propose changes improves the security and stability of software systems significantly compared to closed-source approaches, also with respect to agility when a quick response to a new security threat is required.

Unfortunately, Open Source has become a marketing term these days, and many people confuse the availability of free software with Open Source. In addition to actually obtaining source code, two other important factors are the licensing model and the project governance: who actually decides about integrating proposed changes and future directions?

The RIOT OS project has developed a modern, UNIX-like, very modular, very lightweight IoT OS that is licensed under the LGPL. The project is governed by a transparent and open community process, which has led to many useful extensions in the past, for example the addition of ICN support through the integration of CCN-Lite, or the addition of CAN bus functionality. RIOT’s architecture, modularity, and flexibility have led to increasing popularity and its wide availability on many different target platforms, which was demonstrated at the RIOT Summit last week.

TL;DR

There is lots of activity in making the Internet better and bringing it to new places. Last week’s series of research events on IoT and ICN demonstrated new approaches towards Internet-inspired, direct communication. The most important meta-aspects (in my view) are disintermediated communication, semantic interoperability, data-oriented communication and edge computing, and democratizing network operation and innovation through decentralizing communication and network infrastructure. The following sections present my eclectic summary of these meetings, focusing on these aspects.

IRTF Thing-to-Thing Research Group

The T2TRG meeting took place on Saturday/Sunday (September 23/24). One particular technology in T2TRG’s activities on semantic interoperability is the Constrained RESTful Application Language (CoRAL) by Klaus Hartke that “defines a data model and interaction model as well as two specialized serialization formats for the description of typed connections between resources on the Web (“links”), possible operations on such resources (“forms”), and simple resource metadata” (presentation slides from the meeting). CoRAL is essentially a constrained-environment-compatible hypermedia framework that can be used by IoT applications to discover node capabilities in a modern, flexible way.
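
To give a flavor of the links-and-forms idea (this is just a toy model in Python, not CoRAL’s actual data model or serialization formats), a device description could let a client discover how to interact instead of hard-coding URIs and methods:

device_description = {
    "links": [   # typed connections to related resources
        {"rel": "sensor", "target": "/temperature", "type": "temperature"},
    ],
    "forms": [   # possible operations and how to invoke them
        {"op": "read",   "method": "GET",  "target": "/temperature"},
        {"op": "toggle", "method": "POST", "target": "/led"},
    ],
}

def find_operation(description, op):
    # The client asks "how do I do X?" instead of assuming a URI scheme.
    for form in description["forms"]:
        if form["op"] == op:
            return form["method"], form["target"]
    raise LookupError(op)

print(find_operation(device_description, "toggle"))   # ('POST', '/led')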

On the topic of coordination and consensus using decentralized network infrastructure, Laura Feeney talked about “A role for higher layer protocols in mitigating wireless interference”, illustrating the use case of coordination between different (unknown) wireless networks that may compete with each other for spectrum (slides will become available here). Pekka Nikander introduced an upcoming EU H2020 project on Secure and Open Federation of IoT Systems (SOFIE) that is going to start in 2018. The project plans to investigate use cases and ledger federation approaches to connect different types of IoT applications and their ledger infrastructure. I gave a talk on decentralized network infrastructure and considerations for T2T edge computing (as described earlier).

RIOT Summit 2017

The RIOT Summit 2017 took place on Monday/Tuesday (September 25/26). The keynote on Permutation-based Cryptography for the Internet of Things was presented by Gilles Van Assche. The rest of the agenda was split into topical sessions on IoT Security, Virtualization & Bootstrapping, Use Cases, and Networking. The second day featured different tutorials and coding sessions. In addition, there were many demos and posters on specific applications of RIOT, new ideas, etc.

In the Virtualization and Bootstrapping session, Marcel Enguehard talked about Cisco’s “Large-scale experiments on virtual ICN-based IoT networks with vICN“, an automated emulation platform, allowing for connecting physical devices for experiments.

In the Use Cases session, Michael Frey gave a presentation titled “Cloudy with a chance of RIOTS — Towards an Open Industrial Internet“, describing the R&D work at MSA on RIOT-based IoT appliances. In the same session, Joern Alraun gave an introduction to the “Calliope mini“, a single-board computer for teaching. I am personally quite interested in the didactics of computer science (and deplore the sad state of computer science education at most schools…).

In the Networking session, Vincent Dupont talked about “RIOT and CAN” and reported on OTAkeys’ development of a CAN implementation for RIOT (which has been integrated into the project) and its application to a commercial product related to vehicle on-board diagnostics (OBD). This resonated well with me, because I know how limited closed-source commercial OBD-II adapters typically are, so the availability of an open platform sounds great for working with cars that use proprietary extensions etc.

Overall, the RIOT summit exhibited a vibrant community, and it was great to see an increasing number of commercial applications.

ACM ICN Conference

The ACM ICN 2017 conference took place from Tuesday through Thursday (September 26-28). The first day saw three tutorials on 1) NDN, CCN-Lite, and RIOT; 2) FD.io/cicn; and 3) UMOBILE — all of them really well attended. The conference itself was organized into six technical sessions on Security, Architecture, Forwarding, Caching & Mobility, Infrastructure, and miscellaneous topics. In addition, there was a panel discussion on ICN & Operating Systems.

Jon Crowcroft presented the keynote on Private Namespaces in ICN. In his talk, Jon connected earlier work on reliable multicast (PGM — Pragmatic General Multicast) to ICN — both technologies can achieve scalable data distribution, albeit in different ways. He also connected ICN and distributed ledger technologies (DLT), as both can be characterized as democratizing networking in their respective ways. ICN can provide a general-purpose multicast-like distribution infrastructure that can be used by anyone for any application without requiring prior contractual agreements, and DLT can be a basis for decentralized digital currencies and other ledger-based services in communication networks.

The best paper was titled “Jointly Optimal Routing and Caching for Arbitrary Network Topologies” (slides) by Stratis Ioannidis and Edmund Yeh. The paper presents polynomial time approximation algorithms for the (normally NP-hard) problem of jointly optimizing routing and caching for arbitrary topologies. This paper is noteworthy because the proposed solution can reduce routing cost in ICN dramatically, and furthermore, the work is applicable beyond ICN.

The Security session featured a paper titled “NDN DeLorean: An Authentication System for Data Archives in Named Data Networking” (slides) by Yingdi Yu, Alexander Afanasyev, Jan Seedorf, Zhiyi Zhang, and Lixia Zhang. NDN DeLorean is an authentication framework to ensure the long-term authenticity of long-lived data, inspired by Certificate Transparency. It uses a publicly auditable bookkeeping service approach to keep permanent proofs of data signatures and the times at which the signatures were generated. I found this work interesting and important because it can provide a basis for trust management and attestation services in ICNs, with a purely data-oriented security approach.
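
The general mechanism behind such publicly auditable logs can be illustrated with a small Merkle-tree sketch (my simplification, not DeLorean’s actual design): publishing the root hash commits the log operator to every recorded signature, and inclusion proofs can later show that a signature existed at a given time.

import hashlib

def h(x):
    return hashlib.sha256(x).digest()

def merkle_root(leaves):
    # Pairwise-hash leaf hashes upwards until a single root remains.
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                  # duplicate last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# Each leaf could bind a data signature to the time it was logged:
leaves = [b"sig(/archive/obj1)|2017-09-27", b"sig(/archive/obj2)|2017-09-27"]
print(merkle_root(leaves).hex())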

In the Architecture session, there was a presentation of a short paper titled “Improved Content Addressability Through Relational Data Modelling and In-Network Processing Elements” (slides) by Claudio Marxer and Christian Tschudin. This work presents new ideas on how relational database concepts can be applied to an ICN/NFN framework so that general-purpose processing of elements in ICN Named Data Objects becomes possible — an interesting feature for NFN-based in-network computation, especially in application domains such as IoT. I found this work interesting and relevant because it can be seen as an ICN contribution to semantic interoperability, enabling application components to “talk” to each other across application silos.

The Forwarding session featured a paper titled “Path Switching in Content Centric and Named Data Networks” (slides) by Ilya Moiseenko and Dave Oran. The work described in this paper leverages the path symmetry in CCN/NDN to compute end-to-end path labels that can be used to steer the forwarding of subsequent requests through the network. Over time, a consumer potentially learns different available paths for a certain prefix or set of prefixes and can then provide hints to forwarding nodes as to which particular path to use. I found this work interesting and relevant because it provides MPLS-like functionality solely by leveraging data-plane functions, i.e., unlike MPLS in IP, this approach would not need any label configuration or a corresponding control plane.
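
Here is an illustrative Python sketch of the label mechanism (message fields and node names are my own, not the paper’s wire format): each forwarder on the symmetric return path appends its ID to a path label carried in the Data, and the consumer replays the remembered label as a hint in later Interests.

def on_data_hop(data, node_id):
    # Each forwarder on the return path appends its ID to the path label.
    data.setdefault("path_label", []).append(node_id)
    return data

data = {"name": "/video/seg1", "payload": b"..."}
for hop in ["producer-gw", "rtr-c", "rtr-a"]:    # Data travels back hop by hop
    data = on_data_hop(data, hop)

# The consumer remembers the (reversed) label per prefix ...
consumer_labels = {"/video": list(reversed(data["path_label"]))}

# ... and attaches it to subsequent Interests so forwarders can steer them
# along the same path: MPLS-like behavior from pure data-plane state.
interest = {"name": "/video/seg7", "path_hint": consumer_labels["/video"]}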

In the so-called Potpourri session, there was a presentation of a paper on ICN edge computing titled “NFaaS: Named Function as a Service” (slides) by Michał Król and Ioannis Psaras, presenting an edge/fog computing extension to NDN that leverages very lightweight VMs, thus allowing dynamic code execution in a VM-based approach. Similarly to NFN, this work represents function names in Interest messages (here identifying unikernel images). Some forwarding nodes provide additional VM execution capabilities and can decide whether they want to fetch, store, and execute the named images. NFaaS implements different forwarding strategies for delay-sensitive and for “bandwidth-hungry” services, which can lead to different locations for the respective function execution. I found this work interesting and relevant because it proposes a framework for ICN in-network computation that enables certain useful optimizations with respect to function placement, without relying on centralized management with a global network view.

A particular highlight of this year’s conference was the demo and poster session, which featured 12 (!) demos and 13 posters and was praised by many attendees. The best-demo award went to Nikos Fotiou, George Xylomenos, George Polyzos, Hasan Islam, Dmitrij Lagutin, and Eero Hakala for their demo on “ICN enabling CoAP Extensions for IP based IoT devices“. Another demo that impressed me was on “Panoramic Streaming using Named Tiles” by Kazuaki Ueda, Yuma Ishigaki, Atsushi Tagami, and Toru Hasegawa. This demo showed how 360-degree video can be made more efficient through ICN by segmenting the video into named tiles that a consumer can request independently. A video renderer can thus request only the tiles required for the current field of view, saving a significant amount of bandwidth. In conjunction with other ICN features such as caching and multipoint distribution, this approach can help to make 360-degree video much more viable in constrained networks.
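
The tile-selection logic is simple enough to sketch (the naming scheme and tile size here are made up for illustration): compute which named tiles intersect the current viewport and request only those.

TILE_W, TILE_H = 512, 512   # assumed tile dimensions in pixels

def tiles_for_viewport(frame, x, y, width, height):
    # Return the names of all tiles that the viewport rectangle touches.
    names = []
    for ty in range(y // TILE_H, (y + height - 1) // TILE_H + 1):
        for tx in range(x // TILE_W, (x + width - 1) // TILE_W + 1):
            names.append("/video/frame%d/tile-%d-%d" % (frame, tx, ty))
    return names

# A 1280x720 viewport touches only 6 tiles of the full 360-degree frame:
print(tiles_for_viewport(42, x=1024, y=256, width=1280, height=720))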

Overall, ACM ICN 2017 was a great research festival, and it was especially fascinating to see all the different demos that applied ICN to a wide range of application domains, including IoT, video, tactical networks, robotics, etc. I am really looking forward to ACM ICN 2018, which will be held at Northeastern University in Boston.

IRTF ICN Research Group

Finally, ICNRG had an interim meeting on Friday (September 29) that focused on new research work and allowed a good amount of time for in-depth discussion (which is not always possible in the more rigid framework of an academic conference).

Michael Frey presented thoughts “Towards an ICN-powered Industrial IoT” and described specific requirements for MSA’s mobile safety appliances. The talk also provided some insights into MSA’s particular approach towards ICN for the Industrial IoT and reported some intermediate experimentation results, for example using pub/sub communication in NDN.

Mayutan Arumaithurai and Dennis Grewe presented “Information-Centric Mobile Edge Computing for Connected Vehicle Environments: Challenges and Research Directions“. The talk featured the description of a mixed reality use case called “Electronic Horizon” for cars and a discussion of how its specific edge computing requirements can be met by ICN, pointing at interesting directions for future research.

Michał Król talked about “Adapting ICN to Function Execution for Edge Computing” and the different research challenges he encountered, such as PIT expiry (when computations take longer…), security, authorization (for function execution), and leveraging hardware-based cryptography and secure execution environments (SGX etc.).

This time, we tried a new interactive format at ICNRG, featuring a panel-like discussion (with active participation from the rest of the group). The topic was “ICMP-like control-plane communication for ICN“, following up on an earlier discussion at the last meeting and on the mailing list. The discussion featured the following contributions:

  1. Non-Application Messages for ICN (Panel introduction by Dave Oran)
  2. Do we need an ICMP for NDN (Thomas Schmidt)
  3. Fraudulent Names (Christian Tschudin)
Full house at ICNRG when Dave Oran kicks off a discussion on ICN control plane communication

During the discussion we clarified what we mean by control messages and discussed several options for representing corresponding semantics in ICN (namespace, message types, header fields). Please consult our detailed meeting notes if you are interested in the discussion.

Bengt Ahlgren talked about “ICN Congestion Control — how to handle unknown and varying link capacity?” and kicked off a discussion on how ICN hop-by-hop congestion control should effectively work together with end-to-end (receiver-driven) congestion control.

Jacopo De Benedetto presented “Interconnection of testbeds to enable better testing”, proposing to use the GÉANT Testbed Service (GTS) for future ICN testing.

Cenk Gündogan and Christopher Scherb provided an “update on CCN-lite and RIOT“. In 2017, the development of CCN-lite v2 was kicked off, with many improvements with respect to code modularity, functionality, and implementation specifics. One of the planned changes is the introduction of static memory allocation, which is deemed important on constrained platforms.

Cenk Gündogan also reported on his work on “CCN LoWPAN“, i.e., mapping the CCNx and NDN protocols to an IEEE 802.15.4 link layer, employing header compression for a more compact message format.

Finally, I provided a short summary of the IRTF T2TRG meeting earlier in the week (see above).

Disclaimer

I was not involved in the local meeting arrangements and general organization of these events. The heavy lifting was done by Matthias Wählisch, Thomas Schmidt, Emmanuel Baccelli, and many supporters at FU Berlin and HAW.

ChangeLog

  • 2017-10-12: Added correct link to ICNRG meeting minutes

Written by dkutscher

October 5th, 2017 at 12:13 am

Posted in Events

ICN Update after IETF-99


Here is a quick (eclectic) summary of recent events in ICN at/around IETF-99 last week. ICNRG met twice: for a full-day meeting on Sunday and for a regular meeting on Wednesday. (Find a list of all past meetings, agendas, meeting materials, and minutes here.)

Edge Computing and ICN

We presented a summary of the recent Workshop on Information-Centric Fog Computing (ICFC) at IFIP Networking 2017, which featured a few papers on ICN edge computing in IoT and on Named Function Networking, one specific approach to marrying access to static data with dynamic computation in ICN.

Moreover, Eve Schooler from Intel announced the three projects selected in the recent Intel/NSF-sponsored call for proposals on ICN in the wireless edge.

Lixia Zhang presented an overview of the first project on Augmented Reality and described how the project conceives AR as one of several applications that can leverage a web of browsable named data, based on decentralized multiparty context-content exchange.

Finally, Yiannis Psaras presented his paper on Keyword-Based Mobile Application Sharing through Information-Centric Connectivity, which won the Best Paper Award at ACM MobiArch 2016. In this paper, the authors describe a cloud-independent content and application sharing platform based on ICN.

ICN Demos

Luca Muscariello and Marcel Enguehard presented an overview of the Community ICN (CICN) activity in the Linux Foundation fd.io project and showed a demo of the software and their emulation environment.


CICN consists of several Open Source ICN implementations, including an efficient VPP-based forwarder implementation. Cisco made this software available after acquiring PARC’s implementation earlier this year.

ICN Specifications Moving Forward Towards Publication

ICNRG has completed its (research group) Last Calls on the two core specifications for the CCNx variant of ICN.

The fd.io CICN implementations are based on these specifications (that are intended to be published as Experimental RFCs).

ICNRG also started the Last Call for an Internet-Draft on Research Directions for Using ICN in Disaster Scenarios, which is intended to be published as an Informational RFC. A few additional documents are nearing completion — see our Wiki for more information.

Upcoming Things

There are a few exciting events around ICN taking place this summer/fall.

The ACM SIGCOMM ICN Conference 2017 is embedded in a week of cool ICN and IoT events:

  1. IRTF Thing-to-Thing Research Group meeting on September 23/24 (Saturday/Sunday)
  2. RIOT Summit 2017 on September 25/26 (Monday/Tuesday)
  3. The ICN Conference itself from September 26 through 28 (Tuesday through Thursday)
  4. IRTF ICNRG meeting on September 29 (Friday)

Moreover, ICNRG plans to meet at IETF-100, most likely on Sunday, November 12, and during the following week.

If you are working on ICN security, there is a current Call for Papers for an IEEE Communications Magazine Feature Topic on Information-Centric Networking Security.

Written by dkutscher

July 25th, 2017 at 11:52 am

Posted in Events

Affiliation Update


Hello everyone,

After 7.5 years at NEC Laboratories Europe, I decided to take on a new challenge and joined Huawei’s German Research Center in Munich as CTO for Virtual Networking and IP on October 1st, 2016.

I am grateful for NEC’s support of my work in the past, especially for the collegiality at NEC Labs Europe and NEC Corporation, and for being given the opportunity to work with so many cool people.

I have now decided to move on and am looking forward to working with new colleagues (and esteemed collaborators) on topics that are close to my heart: evolving the Internet, embracing computer science, data centricity, distributed computing, programmability and automation.

With the Internet of Things, the next generation of mobile networks being developed, the possibilities of experimenting with new concepts thanks to virtualization and programmability, redesigning network function platforms and overall architectures, and applying recent work in machine learning, I am really excited to explore and experiment with new ideas and new tools in networking.

Are you interested in making a dent in networking? Come work with me in Munich. Do you want to collaborate in research and Open Source projects? Please feel free to contact me at Dirk.Kutscher@Huawei.com.

Written by dkutscher

October 11th, 2016 at 10:57 am

Posted in personal

RFC 7927: Information-Centric Networking (ICN) Research Challenges


We (ICNRG) published RFC 7927 on Information-Centric Networking (ICN) Research Challenges.

This memo describes research challenges for Information-Centric Networking (ICN), an approach to evolve the Internet infrastructure to directly support information distribution by introducing uniquely named data as a core Internet principle. Data becomes independent from location, application, storage, and means of transportation, enabling or enhancing a number of desirable features, such as security, user mobility, multicast, and in-network caching. Mechanisms for realizing these benefits are the subject of ongoing research in the IRTF and elsewhere. This document describes current research challenges in ICN, including naming, security, routing, system scalability, mobility management, wireless networking, transport services, in-network caching, and network management.

Information-Centric Networking (ICN) is an approach to evolve the Internet infrastructure to directly support accessing Named Data Objects (NDOs) as a first-order network service. Data objects become independent of location, application, storage, and means of transportation, allowing for inexpensive and ubiquitous in-network caching and replication. The expected benefits are improved efficiency and security, better scalability with respect to information/bandwidth demand, and better robustness in challenging communication scenarios.

ICN concepts can be deployed by retooling the protocol stack: name-based data access can be implemented on top of the existing IP infrastructure, e.g., by allowing for named data structures, ubiquitous caching, and corresponding transport services, or it can be seen as a packet-level internetworking technology that would cause fundamental changes to Internet routing and forwarding. In summary, ICN can evolve the Internet architecture towards a network model based on named data with different properties and different services.

This document presents the ICN research challenges that need to be addressed in order to achieve these goals. These research challenges are seen from a technical perspective, although business relationships between Internet players will also influence developments in this area. We leave business challenges for a separate document, however. The objective of this memo is to document the technical challenges and corresponding current approaches and to expose requirements that should be addressed by future research work.


Written by dkutscher

August 9th, 2016 at 3:51 pm

Posted in IETF, Publications


RFC 7778: Mobile Communication Congestion Exposure


Mobile network designs have to meet several, at first sight contradictory, requirements: maximize resource utilization, provide optimal performance (user-perceived quality of experience), enable operator-defined “fair usage” policies, maintain user privacy, and minimize management complexity.

For 5G networks, virtual network slicing is often mentioned as one of the desirable properties, i.e., the ability to run virtual networks for different application classes (service slicing) or different customer groups (MVNOs etc.) over the same physical infrastructure. Virtualizing networks over a larger set of shared resources (radio networks, backhaul, data centers) requires effective and efficient means for capacity sharing.

Capacity sharing can be done in different ways: traditionally, telco network capacity sharing has been inspired by telephony network architectures, with an emphasis on control-plane-based monitoring, resource allocation, and configuration. Such approaches often involve traffic management systems that monitor performance, load, etc. of network elements, analyze traffic properties (for example, through DPI-based traffic inspection), and configure network elements such as base stations and gateways to implement certain rate limits based on operator policies.

Three trends make this difficult in present and future networks:

  1. with virtualization, slicing etc., the effort of analyzing every single tenant’s flows can be increasingly prohibitive;
  2. encryption-by-default with HTTP2 and other protocols that employ connection-based encryption renders DPI-based approaches costly (at best — if not impossible); and
  3. Internet protocols and applications such as TCP (transport layer) and DASH-based video streaming over HTTP (application layer) are themselves adaptive to congestion, delay, and overall observed performance. New protocols with specific requirements are invented all the time (think IoT or Virtual Reality). Interfering with their control loops through network traffic management may yield bad performance, suboptimal user experience, and higher cost overall.

The idea of enabling effective capacity sharing through productive cooperation between operator policy decision making and dynamic application/user resource utilization has driven the work in the IETF ConEx working group. Based on earlier work by Bob Briscoe on re-feedback, the ConEx WG has defined concepts and (experimental) mechanisms for congestion exposure, enabling a form of capacity sharing that incentivizes senders to respond to congestion signals, while still providing operators with hooks for auditing and enforcing correct behavior.
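
The core accounting idea can be sketched in a few lines of Python (a conceptual toy, not the ConEx wire encoding): the sender re-inserts (“exposes”) the congestion it has seen into outgoing packets, and an in-network audit function compares declared against observed congestion; a persistent deficit exposes an under-declaring sender.

class Sender:
    def __init__(self):
        self.congestion_seen = 0          # e.g., ECN echoes from the receiver

    def send_packet(self):
        packet = {"conex_mark": self.congestion_seen > 0}
        if packet["conex_mark"]:
            self.congestion_seen -= 1     # expose one unit of past congestion
        return packet

class Auditor:
    def __init__(self):
        self.declared = 0
        self.observed = 0

    def observe(self, packet, was_marked):
        self.declared += packet["conex_mark"]   # sender's declaration
        self.observed += was_marked             # actual congestion mark seen

    def deficit(self):
        # A persistently positive deficit indicates an under-declaring sender.
        return self.observed - self.declared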

RFC 7778 describes how the ConEx mechanisms can be applied to current LTE (EPS) networks, considering their specifics regarding QoS and network architecture. For example, RFC 7778 describes how ConEx can

  • enable or enhance flow policy-based traffic management;
  • reduce the need for complex DPI by allowing for a bulk packet traffic management system that does not have to consider either the application classes flows belong to or the individual sessions; and how it can
  • be used to more effectively trigger the offload of selected traffic to a non-3GPP network.

More experiments with ConEx and related capacity-sharing mechanisms are needed, but the questions behind ConEx remain important for 5G (and beyond…): how to achieve effective collaboration between networks and their users (senders and receivers), considering the increased need for capacity sharing, the increased demand for user privacy (connection encryption), and the permissionless-innovation feature of the Internet, i.e., not expecting the network to know all possible application classes and their traffic management requirements.

Written by dkutscher

March 18th, 2016 at 1:21 pm

Posted in IETF, Publications

5G: It’s the Network, Stupid


Current 5G network discussions often focus on providing more comprehensive and integrated orchestration and management functions in order to improve “end-to-end” manageability and programmability, derived from NGMN and similar requirements. While these are important challenges, this memo takes the perspective that, in order to arrive at a more powerful network, it is important to understand the pain points and the reasons for certain design choices in today’s networks. Understanding the drivers for traffic management systems, middleboxes, CDNs, and other application-layer overlays should be taken as a basis for analyzing 5G use cases and their requirements. In this memo, I am making the point that many of today’s business needs and the ambitious 5G use cases do call for a more powerful data forwarding plane, taking ICN as an example. Features of such a forwarding plane would include better support for heterogeneous networks (access networks and whole network deployments), multi-path communication, in-network storage, and the implementation of operator policies. This would help to avoid overlay silos and finally simplify network management.

Introduction

5G is the current title for much of the network system work in the telco industry these days. All companies, SDOs, and industry fora are now working on 5G technologies. There seems to be a rough consensus on requirements and use cases, and first proposals suggest that the design and the implementation will have something to do with SDN/NFV. In general, the assumptions are that 5G will be faster (thanks to better radio), larger (intended to cover more connected devices due to IoT, smart cities, new markets), more flexible (network programmability), and converged (unification of mobile and fixed network access/core).

The implications for the actual system architecture and the way we will communicate in the future are not that clear. Some of the current proposals seem to suggest a network platform that is going to provide significant application support in the network. Other proposals seem to be targeted at trying to apply SDN/NFV to the design.

In this memo, I am making the point that the design of a new system architecture and the formulation of its requirements should be based on a good understanding of the realities and problems of today’s networks. I am claiming that we should use the tools we have now developed, like NFV and SDN, but also knowledge about efficient transport, content distribution, and security, to rethink the network and system architecture.

I will start with a discussion of pain points in today’s networks before addressing popular 5G use cases as proposed by NGMN. I then assess a few 5G design options and conclude with a constructive proposal.

Notes:

  • The views presented here are my own.
  • This is based on a recent presentation I did on this topic. If you are interested, please find the presentation material here: “Security and Transport Performance in 5G“.

Today’s Networks

The commercial success of today’s mobile and fixed networks is clearly based on the success of the Internet and the web. Web applications are hugely popular, especially web-based video services. It’s a bit ironic that these applications are sometimes called Over-the-Top (OTT) applications (from a network operator perspective) — clearly, these are the applications; there are essentially no other applications that are of interest to users (except for audio telephony, which is still treated as a special application).

Current networks largely leverage Internet technologies (namely IP) — however, we had to develop a large set of additional gear to make a useful service out of them.

For example, mobility management: based on the “seamless connectivity” service requirement, LTE employs an anchor-point-based mobility approach, implemented through tunneling (either GTP- or Proxy-MIP-based). This concept lends itself to a centralized design with the usual inefficiency and scalability problems — hence people started inventing technologies like Selected IP Traffic Offload when they figured out that most users actually just want to access a web resource, for which seamless IP connectivity is not necessarily required. Current work in the IETF DMM WG and in 3GPP is concerned with generalizing this principle towards decentralized mobility management.

Most of the extra work needs to be done on the performance side: TCP proxies, traffic management systems, application traffic optimizers, CDNs.

Figure 1 contrasts the theoretical architecture with a more typical implementation.

Figure 1: Mobile Network Performance Enhancing Functions (Copyright 2015 NEC)

The motivation for this extra functionality is as follows:

  • TCP proxies are tools for mobile operators to tune network performance with respect to their requirements. TCP’s end-to-end congestion control does not work so well when it has to bridge heterogeneous networks with different causes for delays and packet loss. One of the reasons it does not work so well, especially in mobile networks, is actually the design of the system as a virtual-circuit-like service: significant buffering, variable latency, no AQM, no congestion notification. As a result, you end up with proxies that manipulate the flow/generation of ACKs to trick senders etc. This typically really helps performance — otherwise, I hope, these boxes would not be deployed, because they also create some problems. [Honda-2011]
  • Traffic management systems have a similar motivation: give operators a tool for implementing performance and capacity-sharing policies. There are quite different implementations of this concept, but in general these systems work like this: a centralized traffic management system collects real-time and long-term load- and performance-related data from base stations, routers, etc. and uses it to configure policers on gateways, base stations, and so on. The policies may be flow-specific (e.g., to reduce the current congestion contribution of a specific flow) or application-type-specific (enforce specific treatment of a group of flows). Surely, it does not sound like a terribly elegant or scalable approach — but it is done nevertheless because IP does not provide sufficient traffic management features itself, such that some of this could be done in-band. Another reason is that without AQM and ECN, such management-based approaches are perceived as the only option to have the network react to overload.
  • Application-traffic optimizers are mainly video optimizers these days. Their job is caching, pacing, and transcoding of video traffic, e.g., YouTube. There may be other purposes such as user behavior analytics, statistics, etc. These systems are implemented as a transparent chain of traffic classifiers, load balancers, and the actual application function. TCP/IP per se does not offer caching on the network/transport layer, and explicit HTTP proxies have interoperability problems, which motivates this implementation approach. Obviously, this will all become more difficult/expensive as more encryption is deployed, e.g., through HTTP/2.
  • Network/application server cooperation: an extended variant of traffic management is the Mobile Throughput Guidance proposal. This is about sharing base station and other relevant information with application servers outside the operator domain to enable applications (video senders) to adapt faster and more proactively. Again, this is done because of a perceived lack of corresponding network/transport layer functionality.
  • CDN deployment is ubiquitous these days. No major web service is deployed without it. CDNs are large-scale content distribution/management networks that provide functions such as proactive distribution, caching, transcoding, filtering, etc. There are different CDN providers, and some operators actually own or cooperate closely with CDN providers. A typical deployment is to run CDN nodes close to the operator network, e.g., at a co-location point, although there is a trend to move CDNs deeper into the network. CDNs are essentially large-scale application-traffic optimizers. But since CDN nodes are normally not on the direct path for all user traffic, they require explicit redirection, which is done through DNS-based resolution of DNS names to operator (telco or CDN provider) CDN nodes. And like on-path application traffic optimizers, CDNs have problems with encryption, i.e., they normally cannot intercept TLS communication between a user and an origin server without impersonating that server. The reason that TLS and CDNs still work today is that CDN nodes are configured with their own certificates for a certain domain (e.g., “cdn.example.com”) that are linked to a valid trust chain, so that users’ browsers accept those certificates. While this works, it should be mentioned that it is still problematic from an e2e encryption perspective: the user actually expected an encrypted communication channel between her application and the application server on the origin server, but what she gets is merely an encrypted connection to the next CDN node.
  • Transport encryption will proliferate very fast due to the integration of TLS into HTTP/2 and the “always encrypt” policy in major web browsers. It will see a significant uptake once CDNs start deploying it, i.e., also as adapters to legacy HTTP/1.1 servers. As mentioned above, it will render most of the existing traffic management and application traffic optimizers useless, or at least make them more expensive to use. This is creating quite some concern on the mobile operator side — which fuels current discussions on whether and how the network, user applications, and application servers should cooperate to exchange at least some traffic management information in the presence of ubiquitous encryption (cf. the IAB/GSMA MARNEW workshop). Unfortunately, according to some views at least, such management information cannot be (reliably) transferred in an IP or TCP header today, so there is discussion about creating overlay solutions with better support for signaling management and other meta information.

Summarizing: it is not surprising that we need a significant amount of gear in today’s networks to make them work and perform well. IP forwarding concepts and the whole network architecture were not designed for this scale of commercial deployment, for specific business needs, and for these performance requirements.

Unfortunately, we had to hack the system to some extent to get this functionality integrated: localized congestion control loops require transparent (and brittle) TCP proxies. The lack of in-network visibility of imminent congestion on multiple bottlenecks made us resort to management-based approaches, and the lack of network/transport caching as well as the lack of policy-based request forwarding gave us CDNs. I did not say much about problems, but the lack of true end-to-end security in the presence of connection-based security and CDNs is certainly a big one. The fact that CDNs and DNS-based cache selection are essentially only an overlay over the network shows when we try to do multipath communication in a CDN network. I could go on.

These things are really normal when systems grow over time and people learn what is needed, what worked well, what did not work so well, etc. At some point, you have learned enough that you can build a new system.

5G Use Cases and Requirements

The mobile operator industry has been trying to approach the 5G topic by formulating the following use cases in the NGMN 5G White Paper:

  • Broadband access in dense areas (“Pervasive Video”)
  • Broadband access everywhere (“50+ Mbps Everywhere”)
  • Higher user mobility (“High-Speed Train”)
  • Massive Internet of Things (“Sensor Networks”)
  • Extreme real-time communications (“Tactile Internet”)
  • Lifeline communications (“Natural Disaster”)
  • Ultra-reliable communications (“E-Health Services”)
  • Broadcast-like services (“Broadcast Services”)

The NGMN White Paper does not claim this list to be exhaustive. I would add Affordable Access as another use case, i.e., along the lines of what is discussed in the Global Access to the Internet for All (GAIA) community.

Also not explicitly mentioned are industry networks (also known as Industry 4.0 in some communities), i.e., the concept of 1) using Internet and virtual networking technology and platforms for factory networks and the like, and 2) interconnecting industry sites. Obviously, for both cases the challenge would be guaranteeing upper latency bounds and reliability when running over a multiplexed network infrastructure.

Finally, I would like to add the next, yet unknown, use case to the list, i.e., I would like to emphasize the need to keep the network open for future innovations that cannot be planned or imagined today. This has to do with permissionless innovation, avoiding in-network silos, and creating a powerful general-purpose platform.

Everyone has their own interpretation when it comes to deriving requirements, but in my view the following can be inferred:

  • 5G access will be much more heterogeneous with respect to link layer properties, bandwidth, latency, and availability. For example, extremely high frequency communication such as mmWave is sometimes mentioned. It would offer very high throughput and low latency, but only in very small cells, and it poses interesting challenges, for example ramping up sending rates in a TCP session with peers on the Internet, or managing connectivity for mobile users. Then there are very constrained networks in IoT scenarios, or cheap but low-bandwidth radios in GAIA scenarios. Finding a good network abstraction for all these different kinds of networks seems to be an interesting challenge.
  • Use cases such as broadband access everywhere and the tactile Internet require very low latency — especially the latter would not tolerate full path delay and would need some local communication options (e.g., through caching or edge computing).
  • Related to the increased heterogeneity, I also predict that mobile devices will have more simultaneous access options, i.e., they will be able to select interfaces, or use several in parallel, depending on performance requirements and cost constraints.
  • Lifeline communication, e.g., in disaster scenarios, calls for a network that is able to provide useful services in the presence of fragmentation, loss of core network connectivity, etc. The GreenICN project has investigated this intensively. Clearly, centralized control and gateways do not lend themselves to such scenarios.

5G Design Options

In the current design discussions I am aware of, there are a few ideas that come up frequently:

  • Data/control plane split through SDN: this is essentially the idea of designing switch capabilities and a programmable interface so that controllers can program GTP processing behavior. It is a straightforward idea for generalizing PDN-GW platforms, but it is clearly orthogonal to the requirements listed above.
  • Simplified mobile core: Accepting the fact that not all mobile applications need perfect mobility management and seamless connectivity, one idea is to simplify the architecture so that it provides a layered service stack, i.e., with a minimal baseline layer that is less complicated and less costly to operate. This could actually help with performance improvement goals.
  • Sliced network architecture: Potentially based on the simplified mobile core idea, there are also proposals to apply virtualization to the mobile network and to offer separate slices. There are two variants of this: multitenancy for MVNOs and Quality-of-Service slicing. Multitenancy for MVNOs is relatively straightforward and is essentially about allowing MVNOs deeper access to a physical network operator’s network by virtualizing most core and access network functions, including base stations.
  • Quality of Service Slicing: another view on slicing is to offer individual Quality-of-Service slices (like the not-so-frequently-used QCI classes in UMTS and LTE). For example, there would be a best-effort slice, an interactive multimedia slice, an IoT slice, etc. It is really like mapping traditional QoS to virtual network concepts — with similar problems: how would an operator know which slice configurations will be needed in the future? How would an application or a user select slices? How would such a system comply with network neutrality requirements — and how would it maintain the permissionless-innovation feature of the Internet?
  • (Deep) In-network caching and computing: For use cases such as “Tactile Internet”, but also for more mundane applications such as IoT gateways and caching in the access network, there are many ideas for moving such functions deeper into the network. Industry initiatives such as Mobile Edge Computing are pursuing this in a limited fashion today. Technically, this would be about managing IaaS and about shifting function containers to the right place in the network. More future-looking proposals assume arbitrary application-layer compute functions in arbitrary places in the network. There are different motivations by different players: network operators see this as an opportunity to “create value” for their networks, i.e., offering platforms that can host such functions. CDN providers see this as an opportunity to extend their platform, both in terms of reach and functionality (Akamai). If you extend a CDN massively, you could effectively run an overlay multicast distribution network. Again, the shortcomings of the underlying network and transport layers are motivating factors for doing this as an overlay.
  • Network service programmability and orchestration: Extending the in-network compute concept, one could also envision a distributed programmable platform that offers more flexible programmability than just pushing containers to specified locations. For example, a next-generation mobile-TV provider could operate a multicast overlay in an operator network, with function chains for caching, transcoding, etc. The distribution, run-time management, etc. would then be subject to an application-independent orchestration function (a toy service-chain descriptor follows this list). This idea is also motivated by the “value creation” proposition, i.e., network operators would provide the platform and orchestration functions to application service developers/providers (cf. the SONATA project).
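
To make the orchestration idea more tangible, here is a toy sketch of the service-chain descriptor referenced in the last item above, as an application-independent orchestration function might consume it. All field names are invented for illustration; real NFV orchestration uses descriptor formats such as those standardized by ETSI NFV.

    # Invented descriptor: which functions make up the service, where they
    # should run, and in which order traffic traverses them.
    mobile_tv_chain = {
        "service": "next-gen-mobile-tv",
        "functions": [
            {"name": "multicast-overlay", "placement": "core"},
            {"name": "transcoder", "placement": "edge"},
            {"name": "cache", "placement": "edge", "size_gb": 64},
        ],
        "order": ["multicast-overlay", "transcoder", "cache"],
    }

    def place(chain: dict) -> None:
        # A real orchestrator would map each function to an IaaS location
        # and wire up the chain; here we only print the intended placement.
        for fn in chain["functions"]:
            print(f"deploy {fn['name']} at {fn['placement']}")

    place(mobile_tv_chain)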

Silos in the Network

The last two approaches raise interesting questions as to how manageable the result would be. There are most likely many CDN providers who would want to run their functions deeper in the network. Then there are also specific applications that require some caching but would not want to use external CDN platforms, for example video-on-demand services. As a result, you could end up with a collection of silos, each with its specific overlay, as depicted in Figure 2.

Figure 2: Overlay Silos (Copyright 2015 NEC)

The “deep silo” approach is also motivated by the connection-based communication and security model. Because it is not really possible to share data in the network while maintaining security properties (such as access control and authenticity), we tend to build silos that are centered around the model of enabling a connection to a named server.

There is a particular risk associated with the “deep silo” approach and security. Assume a large number of virtualized CDN nodes, each of them maintaining certificates and private keys for the overall CDN service. Hardening these platforms so that none of them eventually leaks keys would be a major challenge. In general, running services on massively distributed software functions deep in the network carries risks like this — which makes the overall approach appear questionable in my opinion.

Centralized Control and Orchestration

The orchestration topic highlights a particular problem: the existing shortcomings of the network infrastructure with respect to its forwarding and self-management capabilities already require a worrying collection of management functions, as explained in the section “Today’s Networks”. Instead of empowering the network and removing the need for transparent middleboxes, overlays, etc., we might be taking the need for network management to the extreme — by adding more overlays, more application-layer functions in the network, etc.

This is exemplified by misapplying the SDN paradigm towards complete “end-to-end” network control with a network management mindset. Let me explain: SDN (OpenFlow in particular) was originally created as a programmatic interface to enterprise/campus networks that would allow implementing consistent security policies (isolating nodes on a (virtualized) network). That was motivated by the fact that this is difficult to achieve with the traditional control plane and network management tool set. Also, as mentioned above, IP is really limited with respect to traffic management support, hooks for policy implementation, etc.

With OpenFlow, a controller in a local domain can program forwarding and limited transformation rules into the switches in a network so that they can be treated as one virtual switch. This works in well-controlled domains (enterprise/campus networks, data centers) and can remove the need for some distributed control plane functions and protocols. Since large parts of mobile networks run in data centers, this is also a valid technology for 5G — as a tool to implement network control and to achieve better network flexibility and policy implementation.
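
As a concrete illustration of programming forwarding rules into switches, here is a minimal sketch using the Ryu OpenFlow controller framework (OpenFlow 1.3); the destination MAC address and output port are illustrative assumptions:

    from ryu.base import app_manager
    from ryu.controller import ofp_event
    from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
    from ryu.ofproto import ofproto_v1_3

    class MinimalForwarder(app_manager.RyuApp):
        OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

        @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
        def on_switch_connect(self, ev):
            dp = ev.msg.datapath
            parser = dp.ofproto_parser
            ofp = dp.ofproto
            # Program one forwarding rule: traffic to the given (illustrative)
            # MAC address leaves via port 1.
            match = parser.OFPMatch(eth_dst="00:00:00:00:00:01")
            actions = [parser.OFPActionOutput(1)]
            inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
            dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=10,
                                          match=match, instructions=inst))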

What (in my opinion) does not work so well is to elevate the SDN centralized control paradigm to a mantra for network architecture by applying the centralized control concept to the Internet. For example (slightly exaggerating), creating a powerful centralized controller for controlling base station radio communication, the transport network, the core network, middleboxes, and application servers is likely to create a complex and soon-ossified system with gigantic control overhead. Not only will you have to master the timing issues if you want to achieve fine-granular control across layers, you will also have to think about domain-to-domain controller interaction (“east/west interfaces”), etc. Anyone remember “Intelligent Networks”?

Instead, it would be more productive to think about desirable forwarding plane features and proper network abstractions for them — and then use SDN to control networks in a programmatic fashion, i.e., without fine-grained reactive control and without tying network management & orchestration to network programmability.

Way Forward

So, what to do about 5G? First of all, it is important to understand that “5G” is not going to be a sudden, major fork-lift upgrade of the network. It is really a label for an innovation effort, and we are going to see changes in phases.

  • Optimizing LTE system implementation through NFV and SDN is happening right now. I would also put “data/control plane split through SDN” in this category. I would not call this core 5G work — it does not change the system architecture and interfaces — but it is useful in the sense that we improve the infrastructure and explore the potential for more fundamental architecture changes.
  • Introduce modern AQM, ECN, and transport protocols NOW. A lot of progress has been made in past years (experimenting with ECN, improving fair queueing and AQM), and it is about time to get these technologies deployed (see the deployment sketch after this list), especially in the presence of ubiquitous encryption, when DPI-based traffic management has less leverage. It is really important to reduce latency further and to enable applications to respond and adapt to congestion faster. One work item here is to get the interworking of IP and link layer protocols right. In that context, it would also be useful to rethink capacity sharing and traffic management. For example, try to learn from the (experimental) IETF ConEx effort to find ways to combine performance, smarter capacity sharing than traditional TCP fairness, and incentives for applications to cooperate better — without requiring a complicated traffic management system to enforce this.
  • Enable competition and innovation on the network service provider side: This may sound odd at first, but in order to move towards 5G and its anticipated use cases, including GAIA-inspired services, it would be good if it were easier to start new services, not only as virtual services on top of existing networks. In that context, the FCC efforts for spectrum sharing between incumbents and new players are interesting.
  • Avoid “Intelligent Networks 2.0”. It may sound tempting to create super-powerful platforms for in-network services, APIs for service creation, etc. to create a more valuable network. There may even be a case for certain applications, for example IoT gateways. But be careful when defining use cases and requirements without actually talking to the stakeholders that are building Internet and web services. For example, a service like YouTube would benefit most from an efficient, low-latency bitpipe — not from a network service platform. The fundamental risk is that we build a very elaborate service platform with powerful orchestration etc. that is just too complicated and costly to use, or that impedes innovation by enforcing certain communication forms — so that application service providers would refrain from using it and do everything “over-the-top”. Or worse, they would start their own network services. If you don’t think this is possible, I recommend taking a look at Project Fi, Google’s MVNO approach. By the way, this is what happened to Intelligent Networks. Their problem was not that you could not build and operate networks that way — IN was just too inflexible for innovation, one of the factors that led to the development of SIP-based VoIP “over the top”.
  • Innovate on the forwarding plane. In order to address the performance requirements, especially considering increased access technology heterogeneity and more flexible network deployment options thanks to NFV and SDN, we need a more powerful forwarding plane that enables the network to better deal with local bottlenecks, multipath communication opportunities, in-network storage for local repair, data sharing, and rate adaptation. This would let the network handle many important optimizations itself — without requiring fine-grained control from network management. It would enable us to provide such functions in an efficient, application-independent way — without creating different silos with similar functionality entangled with application specifics.
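
The AQM/ECN deployment sketch referenced in the second item above, assuming a Linux host with the iproute2 “tc” tool; the interface name eth0 is an illustrative assumption:

    import subprocess

    # Replace the root queueing discipline with fq_codel and enable ECN
    # marking on that queue:
    subprocess.run(["tc", "qdisc", "replace", "dev", "eth0", "root",
                    "fq_codel", "ecn"], check=True)
    # Ask the TCP stack to negotiate ECN on outgoing connections:
    subprocess.run(["sysctl", "-w", "net.ipv4.tcp_ecn=1"], check=True)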

Powerful Forwarding Plane

The last point is the motivation for people to look into Information-Centric Networking (ICN) as a 5G forwarding plane. ICN is based on the notion of providing “access to named data” as the fundamental network service. Named data can be packets, Application Data Units, chunks, or objects. Data is secured (cryptographically bound to a name and/or origin) so that it does not need connection-based security. This facilitates application-independent caching in the network and other functions that today are done in application-specific silos.

ICN routers have better visibility of performance because they can measure interface/path performance in correlation with requested names — for every hop where this is needed. This enables a forwarding plane that is powerful enough to handle challenges such as intermittent connectivity, multiple local bottlenecks, and varying path performance — without adding too much complexity. Operators can configure different, powerful forwarding strategies on individual routers, which is key to supporting the different 5G use cases and heterogeneous access networks.
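
A toy sketch of the “access to named data” abstraction: a flat in-memory cache and a digest stand in for a real ICN stack’s signed packets and forwarding machinery, and the names are invented for illustration:

    import hashlib

    cache = {}  # name -> (data, digest), as held by any forwarding node

    def publish(name: str, data: bytes) -> None:
        # Bind the data to its name via a digest; a real ICN uses a signature.
        cache[name] = (data, hashlib.sha256(name.encode() + data).hexdigest())

    def request(name: str) -> bytes:
        # Any node holding the named object can answer; no connection to the
        # producer is needed, which is what enables in-network caching.
        data, digest = cache[name]
        assert digest == hashlib.sha256(name.encode() + data).hexdigest()
        return data

    publish("/example/video/segment/1", b"...")
    print(request("/example/video/segment/1"))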

Especially for 5G, ICN would make mobility management much easier — it would not need the current anchor-based mobility management schemes. For example, requestor handover is just a matter of (re-)requesting named data at the new network attachment point. ICN forwarding strategies and in-network caching would make this as seamless as today’s managed mobility.

There are different specific ideas on how to make use of ICN in 5G (e.g., Cisco’s). There are also other benefits, such as having a unified communication abstraction for both the mobile network part of 5G and IoT networks, that would be better discussed in a separate posting. The important notions are as follows:

  • We have learned much about the functionality required to make the Internet useful for diverse sets of commercial and non-commercial applications. For many of those functions we had to resort to application-layer overlays and elaborate network management support. With that knowledge we can now redesign the interplay of the network layer, the transport layer, applications, and network management to build better networks.
  • The key question to me is to find a suitable network and forwarding plane abstraction, i.e., to define the capabilities of nodes in the network and find a good function split between the forwarding plane, SDN control, and network management (the latter two are different things). The general approach to simplification should be to only do things in network management that you cannot do on the network layer. ICN is just one example of how to design such a function split — and it illustrates the benefits.
  • The named data approach is a better fit for modern communication requirements. It provides object security and enables data consumption independent of the current source of the bits, which in turn is a prerequisite for in-network caching, device-to-device communication, and delay-tolerant communication, all of which are deemed critical for 5G. We have moved from physical circuits to TCP connections — it is now time to go one step further, from telephony towards networked computing.

You might ask what this has to do with SDN and NFV. As mentioned above, SDN and NFV are really network implementation approaches and infrastructure operation improvements. NFV is obviously an enabler for innovation in the sense that it enables and automates the deployment of software in the network, including ICN functions. ICN could very well be implemented with SDN.

In fact, ICN may actually enable an interesting evolution of today’s OpenFlow model. In SDN for IP (take OpenFlow as an example), you have to deal with the fact that endpoint identity and next-hop forwarding information are entangled in IP addresses. Consequently, SDN applications typically implement the desired forwarding behavior through header rewriting in order to interwork with existing infrastructure (and to encode additional information in packet headers). Software-Defined ICN would instead be about programming Forwarding Information Bases and configuring forwarding strategies and caching policies — a more pro-active, genuinely programming-like approach. IP SDN and ICN SDN could well coexist, for example in separate slices of a shared infrastructure.
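
To make the contrast concrete, here is a toy controller-side sketch of what “programming FIBs and forwarding strategies” could look like; the API, face identifiers, and strategy names are all invented for illustration:

    class IcnRouterConfig:
        """Invented, illustrative configuration object for one ICN router."""

        def __init__(self):
            self.fib = {}         # name prefix -> list of next-hop faces
            self.strategies = {}  # name prefix -> forwarding strategy name

        def add_route(self, prefix: str, faces: list, strategy: str = "best-route"):
            self.fib[prefix] = faces
            self.strategies[prefix] = strategy

    router = IcnRouterConfig()
    # Video may be fetched over several faces in parallel; IoT data over one.
    router.add_route("/video/", ["face-7", "face-42"], strategy="multipath")
    router.add_route("/iot/home/", ["face-3"])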

Again, the important notion for 5G is to emphasize networking capabilities and abstraction — with a focus on performance, application-independence and openness to innovation. The question is not so much whether we should do that or not — but rather who is going to do it. Cisco’s Paul Mankiewich, SP Mobility CTO, has expressed this as follows:

If the network operator industry fails to create an ICN-like architecture then someone like Google will and they will put it behind the SP’s IP transport network.

In fact, Google has many ingredients for that already: Project Fi as a virtual bitpipe across service providers’ networks, QUIC as a vehicle for redesigning transport and application layer protocols, and the Google CDN and the whole Google cloud as the infrastructure platform.

In that sense, it might not be too unreasonable to say that those who refuse to learn from the history of Intelligent Networks are doomed to repeat it.

 

Written by dkutscher

December 16th, 2015 at 7:31 pm

Posted in Posts


2015 ACM SIGCOMM ICN Conference has started

without comments

The 2015 ICN conference has started in San Francisco today!

Program Overview

Wednesday

  • Tutorials on CCN and NDN
  • Posters and demonstrations

Thursday

  • Keynote by Van Jacobson: Improving the Internet with ICN
  • Paper presentations on Routing, Node Architectures
  • Panel: ICN — next two years
  • Poster Presentations

Friday

  • Paper presentations on In-Network Caching, Content & Applications, Security
  • Posters and demonstrations

 

 

Written by dkutscher

September 30th, 2015 at 6:53 pm

Posted in Events


Managing Radio Networks in an Encrypted World

without comments

I attended last week’s IAB/GSMA Workshop on Managing Radio Networks in an Encrypted World (MaRNEW).

The motivation for this workshop was the increasing trend towards transport-layer end-to-end encryption in major web applications such as Google services, YouTube, Netflix, Facebook, and others. This trend will likely accelerate with the further deployment of HTTP/2, for which client implementations today try to set up TLS connections by default.

In mobile networks, traffic management, but also additional services/functions, have traditionally relied on being able to leverage knowledge about application types and application specifics. Examples of such functions include policing/prioritization, optimized scheduling, caching, and filtering, but also tracking, ad insertion, etc. In addition to functions that operators want to apply, there are also regulatory requirements (depending on local legislation) for filtering, lawful interception, etc. that would become more difficult in the presence of ubiquitous encryption.

At the MaRNEW workshop, leading experts from network operators, vendors, application service providers, CDN providers, and academic institutions discussed the impact of ubiquitous encryption as well as ideas for effective collaboration between the network, applications, and users to achieve optimal performance and resource efficiency.

In particular, the workshop addressed the following topics:

  • Understanding the bandwidth optimization use cases particular to radio networks;
  • Understanding existing approaches and how these do not work with encrypted traffic;
  • Understanding reasons why the Internet has not standardised support for legal interception and why mobile networks have;
  • Determining how to match traffic types with bandwidth optimization methods;
  • Discussing minimal information to be shared to manage networks but ensure user security and privacy;
  • Developing new bandwidth optimization techniques and protocols within these new constraints;
  • Discussing the appropriate network layer(s) for each management function; and
  • Cooperative methods of bandwidth optimization and issues associated with these.

Encryption: Technological and Business Aspects

It is not a secret that there are different aspects to discussing end-to-end encryption in public networks. Obviously, encryption helps with user privacy, and against the background of recent and current revelations of privacy breaches through pervasive monitoring, it has become common agreement that more (easily deployable) encryption would be useful.

There is, however, also the business perspective: the Internet, and specifically the ecosystem of mobile communication and service provision, has multiple stakeholders, each with their particular interests: network operators want to provide a useful service in an economical way and may have an interest in enhancing the overall service quality through various technical measures. Application service providers want their particular service to perform well over a range of different networks. Network equipment vendors have their product roadmaps and network architecture preferences, etc.

Finally, there are the actual users of the system, who have an interest in good quality of experience, cost-efficiency — and privacy. Privacy is not only a concern with respect to (illegal) pervasive monitoring by agencies, but also with respect to maintaining anonymity and confidentiality towards network and service providers. For many applications, user profiles, user-generated data, etc. are also key business assets — so there is a strong interest by different players to either get access to that data or (depending on the nature of a player) to keep other players from accessing it — through encryption.

The MaRNEW workshop focused on the technological discussion.

Impact of Encryption

During the discussion, the following main impacts of ubiquitous encryption on mobile networks were identified:

  • Traditional ways of identifying and classifying network traffic (DPI) become more costly and potentially infeasible.
  • Traditional traffic management systems have relied on such classification, for different purposes: optimizing resource usage in access networks according to operator policies, forwarding traffic through optimizers, caches, etc., as well as filtering. Those approaches and the actual requirements behind them need to be revisited.
  • Content and service provisioning in both mobile and fixed networks today relies heavily on CDNs and in-network application functions. In addition, new approaches such as Mobile Edge Computing may shift more of these functions into access networks. The motivation is to provide better performance and cost efficiency by offloading networks (CDN cache hits) and by reducing latency and improving transport protocol performance (local control loops, reduced RTT to caches). Introducing more and more end-to-end encryption makes it impossible for operators to provide any application- (or CDN-provider-)independent optimization functions. The alternative of running individual instances for each individual CDN provider does not seem promising. It could also be a major roadblock for future network and application innovation — because each of those individual functions might require upgrading to introduce in-network support for it.

Way Forward

Figure: Cooperative Traffic Management (Copyright 2015 NEC)

At the workshop, different solutions were discussed.

  • First, it was agreed that the actual impact needs to be understood better and ought to be quantified. For example, assuming that some knowledge about application types (or corresponding service quality expectations) could be leveraged by base stations for more efficient transmission scheduling (e.g., by delaying packets of non-latency-sensitive flows or by operating multiple queues for different flow types), networks should at least be able to obtain corresponding hints from senders. However, the actual impact and potential benefits have to be demonstrated. Operators will work on that issue.
  • The (Internet) transport protocol community has made significant progress in recent years on several fronts: Active Queue Management (AQM) such as fq_codel and PIE have been demonstrated to be able to improve load balancing and reduce latency in router queues. Moreover, transport protocol research has led to promising results (for example PCC — Performance-oriented Congestion Control). It was suggested that those mechanisms should be implemented and deployed where possible.
  • Several options for cooperative traffic management were discussed. For example, this could include exchanging certain information between the network and senders/receivers. The network could inform endpoints better about congestion and non-congestion-induced problems (for example, in an extended-ECN fashion), or endpoints could inform the network about relevant meta information (application type, QoS requirements, etc.). The latter could leverage existing technologies such as DiffServ (see the marking sketch after this list). Potentially, it would be sufficient to distinguish delay-sensitive flows (e.g., for interactive real-time applications) from delay-tolerant flows (file downloads etc.). One interesting question is how endpoints would be incentivized to use such signaling correctly and what corresponding APIs would look like.
  • Overcoming the general limitations of connection-based security and its tendency to require application-specific (or CDN-provider-specific) in-network functions could require a more fundamental rethinking of network architecture and protocol layering. For example, Information-Centric Networking (ICN) would leverage object security (authentication, encryption), thus enabling the network to implement functions such as caching and local transport strategies in an application-independent manner. This could be of particular relevance for 5G networks, where a higher level of dynamicity in the creation and deployment of new OTT services is expected.
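
A minimal sketch of the DiffServ-based signaling mentioned in the list above: the sender marks a delay-sensitive UDP flow with the Expedited Forwarding DSCP codepoint. Whether the network honors the marking is operator policy; the destination address is illustrative, and IP_TOS is the Linux socket option:

    import socket

    # DSCP occupies the upper six bits of the former TOS byte, hence the
    # shift; 46 is the Expedited Forwarding (EF) codepoint.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, 46 << 2)
    sock.sendto(b"latency-sensitive payload", ("192.0.2.10", 5004))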

For the discussion of such solutions, I (together with several colleagues) have made two contributions to the workshop: 1) Enabling Traffic Management without DPI, and 2) Maintaining Efficiency and Privacy in Mobile Networks through Information-Centric Networking.

Enabling Traffic Management without DPI

Is DPI really needed for traffic management in mobile networks? Our position is “no”. Traffic management is usually realized through relatively simple mechanisms like rate shaping, prioritization, and dropping packets. Compared to these mechanisms, the semantics of applications that can be exposed through DPI are much richer; traffic classification maps these semantics down to a simple set of categories anyway.

The question then arises whether operators are really helped by brittle, insecure, and expensive mechanisms that extract high-fidelity application information only to map it down to coarse categories, or whether simple signaling would suffice for traffic classification for mobile network management purposes.

Obviously, when relying on endpoints to signal information about the underlying application, which may be used to change the network’s treatment of that application’s traffic, questions of trust arise: how can the network be sure the endpoints are being honest, and prevent endpoints from gaming the system to their advantage (and the disadvantage of others)? Can these signaling approaches be used as an attack vector? Here, the approach is to define the vocabulary of the signaling protocol to properly incentivize honest cooperation, while allowing the network to verify this cooperation.

We discuss two application-independent approaches to traffic management that are based on network-compatible metrics: ConEx policing and low-latency support with SPUD.


Congestion Exposure (ConEx) is a mechanism that enables senders to inform the network about previously encountered congestion in flows, thus enabling senders and the network infrastructure to respond to congestion based on operator policies. This information is provided in the IP header and can still be accessed even if the payload is encrypted. ConEx information is auditable by comparing the congestion level at the network egress to the ConEx signal, which incentivizes the sender to state its congestion contribution correctly.

Using ConEx would allow for a bulk packet traffic management system that does not have to consider application classes. Instead, with ConEx, accurate downstream path information on incipient congestion is visible to ingress network operators. This information can be used to base traffic management on the actual current cost (i.e., each flow’s contribution to congestion) and enables operators to apply congestion-based policing/accounting depending on their preference and independent of application characteristics. Such traffic management would be simpler, more robust (no real-time identification of application types, no static configuration of application classes), and provide better performance, as decisions can be taken based on the actual cost contribution at each point in time.
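
A toy sketch of what such congestion-based policing at a network ingress could look like: meter each sender’s declared congestion volume (bytes of ConEx-marked packets) against an allowance, with no application classification at all. The token-bucket shape and the allowance value are illustrative assumptions, not part of the ConEx specification:

    ALLOWANCE_BYTES_PER_SEC = 10_000  # illustrative congestion allowance

    class CongestionPolicer:
        def __init__(self):
            self.tokens = float(ALLOWANCE_BYTES_PER_SEC)

        def refill(self, elapsed_seconds: float) -> None:
            # Replenish the allowance as time passes, up to the cap.
            self.tokens = min(ALLOWANCE_BYTES_PER_SEC,
                              self.tokens + elapsed_seconds * ALLOWANCE_BYTES_PER_SEC)

        def admit(self, packet_len: int, conex_marked: bool) -> bool:
            if not conex_marked:
                return True  # only congestion-causing traffic consumes allowance
            if self.tokens >= packet_len:
                self.tokens -= packet_len
                return True
            return False     # over allowance: drop or de-prioritize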

The Substrate Protocol for User Datagrams (SPUD) is a new approach to selective information exposure designed to support transport evolution. SPUD is realized as a shim between UDP and an (encrypted) transport protocol. The basic SPUD protocol provides minimal sub-transport functionality by grouping packets together into tubes and signaling the start and end of a tube.

This assists middleboxes in state setup and teardown along the path. Further, SPUD provides an extensible signaling mechanism based on a type-value encoding for associating properties with individual packets or with all packets in a tube. The SPUD protocol can be used to signal low-latency requirements from an endpoint to the network, or to expose the existence of support for such services from the network to the endpoint. We therefore propose four SPUD signals: a latency-sensitivity flag, a signal to yield to another tube, an application preference for a maximum single-queue delay, and a facility to discover the maximum possible single-queue length along the path.

Based on the latency-sensitivity flag, a network operator can implement an additional service (compared to today’s best-effort service) that uses smaller queues and/or different AQM parameters without changing the service that is provided today. Signaling of lower queue priority or maximum single-hop delay can further be used to preferentially drop packets of the same sender or within one flow. Information about expected queuing delays on the path can be used for buffer configuration at the endpoints.
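
A toy sketch of the shim idea: a tube identifier and a latency-sensitivity flag travel in front of the (encrypted) transport payload, visible to the path without exposing the payload. This wire format is invented for illustration and is not the actual SPUD prototype encoding; the destination address is also illustrative:

    import socket
    import struct

    def spud_packet(tube_id: int, latency_sensitive: bool, payload: bytes) -> bytes:
        # 8-byte tube ID + 1-byte flag, then the opaque transport payload.
        return struct.pack("!QB", tube_id, int(latency_sensitive)) + payload

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(spud_packet(0x1234, True, b"encrypted transport payload"),
                ("192.0.2.20", 4433))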

The proposal is not intended as a blueprint for immediate implementation — but it demonstrates how cooperative traffic management could be implemented. In our view, cooperative traffic management requires a solid understanding of the interactions with the transport layer and the corresponding performance impacts/improvements.

Maintaining Efficiency and Privacy in Mobile Networks through Information-Centric Networking

We present a solution to overcome the impasse of deploying confidentiality at the cost of breaking most current traffic engineering in mobile networks. Our proposal is based on Information-Centric Networking (ICN), a data-centric network architecture that gracefully incorporates security and traffic optimization.

Content-based security, instead of connection-based security, is the foundation of the Information-Centric Networking (ICN) architecture. In ICN, we provide a network service that directly implements the desired information-access abstraction: the network forwards requests for named data and corresponding responses containing the data. The name can be cryptographically bound to the data to ascertain authenticity. This enables the network to replicate data objects in arbitrary locations, thus enabling ubiquitous caching. Object data can also be encrypted for user privacy, leaving other network-relevant information, such as the name, intact – thus maintaining options for traffic management, policing, etc. The performance gains of having ICN in the mobile backhaul have been evaluated experimentally (see paper). ICN incorporates these ideas into a novel network layer, providing all of the mentioned objectives without man-in-the-middle-like solutions.

ICN secures the data itself by requiring producers to cryptographically sign every data packet: the signature constitutes the integrity metadata. The data is uniquely identified by a name that is bound to the data via the signature. The producer’s public key, needed for signature verification, can be obtained via the KeyLocator field, which can carry the name of a data object containing the producer’s key. Authentication is implemented via the producer’s key and makes use of a trust model (e.g., PKI or web of trust) that can be extended using key chaining to delegate trust to different sub-namespaces (for hierarchical naming). Confidentiality is obtained by encrypting the data payload. Notice that authenticity, integrity, and confidentiality are independent features.
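
A minimal sketch of “name bound to data via a signature”, using an Ed25519 key from the Python cryptography package; the name, payload, and single-packet framing are illustrative, and a real ICN packet would also carry the KeyLocator and other fields:

    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    producer_key = Ed25519PrivateKey.generate()
    name = b"/example/sensor/temp/1"
    payload = b"21.5C"
    # Signing name and payload together binds the data to its name.
    signature = producer_key.sign(name + payload)

    # Any node (cache or consumer) can verify with the producer's public key,
    # e.g., obtained via the name carried in a KeyLocator-style field.
    producer_key.public_key().verify(signature, name + payload)  # raises if tampered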

Once data is published by the producer, it can be stored in any location without affecting its security properties, which are location independent. Inter-networking of encrypted data is included by design in ICN, and in-network caching is always possible, with or without confidentiality. Authenticity might not be necessary in many cases, so authentication of the producer’s identity is optional. Neither is it mandatory to verify the integrity of the data by verifying the signature. It is important to remark that ICN disentangles authenticity, privacy, and integrity, so that they can be handled in different ways and without the interaction of end hosts.

TLS provides web security by encrypting a layer-4 connection between two hosts. Authenticity is provided by certification authorities and a public key infrastructure to authenticate the web server, and confidentiality by a symmetric cipher at the two endpoints based on a negotiated key. In the presence of TLS, many networking operations become infeasible: filtering, caching, acceleration, transcoding.

ICN takes a radically different approach to guaranteeing confidentiality, authenticity, and integrity by embedding them into a redefined network layer. Indeed, ICN builds on the abstraction of data being requested, accessed, cached, and forwarded by name: the network forwards requests coming from the consumer for named data and routes data packets back on the identical reverse path (symmetric routing).

The ICN communication model allows network nodes between a web server and a web client to operate as forwarding and storage functions and to implement various inter-networking functionalities, like caching or load balancing, without relaxing any security feature. As a fully fledged data-centric network architecture, ICN incorporates mobility, storage, security, and multi-point communication by design.

Written by dkutscher

September 28th, 2015 at 12:49 am

ICN-2015 Conference Program

without comments

Join us for the ICN-2015 Conference in San Francisco from Sep. 30 to Oct. 2.

ACM ICN is an annual conference of the ACM Special Interest Group on Data Communication (SIGCOMM) on information-centric networking.

In a nutshell, this year’s conference includes:
– 1 keynote given by Van Jacobson
– 19 full papers presented in single track format
– 8 posters
– 10 demos
– 2 full-day tutorials
– 1 industrial panel

Conference details:
http://conferences.sigcomm.org/acm-icn/2015/

Registration details:
http://www.regonline.com/icn2015

Keynote:
– Van Jacobson, Internet pioneer and core architect of Named Data Networking (NDN), will talk about “Improving the Internet with ICN”.

Tutorials:
– CCN: Practical CCNx – Protocol and Code
– NDN: Security & Synchronization in Named Data Networking (NDN)

Panel:
– Next Steps for ICN: Research, Applications, Deployment and Economics

Topics of papers, posters, and demos include:
– Architecture design and evaluation
– Comparison of ICN architecture proposals
– Limits and limitations of ICN architectures
– ICN evaluation methodology and metrics
– Evaluation of ICN benefits
– Analysis of scalability issues in ICN
– ICN enabled applications
– Routing in ICN networks
– Mobility support
– Trust management
– Access control mechanisms
– ICN economics and business models
– Tools and experimentation facilities
– Measurement methodologies
– Experience from implementations and experiments
– Specific scenarios and implementation approaches
– Feasibility studies for high speed networking
– Privacy
– ICN Deployment
– ICN APIs

Check out the program.

Written by dkutscher

August 20th, 2015 at 10:42 am

Posted in Events
