Dirk Kutscher

Personal web page

Archive for the ‘Publications’ Category

Can the DNS Still Be Saved?


The new episode of our podcast Neulich im Netz is about names on the Internet, i.e., about the Domain Name System (DNS). We talk about basic DNS functions, the significance the DNS has for the Internet and the Web, and about applications such as tracking and traffic steering that one might not necessarily associate with name resolution.

We discuss to what extent the technical design of the DNS and the way it is used today lead to security problems, and we assess some proposed improvements. Can the DNS in its current form still be saved? What are the chances? These and other questions in the second episode of Neulich im Netz.

Written by dkutscher

June 9th, 2021 at 11:01 am

Re-Thinking LoRaWAN


Low-power, long-range radio systems such as LoRaWAN represent one of the few remaining networked system domains that still feature a complete vertical stack, with special link- and network-layer designs independent of IP. Similar to local IoT systems for low-power networks (LoWPANs), the main service of these systems is to make data available at minimal energy consumption, but over longer distances. LoRaWAN (the system that comprises the LoRa PHY and MAC) supports bi-directional communication, provided the IoT device has the energy budget. Application developers interface with the system through a centralized server that terminates the LoRaWAN protocol and makes data available on the Internet.

While LoRaWAN applications typically provide access to named data, the existing LoRaWAN stack does not support this way of communicating. LoRaWAN is device-centric and is generally designed as a device-to-server messaging system -- with centralized servers that serve as rendezvous points for accessing sensor data. The current design imposes rigid constraints and does not facilitate accessing named data natively, which results in many point solutions and dependencies on central server instances.
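To make the contrast concrete, here is a minimal sketch of the two communication styles -- a device-centric uplink that only the central server can interpret, versus a request for named data that any node holding the reading can satisfy. The names, types, and fields are purely illustrative, not the actual LoRaWAN or ICN-over-LoRa message formats:

```python
from dataclasses import dataclass
from typing import Optional

# Device-centric LoRaWAN style: the exchange is organized around device
# identity; consumers must know which device produced a reading and must
# go through the central network/application server to obtain it.
@dataclass
class LoRaWANUplink:
    dev_eui: str        # device identity drives the whole exchange
    f_port: int
    payload: bytes      # opaque to the network; parsed only at the server

# Information-centric style: the request names the data, not the device.
@dataclass
class Interest:
    name: str           # e.g. "/campus/greenhouse3/temperature/latest"

@dataclass
class Data:
    name: str
    content: bytes
    signature: bytes    # binds name to content; verifiable anywhere

def answer_from_store(interest: Interest, store: dict) -> Optional[Data]:
    """Any node holding the named reading can satisfy the Interest --
    a gateway cache, a neighboring node, or the sensor itself."""
    return store.get(interest.name)
```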

In our demo paper & presentation at ACM ICN-2020, we therefore describe how Information-Centric Networking could provide a more natural communication style for LoRa applications and how ICN could help to conceive LoRa networks in a more distributed fashion compared to today's mainstream LoRaWAN deployments. For LoWPANs (e.g., 802.15.4 networks), ICN has already been demonstrated to be an attractive and viable alternative to legacy integrated special-purpose stacks -- we believe that LoRa communication provides similar opportunities.

Watch Peter Kietzmann's talk about it here:

Written by dkutscher

October 6th, 2020 at 10:39 pm

Posted in Events, IRTF, Projects, Talks


Compute First Networking (CFN): Distributed Computing meets ICN


Edge- and, more generally, in-network computing is receiving a lot of attention in research and industry fora. What are the interesting research questions from a networking perspective? In-network computing can be conceived in many different ways -- from active networking, data plane programmability, running virtualized functions, and service chaining, to distributed computing. Modern distributed computing frameworks and domain-specific languages provide a convenient and robust way to structure large distributed applications and deploy them on either data center or edge computing environments. However, current systems suffer from the need for a complex underlay of services to run effectively on existing Internet protocols. These services include centralized schedulers, DNS-based name translation, stateful load balancers, and heavy-weight transport protocols.

Over the past years, we have been working on alternative approaches, trying to integrate networking and computing in new ways, so that distributed computing can leverage networking capabilities directly and optimize the usage of networking and computing resources in a holistic fashion.

From Application-Layer Overlays to In-Network Computing

Domain-specific distributed computing languages like LASP have gained popularity for their ability to simply express complex distributed applications like replicated key-value stores and consensus algorithms. Associated with these languages are execution frameworks like Sapphire and Ray that deal with implementation and deployment issues such as execution scheduling, layering on the network protocol stack, and auto-scaling to match changing workloads. These systems, while elegant and generally exhibiting high performance, are hampered by the daunting complexity hidden in the underlay of services that allow them to run effectively on existing Internet protocols. These services include centralized schedulers, DNS-based name translation, stateful load balancers, and heavy-weight transport protocols.

We claim that, especially for compute functions in the network, it is beneficial to design distributed computing systems in a way that allows for a joint optimization of computing and networking resources by aiming for a tighter integration of computing and networking. For example, leveraging knowledge about data location, available network paths and dynamic network performance can improve system performance and resilience significantly, especially in the presence of dynamic, unpredictable workload changes.

The above goals, we believe, can be met through an alternative approach to network and transport protocols: adopting Information-Centric Networking as the paradigm. ICN is conceived as a networking architecture based on the principle of accessing named data, and specific systems such as NDN and CCNx have accommodated distributed computation by adding support for remote function invocation -- for example, in Named Function Networking (NFN) and RICE (Remote Method Invocation in ICN) -- and distributed data set synchronization schemes such as PSync.
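As a rough illustration of the general idea behind remote function invocation over named data -- the naming convention below is invented for the example and is not the actual NFN or RICE encoding -- an invocation can itself be encoded as a name, so that its (immutable) result becomes an ordinary named, cacheable data object:

```python
import hashlib
import json

def invocation_name(func_name: str, params: dict) -> str:
    """Encode a function invocation as a hierarchical name. Identical
    invocations map to identical names, so results can be reused from
    any cache that holds them."""
    digest = hashlib.sha256(
        json.dumps(params, sort_keys=True).encode()).hexdigest()[:16]
    return f"/exec/{func_name}/{digest}"

result_store = {}   # stands in for in-network caches / repositories

def invoke(func, params: dict):
    """Return the named result if it exists anywhere; otherwise compute
    at a node hosting the function and publish the result under its name."""
    name = invocation_name(func.__name__, params)
    if name in result_store:
        return result_store[name]
    result = func(**params)
    result_store[name] = result
    return result

def add(x, y):
    return x + y

assert invoke(add, {"x": 1, "y": 2}) == 3   # computed once
assert invoke(add, {"x": 1, "y": 2}) == 3   # satisfied by name from the store
```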

Introducing Compute First Networking (CFN)

We propose CFN, a distributed computing environment that provides a general-purpose programming platform with support for both stateless functions and stateful actors. CFN can lay out compute graphs over the available computing platforms in a network to perform flexible load management and performance optimizations, taking into account function/actor location and data location, as well as platform load and network performance.

We have published a paper about CFN at the ACM ICN-2019 Conference that is being presented in Macau today by Michał Król. The paper makes the following contributions:

  1. CFN marries a state-of-the-art distributed computing framework to an ICN underlay through RICE (Remote Method Invocation in ICN). This allows the framework to exploit important properties of ICN such as name-based routing and immutable objects with strong security properties.
  2. We adopted the rigorous computation graph approach to representing distributed computations, which allows all inputs, state, and outputs (including intermediate results) to be directly visible as named objects. This enables flexible and fine-grained scheduling of computations, caching of results, and tracking state evolution of the computation for logging and debugging.
  3. CFN maintains the computation graph using Conflict-free Replicated Data Types (CRDTs) and realizes them as named ICN objects. This enables the implementation of an efficient and failure-resilient fully-distributed scheduler (see the sketch after this list).
  4. Through evaluations using ndnSIM simulations, we demonstrate that CFN is applicable to a range of different distributed computing scenarios and network topologies.
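To illustrate the third point, here is a minimal sketch of the CRDT idea using a grow-only set (the graph-node names are invented for the example; the actual CFN data structures are described in the paper). Replicas can be updated independently and merged in any order, and they always converge to the same state -- which is what makes a fully distributed scheduler robust against failures:

```python
class GSet:
    """Grow-only set CRDT: replicas add elements independently; merging is
    set union, which is commutative, associative, and idempotent, so all
    replicas converge without any coordination."""
    def __init__(self):
        self.items = set()

    def add(self, item):
        self.items.add(item)

    def merge(self, other: "GSet"):
        self.items |= other.items

# Two workers extend the computation graph concurrently...
a, b = GSet(), GSet()
a.add(("/cfn/graph/node/17", "scheduled-on=/nodes/edge-3"))
b.add(("/cfn/graph/node/18", "awaits=/cfn/graph/node/17/result"))

# ...then exchange their states (as named ICN objects) and merge.
a.merge(b)
b.merge(a)
assert a.items == b.items   # replicas have converged
```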


Written by dkutscher

September 25th, 2019 at 3:56 am

Posted in Publications


RFC 7927: Information-Centric Networking (ICN) Research Challenges


We (ICNRG) published RFC 7927 on Information-Centric Networking (ICN) Research Challenges.

This memo describes research challenges for Information-Centric Networking (ICN), an approach to evolve the Internet infrastructure to directly support information distribution by introducing uniquely named data as a core Internet principle. Data becomes independent from location, application, storage, and means of transportation, enabling or enhancing a number of desirable features, such as security, user mobility, multicast, and in-network caching. Mechanisms for realizing these benefits are the subject of ongoing research in the IRTF and elsewhere. This document describes current research challenges in ICN, including naming, security, routing, system scalability, mobility management, wireless networking, transport services, in-network caching, and network management.

Information-Centric Networking (ICN) is an approach to evolve the Internet infrastructure to directly support accessing Named Data Objects (NDOs) as a first-order network service. Data objects become independent of location, application, storage, and means of transportation, allowing for inexpensive and ubiquitous in-network caching and replication. The expected benefits are improved efficiency and security, better scalability with respect to information/bandwidth demand, and better robustness in challenging communication scenarios.

ICN concepts can be deployed by retooling the protocol stack: name-based data access can be implemented on top of the existing IP infrastructure, e.g., by allowing for named data structures, ubiquitous caching, and corresponding transport services, or it can be seen as a packet-level internetworking technology that would cause fundamental changes to Internet routing and forwarding. In summary, ICN can evolve the Internet architecture towards a network model based on named data with different properties and different services.
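The following toy forwarder (illustrative Python, not an actual NDN or CCNx implementation) shows the core of name-based data access: a request carries a name rather than a host address, so any node holding the named object can answer it, and responses can be cached wherever they pass:

```python
class ICNNode:
    """Toy node: satisfy requests for named data from the local content
    store if possible; otherwise forward upstream and cache the response
    on the way back (in-network caching)."""
    def __init__(self, upstream=None):
        self.content_store = {}
        self.upstream = upstream

    def publish(self, name: str, data: bytes):
        self.content_store[name] = data

    def get(self, name: str):
        if name in self.content_store:        # cache hit: served locally
            return self.content_store[name]
        if self.upstream is None:
            return None
        data = self.upstream.get(name)        # forward by name, not address
        if data is not None:
            self.content_store[name] = data   # cache for later consumers
        return data

origin = ICNNode()
origin.publish("/example/movies/trailer/seg1", b"...")
edge = ICNNode(upstream=origin)
edge.get("/example/movies/trailer/seg1")   # fetched from origin, cached at edge
edge.get("/example/movies/trailer/seg1")   # now a local cache hit
```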

This document presents the ICN research challenges that need to be addressed in order to achieve these goals. These research challenges are seen from a technical perspective, although business relationships between Internet players will also influence developments in this area. We leave business challenges for a separate document, however. The objective of this memo is to document the technical challenges and corresponding current approaches and to expose requirements that should be addressed by future research work.


Written by dkutscher

August 9th, 2016 at 3:51 pm

Posted in IETF, Publications


RFC 7778: Mobile Communication Congestion Exposure


Mobile network designs have to meet several requirements that at first sight seem contradictory: maximize resource utilization, provide optimal performance (user-perceived quality of experience), enable operator-defined "fair usage" policies, maintain user privacy, and minimize management complexity.

For 5G networks, virtual network slicing is often mentioned as one of the desirable properties, i.e., the ability to run virtual networks for different application classes (service slicing) or different customer groups (MVNOs etc.) over the same physical infrastructure. Virtualizing networks over a larger set of shared resources (radio networks, backhaul, data centers) requires effective and efficient means for capacity sharing.

Capacity sharing can be done in different ways: traditionally, telco network capacity sharing has been inspired by telephony network architectures, with an emphasis on control-plane-based monitoring, resource allocation, and configuration. Such approaches often involve traffic management systems that monitor the performance, load, etc. of network elements, analyze traffic properties (for example, via DPI-based traffic inspection), and configure network elements such as base stations and gateways to implement certain rate limits based on operator policies.

Three trends make this difficult in present and future networks:

  1. with virtualization, slicing, etc., the effort of analyzing every single tenant's flows can become prohibitive;
  2. encryption-by-default with HTTP/2 and other protocols that employ connection-based encryption renders DPI-based approaches costly at best -- if not impossible; and
  3. Internet protocols and applications such as TCP (transport layer) and DASH-based video streaming over HTTP (application layer) are themselves adaptive to congestion, delay, and overall observed performance. New protocols with specific requirements are invented all the time (think IoT, Virtual Reality). Interfering with their control loops through network traffic management may yield bad performance, suboptimal user experience, and higher cost overall.

The idea of enabling effective capacity sharing through a productive cooperation between operator policy decision making and dynamic application/user resource utilization has driven the work in the IETF ConEx working group. Based on earlier work by Bob Briscoe on Re-Feedback, the ConEx WG has defined concepts and (experimental) mechanisms for congestion exposure, enabling a form of capacity sharing that incentivizes senders to respond to congestion signals while still providing operators with hooks for auditing and enforcing correct behavior.
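As a sketch of the kind of bulk traffic management this enables (the interface and parameters below are illustrative, not part of the ConEx specifications): a policer can meter congestion-bytes rather than bytes, filling a per-user token bucket at the user's congestion allowance and draining it for every congestion-marked byte, so heavy senders are only throttled to the extent that they actually contribute to congestion:

```python
import time

class CongestionPolicer:
    """Token bucket over congestion-bytes, not bytes: tokens accrue at the
    user's congestion allowance; each byte of ConEx-marked (i.e.,
    congestion-contributing) traffic drains the bucket. Senders causing
    little congestion are never limited, however much they send."""
    def __init__(self, allowance_bytes_per_s: float, burst: float):
        self.rate = allowance_bytes_per_s
        self.burst = burst
        self.tokens = burst
        self.last = time.monotonic()

    def allow(self, packet_len: int, conex_marked: bool) -> bool:
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if not conex_marked:
            return True          # only congestion-marked traffic is policed
        if self.tokens >= packet_len:
            self.tokens -= packet_len
            return True
        return False             # allowance exhausted: drop or deprioritize
```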

RFC 7778 describes how the ConEx mechanisms can be applied to current LTE (EPS) networks, considering their specifics regarding QoS and network architecture. For example, RFC 7778 describes how ConEx can

  • enable or enhance flow policy-based traffic management;
  • reduce the need for complex DPI by allowing for a bulk packet traffic management system that does not have to consider either the application classes flows belong to or the individual sessions; and how it can
  • be used to more effectively trigger the offload of selected traffic to a non-3GPP network.

More experiments with ConEx and related capacity sharing mechanisms are needed, but the questions behind ConEx remain important for 5G (and beyond): how to achieve effective collaboration between networks and their users (senders and receivers), considering the increased need for capacity sharing, the increased demand for user privacy (connection encryption), and the Internet's permissionless-innovation feature, i.e., not expecting the network to know all possible application classes and their traffic management requirements.

Written by dkutscher

March 18th, 2016 at 1:21 pm

Posted in IETF, Publications

Managing Radio Networks in an Encrypted World


I attended last week's IAB/GSMA Workshop on Managing Radio Networks in an Encrypted World (MaRNEW).

The motivation for this workshop was the increasing use of transport-layer end-to-end encryption in major web applications such as Google services, YouTube, Netflix, and Facebook. This trend will likely continue with the further deployment of HTTP/2, for which client implementations today attempt to set up TLS connections by default.

In mobile networks, traffic management as well as additional services and functions have traditionally relied on being able to leverage knowledge about application types and application specifics. Examples of such functions include policing/prioritization, optimized scheduling, caching, and filtering, but also tracking, ad insertion, etc. In addition to functions that operators want to apply, there are also regulatory requirements (depending on local legislation) for filtering, legal interception, etc. that would become more difficult in the presence of ubiquitous encryption.

At the MaRNEW workshop, leading experts from network operators, vendors, application service providers, CDN providers and academic institutions discussed the impact of ubiquitous encryption as well as ideas for enabling an effective collaboration between the network, applications and users to enable optimal performance and resource efficiency.

In particular, the workshop addressed the following topics:

  • Understanding the bandwidth optimization use cases particular to radio networks;
  • Understanding existing approaches and how these do not work with encrypted traffic;
  • Understanding reasons why the Internet has not standardised support for legal interception and why mobile networks have;
  • Determining how to match traffic types with bandwidth optimization methods;
  • Discussing minimal information to be shared to manage networks but ensure user security and privacy;
  • Developing new bandwidth optimization techniques and protocols within these new constraints;
  • Discussing the appropriate network layer(s) for each management function; and
  • Cooperative methods of bandwidth optimization and issues associated with these.

Encryption: Technological and Business Aspects

It is not a secret that there are different aspects to discussing end-to-end encryption in public networks. Obviously, encryption helps with user privacy, and against the background of recent and current revelations of privacy breaches through pervasive monitoring, it has become common agreement that more (easily deployable) encryption would be useful to overcome this.

There is, however, also the business perspective: the Internet, and specifically the ecosystem of mobile communication and service provision, has multiple stakeholders, each with their own particular interests. Network operators want to provide a useful service in an economical way and may have an interest in enhancing the overall service quality through various technical measures. Application service providers want their particular service to perform well over a range of different networks. Network equipment vendors have their product roadmaps and network architecture preferences, etc.

Finally, there are the actual users of the system, who have an interest in good quality of experience, cost efficiency -- and privacy. Privacy is not only a concern with respect to (illegal) pervasive monitoring by agencies, but also with respect to maintaining anonymity and confidentiality towards network and service providers. For many applications, user profiles, user-generated data, etc. are also a key business asset -- so there is a strong interest by different players to either get access to that data or (depending on the nature of a player) to keep other players from accessing it -- through encryption.

The MaRNEW workshop focused on the technological discussion.

Impact of Encryption

During the discussion, the following main impacts of ubiquitous encryption on mobile networks were identified:

  • Traditional ways of identifying and classifying network traffic (DPI) become more costly and potentially infeasible.
  • Traditional traffic management systems have relied on such classification for different purposes: optimizing resource usage in access networks according to operator policies, forwarding traffic through optimizers, caches, etc., as well as filtering. These approaches and the actual requirements behind them need to be revisited.
  • Content and service provisioning in both mobile and fixed networks today relies heavily on CDNs and in-network application functions. In addition, new approaches such as Mobile Edge Computing may shift more of such functions to access networks. The motivation is to provide better performance and cost efficiency by offloading networks (CDN cache hits) and by reducing latency and improving transport protocol performance (local control loops, reduced RTT to caches). Introducing more and more end-to-end encryption makes it impossible for operators to provide any application- (or CDN-provider-) independent optimization functions. The alternative of running individual instances for each CDN provider does not seem promising. It could also be a major roadblock for future network and application innovation -- because each of those individual functions might require upgrading to introduce in-network support for it.

Way Forward

[Figure: Cooperative traffic management (Copyright 2015 NEC)]

At the workshop, different solutions were discussed.

  • First, it was agreed that the actual impact needs to be understood better and ought to be quantified. For example, assuming that some knowledge about application types (or corresponding service quality expectations) could be leveraged by base stations for more efficient transmission scheduling (e.g., by delaying packets of non-latency-sensitive flows or by operating multiple queues for different flow types), networks should at least be able to obtain corresponding hints from senders. However, the actual impact and potential benefits have to be demonstrated. Operators will work on that issue.
  • The (Internet) transport protocol community has made significant progress in recent years on several fronts: Active Queue Management (AQM) such as fq_codel and PIE have been demonstrated to be able to improve load balancing and reduce latency in router queues. Moreover, transport protocol research has led to promising results (for example PCC -- Performance-oriented Congestion Control). It was suggested that those mechanisms should be implemented and deployed where possible.
  • Several options for Cooperative Traffic Management were discussed. For example, this could include exchanging certain information between the network and senders/receivers. The network could inform endpoints better about congestion and non-congestion-induced problems (for example, in an extended ECN fashion), or endpoints could inform the network about relevant meta information (application type, QoS requirements, etc.). The latter could leverage existing technologies such as DiffServ. Potentially, it would be sufficient to distinguish delay-sensitive flows (e.g., for interactive real-time applications) from delay-tolerant flows (file download etc.); see the sketch after this list. One interesting question is how endpoints would be incentivized to use such signaling correctly and what corresponding APIs would look like.
  • Overcoming the general limitations of connection-based security and its tendency to require application-specific (or CDN-provider-specific) in-network functions could require a more fundamental rethinking of network architecture and protocol layering. For example, Information-Centric Networking (ICN) would leverage object security (authentication, encryption), enabling the network to implement functions such as caching and local transport strategies in an application-agnostic manner. This could be of particular relevance for 5G networks, where a higher level of dynamicity in the creation and deployment of new OTT services is expected.
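As a sketch of the two-class idea mentioned above (queue sizes and the signaling hook are assumptions for illustration, not a standardized mechanism), a scheduler needs only a one-bit hint from endpoints to separate delay-sensitive from delay-tolerant traffic:

```python
from collections import deque

class TwoClassScheduler:
    """Strict-priority scheduler with a short queue for flows that declared
    themselves latency-sensitive (e.g., via a DiffServ-like hint) and a long
    queue for delay-tolerant bulk traffic. The short queue bounds queuing
    delay for interactive flows; bulk flows trade latency for throughput."""
    def __init__(self, rt_limit=32, bulk_limit=1024):
        self.rt = deque(maxlen=rt_limit)
        self.bulk = deque(maxlen=bulk_limit)

    def enqueue(self, pkt: bytes, latency_sensitive: bool) -> bool:
        q = self.rt if latency_sensitive else self.bulk
        if len(q) == q.maxlen:
            return False   # tail drop; a real deployment would use AQM (CoDel/PIE)
        q.append(pkt)
        return True

    def dequeue(self):
        if self.rt:
            return self.rt.popleft()   # latency-sensitive traffic first
        return self.bulk.popleft() if self.bulk else None
```

Note how little the network needs to know here: no application identity, no flow inspection -- just the endpoint's declared delay preference, which also narrows the trust question to whether that single bit is set honestly.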

For the discussion of such solutions, I (together with several colleagues) have made two contributions to the workshop: 1) Enabling Traffic Management without DPI, and 2) Maintaining Efficiency and Privacy in Mobile Networks through Information-Centric Networking.

Enabling Traffic Management without DPI

Is DPI really needed for traffic management in mobile networks? Our position is "no". Traffic management is usually realized through relatively simple mechanisms like rate shaping, prioritization, and dropping packets. Compared to these mechanisms, the semantics of applications that can be exposed through DPI are much richer; traffic classification maps these semantics down to a simple set of categories anyway.

The question then arises whether operators are really helped by brittle, insecure, and expensive mechanisms for extracting high-fidelity application information that is then reduced to coarse traffic classes, or whether simple signaling would suffice for traffic classification for mobile network management purposes.

Obviously, when relying on endpoints to signal information about the underlying application that may be used to change the network's treatment of that application's traffic, questions of trust arise: how can the network be sure the endpoints are being honest, and prevent endpoints from gaming the system to their advantage (and the disadvantage of others)? Can these signaling approaches be used as an attack vector? Here, the approach is to define the vocabulary of the signaling protocol to properly incentivize honest cooperation, while allowing the network to verify this cooperation.

We discuss two application-independent approaches for traffic management that are based on network-compatible metrics: ConEx Policing and low latency support with SPUD.


Congestion Exposure (ConEx) is a mechanism that enables senders to inform the network about previously encountered congestion in their flows, thus enabling senders and network infrastructure to respond to congestion based on operator policies. This information is provided in the IP header and can still be accessed even if the payload is encrypted. ConEx information is auditable by comparing the congestion level at the network egress to the ConEx signal, which incentivizes the sender to state its congestion contribution correctly.

Using ConEx would allow for a bulk packet traffic management system that does not have to consider application classes. Instead, with ConEx, accurate downstream path information on incipient congestion is visible to ingress network operators. This information can be used to base traffic management on the actual current cost (i.e., each flow's contribution to congestion) and enables operators to apply congestion-based policing/accounting depending on their preference and independent of application characteristics. Such traffic management would be simpler and more robust (no real-time identification of flow application types, no static configuration of application classes) and would provide better performance, as decisions can be taken based on the real cost contribution at each point in time.

The Substrate Protocol for User Datagrams (SPUD) is a new approach to selective information exposure designed to support transport evolution. SPUD is realized as a shim between UDP and an (encrypted) transport protocol. The basic SPUD protocol provides minimal sub-transport functionality by grouping packets into tubes and signaling the start and end of a tube.

This assists middleboxes in state setup and teardown along the path. Further, SPUD provides an extensible signaling mechanism based on a type-value encoding for associating properties with individual packets or with all packets in a tube. The SPUD protocol can be used to signal low-latency requirements from an endpoint to the network, or to expose the existence of support for such services from the network to the endpoint. We therefore propose four SPUD signals: a latency-sensitivity flag, a signal to yield to another tube, an application preference for a maximum single-queue delay, and a facility to discover the maximum possible single-queue length along the path.

Based on the latency-sensitivity flag, a network operator can implement an additional service (compared to today's best-effort service) that uses smaller queues and/or different AQM parameters, without changing the service that is provided today. Signaling of lower queue priority or of a maximum single-hop delay can further be used to preferentially drop packets of the same sender or within one flow. Information about expected queuing delays on the path can be used for buffer configuration at the endpoints.
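A minimal sketch of what such tube-scoped signaling could look like on the wire -- the type codes and header layout below are invented for illustration; SPUD's actual encoding was an evolving proposal and was never finalized:

```python
import struct

# Illustrative type codes for the four proposed signals (not SPUD's real encoding).
LATENCY_SENSITIVE = 0x01   # flag: this tube prefers low delay over throughput
YIELD_TO_TUBE     = 0x02   # value: id of the tube to yield to
MAX_QUEUE_DELAY   = 0x03   # value: max acceptable single-queue delay (ms)
QUEUE_LEN_PROBE   = 0x04   # request: discover max single-queue length on path

def encode_signal(tube_id: int, sig_type: int, value: int = 0) -> bytes:
    """Shim header carried inside UDP, ahead of the encrypted transport:
    the network can read the declared properties without touching payload."""
    return struct.pack("!QBI", tube_id, sig_type, value)

def decode_signal(wire: bytes):
    return struct.unpack("!QBI", wire)

# An endpoint declares a 20 ms single-queue delay preference for its tube.
pkt = encode_signal(tube_id=0x1A2B, sig_type=MAX_QUEUE_DELAY, value=20)
assert decode_signal(pkt) == (0x1A2B, MAX_QUEUE_DELAY, 20)
```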

The proposal is not intended as a blueprint for immediate implementation -- but it demonstrates how cooperative traffic management could be implemented. In our view, cooperative traffic management requires a solid understanding of the interactions with the transport layer and the corresponding performance impacts and improvements.

Maintaining Efficiency and Privacy in Mobile Networks through Information-Centric Networking

We present a solution to overcome the impasse of deploying confidentiality at the cost of breaking most current traffic engineering in mobile networks. Our proposition is based on Information-Centric Networking (ICN), a data-centric network architecture that gracefully incorporates security and traffic optimization.

Content-based rather than connection-based security is the foundation of the Information-Centric Networking (ICN) architecture. In ICN, the network provides a service that directly implements the desired information-access abstraction: the network forwards requests for named data and corresponding responses containing the data. The name can be cryptographically bound to the data to ascertain authenticity. This enables the network to replicate data objects in arbitrary locations, thus enabling ubiquitous caching. Object data can also be encrypted for user privacy, leaving other network-relevant information such as the name intact -- thus maintaining options for traffic management, policing, etc. The performance gains of having ICN in the mobile backhaul have been evaluated experimentally (see paper). ICN incorporates these ideas into a novel network layer, providing all of the mentioned objectives without resorting to man-in-the-middle-like solutions.

ICN secures data itself by requiring producers to cryptographically sign every data packet: the signature constitutes the integrity metadata. The data is uniquely identified by a name that is bound to the data via the signature. The producer's public key for signature verification can be obtained via the KeyLocator field, which can be the name of the data object containing the producer's key. Authentication is implemented via the producer's key, which makes use of a trust model, e.g., PKI or Web-of-Trust, and can be extended using key chaining to delegate trust to different sub-namespaces (for hierarchical naming). Confidentiality is obtained by encrypting the data payload using the producer's key. Note that authenticity, integrity, and confidentiality are independent features.
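A minimal sketch of this packet-level signing model, using Ed25519 from the Python cryptography library (the field names follow the description above, but the packet structure itself is illustrative, not the NDN/CCNx wire format):

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

producer_key = Ed25519PrivateKey.generate()

def make_data_packet(name: str, content: bytes, key_locator: str) -> dict:
    """The signature covers name + content, binding them together; the
    KeyLocator names the data object holding the producer's public key."""
    return {
        "name": name,
        "content": content,   # could independently be encrypted for privacy
        "key_locator": key_locator,
        "signature": producer_key.sign(name.encode() + content),
    }

def verify(pkt: dict, public_key) -> bool:
    """Anyone holding the producer's public key can verify the packet,
    no matter where it was fetched from: origin, cache, or another consumer."""
    try:
        public_key.verify(pkt["signature"],
                          pkt["name"].encode() + pkt["content"])
        return True
    except InvalidSignature:
        return False

pkt = make_data_packet("/sensors/temp/42", b"21.5 C", "/producers/lab/KEY")
assert verify(pkt, producer_key.public_key())
```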

Once data is published by the producer, it can be stored in any location without affecting its security properties, which are location independent. Inter-networking of encrypted data is included by design in ICN, and in-network caching is always possible, with or without confidentiality. Authenticity might not be necessary in many cases, so authentication of the producer's identity is optional. It is not mandatory either to verify the integrity of the data by verifying the signature. It is important to remark that ICN disentangles authenticity, privacy, and integrity so that they can be handled in different ways and without the involvement of end hosts.

TLS provides web security by encrypting a layer-4 connection between two hosts. Authenticity is provided by certification authorities and a public key infrastructure to authenticate the web server, together with a symmetric cipher between the two endpoints based on a negotiated key. In the presence of TLS, many networking operations become infeasible: filtering, caching, acceleration, transcoding.

ICN takes a radically different approach to guaranteeing confidentiality, authenticity, and integrity by embedding them into a redefined network layer. Indeed, ICN builds on the abstraction of data being requested, accessed, cached, and forwarded by name: the network forwards requests coming from the consumer for named data and routes data packets back on the identical reverse path (symmetric routing).

The ICN communication model allows network nodes between a web server and a web client to operate as forwarding and storage functions, implementing various inter-networking functionalities like caching or load balancing without relaxing any security feature. As a fully fledged data-centric network architecture, ICN incorporates mobility, storage, security, and multi-point communication by design.

Written by dkutscher

September 28th, 2015 at 12:49 am

Open Source Carrier Networking


Open Source Software development models are changing the way the telco industry creates products and systems. This presentation at ONS-2015 discusses how innovation, agile development, and Open Source Software are linked together. It presents experience with transforming telco vendor development from closed to open source and provides an outlook on future activities in the NFV space.

Talk Info (Presentation available on request)

Written by dkutscher

June 18th, 2015 at 11:22 pm

Posted in Talks


The Next Step of OpenStack Evolution for NFV Deployments


Chris Wright and I presented on "The Next Step of OpenStack Evolution for NFV Deployments" at last week's OpenStack Summit in Vancouver.


Presentation at OpenStack Summit

NFV is now a well-known concept and is in an early deployment stage, leveraging and adapting OpenStack and other Open Source Software systems. In the OPNFV project, a large group of industry peers is building a carrier-grade, integrated, open source reference platform for the NFV community. The telco industry has successfully adopted Open Source Software for carrier-grade deployments. It is now time to take the next steps and extend the collaboration with upstream projects -- by opening up previously proprietary developments and by contributing code and other artifacts, in order to create an ecosystem of NFV platforms, applications, and management/orchestration systems.

This presentation shares some insights on how Red Hat and NEC are working together to foster collaboration in the NFV ecosystem by actively working with OpenStack and other upstream projects.

NEC has pioneered the adoption of Linux, KVM, Open vSwitch, and OpenStack for their mobile network core product line (virtualized EPC) and has gained significant experience through development work and deployments. NEC's extensions for high efficiency and high availability have led to contributions of new features to OpenStack, such as DPDK vSwitch control and CPU allocation features. For NEC, it is very important to have those features integrated into the mainstream code base for building reliable infrastructure systems.

Red Hat, one of the main contributors to OpenStack, leads the development of those functions to meet NFV requirements in OpenStack, making critical and demanding applications run on top of open platforms. The presentation explains how NEC and Red Hat are integrating and optimizing the Red Hat Enterprise Linux OpenStack Platform for NFV, along with contributions to open source communities, including OpenStack and the Open Platform for NFV (OPNFV).

Written by dkutscher

May 26th, 2015 at 11:25 pm

Posted in Talks


Scalable Content Exchange in Challenged ICNs


I presented GreenICN work on Scalable Content Exchange in Challenged ICNs at CCNxCon-2015 this week.

Download: ccnxcon2015-kutscher.pdf

Abstract:
The principles of Information-Centric Networking (ICN) -- accessing data objects by name (not by location address), securing data objects (not connections), and in-network caching (for sharing, repair, rate adaptation) -- make ICN attractive for a wide range of application scenarios beyond traditional data center or telco access network scenarios. In fact, one of the first instantiations of ICN was developed based on Delay-Tolerant Networking (DTN) technologies.

Currently, ICN/DTN is considered a promising approach for enabling and enhancing communication in disaster scenarios. In such scenarios, so-called ICN data mules (that carry and disseminate data items) may move randomly and exchange data items each time they encounter one another. We envision that in such a scenario, where there is no connectivity, data mules (e.g., vehicles or drones) move around randomly. These mobile routers interact with end users, working base stations, and other data mules to fetch and deliver data and queries. Thus, we do not consider ad hoc networks, where a path to the destination can be built reactively or proactively, but rather a DTN-like scenario.

Consider a large-scale disaster scenario like the earthquake in Japan in 2011, where people in different parts of a city are stranded without Internet connectivity, but there are some zones where base stations are still working and providing connectivity. Essentially, ICN data mules move randomly across a geographic area and, when meeting end users, receive interests from them and forward corresponding data items to them (if present in the content store/cache of the data mule). At the same time, when data mules encounter each other, they forward to each other certain end-user interests and/or data items (according to a predefined rule set and algorithm), such that interests and data items can be forwarded in a hop-by-hop DTN fashion. One research problem in such a scenario is how to optimize these data exchanges among data mules for optimal data dissemination (e.g., maximizing how many desired messages reach their recipients within a given timeframe with a given forwarding strategy, assuming that data mules only have limited time at each encounter to exchange messages).
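A toy model of the decision made at a single encounter is sketched below; the prioritization rule (answer the peer's pending interests first, then use any remaining contact budget for epidemic spreading) is one simple strategy for illustration, not the algorithm evaluated in the talk:

```python
def exchange(store_a, store_b, pending_a, pending_b, budget):
    """One encounter between two data mules with time for at most `budget`
    item transfers in total. Items answering the peer's pending interests
    are transferred first; leftovers are spread epidemically."""
    def ordered(src, dst, dst_interests):
        new = [n for n in src if n not in dst]                     # skip known items
        return sorted(new, key=lambda n: n not in dst_interests)   # wanted first

    for name in ordered(store_a, store_b, pending_b):
        if budget == 0:
            return
        store_b[name] = store_a[name]
        pending_b.discard(name)          # interest satisfied
        budget -= 1
    for name in ordered(store_b, store_a, pending_a):
        if budget == 0:
            return
        store_a[name] = store_b[name]
        pending_a.discard(name)
        budget -= 1

mule1 = {"/disaster/shelters/map": b"...", "/misc/cat-video": b"..."}
mule2 = {}
pending2 = {"/disaster/shelters/map"}
exchange(mule1, mule2, set(), pending2, budget=1)
assert "/disaster/shelters/map" in mule2 and not pending2
```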

Written by dkutscher

May 21st, 2015 at 4:57 pm

Posted in Talks


URIs for Named Information


URIs [RFC3986] are used in various protocols for identifying resources. In many deployments, those URIs contain strings that are hash function outputs, used to ensure uniqueness in terms of mapping the URI to a specific resource, or to make URIs hard to guess for security reasons. However, there is no standard way to interpret those strings, so today, in general, only the creator of the URI knows how to use the hash function output.

In the context of information-centric networking and elsewhere there is value in being able to compare a presented resource against the URI that was de-referenced in order to access that resource. If a cryptographically-strong comparison function can be used then this allows for many forms of in-network storage, without requiring as much trust in the infrastructure used to present the resource. The outputs of hash functions can be used in this manner, if presented in a standard way. There are also many other potential uses for these hash outputs, for example, in terms of binding the URI to an owner via signatures and public keys, mapping between names, handling versioning etc. Many such uses can be based on "wrapping" the object with meta-data, e.g. including signatures, public key certificates etc.

We therefore define the "ni" URI scheme that allows for, but does not insist upon, checking of the integrity of the URI/resource mapping.

The "ni" URI scheme is specified in draft-farrell-ni-00

Written by dkutscher

April 19th, 2011 at 2:04 pm