Dirk Kutscher

Personal web page

Archive for the ‘ietf’ tag

Towards a Unified Transport Protocol for In-Network Computing in Support of RPC-based Applications


The emerging term In-Network Computing (INC) [inc] refers in particular to applying on-path programmable networking devices (e.g., switches and routers between clients and servers) as accelerators or function offloaders to boost throughput, reduce server load, or improve latency, typically in a well-controlled data center network environment.

Some INC implementations evolved from programmable data plane systems and align with the broader trend of network programmability. In recent years, INC has been shown to support many promising applications (e.g., caching, aggregation, and agreement). For example, in distributed machine learning (DML), training nodes produce data (gradients) that needs to be aggregated or reduced, and the result may be distributed to one or multiple consumers. As another example, the NetClone system [netclone] uses in-network forwarders to replicate RPC invocation messages and to perform more informed forwarding based on observed latencies, thereby accelerating RPC communication.

While it is possible to achieve this kind of operation purely with end-to-end communication between worker nodes, performance can be improved dramatically by offloading both the operation processing and the data dissemination to nodes in the network. These in-network processors are often conceived as semi-transparent, performance-enhancing on-path elements, i.e., they are not the actual endpoints of transport protocol sessions; they intercept packets carrying application data and potentially generate new data that they then have to transmit.
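
As a minimal illustration of the offloaded operation (a Clojure sketch of the computation only, not any particular INC system's implementation), an on-path aggregation point combines the workers' gradient fragments and forwards a single result downstream instead of one flow per worker:

```clojure
;; Sketch: element-wise aggregation of worker gradients, as an on-path
;; aggregation point would compute it before forwarding one combined
;; result instead of one flow per worker.
(defn aggregate-gradients
  [gradient-vectors]
  (apply mapv + gradient-vectors))

(aggregate-gradients [[0.1 0.2] [0.3 0.4] [0.5 0.6]])
;; => [0.9 1.2] (modulo floating-point rounding)
```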

In our Internet Draft draft-song-inc-transport-protocol-req-01.txt, we discuss this problem and formulate requirements for the design of future transport protocols in this space.


Written by dkutscher

January 25th, 2024 at 7:02 am

IETF Datatracker Document Metadata Processing


I have created two tools for fetching and formatting metadata for IETF documents (RFCs and Internet Drafts). I sometimes want to create publication lists or just reference IETF documents in other publications, and these tools are intended to automate the process as much as possible.

  1. tracker-doc: for fetching document metadata by user-id (datatracker ID)
  2. bibdoc: for formatting document metadata in text or bibtex format

These are two Clojure scripts that are executed by Babashka – a native Clojure interpreter for scripting.
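For illustration, here is a minimal Babashka sketch along these lines. It is not the actual tracker-doc code: it assumes a recent Babashka (with the built-in babashka.http-client and cheshire libraries) and the public Datatracker API endpoint /api/v1/doc/document/; the printed fields are just examples.

```clojure
#!/usr/bin/env bb
;; Minimal sketch: fetch metadata for one document from the public
;; IETF Datatracker API and print a couple of fields.
(require '[babashka.http-client :as http]
         '[cheshire.core :as json])

(defn fetch-doc
  "Look up Datatracker metadata for a document by its name."
  [doc-name]
  (-> (http/get "https://datatracker.ietf.org/api/v1/doc/document/"
                {:query-params {"name" doc-name "format" "json"}})
      :body
      (json/parse-string true)
      :objects
      first))

(let [doc (fetch-doc "draft-oran-icnrg-reflexive-forwarding")]
  (println (:title doc) "-" (:rev doc)))
```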

Install: datatracker-publications on GitHub.

Written by dkutscher

January 5th, 2024 at 7:47 pm

DINRG @ IETF-118


We have posted the agenda for our DINRG meeting at IETF-118.


Logistics

DINRG Meeting at IETF-118 – 2023-11-06, 08:30 to 10:30 UTC

Written by dkutscher

November 1st, 2023 at 9:21 am

Posted in Events, IETF, IRTF


ICNRG @ IETF-118


Written by dkutscher

October 30th, 2023 at 1:10 pm

Posted in Events, ICN, IRTF


Collective Communication: Better Network Abstractions for AI


We have submitted two new Internet Drafts on Collective Communication:

  1. Kehan Yao, Xu Shiping, Yizhou Li, Hongyi Huang, Dirk Kutscher; Collective Communication Optimization: Problem Statement and Use cases; Internet Draft draft-yao-tsvwg-cco-problem-statement-and-usecases-00; work in progress; October 2023

  2. Kehan Yao, Xu Shiping, Yizhou Li, Hongyi Huang, Dirk Kutscher; Collective Communication Optimization: Requirement and Analysis; Internet Draft draft-yao-tsvwg-cco-requirement-and-analysis-00; work in progress; October 2023

Collective Communication refers to communication among a group of processes in distributed computing contexts, involving interaction types such as broadcast, reduce, and all-reduce. This data-oriented communication model is employed by distributed machine learning and other data processing systems, such as stream processing. Current Internet network and transport protocols (and the corresponding transport layer security) make it difficult to support these interactions in the network, e.g., by aggregating data on topologically optimal nodes for better performance. These two drafts discuss use cases, problems, and initial ideas for requirements for future system and protocol design for Collective Communication. They will be discussed at IETF-118.
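
As a toy illustration of these interaction types, reduced to pure functions over in-memory values (no network involved; the function names are mine, not from the drafts):

```clojure
;; Toy versions of the collective primitives named above.
(defn broadcast [value n]            ;; one value to all n processes
  (vec (repeat n value)))

(defn reduce-all [f contributions]   ;; combine all contributions at one root
  (reduce f contributions))

(defn all-reduce [f contributions]   ;; reduce, then hand the result to everyone
  (broadcast (reduce-all f contributions) (count contributions)))

(all-reduce + [1 2 3 4]) ;; => [10 10 10 10]
```

In-network optimization then means computing the reduce step on topologically well-placed network nodes rather than on one of the endpoints.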

Written by dkutscher

October 30th, 2023 at 8:03 am

IRTF Decentralization of the Internet Research Group at IETF-117


Recent years have witnessed the consolidation of Internet applications, services, and infrastructure. The Decentralization of the Internet Research Group (DINRG) aims to provide the research and engineering community with both an open forum to discuss the Internet centralization phenomena and associated potential threats, and a platform to facilitate the coordination of efforts in identifying the causes of observed consolidation and possible mitigations.

Our upcoming DINRG meeting at IETF-117 will feature three talks – by Cory Doctorow, Volker Stocker & William Lehr, and Christian Tschudin.

  1. DINRG Chairs’ Presentation: Status, Updates (Chairs, 5 min)
  2. Let The Platforms Burn: Bringing Back the Good Fire of the Old Internet (Cory Doctorow, 30 min)
  3. Ecosystem Evolution and Digital Infrastructure Policy Challenges: Insights & Reflections from an Economics Perspective (Volker Stocker & William Lehr, 20 min)
  4. Minimal Global Broadcast (MGB) (Christian Tschudin, 20 min)
  5. Wrap-up & Buffer (All, 15 min)


Logistics

DINRG Meeting at IETF-117 – 2023-07-25, 20:00 to 21:30 UTC

IETF-117 Agenda

Written by dkutscher

July 17th, 2023 at 5:44 pm

Posted in Events, IETF, IRTF


Information-Centric Networking Research Group at IETF-113 Summary


The Information-Centric Networking Research Group (ICNRG) of the Internet Research Task Force (IRTF) met during the 113th meeting of the Internet Engineering Task Force (IETF) that took place in Vienna from March 19th to March 25th 2022. IETF-113 was the IETF's first hybrid meeting with onsite and remote participants.

Presentation material and minutes are available online, and there is a full recording on YouTube. I am summarizing the meeting below.

Edmund Yeh: NDN for Data-Intensive Science Experiments

Edmund Yeh (Northeastern University) presented the NSF-funded project NDN for Data-Intensive Science Experiments (N-DISE), a two-year interdisciplinary project with participants from Northeastern, Caltech, UCLA, and Tennessee Tech that collaborates with the Large Hadron Collider (LHC) community, genomics researchers, and the NDN project team.

N-DISE is building a data-centric ecosystem to provide agile, integrated, interoperable, scalable, robust, and trustworthy solutions for heterogeneous data-intensive domains, supporting very data-intensive science applications through an NDN-based communication and data sharing infrastructure. The LHC high-energy physics program represents the leading target use case, but the project is also looking at BioGenome and other human genome projects as future use cases.

In many data-intensive science applications, data needs to be distributed in real time, archived, retrieved by multiple consumers, etc. Within one data center, but even more so in geographically distributed scenarios, this can lead to a significant amount of duplicated transmissions with legacy system architectures. N-DISE leverages general ICN features and concepts such as location-independent data naming, on-path caching, and explicit replication through data repos to dramatically improve the efficiency, but also to reduce the complexity, of such data management systems and their applications.

The general approach of the N-DISE project is to leverage recent results in high-speed NDN networking such as ndn-dpdk to build a data science support infrastructure for petascale distribution, which involves research in high-throughput forwarding/caching, the definition of container-based node architectures, FPGA acceleration subsystems, and SDN control. The goal is to deliver LHC data over wide area networks at throughputs of approximately 100 Gbps and to dramatically decrease download times by using optimized caching.

From an NDN perspective, the project provides several interesting lines of work:

  • Deployment architectures (how to build efficient container-based N-DISE nodes);
  • WAN Testbed creation and throughput testing;
  • Optimized caching and forwarding;
  • Congestion control and multi-path forwarding; and
  • FPGA acceleration.

There are several interesting ideas and connections to ongoing ICN research in N-DISE. For example, as people start building applications for high-speed data sharing but also distributed computing, the question of container-based ICN node architectures arises, i.e., how to enable easy cloud-native deployment of such systems without compromising too much on performance.

Another interesting aspect is congestion control in multipath forwarding scenarios. Existing technologies such as Multipath TCP and Multipath QUIC are somewhat limited with respect to their ability to use multipath resources in the network efficiently. In ICN, with its different forwarding model, multipath forwarding decisions could be made hop-by-hop, and consumers (receiving endpoints) could be given greater control over path selection.

Cenk Gündoğan: Alternative Delta Time Encoding for CCNx Using Compact Floating-Point Arithmetic

Cenk Gündoğan of HAW Hamburg presented an update of draft-gundogan-icnrg-ccnx-timetlv, a proposal for an alternative logarithmic encoding of time values in ICN (CCNx) messages.

The motivation for this work lies in constrained networking, where header compression as per RFC 9139 (ICNLoWPAN) would be applied and a more compact time encoding would be desirable. The proposed approach allows for a compact encoding with dynamic range (as in floating-point arithmetic), but imposes challenges with respect to backwards compatibility.
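
The idea can be illustrated with a toy encoder. The field widths here (a 4-bit exponent and a 4-bit mantissa in one byte) are assumptions for illustration, not the draft's actual layout:

```clojure
;; Toy logarithmic time encoding: pack a millisecond value into one byte
;; as a 4-bit exponent and a 4-bit mantissa. Precision degrades
;; gracefully as values grow; values needing an exponent > 15
;; (>= 2^19 ms) would not fit into this toy format.
(defn encode-ms [ms]
  (let [bits (- 64 (Long/numberOfLeadingZeros (max ms 1)))
        e    (max 0 (- bits 4))]
    (bit-or (bit-shift-left e 4)
            (bit-and (bit-shift-right ms e) 0x0f))))

(defn decode-ms [b]
  (bit-shift-left (bit-and b 0x0f) (bit-shift-right b 4)))

(decode-ms (encode-ms 10))    ;; => 10 (exact for small values)
(decode-ms (encode-ms 60000)) ;; => 57344 (coarse for large values)
```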

ICNRG is considering adopting this work as a research group item to find the best way to update the current CCNx specifications in light of these questions.

Dave Oran: Ping & Traceroute Update

Dave Oran presented the recent updates to two specifications: ICN Ping and ICN Traceroute.

In IP, fundamental and very useful tools such as ping and traceroute were created years after the architecture and protocol definitions. In ICN there is an opportunity to leverage tooling at an earlier phase – but also to reason about needed tools and useful features.

ICN Ping provides the ability to ascertain reachability of names, which includes

  • to test the reachability and operational state of an ICN forwarder;
  • to test the reachability of a producer or a data repository;
  • to test whether a specific named object is cached in some on-path CS, and, if so, return the administrative name of the corresponding forwarder; and
  • to perform some simple network performance measurements.

ICN Traceroute provides the ability to ascertain characteristics (transit forwarders and delays) of at least one of the available routes to a name prefix, which includes

  • to trace one or more paths towards an ICN forwarder (for troubleshooting purposes);
  • to trace one or more paths along which a named data object of an application can be reached;
  • to test whether a specific named object is cached in some on-path CS, and, if so, trace the path towards it and return the identity of the corresponding forwarder; and
  • to perform transit delay network measurements.

Both drafts completed Research Group Last Call in January 2022 and evoked some feedback that has now been addressed (see presentation for details). ICNRG will transfer these drafts to IRSG review and subsequent steps in the IRTF review and publication process soon.

Dave Oran: Path Steering Refresher

Dave Oran presented a refresher of a previously presented specification of Path Steering in ICN (draft-oran-icnrg-pathsteering). Path Steering is a mechanism to discover paths to the producers of ICN content objects and steer subsequent Interest messages along a previously discovered path. It has various uses, including the operation of state-of-the-art multipath congestion control algorithms and for network measurement and management.

In ICN, communication is inherently multipath and potentially multi-destination. But so far there has been no mechanism for consumers to direct Interest traffic onto a specific path: forwarding strategies in ICN forwarders can spray Interests onto various paths; as a result, consumers have a hard time interpreting failures and performance glitches, and troubleshooting and performance tools lack the path visibility and control needed to find problems and do simple measurements.

ICN Path Steering would enable

  • discovering, monitoring, and troubleshooting multipath network connectivity based on names and name prefixes (e.g., ping and traceroute);
  • accurately measuring the performance of a specific network path;
  • multipath congestion control, which needs to estimate or count the number of available paths, reliably identify a path, and allocate traffic to each path; and
  • traffic engineering and SDN, e.g., externally programmable end-to-end paths for data center and service provider networks.

Briefly, Path Steering works by using a Path Label (as an extension to existing protocol formats, see figure) for discovering and for specifying selected paths.
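
To make the mechanism concrete, here is a toy model in Clojure. The draft defines a compact wire encoding for the path label; this sketch simply records per-hop next-hop choices, and all names are illustrative:

```clojure
;; Toy path steering: a discovery pass records the next hop chosen at
;; each forwarder into a path label; a steered Interest then consumes
;; one label entry per hop instead of letting the strategy choose.
(defn discover-path
  "Follow each node's first (strategy-preferred) face to the producer,
   accumulating the chosen hops into a path label."
  [topology from producer]
  (loop [node from, label []]
    (if (= node producer)
      label
      (let [next-hop (first (topology node))]
        (recur next-hop (conj label next-hop))))))

(defn steer-one-hop
  "At a forwarder, pop this hop's entry off the Interest's path label."
  [interest]
  (let [[next-hop & remaining] (:path-label interest)]
    (assoc interest :next-hop next-hop :path-label (vec remaining))))

(def topo {:consumer [:r1], :r1 [:r2 :r3], :r2 [:producer], :r3 [:producer]})
(discover-path topo :consumer :producer) ;; => [:r1 :r2 :producer]
```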

The technology would give consumers much more visibility and greater control of multipath usage and could be useful for many applications, especially those that want to leverage path diversity, for example high-volume file transfers, robust communication in dynamically changing networks, and distributed computing.

Dirk Kutscher: Reflexive Forwarding Re-Design

Dave Oran and I recently re-designed a scheme that we call Reflexive Forwarding and that is specified in draft-oran-icnrg-reflexive-forwarding.

Current Information-Centric Networking protocols such as CCNx and NDN have a wide range of useful applications in content retrieval and other scenarios that depend only on a robust two-way exchange in the form of a request and response (represented by an Interest-Data exchange in the case of the two protocols noted above).

A number of important applications, however, require placing large amounts of data in the Interest message and/or more than one two-way handshake. While these can be accomplished using independent Interest-Data exchanges by reversing the roles of consumer and producer, such approaches can be both clumsy for applications and problematic from a state management, congestion control, or security standpoint.

This specification proposes a Reflexive Forwarding extension to the CCNx and NDN protocol architectures that eliminates the problems inherent in using independent Interest-Data exchanges for such applications. It updates RFC8569 and RFC8609.

Example: RESTful communication over ICN

In today's HTTP deployments, requests such as HTTP GET requests are conceptually stateless, but in fact they carry a lot of information that allows servers to process them correctly. This includes regular header fields and cookies, but also input parameters (form data etc.), so requests can become very large (sometimes larger than the corresponding result messages).

It is generally not a good idea to build client-server systems that require servers to parse and process a lot of client-supplied input data, as this could easily be exploited for computational overload attacks.

In ICN, moreover, Interest messages should not be used to carry a lot of "client" parameters, as this could lead to issues with respect to flow balance (congestion control schemes in ICN should work based on DATA message volume and rate); it would also force forwarders to store large Interest messages and could potentially even lead to Interest fragmentation, a highly undesirable consequence.

Reflexive Forwarding aims at providing a robust, ICN-idiomatic way to transfer "input parameters" by enabling the "server side" to fetch parameters using regular ICN communication (Interest/Data). In doing so, we do not want to give up important ICN properties such as not requiring consumers (i.e., the "clients") to reveal their source address – a useful feature for enabling easy consumer mobility and some form of privacy.

Reflexive Forwarding Design

Our Reflexive Forwarding scheme addresses this by letting the consumer announce a temporary, non-globally-routable prefix to the network and the producer, which allows the producer to get back to the consumer through Reflexive Interests and to fetch the required input parameters at its own discretion. The figure above depicts the high-level protocol operation.

Our new design leverages temporary PIT (Pending Interest Table) state in forwarders and PIT tokens (hop-by-hop protocol fields in NDN and CCNx) that allow forwarders to map Reflexive Interests to the PIT entries of the actual Interest and thus to forward the Reflexive Interests correctly on the reverse path.
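
A toy model of the forwarder state involved may help; the field names and the reflexive prefix format below are illustrative, not the draft's encoding:

```clojure
(require '[clojure.string :as str])

;; Toy PIT with reflexive state: the consumer's Interest installs an
;; entry that also records the temporary reflexive prefix, so a later
;; Reflexive Interest under that prefix can be mapped back to the
;; consumer-facing face -- no routable consumer address is needed.
(defn on-interest [pit {:keys [name reflexive-prefix in-face]}]
  (assoc pit name {:in-face in-face :reflexive-prefix reflexive-prefix}))

(defn reverse-face-for-reflexive [pit reflexive-name]
  (some (fn [[_ {:keys [in-face reflexive-prefix]}]]
          (when (and reflexive-prefix
                     (str/starts-with? reflexive-name reflexive-prefix))
            in-face))
        pit))

(-> {}
    (on-interest {:name "/service/op"
                  :reflexive-prefix "/rnp/1234"
                  :in-face :face-7})
    (reverse-face-for-reflexive "/rnp/1234/params/0"))
;; => :face-7
```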

Potential Applications

Potential applications include

  • RESTful communication, e.g., Web over ICN;
  • Remote Method Invocation;
  • Phone-home scenarios; and
  • Peer state synchronization.

For example, we used a previous design of this scheme in our paper RICE: Remote Method Invocation in ICN, which leveraged Reflexive Forwarding for the invocation and input parameter transmission, as depicted in the figure above.

Reflexive Forwarding requires only relatively benign changes to ICN forwarder and endpoint behavior but could enable many relevant use cases in an ICN-idiomatic way, without large layering overhead and without giving up important ICN properties.

Written by dkutscher

April 1st, 2022 at 2:36 pm

Posted in IRTF


Addressing in the Internet


There was a side meeting on Internet Addressing at IETF-112 this week, discussing potential gaps in Internet Addressing and potential use cases that would suggest new addressing structures.

Looking at the realities of the Internet today, I do not think that the actually relevant use cases and current issues in the Internet are served well by just a new addressing approach for the Internet Protocol. Instead, I believe that there needs to be an architectural discussion first – and addressing might eventually fall out as a result.

My slides for the panel discussion.

Written by dkutscher

November 11th, 2021 at 2:22 pm

Posted in IETF, Posts


cwik: an Interactive and Extensible Clojure Framework for Automating QUIC Interop Tests and Performance Measurements


QUIC is the next transport protocol for the Internet that is currently being standardized by the IETF. My student Thomas Ripping at University of Applied Sciences Emden/Leer has developed cwik, an open, interactive and extensible Clojure framework for automating QUIC interop tests and performance measurements, based on Peter Doornbosch’s Kwik library. It can be used for defining test campaigns programmatically and can enable reproducible experiments and automatic evaluation. The source code is available on github.com.

Written by dkutscher

December 6th, 2019 at 8:33 pm

Posted in Code


Great Expectations


Protocol Design and Socioeconomic Realities


(PDF version)

The Internet & Web as a whole qualify as wildly successful technologies, each empowered by wildly successful protocols per RFC 5218's definition [1]. As the Internet & Web became critical infrastructure and business platforms, most of the originally articulated design goals and features such as global reach, permissionless innovation, accessibility, etc. [5] got overshadowed by the trade-offs that they incur. For example, global reach — intended to enable global connectivity — can also imply global reach for infiltration, regime change, and infrastructure attacks by state actors. Permissionless innovation — motivated by the intention to overcome the lack of innovation options in traditional telephone networks — has also led us to permissionless surveillance and mass-manipulation-based business models that have been characterized as detrimental from a societal perspective.

Most of these developments cannot be directly ascribed to Internet technologies alone. For example, most user surveillance and data extraction technologies are actually based on web protocol mechanisms and particular web protocol design decisions. While it has been documented that some of these technology and standards developments have been motivated by particular economic interests [2], it is unclear whether different Internet design decisions could have led to a different, "better" outcome. Fundamentally, economic drivers in different societies (and on a global scale) cannot be controlled through technology and standards development alone.

This memo is thus focused on specific protocol design and evolution questions, in particular on how technical design decisions relate to socio-economic effects. It aims at providing input for future design discussions, leveraging experience from 50 years of Internet evolution, 30 years of Web evolution, observations of economic realities, and years of Future Internet research.

IP Service Model

The IP service model was designed to provide a minimal layer over different link layer technologies to enable inter-networking at low implementation cost [3]. Starting off as an experiment looking for feasible initial deployment strategies, this was clearly a reasonable approach. The IP service model of packet-switched, end-to-end, best-effort communication between hosts (host interfaces) over a network of networks was implemented by:

  • an addressing scheme that allows specifying source and destination host (interface) addresses in a topologically structured address space; and
  • minimal per-hop behavior (stateless forwarding of individual packets).

The minimal model implied punting many functions to other layers, encapsulation, and/or "management" services (transport, dealing with names, security). Multicast was not excluded by the architecture, but it was also not very well supported, so IP Multicast (and the required inter-domain multicast routing protocols) did not find much deployment outside well-controlled local domains (for example, telco IPTV).

The resulting system of end-to-end transport over a minimal packet forwarding service has served many applications and system implementations. However, over time, technical application requirements as well as business requirements have led to additional infrastructure, extensions, and new ways of using Internet technologies, for example:

  • in-network transport performance optimization to provide better control loop localization in mobile networks;
  • massive CDN infrastructure to provide more scalable popular content distribution;
  • (need for) access control, authorization based on IP and transport layer identifiers;
  • user-tracking based on IP and transport layer identifiers; and
  • usage of DNS for localization, destination rewriting, and user tracking.

It can be argued that some of these approaches and developments have also led to some of the centralization/consolidation issues that are discussed today – especially with respect to CDNs, which are essentially inevitable for any large-scale content distribution (both static and live content). Looking at the original designs, the later-understood commercial needs, and the outcome today, one could ask: how would a different Internet service model and different network capabilities affect the tussle balance [5] between the different actors and interests in the Internet?

For example, a more powerful forwarding service with more elaborate (and more complex) per-hop behavior could employ (soft-)stateful forwarding, enabling certain forms of in-network congestion control. Some form of caching could help make services such as local retransmissions and data sharing at the edge a network service function, removing the need for some middleboxes.

Other systems such as the NDN/CCNx variants of ICN employ the principle of accessing named data in the network, where each packet must be requested by INTEREST messages that are visible to forwarders. Forwarders can aggregate INTERESTs for the same data, and in conjunction with in-network storage, this can implement an implicit multicast distribution service for near-simultaneous transmissions.
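
That aggregation behavior is simple enough to sketch as forwarder state (a toy model, no wire protocol):

```clojure
;; Toy Interest aggregation: a second Interest for a name that is
;; already pending only adds its incoming face to the PIT entry; the
;; arriving Data is then replicated to every recorded face, yielding an
;; implicit multicast for near-simultaneous requests.
(defn on-interest [pit name face]
  (update pit name (fnil conj #{}) face))

(defn on-data [pit name]
  [(get pit name #{})   ;; faces to copy the Data to
   (dissoc pit name)])  ;; the PIT entry is consumed

(-> {}
    (on-interest "/video/seg1" :face-1)
    (on-interest "/video/seg1" :face-2)
    (on-data "/video/seg1"))
;; => [#{:face-1 :face-2} {}]
```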

In ICN, receiver-driven operation could eliminate certain DoS attack vectors, and the lack of source addresses (due to stateful forwarding) could provide some form of anonymity. The use of expressive, possibly application-relevant names could enable better visibility by the network — however, potentially enabling both more robust access control and (on the negative side) more effective hooks for censoring communication and monitoring user traffic.

This short discussion alone illustrates how certain design decisions can play out in the real world later and how even small changes in architecture and protocol mechanisms can shift the tussle balance between actors, possibly in unintended ways. As Clark argued in [3], it is important to understand the corresponding effects of architectural changes, let alone of bigger redesign efforts.

The Internet's design choices were motivated by certain requirements that were valid at the time — but they may not all still hold today. Today's networking platforms are by far more powerful and more programmable. The main applications are totally different, as are the business players and the governance structures. This process of change may continue in the future, which adds another level of difficulty to any change of architecture elements and core protocols. However, this does not mean that we should not try.

Network Address Translation

Network Address Translation (NAT) has been criticized for impeding transport layer innovation, adding brittleness, and delaying IPv6 adoption. At the same time, NAT was deemed necessary for growing the Internet ecosystem and for enabling local network extensions at the edge without administrative configuration. It also provides a limited form of protection against certain types of attacks. As such, it addressed shortcomings of the system.

The implicit client-initiated port forwarding (the technical reason for the limited attack protection mentioned above) obviously blocks both unwanted and wanted communication, which makes it difficult to run servers at homes, enterprise sites, etc. in a sound way (manual configuration of port forwarding still comes with limitations). This, however, could be seen as one of the drivers for the centralization of servers in data centers ("cloud") that is a concern in some discussions today [4].

What does this mean for assessing and potentially evolving previous design decisions? The NAT use cases and their technical realization are connected to several trade-offs that impose non-trivial challenges for potential architecture and protocol evolution: 1) Easy extensibility at the edge vs. scalable routing; 2) Threat protection vs. decentralized nature of the system; 3) Interoperability vs. transport innovation.

In a positive light, use cases such as local communication and dynamic Internet extension at the edge (with the associated security challenges) represent interesting requirements that can help find the right balance in the design space for future network designs.

Encryption

Pervasive monitoring is an attack [7], and it is important to assess existing protocol and security frameworks with respect to changes in the way the Internet is being used by corporations and state-level actors, and to develop new protocols where needed. QUIC encrypts transport headers in addition to application data, intending to make user tracking and other monitoring attacks harder to mount.

Economically, however, the more important use case of user tracking today is the systematic surveillance of individuals on the web, i.e., through a massive network of tracking, aggregation, and analytics entities [6]. Ubiquitous encryption of transport and application protocols does not prevent this at all — on the contrary, it makes it more difficult to detect, analyze, and, where needed, prevent user tracking. This does not render connection encryption useless (especially since surveillance in the network and on web platforms complement each other through aggregation and commercial trading of personally identifying information, PII), but it requires a careful consideration of the trade-offs.

For example, perfect protection against on-path monitoring is only effective if it covers the complete path between a user agent and the corresponding application server. This shifts the tussle balance between confidentiality and network control (enterprise firewalls, parental controls, etc.) significantly. Specifically for QUIC, which is intended to run in user space, i.e., without the potential for OS control, users may end up in situations where they have to trust the application service providers (who typically control the client side as well, through apps or browsers, as well as parts of the CDN and network infrastructure) to transfer information without leaking PII irresponsibly.

If the Snowden revelations led to a better understanding of the nature and scope of pervasive monitoring and to best current practices for Internet protocol design, what is the adequate response to the continuous revelations of the workings and extent of the surveillance industry? What protocol mechanisms and APIs should we develop, and what should we rather avoid?

DNS encryption is another example that illustrates the trade-offs. Unencrypted DNS (especially with the EDNS0 client subnet option, depending on prefix length and network topology) can increase the risk of privacy violations through on-path/intermediary monitoring.

DNS encryption can counter certain on-path monitoring attacks — but it could effectively make the privacy situation for users worse if it is implemented by centralizing servers (so that application service providers, in addition to tracking user behaviour for one application, can now also monitor DNS communication for all applications). This has been recognized in current proposals, e.g., by limiting the scope of DNS encryption to stub-to-resolver communication. While this can be enforced by architectural oversight in standards development, we do not yet know how to enforce it in actual implementations, for example for DNS over QUIC.

Future Challenges: In-Network Computing

Recent advances in platform virtualization, link layer technologies, and data plane programmability have led to a growing set of use cases where computation near users or data-consuming applications is needed — for example, for addressing minimal latency requirements of compute-intensive interactive applications (networked Augmented Reality, AR), for addressing privacy sensitivity (avoiding raw data copies outside a perimeter by processing data locally), and for speeding up distributed computation by putting computation at convenient places in a network topology.

In-network computing has mainly been pursued in four variants so far: 1) Active Networking, adapting the per-hop behavior of network elements with respect to packets in flows; 2) Edge Computing as an extension of virtual-machine (VM) based platform-as-a-service to access networks; 3) programming the data plane of SDN switches (leveraging powerful programmable switch CPUs and programming abstractions such as P4); and 4) application-layer data processing frameworks.

Active Networking has not found much deployment due to its problematic security properties and complexity. Programmable data planes can be used in data centers with uniform infrastructure, good control over the infrastructure, and the feasibility of centralized control over function placement and scheduling. Due to the still limited, packet-based programmability model, most applications today are point solutions that can demonstrate benefits for particular optimizations — however, often without addressing the transport protocol services or data security that would be required for most applications running on shared infrastructure today.

Edge Computing (just like traditional cloud computing) has a fairly coarse-grained (VM-based) computation model and hence typically deploys centralized positioning/scheduling through virtual infrastructure management (VIM) systems. Application-layer data processing frameworks such as Apache Flink, on the other hand, provide attractive dataflow programming models for event-based stream processing and lightweight fault-tolerance mechanisms — however, systems such as Flink are not designed for dynamic scheduling of compute functions.

Ongoing research efforts (for example, in the proposed IRTF COIN RG) have started exploring this space and the potential role that future network and transport layer protocols can play. Is it feasible to integrate networking and computing beyond overlays? What would be a minimal service (like IP today) that has the potential for broad reach, permissionless innovation, and evolution paths that avoid early ossification?

Conclusions

Although the impact of Internet technology design decisions may be smaller than we would like to think, it is nevertheless important to assess the trade-offs of the past and the potential socio-economic effects that different decisions could have in the future. One challenge is the depth of the stack and the interactions across the stack (e.g., the perspective of CDNs addressing shortcomings of the IP service layer, or the perspective of NAT and centralization). The applicability of new technology proposals therefore needs a far more thorough analysis — beyond proof-of-concepts and performance evaluations.

References

[1] D. Thaler, B. Aboba; What Makes for a Successful Protocol?; RFC 5218; July 2008

[2] S. Greenstein; How The Internet Became Commercial; Princeton University Press; 2017

[3] David Clark; Designing an Internet; MIT Press; October 2018

[4] Jari Arkko et al.; Considerations on Internet Consolidation and the Internet Architecture; Internet Draft https://tools.ietf.org/html/draft-arkko-iab-internet-consolidation-01; March 2019

[5] Internet Society; Internet Invariants: What Really Matters; https://www.internetsociety.org/internet-invariants-what-really-matters/; February 2012

[6] Shoshana Zuboff; The Age of Surveillance Capitalism; PublicAffairs; 2019

[7] Stephen Farrell, Hannes Tschofenig; Pervasive Monitoring is an Attack; RFC 7258; May 2014

Change Log

  • 2019-06-07: fixed several typos and added clarification regarding EDNS0 client subnet (thanks to Dave Plonka)

Written by dkutscher

June 4th, 2019 at 11:30 am

Posted in Blogroll, Posts
