Dirk Kutscher

Personal web page

Archive for the ‘Events’ Category

IRTF ICNRG@IETF-119


The Information-Centric Networking Research Group (ICNRG) of the Internet Research Task Force (IRTF) met at IETF-119 in Brisbane. Here is my quick summary of the meeting:

Agenda:

1 ICNRG Chairs' Presentation: Status, Updates (Chairs)
2 Secure Web Objects and Transactions (Dirk Kutscher)
3 Transaction Manifests (Marc Mosko)
4 Vanadium: Secure, Distributed Applications (Marc Mosko)
5 Global vs. Scoped Namespaces (Marc Mosko)


Meeting material:

ICNRG Status

ICNRG recently published four new RFCs – a great achievement by all involved authors and the whole group!

See my blog posting for a more detailed description.

Secure Web Objects and Transactions

One focus of this meeting was transactions in ICN, i.e., interactions intended to achieve some durable state change at a remote peer – which poses challenges in a system that is designed around accessing named data.

In my presentation I talked about different ways to realize transactions in ICN:

  1. ICN as a network layer
    • Client-server communication between two nodes
    • Implement transaction semantics on top of an ICN messaging service
  2. Recording state changes in shared data structures
    • Shared namespace, potentially functioning as a transaction ledger
    • Still need to think about atomicity, etc.

For 1), transactions as messaging over ICN networks, the following considerations apply:

  • Client-server communication between two nodes
  • Implement transaction semantics on top of an ICN messaging service
  • Different approaches:
    • A: Traditional layering: using NDN-like systems as a messaging layer
      • Assign prefixes to clients & servers
      • Send messages back and forth, and implement reliability and transaction semantics on top
    • B: ICN-native communication: using Interest-Data as a request-response abstraction for transactions
      • Map transaction communication and state evolution more directly to ICN, e.g., Interest-Data in NDN
      • Collapse traditional network, transport, and application layer functions

I mainly talked about variant 1B, ICN-native communication: using Interest-Data as a request-response abstraction for transactions, and introduced the idea of "Secure Web Objects" (SWOs) for a data-oriented web as a motivation.

In such a system, not everything would be about accessing named data objects – there is also a need for "client/server" state evolution, e.g., for online banking and similar use cases.
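As a toy illustration of variant 1B, here is a minimal sketch of an online-banking-style transaction expressed as a single Interest/Data exchange (a plain-Python, in-process simulation; the name layout and message types are invented for illustration and are not an actual NDN/CCNx API):

```python
# Sketch: a transaction as one Interest/Data exchange (variant 1B).
# Pure-Python simulation; names and message formats are hypothetical.

from dataclasses import dataclass

@dataclass(frozen=True)
class Interest:
    name: str                 # e.g. "/bank/alice/transfer/tx-42/30"

@dataclass(frozen=True)
class Data:
    name: str
    content: str

class TransactionServer:
    """Executes each transaction at most once, keyed by transaction ID."""
    def __init__(self):
        self.balance = 100
        self.completed = {}   # txid -> result Data; doubles as a reply cache

    def on_interest(self, interest: Interest) -> Data:
        _, _, _, op, txid, amount = interest.name.split("/")
        if txid in self.completed:        # retransmitted Interest:
            return self.completed[txid]   # replay result, don't re-execute
        assert op == "transfer"
        self.balance -= int(amount)       # the durable state change
        result = Data(interest.name, f"ok, balance={self.balance}")
        self.completed[txid] = result
        return result

server = TransactionServer()
req = Interest("/bank/alice/transfer/tx-42/30")
print(server.on_interest(req).content)    # ok, balance=70
print(server.on_interest(req).content)    # retransmission: still balance=70
```

The point of the sketch is that the request name itself carries the transaction, and the server keeps a result cache keyed by transaction ID, so that a retransmitted Interest (a network-layer event) does not re-execute the state change (an application-layer concern) – exactly the kind of layer collapsing mentioned above.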

I introduced some ideas on RESTful ICN that we published in an earlier paper. The RESTful ICN proposal leverages Reflexive Forwarding for robust client-server communication and integrates elements of CCNx key exchange for security context setup and session resumption.

Summarizing, I wanted to initiate a discussion about how to realize transactions in information-centric systems. This discussion is not about mapping ICN to existing protocols, such as HTTP, but about actual distributed computing semantics, i.e., robust session setup and state evolution. Transactions with ICN-native communication are hard to provide with basic Interest/Data alone. Reflexive Forwarding + CCNx Key Exchange + transaction semantics are an attempt to provide such a service in a mostly ICN-idiomatic way, with the downside that Reflexive Forwarding needs extensions to forwarders. This raises questions on the minimal feature set of core ICN protocols and on how to deal with extensions.

In the discussion, it was pointed out that lots of experience on distributed systems has shown that transactions or secure multi-interactions will generally require more than a single two-way exchange.

Others suggested that in ICN/NDN, authentication is carried out when a signed Interest arrives, which directly proves the requester's authenticity, so that authentication would in fact be done beforehand.

However, authentication may not be enough. For example, client authorization in client-server communication is a critical function that needs to be carefully designed in real-world networks. Forcing a server to do signature verification on initial request arrival has been shown in prior systems (e.g., TCP+TLS) to represent a serious computational DoS attack risk. Reflexive Forwarding in RICE tries to avoid exactly that problem by enabling the server to iteratively authenticate and authorize clients before committing computing resources.
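This pattern, deferring expensive verification until a client has proven reachability, can be sketched as follows (a minimal sketch assuming an invented cookie-challenge message flow, not the actual RICE/Reflexive Forwarding protocol):

```python
# Sketch: defer expensive work until the client completes a cheap round
# trip. Inspired by the RICE argument above; message fields are invented.

import hmac, hashlib, os

SECRET = os.urandom(16)                  # server-local key, no per-client state

def make_cookie(client_id):
    return hmac.new(SECRET, client_id.encode(), hashlib.sha256).hexdigest()

def handle_request(client_id, cookie, verify_signature, execute):
    if cookie is None:
        # Phase 1: cheap, stateless challenge; no signature check yet.
        return ("retry-with-cookie", make_cookie(client_id))
    if not hmac.compare_digest(cookie, make_cookie(client_id)):
        return ("reject", None)
    if not verify_signature():           # expensive check, now worth doing
        return ("reject", None)
    return ("ok", execute())             # only now commit server resources

status, cookie = handle_request("alice", None, lambda: True, lambda: "done")
print(status)                                                         # retry-with-cookie
print(handle_request("alice", cookie, lambda: True, lambda: "done"))  # ('ok', 'done')
```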

It was also said that whenever a protocol does authentication, you need to analyze it in the context of specific examples, and that one cannot only look at the problem at an abstract level.

Transaction Manifests

Marc Mosko presented another approach to transactions in ICN, called Transaction Manifests (slides: https://datatracker.ietf.org/meeting/119/materials/slides-119-icnrg-transaction-manifests-00). He explained that ICN can be transactional.

Typically, ICN is considered a publish/subscribe or pre-publishing named-data approach. Outside ICN, distributed transactions do exist, especially in DLTs: consider, for example, a permissioned DLT of size N with K << N bookkeepers, where the bookkeepers base their decisions on the block hash history. In this talk, Marc discussed what an equivalent function in ICN would be and introduced the notion of transaction manifests.

In ICN, there is a technology called FLIC (File-Like ICN Collections), i.e., manifests for static objects. A FLIC manifest describes a single object that is re-constructed by traversing the manifest in order. In Marc's proposal, a transaction manifest describes a set of names that must be considered together. The transaction manifest names would likely point to FLIC root manifests.


In Marc's example, transaction manifest entries point directly to objects. For a complete system, you would also need a set of bookkeepers, e.g., as in systems like Hyperledger, which offers global ordering via bespoke orderer nodes. Such bookkeepers would have to ensure that a transaction has current pre-conditions, current post-conditions, and no conflicts in post-conditions. Transaction manifests are a form of write-ahead log (WAL), as used in databases such as PostgreSQL.
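A small sketch of the bookkeeper check described above, assuming an invented manifest layout with name-to-hash pre- and post-conditions (not Marc's actual format):

```python
# Sketch: a bookkeeper validating a transaction manifest against its
# current view of the namespace. Layout invented for illustration.

current = {"/acct/alice": "h1", "/acct/bob": "h7"}     # name -> object hash

def apply(manifest):
    # Pre-conditions: every named object must still have the expected hash.
    for name, expected in manifest["pre"].items():
        if current.get(name) != expected:
            return False                  # stale or conflicting transaction
    # No conflicts: commit all post-conditions together (all-or-nothing),
    # in the spirit of a write-ahead log entry.
    current.update(manifest["post"])
    return True

txm = {
    "pre":  {"/acct/alice": "h1", "/acct/bob": "h7"},
    "post": {"/acct/alice": "h2", "/acct/bob": "h8"},  # e.g. FLIC root manifests
}
print(apply(txm))   # True: committed atomically
print(apply(txm))   # False: pre-conditions no longer hold
```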

Marc went on discussing a few challenges, such as interactions with repositories and caches, as well as distributed transaction manifests.

There was some discussion on the required ordering properties for this approach, i.e., whether, in a multi-bookkeeper system, livelocks and deadlocks could occur – and whether these could be resolved without requiring a total order.

Marc is continuing to work on this. One of the next steps would be to design client-to-bookkeeper and bookkeeper-to-bookkeeper protocols.

Vanadium: Secure, Distributed Applications

Marc Mosko introduced the Vanadium system, a secure, distributed RPC system based on distributed naming and discovery. Vanadium uses symmetric authentication and encryption and may use private name discovery with Identity-Based Encryption (IBE).

Vanadium has two parts:

  1. Principals and Blessings and Caveats (Security)
    • Use a hierarchical name, e.g. alice:home:tv.
    • Certificate based
    • Blessings are scoped delegations from one principal to another for a namespace (e.g. alice grants Bob “watch” permissions to the TV)
    • Caveats are restrictions on delegations (e.g. Bob can only watch 6pm – 9pm).
    • 3rd party caveats must be discharged before authorization
    • E.g. revocations or auditing
  2. The RPC mount tables (Object Naming)
    • These describe how to locate RPC namespaces
    • They provide relative naming

Vanadium is interesting because parts of its design resemble some ICN concepts, especially the security part:

  • It uses prefix matching and encryption
  • Namespaces work like groups
  • The colon : separates the blesser from the blessed
  • Authorizations match extensions (see the sketch after this list).
    • If Alice authorized “read” on alice:hometv to alice:houseguests, and if Bob has a blessing for alice:houseguests:bob, then Bob has “read” on alice:hometv.
  • A special terminator :$ only matches the exact prefix.
    • A blessing to alice:houseguest:$ only matches that exact prefix.
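A possible reading of these matching rules in code (a simplified illustration, not Vanadium's implementation):

```python
# Sketch of Vanadium-style blessing prefix matching, including the ":$"
# terminator, as described above.

def blessing_matches(pattern, blessing):
    if pattern.endswith(":$"):                  # exact match only
        return blessing == pattern[:-2]
    return blessing == pattern or blessing.startswith(pattern + ":")

# Alice grants "read" on alice:hometv to alice:houseguests;
# Bob holds a blessing for alice:houseguests:bob.
print(blessing_matches("alice:houseguests", "alice:houseguests:bob"))    # True
print(blessing_matches("alice:houseguests:$", "alice:houseguests:bob"))  # False
print(blessing_matches("alice:houseguests:$", "alice:houseguests"))      # True
```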

Marc then explained the object naming structure and the entity resolution in Vanadium.

More details can be found in Marc's presentation and on Vanadium's web page.

In summary, Vanadium is a permissioned RPC service. A Vanadium name encodes the endpoint plus a name suffix. The endpoint does not need to resolve to a single mount table server; it could be any server that possesses an appropriate blessing. Authentication is done via pair-wise key exchange and blessing validation. It can be private if using IBE; otherwise the server name leaks. Authorizations, Blessings, and Caveats use hierarchical, prefix-matching names.

From an ICN perspective, the security approach seems interesting: Blessings, Caveats, discharges, and namespaces acting as groups. One question is how this differs from SDSI co-signings. The Vanadium identity service provides an interesting mapping of OAuth2 app:email tokens to PKI and blessings. The RPC approach exhibits some differences from ICN, e.g., embedding the endpoint identifier in the name. Related ICN technologies in this context are public-key-scoped names in CCNx and schematized trust anchors in NDN.

In the discussion, it was noted that it would be interesting to do an apples-to-apples comparison with the NDN trust schema approach; Vanadium's approach, with the ability to create blessings and caveats on demand, seems to be much more granular and dynamic.

Global vs. Scoped Namespaces

Marc Mosko discussed global vs. scoped namespaces. For example, how do you know that the key you are looking at is the key that you should be looking at? IPFS punts that to out-of-band mechanisms. CCNx, on the other hand, uses public-key-scoped names: you can put a public key or publisher ID in an Interest and say that you only want this name if it is signed with the associated key.
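The CCNx mechanism can be sketched as follows (a simplified illustration with a hash-based key ID standing in for real signature validation; not the CCNx wire format):

```python
# Sketch: CCNx-style public-key scoped name matching. A requester only
# accepts a data object for a name if it was signed by the key whose ID
# is carried in the request.

import hashlib
from dataclasses import dataclass

def keyid(pubkey):
    return hashlib.sha256(pubkey).hexdigest()

@dataclass
class DataObject:
    name: str
    payload: bytes
    signer_pubkey: bytes     # in real CCNx: validated via the signature

def satisfies(request_name, key_restriction, obj):
    return obj.name == request_name and keyid(obj.signer_pubkey) == key_restriction

alice_key, mallory_key = b"alice-public-key", b"mallory-public-key"
obj_good = DataObject("/movies/title", b"...", alice_key)
obj_bad  = DataObject("/movies/title", b"...", mallory_key)

want = keyid(alice_key)                             # carried in the Interest
print(satisfies("/movies/title", want, obj_good))   # True
print(satisfies("/movies/title", want, obj_bad))    # False
```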

It was suggested to re-visit some of the concepts in the RPC system of OSF distributed computing, where all namespaces were scoped and name discovery starts out as local. You could then "attach" a local namespace to a more global namespace via an explicit "graft" operation. The key here was that the authoritative pointers representing the namespace graph went from child to parent, as opposed to parent to child as in systems like DNS. Your local trust root identifier could become a name in a higher-layer space, yielding a trust root higher in the hierarchy that could be used instead of, or in addition to, your local trust root. Doing this can create progressively more global namespaces out of local ones.
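A minimal sketch of the grafting idea, using an invented data model in which each local trust root keeps an authoritative pointer to its parent:

```python
# Sketch: child-to-parent namespace pointers, as described above.
# A local namespace attaches itself under a more global root later,
# making its names progressively more global.

parent = {}                      # local root -> (parent root, name under parent)

def graft(local_root, parent_root, name_in_parent):
    parent[local_root] = (parent_root, name_in_parent)

def global_name(local_root, local_name):
    labels, node = [local_name], local_root
    while node in parent:        # walk the child -> parent pointers upward
        node, label = parent[node]
        labels.insert(0, label)
    labels.insert(0, node)       # the most global trust root reached
    return "/".join(labels)

graft("lab-ns", "univ-ns", "lab42")         # attach the local root upward
graft("univ-ns", "edu-ns", "example-univ")  # ... and the university under "edu"
print(global_name("lab-ns", "printer"))     # edu-ns/example-univ/lab42/printer
```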

Please check out the meeting video for the complete discussion at the meeting.

Written by dkutscher

April 7th, 2024 at 3:41 pm

Posted in Events,ICN,IETF,IRTF


HKUST Internet Research Workshop 2024


On March 15, 2024, in the week before the IETF-119 meeting in Brisbane, Zili Meng and I organized the 1st HKUST Internet Research Workshop, which brought together researchers in computer networking and systems from around the globe for a live forum discussing innovative ideas at their early stages. The workshop took place at HKUST's Clear Water Bay campus in Hong Kong.

We ran the workshop like a “one-day Dagstuhl seminar”, focusing on discussion and idea exchange rather than conference-style presentations. The objective was to identify topics and connect like-minded people for potential future collaboration, which worked out really well.

The agenda was:

  1. Dirk Kutscher: Networking for Distributed ML
  2. Zili Meng: Overview of the Low-Latency Video Delivery Pipeline
  3. Jianfei He: The philosophy behind computer networking
  4. Carsten Bormann: Towards a device-infrastructure continuum in IoT and OT networks
  5. Zili Meng: Network Research – Academia, Industry, or Both?

Dirk Kutscher: Networking for Distributed ML

With the ever-increasing demand for compute power from large-scale machine learning training, we have started to realize not only that Moore's Law no longer addresses increasing performance demands automatically, but also that the growth rate in training FLOPs for transformers and other large-scale machine learning models exhibits far larger exponential factors.

This has been well illustrated by presentations in an AI data center side meeting at IETF-118, for example by Omer Shabtai, who talked about distributed training in data centers.

With increasing scale, communication over networks becomes a bottleneck, and the question arises what good system designs, protocols, and in-network support strategies could improve performance.

Current distributed machine learning systems typically use a technology called Collective Communication, which was developed as a Message Passing Interface (MPI) abstraction for high-performance computing (HPC). Collective Communication is the combination of standardized aggregation and reduction functions with communication abstractions, e.g., for "broadcasting" or "unicasting" results.

Collective Communication is implemented by a few popular libraries such as OpenMPI and Nvidia's NCCL. When used in IP networks, the communication is usually mapped to iterations of peer-to-peer interactions, e.g., organizing nodes in a ring and sending data for aggregation around such rings. One potential way to achieve better performance would be to perform the aggregation "in the network", as in HPC systems, e.g., using the Scalable Hierarchical Aggregation Protocol (SHArP). Previous work has attempted this with P4-based dataplane programming; however, such approaches are typically limited due to the mostly stateless operation of the corresponding network elements.
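For illustration, here is a minimal in-process simulation of the ring pattern mentioned above (reduce-scatter followed by all-gather, the structure behind NCCL/OpenMPI-style ring allreduce; no real networking, and not the libraries' actual code):

```python
def ring_allreduce(grads):
    """Sum vectors across n workers; grads[i] is worker i's vector."""
    n = len(grads)
    m = len(grads[0])
    assert m % n == 0, "vector must split into n equal chunks"
    csz = m // n
    def chunk(w, c):                       # copy of chunk c held by worker w
        return grads[w][c * csz:(c + 1) * csz]
    # Phase 1: reduce-scatter. In step s, worker i sends chunk (i - s) mod n
    # to its ring neighbor, which adds it. Snapshot sends first so that all
    # transfers within a step happen "simultaneously".
    for s in range(n - 1):
        sends = [(i, (i - s) % n, chunk(i, (i - s) % n)) for i in range(n)]
        for i, c, payload in sends:
            dst = (i + 1) % n
            for k, v in enumerate(payload):
                grads[dst][c * csz + k] += v
    # Now worker i holds the fully reduced chunk (i + 1) mod n.
    # Phase 2: all-gather. Circulate each reduced chunk around the ring.
    for s in range(n - 1):
        sends = [(i, (i + 1 - s) % n, chunk(i, (i + 1 - s) % n)) for i in range(n)]
        for i, c, payload in sends:
            grads[(i + 1) % n][c * csz:(c + 1) * csz] = payload
    return grads

print(ring_allreduce([[1.0, 2.0], [10.0, 20.0]]))
# -> [[11.0, 22.0], [11.0, 22.0]]: both workers hold the summed vector
```

Each worker only ever talks to its ring neighbor, which is what makes the pattern easy to map onto peer-to-peer transport connections – and also what in-network aggregation (e.g., SHArP) collapses into a single hierarchical reduction.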

In large-scale training sessions running over shared infrastructure in multi-tenant data centers, communication needs to respond to congestion, packet loss, server overload, etc., i.e., the features of typical transport protocols are needed.

I had previously discussed corresponding challenges and requirements in these Internet Drafts:

In my talk at HKIRW, I discussed ideas for corresponding transport protocols. There are interesting challenges in bringing together reliable communication, congestion control, flow control, single-destination as well as multi-destination communication, and in-network processing.

Zili Meng: Overview of the Low-Latency Video Delivery Pipeline

Zili talked about requirements for ultra-low latency in interactive streaming for the next generation of immersive applications. Some applications have really stringent low-latency requirements combined with a consistent service quality over many hours, and the talk suggested a better coordination between all elements of the streaming and rendering pipeline.

There was a discussion as to how achievable these requirements are in the Internet and whether applications might be re-designed to provide acceptable user experience even without a guaranteed high-bandwidth, low-latency service, for example by employing technologies such as semantic communication, prediction, local control loops, etc.

Jianfei He: The philosophy behind computer networking

In his talk, Jianfei He asked how the field of computer networking could be more precisely defined and how a more systematic approach could help with the understanding and design of future networked systems.

Specifically, he suggested basing design on a solid understanding of the potentials and absolute constraints in a given field, such as Shannon's theory/limit, and on the notion of tradeoffs, i.e., the consequences of certain design decisions, as represented by the CAP theorem in distributed systems. He mentioned two examples: 1) routing protocols and 2) transport protocols.

For routing protocols, there are well-known tradeoffs between convergence time, scaling limits, and required bandwidth. With changed network properties (bandwidth), can we reason about options for shifting the tradeoffs?

For transport protocols, there are goals such as reliability, congestion control, etc., and tradeoff relationships between packet loss, line utilization, delay, and buffer size. How would designs change if we changed the objective, e.g., to shortest flow completion time or shortest message completion time (or if we looked at collections of flows)? What if we added fairness to these objectives?

Jianfei asked whether it was possible to develop these tradeoffs/constraints into a more consistent theory.

Carsten Bormann: Towards a device-infrastructure continuum in IoT and OT networks

Carsten talked about requirements and available technologies for secure management of IoT devices in a device-infrastructure continuum in IoT and OT networks, where scale demands high degrees of run-time automation and only limited individual device configuration (at installation time only). It is no longer possible to manually track each new "Thing" species.


Carsten mentioned technologies such as

  • RFC 8520: Manufacturer Usage Description (MUD);
  • W3C Web of Things description model; and
  • IETF Semantic Definition Format (SDF).

In his talk, Carsten formulated the goal of "Well-Informed Networking", i.e., an approach where networks can obtain sufficient information about the existing devices, their legitimate communication requirements, and their current status (device health).

Zili Meng: Network Research – Academia, Industry, or Both?

Zili discussed the significance of the consistently high number of industry and industry-only papers at major networking conferences. Often such papers are based on operational experience that can only be obtained by companies actually operating the corresponding systems.

Sometimes papers seem to get accepted not necessarily on the basis of their technical merits but because they report on "large-scale deployments".

When academics get involved in such work, it is often not in a driving position, but rather through students who work as interns at the corresponding companies. Naturally, such papers do not question the status quo and are generally not critical of the systems they discuss.

At the workshop, we discussed the changes in the networking research field over the past years, as well as the challenges of successful collaborations between academia and industry.

Written by dkutscher

April 6th, 2024 at 10:55 am

DINRG @ IETF-118


We have posted the agenda for our DINRG meeting at IETF-118:

Documents

Logistics

DINRG Meeting at IETF-118 – 2023-11-06, 08:30 to 10:30 UTC

Written by dkutscher

November 1st, 2023 at 9:21 am

Posted in Events,IETF,IRTF


ICNRG @ IETF-118


Written by dkutscher

October 30th, 2023 at 1:10 pm

Posted in Events,ICN,IRTF


Seminar Talk: Accelerating Distributed Systems with In-Network Computation


In our invited talks series at HKUST(GZ), I am happy to be hosting Wenfei WU from Peking University on 2023-11-02, 14:00 CST, for his talk on Accelerating Distributed Systems with In-Network Computation.

Accelerating Distributed Systems with In-Network Computation

With Moore's Law slowing down, building distributed and heterogeneous systems becomes a new trend to support large-scale applications, such as large model training and big data analytics. In-Network Computing (INC) is an effective approach to building such distributed systems. INC leverages programmable network devices to process traversing data packets, and provides line-rate and low-latency data processing capabilities, which could compress traffic volume and accelerate the overall transmission and job efficiency. In this talk, we will share the progress and development of INC technologies, including INC protocol design for machine learning and data analytics, and RDMA-compatible INC solutions. These works are published in NSDI21 and ASPLOS23.

Wenfei WU

Wenfei Wu is an assistant professor in the School of Computer Science at Peking University. He obtained his Ph.D. degree from the University of Wisconsin-Madison in 2015. Dr. Wu researches computer networks and distributed systems and has published more than 50 papers in these areas. Dr. Wu's recent research focus is building in-network computation (INC) methods for distributed systems; his work on the INC-empowered distributed machine learning system ATP won the best paper award at NSDI 2021, and his work on the INC-empowered distributed data analytics system ASK won the distinguished paper award at ASPLOS 2023; Dr. Wu has won other awards such as the IPCCC best paper runner-up in 2019 and the SoCC best student paper award in 2013.

Online Participation

https://calendar.hkust.edu.hk/events/iot-thrust-seminar-accelerating-distributed-systems-network-computation

Written by dkutscher

October 23rd, 2023 at 8:14 am

Posted in Events


Network Abstractions for Continuous Innovation


In a joint panel at ACM ICN-2023 and IEEE ICNP-2023 in Reykjavik, Ken Calvert, Jim Kurose, Lixia Zhang, and I discussed future network abstractions. The panel was moderated by Dave Oran. This was one of the more interesting and interactive panel sessions I have participated in, so I am providing a summary here.

Since the Internet's initial rollout ~40 years ago, not only has its global connectivity brought fundamental changes to society and daily life, but its protocol suite and implementations have also gone through many iterations of change, with SDN, NFV, and programmability among other developments over the last decade. This panel looked into the next decade of network research by asking a set of questions about where the future directions lie to enable continued innovation.

Opportunities and Challenges for Future Network Innovations

Lixia Zhang: Rethinking Internet Architecture Fundamentals

Lixia Zhang (UCLA), quoting Einstein, said that the formulation of the problem is often more essential than the solution and pointed at the complexities of today's protocol stacks that are apparently needed to achieve desired functionality. For example, Lixia mentioned RFC 9298 on proxying UDP in HTTP, specifically on tunneling UDP to a server acting as a UDP-specific proxy over HTTP. UDP over IP was once conceived as a minimal message-oriented communication service intended for DNS and interactive real-time communication. Due to its push-based communication model, it can be used with minimal effort for useful but also harmful applications, including large-scale DDoS attacks. Proxying UDP over HTTP addresses this and other concerns by providing a secure channel to a server in a web context, so that the server can authorize tunnel endpoints and so that the UDP communication is congestion-controlled by the underlying transport protocol (TCP or QUIC). This specification can be seen as a work-around: sending unsolicited (and unauthenticated) messages over the Internet is a major problem in today's Internet. There is no general approach for authenticating such messages and no concept of trust in peer identities. Instead of analyzing the root cause of such problems, the Internet communities (and the dominant players in that space) prefer to come up with (highly inefficient) workarounds.

This problem was discussed more generally by Oliver Spatscheck of AT&T Labs in his 2013 article titled Layers of Success, where he discussed the (actually deployed) excessive layering in production networks, for example mobile communication networks, where regular Internet traffic is routinely tunneled over GTP/UDP/IP/MPLS:

The main issue with layering is that layers hide information from each other. We could see this as a benefit, because it reduces the complexities involved in adding more layers, thus reducing the cost of introducing more services. However, hiding information can lead to complex and dynamic layer interactions that hamper the end-to-end system’s reliability and are extremely difficult if not impossible to debug and operate. So, much of the savings achieved when introducing new services is being spent operating them reliably.

According to Lixia, the excessive layering stems from more fundamental problems with today's network architecture, notably the lack of identity and trust in the core Internet protocols and the lack of functionality in the forwarding system – leading to significant problems today, as exemplified by recent DDoS attacks. Quoting Einstein again, she said that we cannot solve problems by using the same kind of thinking we used when we created them, calling for a more fundamental redesign based on information-centric networking principles.

Ken Calvert: Domain-specific Networking

Ken Calvert (University of Kentucky) provided a retrospective of networking research and looked at selected papers published at the first IEEE ICNP conference in 1993. According to Ken, the dominant theme at that time was how to design, build, and analyze protocols, for example as discussed in his 1993 ICNP paper titled Beyond layering: modularity considerations for protocol architectures.

Ken offered a set of challenges and opportunities for future networking research, such as:

  • Domain-specific networking à la Ex uno pluria, a 2018 CCR editorial discussing:
    • infrastructure ossification;
    • lack of service innovation; and
    • a fragmentation into "ManyNets" that could re-create a service-infrastructure innovation cycle.
  • Incentives and "money flow"
    • Can we escape from the advertising-driven Internet app ecosystem? Should we?
  • Wide-area multicast (many-many) service
    • Building block for building distributed applications?
  • Inter-AS trust relationships
    • Ossification of the Inter-AS interface – cannot be solved by a protocol!
  • Impact ⇐ Applications ⇐ Business opportunities ($)
    • What user problem cannot be solved today?
  • "The core challenge of CS ... is a conceptual one, viz., what (abstract) mechanisms we can conceive without getting lost in the complexities of our own making." - Dijkstra

For his vision for networking in 30 years, Ken suggested that:

  • IP addresses will still be in use
    • but visible only at interfaces between different owners' infrastructures
  • Network infrastructure might consist of access ASes + separate core networks operated by the "Big Five".
  • Users might communicate via direct brain interfaces with AI systems.

Dirk Kutscher: Principled Approach to Network Programmability

I offered the perspective of a principled approach to network programmability that could enable better programmability (for humans and for AI), based on more powerful network abstractions.

Previous work on SDN with protocols such as OpenFlow and dataplane programming languages such as P4 has only scratched the surface of what could be possible. OpenFlow was a great first idea, but it was fundamentally constrained by the IP- and Ethernet-based abstractions that were built into it. It can be used for programming some applications in that domain, such as firewalls, virtual networking, etc., but the idea of continuous innovation has not really materialized.

Similarly, P4 was advertised as an enabler for new levels of dataplane programmability, but even simple systems such as NetCache have to go to quite some lengths to achieve minimal functionality for a proof of concept. Another often-reported P4 problem is hardware heterogeneity, which prevents truly universal programmability. In my opinion, this raises some questions with respect to the applicability of current dataplane programming to in-network computing. A good example of a more productive application of P4 is the recent SIGCOMM paper on NetClone, which describes fast, scalable, and dynamic request cloning for microsecond-scale RPCs. Here, P4 is used as an accelerator for programming relatively simple functionality (protocol parsing, forwarding).

This may not be enough for future universal programmability, though. During the panel discussion, I drew an analogy to computer programming languages. We are now seeing the first programming languages and IDEs that are designed from the ground up for better AI support. What would that mean for network programmability? What abstractions and APIs would we need?

In my opinion, we would have to take a step back and think about the intended functionality and the required observability for future (automated) network programmability that is really protocol-independent. This would then entail more work on:

  • the fundamental forwarding service (informed by hardware constraints);
  • the telemetry approach;
  • suitable protocol semantics;
  • APIs for applications and management; and
  • new network emulation & debugging approaches (along the lines of "network digital twin" concepts).

Overall, I am expecting exciting new research in the direction of principled approaches to network programmability.

Jim Kurose: Open Research Infrastructures and Softwarization

Jim reminded us that the key reason Internet research flourished was the availability of open infrastructure with no incumbent providers initially. The infrastructure was owned by researchers, labs, and universities and allowed for a lot of experimentation.

This open infrastructure has recently been challenged by ossification with the rise of production ISP services at scale and the emergence of closed ISPs, cellular carriers, and hyperscalers operating large portions of the network.

As an example of emerging environments that offer interesting opportunities for experiments and new developments, Jim mentioned 4G/5G private networks, i.e., licensed spectrum that creates closed ecosystems – but ones that are open to researchers, creating new opportunities for experimentation.

Jim also suggested further opportunities in softwarization and programmability, such as (formal) methods for logical correctness and configuration management, as well as programmability to add services beyond the "minimal viable service", such as closed-loop automatic control and management.

Finally, Jim mentioned opportunities in emerging new networks such as LEOs, IoT, and home networks.

Written by dkutscher

October 23rd, 2023 at 7:46 am

ACM SIGCOMM CCR: Report of 2021 DINRG Workshop on Centralization in the Internet


ACM SIGCOMM CCR just published the report of our 2021 DINRG meeting on Centralization in the Internet.

Executive Summary

There is a consensus within the networking community that the Internet consolidation and centralization trend has progressed rapidly over recent years, as measured by the structural changes to the data delivery infrastructure, the control power over system platforms, application development and deployment, and even in the standard development efforts. This trend has brought impactful technical, societal, and economical consequences.

When the Internet was first conceived as a decentralized system 40+ years back, few people, if any, could have foreseen how it looks today. How has the Internet evolved from there to here? What have been the driving forces for the observed consolidation? From a retrospective view, was there anything that might have been done differently to influence the course the Internet has taken? And most importantly, what should and can be done now to mitigate the trend of centralization? Although there are significant interests in these topics, there has not been much structured discussion on how to answer these important questions.

The IRTF Research Group on Decentralizing the Internet (DINRG) organized a workshop on “Centralization in the Internet” on June 3, 2021, with the objective of starting an organized open discussion on the above questions. Although there seems to be an urgent need for effective countermeasures to the centralization problem, this workshop took a step back: before jumping into solution development to steer the Internet away from centralization, we wanted to discuss how the Internet has evolved and changed, and what have been the driving forces and enablers for those changes. The organizers and part of the community believe that a sound and evidence-based understanding is the key towards devising effective remedy and action plans. In particular, we would like to deepen our understanding of the relationship between the architectural properties and economic developments.

This workshop consisted of two panels; each panel started with an opening presentation, followed by panel discussion, then open-floor discussion. There was also an all-hands discussion at the end. Three hours of workshop presentations and discussions showed that the Internet centralization problem space is highly complex and filled with intrinsic interplays between technical and economic factors.

This report aims to summarize the workshop outcome with a broad-brush picture of the problem space. We hope that this big picture view could help the research group, as well as the broader IETF community, to reach a clearer and shared high-level understanding of the problem, and from there to identify what actions are needed, which of them require technical solutions, and which of them are regulatory issues which require technical community to provide inputs to regulatory sectors to develop effective regulation policies.

You can find the report in the ACM Digital Library. We also have a pre-print version.

Written by dkutscher

July 27th, 2023 at 4:35 pm

IRTF Decentralization of the Internet Research Group at IETF-117


Recent years have witnessed the consolidation of Internet applications, services, and infrastructure. The Decentralization of the Internet Research Group (DINRG) aims to provide the research and engineering community with both an open forum to discuss the Internet centralization phenomena and associated potential threats, and a platform to facilitate the coordination of efforts in identifying the causes of observed consolidation and the mitigation solutions.

Our upcoming DINRG meeting at IETF-117 will feature three talks – by Cory Doctorow, Volker Stocker & William Lehr, and Christian Tschudin.

1 DINRG Chairs' Presentation: Status, Updates (Chairs, 05 min)
2 Let The Platforms Burn: Bringing Back the Good Fire of the Old Internet (Cory Doctorow, 30 min)
3 Ecosystem Evolution and Digital Infrastructure Policy Challenges: Insights & Reflections from an Economics Perspective (Volker Stocker & William Lehr, 20 min)
4 Minimal Global Broadcast (MGB) (Christian Tschudin, 20 min)
5 Wrap-up & Buffer (All, 15 min)

Documents

Logistics

DINRG Meeting at IETF-117 – 2023-07-25, 20:00 to 21:30 UTC

IETF-117 Agenda

Written by dkutscher

July 17th, 2023 at 5:44 pm

Posted in Events,IETF,IRTF


Named Data Metaverse


I had the pleasure of chairing a really interesting panel discussion at the NDN Community Meeting (NDNComm 2023) on March 3rd, 2023.

The panel discussed opportunities and challenges for building Metaverse systems with a Named Data Networking approach. Specific discussion questions included:

  • What are architectural, security-related, and performance-related issues in Metaverse systems today?
  • What communication patterns could be supported by NDN platforms?
  • How can the data-oriented model and decentralized trust establishment help in developing better Metaverse systems and at what layer would NDN technologies help?
  • What are gaps, challenges and research opportunities for NDN evolution to address Metaverse system requirements?

The panelists were:

  • Paulo Mendes (Airbus Research)
  • Michelle Munson (Eluvio)
  • Todd Hodes (Eluvio)
  • Jeff Burke (UCLA REMAP)

The panel discussed scenarios for Named Data in the Metaverse such as AR in live performance, real-time ML for transformed reality, architectures for emerging arts, media, and entertainment, commercial content distribution and experience delivery, as well as Metaverse VR experiences in challenged networks.

Jeff Burke introduced exciting ideas for re-imagining VR-enhanced live performances and shared some ideas and insights from building such applications. In his class of applications, there is a lot of local interaction (for example, in a theater), creating interesting challenges and opportunities for local, decentralized Metaverses. At the application layer, Metaverse VR applications would likely use scene and model descriptions such as USD and glTF, so the question arises what opportunities exist for mapping the corresponding names to "network layer" names.

Michelle Munson and Todd Hodes introduced Eluvio's Content Fabric Protocol (CFP), a platform aimed at commercial-grade decentralized content distribution, providing content-native addressability and programmability mechanisms for storage, distribution, and built-in streaming and content processing. CFP uses Blockchain governance for versioning, access control, and on-chain/cross-chain monetization. An example use case is the Warner Movieverse.

The panel discussed the different approaches to dealing with named data as a fundamental building block and some specific use cases for networked Metaverse systems, such as (secure) in-network content transformation. Overall, the panel was a great initial discussion of these ideas, which should definitely be continued. Check out the list of related events below for possible venues.

Related Events

Written by dkutscher

March 11th, 2023 at 8:55 am

Posted in Events


IEEE MetaCom Workshop on Decentralized, Data-Oriented Networking for the Metaverse (DORM)



Workshop page at IEEE MetaCom

Organizers

  • Jeff Burke, UCLA
  • Dirk Kutscher, HKUST(GZ)
  • Dave Oran, Network Systems Research & Design
  • Lixia Zhang, UCLA

Workshop Description

The DORM workshop is a forum to explore new directions and early research results on Metaverse system architecture, protocols, and security, along a data-oriented design direction that can encourage and facilitate decentralized realizations. Here we broadly interpret the phrase “Metaverse” as a new phase of networking with multi-dimensional shared views in open realms.

Most prototype implementations of such systems today replicate the social media platform model: they run on cloud servers offered by a small number of providers, and have identities and trust management anchored at these servers. Consequently, all communications are mediated through such servers, together with extensive CDN overlay infrastructures or the equivalent.

Although the cloud services may be extended to edges to address performance and delay issues, the centralization of control power that stems from this cloud-centric approach can be problematic from a societal perspective. It also reflects a significant semantic mismatch between the existing address-based network support and many aspirations for open realm applications and interoperability: the applications, by and large, operate on named data principles at the application layer, but need to deploy multiple layers of provider-specific middleware services to bridge the gap. These added complexities prohibit new ways of interacting (leveraging new data formats such as USD and glTF) and are not conducive to flexible distributed computing in the edge-to-cloud continuum.

This workshop solicits efforts that explore new directions in metaverse realization and work that takes a principled approach to key topics in the areas of 1) Networking as the Platform, 2) Objects and Experiences, and 3) Trust and Transactions without being constrained by inherited platforms.

Networking as the Platform

Metaverse systems will rely on a variety of communication patterns such as client-server RPC, massively scalable multi-destination communication, publish-subscribe, etc. In systems that are designed with a cloud-based, centralized architecture in mind, such interactions are typically mediated by central servers and supported by overlay CDN infrastructure, with operational inflexibility and a lack of optimization mechanisms, for example to leverage specific network link layer capabilities such as broadcast/multicast features. The underlying reliance on existing stacks also introduces familiar complications in providing disruption-tolerant, mobile-friendly extended reality applications, limiting their viability for eventual use in critical infrastructure and requiring significant engineering support for demanding entertainment applications, such as large-scale live events.

This workshop seeks research on new strategies for Metaverse system design that can promote innovation by lowering barriers to entry for new applications that perform robustly under a variety of conditions. We solicit research on Metaverse system design that addresses architectural and protocol-level issues without the reliance on a centralized cloud-based architecture. Instead, we expect the DORM workshop submissions to start with a distributed system assumption, focusing on individual protocol and security elements that enable decentralized Metaverse realizations.

Many Metaverse-relevant interactions such as video streaming and distribution of event data today inherently rely on abstractions for accessing named data objects such as video chunks, for example in DASH-based video streaming. The DORM workshop will therefore particularly invite contributions that explore new systems and protocol designs that leverage that principle, thus exploring new opportunities to re-imagine the relationship between application/network and link/physical layer protocols in order to better support Metaverse system implementations. This could include work on new hypermedia concepts based on the named data principle and cross-layer designs for simplifying and optimizing the implementation and operation of such protocols.

We expect such systems to also be better suited to an elegant, efficient integration of computing into the network, thus providing more flexible and adaptive platforms for offloading computation and supporting more elaborate data dissemination strategies.

From Objects to Experiences

In our perceived Metaverse/open realm systems, there are different existing and emerging media representations and encodings such as current video encodings as well as scene and 3D object description and transmission formats such as USD and glTF. Similar to previous developments in the networked audio/video area, it is interesting to investigate opportunities for new scene and 3D object representation formats that are suitable not only for efficient creation and file-like unidirectional transmission but also for streaming, granular composition and access, de-structuring, efficient multi-destination transmission, possibly using network coding techniques.

The workshop is therefore soliciting contributions that explore a holistic approach to media/object representation within network/distributed computing, enabling better performance, composability and robustness of future distributed Metaverse systems. Submissions that explore cross-layer approaches to supporting emerging media types such as volumetric video and neural network codecs are encouraged, as are considerations of how code implementing object behaviors and interactions can be supported - providing a path to the interoperable experiences expressed in various Metaverse visions.

Trust and Transactions

Finally, distributed open realm systems need innovative solutions in identity management and security support that enable interoperation among multiple systems including a diverse population of users. We note that mechanisms to support trust are inherently coupled with various identities, from "real world" identities to application-specific identities that users may adopt in different contexts. Proposed solutions need to consider not just media asset exchange but also the interactions among objects, and the data flows needed to support it.

The workshop solicits contributions that identify specific technical challenges, for example system bootstrapping, trust establishment, and authenticated information discovery, and that propose new approaches to the identified challenges. Researchers are encouraged to consider cross-layer designs that address disconnects between layers of trust in many current systems – e.g., the reliance on third-party certificate authorities for authentication, or the inherent trust in connections rather than in the objects themselves, which tends to generate brittleness even for local communications if connectivity to the global network is compromised.

Call for Papers

The Decentralized Data-Oriented Networking for the Metaverse (DORM) workshop is intended as a forum to explore new directions and early research results on the system architecture, protocols, and security to support Metaverse applications, focusing on data-oriented, decentralized system designs. We view Metaverse as a new phase of networking with multi-dimensional shared views in open realms.

Most Metaverse systems today replicate the social media platform model, i.e., they assume a cloud platform provider-based system architecture where identities and the trust among them are anchored via a centralized administrative structure and where communication is mediated through servers and an extensive CDN overlay infrastructure operated by that administration. The centralization that stems from this approach can be problematic both from a control and from a performance & efficiency perspective. Despite operating on named data principles conceptually, such systems typically exhibit traditional layering approaches that prohibit new ways of interacting (leveraging new data formats such as USD and glTF) and that are not conducive to flexible distributed computing in the edge-to-cloud continuum.

This workshop solicits work that takes a principled approach to key research topics in the areas of 1) Networking as the Platform, 2) Objects and Experiences, and 3) Trust and Transactions without being constrained by inherited platform designs, including but not limited to:

  • Distributed Metaverse architectures
  • Computing in the network as an integral component for better communication and interaction support
  • Application-layer protocols for a rich set of interaction styles in open realms
  • Supporting Metaverse via data-oriented techniques
  • Security, Privacy and Identity Management in Metaverse systems
  • New concepts for improved network support for Metaverse systems, e.g., through facilitating ubiquitous multipath forwarding and multi-destination delivery
  • Cross-layer designs
  • Emerging scene description and media formats
  • Quality of Experience for Metaverse applications
  • Distributed consensus and state synchronization

Given the breadth and emerging nature of the field, all papers should include the articulation of a specific vision of Metaverse that provides clarifying assumptions for the technical content.

Submissions and Formatting

The workshop invites submission of manuscripts with early and original research results that have not been previously published or posted on public websites and that are not currently under review by another conference or journal. Submitted manuscripts must be prepared according to the IEEE Computer Society Proceedings Format (double column, 10pt font, letter paper) and submitted in PDF format. The manuscript submitted for review should be no longer than 6 pages without references. Reviewing will be double-blind. Submissions must not reveal the authors' names or affiliations and must avoid obvious self-references. Accepted and presented papers will be published in the IEEE MetaCom 2023 Conference Proceedings and included in IEEE Xplore.

Manuscript templates can be found here. All submissions to IEEE MetaCom 2023 must be uploaded to EasyChair at https://easychair.org/conferences/?conf=metacom2023.

Organization Committee

  • Jeff Burke, UCLA
  • Dirk Kutscher, HKUST(GZ)
  • Dave Oran, Network Systems Research & Design
  • Lixia Zhang, UCLA

Technical Program Committee

  • Alex Afanasyev, Florida International University
  • Hitoshi Asaeda, NICT
  • Ali Begen, Ozyegin University
  • Taejoong Chung, Virginia Tech
  • Serge Fdida, Sorbonne University Paris
  • Carlos Guimarães, ZettaScale Technology SARL
  • Peter Gusev, UCLA
  • Toru Hasegawa, Osaka University
  • Jungha Hong, ETRI
  • Kenji Kanai, Waseda University
  • Ruidong Li, Kanazawa University
  • Spyridon Mastorakis, University of Nebraska Omaha
  • Kazuhisa Matsuzono, NICT
  • Marie-Jose Montpetit, Concordia University Montreal
  • Jörg Ott, Technical University Munich
  • Yiannis Psaras, Protocol Labs
  • Eve Schooler, Intel
  • Tian Song, Beijing Institute of Technology
  • Kazuaki Ueda, KDDI Research
  • Cedric Westphal, Futurewei
  • Edmund Yeh, Northeastern University
  • Jiadong Yu, HKUST(GZ)
  • Yu Zhang, Harbin Institute of Technology

Important Dates

  • March 20, 2023, Paper submission deadline
  • April 20, 2023, Notification of paper acceptance
  • May 10, 2023, Camera-ready paper submissions

Submission Link

https://easychair.org/conferences/?conf=metacom2023

Written by dkutscher

January 16th, 2023 at 6:50 pm

Posted in Events
