Dirk Kutscher

Personal web page

Projects


The following list provides a selection of research projects and other activities I have previously been involved in.

Named Data Microverse

Our project proposal on Named Data Microverse was selected as a winner of the Future of Data Challenge.

The Named Data Microverse project explores how Information-Centric Networking (ICN) can enable a free, open, and decentralized approach to “the metaverse”. The project aims to balance scalability and market-based innovation with democratization, trustworthiness, and equitable empowerment of individuals. ICN provides an architectural foundation that makes secure, distributed applications easier to create and offers resilience in natural disasters, better mobility support, cloud-optional local communication, improved privacy, and other benefits that are not addressed by “Web3” technologies alone.

This is a joint project with Jeff Burke and Lixia Zhang at UCLA.

MAVERIC: In-Network Computing for 5G Campus Networks

The MAVERIC project will develop a mobile 5G campus network system with a special focus on automated deployment, monitoring, and flexible, digitally sovereign in-network computing. The main use cases within the project are processes and tasks on shipyards. This environment is particularly harsh and imposes very high requirements on availability, security, and confidentiality.

Piccolo: In-Network Computing

The Piccolo research project is developing new solutions for in-network computing that remove known and emerging deficiencies of edge/fog computing. Starting from a set of innovative, industry-relevant use cases, we are creating a distributed computing platform that can leverage different kinds of underlying infrastructure, cater to various business needs and user preferences, and provide an open platform for future applications.

Our motivation is that the centralized cloud computing model in use today has difficulty handling new and emerging applications. Ever more powerful user and IoT devices are producing enormous amounts of data: too much to send into the cloud for centralized processing, and round-trip times are too long for the stringent latency requirements of some applications. There are also increasing concerns about leaving data privacy at the mercy of big cloud operators. Shifting from centralized to in-network compute can alleviate these concerns, thereby opening up new horizons for application development and creating new infrastructure markets.

Compute-First Networking

Edge computing and, more generally, in-network computing are receiving a lot of attention in research and industry fora. What are the interesting research questions from a networking perspective? In-network computing can be conceived of in many different ways: from active networking, data-plane programmability, virtualized functions, and service chaining to distributed computing. Modern distributed computing frameworks and domain-specific languages provide a convenient and robust way to structure large distributed applications and deploy them on either data center or edge computing environments. However, current systems suffer from the need for a complex underlay of services to run effectively on top of existing Internet protocols. These services include centralized schedulers, DNS-based name translation, stateful load balancers, and heavy-weight transport protocols.

Over the past years, we have been working on alternative approaches that integrate networking and computing more closely, so that distributed computing can leverage networking capabilities directly and the usage of networking and computing resources can be optimized in a holistic fashion.
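To make the contrast concrete, here is a minimal, purely illustrative Python sketch (not part of any project deliverable) of requesting a computation by name rather than addressing a specific host: the resolution/scheduling step is free to choose where the result is produced, and the result itself becomes a named, cacheable object. All names, nodes, and functions below are hypothetical.

    # Illustrative sketch only: requesting a computation by name instead of
    # addressing a specific host. Names, nodes, and functions are hypothetical.

    RESULT_CACHE = {}          # name -> previously computed result (stand-in for an in-network cache)
    EXECUTORS = {              # name -> candidate nodes that can run the computation
        "/app/thumbnail/img42": ["edge-node-1", "edge-node-2", "cloud-dc"],
    }

    def run_on(node, name):
        """Stand-in for actually executing the named computation on a node."""
        return f"result({name}) computed at {node}"

    def request(name):
        """Name-based invocation: the caller never names a host, only the computation."""
        if name in RESULT_CACHE:                 # a cached copy anywhere satisfies the request
            return RESULT_CACHE[name]
        node = EXECUTORS[name][0]                # resolution/scheduling picks a location
        result = run_on(node, name)
        RESULT_CACHE[name] = result              # result becomes a named, reusable object
        return result

    print(request("/app/thumbnail/img42"))
    print(request("/app/thumbnail/img42"))       # second call served from the cache

In a real system, the cache lookup and executor selection would of course be performed by the network and scheduling layers rather than by the client; the sketch only illustrates the name-based invocation pattern.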

Please read the online article for more information and links to papers.

OPNFV

OPNFV is a carrier-grade, integrated, open source platform to accelerate the introduction of new NFV products and services. As an open source project, OPNFV is uniquely positioned to bring together the work of standards bodies, open source communities, and commercial suppliers to deliver a de facto standard open source NFV platform for the industry.

By integrating components from upstream projects, the community can conduct performance and use-case-based testing to ensure the platform’s suitability for NFV use cases. OPNFV also works upstream, with other open source communities, to bring the lessons from its work directly to those communities in the form of blueprints, patches, and code contributions.

The scope of OPNFV’s initial release is focused on building NFV Infrastructure (NFVI) and Virtualized Infrastructure Management (VIM) by integrating components from upstream projects such as OpenDaylight, OpenStack, Ceph Storage, KVM, Open vSwitch, and Linux. These components, along with application programming interfaces (APIs) to other NFV elements, form the basic infrastructure required for Virtualized Network Functions (VNF) and Management and Network Orchestration (MANO) components. OPNFV’s goal is to increase performance and power efficiency; improve reliability, availability, and serviceability; and deliver comprehensive platform instrumentation.

Fostering a diverse community of developers who bring different needs, ideas and knowledge to the table means faster time to market and stronger code. We hope you’ll join OPNFV as we work together to effect the game-changing networking transformation that is NFV.

More information on OPNFV: www.opnfv.org.

SSICLOPS

The Scalable and Secure Infrastructures for Cloud Operations (SSICLOPS, pronounced “cyclops”) project focuses on techniques for the management of federated private cloud infrastructures, in particular cloud networking techniques within software-defined data centres and across wide-area networks. SSICLOPS is funded by the European Commission under the Horizon 2020 programme.

SSICLOPS will empower enterprises to create and operate high-performance private cloud infrastructure that allows flexible scaling through federation with other private clouds without compromising on their service level and security requirements. SSICLOPS federation will support the efficient integration of clouds, no matter whether they are geographically collocated or spread out, or belong to the same or different administrative entities or jurisdictions: in all cases, SSICLOPS will deliver maximum performance for inter-cloud communication, enforce legal and security constraints, and minimize overall resource consumption. In such a federation, individual enterprises will be able to dynamically scale their private cloud services in and out: they offer their own spare resources when available and take in resources from others when needed. This maximizes the utilization of each member’s own infrastructure while minimizing excess capacity needs.

SSICLOPS-powered private clouds will offer fine-grained monitoring and tuning capabilities along with workload planning and optimization tools to maximize performance across a broad spectrum of workloads and across a wide operational scale, as we will demonstrate using four highly diverse use cases. The SSICLOPS solution will be based upon state-of-the-art open source products used broadly in private cloud deployments today to provide enterprises with full control over their own deployment.

More information on SSICLOPS: ssiclops.eu.

GreenICN

Information-Centric Networking (ICN) is a new paradigm in which the network provides users with named content instead of communication channels between hosts. Research on ICN is at an early stage, with many key issues still open, including naming, routing, resource control, security, privacy, and a migration path from the current Internet. Seamless support for content-based publish/subscribe, needed for efficient information dissemination, is also missing. Further, and importantly, current proposals do not sufficiently address energy efficiency. GreenICN aims to bridge this gap, addressing how the ICN network and devices can operate in a highly scalable and energy-efficient way.

The project exploits the designed infrastructure to support two exemplary application scenarios:

  1. The aftermath of a disaster, e.g., a hurricane or tsunami, when energy and communication resources are at a premium and it is critical to efficiently distribute disaster notifications and critical rescue information. Key to this is the ability to exploit fragmented networks with only intermittent connectivity.

  2. Scalable, efficient pub/sub video delivery, a key requirement in both normal and disaster situations.

GreenICN will also expose a functionality-rich API to spur the creation of new applications and services expected to drive EU and Japanese industry and consumers into ICN adoption. Our team, comprising researchers with diverse expertise, system and network equipment manufacturers, device vendors, a startup, and mobile telecommunications operators, is very well positioned to design, prototype, and deploy GreenICN technology, and to validate the usability and performance of real-world GreenICN applications, contributing to the creation of a new, low-energy, information-centric Internet. Our expertise and experience in standardization will enable us to make major contributions to standards bodies. Our efforts will foster continued close cooperation between the industrial and research communities of Europe and Japan.

More information on GreenICN: www.greenicn.org.

SAIL

SAIL (Scalable & Adaptive Internet Solutions) is aiming at designing architectures for the Networks of the Future, as part of the European Commission’s 7th Framework Programme. SAIL has three main technical strands: Network of Information (information-centric networking), Cloud Networking (combining virtual networking with cloud computing), and Open Connectivity Services (transport and routing services that can be controlled and orchestrated over various technologies).

My main interest is the research on information-centric networking. The main idea is to move from a host-based communication paradigm, where host addresses/IDs are the principal communication objects, to a paradigm that is based on named content. In current application areas such as content distribution and peer-to-peer communication, we can observe that communication is actually no longer about setting up end-to-end connections to an origin server in order to access a certain service or content. Instead, users are interested in named content (represented by, for instance, torrents or URLs), and a corresponding distribution system provides lookup and distribution services that enable interested receivers to obtain the content (copies of the content or content chunks). So far, this paradigm has been applied to isolated, mostly overlaid, applications or distribution platforms. The intention in SAIL is to generalize these concepts for a ubiquitous communication platform, where named content, in-network storage, and efficient distribution are available to any application.

Several research questions are related to this: 1) how to design a naming framework that can name all information objects and is scalable in terms of lookup table size and lookup latency while still meeting security requirements; 2) how to efficiently move content to appropriate locations in the network; 3) how to manage mobility, multi-interface nodes, and disruption tolerance; and 4) how to evolve the socio-economics, with potential new roles for content providers/consumers as well as network/cache operators.
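As a small illustration of the name-based paradigm, the following simplified Python sketch derives a self-certifying name from the content itself, in the spirit of the ni URI scheme (RFC 6920, which is closely related to the NetInf naming work): any copy of the content, retrieved from any cache, can be verified against the name alone. The example content is of course made up, and real names carry additional metadata.

    # Simplified illustration of hash-based content naming (in the spirit of RFC 6920).
    import base64
    import hashlib

    def ni_name(content: bytes) -> str:
        """Derive a name from the content: ni:///sha-256;<base64url digest>."""
        digest = hashlib.sha256(content).digest()
        b64 = base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")
        return f"ni:///sha-256;{b64}"

    def verify(name: str, content: bytes) -> bool:
        """A receiver can check any retrieved copy against the name alone."""
        return name == ni_name(content)

    chunk = b"named data object, e.g. one chunk of a video"
    name = ni_name(chunk)
    print(name)
    print(verify(name, chunk))        # True: content matches its name
    print(verify(name, b"tampered"))  # False: a modified copy is rejected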

More information on SAIL: http://www.sail-project.eu/.

CHIANTI

CHIANTI is a Small or Medium-scale Focused Research Project (STREP) within the ICT initiative of the 7th EU Framework Programme. CHIANTI is developing technologies for enabling effective, robust, and cost-efficient communication services in challenging network environments, e.g., for providing productive and stable Internet access to passengers in high-speed trains. Unlike many existing approaches, CHIANTI is developing technologies that do not require complete network coverage. Instead, CHIANTI will provide perceived seamless connectivity despite disruptions, changing network characteristics, etc., and will thus enable users on the move to use today's and future networks more productively.

More information on CHIANTI: http://www.chianti-ict.org/.

ScaleNet

Within ScaleNet, academia and industry jointly work on the scalable and converged multi-access operator network of tomorrow, focusing on 2010 onwards. ScaleNet addresses both service and network convergence. The multi-play of services in ScaleNet embraces voice and video telephony, Mobile TV, massively multiplayer online gaming, and Internet access. Network convergence is seen as the migration of heterogeneous physical and logical network elements of fixed and mobile networks into one single (IP-based) infrastructure. TZI has been developing a robust Mobile TV application for the converged ScaleNet network infrastructure.

More information on ScaleNet: http://www.scalenet.de/.

Network Service Maps

Network Service Maps are an enabling technology for facilitating network access in heterogeneous, potentially challenged networks, such as sparsely distributed WLAN hotspots. Network Service Maps are based on the notion that future heterogeneous wireless networks will encompass different link-layer technologies and allow selecting the most appropriate network depending on different criteria. To support mobile nodes in the selection process, network information services are being developed that provide the mobile node with sufficient information about its network neighborhood, typically focusing on the optimization of handover processes. In this research, we take a more general approach towards network information services, which is needed to support mobile communications in the existing environments of WLAN hotspots and wide-area mobile communication networks. We introduce the notion of service maps, a mobile data management approach that allows a mobile user to obtain a detailed view of available networks and the services they offer, depending on user context such as geographic position, mobility paths, and application requirements.

More information on Network Service Maps is available at: http://service-maps.net
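To give a rough idea of what a service-map lookup might look like from a mobile node's perspective, here is a small, purely hypothetical Python sketch: map entries describe coverage areas and offered services, and the node filters them by its position and application requirements. Field names and values are invented for illustration and do not reflect the project's actual data model.

    # Hypothetical sketch of a service-map lookup; the data model is invented.
    from dataclasses import dataclass
    from math import hypot

    @dataclass
    class ServiceMapEntry:
        network_id: str
        technology: str      # e.g. "wlan", "umts"
        center: tuple        # (x, y) coverage center in a local coordinate system
        radius: float        # coverage radius
        downlink_kbps: int
        services: tuple      # e.g. ("internet", "local-cache")

    def candidates(entries, position, min_downlink_kbps, required_service):
        """Return networks that cover the position and satisfy the application needs."""
        x, y = position
        return [
            e for e in entries
            if hypot(e.center[0] - x, e.center[1] - y) <= e.radius
            and e.downlink_kbps >= min_downlink_kbps
            and required_service in e.services
        ]

    service_map = [
        ServiceMapEntry("hotspot-17", "wlan", (100.0, 40.0), 80.0, 6000, ("internet", "local-cache")),
        ServiceMapEntry("cell-3", "umts", (0.0, 0.0), 5000.0, 384, ("internet",)),
    ]
    print(candidates(service_map, position=(120.0, 60.0),
                     min_downlink_kbps=1000, required_service="internet"))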

Kasuari Emulation Framework

The Kasuari framework is mainly intended to help with (IP) protocol development and testing. One of its features is the possibility to run unmodified real-world networked applications on a virtual host under simulated network conditions. The framework is based on Xen 3.0 and comes with scripts to run the virtual machines, a pre-configured filesystem image (with DTN and AODV implementations), a copy-on-write driver, and a few other tools. It can be used for testing almost any kernel module or networked application that runs on Linux, and it allows simulating complex and realistic (wireless) networks using a slightly adapted version of the ns2 network simulator.

More information on the Kasuari Emulation Framework is available at: http://www.kasuari.org/

Drive-thru Internet

The Drive-thru Internet project investigates the usability of IEEE 802.11 technology for providing network access to mobile users in moving vehicles. The idea of Drive-thru Internet is to provide hot spots along the road: within a city, on a highway, or even on high-speed freeways such as autobahns. They need to be placed in a way that a vehicle driving by will obtain WLAN access for some (relatively short) period of time; if located in rest areas, the driver may exit and pass by slowly or even stop to prolong the connectivity period. One or more locally interconnected access points form a so-called connectivity island that may provide local services as well as Internet access. Several of these connectivity islands along a road or in the same geographic area may be interconnected and cooperate to provide network access with intermittent connectivity for a larger area.

More information about the Drive-thru Internet project: http://www.drive-thru-internet.org/

Internet Media Guides

Internet Media Guides (IMGs) are a generalization of Electronic Program Guides (EPGs) as known from digital video broadcasting (DVB). They are independent of specific metadata formats and thus are able to support a broad range of applications, including EPG distribution for TV networks and distribution of session descriptions for Internet-based multimedia sessions. Unlike most existing approaches, the IMG framework is also completely independent of specific delivery networks for the media content described in media guides, and it is independent of the distribution mechanisms for the media guides themselves: IMGs can be distributed in unidirectional broadcast networks, they can also be retrieved over established query/response protocols such as HTTP, and they allow for asynchronous change notifications to interested subscribers.

At TZI we have developed IMG distribution implementations that are available for download. More information on the IMG work is available at: https://prj.tzi.org/cgi-bin/trac.cgi/wiki/TZI-IMG

Mbus

The Message Bus (Mbus) is a light-weight local coordination protocol for developing component-based distributed applications that has been developed by Universität Bremen and University College London. Mbus provides a simple and flexible message-oriented communication channel for a group of components that may be distributed over multiple hosts in a local network. The Mbus transport services include useful features such as peer location, point-to-point and group communication, and security. The protocol specification has been published as RFC 3259.

Mbus implementations have been developed for different programming languages and platforms, including small one-chip computers. The protocol has been applied to different application domains, e.g., for coordinating application components in decomposed multimedia conferencing applications and for providing coordination services for pervasive computing environments such as home networks. This web site provides some details on the Mbus protocol itself as well as on extensions, implementations, and applications: http://www.mbus.org/
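The following Python snippet is only a rough illustration of the kind of lightweight, message-oriented group communication on a local network that Mbus builds on: one component sends a text message to a link-local multicast group, and all listening components receive it. The group address, port, and payload are placeholders chosen for the example; they are not the values or wire format specified in RFC 3259.

    # Rough illustration of group communication on a local network, as used by
    # Mbus-style coordination. Address, port, and payload are placeholders,
    # not the actual RFC 3259 wire format.
    import socket

    GROUP, PORT = "239.255.0.1", 50000   # placeholder multicast group and port

    def send(message: str):
        """Send one message to all local components listening on the group."""
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)  # stay link-local
            sock.sendto(message.encode("utf-8"), (GROUP, PORT))

    def listen():
        """Receive messages addressed to the group (run in each component)."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        sock.bind(("", PORT))
        mreq = socket.inet_aton(GROUP) + socket.inet_aton("0.0.0.0")
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
        while True:
            data, addr = sock.recvfrom(4096)
            print(f"from {addr}: {data.decode('utf-8')}")

    send("conference.mute (device:audio)")   # hypothetical command-style payload

On top of such a group channel, Mbus adds addressing, reliability, and security mechanisms as defined in RFC 3259.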

6WINIT

The 6WINIT project, which concluded in January 2003, has validated the introduction of the new Mobile Wireless Internet in Europe. It investigated and validated the set-up of one of the first European operational IPv6-3G Mobile Internet initiatives, providing the 6WINIT project customers with native IPv6 access points and native IPv6 services in a 3G environment.

More information about 6WINIT: Local 6WINIT description (German)

MECCANO

The objective of the MECCANO project, which concluded in May 2000, was to provide all the technology components, other than the data network itself, to support collaborative research and technical development through the deployment of enhanced tools for multimedia collaboration in Europe. The project improved and deployed existing conferencing toolsets, with a particular application focus on distance education and conferencing.

MECCANO homepage

Winspect

The Winspect project (Wearable Computing in Inspection) has developed a system to support the maintenance staff dealing with the inspection of industrial cranes at a Bremen steel plant. We have investigated the use of wireless, wearable computers in industrial environments and have developed different applications, e.g., multimedia conferencing and data inspection support applications on PC-platform-based wearable computers.

More information about Winspect: Winspect homepage (German)

CONTRABAND

The CONTRABAND project (Conferencing for Transport Breakdown and Accident Management and Networking of Dispatchers) has developed a multimedia multiparty conferencing system that is tailored for use in both engineering and accident management situations. For the latter type of application, a mobile conferencing component based on a wearable, wireless computer has been developed.

MEDUSA

The MEDUSA sensor network (Multispectral Environment Data Unit for Surveillance Application) enables regular monitoring of waters regardless of optical visibility, inspection of reported oil pollution, securing of evidence regarding polluters, and support for the ships assigned to combat pollution. To be able to operate regardless of the time of day and weather, several types of sensors are used, e.g., radar, infrared and ultraviolet line scanners, and video or low-light-level cameras. With the help of this equipment it is possible to detect pollution (e.g., oil or algae) on or below the sea surface, in some cases even at a distance of up to 50 km, and subsequently to classify it in an overflight and determine its amount.

MEDUSA homepage

Written by dkutscher

December 10th, 2009 at 7:51 pm
