Computer Science Department
Duke University

Overview | Research Directions | Publications and Presentations | Funding | Members


Note: this is an early version of a site describing the Duke CS NICL project we now call the Open Resource Control Architecture (Orca). We renamed the project because several other closely related projects have similar names (Sirius, Cerias, etc.). This page is still accurate, but we now use the name Cereus only for a subproject on market-based resource management, which can be implemented as a set of controller policy modules for an Orca/Shirako lease-based resource control plane.

The Cereus project investigates a broad range of topics focused on how best to manage networked utilities, such as computational grids, network testbeds, and commercial hosting services. Our goal is to build a general utility architecture that incorporates the means to virtualize resources to balance isolation and sharing, monitor and control resource status, match resource supply and demand, and adapt applications and services to the dynamics of a shared environment.

Cereus is an outgrowth of previous work in the systems group at Duke University with Cluster-On-Demand (COD) and Secure Highly Available Resource Peering (SHARP) [SOSP 2003 pdf], as well as market-based resource management approaches including work on adaptive resource provisioning for data centers [SOSP 2001 pdf] and market-based task services [HPDC 2004 pdf].

Cereus is based on a brokered lease contract model that defines a general approach to resource sharing built around an extensible leasing abstraction. Our current prototype of the leasing system is called Shirako: a generic leasing core with plugin interfaces for extensible resource allocation policies, resource-specific configuration, and event handlers. Resource leasing is a powerful abstraction for providing predictable performance and allowing sophisticated services and applications to meet target levels of service quality.
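The plugin structure of such a leasing core can be sketched as follows. This is a minimal illustration only: the class and method names are hypothetical and do not reflect the actual Shirako API, which is more elaborate.

```python
from dataclasses import dataclass

# Hypothetical sketch of a leasing core with a pluggable allocation policy.
# All names are illustrative; they are not the real Shirako interfaces.

@dataclass
class Lease:
    guest: str
    resource_type: str
    units: int
    start: int   # lease term, in abstract time ticks
    end: int

class AllocationPolicy:
    """Plugin point: decides whether and how to satisfy a request."""
    def allocate(self, request, inventory):
        raise NotImplementedError

class FirstComeFirstServed(AllocationPolicy):
    def allocate(self, request, inventory):
        free = inventory.get(request["resource_type"], 0)
        granted = min(free, request["units"])
        if granted == 0:
            return None
        inventory[request["resource_type"]] = free - granted
        return Lease(request["guest"], request["resource_type"],
                     granted, request["start"], request["end"])

# Usage: the leasing core delegates the allocation decision to the plugin.
inventory = {"server": 4}
policy = FirstComeFirstServed()
lease = policy.allocate(
    {"guest": "grid-A", "resource_type": "server",
     "units": 2, "start": 0, "end": 10},
    inventory)
```

Because the policy is behind an interface, swapping in a different allocation strategy requires no change to the leasing core itself.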

We have developed a third-generation implementation of Cluster-On-Demand as a plugin module to Cereus to demonstrate the power of our leasing architecture to allocate server resources (physical or virtual) dynamically. More generally, Cereus is designed to lease any type of "raw" resource whose power and capability are described by attributes expressed in metrics independent of the hosted software (e.g., blocks of machines with processor/memory attributes, or storage partitions with capacity/throughput attributes).


Cereus is designed to manage a networked utility comprising a collection of autonomous resource supplier sites without central control. The sites may export a variety of resources including servers in data centers (Cluster-On-Demand), network paths, capacity at storage sites, or other resources such as distributed sensors. Infrastructure services within the utility partition the resources and assign them to host guests, which share the resources but are contained and protected from one another. The guests may be networked services, applications, or virtual network overlays. For example, a guest may comprise a linked set of virtual clusters at multiple sites, leased to host a distributed environment such as a cross-institutional grid or a distributed application such as a content distribution network.

Each guest service, application, or environment has an associated service manager that monitors application demands and resource status, and negotiates with the utility services to acquire leases for the mix of resources needed to host the guest. The policies for approving requests and provisioning resources are specified in brokers (also called agents), and different brokers may implement different policies. Brokers maintain inventories of resources offered by supplier sites and have the power to coordinate resource allocation. The resources held by a guest at any given time comprise the guest's slice (a term shared with PlanetLab) of the shared infrastructure. A site authority at each resource supplier site is responsible for binding resources to the slice, and for enforcing isolation among the multiple guests hosted on the resources under its control.
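The division of labor among the three actor roles can be sketched as a ticket/redeem exchange: the broker promises resources, and the site authority binds concrete resources to the guest's slice. This is a hypothetical illustration of the control flow, not the project's actual protocol or message formats.

```python
# Hypothetical sketch of the service manager / broker / site authority flow.
# Names and structures are illustrative only.

class Broker:
    """Holds an inventory of units offered by supplier sites."""
    def __init__(self, inventory):
        self.inventory = inventory

    def request_ticket(self, guest, units):
        # Provisioning decision: promise units if the inventory allows.
        if self.inventory >= units:
            self.inventory -= units
            return {"guest": guest, "units": units}
        return None

class SiteAuthority:
    """Binds concrete resources to guest slices and enforces isolation."""
    def __init__(self):
        self.slices = {}  # guest -> assigned units

    def redeem(self, ticket):
        self.slices[ticket["guest"]] = ticket["units"]
        return self.slices[ticket["guest"]]

class ServiceManager:
    """Acquires resources on behalf of a guest."""
    def acquire(self, broker, authority, guest, units):
        ticket = broker.request_ticket(guest, units)
        return authority.redeem(ticket) if ticket else None

manager = ServiceManager()
granted = manager.acquire(Broker(inventory=8), SiteAuthority(), "cdn", 3)
```

The key design point the sketch mirrors is the split: the broker decides how much a guest receives, while the site authority decides which concrete resources back that promise.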

The service managers, brokers, and site authorities are actors in the utility. These actors are self-interested participants that can form various trust relationships, and enter into agreements or contracts. Actors are held accountable for contractual arrangements entered into with other actors. These actor roles and security relationships are defined in the SHARP framework, which addresses key challenges for networked utilities: preserving availability of resources when the actors controlling them fail, decentralized trust management, and enforcement of contracts.

Research Directions
  • Adaptable Network Services
    Cereus is designed to host network services (e.g., Web services, task scheduling services for grid computing, or other distributed environments) that span multiple sites. Hosted services must adapt to ever-changing and potentially unexpected conditions induced by load surges, competing resource consumers, or failures. Resource sharing expands both the opportunities for adaptation and the need for it. We are investigating architectural components within a utility that allow guest services to easily operate in, and adapt to, the shared hosting environment.

  • Policy-based Management using Brokers
    Brokering is fundamental as a basis for coordinating resource leasing across multiple sites, while leaving contributors the autonomy to control their own resources. By separating provisioning policy from assignment, we give brokers the power to control when, from where, and how much resource guests receive, while leaving assignment decisions to individual site authorities. We have implemented resource allocation policies ranging from simple (first-come, first-served) to complex (open ascending English auction) as plugin modules, allowing for extensibility and flexibility within the system. We are exploring the benefits brokers provide to guests and supplier sites, as well as how different design decisions in the broker architecture affect resource allocation outcomes; in particular, how broker allocation policies affect the stability and agility of resource control in the system.
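Two provisioning policies at the ends of that spectrum can be sketched as interchangeable plugins. The functions below are hypothetical illustrations, not the project's actual policy modules, and the "auction" shown is only the allocation outcome of an ascending auction (highest bidders served first), not the bidding protocol itself.

```python
# Hypothetical sketch of two broker provisioning policies as plugins.

def fcfs(requests, units_available):
    """Grant requests in arrival order until inventory runs out."""
    grants = {}
    for guest, units, _bid in requests:
        granted = min(units, units_available)
        if granted:
            grants[guest] = granted
            units_available -= granted
    return grants

def ascending_auction(requests, units_available):
    """Grant the highest bidders first (the outcome an English auction
    converges to), ignoring arrival order."""
    grants = {}
    for guest, units, _bid in sorted(requests, key=lambda r: -r[2]):
        granted = min(units, units_available)
        if granted:
            grants[guest] = granted
            units_available -= granted
    return grants

# The same request stream (guest, units wanted, bid) yields different
# outcomes under each policy when demand exceeds supply.
requests = [("grid-A", 3, 5), ("web-B", 2, 9), ("batch-C", 2, 1)]
fcfs_grants = fcfs(requests, units_available=4)
auction_grants = ascending_auction(requests, units_available=4)
```

Comparing the two grant maps for one contended inventory is exactly the kind of policy-outcome question (stability, agility, fairness) the broker architecture is designed to let us study.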

  • Market-based Resource Allocation
    The growing reach and scale of shared cyberinfrastructure systems, such as computational grids, application hosting services, and network testbeds, exposes the need for more advanced solutions to manage shared resources. Market-based control is beneficial both for regulating resource allocation and for generating incentives for users to contribute resources, making these systems self-sustaining. We are addressing the challenges of designing a virtual currency economy for network utilities used in a community setting.
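One currency design from this line of work is a self-recharging currency, in which credits spent on a lease return to the spender after a delay, bounding each participant's long-run spending rate. The sketch below is a loose, hypothetical model of that idea; the class, field names, and recharge rule are illustrative assumptions, not the published mechanism.

```python
# Hypothetical sketch of a self-recharging virtual currency: credits spent
# recharge to the spender's wallet after a fixed delay.

class Wallet:
    def __init__(self, credits, recharge_delay):
        self.credits = credits
        self.recharge_delay = recharge_delay
        self.pending = []  # list of (recharge_time, amount)

    def spend(self, amount, now):
        self._recharge(now)
        if amount > self.credits:
            return False
        self.credits -= amount
        self.pending.append((now + self.recharge_delay, amount))
        return True

    def _recharge(self, now):
        # Credits whose recharge time has arrived return to the wallet.
        ready = sum(a for (t, a) in self.pending if t <= now)
        self.pending = [(t, a) for (t, a) in self.pending if t > now]
        self.credits += ready

w = Wallet(credits=10, recharge_delay=5)
spent_now = w.spend(8, now=0)     # succeeds: 2 credits remain
spent_early = w.spend(8, now=3)   # fails: the 8 credits recharge at t=5
spent_later = w.spend(8, now=6)   # succeeds: credits have recharged
```

The appeal of this design is that no central bank needs to mint or tax currency: a participant's purchasing power is capped by its budget per recharge period.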

  • Accountable Resource Sharing
    Participants in network utilities may lie, cheat, or steal to maximize their returns from the system, creating challenges for dependable resource sharing. We believe that accountability is a sufficient disincentive for abuse by any participant in a community. The actions of an accountable actor are provable and non-repudiable, and may be legally binding. We are investigating techniques that hold actors accountable, so that if an actor misrepresents its resources or its currency, an auditor can detect the misbehavior and construct undeniable cryptographic proof that is verifiable by any third party.
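One common building block for this kind of auditing is a tamper-evident log, sketched below: each record commits to its predecessor by hash, so an auditor can detect altered history. This is an illustrative simplification only; genuine non-repudiation additionally requires public-key signatures over each record, which are omitted here, and the record format is invented for the example.

```python
import hashlib
import json

# Hypothetical sketch of a tamper-evident audit log using a hash chain.
# Real non-repudiation also needs per-actor digital signatures (omitted).

class AuditLog:
    def __init__(self):
        self.records = []

    def append(self, actor, action):
        prev = self.records[-1]["digest"] if self.records else ""
        body = json.dumps({"actor": actor, "action": action, "prev": prev},
                          sort_keys=True)
        digest = hashlib.sha256(body.encode()).hexdigest()
        self.records.append({"body": body, "digest": digest})

    def verify(self):
        # An auditor recomputes every digest and checks the chain links.
        prev = ""
        for rec in self.records:
            body = json.loads(rec["body"])
            if body["prev"] != prev:
                return False
            if hashlib.sha256(rec["body"].encode()).hexdigest() != rec["digest"]:
                return False
            prev = rec["digest"]
        return True

log = AuditLog()
log.append("broker-1", "issued ticket: 4 servers to grid-A")
log.append("site-duke", "redeemed ticket for grid-A")
ok_before = log.verify()

# Rewriting an earlier record breaks verification:
log.records[0]["body"] = log.records[0]["body"].replace("4 servers", "8 servers")
ok_after = log.verify()
```

A misbehaving actor that alters its own history is thus caught by any auditor holding the chain, which is the detection half of the accountability story.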

Publications and Presentations
  • "Toward a Doctrine of Containment: Grid Hosting with Adaptive Resource Control", Lavanya Ramakrishnan, Laura Grit, Anda Iamnitchi, David Irwin, Aydan Yumerefendi, and Jeff Chase. In the 19th Annual Supercomputing Conference (SC06), November 2006. [pdf]
  • "Virtual Machine Hosting for Networked Clusters: Building the Foundations for 'Autonomic' Orchestration", Laura Grit, David Irwin, Aydan Yumerefendi, and Jeff Chase. In the First International Workshop on Virtualization Technology in Distributed Computing (VTDC), November 2006. [pdf]
  • "Sharing Networked Resources with Brokered Leases", David Irwin, Jeff Chase, Laura Grit, Aydan Yumerefendi, David Becker, and Ken Yocum, USENIX Technical Conference, June 2006, Boston, Massachusetts [pdf].

  • "Self-Recharging Virtual Currency", David Irwin, Jeff Chase, Laura Grit, and Aydan Yumerefendi, Third Workshop on Economics of Peer-to-Peer Systems (P2PECON) at SIGCOMM, August 2005, Philadelphia, Pennsylvania [pdf]  [ps] (talk slides [ppt] [pdf]).

  • "Balancing Risk and Reward in a Market-based Task Service", David Irwin, Laura Grit, and Jeff Chase, Thirteenth IEEE Symposium on High Performance Distributed Computing (HPDC), June 2004, Honolulu, Hawaii [pdf]  [ps] (talk slides [ppt] [pdf]).

  • "SHARP: An Architecture for Secure Resource Peering", Yun Fu, Jeff Chase, Brent Chun, Stephen Schwab, and Amin Vahdat, Nineteenth ACM Symposium on Operating Systems Principles (SOSP), October 2003, Bolton Landing, New York [pdf]  [ps].

  • "Shirako: Virtual Machine Hosting for Federated Clusters", Laura Grit, Jeff Chase, David Irwin, and Aydan Yumerefendi, Refereed poster and demo at the Seventh USENIX Symposium on Operating Systems Design and Implementation (OSDI), November 2006, Seattle, Washington [pdf].

  • Cereus Poster  [ppt]  [pdf]

Funding
  • ANI 03-30658 - Dynamic Virtual Clusters part of the NSF Middleware Initiative
  • NSF CNS-0509408 - Virtual Playgrounds: Making Virtual Distributed Computing Real in collaboration with the Globus Virtual Workspaces Project
  • ANI-01-26231 - Request Routing for Network Services
  • Industry Partners: HP, IBM, and Network Appliance

Project Members