COMPSCI 296.6, Spring 2003

Experimental Methods in Computer Systems

Here are the conference proceedings assignments:
Laura: OSDI
Priya: Mobicom
Mariyam: FAST
Justin: USENIX
Jaidev: ISCA
Kashi: SOSP

  1. Tuesday, Jan. 14: Make a list of the METRICS considered in your proceedings and how well they support the claims and questions each paper is trying to address. Also note how easy it is to figure out what questions the experiments are trying to answer: do the authors come right out and say what they are trying to evaluate, or is the reader expected to dig it out from the results? This should NOT necessarily involve reading all the papers in detail. It might work best for each of you to prepare one or two PowerPoint slides summarizing what you learn from your survey of metrics in your chosen proceedings. Just put them in your public_html so we can get to them via the web. Pay close attention to the definition of each metric: three papers could all use "latency" as a metric and mean very different things by it.
  2. Tuesday, Jan. 21: Choose one paper and evaluate its experimental development from the point of view of Strong Inference, as discussed in class and in Platt's paper. Working in teams of 2 is OK. Prepare a short PowerPoint presentation describing what you found.
  3. Note the change in date. Thursday, Jan. 30: Survey the types of workloads -- especially the standard benchmarks -- used in your proceedings (10 papers).
  4. Tuesday, Feb. 4: Term project pre-proposal. The goal of this assignment is to (1) briefly articulate the vague idea behind your term project (brief means < 1 PowerPoint slide) and (2) sketch out "groping around" experiments that will provide (2a) the data you would use to justify that you have an interesting problem and (2b) the data you would need to understand and model your idea well enough to move toward the hypothesis stage. If you have already done this preliminary step, describe what you did.

    Recall what I mean by "groping around" experiments: they ask about the feasibility of an idea, try to identify where the "real" bottlenecks are, or determine basic parameter values (e.g., costs) for your model. These might be experiments you do but never expect to end up as "results" in a paper.

    Approx. 2 slides are expected. Groups of 2 are allowed/encouraged. Leveraging other course projects is also allowed.

  5. Tuesday, Feb. 25: Bring in one example of data presentation from your proceedings that is either notoriously bad or exceptionally good. The bad ones are more fun. Or if you find something just really different, please show it.
  6. Tuesday, Mar. 5: Project proposal covering (a) hypothesis statement, (b) workload decisions, (c) metrics to be used, and (d) method (simulation, emulation, or measurement of a prototype).
  7. Tuesday, Mar. 25: Survey your proceedings for just one paper in which factorial design has been used or, if none, one in which it could have been used effectively. Discuss the factors and levels, replications (if any), interactions among factors, and the contribution found for each (if such results are given).
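To make assignment 1's warning about "latency" concrete, here is a small sketch (in Python; the trace and numbers are made up for illustration) showing three common but very different quantities a paper might report under that single name:

```python
import math

def mean_latency(samples):
    """Average over all requests; a few slow requests can dominate it."""
    return sum(samples) / len(samples)

def median_latency(samples):
    """50th percentile; what a typical request sees, robust to outliers."""
    s = sorted(samples)
    n = len(s)
    return s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2

def p99_latency(samples):
    """99th percentile (nearest-rank); what the slowest 1% of requests see."""
    s = sorted(samples)
    return s[math.ceil(0.99 * len(s)) - 1]

# Hypothetical request-completion times in ms: 98 fast requests, 2 stragglers.
trace = [1.0] * 98 + [10.0, 100.0]
```

On this trace the three "latencies" are 2.08 ms, 1.0 ms, and 10.0 ms: the same data yields three very different headline numbers, which is why the definition matters as much as the metric's name.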
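For assignment 7, one standard way to quantify the "contributions found for each" factor is the sign-table analysis of a 2^2 design (two factors, two levels each, no replication), as covered in standard performance-analysis texts. A minimal sketch, with made-up responses and a function name of my own choosing:

```python
def factorial_2x2(y1, y2, y3, y4):
    """Sign-table analysis of an unreplicated 2^2 factorial design.

    Responses are measured at (A-,B-), (A+,B-), (A-,B+), (A+,B+).
    Returns the effects (qA, qB, qAB) and each term's share of the
    total variation.
    """
    qA  = (-y1 + y2 - y3 + y4) / 4   # main effect of factor A
    qB  = (-y1 - y2 + y3 + y4) / 4   # main effect of factor B
    qAB = ( y1 - y2 - y3 + y4) / 4   # A x B interaction
    sst = 4 * (qA**2 + qB**2 + qAB**2)   # total variation across the 4 runs
    shares = tuple(4 * q**2 / sst for q in (qA, qB, qAB))
    return (qA, qB, qAB), shares
```

With illustrative responses 15, 45, 25, 75, the effects come out to 20, 10, and 5, and factor A explains about 76% of the variation, factor B about 19%, and the interaction about 5% -- exactly the kind of breakdown to look for (or wish for) in the paper you choose.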


Last updated Jan 9, 2003