WODA 2013 will consist of three invited talks and four paper presentations, as well as opportunities for discussion among workshop attendees.

Date: 16th March 2013

10:25: WODA Welcome, Opening

10:30-12:00: Session 1, Reliability and Monitoring

10:30-11:30: Invited talk by Klaus Havelund, NASA/JPL Laboratory for Reliable Software: “Report from the Trace Analysis Tool Shop”

11:30-12:00: “An Extensible AOP Framework for Runtime Monitoring,” Gholamali Rahnavard, Amjad Nusayr, and Jonathan Cook (New Mexico State University)

12:00-1:30: Lunch break

1:30-3:00: Session 2, Debugging

1:30-2:00: “Concurrency Bug Detection and Avoidance Through Continuous Learning of Invariants Using Neural Networks in Hardware,” Mejbah Ul Alam, Rehana Begam, Sabidur Rahman, and Abdullah Muzahid (University of Texas at San Antonio)

2:00-3:00: Invited talk by Madanlal Musuvathi, Microsoft Research: “Sampling Using Code Breakpoints”

3:00-3:30: Break

3:30-5:30: Session 3, Software Reliability

3:30-4:30: Invited talk by Brian Demsky, UC Irvine: “Self-Stabilizing Java”

4:30-5:00: “Characterizing Active Data Sharing in Threaded Applications Using Shared Footprint,” Hao Luo, Xiaoya Xiang, and Chen Ding (Department of Computer Science, University of Rochester)

5:00-5:30: “SimuBoost: Scalable Parallelization of Functional System Simulation,” Marc Rittinghaus, Konrad Miller, Marius Hillenbrand, and Frank Bellosa (System Architecture Group, Karlsruhe Institute of Technology)

5:30: Close of WODA

Invited talk abstracts:

“Report from the Trace Analysis Tool Shop”

Dynamic program analysis can be defined as the act of analyzing single executions of programs, with the purpose of either detecting problems or achieving a better understanding of what a program does. A specific sub-field is the checking of traces against user-provided formal specifications, usually written in some form of temporal logic, state machines, regular expressions, grammars, etc. Over the years we have developed several DSLs (Domain-Specific Languages) for trace analysis, and this talk will present some of them. We shall present the issues one should take into consideration when designing a trace analysis DSL, typically a balance between efficiency, expressiveness, and ease of use. We discuss the distinction between external and internal DSLs, and why one may want to choose one over the other depending on the situation. We specifically outline three DSLs: LogScope, TraceContract, and LogFire, showing an evolution towards a rule-based approach built on well-established techniques from Artificial Intelligence. This rule-based approach is inspired by the earlier Ruler runtime verification framework, an external DSL, but is realized as an internal DSL in Scala with the RETE algorithm at its core. All three systems have been applied to actual NASA missions, including the JPL missions MSL and SMAP and the NASA Ames Research Center mission LADEE.
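To give a feel for the internal-DSL style of trace analysis the abstract describes, here is a hedged Python sketch (not the actual LogScope/TraceContract/LogFire API, which is Scala-based): event handlers act as rules that update a set of facts as trace events arrive, and any fact left at the end of the trace is a violation.

```python
class Monitor:
    """A monitor for the property "every opened resource is eventually
    closed, with no double opens or stray closes". The event handlers
    play the role of rules updating a fact set, loosely in the spirit
    of rule-based trace analysis (illustrative sketch only)."""

    def __init__(self):
        self.facts = set()    # resources currently open
        self.errors = []      # property violations seen so far

    def handle(self, event, arg):
        if event == "open":
            if arg in self.facts:
                self.errors.append(f"double open of {arg}")
            self.facts.add(arg)
        elif event == "close":
            if arg not in self.facts:
                self.errors.append(f"close without open: {arg}")
            self.facts.discard(arg)

    def end(self):
        # Any fact surviving to the end of the trace is an unclosed resource.
        self.errors.extend(f"never closed: {f}" for f in sorted(self.facts))
        return self.errors
```

Feeding the trace open(a), close(a), open(b) into `handle` and then calling `end()` would report that b was never closed.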

“Sampling Using Code Breakpoints”

A code breakpoint allows a program to be interrupted the next time a specific instruction is executed. In this talk, I will describe interesting ways to use breakpoints to sample instructions at runtime with very little overhead. Applications include near-zero-overhead data-race detection, program phase detection, code coverage measurement, and redundant-test elimination. I will also describe an infrastructure we have built to efficiently insert and process millions of breakpoints at a time.
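The code-coverage application can be sketched as follows (a hedged Python illustration using `sys.settrace`, not the actual infrastructure from the talk): each source line behaves like a one-shot breakpoint that is recorded on first hit. In the real technique the breakpoint is patched out of the binary once it fires, so already-covered code runs at full speed; this sketch only models the bookkeeping.

```python
import sys

def first_hit_coverage(func, *args):
    """Run `func` and record each source line the first time it
    executes -- the software analogue of planting a one-shot code
    breakpoint on every line and clearing it once it fires.
    (Illustrative sketch; `func` is any callable to profile.)"""
    hits = set()

    def local_trace(frame, event, arg):
        if event == "line":
            # A set makes repeated executions of a line collapse
            # into a single coverage record.
            hits.add((frame.f_code.co_name, frame.f_lineno))
        return local_trace

    def global_trace(frame, event, arg):
        # Called on each new frame; enable line tracing inside it.
        return local_trace

    sys.settrace(global_trace)
    try:
        func(*args)
    finally:
        sys.settrace(None)
    return hits
```

Running it on a small function returns the set of (function name, line number) pairs that executed at least once.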

“Self-Stabilizing Java”

Self-stabilizing programs automatically recover to a correct state from state corruption caused by software bugs and other sources. A number of applications are inherently self-stabilizing; such programs typically overwrite all non-constant data with new input data. We present SJava, a type system and static analyses that together check whether a program is self-stabilizing. We combine this with a code generation strategy that ensures that a program continues executing long enough to self-stabilize. Our experience using SJava indicates that (1) SJava annotations are easy to write once one understands a program and (2) SJava successfully checked that several benchmarks were self-stabilizing.
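The "overwrite all non-constant data with new input data" pattern the abstract mentions can be sketched as a control loop whose mutable values are recomputed entirely from fresh input every iteration, so a corrupted value influences at most one cycle. This is a hedged Python illustration with hypothetical names, not SJava itself.

```python
def control_step(reading, setpoint=10.0, gain=0.5):
    """One iteration of a self-stabilizing control loop: every
    non-constant value (error, command) is derived solely from the
    fresh `reading`, never from state carried across iterations,
    so corruption cannot persist past one cycle.
    (Illustrative sketch; names are hypothetical, not from SJava.)"""
    error = setpoint - reading   # recomputed from fresh input
    command = gain * error       # depends only on this cycle's data
    return command

def run(readings):
    # No mutable state survives between iterations: even if one
    # iteration's values were corrupted, the next recomputes them all.
    return [control_step(r) for r in readings]
```

For a setpoint of 10.0 and gain 0.5, a reading of 8.0 yields a command of 1.0; each output depends only on its own input.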