Automated Software Engineering
The lecture is an integrated course (“integrierte Lehrveranstaltung”) consisting of a 90-minute slot every week, of which we will use about 60 minutes for the actual lecture and 30 minutes for discussing exercises and their solutions. There will be optional exercises to be completed in groups of two students. (In exceptional cases we may also allow three-person groups.) Completing the exercises will allow you to obtain a bonus towards the grade of the final exam.
In contrast to the previous edition, this year the course will have a stronger emphasis on runtime monitoring, security monitoring, and refactoring techniques.
Where: Technische Universität Darmstadt, Altes Hauptgebäude; S1|03/175
When: Mondays 13:30-15:00, starting on 17.10.2011
Who: Dr. Eric Bodden
How much: 90 minutes per week (lecture + exercise discussion) – 3 Credit Points (2 Semesterwochenstunden – integrierte Lehrveranstaltungen)
TUCaN number: 20-00-0497
Time/Place of final exam: 29.02.12, 10:00-12:00h in S202/C205
Do you have final questions before the exam? Use the forum or make an appointment here.
There is a public forum (maintained by the Fachschaft) that you can use to discuss course-related issues. Please use this forum also for asking questions about your course work, so that the answers to those questions can benefit other course participants as well.
Exercise sheets and bonus system
There is a bonus system that allows you to improve your final grade by up to 1.0 points, depending on how well you succeed in the exercises.
- There will be 7 exercise sheets, all of which count towards the bonus.
- Every exercise sheet is graded pass/fail. When your group passes an exercise sheet, each group member earns a bonus of 0.2 (i.e., 1/5 of the maximal bonus).
- The maximal bonus is 1.0, i.e., once you have passed 5 exercise sheets you have reached the maximal bonus. Nevertheless, we recommend completing all sheets, because all of them give examples of exercises that may be relevant for the final exam.
- Every sheet will contain at least one exercise marked as optional. Exercises marked as optional do not need to be answered to pass an exercise sheet.
- If you have failed one exercise sheet (or failed to hand one in), you may compensate for the lost bonus by correctly completing two optional exercises from (one or more) other exercise sheets. (We will make sure that there are two optional exercises on the last sheet.)
- The final grade for this course will be computed by adding your bonus to the grade for the final exam and then rounding to the next grading level.
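To make the rules above concrete, here is a minimal sketch of one possible reading of the bonus computation. The grading levels and the direction of the improvement (lower grades are better in the German system) are assumptions based on the common German grading scheme, not official policy; when in doubt, the rules stated above are authoritative.

```python
# Hypothetical sketch of the bonus computation described above.
# The grading levels and rounding behavior are assumptions based on
# the common German grading scheme, not official course policy.

GRADE_LEVELS = [1.0, 1.3, 1.7, 2.0, 2.3, 2.7, 3.0, 3.3, 3.7, 4.0]

def bonus(sheets_passed, optional_solved=0):
    """Each passed sheet earns 0.2; two correctly solved optional
    exercises compensate one failed/missed sheet. Capped at 1.0."""
    effective = sheets_passed + optional_solved // 2
    return min(1.0, 0.2 * effective)

def final_grade(exam_grade, sheets_passed, optional_solved=0):
    """Improve the exam grade by the bonus (lower is better), then
    snap to the nearest grading level."""
    improved = exam_grade - bonus(sheets_passed, optional_solved)
    return min(GRADE_LEVELS, key=lambda lvl: abs(lvl - improved))

print(bonus(5))             # maximal bonus: 1.0
print(final_grade(2.3, 5))  # 2.3 improved by 1.0 -> 1.3
```

For example, a group that fails one sheet but solves two optional exercises elsewhere (`bonus(4, 2)`) still reaches the full bonus of 1.0.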
Outline and Preliminary Schedule
In this lecture we will cover the ideas and principles behind important tool-based software engineering approaches:
- What goal are the tools trying to achieve?
- How do they fit into the software-development process?
- How do they achieve their goal?
- To what extent do they work on real programs and why?
We will start off with an introductory lecture covering version control systems, in particular Subversion, which you will be required to use for submitting your course work (if you are participating; course work is optional). On October 24th, we will have a special guest lecture by Martin Robillard, Associate Professor at McGill University and recipient of the prestigious von Humboldt Fellowship.
The regular lectures will start off by first covering the initial phase of the software design process: requirements engineering and elicitation. We will discuss how tools like Alloy help you define requirements and test these definitions for completeness or the absence of contradictions. Korat is a tool that lets you automatically generate complex test inputs based on an Alloy specification. EvoSuite from Saarland University goes beyond the simple generation of test inputs: it generates whole test cases, along with post-conditions (assertions) that the test case is guaranteed to fulfill.
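The core idea behind Korat-style test-input generation can be illustrated with a toy sketch: enumerate all candidate structures within small bounds and keep exactly those that satisfy the class invariant (often called `repOK`). This is only an illustration of the principle; the invariant and the `SortedList` example are made up here, and real Korat additionally prunes the search space by observing which fields `repOK` actually reads.

```python
from itertools import product

# Toy illustration of bounded-exhaustive test-input generation in the
# spirit of Korat: enumerate all candidates within small bounds and
# filter by the class invariant (repOK). Real Korat prunes the search
# using the fields repOK accesses; this sketch omits that optimization.

def rep_ok(candidate):
    """Invariant of a hypothetical SortedList: strictly ascending."""
    return all(a < b for a, b in zip(candidate, candidate[1:]))

def generate_inputs(max_size, domain):
    """Yield every list up to max_size over the domain that satisfies
    rep_ok, i.e., every structurally valid test input."""
    for size in range(max_size + 1):
        for candidate in product(domain, repeat=size):
            if rep_ok(candidate):
                yield list(candidate)

inputs = list(generate_inputs(2, [0, 1, 2]))
# yields [], [0], [1], [2], [0, 1], [0, 2], [1, 2]
```

Note how the invariant alone determines which of the 13 candidate lists survive; this is what lets such tools generate complex, valid inputs from a declarative specification.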
Test-case generation can only succeed, however, if it has a means to tell faulty test runs apart from “good” ones. This is easy if a fault causes the program to throw an exception. In many cases, however, faults lead to unintended behavior without any exception. A so-called “oracle” can detect such unintended behavior by monitoring the program under test and signaling a faulty test run. Oracles are often implemented through program instrumentation, in the form of a so-called “runtime monitor”.

We will discuss MOPBox, a novel open-source framework that allows programmers to define such oracles quite easily. Tracematches are a way to generate oracles (or runtime monitors) automatically from declarative specifications based on regular expressions. JavaMOP is an open framework for generating such monitors from different kinds of specification languages. We will discuss how JavaMOP achieves this genericity while at the same time allowing for highly efficient monitoring code.

The term “runtime monitor” is very common in the research field of software engineering. In security research, one more often speaks of so-called Inline Reference Monitors. We will discuss how those monitors differ from general runtime monitors, and how they can be used to secure software by design.

Defining runtime monitors can often be cumbersome. We hence discuss Stateful Breakpoints, an integration of runtime monitoring into the Eclipse debugger. Stateful Breakpoints allow programmers to express complex conditions that, in combination, allow the debugger to run up to an error situation within a single run.
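To give a flavor of what such an oracle looks like, here is a minimal sketch of a finite-state runtime monitor for the classic iterator property from the runtime-monitoring literature: every call to `next()` must be preceded by a call to `hasNext()`. This is only an illustration of the principle; it does not reflect the actual API of MOPBox, JavaMOP, or tracematches.

```python
# Minimal sketch of a finite-state runtime monitor (an oracle as
# discussed above). Not the API of MOPBox or JavaMOP; just the idea:
# a small state machine consumes program events and flags violations.

class IteratorMonitor:
    """Checks: every next() must be preceded by a hasNext()."""

    def __init__(self):
        self.may_call_next = False
        self.violations = 0

    def on_event(self, event):
        if event == "hasNext":
            self.may_call_next = True    # hasNext() was called
        elif event == "next":
            if not self.may_call_next:
                self.violations += 1     # next() without prior hasNext()
            self.may_call_next = False   # must check again before the next next()

monitor = IteratorMonitor()
for e in ["hasNext", "next", "next", "hasNext", "next"]:
    monitor.on_event(e)
print(monitor.violations)  # the second "next" violates the property -> 1
```

In practice, tools like JavaMOP generate such state machines automatically from a regular-expression or temporal-logic specification and weave the event hooks into the program via instrumentation.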
The following schedule is still subject to minor changes.
|1||17.10.2011||Introduction, Revision Control||SVN, Mercurial||Ex. Sheet 1|
|2||24.10.2011||Guest Lecture by M. Robillard: Thinking Inside the Box: Stories of the ideas behind the IDEs||Martin Robillard||References etc.|
|3||31.10.2011||Reliable System Requirements and Models||Alloy||Ex. Sheet 2|
|4||14.11.2011||Guest Lecture by Gordon Fraser: Automatic generation of unit tests||EvoSuite|
|5||21.11.2011||Korat: Automated testing-input generation for Java programs||Korat||Ex. Sheet 3|
|6||28.11.2011||Defining testing oracles through runtime monitors||MOPBox||Ex. Sheet 4|
|7||5.12.2011||Efficient Runtime Monitoring||JavaMOP, Tracematches|
|8||12.12.2011||Monitoring languages and Security Monitoring||JavaMOP, Tracematches|
|9||19.12.2011||Static control-flow and data-flow analysis of Java programs||Soot||Ex. Sheet 5|
|10||9.1.2012||Partially evaluating runtime monitors at compile time||Clara||Ex. Sheet 6|
|12||23.1.2012||Static program analysis in the presence of reflection and custom class loaders||TamiFlex|
|13||30.1.2012||Safe, constraint-based refactoring of reflective Java programs||RefaFlex||Ex. Sheet 7|
Slides and additional material
You can download the lecture slides and notes here, as well as the exercise sheets here. Both folders are password protected. You will be able to access them once you have emailed Eric your group information. There’s also a list of additional material here.
Topic areas covered by the course and material:
- automated analysis & definition of requirements
- computer-aided design of software
- implementation with intelligent IDEs
- automated testing and verification
- computer-aided bug finding
- aiding program understanding by mining latent specifications from existing software
- advanced debugging & profiling techniques
- computer-aided refactoring and documentation