# Concrete CESK for Android's Dalvik

# Introduction

At the heart of every Android application is Dalvik byte code: it is what everything gets compiled to, and it is what runs on the Dalvik VM. In order to do static program analysis for Android, you need some way to interpret that byte code. That is where the CESK machine shines. The CESK machine, developed by Matthias Felleisen, provides a simple and powerful architecture for modeling the semantics of functional, object-oriented, and imperative languages, covering features like mutation, recursion, exceptions, continuations, garbage collection, and multi-threading. It is a state machine that takes its name from the four components of each state: Control, Environment, Store, and Kontinuation.

In this article, I implement a concrete CESK machine to interpret a dynamically typed, object-oriented language abstracted from Dalvik byte code. Every byte code and its semantics have been transformed into this language.

# CESK

Being a state machine, CESK has a notion of jumping or stepping from one state to another. In terms of sets, a program ($p \in \mathit{Prog}$) is run by stepping through machine states ($\varsigma \in \Sigma$) under a partial transition function from state to state ($step : \Sigma \rightharpoonup \Sigma$).

We define the state-space as $\varsigma \in \Sigma = \mathsf{Stmt}^{*} \times \mathit{FP} \times \mathit{Store} \times \mathit{Kont}$, which allows us to encode a state as a simple struct:

```
(struct state {stmts fp stor kont})
```

If you are familiar with pushdown automata, then a CESK machine has many striking similarities. You can think of each state as the CES portion, and the $\kappa$ component as what gets pushed onto and popped from the stack between state transitions.
(For a project in my computational theory course, some classmates and I implemented a non-deterministic pushdown automaton in Python that outputs to the command line and to DOT format; feel free to play around with it: PyDA.)

## C.ontrol: A sequence of statements

The Control component of a CESK machine is a control string. In the lambda calculus, the control string would be an expression. For Dalvik, we should think of this component as a sequence of statements. This sequence of statements indicates which part of the program this state is in.

## E.nvironment: Frame pointers

The Environment component of a CESK machine is a data structure that maps variables to addresses. In the Dalvik CESK machine, we use simple frame pointers as the addresses.

### Addresses

Registers are offsets from frame pointers and map to local variables, so we compute a location (frame offset) by pairing the frame pointer with the name of a Dalvik register. Objects and their fields are structurally equivalent to frame addresses: an object address pairs an object pointer with a field name. So the set of all addresses includes both Object and Frame addresses:

$a \in \mathit{Addr} = \mathit{FrameAddr} + \mathit{ObjectAddr}$

## S.tore: The Heap

The Store component of a CESK machine is a data structure that maps addresses to values. In the Dalvik case, we map frame addresses (a frame pointer paired with a register name) to values.

## K.ontinuations: The program stack (continuations)

The Kontinuation component of a CESK machine is essentially a program stack. Within Dalvik, you find exception handlers and procedure calls, and, as with all continuation-based machines, the halt continuation to signify program termination. Each continuation is placed on a stack, where the top-most matching continuation is found and executed: for exceptions this is the matching handler, and for assignment it is the next assignment continuation. Halt is handled as a termination continuation with no context encoded into the component. In the case of exceptions, Dalvik defines the type of exception (a class name), the branch label where execution should go, and the next continuation.
$handle(className, label, \kappa)$

Any other invocation affecting the program stack is an assignment continuation, where the return context for a procedure call is encoded with the register waiting for the result (name), the statements after the call, the frame pointer from before the call, and the next continuation.

$assign(name, \vec{s}, fp, \kappa)$

# Running the CESK machine

Since the CESK machine is a state machine, we have a single partial transition function (from state to state), called step, that is run until it is told to terminate (when we encounter the halt continuation). We only need an initial state ($\varsigma_0$) and then iterate until we hit halt. So, we need four things:

• inject — creates the initial state, with an empty environment and store
• step — the transition function for each type of state transition
• lookup — a way to look up addresses in the store
• run — runs the CESK state machine

# Dalvik Byte-code Grammar

For the purposes of this article, I will be using the core grammar defined by Matt Might's Java CESK article. He defined this by looking at all of Dalvik's byte codes and ensuring that their semantics are represented in a straightforward way. He divided the language into two classes of terms: statements and expressions.

```
program ::= class-def ...

class-def ::= class class-name extends class-name
              { field-def ... method-def ... }

field-def ::= var field-name ;

method-def ::= def method-name($name, ..., $name) { body }

body ::= stmt ...

stmt ::= label label:
      |  skip ;
      |  goto label ;
      |  if aexp goto label ;
      |  $name := aexp | cexp ;
      |  return aexp ;
      |  aexp.field-name := aexp ;
      |  push-handler class-name label ;
      |  pop-handler ;
      |  throw aexp ;
      |  move-exception $name ;

cexp ::= new class-name
      |  invoke aexp.method-name(aexp,...,aexp)
      |  invoke super.method-name(aexp,...,aexp)

aexp ::= this | true | false | null | void
      |  $name
      |  int
      |  atomic-op(aexp, ..., aexp)
      |  instanceof(aexp, class-name)
      |  aexp.field-name
```
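Before diving into the transitions, the four pieces listed under "Running the CESK machine" can be sketched in Python. The article's own code is Racket; the tuple state encoding (stmts, fp, store, kont), the statement format, and all helper names here are illustrative assumptions:

```python
# A minimal sketch: a state is (stmts, fp, store, kont);
# the store is a dict keyed by frame address (fp, name).

def inject(stmts):
    """Create the initial state: empty store, halt continuation."""
    return (stmts, ("fp", 0), {}, ("halt",))

def lookup(store, fp, name):
    """Look up a register value by its frame address (fp, name)."""
    return store[(fp, name)]

def step(state):
    """One transition; only straight-line register assignment is shown."""
    stmts, fp, store, kont = state
    op, name, value = stmts[0]          # e.g. ("assign", "$v0", 42)
    assert op == "assign"
    new_store = dict(store)
    new_store[(fp, name)] = value
    return (stmts[1:], fp, new_store, kont)

def run(state):
    """Iterate step until no statements remain under halt."""
    while state[0]:
        state = step(state)
    return state
```

For example, `run(inject([("assign", "$v0", 42)]))` terminates with `$v0` bound to 42 in the store.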


# Transitions: Evaluation and stepping

## Continuations

Remembering from earlier in the article, there are three types of continuations: assignment, handler, and halt. Each of the components of a Dalvik program will use these generalized definitions.

We can define an applyKont function to aid in the overall machine design; it will be used when we encounter returns and exceptions:

$applyKont : Kont \times Value \times Store \rightharpoonup \Sigma$

Applying an assign continuation binds the value to the waiting register and resumes after the call, while applying a handle continuation simply skips past the handler:

$applyKont(assign(name, \vec{s}, fp, \kappa), val, \sigma) = (\vec{s}, fp, \sigma[(fp, name) \mapsto val], \kappa)$

$applyKont(handle(className, label, \kappa), val, \sigma) = applyKont(\kappa, val, \sigma)$

With these definitions, we can translate applyKont to code as apply/κ.
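A Python sketch of apply/κ, assuming continuations are tuples tagged "halt", "assign", or "handle" (the article's code is Racket; these encodings are assumptions):

```python
def apply_kont(kont, val, store):
    """Apply a continuation to a returned value."""
    tag = kont[0]
    if tag == "halt":
        return ("final", val, store)               # program termination
    if tag == "assign":
        _, name, stmts, fp, next_kont = kont
        new_store = dict(store)
        new_store[(fp, name)] = val                # bind result to the waiting register
        return (stmts, fp, new_store, next_kont)
    if tag == "handle":
        return apply_kont(kont[3], val, store)     # a normal return skips handlers
    raise ValueError(f"unknown continuation: {kont}")
```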

## Assignment: Atomic statements/expressions

Atomic expressions are expressions whose evaluation must terminate and can never cause an exception or a side effect.

Atomic statements assign an atomic value to a variable; this involves evaluating the expression, calculating the frame address, and updating the store.

### The Atomic Expression Evaluator

To evaluate an atomic expression, we use the atomic expression evaluator, a partial function from an atomic expression, a frame pointer, and a store to a value:

$\mathcal{A} : \mathsf{AExp} \times \mathit{FP} \times \mathit{Store} \rightharpoonup \mathit{Value}$

There are a few key kinds of atomic expressions, each evaluated as follows.

Atomic values such as integers, booleans, void, and null can be returned immediately.

Register lookups simply require knowing the frame pointer offset in order to look up the atomic value. Since we have encoded this as the name of the expression paired with the frame pointer, the frame address in the store is (fp, name). There are two special registers, $this and $ex, but they use the same semantics as other register lookups.

Accessing an object field is similar to a register lookup: you get the field offset from the object pointer, looking up (op, field) in the store.
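A sketch of the atomic expression evaluator under assumed encodings: literals are Python ints/bools, registers are strings beginning with `$`, field accesses are `("field", aexp, name)` tuples, and object values are `("object", class-name, op)`:

```python
def atomic_eval(aexp, fp, store):
    """Evaluate an atomic expression to a value; guaranteed to terminate."""
    if isinstance(aexp, (bool, int)) or aexp in ("null", "void"):
        return aexp                                  # immediate atomic values
    if isinstance(aexp, str) and aexp.startswith("$"):
        return store[(fp, aexp)]                     # register: frame address (fp, name)
    if isinstance(aexp, tuple) and aexp[0] == "field":
        _, obj_exp, field = aexp
        obj = atomic_eval(obj_exp, fp, store)        # ("object", class-name, op)
        return store[(obj[2], field)]                # field: object address (op, field)
    raise ValueError(f"not atomic: {aexp}")
```

The special registers $this and $ex need no extra cases; they hit the same register branch.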

### The Atomic Assignment Statement

As described earlier, we need to evaluate the atomic expression and record the resulting variable-value pair in the store. We can define this operation as:

$step((\texttt{\$name := aexp}) : \vec{s}, fp, \sigma, \kappa) = (\vec{s}, fp, \sigma', \kappa)$

Here $\sigma'$ is the store updated with the new variable-to-value mapping:

$\sigma' = \sigma[(fp, name) \mapsto val]$, where $val$ is the atomic evaluation of $aexp$.

I have a couple of articles regarding variable substitution and implementation if you are interested in getting a larger view of what is going on here.

This creates a new state, where the store is now updated with a mapping of the variable var to the value val.
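The assignment transition itself can be sketched as follows (the inline atomic_eval is a stand-in handling only literals and registers; the tuple encodings are assumptions):

```python
def atomic_eval(aexp, fp, store):
    # Stand-in: registers (strings) are store lookups, literals pass through.
    return store[(fp, aexp)] if isinstance(aexp, str) else aexp

def step_assign(state):
    """($name := aexp) : advance to the next statement with the store
    updated at the frame address (fp, name)."""
    stmts, fp, store, kont = state
    _, name, aexp = stmts[0]                 # ("assign", "$name", aexp)
    new_store = dict(store)
    new_store[(fp, name)] = atomic_eval(aexp, fp, store)
    return (stmts[1:], fp, new_store, kont)
```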

## Object Assignment and Creation

There is another type of assignment, similar to the atomic assignment statement: assigning a brand-new object, e.g. Object o = new Object(). Consequently, the definition of object creation and assignment is similar to that of atomic assignment:

$step((\texttt{\$name := new className}) : \vec{s}, fp, \sigma, \kappa) = (\vec{s}, fp, \sigma', \kappa)$

Here $\sigma'$ is the store updated with the new object assignment mapping, which carries a never-before-used object pointer op':

$\sigma' = \sigma[(fp, name) \mapsto (\mathbf{object}\; className\; op')]$

This creates a new state, where the store is updated with a mapping of the variable name to the value (object classname (gensym)). The (gensym) call generates a guaranteed-to-be-globally-unique value, which serves as op'.
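A sketch of object creation, with a counter standing in for (gensym); the statement encoding is an assumption:

```python
import itertools

_fresh = itertools.count()

def gensym(prefix="op"):
    """Stand-in for (gensym): a globally unique object pointer."""
    return f"{prefix}{next(_fresh)}"

def step_new(state):
    """($name := new class-name) binds a fresh object value in the store."""
    stmts, fp, store, kont = state
    _, name, class_name = stmts[0]           # ("new", "$name", "class-name")
    new_store = dict(store)
    new_store[(fp, name)] = ("object", class_name, gensym())
    return (stmts[1:], fp, new_store, kont)
```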

## nop, label, line

There are three types of statements that cause no change in state: nop, label, and line. We can define nop as:

$step(\mathbf{nop} : \vec{s}, fp, \sigma, \kappa) = (\vec{s}, fp, \sigma, \kappa)$

This says that when we see a nop, we simply move on to the next statement in the list and run it.

The only difference between nop and label is that label carries an identifier (the label) with it:

$step(\mathbf{label}\;\mathit{l} : \vec{s}, fp, \sigma, \kappa) = (\vec{s}, fp, \sigma, \kappa)$

line is defined the same as label; it is a side effect of the s-expression generation and not part of the actual grammar.

We can add a few new items to our transition function’s match statement:

goto is much like nop, except that we must look up the label to find the next statement sequence:

$step(\mathbf{goto}\;\mathit{label} : \vec{s}, fp, \sigma, \kappa) = (S(\mathit{label}), fp, \sigma, \kappa)$
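These cases can be sketched as one helper over the current statement, with the label map S as a plain dict (encodings assumed as before):

```python
def step_control(state, S):
    """nop, label, and line fall through; goto jumps through the label map S."""
    stmts, fp, store, kont = state
    stmt = stmts[0]
    if stmt[0] in ("nop", "label", "line"):
        return (stmts[1:], fp, store, kont)     # state unchanged, next statement
    if stmt[0] == "goto":
        return (S[stmt[1]], fp, store, kont)    # S(label) = target statements
    raise ValueError(f"not a control statement: {stmt}")
```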

## S function for Label Lookups

Labels are identifiers for statements, used when jumping from one statement to another. We will need to store these labels for later lookup, so let's define a label map along with a mechanism for looking up labels. We define this mapping function as $\mathit{S} : \mathit{Label}\to \mathsf{Stmt}^{*}$.

In code, what we are trying to do is find the label and execute the next statement, so we need a label store, a way to update it, and a way to look up the next statement sequence by label.

But this means we also need to update the label store when we see a label, so an update to the earlier match construct is in order.
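One way to build the label store is a single pre-pass over a statement list; the article updates the map as labels are encountered during stepping, so this eager variant is an assumption:

```python
def build_label_map(stmts):
    """Scan once, mapping each label to the statements from that label onward."""
    S = {}
    for i, stmt in enumerate(stmts):
        if stmt[0] == "label":
            S[stmt[1]] = stmts[i:]      # S(label) = suffix starting at the label
    return S
```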

## if-goto Statement

The if-goto statement is similar to a jump, the only difference being that the conditional expression must be evaluated before you can determine which branch to execute. We will use the atomic-eval constructed earlier to determine the truthiness of the expression, then either perform the goto or simply move to the next statement.
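A sketch of the if-goto transition, with an inlined stand-in for atomic-eval and an assumed set of falsy values:

```python
def step_if_goto(state, S):
    """(if aexp goto label): evaluate the condition, then branch or fall through."""
    stmts, fp, store, kont = state
    _, aexp, label = stmts[0]                     # ("if-goto", aexp, label)
    val = store[(fp, aexp)] if isinstance(aexp, str) else aexp  # atomic-eval stand-in
    if val not in (False, 0, "null"):             # assumed falsy convention
        return (S[label], fp, store, kont)
    return (stmts[1:], fp, store, kont)
```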

## Invoking Methods

Dalvik supports inheritance, and because invoking a method may require traversing super classes, method invocation is necessarily the most complicated operation to model. Methods involve all four components of the CESK machine: Control, Environment, Store, and Kontinuation.

It is useful to start from a simplified situation: assume that the method has already been looked up. We can then define an applyMethod helper function to aid in applying the method to its arguments: $applyMethod : Method \times Name \times Value \times AExp^{*} \times FP \times Store \times Kont \rightharpoonup \Sigma$

Further assume a method is defined as m = def methodName($v_1,...,$v_n) {body}

### applyMethod Helper

applyMethod needs to do the following:

• Lookup the values of the arguments
• Bind those values to the formal parameters of the method
• Create a new frame pointer
• Create a new continuation
• Ensure the next sequence of statements is included in the new continuation
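Those steps can be sketched as follows. Argument values are assumed to be already evaluated, and the fresh-frame-pointer counter is a stand-in for a real allocator:

```python
import itertools

_fresh_fp = itertools.count(1)

def apply_method(method, name, obj_val, arg_vals, next_stmts, fp, store, kont):
    """Bind $this and the formals under a fresh frame pointer, then run the body."""
    formals, body = method                    # m = (($v1 ... $vn), body)
    fp_new = ("fp", next(_fresh_fp))          # a new frame pointer
    new_store = dict(store)
    new_store[(fp_new, "$this")] = obj_val
    for formal, val in zip(formals, arg_vals):
        new_store[(fp_new, formal)] = val     # bind values to the formals
    # The new continuation remembers the waiting register, the caller's
    # remaining statements, and the caller's frame pointer.
    kont_new = ("assign", name, next_stmts, fp, kont)
    return (body, fp_new, new_store, kont_new)
```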

### Invoking a method

With apply/method now doing much of the heavy lifting, invoking a method reduces to a simple method lookup. lookup is a partial function that traverses the inheritance chain until it finds the matching method. First, let's define invoke; we'll need the methodName for our lookup function.

In order to run the applyMethod function, we need two values: val and m. We can get val with a store lookup.

Getting m is where we finally need to define lookup, since it is what finds the correct method. We need two values to process the lookup: className and methodName. We already have methodName from our invoke function, and we can get className by extracting it from val.

Now that we have both className and methodName, we can define a partial function, lookupMethod, that traverses the class hierarchy to find the correct method to invoke.

In code, this sequence of functions chains together the store lookup, the class-name extraction, and the method lookup.
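A sketch of that chain; lookup_method and apply_method are passed in as parameters here, since the article defines method lookup later (all encodings remain assumptions):

```python
def step_invoke(state, lookup_method, apply_method):
    """($name := invoke obj.m(args)): chain the store lookup, the class
    extraction, and the method lookup, then hand off to apply_method."""
    stmts, fp, store, kont = state
    _, name, obj_reg, method_name, arg_regs = stmts[0]
    val = store[(fp, obj_reg)]                # val comes from a store lookup
    class_name = val[1]                       # className from ("object", cn, op)
    m = lookup_method(class_name, method_name)
    args = [store[(fp, a)] for a in arg_regs]
    return apply_method(m, name, val, args, stmts[1:], fp, store, kont)
```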

We can’t implement lookup/method quite yet, however, since I still haven’t defined and implemented classes. But, for now, let’s move on.

## Return Statement

A return is an application of the current continuation to the return value. We already have the apply/κ function for application, so we just need to define what to pass as the val parameter. In the case of a return value, we need to evaluate it to ensure we get the atomic instance:

$step(\mathbf{return}\;aexp : \vec{s}, fp, \sigma, \kappa) = applyKont(\kappa, val, \sigma)$, where $val$ is the atomic evaluation of $aexp$.

In code, this is simply a call to apply/κ with the evaluated return value.
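A sketch of the return transition, with a minimal apply/κ covering just the assign and halt cases (tuple encodings assumed):

```python
def apply_kont(kont, val, store):
    """Minimal apply/κ: enough for a return into an assign continuation."""
    if kont[0] == "halt":
        return ("final", val, store)
    _, name, stmts, fp, next_kont = kont      # ("assign", name, stmts, fp, κ)
    new_store = dict(store)
    new_store[(fp, name)] = val
    return (stmts, fp, new_store, next_kont)

def step_return(state):
    """(return aexp): evaluate atomically, then apply the current continuation."""
    stmts, fp, store, kont = state
    _, aexp = stmts[0]
    val = store[(fp, aexp)] if isinstance(aexp, str) else aexp
    return apply_kont(kont, val, store)
```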

## Exceptions

To handle exceptions, we have several cases to implement: applying a continuation when it is an exception handler, pushing and popping exception handlers, throwing exceptions, and capturing exceptions.

### Exception Handler Continuation

This is the simplest case to handle: we simply skip over the current continuation, and we have already implemented this in the apply/κ function, so there is no need to do anything more. This happens when the current continuation is an exception handler but no exception was thrown.

### Pushing and Popping Exception Handlers

We will put and get exception handlers in two ways, pushing onto and popping from the program stack with push-handler and pop-handler:

$step(\mathbf{push\text{-}handler}\;className\;label : \vec{s}, fp, \sigma, \kappa) = (\vec{s}, fp, \sigma, handle(className, label, \kappa))$

$step(\mathbf{pop\text{-}handler} : \vec{s}, fp, \sigma, handle(className, label, \kappa)) = (\vec{s}, fp, \sigma, \kappa)$
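A sketch of both transitions under the assumed tuple encoding of continuations:

```python
def step_push_handler(state):
    """(push-handler class-name label): push a handle continuation."""
    stmts, fp, store, kont = state
    _, class_name, label = stmts[0]
    return (stmts[1:], fp, store, ("handle", class_name, label, kont))

def step_pop_handler(state):
    """(pop-handler): discard the topmost handle continuation."""
    stmts, fp, store, kont = state
    assert kont[0] == "handle", "pop-handler with no handler on the stack"
    return (stmts[1:], fp, store, kont[3])
```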

### Throwing Exceptions

In order to throw an exception, we must search the stack for a matching exception handler. Implementing a helper function, handle, to do this for us will help. First, let's define handle as:

$handle : Value \times FP \times Kont \times Store \rightharpoonup \Sigma$

Here is how we traverse the stack, putting the last thrown exception into the register $ex, as is protocol. If the thrown value's class, className, is a subclass of the handler's className', we jump to the handler's label and record the exception:

$handle(val, fp, handle(className', label, \kappa), \sigma) = (S(label), fp, \sigma[(fp, \texttt{\$ex}) \mapsto val], \kappa)$

If not, we keep searching down the stack:

$handle(val, fp, handle(className', label, \kappa), \sigma) = handle(val, fp, \kappa, \sigma)$

A throw also skips over non-handler continuations:

$handle(val, fp, assign(name, \vec{s}, fp', \kappa), \sigma) = handle(val, fp, \kappa, \sigma)$

You might have noticed that the subclass check is a call to isinstanceof; I will define this function later when defining classes. With the handle helper function, we can now define the throw statement:

$step(\mathbf{throw}\;aexp : \vec{s}, fp, \sigma, \kappa) = handle(val, fp, \kappa, \sigma)$, where $val$ is the atomic evaluation of $aexp$.

Then we can add this case to the transition function.

### Capturing Exceptions

Since we store the last thrown exception in the $ex register, it can be examined to determine which exception was caught. We then use this to go to the label that handles this execution branch.
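A sketch of handle and throw, with the label map S and the isinstanceof check passed in as parameters (encodings assumed as before):

```python
def handle(val, fp, kont, store, S, is_instance_of):
    """Walk the stack for a handler matching the thrown value's class."""
    while kont != ("halt",):
        if kont[0] == "handle":
            _, class_name, label, next_kont = kont
            if is_instance_of(val[1], class_name):   # val = ("object", cn, op)
                new_store = dict(store)
                new_store[(fp, "$ex")] = val         # record the thrown exception
                return (S[label], fp, new_store, next_kont)
            kont = next_kont
        else:
            kont = kont[-1]      # skip non-handler (assign) continuations
    raise RuntimeError("uncaught exception")

def step_throw(state, S, is_instance_of):
    """(throw aexp): evaluate the exception and search for its handler."""
    stmts, fp, store, kont = state
    _, aexp = stmts[0]
    val = store[(fp, aexp)] if isinstance(aexp, str) else aexp
    return handle(val, fp, kont, store, S, is_instance_of)
```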

We can define this capturing with a moveException statement:

$step(\mathbf{move\text{-}exception}\;\texttt{\$name} : \vec{s}, fp, \sigma, \kappa) = (\vec{s}, fp, \sigma', \kappa)$

The store is updated by simply copying the caught exception out of the register $ex and into the waiting register:

$\sigma' = \sigma[(fp, name) \mapsto \sigma(fp, \texttt{\$ex})]$

We then add this case to our transition function.
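A sketch of the move-exception transition under the same assumed encodings:

```python
def step_move_exception(state):
    """(move-exception $name): copy the caught exception out of $ex."""
    stmts, fp, store, kont = state
    _, name = stmts[0]
    new_store = dict(store)
    new_store[(fp, name)] = store[(fp, "$ex")]
    return (stmts[1:], fp, new_store, kont)
```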

## Class Definitions

With calls made to the still-unimplemented lookup/method and isinstanceof functions, it's time to implement them! But first, we need to define and implement classes.

Classes are defined by their name, a potential super class, 0 or more fields (instance variables), and 0 or more methods.

What we need is a way to represent classes that lets us keep track of super classes, fields, and methods. A classDef, then, in code is:

```
(struct class {super fields methods})
```

where super is the name of the super class, fields is a list of field names, and methods is a mapping from method names to methods: $methods : MethodName \to Method$.

Since methods also have multiple properties, we will also define a method as: (struct method {formals body}).

We will also need a way to store and look up classes. Since class names are unique, we can define a simple table of classes and a lookupClass function: $lookupClass : ClassName \to Class$.

Now we can represent classes in code.
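A sketch mirroring the two structs with a flat table; the tuple/dict encoding is an assumption standing in for the article's Racket structs:

```python
# Mirrors (struct class {super fields methods}) and (struct method {formals body}).
class_table = {}

def def_class(name, super_name, fields, methods):
    """Register a class; methods maps method-name -> (formals, body)."""
    class_table[name] = (super_name, fields, methods)

def lookup_class(name):
    """Class names are unique, so a flat table lookup suffices."""
    return class_table[name]
```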

### Method Lookup

Now that we have defined classes, method lookup is a recursive function that traverses the class hierarchy until it reaches a matching method: if the class defines the method, return it; otherwise, recur on the super class.

Since super is just a string, all we need to do is check for string equality. By definition, a class without a super class has void as its super.
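A sketch of the recursive lookup, where a method is an assumed (formals, body) pair and the class table maps name to (super, fields, methods):

```python
def lookup_method(class_table, class_name, method_name):
    """Recur up the hierarchy; a class with no super class has super "void"."""
    if class_name == "void":
        raise KeyError(f"no such method: {method_name}")   # partial function
    super_name, _fields, methods = class_table[class_name]
    if method_name in methods:
        return methods[method_name]
    return lookup_method(class_table, super_name, method_name)
```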

### Is Instance Of

Finding out whether a class is an instance of another class is simple: check whether the class name appears anywhere in its class hierarchy, returning true if it does and false otherwise.

As with method lookup, super is just a string, so we only need to check string equality, and super is void when there is no higher class.
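A sketch of isinstanceof over the same assumed class-table encoding:

```python
def is_instance_of(class_table, class_name, target):
    """True iff target appears anywhere in class_name's chain of supers."""
    while class_name != "void":
        if class_name == target:                  # string equality on class names
            return True
        class_name = class_table[class_name][0]   # move to the super class
    return False
```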