Algorithms and correctness

From Algowiki

Algorithmic problem

An algorithmic problem is described by:

  1. a set of feasible inputs (a.k.a. instances);
  2. for each input a set of outputs (a.k.a. feasible solutions or solutions, for short), which may be empty;
  3. optionally, an objective function, which assigns a real number to each feasible solution (the quality of the solution). If an objective function is specified, it is also specified whether the objective function is to be minimized or to be maximized.

Remark: Typically, sets such as the set of all feasible inputs to a problem and the set of all feasible outputs to an input are given by a genus-differentia definition, which is a kind of intensional definition. More specifically, an abstract data type is given along with the restrictions that must be fulfilled by feasible inputs and outputs, respectively. Sometimes, the abstract data type is called the set of all inputs / outputs, and the members of the respective set that fulfill the restrictions are then the feasible inputs / outputs.
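The three-part description above can be illustrated with a toy instance of the knapsack problem. The following Python sketch (function names are chosen for illustration and are not part of the article) spells out feasible inputs, feasible solutions, and an objective function to be maximized:

```python
from itertools import combinations

def is_feasible_input(weights, values, capacity):
    """Part 1: a feasible input is a pair of equal-length weight/value
    lists together with a non-negative capacity."""
    return len(weights) == len(values) and capacity >= 0

def feasible_solutions(weights, capacity):
    """Part 2: a feasible solution is any subset of item indices whose
    total weight does not exceed the capacity (may be just the empty set)."""
    n = len(weights)
    for r in range(n + 1):
        for subset in combinations(range(n), r):
            if sum(weights[i] for i in subset) <= capacity:
                yield subset

def objective(values, subset):
    """Part 3: the objective function assigns each feasible solution its
    total value; here it is to be maximized."""
    return sum(values[i] for i in subset)

weights, values, capacity = [2, 3, 4], [3, 4, 5], 5
best = max(feasible_solutions(weights, capacity),
           key=lambda s: objective(values, s))
print(best, objective(values, best))  # (0, 1) 7
```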

Instructions, operations and subroutines

Imperative programming languages offer possibilities to specify instructions (a.k.a. statements). An instruction specifies a sequence of (machine) operations; executing the instruction on a machine means that the machine performs this sequence of operations. Instructions may be bundled as subroutines (a.k.a. procedures, functions, methods).

(Source) programs and processes

A program is a sequence of instructions in some programming language. A program written in a higher programming language, which is to be compiled or interpreted, is often called a source program. A process means the execution of a program on a real or virtual machine.

Termination

  1. A loop has at least one break condition; this is a part of the (source) program of the loop. Termination of a loop refers to the process and means that the process evaluates the break condition and this evaluation yields true.
  2. Termination of a subroutine means that the process runs into an unconditional return-statement or, alternatively, runs into a conditional return-statement and this condition yields true (common high-level languages only provide unconditional return-statements; a return statement at the end of a void-subroutine may be implicit in many languages).
  3. Termination of a recursion means that every branch of the recursion has a finite depth, that is, eventually reaches a call in which the recursive subroutine is not invoked any further.
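Euclid's algorithm illustrates points 1 and 3. In the following Python sketch, the loop version terminates because its break condition eventually yields true, and the recursive version terminates because every branch reaches a non-recursive call after finitely many descents:

```python
def gcd_loop(a, b):
    # Loop termination: the break condition "b == 0" is eventually
    # fulfilled, because b strictly decreases (a % b < b) per iteration.
    while b != 0:
        a, b = b, a % b
    return a

def gcd_rec(a, b):
    # Recursion termination: the branch of recursive calls reaches the
    # case b == 0, which does not call gcd_rec any further.
    if b == 0:
        return a
    return gcd_rec(b, a % b)

print(gcd_loop(48, 18), gcd_rec(48, 18))  # 6 6
```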

Algorithm

  1. An algorithm is associated with an algorithmic problem.
  2. An algorithm is an abstract description of a process and can be formulated (a.k.a. implemented) as a subroutine. This subroutine is required to compute some feasible output for any given input of the associated algorithmic problem.
  3. If an objective function is given, the objective function value of the generated solution is a criterion for the quality of an algorithm. More specifically:
    1. in case of minimization: a low objective function value is favored;
    2. in case of maximization: a high objective function value is favored.

Iterative and recursive algorithms

  1. Basically, a non-trivial algorithm is a loop or recursion plus some preprocessing (a.k.a. initialization) and/or some postprocessing.
  2. If an algorithm consists of two or more loops/recursions that are strictly disjoint parts of an implementation of the algorithm (and thus executed strictly after each other), it may be viewed as two or more algorithms that are executed after each other. Therefore, without loss of generality, an algorithm may be viewed as one loop or one recursion plus some pre/postprocessing.
  3. An iteration of a loop may contain another loop or recursion. Analogously, a recursive call may contain another loop or recursion. We say that a loop/recursion inside another loop/recursion is nested or the inner loop/recursion. Correspondingly, the nesting loop/recursion is the outer one.
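The inner/outer terminology of point 3 can be seen in selection sort, sketched here in Python: each iteration of the outer (nesting) loop contains a complete run of the inner (nested) loop.

```python
def selection_sort(seq):
    n = len(seq)
    for i in range(n):               # outer (nesting) loop
        m = i
        for j in range(i + 1, n):    # inner (nested) loop: find the
            if seq[j] < seq[m]:      # minimum of the unsorted suffix
                m = j
        seq[i], seq[m] = seq[m], seq[i]
    return seq

print(selection_sort([3, 1, 2]))  # [1, 2, 3]
```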

Remarks:

  1. Clearly, a loop may be transformed into a recursion and vice versa. So, every algorithm may be formulated either as a loop or as a recursion. However, in most cases, one of these two options looks simpler and "more natural" than the other one.
  2. Formulating an algorithm as a loop might be favorable in many cases because a loop allows more control than a recursion. More specifically, a loop may be implemented as an iterator whose method for going one step forward implements the execution of one iteration of the loop. Such an implementation allows one to terminate the loop early, to suspend execution and resume execution later on, and to execute some additional instructions between two iterations (e.g. for visualization or for testing purposes). The crucial point is that, for all of these purposes, the code of the algorithm need not be modified (the source need not even be available).
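In Python, the iterator idea of remark 2 is naturally expressed with a generator: each value it yields is the state after one iteration of the loop. The sketch below (using Euclid's loop as a stand-in for an arbitrary algorithm) lets the caller step, inspect, suspend, or abandon the loop without modifying the algorithm's code:

```python
def gcd_steps(a, b):
    # One yield = one iteration of Euclid's loop; the caller controls
    # when (and whether) the next iteration is executed.
    while b != 0:
        a, b = b, a % b
        yield a, b

for a, b in gcd_steps(48, 18):
    # Additional instructions between two iterations, e.g. for
    # visualization or testing, live outside the algorithm's code.
    print("state:", a, b)
# The caller could also break out early, or keep the generator around
# and resume it later with next().
```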

Correctness

An algorithm is correct if three conditions are fulfilled for every feasible input:

  1. All instructions are well-defined. For example, an instruction that divides by zero, exceeds the range of a numerical type (overflow), accesses an array component outside the array's index range, or accesses an attribute or method of a void / null pointer is ill-defined.
  2. Each loop and recursion terminates, that is, the break condition is fulfilled after a finite number of iterative / recursive steps. In case of a recursion with more than one recursive call inside an execution of the recursive routine, this means that every branch of the recursion tree must reach the break condition after a finite number of recursive descents.
  3. If the given input admits no feasible solution, this information is delivered by the algorithm. Otherwise, the algorithm delivers a feasible output.
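A linear search illustrates conditions 1 and 3 on a small scale, sketched here in Python: every array access stays within range, and the absence of a feasible solution is reported explicitly rather than failing silently.

```python
def find_index(seq, target):
    for i, x in enumerate(seq):  # condition 1: every access is in range
        if x == target:
            return i             # a feasible output
    return None                  # condition 3: "no feasible solution"
                                 # is delivered explicitly

print(find_index([5, 3, 7], 7), find_index([5, 3, 7], 4))  # 2 None
```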

Invariant

  1. The invariant of a loop consists of all assertions that are true immediately before the first iteration, immediately after the last iteration, and between two iterations. These assertions are called the invariant assertions of this loop.
  2. The invariant of a recursion consists of all assertions that are fulfilled immediately before or after each recursive call (see below). These assertions are called the invariant assertions of this recursion.

Remarks:

  1. Typically, an invariant assertion is parameterized, for example:
    1. in case of an iteration: usually by the number of iterations performed so far;
    2. in case of a recursion: the recursion depth or a dynamically changing value in the auxiliary data (e.g. the length of a sequence).
  2. If an algorithm only accesses its input and its own local data, the invariant assertions are assertions about its input and these local data and nothing else.
  3. The invariant of an algorithm has a specific function in the correctness proof for the algorithm. Typically, in a documentation, only those invariant assertions are stated that are relevant for this purpose.
  4. The invariant is also essential for testing purposes: a white-box test of a loop/recursion should definitely check maintenance of all relevant invariant assertions.
  5. Last but not least, the invariant is the very "essence" of the algorithmic idea; deeply understanding the algorithm amounts to understanding the invariant.
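A minimal example of an invariant assertion, parameterized by the number of iterations performed so far (remark 1), is the running-sum loop below. The asserts play the role of the white-box test from remark 4; in production code they would be dropped.

```python
def running_sum(values):
    total = 0
    for k, v in enumerate(values):
        # Invariant assertion, parameterized by k (iterations so far):
        # total equals the sum of the first k values. It holds before
        # the first iteration (k == 0, empty sum == 0) and between any
        # two iterations.
        assert total == sum(values[:k])
        total += v
    assert total == sum(values)  # invariant after the last iteration
    return total

print(running_sum([2, 5, 1]))  # 8
```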

Variant

  1. The variant of a loop consists of all differences between the state immediately before an iteration and the state immediately after that iteration (typically, but not exclusively, the values of integral loop variables).
  2. The variant of a recursion consists of all differences between the input of a recursive call and the inputs of all recursive calls directly invoked within it (typically, but not exclusively, some integral measure of the size of the input).

Remarks:

  1. If an algorithm only accesses its input and its own local data, the variant states changes of the contents of its input and these local data and nothing else.
  2. The variant of an algorithm has a specific function in the correctness proof for the algorithm. Typically, in a documentation, only those variant assertions are stated that are relevant for this purpose.
  3. The variant is also essential for testing purposes: a white-box test of a loop/recursion should definitely check all relevant changes.
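In binary search, a natural variant is the width of the search interval: it strictly decreases in every iteration, which is exactly the change a white-box test (remark 3) should check. A Python sketch with that check made explicit:

```python
def binary_search(seq, target):
    """Return an index of target in the sorted sequence seq, or None."""
    lo, hi = 0, len(seq)
    while lo < hi:
        old_width = hi - lo          # variant: the interval width ...
        mid = (lo + hi) // 2
        if seq[mid] < target:
            lo = mid + 1
        else:
            hi = mid
        assert hi - lo < old_width   # ... strictly decreases, so the
                                     # loop terminates
    return lo if lo < len(seq) and seq[lo] == target else None

print(binary_search([1, 3, 5, 7], 5))  # 2
```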

Correctness proofs

  1. Assuming the algorithm is correct:
    1. The variant implies that the break condition will be fulfilled,
      1. in case of a loop: after a finite number of iterations;
      2. in case of a recursion: after a finite number of recursive calls in each branch of the recursive tree.
    2. Correctness of the output follows from what the invariant says about the state immediately after the last iteration.
  2. Proving the invariant amounts to an induction on the number of iterations performed so far / the recursion parameter over which the variant of the recursion is defined.
    1. The invariant is the induction hypothesis.
    2. Proving the induction basis amounts to proving that the preprocessing (initialization) establishes the invariant.
    3. Proving the induction step amounts to proving that an iteration / a recursive call maintains the invariant.
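The proof scheme above can be mirrored directly in code. In the factorial loop below, the comments label the induction basis, the induction hypothesis (invariant), the induction step, and the variant; the asserts are illustrative, not part of the algorithm.

```python
def factorial(n):
    result, i = 1, 0    # preprocessing (initialization)
    assert result == 1  # induction basis: invariant "result == i!"
                        # holds for i == 0 (empty product is 1)
    while i < n:        # break condition; variant: n - i strictly
                        # decreases, so the loop terminates
        i += 1
        result *= i
        # induction step: the iteration maintains "result == i!"
    # After the last iteration, the invariant at i == n yields the
    # correctness of the output: result == n!.
    return result

print(factorial(5))  # 120
```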

Caveat: Quite frequently, validity of the invariant for the state immediately before the first iteration is based on a separate definition of the "null case". For example, the empty sum (sum over no summands) is 0 by definition, the empty product is 1, the factorial of 0 is 1, the 0-th power of a number is 1, etc. To be on the safe side in such a case, the particular induction step from 0 to 1 should be checked separately.

Remarks:

  1. In a nutshell, the variant proves termination, and the invariant proves that the output is correct.
  2. Typically, well-definedness of all operations is not considered explicitly in correctness proofs. The reason is that well-definedness is non-obvious in rare cases only.
  3. In many cases, correctness of the output follows immediately from what the invariant says about the state immediately after the last iteration; in other cases, additional arguments are necessary to prove correctness of the output.
  4. The variant is also essential to estimate, asymptotically, the number of iterations / the recursion depth.

Induction in case of a recursion

In principle, there are two ways to define the induction, which are in a sense mutually opposite. Typically, only one of the two is viable:

  1. The depth of a recursive call is the induction variable; the original call to this subroutine has to ensure the induction basis; the induction step has to be ensured by every descent in the recursion tree. Example: binary search.
  2. The induction variable is some parameter of the input. The invariant simply says that the output of a recursive call is correct. In a descent in the recursion tree, that parameter must strictly decrease. The recursion anchor must ensure the induction basis. Examples: mergesort and quicksort; in these examples, the size of the sequence is the induction variable.
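Mergesort, mentioned as an example of the second way, can be sketched in Python so that the induction structure is visible: the induction variable is the length of the sequence, the recursion anchor handles lengths 0 and 1, and every recursive call receives a strictly shorter sequence.

```python
def merge_sort(seq):
    # Recursion anchor (induction basis): sequences of length <= 1
    # are already sorted.
    if len(seq) <= 1:
        return list(seq)
    mid = len(seq) // 2
    # Induction step: both calls receive strictly shorter sequences,
    # so by the induction hypothesis their outputs are sorted.
    left, right = merge_sort(seq[:mid]), merge_sort(seq[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([4, 1, 3, 2]))  # [1, 2, 3, 4]
```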