Algorithms and correctness

== Algorithmic problem ==

An algorithmic problem is described by:
# a set of feasible '''inputs''' (a.k.a. '''instances''');
# for each input a set of '''outputs''' (a.k.a. '''feasible solutions''' or '''solutions''', for short), which may be empty;
# optionally, an '''objective function''', which assigns a real number to each feasible solution (the ''quality'' of the solution). If an objective function is specified, it is also specified whether the objective function is to be ''minimized'' or to be ''maximized''.


'''Remark:'''
Typically, sets such as the set of all feasible inputs to a problem and the set of all feasible outputs to an input are given by a [https://en.wikipedia.org/wiki/Genus%E2%80%93differentia_definition genus-differentia definition], which is an example of an [https://en.wikipedia.org/wiki/Intensional_definition intensional definition]. More specifically, an [https://en.wikipedia.org/wiki/Abstract_data_type abstract data type] is given along with the restrictions that must be fulfilled by feasible inputs and outputs, respectively. Sometimes, the abstract data type is called the set of all inputs / outputs, and the members of the respective set that fulfill the restrictions are then the ''feasible'' inputs / outputs.
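
To make these notions concrete, the following is a minimal Python sketch (the concrete problem, the names and the objective function are purely illustrative and not part of this article): sorting, phrased as an algorithmic problem with feasible inputs, feasible outputs, and an optional objective function.

<syntaxhighlight lang="python">
from itertools import permutations

# Feasible inputs (instances): finite sequences of integers.
def is_feasible_input(x):
    return all(isinstance(v, int) for v in x)

# Feasible outputs (solutions) for an input x: sorted rearrangements of x.
def is_feasible_output(x, y):
    return sorted(x) == sorted(y) and all(y[i] <= y[i + 1] for i in range(len(y) - 1))

# Optional objective function: assigns a real number (the quality) to each
# feasible solution; here, purely for illustration, the number of positions
# at which the solution differs from the input, to be minimized.
def objective(x, y):
    return sum(1 for a, b in zip(x, y) if a != b)

x = [3, 1, 2]
solutions = [list(p) for p in permutations(x) if is_feasible_output(x, list(p))]
print(solutions)                    # [[1, 2, 3]]
print(objective(x, solutions[0]))   # 3
</syntaxhighlight>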
 
== Instructions, operations and subroutines ==
 
[http://en.wikipedia.org/wiki/Imperative_programming Imperative programming] languages offer possibilities to specify '''instructions''' (a.k.a. '''statements'''). An instruction specifies a sequence of '''(machine) operations'''; executing the instruction on a machine means that the machine performs this sequence of operations. Instructions may be bundled as '''subroutines''' (a.k.a. '''procedures''', '''functions''', '''methods''').
 
== (Source) programs and processes ==
 
A '''program''' is a sequence of instructions in some programming language. A program written in a higher-level programming language, which is to be compiled or interpreted, is often called a '''source program'''. A '''process''' means the execution of a program on a real or virtual machine.
 
== Termination ==


# A loop has at least one '''break condition'''; this is a part of the (source) program of the loop. '''Termination''' of a loop refers to the process and means that the process evaluates the break condition and this evaluation yields '''true'''.
# '''Termination''' of a subroutine means that the process runs into an unconditional return-statement or, alternatively, runs into a conditional return-statement and this condition yields '''true''' (common high-level languages only provide unconditional return-statements; a return statement at the end of a void-subroutine may be implicit in many languages).
# '''Termination''' of a recursion means that every branch of the recursion has a finite depth, that is, runs into a recursive call that does not call the recursive subroutine any further.
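
As an illustration of these notions (a sketch, not part of the article), a loop with one explicit break condition and a recursion in which every branch has finite depth:

<syntaxhighlight lang="python">
def first_power_of_two_at_least(n):
    """Loop with one break condition: p >= n."""
    p = 1
    while True:
        if p >= n:   # break condition; termination means this evaluation eventually yields true
            break
        p *= 2
    return p

class Node:
    def __init__(self, left=None, right=None):
        self.left, self.right = left, right

def depth(node):
    """Recursion that terminates: every branch runs into a call
    (node is None) that does not call the subroutine any further."""
    if node is None:
        return 0
    return 1 + max(depth(node.left), depth(node.right))

print(first_power_of_two_at_least(10))   # 16
print(depth(Node(Node(), None)))         # 2
</syntaxhighlight>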


== Algorithm ==


# An algorithm is associated with an [[#Algorithmic problem|algorithmic problem]].
# An algorithm is an abstract description of a process and can be formulated (a.k.a. '''implemented''') as a subroutine. This subroutine is required to compute some feasible output for any given input of the associated algorithmic problem.
# If an objective function is given, the objective function value of the generated solution is a criterion for the quality of an algorithm. More specifically:
## in case of ''minimization'': a low objective function value is favored;
## in case of ''maximization'': a high objective function value is favored.

== Iterative and recursive algorithms ==


# Basically, a non-trivial algorithm is a loop or recursion plus some '''preprocessing''' (a.k.a. '''initialization''') and/or some '''postprocessing'''.
# If an algorithm consists of two or more loops/recursions that are strictly disjoint parts of an implementation of the algorithm (and thus executed strictly after each other), it may be viewed as two or more algorithms that are executed after each other. Therefore, without loss of generality, an algorithm may be viewed as ''one'' loop or ''one'' recursion plus some pre/postprocessing.
# An iteration of a loop may contain another loop or recursion. Analogously, a recursive call may contain another loop or recursion. We say that a loop/recursion inside another loop/recursion is '''nested''' or the '''inner''' loop/recursion. Correspondingly, the '''nesting''' loop/recursion is the '''outer''' one.


'''Remarks:'''
# Clearly, a loop may be transformed into a recursion and vice versa. So, every algorithm may be formulated either as a loop or as a recursion. However, in most cases, one of these two options looks simpler and "more natural" than the other one.
# Formulating an algorithm as a loop might be favorable in many cases because a loop allows more control than a recursion. More specifically, a loop may be implemented as an [http://en.wikipedia.org/wiki/Iterator iterator] whose method for going one step forward implements the execution of one iteration of the loop. Such an implementation allows one to terminate the loop early, to suspend execution and resume execution later on, and to execute some additional instructions between two iterations (e.g. for visualization or for testing purposes). The crucial point is that, for all of these purposes, the code of the algorithm need not be modified (the source need not even be available).
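
A minimal Python sketch of this remark (Euclid's algorithm serves only as an example): the loop of an algorithm packaged as a generator-based iterator, so that the caller can suspend and resume execution, interleave additional instructions, or terminate early without modifying the algorithm's code.

<syntaxhighlight lang="python">
def gcd_steps(a, b):
    """Euclid's algorithm as an iterator: each step performs one iteration
    of the loop and yields the current state (a, b)."""
    while b != 0:          # break condition of the loop
        a, b = b, a % b
        yield a, b         # execution is suspended here; the caller decides when to resume

it = gcd_steps(252, 105)
for a, b in it:
    print("state:", a, b)  # additional instructions between iterations (e.g. visualization)
    if b == 0:             # the caller could just as well terminate the loop early
        print("gcd:", a)
</syntaxhighlight>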


== Correctness ==


An algorithm is correct if three conditions are fulfilled for every feasible input:
# All instructions are '''well-defined'''. For example, instructions that divide by zero, exceed the range of a numerical type (''overflow''), access an array component outside the array's index range, or access an attribute or method of a '''void''' / '''null''' pointer are ill-defined.
# Each loop and recursion '''terminates''', that is, the break condition is fulfilled after a finite number of iterative / recursive steps. In case of a recursion with more than one recursive call inside an execution of the recursive routine, this means that every branch of the recursion tree must reach the break condition after a finite number of recursive descents.
# If the given input admits no feasible solution, this information is delivered by the algorithm. Otherwise, the algorithm delivers a feasible output.
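
A small sketch of the third condition (illustrative, not from the article): an algorithm searching for a value in a sequence that explicitly reports when the given input admits no feasible solution.

<syntaxhighlight lang="python">
def index_of(seq, value):
    """Feasible outputs for (seq, value): positions i with seq[i] == value.
    If no such position exists, this information is delivered (None)."""
    for i, v in enumerate(seq):
        if v == value:
            return i       # a feasible output
    return None            # the input admits no feasible solution

print(index_of([4, 7, 9], 7))   # 1
print(index_of([4, 7, 9], 5))   # None
</syntaxhighlight>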


== Invariant ==


# The '''invariant''' of a loop consists of all assertions that are true immediately before the first iteration, immediately after the last iteration, and between two iterations. These assertions are called the '''invariant assertions''' of this loop.
# The '''invariant''' of a recursion consists of all assertions that are fulfilled immediately before '''or''' after each recursive call (see [[#Induction in case of a recursion|below]]). These assertions are called the '''invariant assertions''' of this recursion.


'''Remarks:'''
# Typically, an invariant assertion is ''parameterized'', for example:
## in case of an iteration: usually by the number of iterations performed so far;
## in case of a recursion: usually by the recursion depth or by a dynamically changing value in the auxiliary data (e.g. the length of a sequence).
# If an algorithm only accesses its input and its own local data, the invariant assertions are assertions about its input and these local data and nothing else.
# The invariant of an algorithm has a specific function in the [[#Correctness proofs|correctness proof]] for the algorithm. Typically, in a documentation, only those invariant assertions are stated that are relevant for this purpose.  
# The invariant is also essential for testing purposes: a [http://en.wikipedia.org/wiki/White-box_testing white-box test] of a loop/recursion should definitely check maintenance of all relevant invariant assertions.
# Last but not least, the invariant is the very "essence" of the algorithmic idea; deeply understanding the algorithm amounts to understanding the invariant.
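
For illustration (a sketch in which plain assert statements stand in for a white-box test), the invariant of a summation loop, parameterized by the number of iterations performed so far:

<syntaxhighlight lang="python">
def total_sum(values):
    """Invariant assertion, parameterized by the number k of iterations
    performed so far: total == sum of the first k elements of values."""
    total = 0
    k = 0
    assert total == sum(values[:k])       # holds immediately before the first iteration
    while k < len(values):                # break condition: k == len(values)
        total += values[k]
        k += 1
        assert total == sum(values[:k])   # maintained after every iteration
    return total                          # the invariant after the last iteration yields correctness

print(total_sum([2, 5, 1]))   # 8
</syntaxhighlight>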


== Variant ==


# The '''variant''' of a loop consists of all differences between the state immediately before an iteration and the state immediately after that iteration (typically, but not exclusively, the values of integral loop variables).
# The '''variant''' of a recursion consists of all differences between the input of a recursive call <math>X</math> and the inputs of all recursive calls that are directly called in <math>X</math> (typically, but not exclusively, some integral measure of the size of the input).


'''Remarks:'''
# If an algorithm only accesses its input and its own local data, the variant describes changes to the contents of its input and these local data and nothing else.
# The variant of an algorithm has a specific function in the [[#Correctness proofs|correctness proof]] for the algorithm. Typically, in a documentation, only those variant assertions are stated that are relevant for this purpose.  
# The variant is also essential for testing purposes: a [http://en.wikipedia.org/wiki/White-box_testing white-box test] of a loop/recursion should definitely check all relevant changes.
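
A sketch of a variant (illustrative): in the following loop, the value of n is a non-negative integer that strictly decreases in every iteration, which is exactly the kind of integral measure meant above; the assert stands in for a white-box test of the relevant change.

<syntaxhighlight lang="python">
def bit_length(n):
    """Variant: n strictly decreases in every iteration (n // 2 < n for n > 0),
    so the break condition n == 0 is reached after finitely many iterations."""
    count = 0
    while n > 0:
        old_n = n
        n //= 2
        count += 1
        assert 0 <= n < old_n   # the relevant change between two states
    return count

print(bit_length(13))   # 4, since 13 = 0b1101
</syntaxhighlight>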


== Correctness proofs ==


# Assuming the algorithm is correct:
## The variant implies that the break condition will be fulfilled,
### in case of a loop: after a finite number of iterations;
### in case of a recursion: after a finite number of recursive calls in each branch of the recursion tree.
## Correctness of the output follows from what the invariant says about the state immediately after the last iteration.
# Proving the invariant amounts to an induction on the number of iterations performed so far / the recursion parameter over which the variant of the recursion is defined.
## The invariant is the induction hypothesis.
## Proving the induction basis amounts to proving that the preprocessing (initialization) establishes the invariant.
## Proving the induction step amounts to proving that an iteration / a recursive call maintains the invariant.
 
'''Caveat:'''
Quite frequently, validity of the invariant for the state immediately before the first iteration is based on a separate definition of the "null case". For example, the empty sum (sum over no summands) is 0 by definition, the empty product is 1, the factorial of 0 is 1, the 0-th power of a number is 1, etc. To be on the safe side in such a case, the particular induction step from 0 to 1 should be checked separately.
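
As an illustration of this proof scheme and of the caveat (a sketch, not part of the article): the factorial computed by a loop whose initialization relies on the null case "the empty product is 1".

<syntaxhighlight lang="python">
def factorial(n):
    """Invariant before the k-th iteration: result == k!.
    Induction basis: the preprocessing sets result = 1, which is k! for k = 0
    by the null case "the empty product is 1".
    Induction step: if result == k! holds, the iteration turns it into (k+1)!;
    the particular step from k = 0 to k = 1 is worth checking separately."""
    result = 1                # preprocessing establishes the invariant
    for k in range(n):
        result *= k + 1       # maintains the invariant
    return result             # invariant after the last iteration: result == n!

print(factorial(5))   # 120
</syntaxhighlight>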


'''Remarks:'''
# In a nutshell, the variant proves termination, and the invariant proves that the output is correct.
# Typically, well-definedness of all operations is not considered explicitly in correctness proofs. The reason is that well-definedness is non-obvious only in rare cases.
# In many cases, correctness of the output follows immediately from what the invariant says about the state immediately after the last iteration; in other cases, additional arguments are necessary to prove correctness of the output.
# The variant is also essential to estimate, asymptotically, the number of iterations / the recursion depth.
== Induction in case of a recursion ==
In principle, there are two ways to define the induction, which are in a sense mutually opposite. Typically, only one of these two ways is viable:
# The depth of a recursive call is the induction variable; the original call to this subroutine has to ensure the induction basis; the induction step has to be ensured by every descent in the recursion tree. Example: [[Binary search|binary search]].
# The induction variable is some parameter of the input. The invariant simply says that the output of a recursive call is correct. In a descent in the recursion tree, that parameter must strictly decrease. The recursion anchor must ensure the induction basis. Examples: [[Mergesort|mergesort]] and [[Quicksort|quicksort]]; in these examples, the size of the sequence is the induction variable.
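
A sketch of the second way (illustrative code; the linked mergesort page is the proper reference): in mergesort, the induction variable is the length of the sequence, which strictly decreases in every descent of the recursion tree, and the recursion anchor handles sequences of length at most one.

<syntaxhighlight lang="python">
def mergesort(seq):
    """Induction on len(seq): the recursion anchor (len(seq) <= 1) ensures the
    induction basis; in every descent the length strictly decreases; the
    invariant simply says that each recursive call returns a sorted
    rearrangement of its input."""
    if len(seq) <= 1:              # recursion anchor / induction basis
        return list(seq)
    mid = len(seq) // 2
    left = mergesort(seq[:mid])    # strictly shorter input
    right = mergesort(seq[mid:])   # strictly shorter input
    merged, i, j = [], 0, 0        # induction step: merge two sorted sequences
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

print(mergesort([3, 1, 4, 1, 5]))   # [1, 1, 3, 4, 5]
</syntaxhighlight>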
