<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://wiki.algo.informatik.tu-darmstadt.de/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Weihe</id>
	<title>Algowiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://wiki.algo.informatik.tu-darmstadt.de/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Weihe"/>
	<link rel="alternate" type="text/html" href="https://wiki.algo.informatik.tu-darmstadt.de/Special:Contributions/Weihe"/>
	<updated>2026-04-29T17:35:32Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.38.4</generator>
	<entry>
		<id>https://wiki.algo.informatik.tu-darmstadt.de/index.php?title=Max-Flow_Problems&amp;diff=3886</id>
		<title>Max-Flow Problems</title>
		<link rel="alternate" type="text/html" href="https://wiki.algo.informatik.tu-darmstadt.de/index.php?title=Max-Flow_Problems&amp;diff=3886"/>
		<updated>2018-03-31T08:58:19Z</updated>

		<summary type="html">&lt;p&gt;Weihe: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Basic definitions ==&lt;br /&gt;
&lt;br /&gt;
# [[Basic graph definitions]]&lt;br /&gt;
# [[Basic flow definitions]]&lt;br /&gt;
&lt;br /&gt;
== Standard version ==&lt;br /&gt;
&lt;br /&gt;
'''Input:'''&lt;br /&gt;
# A directed graph &amp;lt;math&amp;gt;G=(V,A)&amp;lt;/math&amp;gt;.&lt;br /&gt;
# A '''source node''' &amp;lt;math&amp;gt;s\in V&amp;lt;/math&amp;gt; and a '''target (a.k.a. sink) node''' &amp;lt;math&amp;gt;t\in V&amp;lt;/math&amp;gt;.&lt;br /&gt;
# A nonnegative '''upper bound (a.k.a. capacity)''' &amp;lt;math&amp;gt;u(a)&amp;lt;/math&amp;gt; for each arc &amp;lt;math&amp;gt;a\in A&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
'''Output:'''&lt;br /&gt;
A [[Basic flow definitions#Feasible flow|feasible &amp;lt;math&amp;gt;(s,t)&amp;lt;/math&amp;gt;-flow]] that has maximum [[Basic flow definitions#Flow value|flow value]] among all feasible &amp;lt;math&amp;gt;(s,t)&amp;lt;/math&amp;gt;-flows.&lt;br /&gt;
&lt;br /&gt;
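As an illustration of the standard version above, here is a minimal augmenting-path solver in the style of [[Edmonds-Karp]]. This is only a sketch; the function name, the data layout (arc list with capacities, residual capacities in a dictionary), and all variable names are illustrative assumptions, not part of this wiki.

```python
from collections import deque

def max_flow(n, arcs, s, t):
    """Sketch of an Edmonds-Karp style solver for the standard version.

    n: number of nodes (0..n-1); arcs: list of (v, w, capacity) triples;
    s, t: source and target node. Returns the maximum flow value.
    """
    # Residual capacities, including reverse arcs with capacity 0.
    cap = {}
    adj = [[] for _ in range(n)]
    for v, w, u in arcs:
        cap[(v, w)] = cap.get((v, w), 0) + u
        cap.setdefault((w, v), 0)
        adj[v].append(w)
        adj[w].append(v)
    total = 0
    while True:
        # BFS for a shortest flow-augmenting path in the residual network.
        pred = {s: None}
        queue = deque([s])
        while queue and t not in pred:
            v = queue.popleft()
            for w in adj[v]:
                if w not in pred and cap[(v, w)] > 0:
                    pred[w] = v
                    queue.append(w)
        if t not in pred:
            return total  # no augmenting path left: the flow is maximum
        # Bottleneck residual capacity along the path found.
        delta = float("inf")
        w = t
        while pred[w] is not None:
            delta = min(delta, cap[(pred[w], w)])
            w = pred[w]
        # Augment: decrease forward, increase reverse residual capacities.
        w = t
        while pred[w] is not None:
            cap[(pred[w], w)] -= delta
            cap[(w, pred[w])] += delta
            w = pred[w]
        total += delta
```

For example, on a four-node graph with arcs (0,1,3), (0,2,2), (1,2,1), (1,3,2), (2,3,3) and source 0, target 3, the solver returns the maximum flow value 5.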
== Known algorithms ==&lt;br /&gt;
&lt;br /&gt;
# [[Ford-Fulkerson]]&lt;br /&gt;
# [[Edmonds-Karp]]&lt;br /&gt;
# [[Ahuja-Orlin]]&lt;br /&gt;
# [[Dinic]]&lt;br /&gt;
# [[Preflow-push]]&lt;br /&gt;
# [[FIFO preflow-push]]&lt;br /&gt;
# [[Preflow-push with excess scaling]]&lt;br /&gt;
&lt;br /&gt;
== Generalizations ==&lt;br /&gt;
&lt;br /&gt;
# For each arc &amp;lt;math&amp;gt;a\in A&amp;lt;/math&amp;gt;, there is a '''lower bound''' &amp;lt;math&amp;gt;\ell(a)&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;f(a)\geq\ell(a)&amp;lt;/math&amp;gt; is additionally required. The lower bounds need not be nonnegative, so the flow values need not be nonnegative, either. This version is often called '''maximum flow with edge demands'''. It may be reduced to solving two instances of the standard version as follows:&lt;br /&gt;
## First, we construct a new graph &amp;lt;math&amp;gt;G'=(V',A')&amp;lt;/math&amp;gt; as follows: We add a super-source &amp;lt;math&amp;gt;s'&amp;lt;/math&amp;gt; and a super-target &amp;lt;math&amp;gt;t'&amp;lt;/math&amp;gt; to &amp;lt;math&amp;gt;V&amp;lt;/math&amp;gt;. Next, for every node &amp;lt;math&amp;gt;v\in V&amp;lt;/math&amp;gt;, we add an arc &amp;lt;math&amp;gt;(s',v)&amp;lt;/math&amp;gt; and an arc &amp;lt;math&amp;gt;(v,t')&amp;lt;/math&amp;gt; to &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt;. Finally, we add an arc &amp;lt;math&amp;gt;(t,s)&amp;lt;/math&amp;gt; to &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; if it is not yet in &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt;.&lt;br /&gt;
## For all arcs &amp;lt;math&amp;gt;a\in A&amp;lt;/math&amp;gt;, we set &amp;lt;math&amp;gt;\ell'(a):=0&amp;lt;/math&amp;gt;. For each node &amp;lt;math&amp;gt;v\in V&amp;lt;/math&amp;gt;, we set &lt;br /&gt;
##:&amp;lt;math&amp;gt;u'(s',v):=\max\{0,\sum_{(w,v)\in A}\ell(w,v)\}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;u'(v,t'):=\max\{0,\sum_{(v,w)\in A} \ell(v,w)\}&amp;lt;/math&amp;gt;. &lt;br /&gt;
##:For all arcs &amp;lt;math&amp;gt;a\in A&amp;lt;/math&amp;gt;, we set &amp;lt;math&amp;gt;u'(a):=u(a)-\ell(a)&amp;lt;/math&amp;gt;. Finally, we set &amp;lt;math&amp;gt;u'(t,s):=+\infty&amp;lt;/math&amp;gt;.&lt;br /&gt;
## If all additional arcs &amp;lt;math&amp;gt;(s',\cdot)&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;(\cdot,t')&amp;lt;/math&amp;gt; are [[Basic flow definitions#Flow-augmenting paths and saturated arcs|saturated]] by some feasible (and obviously maximum) &amp;lt;math&amp;gt;(s',t')&amp;lt;/math&amp;gt;-flow &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;, the values &amp;lt;math&amp;gt;f(a)+\ell(a)&amp;lt;/math&amp;gt; on all original arcs of &amp;lt;math&amp;gt;G&amp;lt;/math&amp;gt; obviously form a feasible &amp;lt;math&amp;gt;(s,t)&amp;lt;/math&amp;gt;-flow in &amp;lt;math&amp;gt;G&amp;lt;/math&amp;gt; with respect to &amp;lt;math&amp;gt;\ell&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;u&amp;lt;/math&amp;gt;. On the other hand, if there is a feasible &amp;lt;math&amp;gt;(s,t)&amp;lt;/math&amp;gt;-flow &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt; in &amp;lt;math&amp;gt;G&amp;lt;/math&amp;gt; with respect to &amp;lt;math&amp;gt;\ell&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;u&amp;lt;/math&amp;gt;, the values &amp;lt;math&amp;gt;f(a)-\ell(a)&amp;lt;/math&amp;gt; on all original arcs in &amp;lt;math&amp;gt;G&amp;lt;/math&amp;gt; along with saturating flow values on all additional arcs form a feasible (and obviously maximum) &amp;lt;math&amp;gt;(s',t')&amp;lt;/math&amp;gt;-flow.&lt;br /&gt;
## Therefore, we may safely terminate the whole procedure if the additional arcs are '''not''' saturated by a maximum &amp;lt;math&amp;gt;(s',t')&amp;lt;/math&amp;gt;-flow. Otherwise, we may construct an instance &amp;lt;math&amp;gt;(G'=(V,A'),s,t,u')&amp;lt;/math&amp;gt; of the standard version as follows: for each original arc &amp;lt;math&amp;gt;a=(v,w)\in A&amp;lt;/math&amp;gt; insert an opposite arc &amp;lt;math&amp;gt;a'=(w,v)&amp;lt;/math&amp;gt; and set &amp;lt;math&amp;gt;u'(a):=u(a)-f(a)\geq 0&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;u'(a'):=f(a)-\ell(a)\geq 0&amp;lt;/math&amp;gt; (cf. [[Basic flow definitions#Residual network|residual network]]).[[File:Maxflowmultiplesource.png|350px|thumb|right|Max-Flow Problem with several sources and targets]]&lt;br /&gt;
# More than one source and more than one target can be reduced to the standard version by adding a super-source node &amp;lt;math&amp;gt;s&amp;lt;/math&amp;gt;, a super-target node &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt;, an arc &amp;lt;math&amp;gt;(s,v)&amp;lt;/math&amp;gt; with &amp;quot;infinite&amp;quot; capacity for each source &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt;, and an arc &amp;lt;math&amp;gt;(v,t)&amp;lt;/math&amp;gt; with &amp;quot;infinite&amp;quot; capacity for each target &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; (for example, the sum of the upper bounds of all arcs is sufficiently large to serve as &amp;quot;infinity&amp;quot;).&lt;br /&gt;
# Usually, the term '''generalized flow''' is reserved for the specific generalization in which, for each node &amp;lt;math&amp;gt;v\in V\setminus\{s,t\}&amp;lt;/math&amp;gt;, the ratio of the total incoming flow to the total outgoing flow is prescribed (in the standard version, this ratio is 1 due to the flow conservation condition).&lt;br /&gt;
# The max-flow problem asks for an optimal steady-state flow. However, in many applications, a certain amount of flow is to be sent as soon as possible from the source to the target. It is easy to see that, if the amount of flow is sufficiently large, then an optimal solution is constant most of the time, and the maximum steady-state flow is this constant flow.&lt;/div&gt;</summary>
		<author><name>Weihe</name></author>
	</entry>
	<entry>
		<id>https://wiki.algo.informatik.tu-darmstadt.de/index.php?title=Preflow-push_with_excess_scaling&amp;diff=3885</id>
		<title>Preflow-push with excess scaling</title>
		<link rel="alternate" type="text/html" href="https://wiki.algo.informatik.tu-darmstadt.de/index.php?title=Preflow-push_with_excess_scaling&amp;diff=3885"/>
		<updated>2017-12-18T17:29:51Z</updated>

		<summary type="html">&lt;p&gt;Weihe: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Abstract view ==&lt;br /&gt;
&lt;br /&gt;
'''Algorithmic problem:'''&lt;br /&gt;
[[Max-Flow Problems#Standard version|max-flow problem (standard version)]]&lt;br /&gt;
&lt;br /&gt;
This is a specialization and slight modification of the [[Preflow-push|generic preflow-push algorithm]]:&lt;br /&gt;
# An additional nonnegative integral number &amp;lt;math&amp;gt;\Delta&amp;lt;/math&amp;gt; is maintained, which is initialized to be at least as large as the largest excess value &amp;lt;math&amp;gt;e_f(v)&amp;lt;/math&amp;gt; (but not larger than the next power of 2 above that value, for complexity reasons, cf. [[#Complexity|here]]).&lt;br /&gt;
# In each iteration, an active node &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; may only be chosen if it has '''large [[Basic flow definitions#Preflow|excess]]''', that is, &amp;lt;math&amp;gt;e_f(v)\geq\Delta/2&amp;lt;/math&amp;gt;.&lt;br /&gt;
# Among all active nodes with large excess, we choose one with minimum &amp;lt;math&amp;gt;d&amp;lt;/math&amp;gt;-label.&lt;br /&gt;
# The flow value pushed over an arc &amp;lt;math&amp;gt;(v,w)&amp;lt;/math&amp;gt; from &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; to &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt; must be small enough that the excess of &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt; does not exceed &amp;lt;math&amp;gt;\Delta&amp;lt;/math&amp;gt; afterwards. In other words, the flow value to be pushed over &amp;lt;math&amp;gt;(v,w)&amp;lt;/math&amp;gt; is ''not'' the minimum of &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt;'s excess and the residual capacity of &amp;lt;math&amp;gt;(v,w)&amp;lt;/math&amp;gt; as in the generic preflow-push algorithm; here it is the minimum of these two values ''and'' a third value, &amp;lt;math&amp;gt;\Delta-e_f(w)&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt; denotes the current preflow immediately before this push operation.&lt;br /&gt;
# If there are active nodes but none with large excess, we do ''not'' apply a push or relabel operation but reset &amp;lt;math&amp;gt;\Delta:=\Delta/2&amp;lt;/math&amp;gt; (integral division) repeatedly until there is an active node with large excess.&lt;br /&gt;
&lt;br /&gt;
'''Invariant:'''&lt;br /&gt;
# All invariants of the [[Preflow-push|generic preflow-push algorithm]].&lt;br /&gt;
# At any time, &amp;lt;math&amp;gt;e_f(v)\leq\Delta&amp;lt;/math&amp;gt; holds for every node &amp;lt;math&amp;gt;v\in V\setminus\{s,t\}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
'''Variant:'''&lt;br /&gt;
The variant of the [[Preflow-push|generic preflow-push algorithm]] is extended by a fourth option: The value of &amp;lt;math&amp;gt;\Delta&amp;lt;/math&amp;gt; is reduced to &amp;lt;math&amp;gt;\Delta/2&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
'''Remarks:'''&lt;br /&gt;
# A sequence of iterations between two successive changes of &amp;lt;math&amp;gt;\Delta&amp;lt;/math&amp;gt; is usually called a '''scaling phase'''.&lt;br /&gt;
# It is quite common in the literature to define an outer loop in which each iteration performs one scaling phase, and an inner loop performs the push-relabel steps. The break condition is usually &amp;lt;math&amp;gt;\Delta=0&amp;lt;/math&amp;gt;. Clearly, when this condition is fulfilled, the [[Preflow-push#Abstract view|break condition]] of the [[Preflow-push|generic preflow-push algorithm]] is fulfilled as well.&lt;br /&gt;
&lt;br /&gt;
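The modified push rule (point 4 of the abstract view) can be condensed into a small helper function. This is a sketch with illustrative names; in particular, the assumption that the cap does not apply when the receiving node is the source or the target is ours, motivated by the invariant bounding only the excess of internal nodes.

```python
def scaled_push_amount(excess_v, residual_vw, excess_w, delta, w_is_terminal):
    """Flow value to push over an admissible arc (v, w) under excess scaling.

    Unlike in the generic preflow-push algorithm, the amount is additionally
    capped by delta - e_f(w), so that w's excess never exceeds delta.
    Assumption: the cap is skipped when w is the source or the target,
    since the invariant only constrains nodes in V \ {s, t}.
    """
    amount = min(excess_v, residual_vw)
    if not w_is_terminal:
        amount = min(amount, delta - excess_w)
    return amount
```

For instance, with excess 8 at the pushing node, residual capacity 5, excess 7 at the receiving internal node, and delta 8, only 1 unit is pushed.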
== Implementation of point 3 of the abstract algorithm ==&lt;br /&gt;
&lt;br /&gt;
All active nodes with large excess are stored in an array of [[Sets and sequences|sets]] of nodes. The set at index &amp;lt;math&amp;gt;k&amp;lt;/math&amp;gt; contains all active nodes &amp;lt;math&amp;gt;v\in V\setminus\{s,t\}&amp;lt;/math&amp;gt; with large excess and &amp;lt;math&amp;gt;d(v)=k&amp;lt;/math&amp;gt;. This array is computed from scratch in the initialization and after every reduction of &amp;lt;math&amp;gt;\Delta&amp;lt;/math&amp;gt;. A '''current array index''' is maintained, which is never larger than the index of any non-empty node set in the array. Whenever a node is to be chosen, the current array index is increased as long as the set at the current array index is empty. A push step may turn another node into an active node with large excess; however, such a node is inserted at the index immediately before the current array index, so the current array index never decreases by more than one unit at a time.&lt;br /&gt;
&lt;br /&gt;
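The bucket structure described above might look as follows. This is a sketch; the class and method names are our own.

```python
class LargeExcessBuckets:
    """Array of node sets indexed by d-label, with a current index that
    only moves forward, except for single-step decreases after a push."""

    def __init__(self, max_label):
        self.buckets = [set() for _ in range(max_label + 1)]
        self.current = 0

    def rebuild(self, nodes, d):
        # Called in the initialization and after every reduction of delta.
        for bucket in self.buckets:
            bucket.clear()
        for v in nodes:
            self.buckets[d[v]].add(v)
        self.current = 0

    def insert(self, v, d):
        # A push sends excess to a node with a one-unit smaller label,
        # so the current index never moves back by more than one.
        self.buckets[d[v]].add(v)
        self.current = min(self.current, d[v])

    def pop_min(self):
        # Advance past empty buckets, then return some active node with
        # large excess and minimum d-label (or None if all buckets are empty).
        while self.current < len(self.buckets) and not self.buckets[self.current]:
            self.current += 1
        if self.current == len(self.buckets):
            return None
        return self.buckets[self.current].pop()
```

Rebuilding costs time linear in the number of stored nodes plus the array length, which matches the complexity argument below: the array is rebuilt only once per scaling phase.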
== Complexity ==&lt;br /&gt;
&lt;br /&gt;
'''Statement:'''&lt;br /&gt;
The asymptotic complexity is in &amp;lt;math&amp;gt;\mathcal{O}(n\!\cdot\!m+n^2\!\cdot\!\log U)&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;n=|V|&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;m=|A|&amp;lt;/math&amp;gt;, and  &amp;lt;math&amp;gt;U=\max\{u(a)|a\in A\}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
'''Proof:'''&lt;br /&gt;
The [[Preflow-push#Complexity|complexity considerations]] for the [[Preflow-push|generic preflow-push algorithm]] yield &amp;lt;math&amp;gt;\mathcal{O}(n^2)&amp;lt;/math&amp;gt; relabel steps and forward steps of current arcs, and &amp;lt;math&amp;gt;\mathcal{O}(n\!\cdot\!m)&amp;lt;/math&amp;gt; saturating push steps. Due to the above implementation of point 3, all node selections have complexity &amp;lt;math&amp;gt;\mathcal{O}(P+n\cdot\log U)&amp;lt;/math&amp;gt; in total, where &amp;lt;math&amp;gt;P&amp;lt;/math&amp;gt; is the total number of saturating and non-saturating push steps.&lt;br /&gt;
&lt;br /&gt;
So, it suffices to show that there are &amp;lt;math&amp;gt;\mathcal{O}(n^2)&amp;lt;/math&amp;gt; non-saturating push steps in each scaling phase. To see that, we consider the following, dynamically changing '''potential function''':&lt;br /&gt;
:&amp;lt;math&amp;gt;\Phi(f,d):=\sum_{v\in V\setminus\{s,t\}}\frac{e_f(v)}{\Delta}\cdot d(v)&amp;lt;/math&amp;gt;.&lt;br /&gt;
Since &amp;lt;math&amp;gt;e_f(v)\leq\Delta&amp;lt;/math&amp;gt;, we have &amp;lt;math&amp;gt;\Phi(f,d)\leq\sum_{v\in V\setminus\{s,t\}}d(v)&amp;lt;/math&amp;gt;, so all relabel steps together cannot increase &amp;lt;math&amp;gt;\Phi&amp;lt;/math&amp;gt; by more than &amp;lt;math&amp;gt;2n^2&amp;lt;/math&amp;gt; (cf. [[Preflow-push#Complexity|here]]). Each push step, saturating or not, decreases the value of &amp;lt;math&amp;gt;\Phi&amp;lt;/math&amp;gt; because excess is always sent from a node with a higher &amp;lt;math&amp;gt;d&amp;lt;/math&amp;gt;-label to a node with a (one unit) lower &amp;lt;math&amp;gt;d&amp;lt;/math&amp;gt;-label. Therefore, we may safely ignore saturating push steps. Now, a non-saturating push over an arc &amp;lt;math&amp;gt;(v,w)&amp;lt;/math&amp;gt; sends at least &amp;lt;math&amp;gt;\Delta/2&amp;lt;/math&amp;gt; units of excess: since &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; has minimum &amp;lt;math&amp;gt;d&amp;lt;/math&amp;gt;-label among all active nodes with large excess, &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt; cannot have large excess, so both &amp;lt;math&amp;gt;e_f(v)&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\Delta-e_f(w)&amp;lt;/math&amp;gt; are at least &amp;lt;math&amp;gt;\Delta/2&amp;lt;/math&amp;gt;. Such a push decreases &amp;lt;math&amp;gt;\Phi&amp;lt;/math&amp;gt; by at least &amp;lt;math&amp;gt;1/2&amp;lt;/math&amp;gt;. Consequently, the number of non-saturating push steps is in &amp;lt;math&amp;gt;\mathcal{O}(n^2)&amp;lt;/math&amp;gt; in each scaling phase.&lt;br /&gt;
&lt;br /&gt;
'''Remark:'''&lt;br /&gt;
Potential functions are a general concept for complexity considerations. The sum and the maximum of all &amp;lt;math&amp;gt;d&amp;lt;/math&amp;gt;-labels are rather simple examples of potential functions, which are used in the complexity considerations for the [[Preflow-push|generic preflow-push algorithm]] and for the [[FIFO preflow-push#Correctness|FIFO variant]], respectively.&lt;/div&gt;</summary>
		<author><name>Weihe</name></author>
	</entry>
	<entry>
		<id>https://wiki.algo.informatik.tu-darmstadt.de/index.php?title=Preflow-push&amp;diff=3884</id>
		<title>Preflow-push</title>
		<link rel="alternate" type="text/html" href="https://wiki.algo.informatik.tu-darmstadt.de/index.php?title=Preflow-push&amp;diff=3884"/>
		<updated>2017-06-20T03:53:31Z</updated>

		<summary type="html">&lt;p&gt;Weihe: /* Induction step */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Abstract view ==&lt;br /&gt;
&lt;br /&gt;
'''Also known as:''' ''push-relabel'' algorithm or ''Goldberg-Tarjan'' algorithm&lt;br /&gt;
&lt;br /&gt;
'''Algorithmic problem:'''&lt;br /&gt;
[[Max-Flow Problems#Standard version|max-flow problem (standard version)]]&lt;br /&gt;
&lt;br /&gt;
'''Type of algorithm:'''&lt;br /&gt;
loop.&lt;br /&gt;
&lt;br /&gt;
'''Auxiliary data:'''&lt;br /&gt;
# A nonnegative integral value &amp;lt;math&amp;gt;d(v)&amp;lt;/math&amp;gt; for each node &amp;lt;math&amp;gt;v\in V&amp;lt;/math&amp;gt;.&lt;br /&gt;
# Each node &amp;lt;math&amp;gt;v\in V\setminus\{t\}&amp;lt;/math&amp;gt; has a '''current arc''', which may be implemented as an iterator on the list of outgoing residual arcs of &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt;.&lt;br /&gt;
# The [[Basic flow definitions#Preflow|excess]] &amp;lt;math&amp;gt;e_f(v)&amp;lt;/math&amp;gt; of a node &amp;lt;math&amp;gt;v\in V\setminus\{s,t\}&amp;lt;/math&amp;gt; with respect to the current [[Basic flow definitions#Preflow|preflow]].&lt;br /&gt;
# A (dynamically changing) [[Sets and sequences|set]] &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; of nodes.&lt;br /&gt;
&lt;br /&gt;
'''Invariant:'''&lt;br /&gt;
Before and after each iteration:&lt;br /&gt;
# For each arc &amp;lt;math&amp;gt;a\in A&amp;lt;/math&amp;gt;, we have &amp;lt;math&amp;gt;0\leq f(a)\leq u(a)&amp;lt;/math&amp;gt;. If all upper bounds are integral, all &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;-values are integral, too.&lt;br /&gt;
# For each node &amp;lt;math&amp;gt;v\in V\setminus\{s,t\}&amp;lt;/math&amp;gt;, we have &amp;lt;math&amp;gt;e_f(v)\geq 0&amp;lt;/math&amp;gt;. In other words, &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt; is a [[Basic flow definitions#Preflow|preflow]].&lt;br /&gt;
# The node labels &amp;lt;math&amp;gt;d&amp;lt;/math&amp;gt; form a [[Basic flow definitions#Valid distance labeling|valid distance labeling]], and &amp;lt;math&amp;gt;d(s)=n:=|V|&amp;lt;/math&amp;gt; holds.&lt;br /&gt;
# The currently [[Basic flow definitions#Preflow|active nodes]] are stored in &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;.&lt;br /&gt;
# The current arc of a node is an outgoing arc of that node in the residual graph. In the list of all such arcs, no admissible arc precedes the current arc.&lt;br /&gt;
&lt;br /&gt;
'''Variant:'''&lt;br /&gt;
No label &amp;lt;math&amp;gt;d(\cdot)&amp;lt;/math&amp;gt; is ever decreased. In each iteration, at least one of the following three actions takes place:&lt;br /&gt;
# The label &amp;lt;math&amp;gt;d(v)&amp;lt;/math&amp;gt; of at least one node &amp;lt;math&amp;gt;v\in V\setminus\{s,t\}&amp;lt;/math&amp;gt; is increased.&lt;br /&gt;
# A saturating push is performed.&lt;br /&gt;
# The value of &amp;lt;math&amp;gt;D:=\sum_{v\in V\setminus\{s,t\}\atop e_f(v)&amp;gt;0}d(v)&amp;lt;/math&amp;gt; decreases.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Break condition:'''&lt;br /&gt;
&amp;lt;math&amp;gt;S=\emptyset&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
== Induction basis ==&lt;br /&gt;
&lt;br /&gt;
'''Abstract view:'''&lt;br /&gt;
# For all arcs &amp;lt;math&amp;gt;a\in A&amp;lt;/math&amp;gt;, set &amp;lt;math&amp;gt;f(a):=0&amp;lt;/math&amp;gt;.&lt;br /&gt;
# For each arc &amp;lt;math&amp;gt;(s,v)\in A&amp;lt;/math&amp;gt;, overwrite this value by &amp;lt;math&amp;gt;f(s,v):=u(s,v)&amp;lt;/math&amp;gt; and put &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; into &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;.&lt;br /&gt;
# Compute a [[Basic flow definitions#Valid distance labeling|valid distance labeling]] &amp;lt;math&amp;gt;d&amp;lt;/math&amp;gt; for &amp;lt;math&amp;gt;V\setminus\{s\}&amp;lt;/math&amp;gt; with respect to &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;, for example, the true distances from all nodes to &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; in the residual network of &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;.&lt;br /&gt;
# Set &amp;lt;math&amp;gt;d(s):=n&amp;lt;/math&amp;gt;.&lt;br /&gt;
# For all &amp;lt;math&amp;gt;v\in V\setminus\{s,t\}&amp;lt;/math&amp;gt;, reset the current arc of &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; so as to point to the first arc in the list of outgoing arcs of &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
'''Proof:'''&lt;br /&gt;
For the [[Basic graph definitions#Subgraphs|subgraph induced]] by &amp;lt;math&amp;gt;V\setminus\{s\}&amp;lt;/math&amp;gt;, the arguments in the [[Ahuja-Orlin#Correctness|correctness proof]] for the [[Ahuja-Orlin]] algorithm prove that the &amp;lt;math&amp;gt;d&amp;lt;/math&amp;gt;-labels form a valid distance labeling here as well. For &amp;lt;math&amp;gt;s&amp;lt;/math&amp;gt;, there is nothing to show because all outgoing arcs are saturated.&lt;br /&gt;
&lt;br /&gt;
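The induction basis above can be sketched in code. All names are illustrative; the backward BFS realizes the "true distances to t" labeling mentioned in step 3, with unreachable nodes receiving a large finite label.

```python
from collections import deque

def initialize_preflow(n, adj, u, s, t):
    """Sketch of the induction basis: saturate all arcs out of s, compute
    distance labels by backward BFS from t in the residual network, d(s)=n.

    adj: adjacency lists of the directed graph; u: dict of arc capacities.
    Returns (f, excess, d, active_set).
    """
    f = {a: 0 for a in u}
    excess = [0] * n
    active = set()
    for v in adj[s]:
        f[(s, v)] = u[(s, v)]          # saturate every arc out of s
        excess[v] += u[(s, v)]
        if u[(s, v)] > 0 and v != t:
            active.add(v)
    # Valid distance labeling: BFS distances to t, scanned backwards over
    # residual arcs (unsaturated forward arcs and reverses of flow-carrying arcs).
    d = [2 * n] * n                    # large finite label for unreachable nodes
    d[t] = 0
    rev = [[] for _ in range(n)]
    for (v, w) in u:
        if u[(v, w)] - f[(v, w)] > 0:
            rev[w].append(v)           # residual arc (v, w), scanned from w
        if f[(v, w)] > 0:
            rev[v].append(w)           # residual arc (w, v), scanned from v
    queue = deque([t])
    while queue:
        w = queue.popleft()
        for v in rev[w]:
            if d[v] == 2 * n:
                d[v] = d[w] + 1
                queue.append(v)
    d[s] = n
    return f, excess, d, active
```

On a small example with source 0 and target 3, the nodes adjacent to the source become active with excess equal to the saturated capacities, and the source receives label n.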
== Induction step ==&lt;br /&gt;
&lt;br /&gt;
'''Abstract view:'''&lt;br /&gt;
# Choose an [[Basic flow definitions#Preflow|active node]] &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; from &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;.&lt;br /&gt;
# While the current arc of &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; is not void and not [[Basic flow definitions#Valid distance labeling|admissible]] either, move the current arc one step forward.&lt;br /&gt;
# If the current arc of &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; is ''not'' void now but an (admissible) outgoing arc &amp;lt;math&amp;gt;(v,w)&amp;lt;/math&amp;gt;, say:&lt;br /&gt;
## If &amp;lt;math&amp;gt;w\neq s&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;w\neq t&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;e_f(w)=0&amp;lt;/math&amp;gt;, insert &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt; in &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;.&lt;br /&gt;
## Increase the flow over &amp;lt;math&amp;gt;(v,w)&amp;lt;/math&amp;gt; by the minimum of &amp;lt;math&amp;gt;e_f(v)&amp;lt;/math&amp;gt; and the residual capacity of &amp;lt;math&amp;gt;(v,w)&amp;lt;/math&amp;gt;.&lt;br /&gt;
## Increase &amp;lt;math&amp;gt;e_f(w)&amp;lt;/math&amp;gt; by that value and decrease &amp;lt;math&amp;gt;e_f(v)&amp;lt;/math&amp;gt; by the same value.&lt;br /&gt;
## If &amp;lt;math&amp;gt;e_f(v)=0&amp;lt;/math&amp;gt; now, extract &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; from &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;.&lt;br /&gt;
# Otherwise:&lt;br /&gt;
## Let &amp;lt;math&amp;gt;d_\min&amp;lt;/math&amp;gt; denote the minimum label &amp;lt;math&amp;gt;d(w)&amp;lt;/math&amp;gt; over all arcs &amp;lt;math&amp;gt;(v,w)&amp;lt;/math&amp;gt; in the residual network.&lt;br /&gt;
## Set &amp;lt;math&amp;gt;d(v):=d_\min+1&amp;lt;/math&amp;gt;.&lt;br /&gt;
## Reset the current arc of &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; so as to point to the beginning of the list of outgoing arcs of &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
'''Remark:'''&lt;br /&gt;
The preflow-push algorithm is also known as the '''push-relabel''' algorithm. The ''push'' operation is step 3; the ''relabel'' operation is step 4.&lt;br /&gt;
&lt;br /&gt;
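One iteration of the induction step might be coded as follows. This is a sketch with illustrative names: the residual network is kept as a dictionary of residual capacities, and admissibility is checked as d(v) = d(w) + 1 on a residual arc.

```python
def push_or_relabel(v, d, excess, residual, current, adj, S, s, t):
    """One main-loop iteration for the chosen active node v (sketch).

    d: labels; excess: excess values; residual: dict of residual capacities;
    current[v]: index of v's current arc into adj[v]; adj[v]: list of all
    residual neighbours of v; S: set of active nodes.
    """
    # Step 2: advance the current arc past non-admissible arcs.
    while current[v] < len(adj[v]):
        w = adj[v][current[v]]
        if residual[(v, w)] > 0 and d[v] == d[w] + 1:
            break                              # admissible arc found
        current[v] += 1
    if current[v] < len(adj[v]):
        # Step 3: push over the admissible arc (v, w).
        w = adj[v][current[v]]
        if w != s and w != t and excess[w] == 0:
            S.add(w)                           # step 3.1: w becomes active
        delta = min(excess[v], residual[(v, w)])   # step 3.2
        residual[(v, w)] -= delta
        residual[(w, v)] += delta
        excess[w] += delta                     # step 3.3
        excess[v] -= delta
        if excess[v] == 0:
            S.discard(v)                       # step 3.4
    else:
        # Step 4: relabel v and reset its current arc. An active node always
        # has a residual arc (at least the reverse of an arc it received flow over).
        d[v] = 1 + min(d[w] for w in adj[v] if residual[(v, w)] > 0)
        current[v] = 0
```

On the tiny instance s=0, t=2 with arcs (0,1) of capacity 2 and (1,2) of capacity 1 (after initialization), three iterations on node 1 perform a push to t, a relabel to label n+1, and a push back to s, after which no active node remains.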
'''Proof:'''&lt;br /&gt;
Points 1, 2, and 4 of the invariant and &amp;lt;math&amp;gt;d(s)=n&amp;lt;/math&amp;gt; are obviously fulfilled. The rest of point 3 of the invariant is affected by step 4 only, and the outgoing arcs of &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; are the only arcs where the distance labeling may become invalid. However, the extremely conservative increase of &amp;lt;math&amp;gt;d(v)&amp;lt;/math&amp;gt; ensures point 3 of the invariant.&lt;br /&gt;
&lt;br /&gt;
To prove the variant, consider a step in which neither any &amp;lt;math&amp;gt;d&amp;lt;/math&amp;gt;-value is increased nor a saturating push is performed. This means step 3.2 is applied, but the arc &amp;lt;math&amp;gt;(v,w)&amp;lt;/math&amp;gt; is not saturated by that. Potentially, &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt; becomes active. However, &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; definitely becomes inactive since the push step is non-saturating. Now the variant follows from the fact that &amp;lt;math&amp;gt;d(w)=d(v)-1&amp;lt;/math&amp;gt; for an admissible arc &amp;lt;math&amp;gt;(v,w)&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
It remains to show termination; this is proved by the following complexity considerations.&lt;br /&gt;
&lt;br /&gt;
== Complexity ==&lt;br /&gt;
&lt;br /&gt;
'''Statement:'''&lt;br /&gt;
The asymptotic complexity is in &amp;lt;math&amp;gt;\mathcal{O}(n^2m)&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;n=|V|&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;m=|A|&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
'''Proof:'''&lt;br /&gt;
First we show that the total number of relabel operations (step 4 of the main loop) is in &amp;lt;math&amp;gt;\mathcal{O}(n^2)&amp;lt;/math&amp;gt;. To see that, let &amp;lt;math&amp;gt;v\in V\setminus\{s,t\}&amp;lt;/math&amp;gt; be an active node between two iterations of the main loop. A straightforward induction over the number of push operations shows that there is at least one simple &amp;lt;math&amp;gt;(s,v)&amp;lt;/math&amp;gt;-path &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt; with positive flow on all arcs. The [[Basic graph definitions#Transpose of a graph|transpose]] of &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt; is [[Basic flow definitions#Flow-augmenting paths and saturated arcs|augmenting]]. Due to the validity of &amp;lt;math&amp;gt;d&amp;lt;/math&amp;gt; (induction hypothesis), &amp;lt;math&amp;gt;d(v)-d(s)=d(v)-n&amp;lt;/math&amp;gt; cannot be larger than the number of arcs on &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;, which is not larger than &amp;lt;math&amp;gt;n-1&amp;lt;/math&amp;gt;. Therefore, no node label can be larger than &amp;lt;math&amp;gt;2n-1&amp;lt;/math&amp;gt;. Since node labels are nonnegative and increase at least by one in each relabel operation, the claimed upper bound on the relabel operations follows.&lt;br /&gt;
&lt;br /&gt;
From this bound, we may immediately conclude that the current arc of a node is reset &amp;lt;math&amp;gt;\mathcal{O}(n)&amp;lt;/math&amp;gt; times, so the total number of forward steps of the current arcs of all nodes is in &amp;lt;math&amp;gt;\mathcal{O}(n^3)\subseteq\mathcal{O}(n^2m)&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The argument in the [[Ahuja-Orlin#Complexity|complexity analysis]] of the [[Ahuja-Orlin]] algorithm, which proves that the total number of ''saturating'' push operations is in &amp;lt;math&amp;gt;\mathcal{O}(nm)&amp;lt;/math&amp;gt;, applies here as well.&lt;br /&gt;
&lt;br /&gt;
Finally, we consider the ''non-saturating'' push operations. First note that &amp;lt;math&amp;gt;D\geq 0&amp;lt;/math&amp;gt; before and after each iteration. The value of &amp;lt;math&amp;gt;D&amp;lt;/math&amp;gt; is increased in each relabel operation exactly by the amount by which the label of the current node is increased. Since node labels are never decreased and bounded from above by &amp;lt;math&amp;gt;2n-1&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;D&amp;lt;/math&amp;gt; increases by less than &amp;lt;math&amp;gt;2n^2&amp;lt;/math&amp;gt; in total over all relabel operations. On the other hand, a saturating push operation may increase &amp;lt;math&amp;gt;D&amp;lt;/math&amp;gt; by at most &amp;lt;math&amp;gt;2n-1&amp;lt;/math&amp;gt; (namely, in the case that &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt; was not active immediately before the push). In summary, the total sum of all values by which &amp;lt;math&amp;gt;D&amp;lt;/math&amp;gt; is increased throughout the algorithm is in &amp;lt;math&amp;gt;\mathcal{O}(n^2m)&amp;lt;/math&amp;gt;. Due to the variant, the value of &amp;lt;math&amp;gt;D&amp;lt;/math&amp;gt; is decreased by at least one in each non-saturating push operation. This proves the claim.&lt;br /&gt;
&lt;br /&gt;
== Heuristic speedup techniques ==&lt;br /&gt;
&lt;br /&gt;
# After &amp;lt;math&amp;gt;\Omega(n)&amp;lt;/math&amp;gt; iterations of the main loop, the &amp;lt;math&amp;gt;d&amp;lt;/math&amp;gt;-values are recomputed analogously to the induction basis: as the current distance of each node to &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; in the residual network. This recomputation is performed seldom enough that the asymptotic complexity is not increased. In practice, this technique may save many unnecessary relabel steps.&lt;br /&gt;
# The main loop may be decomposed into two phases: First, as much flow as possible is sent into &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt;; second, all surplus flow that cannot reach &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; is sent back to &amp;lt;math&amp;gt;s&amp;lt;/math&amp;gt;. The first phase may be finished once there is no more path in the residual network from any active node to &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt;. A sufficient and easy-to-check condition for that is &amp;lt;math&amp;gt;d(v)\geq n&amp;lt;/math&amp;gt; for all active nodes &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt;. All nodes from which &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; is reachable may be safely disregarded in the second phase. For any other node, to save unnecessary relabel operations, the distance label may be safely increased to the minimum number of arcs in the residual network from this node back to &amp;lt;math&amp;gt;s&amp;lt;/math&amp;gt;.&lt;/div&gt;</summary>
		<author><name>Weihe</name></author>
	</entry>
	<entry>
		<id>https://wiki.algo.informatik.tu-darmstadt.de/index.php?title=Preflow-push&amp;diff=3883</id>
		<title>Preflow-push</title>
		<link rel="alternate" type="text/html" href="https://wiki.algo.informatik.tu-darmstadt.de/index.php?title=Preflow-push&amp;diff=3883"/>
		<updated>2017-06-12T08:34:31Z</updated>

		<summary type="html">&lt;p&gt;Weihe: /* Induction step */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Abstract view ==&lt;br /&gt;
&lt;br /&gt;
'''Also known as:''' ''push-relabel'' algorithm or ''Goldberg-Tarjan'' algorithm&lt;br /&gt;
&lt;br /&gt;
'''Algorithmic problem:'''&lt;br /&gt;
[[Max-Flow Problems#Standard version|max-flow problem (standard version)]]&lt;br /&gt;
&lt;br /&gt;
'''Type of algorithm:'''&lt;br /&gt;
loop.&lt;br /&gt;
&lt;br /&gt;
'''Auxiliary data:'''&lt;br /&gt;
# A nonnegative integral value &amp;lt;math&amp;gt;d(v)&amp;lt;/math&amp;gt; for each node &amp;lt;math&amp;gt;v\in V&amp;lt;/math&amp;gt;.&lt;br /&gt;
# Each node &amp;lt;math&amp;gt;v\in V\setminus\{t\}&amp;lt;/math&amp;gt; has a '''current arc''', which may be implemented as an iterator on the list of outgoing residual arcs of &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt;.&lt;br /&gt;
# The [[Basic flow definitions#Preflow|excess]] &amp;lt;math&amp;gt;e_f(v)&amp;lt;/math&amp;gt; of a node &amp;lt;math&amp;gt;v\in V\setminus\{s,t\}&amp;lt;/math&amp;gt; with respect to the current [[Basic flow definitions#Preflow|preflow]].&lt;br /&gt;
# A (dynamically changing) [[Sets and sequences|set]] &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; of nodes.&lt;br /&gt;
&lt;br /&gt;
'''Invariant:'''&lt;br /&gt;
Before and after each iteration:&lt;br /&gt;
# For each arc &amp;lt;math&amp;gt;a\in A&amp;lt;/math&amp;gt;, we have &amp;lt;math&amp;gt;0\leq f(a)\leq u(a)&amp;lt;/math&amp;gt;. If all upper bounds are integral, all &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;-values are integral, too.&lt;br /&gt;
# For each node &amp;lt;math&amp;gt;v\in V\setminus\{s,t\}&amp;lt;/math&amp;gt;, we have &amp;lt;math&amp;gt;e_f(v)\geq 0&amp;lt;/math&amp;gt;. In other words, &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt; is a [[Basic flow definitions#Preflow|preflow]].&lt;br /&gt;
# The node labels &amp;lt;math&amp;gt;d&amp;lt;/math&amp;gt; form a [[Basic flow definitions#Valid distance labeling|valid distance labeling]], and &amp;lt;math&amp;gt;d(s)=n:=|V|&amp;lt;/math&amp;gt; holds.&lt;br /&gt;
# The currently [[Basic flow definitions#Preflow|active nodes]] are stored in &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;.&lt;br /&gt;
# The current arc of a node is an outgoing arc of that node in the residual graph. In the list of all such arcs, no admissible arc precedes the current arc.&lt;br /&gt;
&lt;br /&gt;
'''Variant:'''&lt;br /&gt;
No label &amp;lt;math&amp;gt;d(\cdot)&amp;lt;/math&amp;gt; is ever decreased. In each iteration, at least one of the following three actions takes place:&lt;br /&gt;
# The label &amp;lt;math&amp;gt;d(v)&amp;lt;/math&amp;gt; of at least one node &amp;lt;math&amp;gt;v\in V\setminus\{s,t\}&amp;lt;/math&amp;gt; is increased.&lt;br /&gt;
# A saturating push is performed.&lt;br /&gt;
# The value of &amp;lt;math&amp;gt;D:=\sum_{v\in V\setminus\{s,t\}\atop e_f(v)&amp;gt;0}d(v)&amp;lt;/math&amp;gt; decreases.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Break condition:'''&lt;br /&gt;
&amp;lt;math&amp;gt;S=\emptyset&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
== Induction basis ==&lt;br /&gt;
&lt;br /&gt;
'''Abstract view:'''&lt;br /&gt;
# For all arcs &amp;lt;math&amp;gt;a\in A&amp;lt;/math&amp;gt;, set &amp;lt;math&amp;gt;f(a):=0&amp;lt;/math&amp;gt;.&lt;br /&gt;
# For each arc &amp;lt;math&amp;gt;a=(s,v)\in A&amp;lt;/math&amp;gt;, overwrite this value by &amp;lt;math&amp;gt;f(a):=u(a)&amp;lt;/math&amp;gt; and, if &amp;lt;math&amp;gt;v\neq t&amp;lt;/math&amp;gt;, put &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; into &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;.&lt;br /&gt;
# Compute a [[Basic flow definitions#Valid distance labeling|valid distance labeling]] &amp;lt;math&amp;gt;d&amp;lt;/math&amp;gt; for &amp;lt;math&amp;gt;V\setminus\{s\}&amp;lt;/math&amp;gt; with respect to &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;, for example, the true distances from all nodes to &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; in the residual network of &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;.&lt;br /&gt;
# Set &amp;lt;math&amp;gt;d(s):=n&amp;lt;/math&amp;gt;.&lt;br /&gt;
# For all &amp;lt;math&amp;gt;v\in V\setminus\{s,t\}&amp;lt;/math&amp;gt;, reset the current arc of &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; so as to point to the first arc in the list of outgoing arcs of &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt;.&lt;br /&gt;
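The induction basis may be sketched in code as follows. This is only an illustrative sketch: the small example network, the node names, and the helper routine residual_arcs are assumptions for demonstration, not part of the specification above. The distance labels are computed by a backward breadth-first search from the sink in the residual network.

```python
from collections import deque

# Hypothetical arc capacities of a small example network; "s" is the
# source and "t" is the sink.
u = {("s", "a"): 3, ("s", "b"): 2, ("a", "b"): 1,
     ("a", "t"): 2, ("b", "t"): 3}
nodes = {"s", "a", "b", "t"}
n = len(nodes)

# Steps 1 and 2: start with the zero flow, then saturate every arc
# leaving s and collect its head nodes as the initially active set S.
f = {a: 0 for a in u}
S = set()
for (v, w) in u:
    if v == "s":
        f[(v, w)] = u[(v, w)]
        if w != "t":
            S.add(w)

# Step 3: a valid distance labeling for all nodes except s, computed as
# the true distances to t via backward BFS in the residual network.
def residual_arcs():
    for (v, w), cap in u.items():
        if cap > f[(v, w)]:          # unsaturated forward arc
            yield (v, w)
        if f[(v, w)] > 0:            # reverse arc of a flow-carrying arc
            yield (w, v)

d = {"t": 0}
queue = deque(["t"])
while queue:
    x = queue.popleft()
    for (v, w) in residual_arcs():
        if w == x and v not in d:    # v is one residual arc farther from t
            d[v] = d[x] + 1
            queue.append(v)

# Step 4: the source receives label n.
d["s"] = n
```

On this example, both arcs out of the source are saturated, so the backward search labels a and b with distance 1 and never reaches s, whose label is then set to n = 4 explicitly.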
&lt;br /&gt;
'''Proof:'''&lt;br /&gt;
For the [[Basic graph definitions#Subgraphs|subgraph induced]] by &amp;lt;math&amp;gt;V\setminus\{s\}&amp;lt;/math&amp;gt;, the arguments in the [[Ahuja-Orlin#Correctness|correctness proof]] for the [[Ahuja-Orlin]] algorithm prove that the &amp;lt;math&amp;gt;d&amp;lt;/math&amp;gt;-labels form a valid distance labeling here as well. For &amp;lt;math&amp;gt;s&amp;lt;/math&amp;gt;, there is nothing to show because all outgoing arcs are saturated.&lt;br /&gt;
&lt;br /&gt;
== Induction step ==&lt;br /&gt;
&lt;br /&gt;
'''Abstract view:'''&lt;br /&gt;
# Choose an [[Basic flow definitions#Preflow|active node]] &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; from &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;.&lt;br /&gt;
# While the current arc of &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; is neither void nor [[Basic flow definitions#Valid distance labeling|admissible]], move it one step forward.&lt;br /&gt;
# If the current arc of &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; is now ''not'' void but an admissible outgoing arc, say &amp;lt;math&amp;gt;(v,w)&amp;lt;/math&amp;gt;:&lt;br /&gt;
## If &amp;lt;math&amp;gt;w\neq s,t&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;e_f(w)=0&amp;lt;/math&amp;gt;, insert &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt; in &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;.&lt;br /&gt;
## Increase the flow over &amp;lt;math&amp;gt;(v,w)&amp;lt;/math&amp;gt; by the minimum of &amp;lt;math&amp;gt;e_f(v)&amp;lt;/math&amp;gt; and the residual capacity of &amp;lt;math&amp;gt;(v,w)&amp;lt;/math&amp;gt;.&lt;br /&gt;
## Increase &amp;lt;math&amp;gt;e_f(w)&amp;lt;/math&amp;gt; by that value and decrease &amp;lt;math&amp;gt;e_f(v)&amp;lt;/math&amp;gt; by the same value.&lt;br /&gt;
## If &amp;lt;math&amp;gt;e_f(v)=0&amp;lt;/math&amp;gt; now, extract &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; from &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;.&lt;br /&gt;
# Otherwise:&lt;br /&gt;
## Let &amp;lt;math&amp;gt;d_\min&amp;lt;/math&amp;gt; denote the minimum of &amp;lt;math&amp;gt;d(w)&amp;lt;/math&amp;gt; over all arcs &amp;lt;math&amp;gt;(v,w)&amp;lt;/math&amp;gt; leaving &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; in the residual network.&lt;br /&gt;
## Set &amp;lt;math&amp;gt;d(v):=d_\min+1&amp;lt;/math&amp;gt;.&lt;br /&gt;
## Reset the current arc of &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; so as to point to the beginning of the list of outgoing arcs of &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt;.&lt;br /&gt;
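The steps above can be sketched as a complete, minimal implementation. This is a sketch under assumptions, not a definitive implementation: the network is assumed to be given as a dictionary of arc capacities, the active set is a plain Python set with arbitrary selection order, and the small example network at the end is illustrative.

```python
from collections import defaultdict

def preflow_push(nodes, u, s, t):
    n = len(nodes)
    r = defaultdict(int)              # residual capacities
    adj = {v: [] for v in nodes}      # neighbour lists, both directions
    for (v, w), cap in u.items():
        if w not in adj[v]:
            adj[v].append(w)
        if v not in adj[w]:
            adj[w].append(v)
        r[(v, w)] += cap

    d = {v: 0 for v in nodes}         # distance labels: valid for the
    d[s] = n                          # initial preflow, with d(s) = n
    e = {v: 0 for v in nodes}         # excesses
    cur = {v: 0 for v in nodes}       # current-arc index of each node

    S = set()                         # active nodes
    for w in adj[s]:                  # induction basis: saturate (s, w)
        if r[(s, w)] > 0:
            delta = r[(s, w)]
            r[(s, w)] = 0
            r[(w, s)] += delta
            e[w] += delta
            if w != t:
                S.add(w)

    while S:                          # break condition: S is empty
        v = next(iter(S))             # step 1: choose an active node
        # step 2: advance the current arc to the next admissible arc
        while cur[v] != len(adj[v]):
            w = adj[v][cur[v]]
            if r[(v, w)] > 0 and d[v] == d[w] + 1:
                break
            cur[v] += 1
        if cur[v] != len(adj[v]):     # step 3: push over (v, w)
            w = adj[v][cur[v]]
            if w != s and w != t and e[w] == 0:
                S.add(w)              # step 3.1: w becomes active
            delta = min(e[v], r[(v, w)])
            r[(v, w)] -= delta        # steps 3.2 and 3.3
            r[(w, v)] += delta
            e[v] -= delta
            e[w] += delta
            if e[v] == 0:
                S.discard(v)          # step 3.4: v becomes inactive
        else:                         # step 4: relabel v conservatively
            d[v] = 1 + min(d[w] for w in adj[v] if r[(v, w)] > 0)
            cur[v] = 0

    return e[t]                       # flow value reaching the sink

# Illustrative example network; its maximum (s, t)-flow value is 5.
u = {("s", "a"): 3, ("s", "b"): 2, ("a", "b"): 1,
     ("a", "t"): 2, ("b", "t"): 3}
max_flow_value = preflow_push({"s", "a", "b", "t"}, u, "s", "t")
```

The correctness argument does not depend on which active node is chosen, so the arbitrary selection via next(iter(S)) is sufficient for this sketch.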
&lt;br /&gt;
'''Remark:'''&lt;br /&gt;
The preflow-push algorithm is also known as the '''push-relabel''' algorithm. The ''push'' operation is step 3; the ''relabel'' operation is step 4.&lt;br /&gt;
&lt;br /&gt;
'''Proof:'''&lt;br /&gt;
Points 1, 2, and 4 of the invariant and &amp;lt;math&amp;gt;d(s)=n&amp;lt;/math&amp;gt; are obviously fulfilled. The rest of point 3 of the invariant is affected by step 4 only, and the outgoing arcs of &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; are the only arcs where the distance labeling may become invalid. However, the extremely conservative increase of &amp;lt;math&amp;gt;d(v)&amp;lt;/math&amp;gt; ensures point 3 of the invariant.&lt;br /&gt;
&lt;br /&gt;
To prove the variant, consider an iteration in which no &amp;lt;math&amp;gt;d&amp;lt;/math&amp;gt;-value is increased and no saturating push is performed. This means that step 3.2 is applied but does not saturate the arc &amp;lt;math&amp;gt;(v,w)&amp;lt;/math&amp;gt;. Potentially, &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt; becomes active. However, &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; definitely becomes inactive since the push is non-saturating. Now the variant follows from the fact that &amp;lt;math&amp;gt;d(w)=d(v)-1&amp;lt;/math&amp;gt; for an admissible arc &amp;lt;math&amp;gt;(v,w)&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
It remains to show termination; this is proved by the following complexity considerations.&lt;br /&gt;
&lt;br /&gt;
== Complexity ==&lt;br /&gt;
&lt;br /&gt;
'''Statement:'''&lt;br /&gt;
The asymptotic complexity is in &amp;lt;math&amp;gt;\mathcal{O}(n^2m)&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;n=|V|&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;m=|A|&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
'''Proof:'''&lt;br /&gt;
First we show that the total number of relabel operations (step 4 of the main loop) is in &amp;lt;math&amp;gt;\mathcal{O}(n^2)&amp;lt;/math&amp;gt;. To see that, let &amp;lt;math&amp;gt;v\in V\setminus\{s,t\}&amp;lt;/math&amp;gt; be an active node between two iterations of the main loop. A straightforward induction over the number of push operations shows that there is at least one simple &amp;lt;math&amp;gt;(s,v)&amp;lt;/math&amp;gt;-path &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt; with positive flow on all arcs. The [[Basic graph definitions#Transpose of a graph|transpose]] of &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt; is [[Basic flow definitions#Flow-augmenting paths and saturated arcs|augmenting]]. Due to the validity of &amp;lt;math&amp;gt;d&amp;lt;/math&amp;gt; (induction hypothesis), &amp;lt;math&amp;gt;d(v)-d(s)=d(v)-n&amp;lt;/math&amp;gt; cannot be larger than the number of arcs on &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;, which is not larger than &amp;lt;math&amp;gt;n-1&amp;lt;/math&amp;gt;. Therefore, no node label can be larger than &amp;lt;math&amp;gt;2n-1&amp;lt;/math&amp;gt;. Since node labels are nonnegative and increase by at least one in each relabel operation, the claimed upper bound on the relabel operations follows.&lt;br /&gt;
&lt;br /&gt;
From this bound, we may immediately conclude that the current arc of each node is reset &amp;lt;math&amp;gt;\mathcal{O}(n)&amp;lt;/math&amp;gt; times. Since each run through a node's arc list takes at most as many forward steps as the node has incident arcs, the total number of forward steps of the current arcs of all nodes is in &amp;lt;math&amp;gt;\mathcal{O}(nm)\subseteq\mathcal{O}(n^2m)&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The argument in the [[Ahuja-Orlin#Complexity|complexity analysis]] of the [[Ahuja-Orlin]] algorithm that proves the total number of ''saturating'' push operations to be in &amp;lt;math&amp;gt;\mathcal{O}(nm)&amp;lt;/math&amp;gt; applies here as well.&lt;br /&gt;
&lt;br /&gt;
Finally, we consider the ''non-saturating'' push operations. First note that &amp;lt;math&amp;gt;D\geq 0&amp;lt;/math&amp;gt; before and after each iteration. Each relabel operation increases &amp;lt;math&amp;gt;D&amp;lt;/math&amp;gt; by exactly the amount by which the label of the current node is increased. Since node labels are never decreased and are bounded from above by &amp;lt;math&amp;gt;2n-1&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;D&amp;lt;/math&amp;gt; increases by less than &amp;lt;math&amp;gt;2n^2&amp;lt;/math&amp;gt; in total over all relabel operations. On the other hand, a saturating push operation may increase &amp;lt;math&amp;gt;D&amp;lt;/math&amp;gt; by at most &amp;lt;math&amp;gt;2n-1&amp;lt;/math&amp;gt; (namely, in the case that &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt; was not active immediately before the push). In summary, the total amount by which &amp;lt;math&amp;gt;D&amp;lt;/math&amp;gt; is increased throughout the algorithm is in &amp;lt;math&amp;gt;\mathcal{O}(n^2m)&amp;lt;/math&amp;gt;. Due to the variant, the value of &amp;lt;math&amp;gt;D&amp;lt;/math&amp;gt; is decreased by at least one in each non-saturating push operation. This proves the claim.&lt;br /&gt;
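To illustrate the counting argument concretely, the loop can be instrumented with counters; the proof above guarantees that, on any input, no label exceeds 2n-1 and fewer than 2n^2 relabel operations occur. The example network and all names below are illustrative assumptions.

```python
from collections import defaultdict

def preflow_push_counted(nodes, u, s, t):
    # Same sketch of the main loop, instrumented with a relabel counter
    # and the largest label ever assigned.
    n = len(nodes)
    r = defaultdict(int)
    adj = {v: [] for v in nodes}
    for (v, w), cap in u.items():
        if w not in adj[v]:
            adj[v].append(w)
        if v not in adj[w]:
            adj[w].append(v)
        r[(v, w)] += cap
    d = {v: 0 for v in nodes}
    d[s] = n
    e = {v: 0 for v in nodes}
    cur = {v: 0 for v in nodes}
    S = set()
    for w in adj[s]:                      # saturate arcs out of s
        if r[(s, w)] > 0:
            delta = r[(s, w)]
            r[(s, w)] = 0
            r[(w, s)] += delta
            e[w] += delta
            if w != t:
                S.add(w)
    relabels = 0
    max_label = n
    while S:
        v = next(iter(S))
        while cur[v] != len(adj[v]):      # advance the current arc
            w = adj[v][cur[v]]
            if r[(v, w)] > 0 and d[v] == d[w] + 1:
                break
            cur[v] += 1
        if cur[v] != len(adj[v]):         # push
            w = adj[v][cur[v]]
            if w != s and w != t and e[w] == 0:
                S.add(w)
            delta = min(e[v], r[(v, w)])
            r[(v, w)] -= delta
            r[(w, v)] += delta
            e[v] -= delta
            e[w] += delta
            if e[v] == 0:
                S.discard(v)
        else:                             # relabel, and count it
            relabels += 1
            d[v] = 1 + min(d[w] for w in adj[v] if r[(v, w)] > 0)
            max_label = max(max_label, d[v])
            cur[v] = 0
    return e[t], relabels, max_label

u = {("s", "a"): 3, ("s", "b"): 2, ("a", "b"): 1,
     ("a", "t"): 2, ("b", "t"): 3}
value, relabels, max_label = preflow_push_counted({"s", "a", "b", "t"},
                                                  u, "s", "t")
n = 4
```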
&lt;br /&gt;
== Heuristic speedup techniques ==&lt;br /&gt;
&lt;br /&gt;
# After &amp;lt;math&amp;gt;\Omega(n)&amp;lt;/math&amp;gt; iterations of the main loop, the &amp;lt;math&amp;gt;d&amp;lt;/math&amp;gt;-values are recomputed analogously to the induction basis: as the current distance of each node to &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; in the residual network. This recomputation is performed seldom enough that the asymptotic complexity is not increased. In practice, this technique may save many unnecessary relabel steps.&lt;br /&gt;
# The main loop may be decomposed into two phases: First, as much flow as possible is sent into &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt;; second, all surplus flow that cannot reach &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; is sent back to &amp;lt;math&amp;gt;s&amp;lt;/math&amp;gt;. The first phase may be finished once there is no more path in the residual network from any active node to &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt;. A sufficient and easy-to-check condition for that is &amp;lt;math&amp;gt;d(v)\geq n&amp;lt;/math&amp;gt; for all active nodes &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt;. All nodes from which &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; is reachable may be safely disregarded in the second phase. For any other node, to save unnecessary relabel operations, the distance label may be safely increased to the minimum number of arcs in the residual network from this node back to &amp;lt;math&amp;gt;s&amp;lt;/math&amp;gt;.&lt;/div&gt;</summary>
		<author><name>Weihe</name></author>
	</entry>
	<entry>
		<id>https://wiki.algo.informatik.tu-darmstadt.de/index.php?title=Preflow-push&amp;diff=3882</id>
		<title>Preflow-push</title>
		<link rel="alternate" type="text/html" href="https://wiki.algo.informatik.tu-darmstadt.de/index.php?title=Preflow-push&amp;diff=3882"/>
		<updated>2017-06-12T08:33:02Z</updated>

		<summary type="html">&lt;p&gt;Weihe: /* Induction step */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Abstract view ==&lt;br /&gt;
&lt;br /&gt;
'''Also known as:''' ''push-relabel'' algorithm or ''Goldberg-Tarjan'' algorithm&lt;br /&gt;
&lt;br /&gt;
'''Algorithmic problem:'''&lt;br /&gt;
[[Max-Flow Problems#Standard version|max-flow problem (standard version)]]&lt;br /&gt;
&lt;br /&gt;
'''Type of algorithm:'''&lt;br /&gt;
loop.&lt;br /&gt;
&lt;br /&gt;
'''Auxiliary data:'''&lt;br /&gt;
# A nonnegative integral value &amp;lt;math&amp;gt;d(v)&amp;lt;/math&amp;gt; for each node &amp;lt;math&amp;gt;v\in V&amp;lt;/math&amp;gt;.&lt;br /&gt;
# Each node &amp;lt;math&amp;gt;v\in V\setminus\{t\}&amp;lt;/math&amp;gt; has a '''current arc''', which may be implemented as an iterator on the list of outgoing residual arcs of &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt;.&lt;br /&gt;
# The [[Basic flow definitions#Preflow|excess]] &amp;lt;math&amp;gt;e_f(v)&amp;lt;/math&amp;gt; of a node &amp;lt;math&amp;gt;v\in V\setminus\{s,t\}&amp;lt;/math&amp;gt; with respect to the current [[Basic flow definitions#Preflow|preflow]].&lt;br /&gt;
# A (dynamically changing) [[Sets and sequences|set]] &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; of nodes.&lt;br /&gt;
&lt;br /&gt;
'''Invariant:'''&lt;br /&gt;
Before and after each iteration:&lt;br /&gt;
# For each arc &amp;lt;math&amp;gt;a\in A&amp;lt;/math&amp;gt;, we have &amp;lt;math&amp;gt;0\leq f(a)\leq u(a)&amp;lt;/math&amp;gt;. If all upper bounds are integral, all &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;-values are integral, too.&lt;br /&gt;
# For each node &amp;lt;math&amp;gt;v\in V\setminus\{s,t\}&amp;lt;/math&amp;gt;, we have &amp;lt;math&amp;gt;e_f(v)\geq 0&amp;lt;/math&amp;gt;. In other words, &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt; is a [[Basic flow definitions#Preflow|preflow]].&lt;br /&gt;
# The node labels &amp;lt;math&amp;gt;d&amp;lt;/math&amp;gt; form a [[Basic flow definitions#Valid distance labeling|valid distance labeling]], and &amp;lt;math&amp;gt;d(s)=n:=|V|&amp;lt;/math&amp;gt;.&lt;br /&gt;
# The currently [[Basic flow definitions#Preflow|active nodes]] are stored in &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;.&lt;br /&gt;
# The current arc of a node is an outgoing arc of that node in the residual graph. In the list of all of these arcs, no admissible arc precedes the current arc.&lt;br /&gt;
&lt;br /&gt;
'''Variant:'''&lt;br /&gt;
No label &amp;lt;math&amp;gt;d(\cdot)&amp;lt;/math&amp;gt; is ever decreased. In each iteration, one of the following three actions will take place:&lt;br /&gt;
# The label &amp;lt;math&amp;gt;d(v)&amp;lt;/math&amp;gt; of at least one node &amp;lt;math&amp;gt;v\in V\setminus\{s,t\}&amp;lt;/math&amp;gt; is increased.&lt;br /&gt;
# A saturating push is performed.&lt;br /&gt;
# The value of &amp;lt;math&amp;gt;D:=\sum_{v\in V\setminus\{s,t\}\atop e_f(v)&amp;gt;0}d(v)&amp;lt;/math&amp;gt; decreases.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Break condition:'''&lt;br /&gt;
&amp;lt;math&amp;gt;S=\emptyset&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
== Induction basis ==&lt;br /&gt;
&lt;br /&gt;
'''Abstract view:'''&lt;br /&gt;
# For all arcs &amp;lt;math&amp;gt;a\in A&amp;lt;/math&amp;gt;, set &amp;lt;math&amp;gt;f(a):=0&amp;lt;/math&amp;gt;.&lt;br /&gt;
# For each arc &amp;lt;math&amp;gt;a=(s,v)\in A&amp;lt;/math&amp;gt;, overwrite this value by &amp;lt;math&amp;gt;f(a):=u(a)&amp;lt;/math&amp;gt; and, if &amp;lt;math&amp;gt;v\neq t&amp;lt;/math&amp;gt;, put &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; into &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;.&lt;br /&gt;
# Compute a [[Basic flow definitions#Valid distance labeling|valid distance labeling]] &amp;lt;math&amp;gt;d&amp;lt;/math&amp;gt; for &amp;lt;math&amp;gt;V\setminus\{s\}&amp;lt;/math&amp;gt; with respect to &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;, for example, the true distances from all nodes to &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; in the residual network of &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;.&lt;br /&gt;
# Set &amp;lt;math&amp;gt;d(s):=n&amp;lt;/math&amp;gt;.&lt;br /&gt;
# For all &amp;lt;math&amp;gt;v\in V\setminus\{s,t\}&amp;lt;/math&amp;gt;, reset the current arc of &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; so as to point to the first arc in the list of outgoing arcs of &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
'''Proof:'''&lt;br /&gt;
For the [[Basic graph definitions#Subgraphs|subgraph induced]] by &amp;lt;math&amp;gt;V\setminus\{s\}&amp;lt;/math&amp;gt;, the arguments in the [[Ahuja-Orlin#Correctness|correctness proof]] for the [[Ahuja-Orlin]] algorithm prove that the &amp;lt;math&amp;gt;d&amp;lt;/math&amp;gt;-labels form a valid distance labeling here as well. For &amp;lt;math&amp;gt;s&amp;lt;/math&amp;gt;, there is nothing to show because all outgoing arcs are saturated.&lt;br /&gt;
&lt;br /&gt;
== Induction step ==&lt;br /&gt;
&lt;br /&gt;
'''Abstract view:'''&lt;br /&gt;
# Choose an [[Basic flow definitions#Preflow|active node]] &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; from &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;.&lt;br /&gt;
# While the current arc of &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; is neither void nor [[Basic flow definitions#Valid distance labeling|admissible]], move it one step forward.&lt;br /&gt;
# If the current arc of &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; is now ''not'' void but an admissible outgoing arc, say &amp;lt;math&amp;gt;(v,w)&amp;lt;/math&amp;gt;:&lt;br /&gt;
## If &amp;lt;math&amp;gt;w\neq s,t&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;e_f(w)=0&amp;lt;/math&amp;gt;, insert &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt; in &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;.&lt;br /&gt;
## Increase the flow over &amp;lt;math&amp;gt;(v,w)&amp;lt;/math&amp;gt; by the minimum of &amp;lt;math&amp;gt;e_f(v)&amp;lt;/math&amp;gt; and the residual capacity of &amp;lt;math&amp;gt;(v,w)&amp;lt;/math&amp;gt;.&lt;br /&gt;
## Increase &amp;lt;math&amp;gt;e_f(w)&amp;lt;/math&amp;gt; by that value and decrease &amp;lt;math&amp;gt;e_f(v)&amp;lt;/math&amp;gt; by the same value.&lt;br /&gt;
## If &amp;lt;math&amp;gt;e_f(v)=0&amp;lt;/math&amp;gt; now, extract &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; from &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;.&lt;br /&gt;
# Otherwise:&lt;br /&gt;
## Let &amp;lt;math&amp;gt;d_\min&amp;lt;/math&amp;gt; denote the minimum of &amp;lt;math&amp;gt;d(w)&amp;lt;/math&amp;gt; over all arcs &amp;lt;math&amp;gt;(v,w)&amp;lt;/math&amp;gt; leaving &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; in the residual network.&lt;br /&gt;
## Set &amp;lt;math&amp;gt;d(v):=d_\min+1&amp;lt;/math&amp;gt;.&lt;br /&gt;
## Reset the current arc of &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; so as to point to the beginning of the list of outgoing arcs of &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
'''Remark:'''&lt;br /&gt;
The preflow-push algorithm is also known as the '''push-relabel''' algorithm. The ''push'' operation is step 3; the ''relabel'' operation is step 4.&lt;br /&gt;
&lt;br /&gt;
'''Proof:'''&lt;br /&gt;
Points 1, 2, and 4 of the invariant and &amp;lt;math&amp;gt;d(s)=n&amp;lt;/math&amp;gt; are obviously fulfilled. The rest of point 3 of the invariant is affected by step 4 only, and the outgoing arcs of &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; are the only arcs where the distance labeling may become invalid. However, the extremely conservative increase of &amp;lt;math&amp;gt;d(v)&amp;lt;/math&amp;gt; ensures point 3 of the invariant.&lt;br /&gt;
&lt;br /&gt;
To prove the variant, consider an iteration in which no &amp;lt;math&amp;gt;d&amp;lt;/math&amp;gt;-value is increased and no saturating push is performed. This means that step 3.2 is applied but does not saturate the arc &amp;lt;math&amp;gt;(v,w)&amp;lt;/math&amp;gt;. Potentially, &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt; becomes active. However, &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; definitely becomes inactive since the push is non-saturating. Now the variant follows from the fact that &amp;lt;math&amp;gt;d(w)=d(v)-1&amp;lt;/math&amp;gt; for an admissible arc &amp;lt;math&amp;gt;(v,w)&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
It remains to show termination; this is proved by the following complexity considerations.&lt;br /&gt;
&lt;br /&gt;
== Complexity ==&lt;br /&gt;
&lt;br /&gt;
'''Statement:'''&lt;br /&gt;
The asymptotic complexity is in &amp;lt;math&amp;gt;\mathcal{O}(n^2m)&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;n=|V|&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;m=|A|&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
'''Proof:'''&lt;br /&gt;
First we show that the total number of relabel operations (step 4 of the main loop) is in &amp;lt;math&amp;gt;\mathcal{O}(n^2)&amp;lt;/math&amp;gt;. To see that, let &amp;lt;math&amp;gt;v\in V\setminus\{s,t\}&amp;lt;/math&amp;gt; be an active node between two iterations of the main loop. A straightforward induction over the number of push operations shows that there is at least one simple &amp;lt;math&amp;gt;(s,v)&amp;lt;/math&amp;gt;-path &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt; with positive flow on all arcs. The [[Basic graph definitions#Transpose of a graph|transpose]] of &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt; is [[Basic flow definitions#Flow-augmenting paths and saturated arcs|augmenting]]. Due to the validity of &amp;lt;math&amp;gt;d&amp;lt;/math&amp;gt; (induction hypothesis), &amp;lt;math&amp;gt;d(v)-d(s)=d(v)-n&amp;lt;/math&amp;gt; cannot be larger than the number of arcs on &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;, which is not larger than &amp;lt;math&amp;gt;n-1&amp;lt;/math&amp;gt;. Therefore, no node label can be larger than &amp;lt;math&amp;gt;2n-1&amp;lt;/math&amp;gt;. Since node labels are nonnegative and increase by at least one in each relabel operation, the claimed upper bound on the relabel operations follows.&lt;br /&gt;
&lt;br /&gt;
From this bound, we may immediately conclude that the current arc of each node is reset &amp;lt;math&amp;gt;\mathcal{O}(n)&amp;lt;/math&amp;gt; times. Since each run through a node's arc list takes at most as many forward steps as the node has incident arcs, the total number of forward steps of the current arcs of all nodes is in &amp;lt;math&amp;gt;\mathcal{O}(nm)\subseteq\mathcal{O}(n^2m)&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The argument in the [[Ahuja-Orlin#Complexity|complexity analysis]] of the [[Ahuja-Orlin]] algorithm that proves the total number of ''saturating'' push operations to be in &amp;lt;math&amp;gt;\mathcal{O}(nm)&amp;lt;/math&amp;gt; applies here as well.&lt;br /&gt;
&lt;br /&gt;
Finally, we consider the ''non-saturating'' push operations. First note that &amp;lt;math&amp;gt;D\geq 0&amp;lt;/math&amp;gt; before and after each iteration. Each relabel operation increases &amp;lt;math&amp;gt;D&amp;lt;/math&amp;gt; by exactly the amount by which the label of the current node is increased. Since node labels are never decreased and are bounded from above by &amp;lt;math&amp;gt;2n-1&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;D&amp;lt;/math&amp;gt; increases by less than &amp;lt;math&amp;gt;2n^2&amp;lt;/math&amp;gt; in total over all relabel operations. On the other hand, a saturating push operation may increase &amp;lt;math&amp;gt;D&amp;lt;/math&amp;gt; by at most &amp;lt;math&amp;gt;2n-1&amp;lt;/math&amp;gt; (namely, in the case that &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt; was not active immediately before the push). In summary, the total amount by which &amp;lt;math&amp;gt;D&amp;lt;/math&amp;gt; is increased throughout the algorithm is in &amp;lt;math&amp;gt;\mathcal{O}(n^2m)&amp;lt;/math&amp;gt;. Due to the variant, the value of &amp;lt;math&amp;gt;D&amp;lt;/math&amp;gt; is decreased by at least one in each non-saturating push operation. This proves the claim.&lt;br /&gt;
&lt;br /&gt;
== Heuristic speedup techniques ==&lt;br /&gt;
&lt;br /&gt;
# After &amp;lt;math&amp;gt;\Omega(n)&amp;lt;/math&amp;gt; iterations of the main loop, the &amp;lt;math&amp;gt;d&amp;lt;/math&amp;gt;-values are recomputed analogously to the induction basis: as the current distance of each node to &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; in the residual network. This recomputation is performed seldom enough that the asymptotic complexity is not increased. In practice, this technique may save many unnecessary relabel steps.&lt;br /&gt;
# The main loop may be decomposed into two phases: First, as much flow as possible is sent into &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt;; second, all surplus flow that cannot reach &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; is sent back to &amp;lt;math&amp;gt;s&amp;lt;/math&amp;gt;. The first phase may be finished once there is no more path in the residual network from any active node to &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt;. A sufficient and easy-to-check condition for that is &amp;lt;math&amp;gt;d(v)\geq n&amp;lt;/math&amp;gt; for all active nodes &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt;. All nodes from which &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; is reachable may be safely disregarded in the second phase. For any other node, to save unnecessary relabel operations, the distance label may be safely increased to the minimum number of arcs in the residual network from this node back to &amp;lt;math&amp;gt;s&amp;lt;/math&amp;gt;.&lt;/div&gt;</summary>
		<author><name>Weihe</name></author>
	</entry>
	<entry>
		<id>https://wiki.algo.informatik.tu-darmstadt.de/index.php?title=Preflow-push&amp;diff=3881</id>
		<title>Preflow-push</title>
		<link rel="alternate" type="text/html" href="https://wiki.algo.informatik.tu-darmstadt.de/index.php?title=Preflow-push&amp;diff=3881"/>
		<updated>2017-06-12T08:32:19Z</updated>

		<summary type="html">&lt;p&gt;Weihe: /* Induction step */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Abstract view ==&lt;br /&gt;
&lt;br /&gt;
'''Also known as:''' ''push-relabel'' algorithm or ''Goldberg-Tarjan'' algorithm&lt;br /&gt;
&lt;br /&gt;
'''Algorithmic problem:'''&lt;br /&gt;
[[Max-Flow Problems#Standard version|max-flow problem (standard version)]]&lt;br /&gt;
&lt;br /&gt;
'''Type of algorithm:'''&lt;br /&gt;
loop.&lt;br /&gt;
&lt;br /&gt;
'''Auxiliary data:'''&lt;br /&gt;
# A nonnegative integral value &amp;lt;math&amp;gt;d(v)&amp;lt;/math&amp;gt; for each node &amp;lt;math&amp;gt;v\in V&amp;lt;/math&amp;gt;.&lt;br /&gt;
# Each node &amp;lt;math&amp;gt;v\in V\setminus\{t\}&amp;lt;/math&amp;gt; has a '''current arc''', which may be implemented as an iterator on the list of outgoing residual arcs of &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt;.&lt;br /&gt;
# The [[Basic flow definitions#Preflow|excess]] &amp;lt;math&amp;gt;e_f(v)&amp;lt;/math&amp;gt; of a node &amp;lt;math&amp;gt;v\in V\setminus\{s,t\}&amp;lt;/math&amp;gt; with respect to the current [[Basic flow definitions#Preflow|preflow]].&lt;br /&gt;
# A (dynamically changing) [[Sets and sequences|set]] &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; of nodes.&lt;br /&gt;
&lt;br /&gt;
'''Invariant:'''&lt;br /&gt;
Before and after each iteration:&lt;br /&gt;
# For each arc &amp;lt;math&amp;gt;a\in A&amp;lt;/math&amp;gt;, we have &amp;lt;math&amp;gt;0\leq f(a)\leq u(a)&amp;lt;/math&amp;gt;. If all upper bounds are integral, all &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;-values are integral, too.&lt;br /&gt;
# For each node &amp;lt;math&amp;gt;v\in V\setminus\{s,t\}&amp;lt;/math&amp;gt;, we have &amp;lt;math&amp;gt;e_f(v)\geq 0&amp;lt;/math&amp;gt;. In other words, &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt; is a [[Basic flow definitions#Preflow|preflow]].&lt;br /&gt;
# The node labels &amp;lt;math&amp;gt;d&amp;lt;/math&amp;gt; form a [[Basic flow definitions#Valid distance labeling|valid distance labeling]], and &amp;lt;math&amp;gt;d(s)=n:=|V|&amp;lt;/math&amp;gt;.&lt;br /&gt;
# The currently [[Basic flow definitions#Preflow|active nodes]] are stored in &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;.&lt;br /&gt;
# The current arc of a node is an outgoing arc of that node in the residual graph. In the list of all of these arcs, no admissible arc precedes the current arc.&lt;br /&gt;
&lt;br /&gt;
'''Variant:'''&lt;br /&gt;
No label &amp;lt;math&amp;gt;d(\cdot)&amp;lt;/math&amp;gt; is ever decreased. In each iteration, one of the following three actions will take place:&lt;br /&gt;
# The label &amp;lt;math&amp;gt;d(v)&amp;lt;/math&amp;gt; of at least one node &amp;lt;math&amp;gt;v\in V\setminus\{s,t\}&amp;lt;/math&amp;gt; is increased.&lt;br /&gt;
# A saturating push is performed.&lt;br /&gt;
# The value of &amp;lt;math&amp;gt;D:=\sum_{v\in V\setminus\{s,t\}\atop e_f(v)&amp;gt;0}d(v)&amp;lt;/math&amp;gt; decreases.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Break condition:'''&lt;br /&gt;
&amp;lt;math&amp;gt;S=\emptyset&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
== Induction basis ==&lt;br /&gt;
&lt;br /&gt;
'''Abstract view:'''&lt;br /&gt;
# For all arcs &amp;lt;math&amp;gt;a\in A&amp;lt;/math&amp;gt;, set &amp;lt;math&amp;gt;f(a):=0&amp;lt;/math&amp;gt;.&lt;br /&gt;
# For each arc &amp;lt;math&amp;gt;a=(s,v)\in A&amp;lt;/math&amp;gt;, overwrite this value by &amp;lt;math&amp;gt;f(a):=u(a)&amp;lt;/math&amp;gt; and, if &amp;lt;math&amp;gt;v\neq t&amp;lt;/math&amp;gt;, put &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; into &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;.&lt;br /&gt;
# Compute a [[Basic flow definitions#Valid distance labeling|valid distance labeling]] &amp;lt;math&amp;gt;d&amp;lt;/math&amp;gt; for &amp;lt;math&amp;gt;V\setminus\{s\}&amp;lt;/math&amp;gt; with respect to &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;, for example, the true distances from all nodes to &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; in the residual network of &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;.&lt;br /&gt;
# Set &amp;lt;math&amp;gt;d(s):=n&amp;lt;/math&amp;gt;.&lt;br /&gt;
# For all &amp;lt;math&amp;gt;v\in V\setminus\{s,t\}&amp;lt;/math&amp;gt;, reset the current arc of &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; so as to point to the first arc in the list of outgoing arcs of &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
'''Proof:'''&lt;br /&gt;
For the [[Basic graph definitions#Subgraphs|subgraph induced]] by &amp;lt;math&amp;gt;V\setminus\{s\}&amp;lt;/math&amp;gt;, the arguments in the [[Ahuja-Orlin#Correctness|correctness proof]] for the [[Ahuja-Orlin]] algorithm prove that the &amp;lt;math&amp;gt;d&amp;lt;/math&amp;gt;-labels form a valid distance labeling here as well. For &amp;lt;math&amp;gt;s&amp;lt;/math&amp;gt;, there is nothing to show because all outgoing arcs are saturated.&lt;br /&gt;
&lt;br /&gt;
== Induction step ==&lt;br /&gt;
&lt;br /&gt;
'''Abstract view:'''&lt;br /&gt;
# Choose an [[Basic flow definitions#Preflow|active node]] &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; from &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;.&lt;br /&gt;
# While the current arc of &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; is neither void nor [[Basic flow definitions#Valid distance labeling|admissible]], move it one step forward.&lt;br /&gt;
# If the current arc of &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; is now ''not'' void but an admissible outgoing arc, say &amp;lt;math&amp;gt;(v,w)&amp;lt;/math&amp;gt;:&lt;br /&gt;
## If &amp;lt;math&amp;gt;w\neq s,t&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;e_f(w)=0&amp;lt;/math&amp;gt;, insert &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt; in &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;.&lt;br /&gt;
## Increase the flow over &amp;lt;math&amp;gt;(v,w)&amp;lt;/math&amp;gt; by the minimum of &amp;lt;math&amp;gt;e_f(v)&amp;lt;/math&amp;gt; and the residual capacity of &amp;lt;math&amp;gt;(v,w)&amp;lt;/math&amp;gt;.&lt;br /&gt;
## Increase &amp;lt;math&amp;gt;e_f(w)&amp;lt;/math&amp;gt; by that value and decrease &amp;lt;math&amp;gt;e_f(v)&amp;lt;/math&amp;gt; by the same value.&lt;br /&gt;
## If &amp;lt;math&amp;gt;e_f(v)=0&amp;lt;/math&amp;gt; now, extract &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; from &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;.&lt;br /&gt;
# Otherwise:&lt;br /&gt;
## Let &amp;lt;math&amp;gt;d_\min&amp;lt;/math&amp;gt; denote the minimum of &amp;lt;math&amp;gt;d(w)&amp;lt;/math&amp;gt; over all arcs &amp;lt;math&amp;gt;(v,w)&amp;lt;/math&amp;gt; leaving &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; in the residual network.&lt;br /&gt;
## Set &amp;lt;math&amp;gt;d(v):=d_\min+1&amp;lt;/math&amp;gt;.&lt;br /&gt;
## Reset the current arc of &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; so as to point to the beginning of the list of outgoing arcs of &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
'''Remark:'''&lt;br /&gt;
The preflow-push algorithm is also known as the '''push-relabel''' algorithm. The ''push'' operation is step 3; the ''relabel'' operation is step 4.&lt;br /&gt;
&lt;br /&gt;
'''Proof:'''&lt;br /&gt;
Points 1, 2, and 4 of the invariant and &amp;lt;math&amp;gt;d(s)=n&amp;lt;/math&amp;gt; are obviously fulfilled. The rest of point 3 of the invariant is affected by step 4 only, and the outgoing arcs of &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; are the only arcs where the distance labeling may become invalid. However, the extremely conservative increase of &amp;lt;math&amp;gt;d(v)&amp;lt;/math&amp;gt; ensures point 3 of the invariant.&lt;br /&gt;
&lt;br /&gt;
To prove the variant, consider an iteration in which no &amp;lt;math&amp;gt;d&amp;lt;/math&amp;gt;-value is increased and no saturating push is performed. This means that step 3.2 is applied but does not saturate the arc &amp;lt;math&amp;gt;(v,w)&amp;lt;/math&amp;gt;. Potentially, &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt; becomes active. However, &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; definitely becomes inactive since the push is non-saturating. Now the variant follows from the fact that &amp;lt;math&amp;gt;d(w)=d(v)-1&amp;lt;/math&amp;gt; for an admissible arc &amp;lt;math&amp;gt;(v,w)&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
It remains to show termination; this is proved by the following complexity considerations.&lt;br /&gt;
&lt;br /&gt;
== Complexity ==&lt;br /&gt;
&lt;br /&gt;
'''Statement:'''&lt;br /&gt;
The asymptotic complexity is in &amp;lt;math&amp;gt;\mathcal{O}(n^2m)&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;n=|V|&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;m=|A|&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
'''Proof:'''&lt;br /&gt;
First we show that the total number of relabel operations (step 4 of the main loop) is in &amp;lt;math&amp;gt;\mathcal{O}(n^2)&amp;lt;/math&amp;gt;. To see that, let &amp;lt;math&amp;gt;v\in V\setminus\{s,t\}&amp;lt;/math&amp;gt; be an active node between two iterations of the main loop. A straightforward induction over the number of push operations shows that there is at least one simple &amp;lt;math&amp;gt;(s,v)&amp;lt;/math&amp;gt;-path &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt; with positive flow on all arcs. The [[Basic graph definitions#Transpose of a graph|transpose]] of &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt; is [[Basic flow definitions#Flow-augmenting paths and saturated arcs|augmenting]]. Due to the validity of &amp;lt;math&amp;gt;d&amp;lt;/math&amp;gt; (induction hypothesis), &amp;lt;math&amp;gt;d(v)-d(s)=d(v)-n&amp;lt;/math&amp;gt; cannot be larger than the number of arcs on &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;, which is not larger than &amp;lt;math&amp;gt;n-1&amp;lt;/math&amp;gt;. Therefore, no node label can be larger than &amp;lt;math&amp;gt;2n-1&amp;lt;/math&amp;gt;. Since node labels are nonnegative and increase at least by one in each relabel operation, the claimed upper bound on the relabel operations follows.&lt;br /&gt;
&lt;br /&gt;
From this bound, we may immediately conclude that the current arc of a node is reset &amp;lt;math&amp;gt;\mathcal{O}(n)&amp;lt;/math&amp;gt; times, so the total number of forward steps of the current arcs of all nodes is in &amp;lt;math&amp;gt;\mathcal{O}(n^3)\subseteq\mathcal{O}(n^2m)&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The argument in the [[Ahuja-Orlin#Complexity|complexity analysis]] of the [[Ahuja-Orlin]] algorithm that proves the total number of ''saturating'' push operations to be in &amp;lt;math&amp;gt;\mathcal{O}(nm)&amp;lt;/math&amp;gt; applies here as well.&lt;br /&gt;
&lt;br /&gt;
Finally, we consider the ''non-saturating'' push operations. First note that &amp;lt;math&amp;gt;D\geq 0&amp;lt;/math&amp;gt; before and after each iteration. The value of &amp;lt;math&amp;gt;D&amp;lt;/math&amp;gt; is increased in each relabel operation exactly by the amount by which the label of the current node is increased. Since node labels are never decreased and bounded from above by &amp;lt;math&amp;gt;2n-1&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;D&amp;lt;/math&amp;gt; increases by less than &amp;lt;math&amp;gt;2n^2&amp;lt;/math&amp;gt; in total over all relabel operations. On the other hand, a saturating push operation may increase &amp;lt;math&amp;gt;D&amp;lt;/math&amp;gt; by at most &amp;lt;math&amp;gt;2n-1&amp;lt;/math&amp;gt; (namely, in the case that &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt; was not active immediately before the push). In summary, the total sum of all values by which &amp;lt;math&amp;gt;D&amp;lt;/math&amp;gt; is increased throughout the algorithm is in &amp;lt;math&amp;gt;\mathcal{O}(n^2m)&amp;lt;/math&amp;gt;. Due to the variant, the value of &amp;lt;math&amp;gt;D&amp;lt;/math&amp;gt; is decreased by at least one in each non-saturating push operation. This proves the claim.&lt;br /&gt;
&lt;br /&gt;
== Heuristic speedup techniques ==&lt;br /&gt;
&lt;br /&gt;
# After every &amp;lt;math&amp;gt;\Omega(n)&amp;lt;/math&amp;gt; iterations of the main loop, the &amp;lt;math&amp;gt;d&amp;lt;/math&amp;gt;-values are recomputed analogously to the induction basis: as the current distance of each node to &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; in the residual network. This recomputation is performed seldom enough that the asymptotic complexity is not increased. In practice, this technique may save many unnecessary relabel steps.&lt;br /&gt;
# The main loop may be decomposed into two phases: First, as much flow as possible is sent into &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt;; second, all surplus flow that cannot reach &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; is sent back to &amp;lt;math&amp;gt;s&amp;lt;/math&amp;gt;. The first phase may be finished once there is no more path in the residual network from any active node to &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt;. A sufficient and easy-to-check condition for that is &amp;lt;math&amp;gt;d(v)\geq n&amp;lt;/math&amp;gt; for all active nodes &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt;. All nodes from which &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; is reachable may be safely disregarded in the second phase. For any other node, to save unnecessary relabel operations, the distance label may be safely increased to the minimum number of arcs in the residual network from this node back to &amp;lt;math&amp;gt;s&amp;lt;/math&amp;gt;.&lt;/div&gt;</summary>
		<author><name>Weihe</name></author>
	</entry>
	<entry>
		<id>https://wiki.algo.informatik.tu-darmstadt.de/index.php?title=Preflow-push&amp;diff=3880</id>
		<title>Preflow-push</title>
		<link rel="alternate" type="text/html" href="https://wiki.algo.informatik.tu-darmstadt.de/index.php?title=Preflow-push&amp;diff=3880"/>
		<updated>2017-06-12T08:32:05Z</updated>

		<summary type="html">&lt;p&gt;Weihe: /* Induction step */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Abstract view ==&lt;br /&gt;
&lt;br /&gt;
'''Also known as:''' ''push-relabel'' algorithm or ''Goldberg-Tarjan'' algorithm&lt;br /&gt;
&lt;br /&gt;
'''Algorithmic problem:'''&lt;br /&gt;
[[Max-Flow Problems#Standard version|max-flow problem (standard version)]]&lt;br /&gt;
&lt;br /&gt;
'''Type of algorithm:'''&lt;br /&gt;
loop.&lt;br /&gt;
&lt;br /&gt;
'''Auxiliary data:'''&lt;br /&gt;
# A nonnegative integral value &amp;lt;math&amp;gt;d(v)&amp;lt;/math&amp;gt; for each node &amp;lt;math&amp;gt;v\in V&amp;lt;/math&amp;gt;.&lt;br /&gt;
# Each node &amp;lt;math&amp;gt;v\in V\setminus\{t\}&amp;lt;/math&amp;gt; has a '''current arc''', which may be implemented as an iterator on the list of outgoing residual arcs of &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt;.&lt;br /&gt;
# The [[Basic flow definitions#Preflow|excess]] &amp;lt;math&amp;gt;e_f(v)&amp;lt;/math&amp;gt; of a node &amp;lt;math&amp;gt;v\in V\setminus\{s,t\}&amp;lt;/math&amp;gt; with respect to the current [[Basic flow definitions#Preflow|preflow]].&lt;br /&gt;
# A (dynamically changing) [[Sets and sequences|set]] &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; of nodes.&lt;br /&gt;
&lt;br /&gt;
'''Invariant:'''&lt;br /&gt;
Before and after each iteration:&lt;br /&gt;
# For each arc &amp;lt;math&amp;gt;a\in A&amp;lt;/math&amp;gt;, we have &amp;lt;math&amp;gt;0\leq f(a)\leq u(a)&amp;lt;/math&amp;gt;. If all upper bounds are integral, all &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;-values are integral, too.&lt;br /&gt;
# For each node &amp;lt;math&amp;gt;v\in V\setminus\{s,t\}&amp;lt;/math&amp;gt;, we have &amp;lt;math&amp;gt;e_f(v)\geq 0&amp;lt;/math&amp;gt;. In other words, &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt; is a [[Basic flow definitions#Preflow|preflow]].&lt;br /&gt;
# The node labels &amp;lt;math&amp;gt;d&amp;lt;/math&amp;gt; form a [[Basic flow definitions#Valid distance labeling|valid distance labeling]] with &amp;lt;math&amp;gt;d(s)=n:=|V|&amp;lt;/math&amp;gt;.&lt;br /&gt;
# The currently [[Basic flow definitions#Preflow|active nodes]] are stored in &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;.&lt;br /&gt;
# The current arc of a node is one of that node's outgoing arcs in the residual graph. In the list of all of these arcs, no admissible arc precedes the current arc.&lt;br /&gt;
&lt;br /&gt;
'''Variant:'''&lt;br /&gt;
No label &amp;lt;math&amp;gt;d(\cdot)&amp;lt;/math&amp;gt; is ever decreased. In each iteration, one of the following three actions will take place:&lt;br /&gt;
# The label &amp;lt;math&amp;gt;d(v)&amp;lt;/math&amp;gt; of at least one node &amp;lt;math&amp;gt;v\in V\setminus\{s,t\}&amp;lt;/math&amp;gt; is increased.&lt;br /&gt;
# A saturating push is performed.&lt;br /&gt;
# The value of &amp;lt;math&amp;gt;D:=\sum_{v\in V\setminus\{s,t\}\atop e_f(v)&amp;gt;0}d(v)&amp;lt;/math&amp;gt; decreases.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Break condition:'''&lt;br /&gt;
&amp;lt;math&amp;gt;S=\emptyset&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
== Induction basis ==&lt;br /&gt;
&lt;br /&gt;
'''Abstract view:'''&lt;br /&gt;
# For all arcs &amp;lt;math&amp;gt;a\in A&amp;lt;/math&amp;gt;, set &amp;lt;math&amp;gt;f(a):=0&amp;lt;/math&amp;gt;.&lt;br /&gt;
# For each arc &amp;lt;math&amp;gt;a=(s,v)\in A&amp;lt;/math&amp;gt;, overwrite this value by &amp;lt;math&amp;gt;f(a):=u(a)&amp;lt;/math&amp;gt; and put &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; into &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;.&lt;br /&gt;
# Compute a [[Basic flow definitions#Valid distance labeling|valid distance labeling]] &amp;lt;math&amp;gt;d&amp;lt;/math&amp;gt; for &amp;lt;math&amp;gt;V\setminus\{s\}&amp;lt;/math&amp;gt; with respect to &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;, for example, the true distances from all nodes to &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; in the residual network of &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;.&lt;br /&gt;
# Set &amp;lt;math&amp;gt;d(s):=n&amp;lt;/math&amp;gt;.&lt;br /&gt;
# For all &amp;lt;math&amp;gt;v\in V\setminus\{s,t\}&amp;lt;/math&amp;gt;, reset the current arc of &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; so as to point to the first arc in the list of outgoing arcs of &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
'''Proof:'''&lt;br /&gt;
For the [[Basic graph definitions#Subgraphs|subgraph induced]] by &amp;lt;math&amp;gt;V\setminus\{s\}&amp;lt;/math&amp;gt;, the arguments in the [[Ahuja-Orlin#Correctness|correctness proof]] for the [[Ahuja-Orlin]] algorithm prove that the &amp;lt;math&amp;gt;d&amp;lt;/math&amp;gt;-labels form a valid distance labeling here as well. For &amp;lt;math&amp;gt;s&amp;lt;/math&amp;gt;, there is nothing to show because all outgoing arcs are saturated.&lt;br /&gt;
&lt;br /&gt;
== Induction step ==&lt;br /&gt;
&lt;br /&gt;
'''Abstract view:'''&lt;br /&gt;
# Choose an [[Basic flow definitions#Preflow|active node]] &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; from &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;.&lt;br /&gt;
# While the current arc of &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; is neither void nor [[Basic flow definitions#Valid distance labeling|admissible]], move the current arc one step forward.&lt;br /&gt;
# If the current arc of &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; is ''not'' void now but an (admissible) outgoing arc &amp;lt;math&amp;gt;(v,w)&amp;lt;/math&amp;gt;, say:&lt;br /&gt;
## If &amp;lt;math&amp;gt;w\neq s&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;e_f(w)=0&amp;lt;/math&amp;gt;, insert &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt; in &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;.&lt;br /&gt;
## Increase the flow over &amp;lt;math&amp;gt;(v,w)&amp;lt;/math&amp;gt; by the minimum of &amp;lt;math&amp;gt;e_f(v)&amp;lt;/math&amp;gt; and the residual capacity of &amp;lt;math&amp;gt;(v,w)&amp;lt;/math&amp;gt;.&lt;br /&gt;
## Increase &amp;lt;math&amp;gt;e_f(w)&amp;lt;/math&amp;gt; by that value and decrease &amp;lt;math&amp;gt;e_f(v)&amp;lt;/math&amp;gt; by the same value.&lt;br /&gt;
## If &amp;lt;math&amp;gt;e_f(v)=0&amp;lt;/math&amp;gt; now, extract &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; from &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;.&lt;br /&gt;
# Otherwise:&lt;br /&gt;
## Let &amp;lt;math&amp;gt;d_\min&amp;lt;/math&amp;gt; denote the minimum label &amp;lt;math&amp;gt;d(w)&amp;lt;/math&amp;gt; over all arcs &amp;lt;math&amp;gt;(v,w)&amp;lt;/math&amp;gt; in the residual network.&lt;br /&gt;
## Set &amp;lt;math&amp;gt;d(v):=d_\min+1&amp;lt;/math&amp;gt;.&lt;br /&gt;
## Reset the current arc of &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; so as to point to the beginning of the list of outgoing arcs of &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt;.&lt;br /&gt;
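The induction basis and induction step above can be combined into one short program. The following is a minimal Python sketch, not taken from this article: it assumes an adjacency-matrix residual network and a FIFO choice of active nodes, and it simplifies the current-arc mechanism to a plain scan of the outgoing arcs, so it reproduces the push and relabel logic but not the exact bookkeeping analyzed below. All names are illustrative.

```python
from collections import deque

def max_flow_preflow_push(n, arcs, s, t):
    # Hypothetical representation: nodes 0..n-1, arcs maps (v, w) to capacity u(v, w).
    # r[v][w] is the residual capacity; a push over (v, w) decreases r[v][w]
    # and increases the reverse residual capacity r[w][v].
    r = [[0] * n for _ in range(n)]
    for (v, w), cap in arcs.items():
        r[v][w] += cap
    d = [0] * n            # distance labels
    excess = [0] * n       # excess e_f(v) of each node
    d[s] = n
    active = deque()       # the set S of active nodes (FIFO here)
    for w in range(n):     # induction basis: saturate all arcs leaving s
        if r[s][w] > 0:
            excess[w] += r[s][w]
            r[w][s] += r[s][w]
            r[s][w] = 0
            if w != s and w != t:
                active.append(w)
    while active:          # break condition: S is empty
        v = active.popleft()
        while excess[v] > 0:
            pushed = False
            for w in range(n):
                # admissible arc: positive residual capacity and d(v) = d(w) + 1
                if r[v][w] > 0 and d[v] == d[w] + 1:
                    delta = min(excess[v], r[v][w])   # push step
                    r[v][w] -= delta
                    r[w][v] += delta
                    excess[v] -= delta
                    excess[w] += delta
                    if w != s and w != t and excess[w] == delta:
                        active.append(w)              # w just became active
                    pushed = True
                    if excess[v] == 0:
                        break
            if not pushed:
                # relabel step: d(v) := d_min + 1 over all residual arcs (v, w)
                d[v] = 1 + min(d[w] for w in range(n) if r[v][w] > 0)
    return excess[t]       # flow value accumulated at the sink
```

Because the scan restarts from the first outgoing arc after every push, this sketch is correct but does not achieve the &amp;lt;math&amp;gt;\mathcal{O}(n^2m)&amp;lt;/math&amp;gt; bound proved below; the current-arc discipline is exactly what makes that bound go through.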
&lt;br /&gt;
'''Remark:'''&lt;br /&gt;
The preflow-push algorithm is also known as the '''push-relabel''' algorithm. The ''push'' operation is step 3; the ''relabel'' operation is step 4.&lt;br /&gt;
&lt;br /&gt;
'''Proof:'''&lt;br /&gt;
Points 1, 2, and 4 of the invariant and &amp;lt;math&amp;gt;d(s)=n&amp;lt;/math&amp;gt; are obviously fulfilled. The rest of point 3 of the invariant is affected by step 4 only, and the outgoing arcs of &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; are the only arcs where the distance labeling may become invalid. However, the extremely conservative increase of &amp;lt;math&amp;gt;d(v)&amp;lt;/math&amp;gt; ensures point 3 of the invariant.&lt;br /&gt;
&lt;br /&gt;
To prove the variant, consider a step in which neither any &amp;lt;math&amp;gt;d&amp;lt;/math&amp;gt;-value is increased nor a saturating push is performed. This means step 3.2 is applied, but the arc &amp;lt;math&amp;gt;(v,w)&amp;lt;/math&amp;gt; is not saturated by that. Potentially, &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt; becomes active. However, &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; definitely becomes inactive since the push step is non-saturating. Now the variant follows from the fact that &amp;lt;math&amp;gt;d(w)=d(v)-1&amp;lt;/math&amp;gt; for an admissible arc &amp;lt;math&amp;gt;(v,w)&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
It remains to show termination; this is proved by the following complexity considerations.&lt;br /&gt;
&lt;br /&gt;
== Complexity ==&lt;br /&gt;
&lt;br /&gt;
'''Statement:'''&lt;br /&gt;
The asymptotic complexity is in &amp;lt;math&amp;gt;\mathcal{O}(n^2m)&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;n=|V|&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;m=|A|&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
'''Proof:'''&lt;br /&gt;
First we show that the total number of relabel operations (step 4 of the main loop) is in &amp;lt;math&amp;gt;\mathcal{O}(n^2)&amp;lt;/math&amp;gt;. To see that, let &amp;lt;math&amp;gt;v\in V\setminus\{s,t\}&amp;lt;/math&amp;gt; be an active node between two iterations of the main loop. A straightforward induction over the number of push operations shows that there is at least one simple &amp;lt;math&amp;gt;(s,v)&amp;lt;/math&amp;gt;-path &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt; with positive flow on all arcs. The [[Basic graph definitions#Transpose of a graph|transpose]] of &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt; is [[Basic flow definitions#Flow-augmenting paths and saturated arcs|augmenting]]. Due to the validity of &amp;lt;math&amp;gt;d&amp;lt;/math&amp;gt; (induction hypothesis), &amp;lt;math&amp;gt;d(v)-d(s)=d(v)-n&amp;lt;/math&amp;gt; cannot be larger than the number of arcs on &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;, which is not larger than &amp;lt;math&amp;gt;n-1&amp;lt;/math&amp;gt;. Therefore, no node label can be larger than &amp;lt;math&amp;gt;2n-1&amp;lt;/math&amp;gt;. Since node labels are nonnegative and increase at least by one in each relabel operation, the claimed upper bound on the relabel operations follows.&lt;br /&gt;
&lt;br /&gt;
From this bound, we may immediately conclude that the current arc of a node is reset &amp;lt;math&amp;gt;\mathcal{O}(n)&amp;lt;/math&amp;gt; times, so the total number of forward steps of the current arcs of all nodes is in &amp;lt;math&amp;gt;\mathcal{O}(n^3)\subseteq\mathcal{O}(n^2m)&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The argument in the [[Ahuja-Orlin#Complexity|complexity analysis]] of the [[Ahuja-Orlin]] algorithm that proves the total number of ''saturating'' push operations to be in &amp;lt;math&amp;gt;\mathcal{O}(nm)&amp;lt;/math&amp;gt; applies here as well.&lt;br /&gt;
&lt;br /&gt;
Finally, we consider the ''non-saturating'' push operations. First note that &amp;lt;math&amp;gt;D\geq 0&amp;lt;/math&amp;gt; before and after each iteration. The value of &amp;lt;math&amp;gt;D&amp;lt;/math&amp;gt; is increased in each relabel operation exactly by the amount by which the label of the current node is increased. Since node labels are never decreased and bounded from above by &amp;lt;math&amp;gt;2n-1&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;D&amp;lt;/math&amp;gt; increases by less than &amp;lt;math&amp;gt;2n^2&amp;lt;/math&amp;gt; in total over all relabel operations. On the other hand, a saturating push operation may increase &amp;lt;math&amp;gt;D&amp;lt;/math&amp;gt; by at most &amp;lt;math&amp;gt;2n-1&amp;lt;/math&amp;gt; (namely, in the case that &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt; was not active immediately before the push). In summary, the total sum of all values by which &amp;lt;math&amp;gt;D&amp;lt;/math&amp;gt; is increased throughout the algorithm is in &amp;lt;math&amp;gt;\mathcal{O}(n^2m)&amp;lt;/math&amp;gt;. Due to the variant, the value of &amp;lt;math&amp;gt;D&amp;lt;/math&amp;gt; is decreased by at least one in each non-saturating push operation. This proves the claim.&lt;br /&gt;
&lt;br /&gt;
== Heuristic speedup techniques ==&lt;br /&gt;
&lt;br /&gt;
# After every &amp;lt;math&amp;gt;\Omega(n)&amp;lt;/math&amp;gt; iterations of the main loop, the &amp;lt;math&amp;gt;d&amp;lt;/math&amp;gt;-values are recomputed analogously to the induction basis: as the current distance of each node to &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; in the residual network. This recomputation is performed seldom enough that the asymptotic complexity is not increased. In practice, this technique may save many unnecessary relabel steps.&lt;br /&gt;
# The main loop may be decomposed into two phases: First, as much flow as possible is sent into &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt;; second, all surplus flow that cannot reach &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; is sent back to &amp;lt;math&amp;gt;s&amp;lt;/math&amp;gt;. The first phase may be finished once there is no more path in the residual network from any active node to &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt;. A sufficient and easy-to-check condition for that is &amp;lt;math&amp;gt;d(v)\geq n&amp;lt;/math&amp;gt; for all active nodes &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt;. All nodes from which &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; is reachable may be safely disregarded in the second phase. For any other node, to save unnecessary relabel operations, the distance label may be safely increased to the minimum number of arcs in the residual network from this node back to &amp;lt;math&amp;gt;s&amp;lt;/math&amp;gt;.&lt;/div&gt;</summary>
		<author><name>Weihe</name></author>
	</entry>
	<entry>
		<id>https://wiki.algo.informatik.tu-darmstadt.de/index.php?title=Preflow-push&amp;diff=3879</id>
		<title>Preflow-push</title>
		<link rel="alternate" type="text/html" href="https://wiki.algo.informatik.tu-darmstadt.de/index.php?title=Preflow-push&amp;diff=3879"/>
		<updated>2017-06-12T08:30:08Z</updated>

		<summary type="html">&lt;p&gt;Weihe: /* Induction step */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Abstract view ==&lt;br /&gt;
&lt;br /&gt;
'''Also known as:''' ''push-relabel'' algorithm or ''Goldberg-Tarjan'' algorithm&lt;br /&gt;
&lt;br /&gt;
'''Algorithmic problem:'''&lt;br /&gt;
[[Max-Flow Problems#Standard version|max-flow problem (standard version)]]&lt;br /&gt;
&lt;br /&gt;
'''Type of algorithm:'''&lt;br /&gt;
loop.&lt;br /&gt;
&lt;br /&gt;
'''Auxiliary data:'''&lt;br /&gt;
# A nonnegative integral value &amp;lt;math&amp;gt;d(v)&amp;lt;/math&amp;gt; for each node &amp;lt;math&amp;gt;v\in V&amp;lt;/math&amp;gt;.&lt;br /&gt;
# Each node &amp;lt;math&amp;gt;v\in V\setminus\{t\}&amp;lt;/math&amp;gt; has a '''current arc''', which may be implemented as an iterator on the list of outgoing residual arcs of &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt;.&lt;br /&gt;
# The [[Basic flow definitions#Preflow|excess]] &amp;lt;math&amp;gt;e_f(v)&amp;lt;/math&amp;gt; of a node &amp;lt;math&amp;gt;v\in V\setminus\{s,t\}&amp;lt;/math&amp;gt; with respect to the current [[Basic flow definitions#Preflow|preflow]].&lt;br /&gt;
# A (dynamically changing) [[Sets and sequences|set]] &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; of nodes.&lt;br /&gt;
&lt;br /&gt;
'''Invariant:'''&lt;br /&gt;
Before and after each iteration:&lt;br /&gt;
# For each arc &amp;lt;math&amp;gt;a\in A&amp;lt;/math&amp;gt;, we have &amp;lt;math&amp;gt;0\leq f(a)\leq u(a)&amp;lt;/math&amp;gt;. If all upper bounds are integral, all &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;-values are integral, too.&lt;br /&gt;
# For each node &amp;lt;math&amp;gt;v\in V\setminus\{s,t\}&amp;lt;/math&amp;gt;, we have &amp;lt;math&amp;gt;e_f(v)\geq 0&amp;lt;/math&amp;gt;. In other words, &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt; is a [[Basic flow definitions#Preflow|preflow]].&lt;br /&gt;
# The node labels &amp;lt;math&amp;gt;d&amp;lt;/math&amp;gt; form a [[Basic flow definitions#Valid distance labeling|valid distance labeling]] with &amp;lt;math&amp;gt;d(s)=n:=|V|&amp;lt;/math&amp;gt;.&lt;br /&gt;
# The currently [[Basic flow definitions#Preflow|active nodes]] are stored in &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;.&lt;br /&gt;
# The current arc of a node is one of that node's outgoing arcs in the residual graph. In the list of all of these arcs, no admissible arc precedes the current arc.&lt;br /&gt;
&lt;br /&gt;
'''Variant:'''&lt;br /&gt;
No label &amp;lt;math&amp;gt;d(\cdot)&amp;lt;/math&amp;gt; is ever decreased. In each iteration, one of the following three actions will take place:&lt;br /&gt;
# The label &amp;lt;math&amp;gt;d(v)&amp;lt;/math&amp;gt; of at least one node &amp;lt;math&amp;gt;v\in V\setminus\{s,t\}&amp;lt;/math&amp;gt; is increased.&lt;br /&gt;
# A saturating push is performed.&lt;br /&gt;
# The value of &amp;lt;math&amp;gt;D:=\sum_{v\in V\setminus\{s,t\}\atop e_f(v)&amp;gt;0}d(v)&amp;lt;/math&amp;gt; decreases.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Break condition:'''&lt;br /&gt;
&amp;lt;math&amp;gt;S=\emptyset&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
== Induction basis ==&lt;br /&gt;
&lt;br /&gt;
'''Abstract view:'''&lt;br /&gt;
# For all arcs &amp;lt;math&amp;gt;a\in A&amp;lt;/math&amp;gt;, set &amp;lt;math&amp;gt;f(a):=0&amp;lt;/math&amp;gt;.&lt;br /&gt;
# For each arc &amp;lt;math&amp;gt;a=(s,v)\in A&amp;lt;/math&amp;gt;, overwrite this value by &amp;lt;math&amp;gt;f(a):=u(a)&amp;lt;/math&amp;gt; and put &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; into &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;.&lt;br /&gt;
# Compute a [[Basic flow definitions#Valid distance labeling|valid distance labeling]] &amp;lt;math&amp;gt;d&amp;lt;/math&amp;gt; for &amp;lt;math&amp;gt;V\setminus\{s\}&amp;lt;/math&amp;gt; with respect to &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;, for example, the true distances from all nodes to &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; in the residual network of &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;.&lt;br /&gt;
# Set &amp;lt;math&amp;gt;d(s):=n&amp;lt;/math&amp;gt;.&lt;br /&gt;
# For all &amp;lt;math&amp;gt;v\in V\setminus\{s,t\}&amp;lt;/math&amp;gt;, reset the current arc of &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; so as to point to the first arc in the list of outgoing arcs of &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
'''Proof:'''&lt;br /&gt;
For the [[Basic graph definitions#Subgraphs|subgraph induced]] by &amp;lt;math&amp;gt;V\setminus\{s\}&amp;lt;/math&amp;gt;, the arguments in the [[Ahuja-Orlin#Correctness|correctness proof]] for the [[Ahuja-Orlin]] algorithm prove that the &amp;lt;math&amp;gt;d&amp;lt;/math&amp;gt;-labels form a valid distance labeling here as well. For &amp;lt;math&amp;gt;s&amp;lt;/math&amp;gt;, there is nothing to show because all outgoing arcs are saturated.&lt;br /&gt;
&lt;br /&gt;
== Induction step ==&lt;br /&gt;
&lt;br /&gt;
'''Abstract view:'''&lt;br /&gt;
# Choose an [[Basic flow definitions#Preflow|active node]] &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; from &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;.&lt;br /&gt;
# While the current arc of &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; is neither void nor [[Basic flow definitions#Valid distance labeling|admissible]], move the current arc one step forward.&lt;br /&gt;
# If the current arc of &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; is ''not'' void now but an (admissible) outgoing arc &amp;lt;math&amp;gt;(v,w)&amp;lt;/math&amp;gt;, say:&lt;br /&gt;
## If &amp;lt;math&amp;gt;w\neq s&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;e_f(w)=0&amp;lt;/math&amp;gt;, insert &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt; in &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;.&lt;br /&gt;
## Increase the flow over &amp;lt;math&amp;gt;(v,w)&amp;lt;/math&amp;gt; by the minimum of &amp;lt;math&amp;gt;e_f(v)&amp;lt;/math&amp;gt; and the residual capacity of &amp;lt;math&amp;gt;(v,w)&amp;lt;/math&amp;gt;.&lt;br /&gt;
## Increase &amp;lt;math&amp;gt;e_f(w)&amp;lt;/math&amp;gt; by that value and decrease &amp;lt;math&amp;gt;e_f(v)&amp;lt;/math&amp;gt; by the same value.&lt;br /&gt;
## If &amp;lt;math&amp;gt;e_f(v)=0&amp;lt;/math&amp;gt; now, extract &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; from &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;.&lt;br /&gt;
# Otherwise:&lt;br /&gt;
## Let &amp;lt;math&amp;gt;d_\min&amp;lt;/math&amp;gt; denote the minimum label &amp;lt;math&amp;gt;d(w)&amp;lt;/math&amp;gt; over all arcs &amp;lt;math&amp;gt;(v,w)&amp;lt;/math&amp;gt; in the residual network.&lt;br /&gt;
## Set &amp;lt;math&amp;gt;d(v):=d_\min+1&amp;lt;/math&amp;gt;.&lt;br /&gt;
## Reset the current arc of &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; so as to point to the beginning of the list of outgoing arcs of &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
'''Remark:'''&lt;br /&gt;
The preflow-push algorithm is also known as the '''push-relabel''' algorithm. The ''push'' operation is step 3; the ''relabel'' operation is step 4.&lt;br /&gt;
&lt;br /&gt;
'''Proof:'''&lt;br /&gt;
Points 1, 2, and 4 of the invariant and &amp;lt;math&amp;gt;d(s)=n&amp;lt;/math&amp;gt; are obviously fulfilled. The rest of point 3 of the invariant is affected by step 4 only, and the outgoing arcs of &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; are the only arcs where the distance labeling may become invalid. However, the extremely conservative increase of &amp;lt;math&amp;gt;d(v)&amp;lt;/math&amp;gt; ensures point 3 of the invariant.&lt;br /&gt;
&lt;br /&gt;
To prove the variant, consider a step in which neither any &amp;lt;math&amp;gt;d&amp;lt;/math&amp;gt;-value is increased nor a saturating push is performed. This means step 3.2 is applied, but the arc &amp;lt;math&amp;gt;(v,w)&amp;lt;/math&amp;gt; is not saturated by that. Potentially, &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt; becomes active. However, &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; definitely becomes inactive since the push step is non-saturating. Now the variant follows from the fact that &amp;lt;math&amp;gt;d(w)=d(v)-1&amp;lt;/math&amp;gt; for an admissible arc &amp;lt;math&amp;gt;(v,w)&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
It remains to show termination; this is proved by the following complexity considerations.&lt;br /&gt;
&lt;br /&gt;
== Complexity ==&lt;br /&gt;
&lt;br /&gt;
'''Statement:'''&lt;br /&gt;
The asymptotic complexity is in &amp;lt;math&amp;gt;\mathcal{O}(n^2m)&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;n=|V|&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;m=|A|&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
'''Proof:'''&lt;br /&gt;
First we show that the total number of relabel operations (step 4 of the main loop) is in &amp;lt;math&amp;gt;\mathcal{O}(n^2)&amp;lt;/math&amp;gt;. To see that, let &amp;lt;math&amp;gt;v\in V\setminus\{s,t\}&amp;lt;/math&amp;gt; be an active node between two iterations of the main loop. A straightforward induction over the number of push operations shows that there is at least one simple &amp;lt;math&amp;gt;(s,v)&amp;lt;/math&amp;gt;-path &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt; with positive flow on all arcs. The [[Basic graph definitions#Transpose of a graph|transpose]] of &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt; is [[Basic flow definitions#Flow-augmenting paths and saturated arcs|augmenting]]. Due to the validity of &amp;lt;math&amp;gt;d&amp;lt;/math&amp;gt; (induction hypothesis), &amp;lt;math&amp;gt;d(v)-d(s)=d(v)-n&amp;lt;/math&amp;gt; cannot be larger than the number of arcs on &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;, which is not larger than &amp;lt;math&amp;gt;n-1&amp;lt;/math&amp;gt;. Therefore, no node label can be larger than &amp;lt;math&amp;gt;2n-1&amp;lt;/math&amp;gt;. Since node labels are nonnegative and increase at least by one in each relabel operation, the claimed upper bound on the relabel operations follows.&lt;br /&gt;
&lt;br /&gt;
From this bound, we may immediately conclude that the current arc of a node is reset &amp;lt;math&amp;gt;\mathcal{O}(n)&amp;lt;/math&amp;gt; times, so the total number of forward steps of the current arcs of all nodes is in &amp;lt;math&amp;gt;\mathcal{O}(n^3)\subseteq\mathcal{O}(n^2m)&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The argument in the [[Ahuja-Orlin#Complexity|complexity analysis]] of the [[Ahuja-Orlin]] algorithm that proves the total number of ''saturating'' push operations to be in &amp;lt;math&amp;gt;\mathcal{O}(nm)&amp;lt;/math&amp;gt; applies here as well.&lt;br /&gt;
&lt;br /&gt;
Finally, we consider the ''non-saturating'' push operations. First note that &amp;lt;math&amp;gt;D\geq 0&amp;lt;/math&amp;gt; before and after each iteration. The value of &amp;lt;math&amp;gt;D&amp;lt;/math&amp;gt; is increased in each relabel operation exactly by the amount by which the label of the current node is increased. Since node labels are never decreased and bounded from above by &amp;lt;math&amp;gt;2n-1&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;D&amp;lt;/math&amp;gt; increases by less than &amp;lt;math&amp;gt;2n^2&amp;lt;/math&amp;gt; in total over all relabel operations. On the other hand, a saturating push operation may increase &amp;lt;math&amp;gt;D&amp;lt;/math&amp;gt; by at most &amp;lt;math&amp;gt;2n-1&amp;lt;/math&amp;gt; (namely, in the case that &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt; was not active immediately before the push). In summary, the total sum of all values by which &amp;lt;math&amp;gt;D&amp;lt;/math&amp;gt; is increased throughout the algorithm is in &amp;lt;math&amp;gt;\mathcal{O}(n^2m)&amp;lt;/math&amp;gt;. Due to the variant, the value of &amp;lt;math&amp;gt;D&amp;lt;/math&amp;gt; is decreased by at least one in each non-saturating push operation. This proves the claim.&lt;br /&gt;
&lt;br /&gt;
== Heuristic speedup techniques ==&lt;br /&gt;
&lt;br /&gt;
# After every &amp;lt;math&amp;gt;\Omega(n)&amp;lt;/math&amp;gt; iterations of the main loop, the &amp;lt;math&amp;gt;d&amp;lt;/math&amp;gt;-values are recomputed analogously to the induction basis: as the current distance of each node to &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; in the residual network. This recomputation is performed seldom enough that the asymptotic complexity is not increased. In practice, this technique may save many unnecessary relabel steps.&lt;br /&gt;
# The main loop may be decomposed into two phases: First, as much flow as possible is sent into &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt;; second, all surplus flow that cannot reach &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; is sent back to &amp;lt;math&amp;gt;s&amp;lt;/math&amp;gt;. The first phase may be finished once there is no more path in the residual network from any active node to &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt;. A sufficient and easy-to-check condition for that is &amp;lt;math&amp;gt;d(v)\geq n&amp;lt;/math&amp;gt; for all active nodes &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt;. All nodes from which &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; is reachable may be safely disregarded in the second phase. For any other node, to save unnecessary relabel operations, the distance label may be safely increased to the minimum number of arcs in the residual network from this node back to &amp;lt;math&amp;gt;s&amp;lt;/math&amp;gt;.&lt;/div&gt;</summary>
		<author><name>Weihe</name></author>
	</entry>
	<entry>
		<id>https://wiki.algo.informatik.tu-darmstadt.de/index.php?title=Preflow-push&amp;diff=3878</id>
		<title>Preflow-push</title>
		<link rel="alternate" type="text/html" href="https://wiki.algo.informatik.tu-darmstadt.de/index.php?title=Preflow-push&amp;diff=3878"/>
		<updated>2017-06-12T08:29:58Z</updated>

		<summary type="html">&lt;p&gt;Weihe: /* Induction step */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Abstract view ==&lt;br /&gt;
&lt;br /&gt;
'''Also known as:''' ''push-relabel'' algorithm or ''Goldberg-Tarjan'' algorithm&lt;br /&gt;
&lt;br /&gt;
'''Algorithmic problem:'''&lt;br /&gt;
[[Max-Flow Problems#Standard version|max-flow problem (standard version)]]&lt;br /&gt;
&lt;br /&gt;
'''Type of algorithm:'''&lt;br /&gt;
loop.&lt;br /&gt;
&lt;br /&gt;
'''Auxiliary data:'''&lt;br /&gt;
# A nonnegative integral value &amp;lt;math&amp;gt;d(v)&amp;lt;/math&amp;gt; for each node &amp;lt;math&amp;gt;v\in V&amp;lt;/math&amp;gt;.&lt;br /&gt;
# Each node &amp;lt;math&amp;gt;v\in V\setminus\{t\}&amp;lt;/math&amp;gt; has a '''current arc''', which may be implemented as an iterator on the list of outgoing residual arcs of &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt;.&lt;br /&gt;
# The [[Basic flow definitions#Preflow|excess]] &amp;lt;math&amp;gt;e_f(v)&amp;lt;/math&amp;gt; of a node &amp;lt;math&amp;gt;v\in V\setminus\{s,t\}&amp;lt;/math&amp;gt; with respect to the current [[Basic flow definitions#Preflow|preflow]].&lt;br /&gt;
# A (dynamically changing) [[Sets and sequences|set]] &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; of nodes.&lt;br /&gt;
&lt;br /&gt;
'''Invariant:'''&lt;br /&gt;
Before and after each iteration:&lt;br /&gt;
# For each arc &amp;lt;math&amp;gt;a\in A&amp;lt;/math&amp;gt;, we have &amp;lt;math&amp;gt;0\leq f(a)\leq u(a)&amp;lt;/math&amp;gt;. If all upper bounds are integral, all &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;-values are integral, too.&lt;br /&gt;
# For each node &amp;lt;math&amp;gt;v\in V\setminus\{s,t\}&amp;lt;/math&amp;gt;, we have &amp;lt;math&amp;gt;e_f(v)\geq 0&amp;lt;/math&amp;gt;. In other words, &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt; is a [[Basic flow definitions#Preflow|preflow]].&lt;br /&gt;
# The node labels &amp;lt;math&amp;gt;d&amp;lt;/math&amp;gt; form a [[Basic flow definitions#Valid distance labeling|valid distance labeling]], and &amp;lt;math&amp;gt;d(s)=n:=|V|&amp;lt;/math&amp;gt;.&lt;br /&gt;
# The currently [[Basic flow definitions#Preflow|active nodes]] are exactly the nodes stored in &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;.&lt;br /&gt;
# The current arc of a node is one of that node's outgoing arcs in the residual graph. In the list of all of these arcs, no admissible arc precedes the current arc.&lt;br /&gt;
&lt;br /&gt;
'''Variant:'''&lt;br /&gt;
No label &amp;lt;math&amp;gt;d(\cdot)&amp;lt;/math&amp;gt; is ever decreased. In each iteration, one of the following three actions will take place:&lt;br /&gt;
# The label &amp;lt;math&amp;gt;d(v)&amp;lt;/math&amp;gt; of at least one node &amp;lt;math&amp;gt;v\in V\setminus\{s,t\}&amp;lt;/math&amp;gt; is increased.&lt;br /&gt;
# A saturating push is performed.&lt;br /&gt;
# The value of &amp;lt;math&amp;gt;D:=\sum_{v\in V\setminus\{s,t\}\atop e_f(v)&amp;gt;0}d(v)&amp;lt;/math&amp;gt; decreases.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Break condition:'''&lt;br /&gt;
&amp;lt;math&amp;gt;S=\emptyset&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
== Induction basis ==&lt;br /&gt;
&lt;br /&gt;
'''Abstract view:'''&lt;br /&gt;
# For all arcs &amp;lt;math&amp;gt;a\in A&amp;lt;/math&amp;gt;, set &amp;lt;math&amp;gt;f(a):=0&amp;lt;/math&amp;gt;.&lt;br /&gt;
# For each arc &amp;lt;math&amp;gt;a=(s,v)\in A&amp;lt;/math&amp;gt;, overwrite this value by &amp;lt;math&amp;gt;f(a):=u(a)&amp;lt;/math&amp;gt; and, unless &amp;lt;math&amp;gt;v=t&amp;lt;/math&amp;gt;, put &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; into &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;.&lt;br /&gt;
# Compute a [[Basic flow definitions#Valid distance labeling|valid distance labeling]] &amp;lt;math&amp;gt;d&amp;lt;/math&amp;gt; for &amp;lt;math&amp;gt;V\setminus\{s\}&amp;lt;/math&amp;gt; with respect to &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;, for example, the true distances from all nodes to &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; in the residual network of &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;.&lt;br /&gt;
# Set &amp;lt;math&amp;gt;d(s):=n&amp;lt;/math&amp;gt;.&lt;br /&gt;
# For all &amp;lt;math&amp;gt;v\in V\setminus\{s,t\}&amp;lt;/math&amp;gt;, reset the current arc of &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; so as to point to the first arc in the list of outgoing arcs of &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt;.&lt;br /&gt;
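&lt;br /&gt;
The labeling in step 3 can be obtained by a backward breadth-first search from &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; in the residual network. The following Python sketch illustrates this computation; the arc-pair input format and the function name are illustrative assumptions, not taken from this article.&lt;br /&gt;

```python
from collections import deque

def initial_labels(n, residual_arcs, s, t):
    """Distance labels for the induction basis: the number of arcs on a
    shortest path from each node to t in the residual network, found by a
    BFS from t over reversed residual arcs. Nodes that cannot reach t keep
    the label n, which is also valid; finally d(s) is set to n (step 4)."""
    preds = [[] for _ in range(n)]      # preds[w]: all v with residual arc (v, w)
    for v, w in residual_arcs:
        preds[w].append(v)
    d = [n] * n
    d[t] = 0
    queue = deque([t])
    while queue:
        w = queue.popleft()
        for v in preds[w]:
            if d[v] == n:               # not visited yet
                d[v] = d[w] + 1
                queue.append(v)
    d[s] = n                            # step 4: d(s) := n
    return d
```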
&lt;br /&gt;
'''Proof:'''&lt;br /&gt;
For the [[Basic graph definitions#Subgraphs|subgraph induced]] by &amp;lt;math&amp;gt;V\setminus\{s\}&amp;lt;/math&amp;gt;, the arguments in the [[Ahuja-Orlin#Correctness|correctness proof]] for the [[Ahuja-Orlin]] algorithm prove that the &amp;lt;math&amp;gt;d&amp;lt;/math&amp;gt;-labels form a valid distance labeling here as well. For &amp;lt;math&amp;gt;s&amp;lt;/math&amp;gt;, there is nothing to show because all of its outgoing arcs are saturated.&lt;br /&gt;
&lt;br /&gt;
== Induction step ==&lt;br /&gt;
&lt;br /&gt;
'''Abstract view:'''&lt;br /&gt;
# Choose an [[Basic flow definitions#Preflow|active node]] &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; from &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;.&lt;br /&gt;
# While the current arc of &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; is neither void nor [[Basic flow definitions#Valid distance labeling|admissible]], move the current arc one step forward.&lt;br /&gt;
# If the current arc of &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; is ''not'' void now but an (admissible) outgoing arc &amp;lt;math&amp;gt;(v,w)&amp;lt;/math&amp;gt;, say:&lt;br /&gt;
## If &amp;lt;math&amp;gt;w\neq t&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;e_f(w)=0&amp;lt;/math&amp;gt;, insert &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt; in &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;.&lt;br /&gt;
## Increase the flow over &amp;lt;math&amp;gt;(v,w)&amp;lt;/math&amp;gt; by the minimum of &amp;lt;math&amp;gt;e_f(v)&amp;lt;/math&amp;gt; and the residual capacity of &amp;lt;math&amp;gt;(v,w)&amp;lt;/math&amp;gt;.&lt;br /&gt;
## Increase &amp;lt;math&amp;gt;e_f(w)&amp;lt;/math&amp;gt; by that value and decrease &amp;lt;math&amp;gt;e_f(v)&amp;lt;/math&amp;gt; by the same value.&lt;br /&gt;
## If &amp;lt;math&amp;gt;e_f(v)=0&amp;lt;/math&amp;gt; now, extract &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; from &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;.&lt;br /&gt;
# Otherwise:&lt;br /&gt;
## Let &amp;lt;math&amp;gt;d_\min&amp;lt;/math&amp;gt; denote the minimal label &amp;lt;math&amp;gt;d(w)&amp;lt;/math&amp;gt; over all arcs &amp;lt;math&amp;gt;(v,w)&amp;lt;/math&amp;gt; leaving &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; in the residual network.&lt;br /&gt;
## Set &amp;lt;math&amp;gt;d(v):=d_\min+1&amp;lt;/math&amp;gt;.&lt;br /&gt;
## Reset the current arc of &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; so as to point to the beginning of the list of outgoing arcs of &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt;.&lt;br /&gt;
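&lt;br /&gt;
The induction basis and the induction step above can be combined into the following compact Python sketch. The adjacency-list representation with explicit reverse arcs, the function name, and the choice to start from the all-zero labeling (which is also valid once all arcs leaving the source are saturated) instead of exact distances are illustrative assumptions, not part of this article.&lt;br /&gt;

```python
from collections import defaultdict

def preflow_push(n, arcs, s, t):
    # Residual network: graph[v] holds entries [head w, residual capacity,
    # index of the reverse arc in graph[w]].
    graph = defaultdict(list)
    def add_arc(v, w, cap):
        graph[v].append([w, cap, len(graph[w])])
        graph[w].append([v, 0, len(graph[v]) - 1])
    for v, w, cap in arcs:
        add_arc(v, w, cap)

    d = [0] * n            # all-zero labels are valid after saturating s's arcs
    excess = [0] * n
    current = [0] * n      # current-arc pointer of each node
    d[s] = n
    # Induction basis: saturate every arc leaving s and activate its head.
    for arc in graph[s]:
        w, cap, rev = arc
        arc[1] = 0
        graph[w][rev][1] += cap
        excess[w] += cap
    S = set(v for v in range(n) if v != s and v != t and excess[v] > 0)

    while S:                                   # break condition: S is empty
        v = next(iter(S))                      # choose an active node
        if current[v] == len(graph[v]):        # current arc is void: relabel
            d[v] = 1 + min(d[w] for w, cap, _ in graph[v] if cap > 0)
            current[v] = 0
            continue
        w, cap, rev = graph[v][current[v]]
        if cap > 0 and d[v] == d[w] + 1:       # admissible: push
            delta = min(excess[v], cap)
            graph[v][current[v]][1] -= delta
            graph[w][rev][1] += delta
            excess[v] -= delta
            excess[w] += delta
            if w != s and w != t:
                S.add(w)                       # w is now active
            if excess[v] == 0:
                S.discard(v)
        else:
            current[v] += 1                    # move the current arc forward
    return excess[t]                           # value of the maximum flow
```

On termination no node is active, so the preflow is a feasible flow, and the excess accumulated at &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; is its value.&lt;br /&gt;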
&lt;br /&gt;
'''Remark:'''&lt;br /&gt;
The preflow-push algorithm is also known as the '''push-relabel''' algorithm. The ''push'' operation is step 3; the ''relabel'' operation is step 4.&lt;br /&gt;
&lt;br /&gt;
'''Proof:'''&lt;br /&gt;
Points 1, 2, and 4 of the invariant and &amp;lt;math&amp;gt;d(s)=n&amp;lt;/math&amp;gt; are obviously fulfilled. The rest of point 3 of the invariant is affected by step 4 only, and the outgoing arcs of &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; are the only arcs where the distance labeling may become invalid. However, step 4 increases &amp;lt;math&amp;gt;d(v)&amp;lt;/math&amp;gt; only to &amp;lt;math&amp;gt;d_\min+1&amp;lt;/math&amp;gt;, the smallest value for which every residual arc leaving &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; satisfies the labeling condition, so point 3 of the invariant is maintained.&lt;br /&gt;
&lt;br /&gt;
To prove the variant, consider a step in which no &amp;lt;math&amp;gt;d&amp;lt;/math&amp;gt;-value is increased and no saturating push is performed. This means step 3.2 is applied, but the arc &amp;lt;math&amp;gt;(v,w)&amp;lt;/math&amp;gt; is not saturated by that. Potentially, &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt; becomes active. However, &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; definitely becomes inactive since the push step is non-saturating. Since &amp;lt;math&amp;gt;d(w)=d(v)-1&amp;lt;/math&amp;gt; for the admissible arc &amp;lt;math&amp;gt;(v,w)&amp;lt;/math&amp;gt;, the sum &amp;lt;math&amp;gt;D&amp;lt;/math&amp;gt; loses &amp;lt;math&amp;gt;d(v)&amp;lt;/math&amp;gt; and gains at most &amp;lt;math&amp;gt;d(v)-1&amp;lt;/math&amp;gt;, so it decreases.&lt;br /&gt;
&lt;br /&gt;
It remains to show termination; this is proved by the following complexity considerations.&lt;br /&gt;
&lt;br /&gt;
== Complexity ==&lt;br /&gt;
&lt;br /&gt;
'''Statement:'''&lt;br /&gt;
The asymptotic complexity is in &amp;lt;math&amp;gt;\mathcal{O}(n^2m)&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;n=|V|&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;m=|A|&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
'''Proof:'''&lt;br /&gt;
First we show that the total number of relabel operations (step 4 of the main loop) is in &amp;lt;math&amp;gt;\mathcal{O}(n^2)&amp;lt;/math&amp;gt;. To see this, let &amp;lt;math&amp;gt;v\in V\setminus\{s,t\}&amp;lt;/math&amp;gt; be an active node between two iterations of the main loop. A straightforward induction over the number of push operations shows that there is at least one simple &amp;lt;math&amp;gt;(s,v)&amp;lt;/math&amp;gt;-path &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt; with positive flow on all arcs. The [[Basic graph definitions#Transpose of a graph|transpose]] of &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt; is [[Basic flow definitions#Flow-augmenting paths and saturated arcs|augmenting]]. Due to the validity of &amp;lt;math&amp;gt;d&amp;lt;/math&amp;gt; (induction hypothesis), &amp;lt;math&amp;gt;d(v)-d(s)=d(v)-n&amp;lt;/math&amp;gt; cannot be larger than the number of arcs on &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;, which is not larger than &amp;lt;math&amp;gt;n-1&amp;lt;/math&amp;gt;. Therefore, no node label can be larger than &amp;lt;math&amp;gt;2n-1&amp;lt;/math&amp;gt;. Since node labels are nonnegative and increase by at least one in each relabel operation, the claimed upper bound on the relabel operations follows.&lt;br /&gt;
&lt;br /&gt;
From this bound, we may immediately conclude that the current arc of a node is reset &amp;lt;math&amp;gt;\mathcal{O}(n)&amp;lt;/math&amp;gt; times, so the total number of forward steps of the current arcs of all nodes is in &amp;lt;math&amp;gt;\mathcal{O}(n^3)\subseteq\mathcal{O}(n^2m)&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The argument in the [[Ahuja-Orlin#Complexity|complexity analysis]] of the [[Ahuja-Orlin]] algorithm which proves that the total number of ''saturating'' push operations is in &amp;lt;math&amp;gt;\mathcal{O}(nm)&amp;lt;/math&amp;gt; applies here as well.&lt;br /&gt;
&lt;br /&gt;
Finally, we consider the ''non-saturating'' push operations. First note that &amp;lt;math&amp;gt;D\geq 0&amp;lt;/math&amp;gt; before and after each iteration. Each relabel operation increases &amp;lt;math&amp;gt;D&amp;lt;/math&amp;gt; by exactly the amount by which the label of the relabeled node is increased. Since node labels are never decreased and are bounded from above by &amp;lt;math&amp;gt;2n-1&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;D&amp;lt;/math&amp;gt; increases by less than &amp;lt;math&amp;gt;2n^2&amp;lt;/math&amp;gt; in total over all relabel operations. On the other hand, a saturating push operation may increase &amp;lt;math&amp;gt;D&amp;lt;/math&amp;gt; by at most &amp;lt;math&amp;gt;2n-1&amp;lt;/math&amp;gt; (namely, in the case that &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt; was not active immediately before the push); since there are &amp;lt;math&amp;gt;\mathcal{O}(nm)&amp;lt;/math&amp;gt; saturating pushes, they contribute &amp;lt;math&amp;gt;\mathcal{O}(n^2m)&amp;lt;/math&amp;gt; in total. In summary, the total amount by which &amp;lt;math&amp;gt;D&amp;lt;/math&amp;gt; is increased throughout the algorithm is in &amp;lt;math&amp;gt;\mathcal{O}(n^2m)&amp;lt;/math&amp;gt;. As shown in the proof of the variant, each non-saturating push operation decreases &amp;lt;math&amp;gt;D&amp;lt;/math&amp;gt; by at least one, so the total number of non-saturating push operations is in &amp;lt;math&amp;gt;\mathcal{O}(n^2m)&amp;lt;/math&amp;gt; as well. This proves the claim.&lt;br /&gt;
&lt;br /&gt;
== Heuristic speedup techniques ==&lt;br /&gt;
&lt;br /&gt;
# After every &amp;lt;math&amp;gt;\Omega(n)&amp;lt;/math&amp;gt; iterations of the main loop, the &amp;lt;math&amp;gt;d&amp;lt;/math&amp;gt;-values are recomputed analogously to the induction basis: as the current distance of each node to &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; in the residual network. Since this recomputation takes place only every &amp;lt;math&amp;gt;\Omega(n)&amp;lt;/math&amp;gt; iterations, the asymptotic complexity is not increased. In practice, this technique may save many unnecessary relabel steps.&lt;br /&gt;
# The main loop may be decomposed into two phases: First, as much flow as possible is sent into &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt;; second, all surplus flow that cannot reach &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; is sent back to &amp;lt;math&amp;gt;s&amp;lt;/math&amp;gt;. The first phase may be finished once there is no path left in the residual network from any active node to &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt;. A sufficient and easy-to-check condition for that is &amp;lt;math&amp;gt;d(v)\geq n&amp;lt;/math&amp;gt; for all active nodes &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt;. All nodes from which &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; is reachable may be safely disregarded in the second phase. For any other node, to save unnecessary relabel operations, the distance label may be safely increased to &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; plus the minimum number of arcs in the residual network from this node back to &amp;lt;math&amp;gt;s&amp;lt;/math&amp;gt;.&lt;/div&gt;</summary>
		<author><name>Weihe</name></author>
	</entry>
	<entry>
		<id>https://wiki.algo.informatik.tu-darmstadt.de/index.php?title=Preflow-push&amp;diff=3877</id>
		<title>Preflow-push</title>
		<link rel="alternate" type="text/html" href="https://wiki.algo.informatik.tu-darmstadt.de/index.php?title=Preflow-push&amp;diff=3877"/>
		<updated>2017-06-12T08:29:38Z</updated>

		<summary type="html">&lt;p&gt;Weihe: /* Induction step */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Abstract view ==&lt;br /&gt;
&lt;br /&gt;
'''Also known as:''' ''push-relabel'' algorithm or ''Goldberg-Tarjan'' algorithm&lt;br /&gt;
&lt;br /&gt;
'''Algorithmic problem:'''&lt;br /&gt;
[[Max-Flow Problems#Standard version|max-flow problem (standard version)]]&lt;br /&gt;
&lt;br /&gt;
'''Type of algorithm:'''&lt;br /&gt;
loop.&lt;br /&gt;
&lt;br /&gt;
'''Auxiliary data:'''&lt;br /&gt;
# A nonnegative integral value &amp;lt;math&amp;gt;d(v)&amp;lt;/math&amp;gt; for each node &amp;lt;math&amp;gt;v\in V&amp;lt;/math&amp;gt;.&lt;br /&gt;
# Each node &amp;lt;math&amp;gt;v\in V\setminus\{t\}&amp;lt;/math&amp;gt; has a '''current arc''', which may be implemented as an iterator on the list of outgoing residual arcs of &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt;.&lt;br /&gt;
# The [[Basic flow definitions#Preflow|excess]] &amp;lt;math&amp;gt;e_f(v)&amp;lt;/math&amp;gt; of a node &amp;lt;math&amp;gt;v\in V\setminus\{s,t\}&amp;lt;/math&amp;gt; with respect to the current [[Basic flow definitions#Preflow|preflow]].&lt;br /&gt;
# A (dynamically changing) [[Sets and sequences|set]] &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; of nodes.&lt;br /&gt;
&lt;br /&gt;
'''Invariant:'''&lt;br /&gt;
Before and after each iteration:&lt;br /&gt;
# For each arc &amp;lt;math&amp;gt;a\in A&amp;lt;/math&amp;gt;, we have &amp;lt;math&amp;gt;0\leq f(a)\leq u(a)&amp;lt;/math&amp;gt;. If all upper bounds are integral, all &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;-values are integral, too.&lt;br /&gt;
# For each node &amp;lt;math&amp;gt;v\in V\setminus\{s,t\}&amp;lt;/math&amp;gt;, we have &amp;lt;math&amp;gt;e_f(v)\geq 0&amp;lt;/math&amp;gt;. In other words, &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt; is a [[Basic flow definitions#Preflow|preflow]].&lt;br /&gt;
# The node labels &amp;lt;math&amp;gt;d&amp;lt;/math&amp;gt; form a [[Basic flow definitions#Valid distance labeling|valid distance labeling]], and &amp;lt;math&amp;gt;d(s)=n:=|V|&amp;lt;/math&amp;gt;.&lt;br /&gt;
# The currently [[Basic flow definitions#Preflow|active nodes]] are exactly the nodes stored in &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;.&lt;br /&gt;
# The current arc of a node is one of that node's outgoing arcs in the residual graph. In the list of all of these arcs, no admissible arc precedes the current arc.&lt;br /&gt;
&lt;br /&gt;
'''Variant:'''&lt;br /&gt;
No label &amp;lt;math&amp;gt;d(\cdot)&amp;lt;/math&amp;gt; is ever decreased. In each iteration, one of the following three actions will take place:&lt;br /&gt;
# The label &amp;lt;math&amp;gt;d(v)&amp;lt;/math&amp;gt; of at least one node &amp;lt;math&amp;gt;v\in V\setminus\{s,t\}&amp;lt;/math&amp;gt; is increased.&lt;br /&gt;
# A saturating push is performed.&lt;br /&gt;
# The value of &amp;lt;math&amp;gt;D:=\sum_{v\in V\setminus\{s,t\}\atop e_f(v)&amp;gt;0}d(v)&amp;lt;/math&amp;gt; decreases.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Break condition:'''&lt;br /&gt;
&amp;lt;math&amp;gt;S=\emptyset&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
== Induction basis ==&lt;br /&gt;
&lt;br /&gt;
'''Abstract view:'''&lt;br /&gt;
# For all arcs &amp;lt;math&amp;gt;a\in A&amp;lt;/math&amp;gt;, set &amp;lt;math&amp;gt;f(a):=0&amp;lt;/math&amp;gt;.&lt;br /&gt;
# For each arc &amp;lt;math&amp;gt;a=(s,v)\in A&amp;lt;/math&amp;gt;, overwrite this value by &amp;lt;math&amp;gt;f(a):=u(a)&amp;lt;/math&amp;gt; and, unless &amp;lt;math&amp;gt;v=t&amp;lt;/math&amp;gt;, put &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; into &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;.&lt;br /&gt;
# Compute a [[Basic flow definitions#Valid distance labeling|valid distance labeling]] &amp;lt;math&amp;gt;d&amp;lt;/math&amp;gt; for &amp;lt;math&amp;gt;V\setminus\{s\}&amp;lt;/math&amp;gt; with respect to &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;, for example, the true distances from all nodes to &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; in the residual network of &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;.&lt;br /&gt;
# Set &amp;lt;math&amp;gt;d(s):=n&amp;lt;/math&amp;gt;.&lt;br /&gt;
# For all &amp;lt;math&amp;gt;v\in V\setminus\{s,t\}&amp;lt;/math&amp;gt;, reset the current arc of &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; so as to point to the first arc in the list of outgoing arcs of &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
'''Proof:'''&lt;br /&gt;
For the [[Basic graph definitions#Subgraphs|subgraph induced]] by &amp;lt;math&amp;gt;V\setminus\{s\}&amp;lt;/math&amp;gt;, the arguments in the [[Ahuja-Orlin#Correctness|correctness proof]] for the [[Ahuja-Orlin]] algorithm prove that the &amp;lt;math&amp;gt;d&amp;lt;/math&amp;gt;-labels form a valid distance labeling here as well. For &amp;lt;math&amp;gt;s&amp;lt;/math&amp;gt;, there is nothing to show because all of its outgoing arcs are saturated.&lt;br /&gt;
&lt;br /&gt;
== Induction step ==&lt;br /&gt;
&lt;br /&gt;
'''Abstract view:'''&lt;br /&gt;
# Choose an [[Basic flow definitions#Preflow|active node]] &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; from &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;.&lt;br /&gt;
# While the current arc of &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; is neither void nor [[Basic flow definitions#Valid distance labeling|admissible]], move the current arc one step forward.&lt;br /&gt;
# If the current arc of &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; is ''not'' void now but an (admissible) outgoing arc &amp;lt;math&amp;gt;(v,w)&amp;lt;/math&amp;gt;, say:&lt;br /&gt;
## If &amp;lt;math&amp;gt;w\neq s&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;e_f(w)=0&amp;lt;/math&amp;gt;, insert &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt; in &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;.&lt;br /&gt;
## Increase the flow over &amp;lt;math&amp;gt;(v,w)&amp;lt;/math&amp;gt; by the minimum of &amp;lt;math&amp;gt;e_f(v)&amp;lt;/math&amp;gt; and the residual capacity of &amp;lt;math&amp;gt;(v,w)&amp;lt;/math&amp;gt;.&lt;br /&gt;
## Increase &amp;lt;math&amp;gt;e_f(w)&amp;lt;/math&amp;gt; by that value and decrease &amp;lt;math&amp;gt;e_f(v)&amp;lt;/math&amp;gt; by the same value.&lt;br /&gt;
## If &amp;lt;math&amp;gt;e_f(v)=0&amp;lt;/math&amp;gt; now, extract &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; from &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;.&lt;br /&gt;
# Otherwise:&lt;br /&gt;
## Let &amp;lt;math&amp;gt;d_\min&amp;lt;/math&amp;gt; denote the minimal label &amp;lt;math&amp;gt;d(w)&amp;lt;/math&amp;gt; over all arcs &amp;lt;math&amp;gt;(v,w)&amp;lt;/math&amp;gt; leaving &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; in the residual network.&lt;br /&gt;
## Set &amp;lt;math&amp;gt;d(v):=d_\min+1&amp;lt;/math&amp;gt;.&lt;br /&gt;
## Reset the current arc of &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; so as to point to the beginning of the list of outgoing arcs of &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
'''Remark:'''&lt;br /&gt;
The preflow-push algorithm is also known as the '''push-relabel''' algorithm. The ''push'' operation is step 3; the ''relabel'' operation is step 4.&lt;br /&gt;
&lt;br /&gt;
'''Proof:'''&lt;br /&gt;
Points 1, 2, and 4 of the invariant and &amp;lt;math&amp;gt;d(s)=n&amp;lt;/math&amp;gt; are obviously fulfilled. The rest of point 3 of the invariant is affected by step 4 only, and the outgoing arcs of &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; are the only arcs where the distance labeling may become invalid. However, step 4 increases &amp;lt;math&amp;gt;d(v)&amp;lt;/math&amp;gt; only to &amp;lt;math&amp;gt;d_\min+1&amp;lt;/math&amp;gt;, the smallest value for which every residual arc leaving &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; satisfies the labeling condition, so point 3 of the invariant is maintained.&lt;br /&gt;
&lt;br /&gt;
To prove the variant, consider a step in which no &amp;lt;math&amp;gt;d&amp;lt;/math&amp;gt;-value is increased and no saturating push is performed. This means step 3.2 is applied, but the arc &amp;lt;math&amp;gt;(v,w)&amp;lt;/math&amp;gt; is not saturated by that. Potentially, &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt; becomes active. However, &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; definitely becomes inactive since the push step is non-saturating. Since &amp;lt;math&amp;gt;d(w)=d(v)-1&amp;lt;/math&amp;gt; for the admissible arc &amp;lt;math&amp;gt;(v,w)&amp;lt;/math&amp;gt;, the sum &amp;lt;math&amp;gt;D&amp;lt;/math&amp;gt; loses &amp;lt;math&amp;gt;d(v)&amp;lt;/math&amp;gt; and gains at most &amp;lt;math&amp;gt;d(v)-1&amp;lt;/math&amp;gt;, so it decreases.&lt;br /&gt;
&lt;br /&gt;
It remains to show termination; this is proved by the following complexity considerations.&lt;br /&gt;
&lt;br /&gt;
== Complexity ==&lt;br /&gt;
&lt;br /&gt;
'''Statement:'''&lt;br /&gt;
The asymptotic complexity is in &amp;lt;math&amp;gt;\mathcal{O}(n^2m)&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;n=|V|&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;m=|A|&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
'''Proof:'''&lt;br /&gt;
First we show that the total number of relabel operations (step 4 of the main loop) is in &amp;lt;math&amp;gt;\mathcal{O}(n^2)&amp;lt;/math&amp;gt;. To see this, let &amp;lt;math&amp;gt;v\in V\setminus\{s,t\}&amp;lt;/math&amp;gt; be an active node between two iterations of the main loop. A straightforward induction over the number of push operations shows that there is at least one simple &amp;lt;math&amp;gt;(s,v)&amp;lt;/math&amp;gt;-path &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt; with positive flow on all arcs. The [[Basic graph definitions#Transpose of a graph|transpose]] of &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt; is [[Basic flow definitions#Flow-augmenting paths and saturated arcs|augmenting]]. Due to the validity of &amp;lt;math&amp;gt;d&amp;lt;/math&amp;gt; (induction hypothesis), &amp;lt;math&amp;gt;d(v)-d(s)=d(v)-n&amp;lt;/math&amp;gt; cannot be larger than the number of arcs on &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;, which is not larger than &amp;lt;math&amp;gt;n-1&amp;lt;/math&amp;gt;. Therefore, no node label can be larger than &amp;lt;math&amp;gt;2n-1&amp;lt;/math&amp;gt;. Since node labels are nonnegative and increase by at least one in each relabel operation, the claimed upper bound on the relabel operations follows.&lt;br /&gt;
&lt;br /&gt;
From this bound, we may immediately conclude that the current arc of a node is reset &amp;lt;math&amp;gt;\mathcal{O}(n)&amp;lt;/math&amp;gt; times, so the total number of forward steps of the current arcs of all nodes is in &amp;lt;math&amp;gt;\mathcal{O}(n^3)\subseteq\mathcal{O}(n^2m)&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The argument in the [[Ahuja-Orlin#Complexity|complexity analysis]] of the [[Ahuja-Orlin]] algorithm which proves that the total number of ''saturating'' push operations is in &amp;lt;math&amp;gt;\mathcal{O}(nm)&amp;lt;/math&amp;gt; applies here as well.&lt;br /&gt;
&lt;br /&gt;
Finally, we consider the ''non-saturating'' push operations. First note that &amp;lt;math&amp;gt;D\geq 0&amp;lt;/math&amp;gt; before and after each iteration. Each relabel operation increases &amp;lt;math&amp;gt;D&amp;lt;/math&amp;gt; by exactly the amount by which the label of the relabeled node is increased. Since node labels are never decreased and are bounded from above by &amp;lt;math&amp;gt;2n-1&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;D&amp;lt;/math&amp;gt; increases by less than &amp;lt;math&amp;gt;2n^2&amp;lt;/math&amp;gt; in total over all relabel operations. On the other hand, a saturating push operation may increase &amp;lt;math&amp;gt;D&amp;lt;/math&amp;gt; by at most &amp;lt;math&amp;gt;2n-1&amp;lt;/math&amp;gt; (namely, in the case that &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt; was not active immediately before the push); since there are &amp;lt;math&amp;gt;\mathcal{O}(nm)&amp;lt;/math&amp;gt; saturating pushes, they contribute &amp;lt;math&amp;gt;\mathcal{O}(n^2m)&amp;lt;/math&amp;gt; in total. In summary, the total amount by which &amp;lt;math&amp;gt;D&amp;lt;/math&amp;gt; is increased throughout the algorithm is in &amp;lt;math&amp;gt;\mathcal{O}(n^2m)&amp;lt;/math&amp;gt;. As shown in the proof of the variant, each non-saturating push operation decreases &amp;lt;math&amp;gt;D&amp;lt;/math&amp;gt; by at least one, so the total number of non-saturating push operations is in &amp;lt;math&amp;gt;\mathcal{O}(n^2m)&amp;lt;/math&amp;gt; as well. This proves the claim.&lt;br /&gt;
&lt;br /&gt;
== Heuristic speedup techniques ==&lt;br /&gt;
&lt;br /&gt;
# After every &amp;lt;math&amp;gt;\Omega(n)&amp;lt;/math&amp;gt; iterations of the main loop, the &amp;lt;math&amp;gt;d&amp;lt;/math&amp;gt;-values are recomputed analogously to the induction basis: as the current distance of each node to &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; in the residual network. Since this recomputation takes place only every &amp;lt;math&amp;gt;\Omega(n)&amp;lt;/math&amp;gt; iterations, the asymptotic complexity is not increased. In practice, this technique may save many unnecessary relabel steps.&lt;br /&gt;
# The main loop may be decomposed into two phases: First, as much flow as possible is sent into &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt;; second, all surplus flow that cannot reach &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; is sent back to &amp;lt;math&amp;gt;s&amp;lt;/math&amp;gt;. The first phase may be finished once there is no path left in the residual network from any active node to &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt;. A sufficient and easy-to-check condition for that is &amp;lt;math&amp;gt;d(v)\geq n&amp;lt;/math&amp;gt; for all active nodes &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt;. All nodes from which &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; is reachable may be safely disregarded in the second phase. For any other node, to save unnecessary relabel operations, the distance label may be safely increased to &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; plus the minimum number of arcs in the residual network from this node back to &amp;lt;math&amp;gt;s&amp;lt;/math&amp;gt;.&lt;/div&gt;</summary>
		<author><name>Weihe</name></author>
	</entry>
	<entry>
		<id>https://wiki.algo.informatik.tu-darmstadt.de/index.php?title=Preflow-push&amp;diff=3876</id>
		<title>Preflow-push</title>
		<link rel="alternate" type="text/html" href="https://wiki.algo.informatik.tu-darmstadt.de/index.php?title=Preflow-push&amp;diff=3876"/>
		<updated>2017-06-12T08:29:18Z</updated>

		<summary type="html">&lt;p&gt;Weihe: /* Induction step */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Abstract view ==&lt;br /&gt;
&lt;br /&gt;
'''Also known as:''' ''push-relabel'' algorithm or ''Goldberg-Tarjan'' algorithm&lt;br /&gt;
&lt;br /&gt;
'''Algorithmic problem:'''&lt;br /&gt;
[[Max-Flow Problems#Standard version|max-flow problem (standard version)]]&lt;br /&gt;
&lt;br /&gt;
'''Type of algorithm:'''&lt;br /&gt;
loop.&lt;br /&gt;
&lt;br /&gt;
'''Auxiliary data:'''&lt;br /&gt;
# A nonnegative integral value &amp;lt;math&amp;gt;d(v)&amp;lt;/math&amp;gt; for each node &amp;lt;math&amp;gt;v\in V&amp;lt;/math&amp;gt;.&lt;br /&gt;
# Each node &amp;lt;math&amp;gt;v\in V\setminus\{t\}&amp;lt;/math&amp;gt; has a '''current arc''', which may be implemented as an iterator on the list of outgoing residual arcs of &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt;.&lt;br /&gt;
# The [[Basic flow definitions#Preflow|excess]] &amp;lt;math&amp;gt;e_f(v)&amp;lt;/math&amp;gt; of a node &amp;lt;math&amp;gt;v\in V\setminus\{s,t\}&amp;lt;/math&amp;gt; with respect to the current [[Basic flow definitions#Preflow|preflow]].&lt;br /&gt;
# A (dynamically changing) [[Sets and sequences|set]] &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; of nodes.&lt;br /&gt;
&lt;br /&gt;
'''Invariant:'''&lt;br /&gt;
Before and after each iteration:&lt;br /&gt;
# For each arc &amp;lt;math&amp;gt;a\in A&amp;lt;/math&amp;gt;, we have &amp;lt;math&amp;gt;0\leq f(a)\leq u(a)&amp;lt;/math&amp;gt;. If all upper bounds are integral, all &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;-values are integral, too.&lt;br /&gt;
# For each node &amp;lt;math&amp;gt;v\in V\setminus\{s,t\}&amp;lt;/math&amp;gt;, we have &amp;lt;math&amp;gt;e_f(v)\geq 0&amp;lt;/math&amp;gt;. In other words, &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt; is a [[Basic flow definitions#Preflow|preflow]].&lt;br /&gt;
# The node labels &amp;lt;math&amp;gt;d&amp;lt;/math&amp;gt; form a [[Basic flow definitions#Valid distance labeling|valid distance labeling]], and &amp;lt;math&amp;gt;d(s)=n:=|V|&amp;lt;/math&amp;gt;.&lt;br /&gt;
# The currently [[Basic flow definitions#Preflow|active nodes]] are exactly the nodes stored in &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;.&lt;br /&gt;
# The current arc of a node is one of that node's outgoing arcs in the residual graph. In the list of all of these arcs, no admissible arc precedes the current arc.&lt;br /&gt;
&lt;br /&gt;
'''Variant:'''&lt;br /&gt;
No label &amp;lt;math&amp;gt;d(\cdot)&amp;lt;/math&amp;gt; is ever decreased. In each iteration, one of the following three actions will take place:&lt;br /&gt;
# The label &amp;lt;math&amp;gt;d(v)&amp;lt;/math&amp;gt; of at least one node &amp;lt;math&amp;gt;v\in V\setminus\{s,t\}&amp;lt;/math&amp;gt; is increased.&lt;br /&gt;
# A saturating push is performed.&lt;br /&gt;
# The value of &amp;lt;math&amp;gt;D:=\sum_{v\in V\setminus\{s,t\}\atop e_f(v)&amp;gt;0}d(v)&amp;lt;/math&amp;gt; decreases.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Break condition:'''&lt;br /&gt;
&amp;lt;math&amp;gt;S=\emptyset&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
== Induction basis ==&lt;br /&gt;
&lt;br /&gt;
'''Abstract view:'''&lt;br /&gt;
# For all arcs &amp;lt;math&amp;gt;a\in A&amp;lt;/math&amp;gt;, set &amp;lt;math&amp;gt;f(a):=0&amp;lt;/math&amp;gt;.&lt;br /&gt;
# For each arc &amp;lt;math&amp;gt;a=(s,v)\in A&amp;lt;/math&amp;gt;, overwrite this value by &amp;lt;math&amp;gt;f(a):=u(a)&amp;lt;/math&amp;gt; and, unless &amp;lt;math&amp;gt;v=t&amp;lt;/math&amp;gt;, put &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; into &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;.&lt;br /&gt;
# Compute a [[Basic flow definitions#Valid distance labeling|valid distance labeling]] &amp;lt;math&amp;gt;d&amp;lt;/math&amp;gt; for &amp;lt;math&amp;gt;V\setminus\{s\}&amp;lt;/math&amp;gt; with respect to &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;, for example, the true distances from all nodes to &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; in the residual network of &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;.&lt;br /&gt;
# Set &amp;lt;math&amp;gt;d(s):=n&amp;lt;/math&amp;gt;.&lt;br /&gt;
# For all &amp;lt;math&amp;gt;v\in V\setminus\{s,t\}&amp;lt;/math&amp;gt;, reset the current arc of &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; so as to point to the first arc in the list of outgoing arcs of &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
'''Proof:'''&lt;br /&gt;
For the [[Basic graph definitions#Subgraphs|subgraph induced]] by &amp;lt;math&amp;gt;V\setminus\{s\}&amp;lt;/math&amp;gt;, the arguments in the [[Ahuja-Orlin#Correctness|correctness proof]] for the [[Ahuja-Orlin]] algorithm prove that the &amp;lt;math&amp;gt;d&amp;lt;/math&amp;gt;-labels form a valid distance labeling here as well. For &amp;lt;math&amp;gt;s&amp;lt;/math&amp;gt;, there is nothing to show because all of its outgoing arcs are saturated.&lt;br /&gt;
&lt;br /&gt;
== Induction step ==&lt;br /&gt;
&lt;br /&gt;
'''Abstract view:'''&lt;br /&gt;
# Choose an [[Basic flow definitions#Preflow|active node]] &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; from &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;.&lt;br /&gt;
# While the current arc of &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; is not void and not [[Basic flow definitions#Valid distance labeling|admissible]] either, move the current arc one step forward.&lt;br /&gt;
# If the current arc of &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; is ''not'' void now but an (admissible) outgoing arc &amp;lt;math&amp;gt;(v,w)&amp;lt;/math&amp;gt;, say:&lt;br /&gt;
## If &amp;lt;math&amp;gt;w\neq s,t&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;e_f(w)=0&amp;lt;/math&amp;gt;, insert &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt; in &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;.&lt;br /&gt;
## Increase the flow over &amp;lt;math&amp;gt;(v,w)&amp;lt;/math&amp;gt; by the minimum of &amp;lt;math&amp;gt;e_f(v)&amp;lt;/math&amp;gt; and the residual capacity of &amp;lt;math&amp;gt;(v,w)&amp;lt;/math&amp;gt;.&lt;br /&gt;
## Increase &amp;lt;math&amp;gt;e_f(w)&amp;lt;/math&amp;gt; by that value and decrease &amp;lt;math&amp;gt;e_f(v)&amp;lt;/math&amp;gt; by the same value.&lt;br /&gt;
## If &amp;lt;math&amp;gt;e_f(v)=0&amp;lt;/math&amp;gt; now, extract &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; from &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;.&lt;br /&gt;
# Otherwise:&lt;br /&gt;
## Let &amp;lt;math&amp;gt;d_\min&amp;lt;/math&amp;gt; denote the minimum label &amp;lt;math&amp;gt;d(w)&amp;lt;/math&amp;gt; over all arcs &amp;lt;math&amp;gt;(v,w)&amp;lt;/math&amp;gt; in the residual network.&lt;br /&gt;
## Set &amp;lt;math&amp;gt;d(v):=d_\min+1&amp;lt;/math&amp;gt;.&lt;br /&gt;
## Reset the current arc of &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; so as to point to the beginning of the list of outgoing arcs of &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
'''Remark:'''&lt;br /&gt;
The preflow-push algorithm is also known as the '''push-relabel''' algorithm. The ''push'' operation is step 3; the ''relabel'' operation is step 4.&lt;br /&gt;
&lt;br /&gt;
'''Proof:'''&lt;br /&gt;
Points 1, 2, and 4 of the invariant and &amp;lt;math&amp;gt;d(s)=n&amp;lt;/math&amp;gt; are obviously fulfilled. The rest of point 3 of the invariant is affected by step 4 only, and the outgoing arcs of &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; are the only arcs where the distance labeling may become invalid. However, step 4 increases &amp;lt;math&amp;gt;d(v)&amp;lt;/math&amp;gt; to the smallest value that keeps all residual arcs leaving &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; valid, which ensures point 3 of the invariant. Point 5 is preserved as well: step 2 skips only inadmissible arcs, an arc skipped this way cannot become admissible again as long as &amp;lt;math&amp;gt;d(v)&amp;lt;/math&amp;gt; is unchanged, and step 4.3 resets the current arc whenever &amp;lt;math&amp;gt;d(v)&amp;lt;/math&amp;gt; increases.&lt;br /&gt;
&lt;br /&gt;
To prove the variant, consider an iteration in which no &amp;lt;math&amp;gt;d&amp;lt;/math&amp;gt;-value is increased and no saturating push is performed. This means that step 3.2 is applied but does not saturate the arc &amp;lt;math&amp;gt;(v,w)&amp;lt;/math&amp;gt;. Potentially, &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt; becomes active. However, &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; definitely becomes inactive since the non-saturating push moves the entire excess of &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt;. Now the variant follows from the fact that &amp;lt;math&amp;gt;d(w)=d(v)-1&amp;lt;/math&amp;gt; for an admissible arc &amp;lt;math&amp;gt;(v,w)&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
It remains to show termination; this is proved by the following complexity considerations.&lt;br /&gt;
&lt;br /&gt;
== Complexity ==&lt;br /&gt;
&lt;br /&gt;
'''Statement:'''&lt;br /&gt;
The asymptotic complexity is in &amp;lt;math&amp;gt;\mathcal{O}(n^2m)&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;n=|V|&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;m=|A|&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
'''Proof:'''&lt;br /&gt;
First we show that the total number of relabel operations (step 4 of the main loop) is in &amp;lt;math&amp;gt;\mathcal{O}(n^2)&amp;lt;/math&amp;gt;. To see this, let &amp;lt;math&amp;gt;v\in V\setminus\{s,t\}&amp;lt;/math&amp;gt; be an active node between two iterations of the main loop. A straightforward induction over the number of push operations shows that there is at least one simple &amp;lt;math&amp;gt;(s,v)&amp;lt;/math&amp;gt;-path &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt; with positive flow on all arcs. The [[Basic graph definitions#Transpose of a graph|transpose]] of &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt; is [[Basic flow definitions#Flow-augmenting paths and saturated arcs|augmenting]]. Due to the validity of &amp;lt;math&amp;gt;d&amp;lt;/math&amp;gt; (induction hypothesis), &amp;lt;math&amp;gt;d(v)-d(s)=d(v)-n&amp;lt;/math&amp;gt; cannot be larger than the number of arcs on &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;, which is not larger than &amp;lt;math&amp;gt;n-1&amp;lt;/math&amp;gt;. Therefore, no node label can be larger than &amp;lt;math&amp;gt;2n-1&amp;lt;/math&amp;gt;. Since node labels are nonnegative and increase by at least one in each relabel operation, the claimed upper bound on the relabel operations follows.&lt;br /&gt;
&lt;br /&gt;
From this bound, we may immediately conclude that the current arc of a node &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; is reset &amp;lt;math&amp;gt;\mathcal{O}(n)&amp;lt;/math&amp;gt; times. Between two resets, the current arc moves forward at most once per outgoing arc of &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt;, so the total number of forward steps of the current arcs of all nodes is in &amp;lt;math&amp;gt;\mathcal{O}(nm)\subseteq\mathcal{O}(n^2m)&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The argument in the [[Ahuja-Orlin#Complexity|complexity analysis]] of the [[Ahuja-Orlin]] algorithm that proves the total number of ''saturating'' push operations to be in &amp;lt;math&amp;gt;\mathcal{O}(nm)&amp;lt;/math&amp;gt; applies here as well.&lt;br /&gt;
&lt;br /&gt;
Finally, we consider the ''non-saturating'' push operations. First note that &amp;lt;math&amp;gt;D\geq 0&amp;lt;/math&amp;gt; before and after each iteration. Each relabel operation increases the value of &amp;lt;math&amp;gt;D&amp;lt;/math&amp;gt; by exactly the amount by which the label of the current node is increased. Since node labels are never decreased and are bounded from above by &amp;lt;math&amp;gt;2n-1&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;D&amp;lt;/math&amp;gt; increases by less than &amp;lt;math&amp;gt;2n^2&amp;lt;/math&amp;gt; in total over all relabel operations. On the other hand, a saturating push operation may increase &amp;lt;math&amp;gt;D&amp;lt;/math&amp;gt; by at most &amp;lt;math&amp;gt;2n-1&amp;lt;/math&amp;gt; (namely, in the case that &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt; was not active immediately before the push). In summary, the total increase of &amp;lt;math&amp;gt;D&amp;lt;/math&amp;gt; throughout the algorithm is in &amp;lt;math&amp;gt;\mathcal{O}(n^2m)&amp;lt;/math&amp;gt;. Due to the variant, the value of &amp;lt;math&amp;gt;D&amp;lt;/math&amp;gt; decreases by at least one in each non-saturating push operation. This proves the claim.&lt;br /&gt;
&lt;br /&gt;
== Heuristic speedup techniques ==&lt;br /&gt;
&lt;br /&gt;
# Every &amp;lt;math&amp;gt;\Omega(n)&amp;lt;/math&amp;gt; iterations of the main loop, the &amp;lt;math&amp;gt;d&amp;lt;/math&amp;gt;-values are recomputed analogously to the induction basis: as the current distance of each node to &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; in the residual network. This recomputation is performed seldom enough that the asymptotic complexity is not increased. In practice, this technique may save many unnecessary relabel steps.&lt;br /&gt;
# The main loop may be decomposed into two phases: First, as much flow as possible is sent into &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt;; second, all surplus flow that cannot reach &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; is sent back to &amp;lt;math&amp;gt;s&amp;lt;/math&amp;gt;. The first phase may be finished once there is no more path in the residual network from any active node to &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt;. A sufficient and easy-to-check condition for that is &amp;lt;math&amp;gt;d(v)\geq n&amp;lt;/math&amp;gt; for all active nodes &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt;. All nodes from which &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; is reachable may be safely disregarded in the second phase. For any other node, to save unnecessary relabel operations, the distance label may be safely increased to &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; plus the minimum number of arcs on a residual path from this node back to &amp;lt;math&amp;gt;s&amp;lt;/math&amp;gt;, in accordance with &amp;lt;math&amp;gt;d(s)=n&amp;lt;/math&amp;gt;.&lt;/div&gt;</summary>
		<author><name>Weihe</name></author>
	</entry>
	<entry>
		<id>https://wiki.algo.informatik.tu-darmstadt.de/index.php?title=Preflow-push&amp;diff=3875</id>
		<title>Preflow-push</title>
		<link rel="alternate" type="text/html" href="https://wiki.algo.informatik.tu-darmstadt.de/index.php?title=Preflow-push&amp;diff=3875"/>
		<updated>2017-06-12T08:28:49Z</updated>

		<summary type="html">&lt;p&gt;Weihe: /* Induction step */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Abstract view ==&lt;br /&gt;
&lt;br /&gt;
'''Also known as:''' ''push-relabel'' algorithm or ''Goldberg-Tarjan'' algorithm&lt;br /&gt;
&lt;br /&gt;
'''Algorithmic problem:'''&lt;br /&gt;
[[Max-Flow Problems#Standard version|max-flow problem (standard version)]]&lt;br /&gt;
&lt;br /&gt;
'''Type of algorithm:'''&lt;br /&gt;
loop.&lt;br /&gt;
&lt;br /&gt;
'''Auxiliary data:'''&lt;br /&gt;
# A nonnegative integral value &amp;lt;math&amp;gt;d(v)&amp;lt;/math&amp;gt; for each node &amp;lt;math&amp;gt;v\in V&amp;lt;/math&amp;gt;.&lt;br /&gt;
# Each node &amp;lt;math&amp;gt;v\in V\setminus\{t\}&amp;lt;/math&amp;gt; has a '''current arc''', which may be implemented as an iterator on the list of outgoing residual arcs of &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt;.&lt;br /&gt;
# The [[Basic flow definitions#Preflow|excess]] &amp;lt;math&amp;gt;e_f(v)&amp;lt;/math&amp;gt; of a node &amp;lt;math&amp;gt;v\in V\setminus\{s,t\}&amp;lt;/math&amp;gt; with respect to the current [[Basic flow definitions#Preflow|preflow]].&lt;br /&gt;
# A (dynamically changing) [[Sets and sequences|set]] &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; of nodes.&lt;br /&gt;
&lt;br /&gt;
'''Invariant:'''&lt;br /&gt;
Before and after each iteration:&lt;br /&gt;
# For each arc &amp;lt;math&amp;gt;a\in A&amp;lt;/math&amp;gt;, it is &amp;lt;math&amp;gt;0\leq f(a)\leq u(a)&amp;lt;/math&amp;gt; . If all upper bounds are integral, all &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;-values are integral, too.&lt;br /&gt;
# For each node &amp;lt;math&amp;gt;v\in V\setminus\{s,t\} &amp;lt;/math&amp;gt;, it is &amp;lt;math&amp;gt;e_f(v)\geq 0&amp;lt;/math&amp;gt;. In other words, &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt; is a [[Basic flow definitions#Preflow|preflow]].&lt;br /&gt;
# The node labels &amp;lt;math&amp;gt;d&amp;lt;/math&amp;gt; form a [[Basic flow definitions#Valid distance labeling|valid distance labeling]], and it is &amp;lt;math&amp;gt;d(s)=n:=|V|&amp;lt;/math&amp;gt;.&lt;br /&gt;
# The currently [[Basic flow definitions#Preflow|active nodes]] are stored in &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;.&lt;br /&gt;
# The current arc of a node is an outgoing arc of the node's in the residual graph. In the list of all of these arcs, no admissible arc precedes the current arc.&lt;br /&gt;
&lt;br /&gt;
'''Variant:'''&lt;br /&gt;
No label &amp;lt;math&amp;gt;d(\cdot)&amp;lt;/math&amp;gt; is ever decreased. In each iteration, one of the following three actions will take place:&lt;br /&gt;
# The label &amp;lt;math&amp;gt;d(v)&amp;lt;/math&amp;gt; of at least one node &amp;lt;math&amp;gt;v\in V\setminus\{s,t\}&amp;lt;/math&amp;gt; is increased.&lt;br /&gt;
# A saturating push is performed.&lt;br /&gt;
# The value of &amp;lt;math&amp;gt;D:=\sum_{v\in V\setminus\{s,t\}\atop e_f(v)&amp;gt;0}d(v)&amp;lt;/math&amp;gt; decreases.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Break condition:'''&lt;br /&gt;
&amp;lt;math&amp;gt;S=\emptyset&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
== Induction basis ==&lt;br /&gt;
&lt;br /&gt;
'''Abstract view:'''&lt;br /&gt;
# For all arcs &amp;lt;math&amp;gt;a\in A&amp;lt;/math&amp;gt;, set &amp;lt;math&amp;gt;f(a):=0&amp;lt;/math&amp;gt;.&lt;br /&gt;
# For each arc &amp;lt;math&amp;gt;(s,v)\in A&amp;lt;/math&amp;gt;, overwrite this value by &amp;lt;math&amp;gt;f(a):=u(a)&amp;lt;/math&amp;gt; and put &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; into &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;.&lt;br /&gt;
# Compute a [[Basic flow definitions#Valid distance labeling|valid distance labeling]] &amp;lt;math&amp;gt;d&amp;lt;/math&amp;gt; for &amp;lt;math&amp;gt;V\setminus\{s\}&amp;lt;/math&amp;gt; with respect to &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;, for example, the true distances from all nodes to &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; in the residual network of &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;.&lt;br /&gt;
# Set &amp;lt;math&amp;gt;d(s):=n&amp;lt;/math&amp;gt;.&lt;br /&gt;
# For all &amp;lt;math&amp;gt;v\in V\setminus\{s,t\}&amp;lt;/math&amp;gt;, reset the current arc of &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; so as to point to the first arc in the list of outgoing arcs of &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
'''Proof:'''&lt;br /&gt;
For the [[Basic graph definitions#Subgraphs|subgraph induced]] by &amp;lt;math&amp;gt;V\setminus\{s\}&amp;lt;/math&amp;gt;, the arguments in the [[Ahuja-Orlin#Correctness|correctness proof]] for the [[Ahuja-Orlin]] algorithm prove that the &amp;lt;math&amp;gt;d&amp;lt;/math&amp;gt;-labels form a valid distance labeling here as well. For &amp;lt;math&amp;gt;s&amp;lt;/math&amp;gt;, there is nothing to show because all of its outgoing arcs are saturated.&lt;br /&gt;
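The initialization steps above can be sketched in Python. This is an illustrative sketch only: the adjacency-dict graph representation, in which graph[v][w] holds the capacity u(v,w), is an assumption for illustration and not part of the article.

```python
from collections import deque

def init_preflow(graph, s, t):
    # Step 1: start from the zero flow.
    n = len(graph)
    f = {(v, w): 0 for v in graph for w in graph[v]}
    e = {v: 0 for v in graph}                 # excesses e_f
    S = set()
    # Step 2: saturate all arcs leaving s; their heads become active.
    for v, cap in graph[s].items():
        f[(s, v)] = cap
        e[v] += cap
        if v != t and cap > 0:
            S.add(v)
    # Step 3: exact distances to t in the residual network, computed by a
    # backward BFS from t.  pred[w] lists the tails of residual arcs (v,w):
    # (v,w) is residual if u(v,w) exceeds f(v,w), and the reverse arc of
    # (v,w) is residual if f(v,w) is positive.
    pred = {v: [] for v in graph}
    for v in graph:
        for w in graph[v]:
            if graph[v][w] > f[(v, w)]:
                pred[w].append(v)
            if f[(v, w)] > 0:
                pred[v].append(w)
    d = {v: n for v in graph}                 # unreachable nodes keep label n
    d[t] = 0
    queue = deque([t])
    while queue:
        w = queue.popleft()
        for v in pred[w]:
            if v != s and d[v] == n:
                d[v] = d[w] + 1
                queue.append(v)
    d[s] = n                                  # step 4
    return f, e, d, S
```

The current-arc pointers of step 5 are omitted here; they only become relevant in the induction step.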
&lt;br /&gt;
== Induction step ==&lt;br /&gt;
&lt;br /&gt;
'''Abstract view:'''&lt;br /&gt;
# Choose an [[Basic flow definitions#Preflow|active node]] &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; from &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;.&lt;br /&gt;
# While the current arc of &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; is not void and not [[Basic flow definitions#Valid distance labeling|admissible]] either, move the current arc one step forward.&lt;br /&gt;
# If the current arc of &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; is ''not'' void now but an (admissible) outgoing arc &amp;lt;math&amp;gt;(v,w)&amp;lt;/math&amp;gt;, say:&lt;br /&gt;
## If &amp;lt;math&amp;gt;w\neq s&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;w\neq t&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;e_f(w)=0&amp;lt;/math&amp;gt;, insert &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt; in &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;.&lt;br /&gt;
## Increase the flow over &amp;lt;math&amp;gt;(v,w)&amp;lt;/math&amp;gt; by the minimum of &amp;lt;math&amp;gt;e_f(v)&amp;lt;/math&amp;gt; and the residual capacity of &amp;lt;math&amp;gt;(v,w)&amp;lt;/math&amp;gt;.&lt;br /&gt;
## Increase &amp;lt;math&amp;gt;e_f(w)&amp;lt;/math&amp;gt; by that value and decrease &amp;lt;math&amp;gt;e_f(v)&amp;lt;/math&amp;gt; by the same value.&lt;br /&gt;
## If &amp;lt;math&amp;gt;e_f(v)=0&amp;lt;/math&amp;gt; now, extract &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; from &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;.&lt;br /&gt;
# Otherwise:&lt;br /&gt;
## Let &amp;lt;math&amp;gt;d_\min&amp;lt;/math&amp;gt; denote the minimum label &amp;lt;math&amp;gt;d(w)&amp;lt;/math&amp;gt; over all arcs &amp;lt;math&amp;gt;(v,w)&amp;lt;/math&amp;gt; in the residual network.&lt;br /&gt;
## Set &amp;lt;math&amp;gt;d(v):=d_\min+1&amp;lt;/math&amp;gt;.&lt;br /&gt;
## Reset the current arc of &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; so as to point to the beginning of the list of outgoing arcs of &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
'''Remark:'''&lt;br /&gt;
The preflow-push algorithm is also known as the '''push-relabel''' algorithm. The ''push'' operation is step 3; the ''relabel'' operation is step 4.&lt;br /&gt;
&lt;br /&gt;
'''Proof:'''&lt;br /&gt;
Points 1, 2, and 4 of the invariant and &amp;lt;math&amp;gt;d(s)=n&amp;lt;/math&amp;gt; are obviously fulfilled. The rest of point 3 of the invariant is affected by step 4 only, and the outgoing arcs of &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; are the only arcs where the distance labeling may become invalid. However, step 4 increases &amp;lt;math&amp;gt;d(v)&amp;lt;/math&amp;gt; to the smallest value that keeps all residual arcs leaving &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; valid, which ensures point 3 of the invariant. Point 5 is preserved as well: step 2 skips only inadmissible arcs, an arc skipped this way cannot become admissible again as long as &amp;lt;math&amp;gt;d(v)&amp;lt;/math&amp;gt; is unchanged, and step 4.3 resets the current arc whenever &amp;lt;math&amp;gt;d(v)&amp;lt;/math&amp;gt; increases.&lt;br /&gt;
&lt;br /&gt;
To prove the variant, consider an iteration in which no &amp;lt;math&amp;gt;d&amp;lt;/math&amp;gt;-value is increased and no saturating push is performed. This means that step 3.2 is applied but does not saturate the arc &amp;lt;math&amp;gt;(v,w)&amp;lt;/math&amp;gt;. Potentially, &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt; becomes active. However, &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; definitely becomes inactive since the non-saturating push moves the entire excess of &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt;. Now the variant follows from the fact that &amp;lt;math&amp;gt;d(w)=d(v)-1&amp;lt;/math&amp;gt; for an admissible arc &amp;lt;math&amp;gt;(v,w)&amp;lt;/math&amp;gt;.&lt;br /&gt;
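Combining the induction basis with the induction step yields the following compact, runnable Python sketch of the whole algorithm. Several choices here are assumptions made only to keep the sketch short: the adjacency-dict graph representation, the arbitrary selection of the active node, and the use of the trivial valid labeling (all labels zero except d(s)=n) in place of the exact BFS distances suggested in the induction basis.

```python
def preflow_push(graph, s, t):
    """Return the maximum flow value from s to t.  Illustrative sketch of
    the generic preflow-push method; graph[v][w] is the capacity u(v,w)."""
    n = len(graph)
    # Residual capacities; every arc also gets a reverse arc of capacity 0.
    res = {v: {} for v in graph}
    for v in graph:
        for w, cap in graph[v].items():
            res[v][w] = res[v].get(w, 0) + cap
            res[w].setdefault(v, 0)
    e = {v: 0 for v in graph}
    d = {v: 0 for v in graph}     # trivial valid labeling (exact distances
    d[s] = n                      # to t would also work)
    for w in res[s]:              # induction basis: saturate arcs out of s
        delta = res[s][w]
        res[s][w] = 0
        res[w][s] += delta
        e[w] += delta
    S = {v for v in graph if v not in (s, t) and e[v] > 0}
    adj = {v: list(res[v]) for v in graph}
    cur = {v: 0 for v in graph}   # current-arc pointers
    while S:                      # break condition: no active nodes left
        v = next(iter(S))         # step 1: choose an active node
        while len(adj[v]) > cur[v]:   # step 2: advance to an admissible arc
            w = adj[v][cur[v]]
            if res[v][w] > 0 and d[v] == d[w] + 1:
                break
            cur[v] += 1
        if len(adj[v]) > cur[v]:      # step 3: push
            w = adj[v][cur[v]]
            delta = min(e[v], res[v][w])
            res[v][w] -= delta
            res[w][v] += delta
            e[v] -= delta
            e[w] += delta
            if w not in (s, t):
                S.add(w)
            if e[v] == 0:
                S.discard(v)
        else:                         # step 4: relabel and reset current arc
            d[v] = 1 + min(d[w] for w in res[v] if res[v][w] > 0)
            cur[v] = 0
    return e[t]                       # flow value = excess accumulated at t
```

Note that the relabel step is always well defined: an active node has positive excess, so at least one reverse residual arc leaves it.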
&lt;br /&gt;
It remains to show termination; this is proved by the following complexity considerations.&lt;br /&gt;
&lt;br /&gt;
== Complexity ==&lt;br /&gt;
&lt;br /&gt;
'''Statement:'''&lt;br /&gt;
The asymptotic complexity is in &amp;lt;math&amp;gt;\mathcal{O}(n^2m)&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;n=|V|&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;m=|A|&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
'''Proof:'''&lt;br /&gt;
First we show that the total number of relabel operations (step 4 of the main loop) is in &amp;lt;math&amp;gt;\mathcal{O}(n^2)&amp;lt;/math&amp;gt;. To see this, let &amp;lt;math&amp;gt;v\in V\setminus\{s,t\}&amp;lt;/math&amp;gt; be an active node between two iterations of the main loop. A straightforward induction over the number of push operations shows that there is at least one simple &amp;lt;math&amp;gt;(s,v)&amp;lt;/math&amp;gt;-path &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt; with positive flow on all arcs. The [[Basic graph definitions#Transpose of a graph|transpose]] of &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt; is [[Basic flow definitions#Flow-augmenting paths and saturated arcs|augmenting]]. Due to the validity of &amp;lt;math&amp;gt;d&amp;lt;/math&amp;gt; (induction hypothesis), &amp;lt;math&amp;gt;d(v)-d(s)=d(v)-n&amp;lt;/math&amp;gt; cannot be larger than the number of arcs on &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;, which is not larger than &amp;lt;math&amp;gt;n-1&amp;lt;/math&amp;gt;. Therefore, no node label can be larger than &amp;lt;math&amp;gt;2n-1&amp;lt;/math&amp;gt;. Since node labels are nonnegative and increase by at least one in each relabel operation, the claimed upper bound on the relabel operations follows.&lt;br /&gt;
&lt;br /&gt;
From this bound, we may immediately conclude that the current arc of a node &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; is reset &amp;lt;math&amp;gt;\mathcal{O}(n)&amp;lt;/math&amp;gt; times. Between two resets, the current arc moves forward at most once per outgoing arc of &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt;, so the total number of forward steps of the current arcs of all nodes is in &amp;lt;math&amp;gt;\mathcal{O}(nm)\subseteq\mathcal{O}(n^2m)&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The argument in the [[Ahuja-Orlin#Complexity|complexity analysis]] of the [[Ahuja-Orlin]] algorithm that proves the total number of ''saturating'' push operations to be in &amp;lt;math&amp;gt;\mathcal{O}(nm)&amp;lt;/math&amp;gt; applies here as well.&lt;br /&gt;
&lt;br /&gt;
Finally, we consider the ''non-saturating'' push operations. First note that &amp;lt;math&amp;gt;D\geq 0&amp;lt;/math&amp;gt; before and after each iteration. Each relabel operation increases the value of &amp;lt;math&amp;gt;D&amp;lt;/math&amp;gt; by exactly the amount by which the label of the current node is increased. Since node labels are never decreased and are bounded from above by &amp;lt;math&amp;gt;2n-1&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;D&amp;lt;/math&amp;gt; increases by less than &amp;lt;math&amp;gt;2n^2&amp;lt;/math&amp;gt; in total over all relabel operations. On the other hand, a saturating push operation may increase &amp;lt;math&amp;gt;D&amp;lt;/math&amp;gt; by at most &amp;lt;math&amp;gt;2n-1&amp;lt;/math&amp;gt; (namely, in the case that &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt; was not active immediately before the push). In summary, the total increase of &amp;lt;math&amp;gt;D&amp;lt;/math&amp;gt; throughout the algorithm is in &amp;lt;math&amp;gt;\mathcal{O}(n^2m)&amp;lt;/math&amp;gt;. Due to the variant, the value of &amp;lt;math&amp;gt;D&amp;lt;/math&amp;gt; decreases by at least one in each non-saturating push operation. This proves the claim.&lt;br /&gt;
&lt;br /&gt;
== Heuristic speedup techniques ==&lt;br /&gt;
&lt;br /&gt;
# Every &amp;lt;math&amp;gt;\Omega(n)&amp;lt;/math&amp;gt; iterations of the main loop, the &amp;lt;math&amp;gt;d&amp;lt;/math&amp;gt;-values are recomputed analogously to the induction basis: as the current distance of each node to &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; in the residual network. This recomputation is performed seldom enough that the asymptotic complexity is not increased. In practice, this technique may save many unnecessary relabel steps.&lt;br /&gt;
# The main loop may be decomposed into two phases: First, as much flow as possible is sent into &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt;; second, all surplus flow that cannot reach &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; is sent back to &amp;lt;math&amp;gt;s&amp;lt;/math&amp;gt;. The first phase may be finished once there is no more path in the residual network from any active node to &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt;. A sufficient and easy-to-check condition for that is &amp;lt;math&amp;gt;d(v)\geq n&amp;lt;/math&amp;gt; for all active nodes &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt;. All nodes from which &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; is reachable may be safely disregarded in the second phase. For any other node, to save unnecessary relabel operations, the distance label may be safely increased to &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; plus the minimum number of arcs on a residual path from this node back to &amp;lt;math&amp;gt;s&amp;lt;/math&amp;gt;, in accordance with &amp;lt;math&amp;gt;d(s)=n&amp;lt;/math&amp;gt;.&lt;/div&gt;</summary>
		<author><name>Weihe</name></author>
	</entry>
	<entry>
		<id>https://wiki.algo.informatik.tu-darmstadt.de/index.php?title=Three_indians%27_algorithm&amp;diff=3874</id>
		<title>Three indians' algorithm</title>
		<link rel="alternate" type="text/html" href="https://wiki.algo.informatik.tu-darmstadt.de/index.php?title=Three_indians%27_algorithm&amp;diff=3874"/>
		<updated>2017-03-16T11:27:41Z</updated>

		<summary type="html">&lt;p&gt;Weihe: /* Induction step */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== General information ==&lt;br /&gt;
&lt;br /&gt;
'''Algorithmic problem:''' [[Blocking flow]].&lt;br /&gt;
&lt;br /&gt;
'''Type of algorithm:''' loop.&lt;br /&gt;
&lt;br /&gt;
== Abstract view ==&lt;br /&gt;
&lt;br /&gt;
'''Invariant:'''&lt;br /&gt;
The current flow is feasible.&lt;br /&gt;
&lt;br /&gt;
'''Variant:'''&lt;br /&gt;
The number of nodes strictly decreases.&lt;br /&gt;
&lt;br /&gt;
'''Break condition:'''&lt;br /&gt;
There is no more [[Basic graph definitions#Paths|ordinary &amp;lt;math&amp;gt;(s,t)&amp;lt;/math&amp;gt;-path]] in &amp;lt;math&amp;gt;G&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
== Induction basis ==&lt;br /&gt;
&lt;br /&gt;
'''Abstract view:'''&lt;br /&gt;
The flow is initialized to be feasible, for example, the zero flow.&lt;br /&gt;
&lt;br /&gt;
'''Proof:'''&lt;br /&gt;
Obvious.&lt;br /&gt;
&lt;br /&gt;
== Induction step ==&lt;br /&gt;
&lt;br /&gt;
'''Abstract view:'''&lt;br /&gt;
Choose the node &amp;lt;math&amp;gt;v_0&amp;lt;/math&amp;gt; through which the minimum amount of flow may go, and propagate this amount from &amp;lt;math&amp;gt;v_0&amp;lt;/math&amp;gt; forward to &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; and backward to &amp;lt;math&amp;gt;s&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
'''Implementation:'''&lt;br /&gt;
# For each node &amp;lt;math&amp;gt;v\in V\setminus\{s,t\}&amp;lt;/math&amp;gt;, let &amp;lt;math&amp;gt;T(v)&amp;lt;/math&amp;gt; denote the '''throughput''' of &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt;, which is defined by &amp;lt;math&amp;gt;T(v):=\min\left\{\sum_{w:(v,w)\in A}u(v,w),\sum_{w:(w,v)\in A}u(w,v)\right\}&amp;lt;/math&amp;gt;.&lt;br /&gt;
# Remove all nodes with zero throughput and all arcs incident to at least one of these nodes.&lt;br /&gt;
# Let &amp;lt;math&amp;gt;v_0\in V\setminus\{s,t\}&amp;lt;/math&amp;gt; be a node with minimum throughput &amp;lt;math&amp;gt;T(v_0)&amp;lt;/math&amp;gt;.&lt;br /&gt;
# Set &amp;lt;math&amp;gt;F(v_0):=T(v_0)&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;F(v):=0&amp;lt;/math&amp;gt; for all nodes &amp;lt;math&amp;gt;v\in V\setminus\{s,t,v_0\}&amp;lt;/math&amp;gt;.&lt;br /&gt;
# Run a modified [[Breadth-first search|BFS]] from &amp;lt;math&amp;gt;v_0&amp;lt;/math&amp;gt;, where for every processed arc &amp;lt;math&amp;gt;(v,w)\in A&amp;lt;/math&amp;gt;:&lt;br /&gt;
## Let &amp;lt;math&amp;gt;\Delta:=\min\{u(v,w),F(v)\}&amp;lt;/math&amp;gt;.&lt;br /&gt;
## Set the flow over &amp;lt;math&amp;gt;(v,w)&amp;lt;/math&amp;gt; to &amp;lt;math&amp;gt;\Delta&amp;lt;/math&amp;gt;.&lt;br /&gt;
## Increase &amp;lt;math&amp;gt;F(w)&amp;lt;/math&amp;gt; by &amp;lt;math&amp;gt;\Delta&amp;lt;/math&amp;gt;.&lt;br /&gt;
## Decrease &amp;lt;math&amp;gt;F(v)&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;T(v)&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;u(v,w)&amp;lt;/math&amp;gt; by &amp;lt;math&amp;gt;\Delta&amp;lt;/math&amp;gt;.&lt;br /&gt;
## If &amp;lt;math&amp;gt;u(v,w)=0&amp;lt;/math&amp;gt;, remove &amp;lt;math&amp;gt;(v,w)&amp;lt;/math&amp;gt; from &amp;lt;math&amp;gt;G&amp;lt;/math&amp;gt;.&lt;br /&gt;
## If &amp;lt;math&amp;gt;F(v)=0&amp;lt;/math&amp;gt;, finish this iteration.&lt;br /&gt;
# Run steps 4+5 from &amp;lt;math&amp;gt;v_0&amp;lt;/math&amp;gt; on the [[Basic graph definitions#Transpose of a graph|transpose]] of &amp;lt;math&amp;gt;G&amp;lt;/math&amp;gt; with the value of &amp;lt;math&amp;gt;T(v_0)&amp;lt;/math&amp;gt; as computed in step 1 (all removals apply to &amp;lt;math&amp;gt;G&amp;lt;/math&amp;gt;).&lt;br /&gt;
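Steps 1 to 3 of the implementation above can be sketched in Python. The adjacency-dict representation, in which graph[v][w] holds the remaining capacity u(v,w), and the helper name min_throughput_node are assumptions for illustration.

```python
def min_throughput_node(graph, s, t):
    """Compute node throughputs, discard zero-throughput nodes together
    with their incident arcs, and return a minimum-throughput node v0,
    its throughput, and the pruned graph (illustrative sketch)."""
    # Step 1: throughput T(v) = min(sum of incoming, sum of outgoing caps).
    inflow = {v: 0 for v in graph}
    outflow = {v: 0 for v in graph}
    for v in graph:
        for w, cap in graph[v].items():
            outflow[v] += cap
            inflow[w] += cap
    T = {v: min(inflow[v], outflow[v])
         for v in graph if v not in (s, t)}
    # Step 2: zero-throughput nodes cannot carry flow; remove them and
    # every arc incident to them.
    removed = {v for v, tv in T.items() if tv == 0}
    for v in removed:
        del T[v]
    live = {v: {w: c for w, c in graph[v].items() if w not in removed}
            for v in graph if v not in removed}
    if not T:
        return None, None, live   # no interior node left
    v0 = min(T, key=T.get)        # step 3: a minimum-throughput node
    return v0, T[v0], live
```

The propagation of T(v0) units of flow (steps 4 and 5, and step 6 on the transpose) would then run the modified BFS described above on the pruned graph.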
&lt;br /&gt;
'''Proof:'''&lt;br /&gt;
Consider step 5 (step 6 is analogous). The specific choice of &amp;lt;math&amp;gt;v_0&amp;lt;/math&amp;gt; ensures &amp;lt;math&amp;gt;F(v)\leq T(v_0)\leq T(v)&amp;lt;/math&amp;gt; for each node &amp;lt;math&amp;gt;v\in V&amp;lt;/math&amp;gt; at any time. Therefore, all flow that has arrived at &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; can be moved forward along the arcs leaving &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt;. In other words, &amp;lt;math&amp;gt;F(v)=0&amp;lt;/math&amp;gt; when &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; is finished. Since &amp;lt;math&amp;gt;G&amp;lt;/math&amp;gt; is an [[Basic graph definitions#Cycles|&amp;lt;math&amp;gt;(s,t)&amp;lt;/math&amp;gt;-graph]], the result is a feasible &amp;lt;math&amp;gt;(s,t)&amp;lt;/math&amp;gt;-flow, and the flow value has increased by the initial value of &amp;lt;math&amp;gt;T(v_0)&amp;lt;/math&amp;gt;. In particular, &amp;lt;math&amp;gt;T(v_0)=0&amp;lt;/math&amp;gt; at the end, so at least &amp;lt;math&amp;gt;v_0&amp;lt;/math&amp;gt; will be removed in this iteration.&lt;br /&gt;
&lt;br /&gt;
== Correctness ==&lt;br /&gt;
&lt;br /&gt;
When a node is removed, either all of its outgoing arcs or all of its incoming arcs are saturated. Therefore, an arc is removed only if it is saturated itself, or all of its immediate predecessors or all of its immediate successors are saturated. In particular, an arc is removed only once it no longer lies on any flow-augmenting path. Therefore, when the break condition is fulfilled, the flow is blocking. Termination follows from the complexity considerations below.&lt;br /&gt;
&lt;br /&gt;
== Complexity ==&lt;br /&gt;
&lt;br /&gt;
'''Statement:'''&lt;br /&gt;
The asymptotic complexity is in &amp;lt;math&amp;gt;\mathcal{O}(n^2)&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;n=|V|&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
'''Proof:'''&lt;br /&gt;
Due to the variant, the number of iterations is linear in &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt;. When an arc is saturated, it is removed in step 5.5 (resp., 6.5), so the total number of saturating increases of flow values of arcs over all iterations of the main loop is linear in the number of arcs, which is in &amp;lt;math&amp;gt;\mathcal{O}(n^2)&amp;lt;/math&amp;gt;. In each execution of step 5 (resp., step 6), the flow on at most one outgoing (resp., incoming) arc of each node is increased without saturation, so the number of ''non''-saturating flow increases is linear in the number of nodes in each iteration, which gives &amp;lt;math&amp;gt;\mathcal{O}(n^2)&amp;lt;/math&amp;gt; over all iterations.&lt;br /&gt;
&lt;br /&gt;
== Remarks ==&lt;br /&gt;
&lt;br /&gt;
# The algorithm is named after three Indian researchers: V. M. Malhotra, M. Pramodh Kumar, and S. N. Maheshwari.&lt;br /&gt;
# Of course, the nodes and arcs need not really be removed from the graph. However, &amp;quot;removed&amp;quot; nodes and arcs must be hidden from the algorithm to ensure the asymptotic complexity; a Boolean label &amp;quot;is removed&amp;quot; does not suffice for that.&lt;br /&gt;
# This application of [[Breadth-first search|BFS]] is an example of one of the remarks on [[Graph traversal#Remarks|graph traversal]]: It is reasonable to implement graph-traversal algorithms as iterators. The modification described above may then simply be added to the loop that runs the iterator.&lt;/div&gt;</summary>
		<author><name>Weihe</name></author>
	</entry>
	<entry>
		<id>https://wiki.algo.informatik.tu-darmstadt.de/index.php?title=Three_indians%27_algorithm&amp;diff=3873</id>
		<title>Three indians' algorithm</title>
		<link rel="alternate" type="text/html" href="https://wiki.algo.informatik.tu-darmstadt.de/index.php?title=Three_indians%27_algorithm&amp;diff=3873"/>
		<updated>2017-03-16T11:26:41Z</updated>

		<summary type="html">&lt;p&gt;Weihe: /* Induction step */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== General information ==&lt;br /&gt;
&lt;br /&gt;
'''Algorithmic problem:''' [[Blocking flow]].&lt;br /&gt;
&lt;br /&gt;
'''Type of algorithm:''' loop.&lt;br /&gt;
&lt;br /&gt;
== Abstract view ==&lt;br /&gt;
&lt;br /&gt;
'''Invariant:'''&lt;br /&gt;
The current flow is feasible.&lt;br /&gt;
&lt;br /&gt;
'''Variant:'''&lt;br /&gt;
The number of nodes strictly decreases.&lt;br /&gt;
&lt;br /&gt;
'''Break condition:'''&lt;br /&gt;
There is no more [[Basic graph definitions#Paths|ordinary &amp;lt;math&amp;gt;(s,t)&amp;lt;/math&amp;gt;-path]] in &amp;lt;math&amp;gt;G&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
== Induction basis ==&lt;br /&gt;
&lt;br /&gt;
'''Abstract view:'''&lt;br /&gt;
The flow is initialized to be feasible, for example, the zero flow.&lt;br /&gt;
&lt;br /&gt;
'''Proof:'''&lt;br /&gt;
Obvious.&lt;br /&gt;
&lt;br /&gt;
== Induction step ==&lt;br /&gt;
&lt;br /&gt;
'''Abstract view:'''&lt;br /&gt;
Choose the node &amp;lt;math&amp;gt;v_0&amp;lt;/math&amp;gt; through which the minimum amount of flow may go, and propagate this amount from &amp;lt;math&amp;gt;v_0&amp;lt;/math&amp;gt; forward to &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; and backward to &amp;lt;math&amp;gt;s&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
'''Implementation:'''&lt;br /&gt;
# For each node &amp;lt;math&amp;gt;v\in V\setminus\{s,t\}&amp;lt;/math&amp;gt;, let &amp;lt;math&amp;gt;T(v)&amp;lt;/math&amp;gt; denote the '''throughput''' of &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt;, which is defined by &amp;lt;math&amp;gt;T(v):=\min\left\{\sum_{w:(v,w)\in A}u(v,w),\sum_{w:(w,v)\in A}u(w,v)\right\}&amp;lt;/math&amp;gt;.&lt;br /&gt;
# Remove all nodes with zero throughput and all arcs incident to at least one of these nodes.&lt;br /&gt;
# Let &amp;lt;math&amp;gt;v_0\in V\setminus\{s,t\}&amp;lt;/math&amp;gt; be a node with minimum throughput &amp;lt;math&amp;gt;T(v_0)&amp;lt;/math&amp;gt;.&lt;br /&gt;
# Set &amp;lt;math&amp;gt;F(v_0):=T(v_0)&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;F(v):=0&amp;lt;/math&amp;gt; for all nodes &amp;lt;math&amp;gt;v\in V\setminus\{s,t,v_0\}&amp;lt;/math&amp;gt;.&lt;br /&gt;
# Run a modified [[Breadth-first search|BFS]] from &amp;lt;math&amp;gt;v_0&amp;lt;/math&amp;gt;, where for every processed arc &amp;lt;math&amp;gt;(v,w)\in A&amp;lt;/math&amp;gt;:&lt;br /&gt;
## Let &amp;lt;math&amp;gt;\Delta:=\min\{u(v,w),F(v)\}&amp;lt;/math&amp;gt;.&lt;br /&gt;
## Set the flow over &amp;lt;math&amp;gt;(v,w)&amp;lt;/math&amp;gt; to &amp;lt;math&amp;gt;\Delta&amp;lt;/math&amp;gt;.&lt;br /&gt;
## Increase &amp;lt;math&amp;gt;F(w)&amp;lt;/math&amp;gt; by &amp;lt;math&amp;gt;\Delta&amp;lt;/math&amp;gt;.&lt;br /&gt;
## Decrease &amp;lt;math&amp;gt;F(v)&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;T(v)&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;u(v,w)&amp;lt;/math&amp;gt; by &amp;lt;math&amp;gt;\Delta&amp;lt;/math&amp;gt;.&lt;br /&gt;
## If &amp;lt;math&amp;gt;u(v,w)=0&amp;lt;/math&amp;gt;, remove &amp;lt;math&amp;gt;(v,w)&amp;lt;/math&amp;gt; from &amp;lt;math&amp;gt;G&amp;lt;/math&amp;gt;.&lt;br /&gt;
## If &amp;lt;math&amp;gt;F(v)=0&amp;lt;/math&amp;gt;, finish this iteration.&lt;br /&gt;
# Run steps 4+5 from &amp;lt;math&amp;gt;v_0&amp;lt;/math&amp;gt; on the [[Basic graph definitions#Transpose of a graph|transpose]] of &amp;lt;math&amp;gt;G&amp;lt;/math&amp;gt; with the value of &amp;lt;math&amp;gt;T(v_0)&amp;lt;/math&amp;gt; as computed in step 1 (all removals apply to &amp;lt;math&amp;gt;G&amp;lt;/math&amp;gt;).&lt;br /&gt;
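The induction step above can be sketched in code. The following Python sketch is illustrative only, not the article's reference implementation: it performs steps 1 and 3&amp;ndash;5 (the forward push from &amp;lt;math&amp;gt;v_0&amp;lt;/math&amp;gt;) on a layered graph, omits step 2, and assumes capacities are kept in a dictionary keyed by arc; the backward push of step 6 would run analogously on the transpose. All function and variable names (&lt;code&gt;mpm_forward_push&lt;/code&gt;, &lt;code&gt;succ&lt;/code&gt;, &lt;code&gt;u&lt;/code&gt;) are assumptions introduced here.

```python
from collections import deque

def mpm_forward_push(succ, u, s, t):
    """One forward push of the three Indians' (MPM) algorithm (steps 1, 3-5).

    succ: adjacency dict node -> list of successors; assumed to be a layered
          graph, so BFS processes every node after all of its predecessors.
    u:    residual capacities, u[(v, w)] for each arc (v, w).
    Returns (v0, amount of flow arriving at t, flow dict).
    """
    nodes = set(succ) | {w for ws in succ.values() for w in ws}
    pred = {v: [] for v in nodes}
    for v, ws in succ.items():
        for w in ws:
            pred[w].append(v)
    # Step 1: throughput T(v) = min(total outgoing, total incoming capacity).
    T = {v: min(sum(u[(v, w)] for w in succ.get(v, ())),
                sum(u[(p, v)] for p in pred[v]))
         for v in nodes if v not in (s, t)}
    # Step 3: a node with minimum throughput.
    v0 = min(T, key=T.get)
    # Step 4: F(v0) := T(v0); F(v) := 0 for all other nodes.
    F = {v: 0 for v in nodes}
    F[v0] = T[v0]
    flow = {a: 0 for a in u}
    # Step 5: modified BFS from v0, pushing all arrived flow forward.
    queue, seen = deque([v0]), {v0}
    while queue:
        v = queue.popleft()
        if v == t:
            continue  # flow arriving at t leaves the network
        for w in succ.get(v, ()):
            if F[v] == 0:
                break                          # step 5.6: v is finished
            delta = min(u[(v, w)], F[v])       # step 5.1
            if delta == 0:
                continue
            flow[(v, w)] = delta               # step 5.2
            F[w] += delta                      # step 5.3
            F[v] -= delta                      # step 5.4
            u[(v, w)] -= delta
            if w not in seen:                  # step 5.5 (saturated arcs are
                seen.add(w)                    # simply left with u = 0 here)
                queue.append(w)
    return v0, F[t], flow
```

In a layered graph, the FIFO queue guarantees that all flow destined for a node has accumulated in &lt;code&gt;F&lt;/code&gt; before that node is dequeued, which is exactly the property the proof below relies on.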
&lt;br /&gt;
'''Proof:'''&lt;br /&gt;
Consider step 5 (step 6 is analogous). The specific choice of &amp;lt;math&amp;gt;v_0&amp;lt;/math&amp;gt; ensures &amp;lt;math&amp;gt;F(v)\leq T(v_0)\leq T(v)&amp;lt;/math&amp;gt; for each node &amp;lt;math&amp;gt;v\in V&amp;lt;/math&amp;gt; at any time. Therefore, all flow that has arrived at &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; can be moved forward along the arcs leaving &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt;; in other words, &amp;lt;math&amp;gt;F(v)=0&amp;lt;/math&amp;gt; holds when &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; is finished. Since &amp;lt;math&amp;gt;G&amp;lt;/math&amp;gt; is an [[Basic graph definitions#Cycles|&amp;lt;math&amp;gt;(s,t)&amp;lt;/math&amp;gt;-graph]], the result is a feasible &amp;lt;math&amp;gt;(s,t)&amp;lt;/math&amp;gt;-flow, and the flow value has increased by the initial value of &amp;lt;math&amp;gt;T(v_0)&amp;lt;/math&amp;gt;. In particular, &amp;lt;math&amp;gt;T(v_0)=0&amp;lt;/math&amp;gt; at the end, so at least &amp;lt;math&amp;gt;v_0&amp;lt;/math&amp;gt; will be removed in this iteration.&lt;br /&gt;
&lt;br /&gt;
== Correctness ==&lt;br /&gt;
&lt;br /&gt;
When a node is removed, either all of its outgoing arcs or all of its ingoing arcs are saturated. Therefore, an arc is removed only if the arc itself is saturated or all of its immediate predecessors or all of its immediate successors are saturated. In particular, an arc is removed only if it is no longer on any flow-augmenting path. Hence, when the break condition is fulfilled, the flow is blocking. Termination follows from the complexity considerations below.&lt;br /&gt;
&lt;br /&gt;
== Complexity ==&lt;br /&gt;
&lt;br /&gt;
'''Statement:'''&lt;br /&gt;
The asymptotic complexity is in &amp;lt;math&amp;gt;\mathcal{O}(n^2)&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;n=|V|&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
'''Proof:'''&lt;br /&gt;
Due to the variant, the number of iterations is linear in &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt;. When an arc is saturated, it is removed in step 5.5 (resp., 6.5), so the total number of saturating increases of flow values of arcs over all iterations of the main loop is linear in the number of arcs, which is in &amp;lt;math&amp;gt;\mathcal{O}(n^2)&amp;lt;/math&amp;gt;. In each execution of step 5 (resp., step 6), the flow on at most one outgoing (resp., incoming) arc of each node is increased without saturation, so the number of ''non''-saturating flow value settings is linear in the number of nodes in each iteration.&lt;br /&gt;
&lt;br /&gt;
== Remarks ==&lt;br /&gt;
&lt;br /&gt;
# The algorithm is named after three Indian researchers, V. M. Malhotra, M. Pramodh Kumar, and S. N. Maheshwari.&lt;br /&gt;
# Of course, the nodes and arcs need not really be removed from the graph. However, &amp;quot;removed&amp;quot; nodes and arcs must be hidden from the algorithm to ensure the asymptotic complexity; a Boolean label &amp;quot;is removed&amp;quot; does not suffice for that.&lt;br /&gt;
# This application of [[Breadth-first search|BFS]] illustrates one of the remarks on [[Graph traversal#Remarks|graph traversal]]: it is reasonable to implement graph-traversal algorithms as iterators, because the modification can then simply be added to the loop that runs the iterator.&lt;/div&gt;</summary>
		<author><name>Weihe</name></author>
	</entry>
	<entry>
		<id>https://wiki.algo.informatik.tu-darmstadt.de/index.php?title=Three_indians%27_algorithm&amp;diff=3872</id>
		<title>Three indians' algorithm</title>
		<link rel="alternate" type="text/html" href="https://wiki.algo.informatik.tu-darmstadt.de/index.php?title=Three_indians%27_algorithm&amp;diff=3872"/>
		<updated>2017-03-16T11:26:24Z</updated>

		<summary type="html">&lt;p&gt;Weihe: /* Induction step */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== General information ==&lt;br /&gt;
&lt;br /&gt;
'''Algorithmic problem:''' [[Blocking flow]].&lt;br /&gt;
&lt;br /&gt;
'''Type of algorithm:''' loop.&lt;br /&gt;
&lt;br /&gt;
== Abstract view ==&lt;br /&gt;
&lt;br /&gt;
'''Invariant:'''&lt;br /&gt;
The current flow is feasible.&lt;br /&gt;
&lt;br /&gt;
'''Variant:'''&lt;br /&gt;
The number of nodes strictly decreases.&lt;br /&gt;
&lt;br /&gt;
'''Break condition:'''&lt;br /&gt;
There is no more [[Basic graph definitions#Paths|ordinary &amp;lt;math&amp;gt;(s,t)&amp;lt;/math&amp;gt;-path]] in &amp;lt;math&amp;gt;G&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
== Induction basis ==&lt;br /&gt;
&lt;br /&gt;
'''Abstract view:'''&lt;br /&gt;
The flow is initialized to be feasible, for example, the zero flow.&lt;br /&gt;
&lt;br /&gt;
'''Proof:'''&lt;br /&gt;
Obvious.&lt;br /&gt;
&lt;br /&gt;
== Induction step ==&lt;br /&gt;
&lt;br /&gt;
'''Abstract view:'''&lt;br /&gt;
Choose the node &amp;lt;math&amp;gt;v_0&amp;lt;/math&amp;gt; through which the minimum amount of flow may go, and propagate this amount from &amp;lt;math&amp;gt;v_0&amp;lt;/math&amp;gt; forward to &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; and backward to &amp;lt;math&amp;gt;s&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
'''Implementation:'''&lt;br /&gt;
# For each node &amp;lt;math&amp;gt;v\in V\setminus\{s,t\}&amp;lt;/math&amp;gt;, let &amp;lt;math&amp;gt;T(v)&amp;lt;/math&amp;gt; denote the '''throughput''' of &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt;, which is defined by &amp;lt;math&amp;gt;T(v):=\min\left\{\sum_{w:(v,w)\in A}u(v,w),\sum_{w:(w,v)\in A}u(w,v)\right\}&amp;lt;/math&amp;gt;.&lt;br /&gt;
# Remove all nodes with zero throughput and all arcs incident to at least one of these nodes.&lt;br /&gt;
# Let &amp;lt;math&amp;gt;v_0\in V\setminus\{s,t\}&amp;lt;/math&amp;gt; be a node with minimum throughput &amp;lt;math&amp;gt;T(v_0)&amp;lt;/math&amp;gt;.&lt;br /&gt;
# Set &amp;lt;math&amp;gt;F(v_0):=T(v_0)&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;F(v):=0&amp;lt;/math&amp;gt; for all nodes &amp;lt;math&amp;gt;v\in V\setminus\{s,t,v_0\}&amp;lt;/math&amp;gt;.&lt;br /&gt;
# Run a modified [[Breadth-first search|BFS]] from &amp;lt;math&amp;gt;v_0&amp;lt;/math&amp;gt;, where for every processed arc &amp;lt;math&amp;gt;(v,w)\in A&amp;lt;/math&amp;gt;:&lt;br /&gt;
## Let &amp;lt;math&amp;gt;\Delta:=\min\{u(v,w),F(v)\}&amp;lt;/math&amp;gt;.&lt;br /&gt;
## Set the flow over &amp;lt;math&amp;gt;(v,w)&amp;lt;/math&amp;gt; to &amp;lt;math&amp;gt;\Delta&amp;lt;/math&amp;gt;.&lt;br /&gt;
## Increase &amp;lt;math&amp;gt;F(w)&amp;lt;/math&amp;gt; by &amp;lt;math&amp;gt;\Delta&amp;lt;/math&amp;gt;.&lt;br /&gt;
## Decrease &amp;lt;math&amp;gt;F(v)&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;T(v)&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;u(v,w)&amp;lt;/math&amp;gt; by &amp;lt;math&amp;gt;\Delta&amp;lt;/math&amp;gt;.&lt;br /&gt;
## If &amp;lt;math&amp;gt;u(v,w)=0&amp;lt;/math&amp;gt;, remove &amp;lt;math&amp;gt;(v,w)&amp;lt;/math&amp;gt; from &amp;lt;math&amp;gt;G&amp;lt;/math&amp;gt;.&lt;br /&gt;
## If &amp;lt;math&amp;gt;F(v)=0&amp;lt;/math&amp;gt;, finish this iteration.&lt;br /&gt;
# Run steps 4+5 from &amp;lt;math&amp;gt;v_0&amp;lt;/math&amp;gt; on the [[Basic graph definitions#Transpose of a graph|transpose]] of &amp;lt;math&amp;gt;G&amp;lt;/math&amp;gt; with the value of &amp;lt;math&amp;gt;T(v_0)&amp;lt;/math&amp;gt; as computed in step 1 (all removals apply to &amp;lt;math&amp;gt;G&amp;lt;/math&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
'''Proof:'''&lt;br /&gt;
Consider step 5 (step 6 is analogous). The specific choice of &amp;lt;math&amp;gt;v_0&amp;lt;/math&amp;gt; ensures &amp;lt;math&amp;gt;F(v)\leq T(v_0)\leq T(v)&amp;lt;/math&amp;gt; for each node &amp;lt;math&amp;gt;v\in V&amp;lt;/math&amp;gt; at any time. Therefore, all flow that has arrived at &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; can be moved forward along the arcs leaving &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt;; in other words, &amp;lt;math&amp;gt;F(v)=0&amp;lt;/math&amp;gt; holds when &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; is finished. Since &amp;lt;math&amp;gt;G&amp;lt;/math&amp;gt; is an [[Basic graph definitions#Cycles|&amp;lt;math&amp;gt;(s,t)&amp;lt;/math&amp;gt;-graph]], the result is a feasible &amp;lt;math&amp;gt;(s,t)&amp;lt;/math&amp;gt;-flow, and the flow value has increased by the initial value of &amp;lt;math&amp;gt;T(v_0)&amp;lt;/math&amp;gt;. In particular, &amp;lt;math&amp;gt;T(v_0)=0&amp;lt;/math&amp;gt; at the end, so at least &amp;lt;math&amp;gt;v_0&amp;lt;/math&amp;gt; will be removed in this iteration.&lt;br /&gt;
&lt;br /&gt;
== Correctness ==&lt;br /&gt;
&lt;br /&gt;
When a node is removed, either all of its outgoing arcs or all of its ingoing arcs are saturated. Therefore, an arc is removed only if the arc itself is saturated or all of its immediate predecessors or all of its immediate successors are saturated. In particular, an arc is removed only if it is no longer on any flow-augmenting path. Hence, when the break condition is fulfilled, the flow is blocking. Termination follows from the complexity considerations below.&lt;br /&gt;
&lt;br /&gt;
== Complexity ==&lt;br /&gt;
&lt;br /&gt;
'''Statement:'''&lt;br /&gt;
The asymptotic complexity is in &amp;lt;math&amp;gt;\mathcal{O}(n^2)&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;n=|V|&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
'''Proof:'''&lt;br /&gt;
Due to the variant, the number of iterations is linear in &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt;. When an arc is saturated, it is removed in step 5.5 (resp., 6.5), so the total number of saturating increases of flow values of arcs over all iterations of the main loop is linear in the number of arcs, which is in &amp;lt;math&amp;gt;\mathcal{O}(n^2)&amp;lt;/math&amp;gt;. In each execution of step 5 (resp., step 6), the flow on at most one outgoing (resp., incoming) arc of each node is increased without saturation, so the number of ''non''-saturating flow value settings is linear in the number of nodes in each iteration.&lt;br /&gt;
&lt;br /&gt;
== Remarks ==&lt;br /&gt;
&lt;br /&gt;
# The algorithm is named after three Indian researchers, V. M. Malhotra, M. Pramodh Kumar, and S. N. Maheshwari.&lt;br /&gt;
# Of course, the nodes and arcs need not really be removed from the graph. However, &amp;quot;removed&amp;quot; nodes and arcs must be hidden from the algorithm to ensure the asymptotic complexity; a Boolean label &amp;quot;is removed&amp;quot; does not suffice for that.&lt;br /&gt;
# This application of [[Breadth-first search|BFS]] illustrates one of the remarks on [[Graph traversal#Remarks|graph traversal]]: it is reasonable to implement graph-traversal algorithms as iterators, because the modification can then simply be added to the loop that runs the iterator.&lt;/div&gt;</summary>
		<author><name>Weihe</name></author>
	</entry>
	<entry>
		<id>https://wiki.algo.informatik.tu-darmstadt.de/index.php?title=Three_indians%27_algorithm&amp;diff=3871</id>
		<title>Three indians' algorithm</title>
		<link rel="alternate" type="text/html" href="https://wiki.algo.informatik.tu-darmstadt.de/index.php?title=Three_indians%27_algorithm&amp;diff=3871"/>
		<updated>2017-03-16T11:25:18Z</updated>

		<summary type="html">&lt;p&gt;Weihe: /* Induction step */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== General information ==&lt;br /&gt;
&lt;br /&gt;
'''Algorithmic problem:''' [[Blocking flow]].&lt;br /&gt;
&lt;br /&gt;
'''Type of algorithm:''' loop.&lt;br /&gt;
&lt;br /&gt;
== Abstract view ==&lt;br /&gt;
&lt;br /&gt;
'''Invariant:'''&lt;br /&gt;
The current flow is feasible.&lt;br /&gt;
&lt;br /&gt;
'''Variant:'''&lt;br /&gt;
The number of nodes strictly decreases.&lt;br /&gt;
&lt;br /&gt;
'''Break condition:'''&lt;br /&gt;
There is no more [[Basic graph definitions#Paths|ordinary &amp;lt;math&amp;gt;(s,t)&amp;lt;/math&amp;gt;-path]] in &amp;lt;math&amp;gt;G&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
== Induction basis ==&lt;br /&gt;
&lt;br /&gt;
'''Abstract view:'''&lt;br /&gt;
The flow is initialized to be feasible, for example, the zero flow.&lt;br /&gt;
&lt;br /&gt;
'''Proof:'''&lt;br /&gt;
Obvious.&lt;br /&gt;
&lt;br /&gt;
== Induction step ==&lt;br /&gt;
&lt;br /&gt;
'''Abstract view:'''&lt;br /&gt;
Choose the node &amp;lt;math&amp;gt;v_0&amp;lt;/math&amp;gt; through which the minimum amount of flow may go, and propagate this amount from &amp;lt;math&amp;gt;v_0&amp;lt;/math&amp;gt; forward to &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; and backward to &amp;lt;math&amp;gt;s&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
'''Implementation:'''&lt;br /&gt;
# For each node &amp;lt;math&amp;gt;v\in V\setminus\{s,t\}&amp;lt;/math&amp;gt;, let &amp;lt;math&amp;gt;T(v)&amp;lt;/math&amp;gt; denote the '''throughput''' of &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt;, which is defined by &amp;lt;math&amp;gt;T(v):=\min\left\{\sum_{w:(v,w)\in A}u(v,w),\sum_{w:(w,v)\in A}u(w,v)\right\}&amp;lt;/math&amp;gt;.&lt;br /&gt;
# Remove all nodes with zero throughput and all arcs incident to at least one of these nodes.&lt;br /&gt;
# Let &amp;lt;math&amp;gt;v_0\in V\setminus\{s,t\}&amp;lt;/math&amp;gt; be a node with minimum throughput &amp;lt;math&amp;gt;T(v_0)&amp;lt;/math&amp;gt;.&lt;br /&gt;
# Set &amp;lt;math&amp;gt;F(v_0):=T(v_0)&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;F(v):=0&amp;lt;/math&amp;gt; for all nodes &amp;lt;math&amp;gt;v\in V\setminus\{s,t,v_0\}&amp;lt;/math&amp;gt;.&lt;br /&gt;
# Run a modified [[Breadth-first search|BFS]] from &amp;lt;math&amp;gt;v_0&amp;lt;/math&amp;gt;, where for every processed arc &amp;lt;math&amp;gt;(v,w)\in A&amp;lt;/math&amp;gt;:&lt;br /&gt;
## Let &amp;lt;math&amp;gt;\Delta:=\min\{u(v,w),F(v)\}&amp;lt;/math&amp;gt;.&lt;br /&gt;
## Set the flow over &amp;lt;math&amp;gt;(v,w)&amp;lt;/math&amp;gt; to &amp;lt;math&amp;gt;\Delta&amp;lt;/math&amp;gt;.&lt;br /&gt;
## Increase &amp;lt;math&amp;gt;F(w)&amp;lt;/math&amp;gt; by &amp;lt;math&amp;gt;\Delta&amp;lt;/math&amp;gt;.&lt;br /&gt;
## Decrease &amp;lt;math&amp;gt;F(v)&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;T(v)&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;u(v,w)&amp;lt;/math&amp;gt; by &amp;lt;math&amp;gt;\Delta&amp;lt;/math&amp;gt;.&lt;br /&gt;
## If &amp;lt;math&amp;gt;u(v,w)=0&amp;lt;/math&amp;gt;, remove &amp;lt;math&amp;gt;(v,w)&amp;lt;/math&amp;gt; from &amp;lt;math&amp;gt;G&amp;lt;/math&amp;gt;.&lt;br /&gt;
## If &amp;lt;math&amp;gt;F(v)=0&amp;lt;/math&amp;gt;, finish this iteration.&lt;br /&gt;
# Run steps 4+5 from &amp;lt;math&amp;gt;v_0&amp;lt;/math&amp;gt; on the [[Basic graph definitions#Transpose of a graph|transpose]] of &amp;lt;math&amp;gt;G&amp;lt;/math&amp;gt; with the value of &amp;lt;math&amp;gt;T(v_0)&amp;lt;/math&amp;gt; as computed in step 1 (all removals apply to &amp;lt;math&amp;gt;G&amp;lt;/math&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
'''Proof:'''&lt;br /&gt;
Consider step 5 (step 6 is analogous). The specific choice of &amp;lt;math&amp;gt;v_0&amp;lt;/math&amp;gt; ensures &amp;lt;math&amp;gt;F(v)\leq T(v_0)\leq T(v)&amp;lt;/math&amp;gt; for each node &amp;lt;math&amp;gt;v\in V&amp;lt;/math&amp;gt; at any time. Therefore, all flow that has arrived at &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; can be moved forward along the arcs leaving &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt;; in other words, &amp;lt;math&amp;gt;F(v)=0&amp;lt;/math&amp;gt; holds when &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; is finished. Since &amp;lt;math&amp;gt;G&amp;lt;/math&amp;gt; is an [[Basic graph definitions#Cycles|&amp;lt;math&amp;gt;(s,t)&amp;lt;/math&amp;gt;-graph]], the result is a feasible &amp;lt;math&amp;gt;(s,t)&amp;lt;/math&amp;gt;-flow, and the flow value has increased by the initial value of &amp;lt;math&amp;gt;T(v_0)&amp;lt;/math&amp;gt;. In particular, &amp;lt;math&amp;gt;T(v_0)=0&amp;lt;/math&amp;gt; at the end, so at least &amp;lt;math&amp;gt;v_0&amp;lt;/math&amp;gt; will be removed in this iteration.&lt;br /&gt;
&lt;br /&gt;
== Correctness ==&lt;br /&gt;
&lt;br /&gt;
When a node is removed, either all of its outgoing arcs or all of its ingoing arcs are saturated. Therefore, an arc is removed only if the arc itself is saturated or all of its immediate predecessors or all of its immediate successors are saturated. In particular, an arc is removed only if it is no longer on any flow-augmenting path. Hence, when the break condition is fulfilled, the flow is blocking. Termination follows from the complexity considerations below.&lt;br /&gt;
&lt;br /&gt;
== Complexity ==&lt;br /&gt;
&lt;br /&gt;
'''Statement:'''&lt;br /&gt;
The asymptotic complexity is in &amp;lt;math&amp;gt;\mathcal{O}(n^2)&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;n=|V|&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
'''Proof:'''&lt;br /&gt;
Due to the variant, the number of iterations is linear in &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt;. When an arc is saturated, it is removed in step 5.5 (resp., 6.5), so the total number of saturating increases of flow values of arcs over all iterations of the main loop is linear in the number of arcs, which is in &amp;lt;math&amp;gt;\mathcal{O}(n^2)&amp;lt;/math&amp;gt;. In each execution of step 5 (resp., step 6), the flow on at most one outgoing (resp., incoming) arc of each node is increased without saturation, so the number of ''non''-saturating flow value settings is linear in the number of nodes in each iteration.&lt;br /&gt;
&lt;br /&gt;
== Remarks ==&lt;br /&gt;
&lt;br /&gt;
# The algorithm is named after three Indian researchers, V. M. Malhotra, M. Pramodh Kumar, and S. N. Maheshwari.&lt;br /&gt;
# Of course, the nodes and arcs need not really be removed from the graph. However, &amp;quot;removed&amp;quot; nodes and arcs must be hidden from the algorithm to ensure the asymptotic complexity; a Boolean label &amp;quot;is removed&amp;quot; does not suffice for that.&lt;br /&gt;
# This application of [[Breadth-first search|BFS]] illustrates one of the remarks on [[Graph traversal#Remarks|graph traversal]]: it is reasonable to implement graph-traversal algorithms as iterators, because the modification can then simply be added to the loop that runs the iterator.&lt;/div&gt;</summary>
		<author><name>Weihe</name></author>
	</entry>
	<entry>
		<id>https://wiki.algo.informatik.tu-darmstadt.de/index.php?title=Three_indians%27_algorithm&amp;diff=3870</id>
		<title>Three indians' algorithm</title>
		<link rel="alternate" type="text/html" href="https://wiki.algo.informatik.tu-darmstadt.de/index.php?title=Three_indians%27_algorithm&amp;diff=3870"/>
		<updated>2017-03-16T11:24:56Z</updated>

		<summary type="html">&lt;p&gt;Weihe: /* Induction step */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== General information ==&lt;br /&gt;
&lt;br /&gt;
'''Algorithmic problem:''' [[Blocking flow]].&lt;br /&gt;
&lt;br /&gt;
'''Type of algorithm:''' loop.&lt;br /&gt;
&lt;br /&gt;
== Abstract view ==&lt;br /&gt;
&lt;br /&gt;
'''Invariant:'''&lt;br /&gt;
The current flow is feasible.&lt;br /&gt;
&lt;br /&gt;
'''Variant:'''&lt;br /&gt;
The number of nodes strictly decreases.&lt;br /&gt;
&lt;br /&gt;
'''Break condition:'''&lt;br /&gt;
There is no more [[Basic graph definitions#Paths|ordinary &amp;lt;math&amp;gt;(s,t)&amp;lt;/math&amp;gt;-path]] in &amp;lt;math&amp;gt;G&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
== Induction basis ==&lt;br /&gt;
&lt;br /&gt;
'''Abstract view:'''&lt;br /&gt;
The flow is initialized to be feasible, for example, the zero flow.&lt;br /&gt;
&lt;br /&gt;
'''Proof:'''&lt;br /&gt;
Obvious.&lt;br /&gt;
&lt;br /&gt;
== Induction step ==&lt;br /&gt;
&lt;br /&gt;
'''Abstract view:'''&lt;br /&gt;
Choose the node &amp;lt;math&amp;gt;v_0&amp;lt;/math&amp;gt; through which the minimum amount of flow may go, and propagate this amount from &amp;lt;math&amp;gt;v_0&amp;lt;/math&amp;gt; forward to &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; and backward to &amp;lt;math&amp;gt;s&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
'''Implementation:'''&lt;br /&gt;
# For each node &amp;lt;math&amp;gt;v\in V\setminus\{s,t\}&amp;lt;/math&amp;gt;, let &amp;lt;math&amp;gt;T(v)&amp;lt;/math&amp;gt; denote the '''throughput''' of &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt;, which is defined by &amp;lt;math&amp;gt;T(v):=\min\left\{\sum_{w:(v,w)\in A}u(v,w),\sum_{w:(w,v)\in A}u(w,v)\right\}&amp;lt;/math&amp;gt;.&lt;br /&gt;
# Remove all nodes with zero throughput and all arcs incident to at least one of these nodes.&lt;br /&gt;
# Let &amp;lt;math&amp;gt;v_0\in V\setminus\{s,t\}&amp;lt;/math&amp;gt; be a node with minimum throughput &amp;lt;math&amp;gt;T(v_0)&amp;lt;/math&amp;gt;.&lt;br /&gt;
# Set &amp;lt;math&amp;gt;F(v_0):=T(v_0)&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;F(v):=0&amp;lt;/math&amp;gt; for all nodes &amp;lt;math&amp;gt;v\in V\setminus\{s,t,v_0\}&amp;lt;/math&amp;gt;.&lt;br /&gt;
# Run a modified [[Breadth-first search|BFS]] from &amp;lt;math&amp;gt;v_0&amp;lt;/math&amp;gt;, where for every processed arc &amp;lt;math&amp;gt;(v,w)\in A&amp;lt;/math&amp;gt;:&lt;br /&gt;
## Let &amp;lt;math&amp;gt;\Delta:=\min\{u(v,w),F(v)\}&amp;lt;/math&amp;gt;.&lt;br /&gt;
## Set the flow over &amp;lt;math&amp;gt;(v,w)&amp;lt;/math&amp;gt; to &amp;lt;math&amp;gt;\Delta&amp;lt;/math&amp;gt;.&lt;br /&gt;
## Increase &amp;lt;math&amp;gt;F(w)&amp;lt;/math&amp;gt; by &amp;lt;math&amp;gt;\Delta&amp;lt;/math&amp;gt;.&lt;br /&gt;
## Decrease &amp;lt;math&amp;gt;F(v)&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;T(v)&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;u(v,w)&amp;lt;/math&amp;gt; by &amp;lt;math&amp;gt;\Delta&amp;lt;/math&amp;gt;.&lt;br /&gt;
## If &amp;lt;math&amp;gt;u(v,w)=0&amp;lt;/math&amp;gt;, remove &amp;lt;math&amp;gt;(v,w)&amp;lt;/math&amp;gt; from &amp;lt;math&amp;gt;G&amp;lt;/math&amp;gt;.&lt;br /&gt;
## If &amp;lt;math&amp;gt;F(v)=0&amp;lt;/math&amp;gt;, finish this iteration.&lt;br /&gt;
# Run steps 4+5 from &amp;lt;math&amp;gt;v_0&amp;lt;/math&amp;gt; on the [[Basic graph definitions#Transpose of a graph|transpose]] of &amp;lt;math&amp;gt;G&amp;lt;/math&amp;gt; with the value of &amp;lt;math&amp;gt;T(v_0)&amp;lt;/math&amp;gt; as computed in step 1.&lt;br /&gt;
&lt;br /&gt;
'''Proof:'''&lt;br /&gt;
Consider step 5 (step 6 is analogous). The specific choice of &amp;lt;math&amp;gt;v_0&amp;lt;/math&amp;gt; ensures &amp;lt;math&amp;gt;F(v)\leq T(v_0)\leq T(v)&amp;lt;/math&amp;gt; for each node &amp;lt;math&amp;gt;v\in V&amp;lt;/math&amp;gt; at any time. Therefore, all flow that has arrived at &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; can be moved forward along the arcs leaving &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt;; in other words, &amp;lt;math&amp;gt;F(v)=0&amp;lt;/math&amp;gt; holds when &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; is finished. Since &amp;lt;math&amp;gt;G&amp;lt;/math&amp;gt; is an [[Basic graph definitions#Cycles|&amp;lt;math&amp;gt;(s,t)&amp;lt;/math&amp;gt;-graph]], the result is a feasible &amp;lt;math&amp;gt;(s,t)&amp;lt;/math&amp;gt;-flow, and the flow value has increased by the initial value of &amp;lt;math&amp;gt;T(v_0)&amp;lt;/math&amp;gt;. In particular, &amp;lt;math&amp;gt;T(v_0)=0&amp;lt;/math&amp;gt; at the end, so at least &amp;lt;math&amp;gt;v_0&amp;lt;/math&amp;gt; will be removed in this iteration.&lt;br /&gt;
&lt;br /&gt;
== Correctness ==&lt;br /&gt;
&lt;br /&gt;
When a node is removed, either all of its outgoing arcs or all of its ingoing arcs are saturated. Therefore, an arc is removed only if the arc itself is saturated or all of its immediate predecessors or all of its immediate successors are saturated. In particular, an arc is removed only if it is no longer on any flow-augmenting path. Hence, when the break condition is fulfilled, the flow is blocking. Termination follows from the complexity considerations below.&lt;br /&gt;
&lt;br /&gt;
== Complexity ==&lt;br /&gt;
&lt;br /&gt;
'''Statement:'''&lt;br /&gt;
The asymptotic complexity is in &amp;lt;math&amp;gt;\mathcal{O}(n^2)&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;n=|V|&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
'''Proof:'''&lt;br /&gt;
Due to the variant, the number of iterations is linear in &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt;. When an arc is saturated, it is removed in step 5.5 (resp., 6.5), so the total number of saturating increases of flow values of arcs over all iterations of the main loop is linear in the number of arcs, which is in &amp;lt;math&amp;gt;\mathcal{O}(n^2)&amp;lt;/math&amp;gt;. In each execution of step 5 (resp., step 6), the flow on at most one outgoing (resp., incoming) arc of each node is increased without saturation, so the number of ''non''-saturating flow value settings is linear in the number of nodes in each iteration.&lt;br /&gt;
&lt;br /&gt;
== Remarks ==&lt;br /&gt;
&lt;br /&gt;
# The algorithm is named after three Indian researchers, V. M. Malhotra, M. Pramodh Kumar, and S. N. Maheshwari.&lt;br /&gt;
# Of course, the nodes and arcs need not really be removed from the graph. However, &amp;quot;removed&amp;quot; nodes and arcs must be hidden from the algorithm to ensure the asymptotic complexity; a Boolean label &amp;quot;is removed&amp;quot; does not suffice for that.&lt;br /&gt;
# This application of [[Breadth-first search|BFS]] illustrates one of the remarks on [[Graph traversal#Remarks|graph traversal]]: it is reasonable to implement graph-traversal algorithms as iterators, because the modification can then simply be added to the loop that runs the iterator.&lt;/div&gt;</summary>
		<author><name>Weihe</name></author>
	</entry>
	<entry>
		<id>https://wiki.algo.informatik.tu-darmstadt.de/index.php?title=Three_indians%27_algorithm&amp;diff=3869</id>
		<title>Three indians' algorithm</title>
		<link rel="alternate" type="text/html" href="https://wiki.algo.informatik.tu-darmstadt.de/index.php?title=Three_indians%27_algorithm&amp;diff=3869"/>
		<updated>2017-03-16T11:24:01Z</updated>

		<summary type="html">&lt;p&gt;Weihe: /* Induction step */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== General information ==&lt;br /&gt;
&lt;br /&gt;
'''Algorithmic problem:''' [[Blocking flow]].&lt;br /&gt;
&lt;br /&gt;
'''Type of algorithm:''' loop.&lt;br /&gt;
&lt;br /&gt;
== Abstract view ==&lt;br /&gt;
&lt;br /&gt;
'''Invariant:'''&lt;br /&gt;
The current flow is feasible.&lt;br /&gt;
&lt;br /&gt;
'''Variant:'''&lt;br /&gt;
The number of nodes strictly decreases.&lt;br /&gt;
&lt;br /&gt;
'''Break condition:'''&lt;br /&gt;
There is no more [[Basic graph definitions#Paths|ordinary &amp;lt;math&amp;gt;(s,t)&amp;lt;/math&amp;gt;-path]] in &amp;lt;math&amp;gt;G&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
== Induction basis ==&lt;br /&gt;
&lt;br /&gt;
'''Abstract view:'''&lt;br /&gt;
The flow is initialized to be feasible, for example, the zero flow.&lt;br /&gt;
&lt;br /&gt;
'''Proof:'''&lt;br /&gt;
Obvious.&lt;br /&gt;
&lt;br /&gt;
== Induction step ==&lt;br /&gt;
&lt;br /&gt;
'''Abstract view:'''&lt;br /&gt;
Choose the node &amp;lt;math&amp;gt;v_0&amp;lt;/math&amp;gt; through which the minimum amount of flow may go, and propagate this amount from &amp;lt;math&amp;gt;v_0&amp;lt;/math&amp;gt; forward to &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; and backward to &amp;lt;math&amp;gt;s&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
'''Implementation:'''&lt;br /&gt;
# For each node &amp;lt;math&amp;gt;v\in V\setminus\{s,t\}&amp;lt;/math&amp;gt;, let &amp;lt;math&amp;gt;T(v)&amp;lt;/math&amp;gt; denote the '''throughput''' of &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt;, which is defined by &amp;lt;math&amp;gt;T(v):=\min\left\{\sum_{w:(v,w)\in A}u(v,w),\sum_{w:(w,v)\in A}u(w,v)\right\}&amp;lt;/math&amp;gt;.&lt;br /&gt;
# Remove all nodes with zero throughput and all arcs incident to at least one of these nodes.&lt;br /&gt;
# Let &amp;lt;math&amp;gt;v_0\in V\setminus\{s,t\}&amp;lt;/math&amp;gt; be a node with minimum throughput &amp;lt;math&amp;gt;T(v_0)&amp;lt;/math&amp;gt;.&lt;br /&gt;
# Set &amp;lt;math&amp;gt;F(v_0):=T(v_0)&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;F(v):=0&amp;lt;/math&amp;gt; for all nodes &amp;lt;math&amp;gt;v\in V\setminus\{s,t,v_0\}&amp;lt;/math&amp;gt;.&lt;br /&gt;
# Run a modified [[Breadth-first search|BFS]] from &amp;lt;math&amp;gt;v_0&amp;lt;/math&amp;gt;, where for every processed arc &amp;lt;math&amp;gt;(v,w)\in A&amp;lt;/math&amp;gt;:&lt;br /&gt;
## Let &amp;lt;math&amp;gt;\Delta:=\min\{u(v,w),F(v)\}&amp;lt;/math&amp;gt;.&lt;br /&gt;
## Set the flow over &amp;lt;math&amp;gt;(v,w)&amp;lt;/math&amp;gt; to &amp;lt;math&amp;gt;\Delta&amp;lt;/math&amp;gt;.&lt;br /&gt;
## Increase &amp;lt;math&amp;gt;F(w)&amp;lt;/math&amp;gt; by &amp;lt;math&amp;gt;\Delta&amp;lt;/math&amp;gt;.&lt;br /&gt;
## Decrease &amp;lt;math&amp;gt;F(v)&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;T(v)&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;u(v,w)&amp;lt;/math&amp;gt; by &amp;lt;math&amp;gt;\Delta&amp;lt;/math&amp;gt;.&lt;br /&gt;
## If &amp;lt;math&amp;gt;u(v,w)=0&amp;lt;/math&amp;gt;, remove &amp;lt;math&amp;gt;(v,w)&amp;lt;/math&amp;gt; from &amp;lt;math&amp;gt;G&amp;lt;/math&amp;gt;.&lt;br /&gt;
## If &amp;lt;math&amp;gt;F(v)=0&amp;lt;/math&amp;gt;, finish this iteration.&lt;br /&gt;
# Run steps 4+5 from &amp;lt;math&amp;gt;v_0&amp;lt;/math&amp;gt; on the [[Basic graph definitions#Transpose of a graph|transpose]] of &amp;lt;math&amp;gt;G&amp;lt;/math&amp;gt; with the value of &amp;lt;math&amp;gt;T(v_0)&amp;lt;/math&amp;gt; as computed in step 1 (all removals apply to &amp;lt;math&amp;gt;G&amp;lt;/math&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
'''Proof:'''&lt;br /&gt;
Consider step 5 (step 6 is analogous). The specific choice of &amp;lt;math&amp;gt;v_0&amp;lt;/math&amp;gt; ensures &amp;lt;math&amp;gt;F(v)\leq T(v_0)\leq T(v)&amp;lt;/math&amp;gt; for each node &amp;lt;math&amp;gt;v\in V&amp;lt;/math&amp;gt; at any time. Therefore, all flow that has arrived at &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; can be moved forward along the arcs leaving &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt;; in other words, &amp;lt;math&amp;gt;F(v)=0&amp;lt;/math&amp;gt; holds when &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; is finished. Since &amp;lt;math&amp;gt;G&amp;lt;/math&amp;gt; is an [[Basic graph definitions#Cycles|&amp;lt;math&amp;gt;(s,t)&amp;lt;/math&amp;gt;-graph]], the result is a feasible &amp;lt;math&amp;gt;(s,t)&amp;lt;/math&amp;gt;-flow, and the flow value has increased by the initial value of &amp;lt;math&amp;gt;T(v_0)&amp;lt;/math&amp;gt;. In particular, &amp;lt;math&amp;gt;T(v_0)=0&amp;lt;/math&amp;gt; at the end, so at least &amp;lt;math&amp;gt;v_0&amp;lt;/math&amp;gt; will be removed in this iteration.&lt;br /&gt;
&lt;br /&gt;
== Correctness ==&lt;br /&gt;
&lt;br /&gt;
When a node is removed, either all of its outgoing arcs or all of its ingoing arcs are saturated. Therefore, an arc is removed only if the arc itself is saturated or all of its immediate predecessors or all of its immediate successors are saturated. In particular, an arc is removed only if it is no longer on any flow-augmenting path. Hence, when the break condition is fulfilled, the flow is blocking. Termination follows from the complexity considerations below.&lt;br /&gt;
&lt;br /&gt;
== Complexity ==&lt;br /&gt;
&lt;br /&gt;
'''Statement:'''&lt;br /&gt;
The asymptotic complexity is in &amp;lt;math&amp;gt;\mathcal{O}(n^2)&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;n=|V|&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
'''Proof:'''&lt;br /&gt;
Due to the variant, the number of iterations is linear in &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt;. When an arc is saturated, it is removed in step 5.5 (resp., 6.5), so the total number of saturating increases of flow values of arcs over all iterations of the main loop is linear in the number of arcs, which is in &amp;lt;math&amp;gt;\mathcal{O}(n^2)&amp;lt;/math&amp;gt;. In each execution of step 5 (resp., step 6), the flow on at most one outgoing (resp., incoming) arc of each node is increased without saturation, so the number of ''non''-saturating flow value settings is linear in the number of nodes in each iteration.&lt;br /&gt;
&lt;br /&gt;
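The selection of &amp;lt;math&amp;gt;v_0&amp;lt;/math&amp;gt; by minimum throughput potential can be sketched as follows. This is a minimal Python sketch, not the article's code; the graph representation (arc list plus capacity and flow maps) and all identifiers are illustrative assumptions:&lt;br /&gt;

```python
def min_potential_node(nodes, arcs, cap, flow, s, t):
    """Pick the node v0 with minimum throughput potential T(v).

    T(v) = min(total residual capacity into v, out of v);
    s and t are special: only their outgoing resp. incoming
    arcs count. arcs is a list of (v, w) pairs; cap and flow
    map each arc to a number (illustrative layout).
    """
    t_in = {v: 0 for v in nodes}
    t_out = {v: 0 for v in nodes}
    for (v, w) in arcs:
        residual = cap[(v, w)] - flow[(v, w)]
        t_out[v] += residual
        t_in[w] += residual

    def potential(v):
        if v == s:
            return t_out[v]
        if v == t:
            return t_in[v]
        return min(t_in[v], t_out[v])

    return min(nodes, key=potential)
```

&lt;br /&gt;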
== Remarks ==&lt;br /&gt;
&lt;br /&gt;
# The algorithm is named after three Indian researchers, V. M. Malhotra, M. Pramodh Kumar, and S. N. Maheshwari.&lt;br /&gt;
# Of course, the nodes and arcs need not really be removed from the graph. However, &amp;quot;removed&amp;quot; nodes and arcs must be hidden from the algorithm to ensure the asymptotic complexity; a Boolean label &amp;quot;is removed&amp;quot; does not suffice for that.&lt;br /&gt;
# This application of [[Breadth-first search|BFS]] is an example of one of the remarks on [[Graph traversal#Remarks|graph traversal]]: It is reasonable to implement graph-traversal algorithms as iterators. In fact, the modification may then simply be added to the loop that runs the iterator.&lt;/div&gt;</summary>
		<author><name>Weihe</name></author>
	</entry>
	<entry>
		<id>https://wiki.algo.informatik.tu-darmstadt.de/index.php?title=Preflow-push&amp;diff=3868</id>
		<title>Preflow-push</title>
		<link rel="alternate" type="text/html" href="https://wiki.algo.informatik.tu-darmstadt.de/index.php?title=Preflow-push&amp;diff=3868"/>
		<updated>2017-03-16T11:19:02Z</updated>

		<summary type="html">&lt;p&gt;Weihe: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Abstract view ==&lt;br /&gt;
&lt;br /&gt;
'''Also known as:''' ''push-relabel'' algorithm or ''Goldberg-Tarjan'' algorithm&lt;br /&gt;
&lt;br /&gt;
'''Algorithmic problem:'''&lt;br /&gt;
[[Max-Flow Problems#Standard version|max-flow problem (standard version)]]&lt;br /&gt;
&lt;br /&gt;
'''Type of algorithm:'''&lt;br /&gt;
loop.&lt;br /&gt;
&lt;br /&gt;
'''Auxiliary data:'''&lt;br /&gt;
# A nonnegative integral value &amp;lt;math&amp;gt;d(v)&amp;lt;/math&amp;gt; for each node &amp;lt;math&amp;gt;v\in V&amp;lt;/math&amp;gt;.&lt;br /&gt;
# Each node &amp;lt;math&amp;gt;v\in V\setminus\{t\}&amp;lt;/math&amp;gt; has a '''current arc''', which may be implemented as an iterator on the list of outgoing residual arcs of &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt;.&lt;br /&gt;
# The [[Basic flow definitions#Preflow|excess]] &amp;lt;math&amp;gt;e_f(v)&amp;lt;/math&amp;gt; of a node &amp;lt;math&amp;gt;v\in V\setminus\{s,t\}&amp;lt;/math&amp;gt; with respect to the current [[Basic flow definitions#Preflow|preflow]].&lt;br /&gt;
# A (dynamically changing) [[Sets and sequences|set]] &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; of nodes.&lt;br /&gt;
&lt;br /&gt;
'''Invariant:'''&lt;br /&gt;
Before and after each iteration:&lt;br /&gt;
# For each arc &amp;lt;math&amp;gt;a\in A&amp;lt;/math&amp;gt;, we have &amp;lt;math&amp;gt;0\leq f(a)\leq u(a)&amp;lt;/math&amp;gt;. If all upper bounds are integral, all &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;-values are integral, too.&lt;br /&gt;
# For each node &amp;lt;math&amp;gt;v\in V\setminus\{s,t\}&amp;lt;/math&amp;gt;, we have &amp;lt;math&amp;gt;e_f(v)\geq 0&amp;lt;/math&amp;gt;. In other words, &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt; is a [[Basic flow definitions#Preflow|preflow]].&lt;br /&gt;
# The node labels &amp;lt;math&amp;gt;d&amp;lt;/math&amp;gt; form a [[Basic flow definitions#Valid distance labeling|valid distance labeling]], and it is &amp;lt;math&amp;gt;d(s)=n:=|V|&amp;lt;/math&amp;gt;.&lt;br /&gt;
# The currently [[Basic flow definitions#Preflow|active nodes]] are stored in &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;.&lt;br /&gt;
# The current arc of a node is an outgoing arc of that node in the residual graph. In the list of all such arcs, no admissible arc precedes the current arc.&lt;br /&gt;
&lt;br /&gt;
'''Variant:'''&lt;br /&gt;
No label &amp;lt;math&amp;gt;d(\cdot)&amp;lt;/math&amp;gt; is ever decreased. In each iteration, one of the following three actions will take place:&lt;br /&gt;
# The label &amp;lt;math&amp;gt;d(v)&amp;lt;/math&amp;gt; of at least one node &amp;lt;math&amp;gt;v\in V\setminus\{s,t\}&amp;lt;/math&amp;gt; is increased.&lt;br /&gt;
# A saturating push is performed.&lt;br /&gt;
# The value of &amp;lt;math&amp;gt;D:=\sum_{v\in V\setminus\{s,t\}\atop e_f(v)&amp;gt;0}d(v)&amp;lt;/math&amp;gt; decreases.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Break condition:'''&lt;br /&gt;
&amp;lt;math&amp;gt;S=\emptyset&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
== Induction basis ==&lt;br /&gt;
&lt;br /&gt;
'''Abstract view:'''&lt;br /&gt;
# For all arcs &amp;lt;math&amp;gt;a\in A&amp;lt;/math&amp;gt;, set &amp;lt;math&amp;gt;f(a):=0&amp;lt;/math&amp;gt;.&lt;br /&gt;
# For each arc &amp;lt;math&amp;gt;a=(s,v)\in A&amp;lt;/math&amp;gt;, overwrite this value by &amp;lt;math&amp;gt;f(a):=u(a)&amp;lt;/math&amp;gt; and put &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; into &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;.&lt;br /&gt;
# Compute a [[Basic flow definitions#Valid distance labeling|valid distance labeling]] &amp;lt;math&amp;gt;d&amp;lt;/math&amp;gt; for &amp;lt;math&amp;gt;V\setminus\{s\}&amp;lt;/math&amp;gt; with respect to &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;, for example, the true distances from all nodes to &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; in the residual network of &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;.&lt;br /&gt;
# Set &amp;lt;math&amp;gt;d(s):=n&amp;lt;/math&amp;gt;.&lt;br /&gt;
# For all &amp;lt;math&amp;gt;v\in V\setminus\{s,t\}&amp;lt;/math&amp;gt;, reset the current arc of &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; so as to point to the first arc in the list of outgoing arcs of &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
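The induction basis can be sketched in Python as follows. The dictionary-based graph layout and all identifiers are illustrative assumptions, not the article's notation; the backward BFS computes the true residual distances to the sink, as suggested in step 3:&lt;br /&gt;

```python
from collections import deque

def init_preflow(nodes, cap, s, t):
    """Initialize preflow, excesses, distance labels, and the
    active set S, following steps 1-5 of the induction basis.
    cap maps arcs (v, w) to capacities (illustrative layout)."""
    f = {a: 0 for a in cap}                      # step 1
    e = {v: 0 for v in nodes}
    S = set()
    for (v, w) in cap:
        if v == s:                               # step 2: saturate s's arcs
            f[(v, w)] = cap[(v, w)]
            e[w] += cap[(v, w)]
            if w != t:
                S.add(w)
    # step 3: backward BFS from t over residual arcs (a sketch;
    # scanning all arcs per node is slow but keeps the code short)
    d = {t: 0}
    queue = deque([t])
    while queue:
        w = queue.popleft()
        for (u, x) in cap:
            # forward residual arc u -> w
            if x == w and cap[(u, x)] - f[(u, x)] != 0 and u not in d:
                d[u] = d[w] + 1
                queue.append(u)
            # backward residual arc x -> w (flow on (w, x))
            if u == w and f[(u, x)] != 0 and x not in d:
                d[x] = d[w] + 1
                queue.append(x)
    d[s] = len(nodes)                            # step 4
    return f, e, d, S
```

&lt;br /&gt;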
'''Proof:'''&lt;br /&gt;
For the [[Basic graph definitions#Subgraphs|subgraph induced]] by &amp;lt;math&amp;gt;V\setminus\{s\}&amp;lt;/math&amp;gt;, the arguments in the [[Ahuja-Orlin#Correctness|correctness proof]] for the [[Ahuja-Orlin]] algorithm prove that the &amp;lt;math&amp;gt;d&amp;lt;/math&amp;gt;-labels form a valid distance labeling here as well. For &amp;lt;math&amp;gt;s&amp;lt;/math&amp;gt;, nothing needs to be shown because all of its outgoing arcs are saturated.&lt;br /&gt;
&lt;br /&gt;
== Induction step ==&lt;br /&gt;
&lt;br /&gt;
'''Abstract view:'''&lt;br /&gt;
# Choose an [[Basic flow definitions#Preflow|active node]] &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; from &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;.&lt;br /&gt;
# While the current arc of &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; is not void and not [[Basic flow definitions#Valid distance labeling|admissible]] either, move the current arc one step forward.&lt;br /&gt;
# If the current arc of &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; is ''not'' void now but an (admissible) outgoing arc &amp;lt;math&amp;gt;(v,w)&amp;lt;/math&amp;gt;, say:&lt;br /&gt;
## If &amp;lt;math&amp;gt;w\neq s&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;e_f(w)=0&amp;lt;/math&amp;gt;, insert &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt; in &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;.&lt;br /&gt;
## Increase the flow over &amp;lt;math&amp;gt;(v,w)&amp;lt;/math&amp;gt; by the minimum of &amp;lt;math&amp;gt;e_f(v)&amp;lt;/math&amp;gt; and the residual capacity of &amp;lt;math&amp;gt;(v,w)&amp;lt;/math&amp;gt;.&lt;br /&gt;
## Increase &amp;lt;math&amp;gt;e_f(w)&amp;lt;/math&amp;gt; by that value and decrease &amp;lt;math&amp;gt;e_f(v)&amp;lt;/math&amp;gt; by the same value.&lt;br /&gt;
## If &amp;lt;math&amp;gt;e_f(v)=0&amp;lt;/math&amp;gt; now, extract &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; from &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;.&lt;br /&gt;
# Otherwise:&lt;br /&gt;
## Let &amp;lt;math&amp;gt;d_\min&amp;lt;/math&amp;gt; denote the minimal label &amp;lt;math&amp;gt;d(w)&amp;lt;/math&amp;gt; over all arcs &amp;lt;math&amp;gt;(v,w)&amp;lt;/math&amp;gt; leaving &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; in the residual network.&lt;br /&gt;
## Set &amp;lt;math&amp;gt;d(v):=d_\min+1&amp;lt;/math&amp;gt;.&lt;br /&gt;
## Reset the current arc of &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; so as to point to the beginning of the list of outgoing arcs of &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
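One iteration of the main loop (push or relabel) may be sketched as follows. This is a self-contained Python sketch under assumed data structures: `out_res(v)` is a hypothetical helper listing the residual arcs leaving `v` as triples `(w, arc, forward)`, and `current[v]` is the current-arc position; none of these names come from the article. Unlike the literal step 3.1, the sketch also keeps `t` out of `S`, since `t` is never active:&lt;br /&gt;

```python
def push_relabel_step(v, f, e, d, S, cap, current, out_res, s, t):
    """One iteration of the main loop for an active node v:
    advance the current arc past non-admissible arcs (step 2),
    then either push over the admissible arc (step 3) or relabel
    v and reset its current arc (step 4). Names illustrative."""
    def residual(arc, forward):
        # Forward residual capacity, or flow for a backward arc.
        return cap[arc] - f[arc] if forward else f[arc]

    arcs = out_res(v)
    # Step 2: move forward while the current arc is not admissible.
    while current[v] != len(arcs):
        w, arc, forward = arcs[current[v]]
        if residual(arc, forward) != 0 and d[v] == d[w] + 1:
            break  # admissible arc found
        current[v] += 1

    if current[v] != len(arcs):
        # Step 3: push over the admissible arc (v, w).
        w, arc, forward = arcs[current[v]]
        if w != s and w != t and e.get(w, 0) == 0:
            S.add(w)                             # step 3.1
        delta = min(e[v], residual(arc, forward))
        f[arc] += delta if forward else -delta   # step 3.2
        e[w] = e.get(w, 0) + delta               # step 3.3
        e[v] -= delta
        if e[v] == 0:
            S.discard(v)                         # step 3.4
    else:
        # Step 4: relabel v conservatively, reset its current arc.
        d_min = min(d[w] for (w, arc, fw) in arcs
                    if residual(arc, fw) != 0)
        d[v] = d_min + 1
        current[v] = 0
```

&lt;br /&gt;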
'''Remark:'''&lt;br /&gt;
The preflow-push algorithm is also known as the '''push-relabel''' algorithm. The ''push'' operation is step 3; the ''relabel'' operation is step 4.&lt;br /&gt;
&lt;br /&gt;
'''Proof:'''&lt;br /&gt;
Points 1, 2, and 4 of the invariant and &amp;lt;math&amp;gt;d(s)=n&amp;lt;/math&amp;gt; are obviously fulfilled. The rest of point 3 of the invariant is affected by step 4 only, and the outgoing arcs of &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; are the only arcs where the distance labeling may become invalid. However, the extremely conservative increase of &amp;lt;math&amp;gt;d(v)&amp;lt;/math&amp;gt; ensures point 3 of the invariant.&lt;br /&gt;
&lt;br /&gt;
To prove the variant, consider a step in which neither any &amp;lt;math&amp;gt;d&amp;lt;/math&amp;gt;-value is increased nor a saturating push is performed. This means step 3.2 is applied, but the arc &amp;lt;math&amp;gt;(v,w)&amp;lt;/math&amp;gt; is not saturated by that. Potentially, &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt; becomes active. However, &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; definitely becomes inactive since the push step is non-saturating. Now the variant follows from the fact that &amp;lt;math&amp;gt;d(w)=d(v)-1&amp;lt;/math&amp;gt; for an admissible arc &amp;lt;math&amp;gt;(v,w)&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
It remains to show termination; this is proved by the following complexity considerations.&lt;br /&gt;
&lt;br /&gt;
== Complexity ==&lt;br /&gt;
&lt;br /&gt;
'''Statement:'''&lt;br /&gt;
The asymptotic complexity is in &amp;lt;math&amp;gt;\mathcal{O}(n^2m)&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;n=|V|&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;m=|A|&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
'''Proof:'''&lt;br /&gt;
First we show that the total number of relabel operations (step 4 of the main loop) is in &amp;lt;math&amp;gt;\mathcal{O}(n^2)&amp;lt;/math&amp;gt;. To see that, let &amp;lt;math&amp;gt;v\in V\setminus\{s,t\}&amp;lt;/math&amp;gt; be an active node between two iterations of the main loop. A straightforward induction over the number of push operations shows that there is at least one simple &amp;lt;math&amp;gt;(s,v)&amp;lt;/math&amp;gt;-path &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt; with positive flow on all arcs. The [[Basic graph definitions#Transpose of a graph|transpose]] of &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt; is [[Basic flow definitions#Flow-augmenting paths and saturated arcs|augmenting]]. Due to the validity of &amp;lt;math&amp;gt;d&amp;lt;/math&amp;gt; (induction hypothesis), &amp;lt;math&amp;gt;d(v)-d(s)=d(v)-n&amp;lt;/math&amp;gt; cannot be larger than the number of arcs on &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;, which is not larger than &amp;lt;math&amp;gt;n-1&amp;lt;/math&amp;gt;. Therefore, no node label can be larger than &amp;lt;math&amp;gt;2n-1&amp;lt;/math&amp;gt;. Since node labels are nonnegative and increase at least by one in each relabel operation, the claimed upper bound on the relabel operations follows.&lt;br /&gt;
&lt;br /&gt;
From this bound, we may immediately conclude that the current arc of a node is reset &amp;lt;math&amp;gt;\mathcal{O}(n)&amp;lt;/math&amp;gt; times, so the total number of forward steps of the current arcs of all nodes is in &amp;lt;math&amp;gt;\mathcal{O}(n^3)\subseteq\mathcal{O}(n^2m)&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The argument in the [[Ahuja-Orlin#Complexity|complexity analysis]] of the [[Ahuja-Orlin]] algorithm proving that the total number of ''saturating'' push operations is in &amp;lt;math&amp;gt;\mathcal{O}(nm)&amp;lt;/math&amp;gt; applies here as well.&lt;br /&gt;
&lt;br /&gt;
Finally, we consider the ''non-saturating'' push operations. First note that &amp;lt;math&amp;gt;D\geq 0&amp;lt;/math&amp;gt; before and after each iteration. The value of &amp;lt;math&amp;gt;D&amp;lt;/math&amp;gt; is increased in each relabel operation exactly by the amount by which the label of the current node is increased. Since node labels are never decreased and bounded from above by &amp;lt;math&amp;gt;2n-1&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;D&amp;lt;/math&amp;gt; increases by less than &amp;lt;math&amp;gt;2n^2&amp;lt;/math&amp;gt; in total over all relabel operations. On the other hand, a saturating push operation may increase &amp;lt;math&amp;gt;D&amp;lt;/math&amp;gt; by at most &amp;lt;math&amp;gt;2n-1&amp;lt;/math&amp;gt; (namely, in the case that &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt; was not active immediately before the push). In summary, the total sum of all values by which &amp;lt;math&amp;gt;D&amp;lt;/math&amp;gt; is increased throughout the algorithm is in &amp;lt;math&amp;gt;\mathcal{O}(n^2m)&amp;lt;/math&amp;gt;. Due to the variant, the value of &amp;lt;math&amp;gt;D&amp;lt;/math&amp;gt; is decreased by at least one in each non-saturating push operation. This proves the claim.&lt;br /&gt;
&lt;br /&gt;
== Heuristic speedup techniques ==&lt;br /&gt;
&lt;br /&gt;
# After &amp;lt;math&amp;gt;\Omega(n)&amp;lt;/math&amp;gt; iterations of the main loop, the &amp;lt;math&amp;gt;d&amp;lt;/math&amp;gt;-values are recomputed analogously to the induction basis: as the current distance of each node to &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; in the residual network. This recomputation happens seldom enough that the asymptotic complexity is not increased. In practice, this technique may save many unnecessary relabel steps.&lt;br /&gt;
# The main loop may be decomposed into two phases: First, as much flow as possible is sent into &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt;; second, all surplus flow that cannot reach &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; is sent back to &amp;lt;math&amp;gt;s&amp;lt;/math&amp;gt;. The first phase may be finished once there is no more path in the residual network from any active node to &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt;. A sufficient and easy-to-check condition for that is &amp;lt;math&amp;gt;d(v)\geq n&amp;lt;/math&amp;gt; for all active nodes &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt;. All nodes from which &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; is reachable may be safely disregarded in the second phase. For any other node, to save unnecessary relabel operations, the distance label may be safely increased to the minimum number of arcs in the residual network from this node back to &amp;lt;math&amp;gt;s&amp;lt;/math&amp;gt;.&lt;/div&gt;</summary>
		<author><name>Weihe</name></author>
	</entry>
	<entry>
		<id>https://wiki.algo.informatik.tu-darmstadt.de/index.php?title=B-tree:_remove&amp;diff=3867</id>
		<title>B-tree: remove</title>
		<link rel="alternate" type="text/html" href="https://wiki.algo.informatik.tu-darmstadt.de/index.php?title=B-tree:_remove&amp;diff=3867"/>
		<updated>2017-03-03T13:58:43Z</updated>

		<summary type="html">&lt;p&gt;Weihe: /* Abstract View */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Videos]]&lt;br /&gt;
{{#ev:youtube|https://www.youtube.com/watch?v=vbRZ8h6ROYc|500|right||frame}}&lt;br /&gt;
&lt;br /&gt;
[[Category:B-Tree]]&lt;br /&gt;
[[Category:Algorithm]]&lt;br /&gt;
== General Information ==&lt;br /&gt;
'''Algorithmic problem:''' [[Sorted sequence#Remove|Sorted sequence: remove]]&lt;br /&gt;
&lt;br /&gt;
'''Type of algorithm:''' loop&lt;br /&gt;
&lt;br /&gt;
'''Auxiliary data:'''&lt;br /&gt;
# Pointers &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;p'&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;p_1&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;p_2&amp;lt;/math&amp;gt; of type &amp;quot;pointer to B-tree node&amp;quot;.&lt;br /&gt;
# A Boolean variable '''''found''''', which is '''''false''''' in the beginning and set to '''''true''''' once the key &amp;lt;math&amp;gt;K&amp;lt;/math&amp;gt; to be removed is seen.&lt;br /&gt;
&lt;br /&gt;
== Abstract View ==&lt;br /&gt;
&lt;br /&gt;
'''Invariant:'''&lt;br /&gt;
# &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt; points to a node of the B-tree.&lt;br /&gt;
# If &amp;lt;math&amp;gt;found = false&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;K&amp;lt;/math&amp;gt; is in the [[Directed Tree#Ranges of Search Tree Nodes|range]] of the node to which &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt; points.&lt;br /&gt;
# It is &amp;lt;math&amp;gt;found = true&amp;lt;/math&amp;gt; if, and only if, &amp;lt;math&amp;gt;K&amp;lt;/math&amp;gt; was contained in the current node of at least one previous iteration.&lt;br /&gt;
# If &amp;lt;math&amp;gt;found = true&amp;lt;/math&amp;gt;&lt;br /&gt;
## &amp;lt;math&amp;gt;p'&amp;lt;/math&amp;gt; points to a node where &amp;lt;math&amp;gt;K&amp;lt;/math&amp;gt; is currently stored.&lt;br /&gt;
## The [[Directed Tree#Order of Tree Nodes|immediate predecessor]] of &amp;lt;math&amp;gt;K&amp;lt;/math&amp;gt; is in the [[Directed Tree#Ranges of Search Tree Nodes|range]] of the node to which &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt; points.&lt;br /&gt;
# If &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt; points to the root, at least two keys are currently stored in the root; otherwise, at least &amp;lt;math&amp;gt;M&amp;lt;/math&amp;gt; keys are currently stored in the node to which &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt; points.&lt;br /&gt;
&lt;br /&gt;
'''Variant:''' &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt; is redirected to a node one level deeper.&lt;br /&gt;
&lt;br /&gt;
'''Break condition:''' &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt; points to a leaf.&lt;br /&gt;
&lt;br /&gt;
'''Remark:''' For example, the height of the subtree rooted at the node to which &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt; points may be chosen as the induction parameter. For conciseness, the induction parameter is omitted in the following.&lt;br /&gt;
&lt;br /&gt;
== Induction Basis ==&lt;br /&gt;
&lt;br /&gt;
=== Abstract view: ===&lt;br /&gt;
&lt;br /&gt;
# If the tree is empty, terminate the algorithm and return '''''false'''''&lt;br /&gt;
# The pointer &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt; is initialized so as to point to the root &amp;lt;math&amp;gt;N&amp;lt;/math&amp;gt;&lt;br /&gt;
# If &amp;lt;math&amp;gt;N&amp;lt;/math&amp;gt; is a leaf:&lt;br /&gt;
## Remove &amp;lt;math&amp;gt;K&amp;lt;/math&amp;gt; if contained in &amp;lt;math&amp;gt;N&amp;lt;/math&amp;gt;.&lt;br /&gt;
## If &amp;lt;math&amp;gt;N&amp;lt;/math&amp;gt; is empty now, make the tree empty.&lt;br /&gt;
## Terminate the algorithm and return '''''true''''' if &amp;lt;math&amp;gt;K&amp;lt;/math&amp;gt; was contained in &amp;lt;math&amp;gt;N&amp;lt;/math&amp;gt;, '''''false''''' otherwise.&lt;br /&gt;
# Otherwise, if &amp;lt;math&amp;gt;N&amp;lt;/math&amp;gt; contains exactly one key and is not a leaf: rearrange the root and its two children appropriately.&lt;br /&gt;
&lt;br /&gt;
=== Implementation: ===&lt;br /&gt;
&lt;br /&gt;
# If the tree is empty, terminate the algorithm and return '''''false'''''.&lt;br /&gt;
# Let &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt; point to the root.&lt;br /&gt;
# If &amp;lt;math&amp;gt;p.children[0] = void&amp;lt;/math&amp;gt; (that is, the root is a leaf):&lt;br /&gt;
## If &amp;lt;math&amp;gt;K&amp;lt;/math&amp;gt; is contained in the root, say at position &amp;lt;math&amp;gt;k&amp;lt;/math&amp;gt;:&lt;br /&gt;
### Remove the occurrence of &amp;lt;math&amp;gt;K&amp;lt;/math&amp;gt; at position &amp;lt;math&amp;gt;k&amp;lt;/math&amp;gt;.&lt;br /&gt;
### If the root is empty now, make the tree an empty tree.&lt;br /&gt;
### Otherwise (that is, the root is still non-empty):&lt;br /&gt;
#### For &amp;lt;math&amp;gt;j \in \{k+1,\dots,p.n\}&amp;lt;/math&amp;gt; (in this order), set &amp;lt;math&amp;gt;p.keys[j-1] := p.keys[j]&amp;lt;/math&amp;gt;.&lt;br /&gt;
#### Set &amp;lt;math&amp;gt;p.n := p.n - 1&amp;lt;/math&amp;gt;.&lt;br /&gt;
### Terminate the algorithm and return '''''true'''''&lt;br /&gt;
## Otherwise, terminate the algorithm and return '''''false'''''.&lt;br /&gt;
# If &amp;lt;math&amp;gt;p.n = 1&amp;lt;/math&amp;gt;:&lt;br /&gt;
## If &amp;lt;math&amp;gt;p.children[0].n = p.children[1].n = M - 1&amp;lt;/math&amp;gt;:&lt;br /&gt;
### Set &amp;lt;math&amp;gt;p.keys[M] := p.keys[1]&amp;lt;/math&amp;gt;.&lt;br /&gt;
### Set &amp;lt;math&amp;gt;p_1 := p.children[0]&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;p_2 := p.children[1]&amp;lt;/math&amp;gt;.&lt;br /&gt;
### Set &amp;lt;math&amp;gt;p.children[0] := p_1.children[0]&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;p.children[M] := p_2.children[0]&amp;lt;/math&amp;gt;.&lt;br /&gt;
### For &amp;lt;math&amp;gt; j = 1,\dots,M-1&amp;lt;/math&amp;gt; set &amp;lt;math&amp;gt;p.keys[j] := p_1.keys[j]&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;p.children[j] := p_1.children[j]&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;p.keys[M + j] := p_2.keys[j]&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;p.children[M + j] := p_2.children[j]&amp;lt;/math&amp;gt;.&lt;br /&gt;
### Set &amp;lt;math&amp;gt;p.n := 2M - 1&amp;lt;/math&amp;gt;.&lt;br /&gt;
## Otherwise:&lt;br /&gt;
### If &amp;lt;math&amp;gt;K \leq p.keys[1]&amp;lt;/math&amp;gt;:&lt;br /&gt;
#### If &amp;lt;math&amp;gt;p.children[0].n = M - 1&amp;lt;/math&amp;gt;, call [[B-Tree:Shift_Key_to_Sibling]] with pointer &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;, index &amp;lt;math&amp;gt;1&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;shiftRight = false&amp;lt;/math&amp;gt;.&lt;br /&gt;
#### If &amp;lt;math&amp;gt;p.keys[1] = K&amp;lt;/math&amp;gt;, set &amp;lt;math&amp;gt;p' := p&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;found := true&amp;lt;/math&amp;gt;.&lt;br /&gt;
#### Set &amp;lt;math&amp;gt;p := p.children[0]&amp;lt;/math&amp;gt;.&lt;br /&gt;
### Otherwise (that is, &amp;lt;math&amp;gt;K &amp;gt; p.keys[1]&amp;lt;/math&amp;gt;):&lt;br /&gt;
#### If &amp;lt;math&amp;gt;p.children[1].n = M - 1&amp;lt;/math&amp;gt;, call [[B-Tree:Shift_Key_to_Sibling]] with pointer &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;, index &amp;lt;math&amp;gt;1&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;shiftRight = true&amp;lt;/math&amp;gt;.&lt;br /&gt;
#### If &amp;lt;math&amp;gt;p.keys[1] = K&amp;lt;/math&amp;gt;, set &amp;lt;math&amp;gt;p' := p&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;found := true&amp;lt;/math&amp;gt;.&lt;br /&gt;
#### Set &amp;lt;math&amp;gt;p := p.children[1]&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
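Step 4.1 (merging the root's single key with its two &amp;lt;math&amp;gt;M-1&amp;lt;/math&amp;gt;-key children) can be sketched as follows; a Python sketch using plain 0-based lists rather than the article's 1-based node layout (all names illustrative):&lt;br /&gt;

```python
def merge_root(root_keys, left_keys, right_keys,
               left_children, right_children):
    """Steps 4.1.1-4.1.6: the root holds exactly one key and both
    children hold M-1 keys each, so the merged root holds
    (M-1) + 1 + (M-1) = 2M-1 keys, and the children's child
    lists are concatenated (2M child pointers in total)."""
    keys = left_keys + root_keys + right_keys
    children = left_children + right_children
    return keys, children
```

&lt;br /&gt;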
=== Proof: ===&lt;br /&gt;
&lt;br /&gt;
Basically, we have to verify that the [[B-Trees|implementation invariants of the B-tree data structure]] and the above-mentioned loop invariants of the removal procedure are fulfilled after the preprocessing.&lt;br /&gt;
&lt;br /&gt;
If the tree is empty (Step 1), the proof is trivial, so consider the case that the tree is non-empty.&lt;br /&gt;
&lt;br /&gt;
Implementation invariants #1, #2, and #8 are trivially fulfilled. Implementation invariants #3, #5, #6, and #7 are guaranteed by Steps 3.1.3 and 4.1.3-5 and by the postconditions of the subroutines called in Step 4.2, respectively. Implementation invariant #4 is guaranteed by Step 3.1.2 and the postconditions of the subroutines called in Step 4, respectively.&lt;br /&gt;
&lt;br /&gt;
The loop invariants of the removal procedure are only affected if Step 4 is executed. Loop invariants #1, #2, and #3 are obvious, #4 does not apply, and #5 is guaranteed by the subroutines called in Step 4.&lt;br /&gt;
&lt;br /&gt;
Obviously, the case distinction in Step 4 covers all possible cases.&lt;br /&gt;
&lt;br /&gt;
== Induction Step ==&lt;br /&gt;
&lt;br /&gt;
=== Abstract view: ===&lt;br /&gt;
&lt;br /&gt;
# If a leaf is reached:&lt;br /&gt;
## If &amp;lt;math&amp;gt;K&amp;lt;/math&amp;gt; is in that leaf, remove it.&lt;br /&gt;
## Otherwise, if &amp;lt;math&amp;gt;K&amp;lt;/math&amp;gt; has already been seen, overwrite the found occurrence of &amp;lt;math&amp;gt;K&amp;lt;/math&amp;gt; with its immediate predecessor (which is &amp;lt;math&amp;gt;p.keys[p.n]&amp;lt;/math&amp;gt;).&lt;br /&gt;
## Terminate the algorithm&lt;br /&gt;
# Otherwise:&lt;br /&gt;
## Let &amp;lt;math&amp;gt;k \in \{0,\dots,p.n\}&amp;lt;/math&amp;gt; be such that &amp;lt;math&amp;gt;p.children[k]&amp;lt;/math&amp;gt; is the next node to descend to.&lt;br /&gt;
## If &amp;lt;math&amp;gt;p.children[k].n = M - 1&amp;lt;/math&amp;gt;, rearrange the current node and its children appropriately.&lt;br /&gt;
## Check whether &amp;lt;math&amp;gt;K&amp;lt;/math&amp;gt; is in the current node.&lt;br /&gt;
&lt;br /&gt;
=== Implementation: ===&lt;br /&gt;
&lt;br /&gt;
# If &amp;lt;math&amp;gt;p.children[0] = void&amp;lt;/math&amp;gt; (that is, &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt; points to a leaf):&lt;br /&gt;
## If &amp;lt;math&amp;gt;K&amp;lt;/math&amp;gt; is contained in that leaf:&lt;br /&gt;
### Let &amp;lt;math&amp;gt;k \in \{1,\dots,p.n\}&amp;lt;/math&amp;gt; such that &amp;lt;math&amp;gt;p.keys[k] = K&amp;lt;/math&amp;gt;.&lt;br /&gt;
### For &amp;lt;math&amp;gt;j \in \{k + 1,\dots,p.n\}&amp;lt;/math&amp;gt; (in this order), set &amp;lt;math&amp;gt;p.keys[j-1] := p.keys[j]&amp;lt;/math&amp;gt;.&lt;br /&gt;
### Remove the key at position &amp;lt;math&amp;gt;p.n&amp;lt;/math&amp;gt; in the node pointed by &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;.&lt;br /&gt;
### Set &amp;lt;math&amp;gt;p.n := p.n - 1&amp;lt;/math&amp;gt;.&lt;br /&gt;
### Terminate the algorithm and return '''''true'''''&lt;br /&gt;
## Otherwise, if &amp;lt;math&amp;gt;found&amp;lt;/math&amp;gt;:&lt;br /&gt;
### Let &amp;lt;math&amp;gt;k \in \{1,\dots,p'.n\}&amp;lt;/math&amp;gt; such that &amp;lt;math&amp;gt;p'.keys[k] = K&amp;lt;/math&amp;gt;.&lt;br /&gt;
### Set &amp;lt;math&amp;gt;p'.keys[k] := p.keys[p.n]&amp;lt;/math&amp;gt;.&lt;br /&gt;
### Remove the key at position &amp;lt;math&amp;gt;p.n&amp;lt;/math&amp;gt; in the node pointed by &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;&lt;br /&gt;
### Set &amp;lt;math&amp;gt;p.n := p.n - 1&amp;lt;/math&amp;gt;&lt;br /&gt;
### Terminate the algorithm and return '''''true'''''&lt;br /&gt;
## Otherwise terminate the algorithm and return '''''false'''''&lt;br /&gt;
# Otherwise (that is, &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt; does not point to a leaf):&lt;br /&gt;
## If &amp;lt;math&amp;gt;K \leq p.keys[1]&amp;lt;/math&amp;gt;, set &amp;lt;math&amp;gt;k := 0&amp;lt;/math&amp;gt;; otherwise, let &amp;lt;math&amp;gt;k \in \{1,\dots,p.n\}&amp;lt;/math&amp;gt; be maximal such that &amp;lt;math&amp;gt;K &amp;gt; p.keys[k]&amp;lt;/math&amp;gt;.&lt;br /&gt;
## If &amp;lt;math&amp;gt;p.children[k].n = M - 1&amp;lt;/math&amp;gt;:&lt;br /&gt;
### If &amp;lt;math&amp;gt;k = p.n&amp;lt;/math&amp;gt; (that is, no sibling to the right):&lt;br /&gt;
#### If &amp;lt;math&amp;gt;p.children[k-1].n = M - 1&amp;lt;/math&amp;gt;: call [[B-tree: merge two siblings|B-tree: merge two siblings]] with pointer &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt; and index &amp;lt;math&amp;gt;k&amp;lt;/math&amp;gt;.&lt;br /&gt;
#### Otherwise (that is, &amp;lt;math&amp;gt;p.children[k-1].n &amp;gt; M - 1&amp;lt;/math&amp;gt;): call [[B-Tree:Shift_Key_to_Sibling]] with pointer &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;, index &amp;lt;math&amp;gt;k&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;shiftRight = true&amp;lt;/math&amp;gt;.&lt;br /&gt;
### Otherwise (that is, &amp;lt;math&amp;gt;k &amp;lt; p.n&amp;lt;/math&amp;gt;):&lt;br /&gt;
#### If &amp;lt;math&amp;gt;p.children[k+1].n = M - 1&amp;lt;/math&amp;gt;: call [[B-tree: merge two siblings|B-tree: merge two siblings]] with pointer &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt; and index &amp;lt;math&amp;gt;k + 1&amp;lt;/math&amp;gt;.&lt;br /&gt;
#### Otherwise (that is, &amp;lt;math&amp;gt;p.children[k+1].n &amp;gt; M - 1&amp;lt;/math&amp;gt;): call [[B-Tree:Shift_Key_to_Sibling]] with pointer &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;, index &amp;lt;math&amp;gt;k + 1&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;shiftRight = false&amp;lt;/math&amp;gt;.&lt;br /&gt;
## If &amp;lt;math&amp;gt;K&amp;lt;/math&amp;gt; is contained in the node to which &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt; points:&lt;br /&gt;
### Set &amp;lt;math&amp;gt;found := true&amp;lt;/math&amp;gt;&lt;br /&gt;
### Set &amp;lt;math&amp;gt;p' := p&amp;lt;/math&amp;gt;&lt;br /&gt;
## If &amp;lt;math&amp;gt;K \leq p.keys[1]&amp;lt;/math&amp;gt;, set &amp;lt;math&amp;gt;k := 0&amp;lt;/math&amp;gt;; otherwise, let &amp;lt;math&amp;gt;k \in \{1,\dots,p.n\}&amp;lt;/math&amp;gt; be maximal such that &amp;lt;math&amp;gt;K &amp;gt; p.keys[k]&amp;lt;/math&amp;gt;.&lt;br /&gt;
## Set &amp;lt;math&amp;gt;p := p.children[k]&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
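The choice of the child index &amp;lt;math&amp;gt;k&amp;lt;/math&amp;gt; in steps 2.1 and 2.4 can be sketched as follows; a Python sketch over an ascending key list, returning the 1-based &amp;lt;math&amp;gt;k&amp;lt;/math&amp;gt; used by the article (function name illustrative):&lt;br /&gt;

```python
def child_index(keys, K):
    """Step 2.1/2.4: return 0 if K is at most the first key,
    otherwise the maximal (1-based) k such that K is strictly
    greater than keys[k]. keys is assumed sorted ascendingly."""
    k = 0
    for i, key in enumerate(keys, start=1):
        if K > key:
            k = i
    return k
```

&lt;br /&gt;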
=== Correctness: ===&lt;br /&gt;
&lt;br /&gt;
Analogously to the induction basis, we have to verify that the [[B-Trees|implementation invariants of the B-tree data structure]] and the above-mentioned loop invariants of the removal procedure are maintained by an iteration of the main loop.&lt;br /&gt;
&lt;br /&gt;
Again, implementation invariants #1 and #2 are trivially fulfilled. In case &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt; points to a leaf, implementation invariants #4, #7, and #8 are trivially maintained as well. In this case, Steps 1.1.3 and 1.2.3 guarantee #3, Step 1.1.2 guarantees #5, and Steps 1.1.2 and 4 guarantee #6. In case &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt; does not point to a leaf, implementation invariants #3-8 are guaranteed by the subroutines called in Step 2.&lt;br /&gt;
&lt;br /&gt;
Finally, consider the loop invariants of the removal procedure. #1 is trivial; #2 is guaranteed by Step 2.4; #3 and #4.1 by Step 2.3; #4.2 by Step 4, and #5 by the subroutines called in Step 2.&lt;br /&gt;
&lt;br /&gt;
== Complexity ==&lt;br /&gt;
&lt;br /&gt;
'''Statement:''' The asymptotic complexity is in &amp;lt;math&amp;gt;\Theta(T\cdot\log n)&amp;lt;/math&amp;gt; in the best and worst case, where &amp;lt;math&amp;gt;T&amp;lt;/math&amp;gt; is the complexity of the comparison.&lt;br /&gt;
&lt;br /&gt;
'''Proof:''' Follows immediately from the facts that&lt;br /&gt;
# &amp;lt;math&amp;gt;M&amp;lt;/math&amp;gt; is assumed to be fixed, and&lt;br /&gt;
# the height of a B-tree with &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; nodes is in &amp;lt;math&amp;gt;\Theta(\log n)&amp;lt;/math&amp;gt; (cf. the remark clause of the [[B-Trees]] page).&lt;br /&gt;
&lt;br /&gt;
== Further Information ==&lt;br /&gt;
In the above specification, each node with exactly &amp;lt;math&amp;gt;M - 1&amp;lt;/math&amp;gt; keys is modified when visited. This is done for precautionary reasons only. With a slight modification, this can be avoided: when the leaf is reached, go back along the traversed path and modify each node with &amp;lt;math&amp;gt;M - 1&amp;lt;/math&amp;gt; keys until the first node with more than &amp;lt;math&amp;gt;M - 1&amp;lt;/math&amp;gt; keys is visited. Evidently, this reduces the number of modifications. However, chances are high that these nodes have to be modified sooner or later, so the true benefit is not clear. The version of the removal procedure presented here has primarily been selected because its loop invariant is simpler and more intuitive.&lt;/div&gt;</summary>
		<author><name>Weihe</name></author>
	</entry>
	<entry>
		<id>https://wiki.algo.informatik.tu-darmstadt.de/index.php?title=B-tree:_insert&amp;diff=3866</id>
		<title>B-tree: insert</title>
		<link rel="alternate" type="text/html" href="https://wiki.algo.informatik.tu-darmstadt.de/index.php?title=B-tree:_insert&amp;diff=3866"/>
		<updated>2017-03-03T13:47:09Z</updated>

		<summary type="html">&lt;p&gt;Weihe: /* Abstract View */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Videos]]&lt;br /&gt;
{{#ev:youtube|https://www.youtube.com/watch?v=vbRZ8h6ROYc|500|right||frame}}&lt;br /&gt;
[[Category: B-Tree]]&lt;br /&gt;
[[Category: Algorithm]]&lt;br /&gt;
&lt;br /&gt;
[[Category: Checkup]]&lt;br /&gt;
&lt;br /&gt;
== General Information ==&lt;br /&gt;
'''Algorithmic problem:''' [[Sorted sequence#Insert|Sorted sequence: insert]]&lt;br /&gt;
&lt;br /&gt;
'''Type of algorithm:''' loop&lt;br /&gt;
&lt;br /&gt;
'''Auxiliary data:''' Pointers &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;p'&amp;lt;/math&amp;gt; of type &amp;quot;pointer to a B-tree node of key type &amp;lt;math&amp;gt;\mathcal{K}&amp;lt;/math&amp;gt;.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== Abstract View ==&lt;br /&gt;
'''Invariant:''' After &amp;lt;math&amp;gt;i\geq 0&amp;lt;/math&amp;gt; iterations:&lt;br /&gt;
&lt;br /&gt;
# Pointer &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt; points to some node of the B-tree on height level &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt; such that &amp;lt;math&amp;gt;n &amp;lt; 2M - 1&amp;lt;/math&amp;gt; for this node.&lt;br /&gt;
# The key to be inserted is in the [[Directed Tree#Ranges of Search Tree Nodes|range]] of that node.&lt;br /&gt;
&lt;br /&gt;
'''Variant:''' &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt; is increased by &amp;lt;math&amp;gt;1&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
'''Break condition:''' &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt; points to a leaf.&lt;br /&gt;
&lt;br /&gt;
'''Remark:''' For example, the height of the subtree rooted at the node to which &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt; points may be chosen as the induction parameter. For conciseness, the induction parameter is omitted in the following.&lt;br /&gt;
&lt;br /&gt;
== Induction Basis ==&lt;br /&gt;
&lt;br /&gt;
'''Abstract view:'''&lt;br /&gt;
# If the tree is empty, create a new root with &amp;lt;math&amp;gt;K&amp;lt;/math&amp;gt; as its only key and terminate the algorithm.&lt;br /&gt;
# Otherwise, '''''p''''' points to the root.&lt;br /&gt;
# If the root is full:&lt;br /&gt;
## A new root is created, and the old root is the unique child of the new root.&lt;br /&gt;
## The old root is '''split''' into two siblings.&lt;br /&gt;
## '''''p''''' points to the new root.&lt;br /&gt;
&lt;br /&gt;
'''Implementation:'''&lt;br /&gt;
# If the tree is empty:&lt;br /&gt;
## Create a new, empty node and let &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt; point to it.&lt;br /&gt;
## Set &amp;lt;math&amp;gt;p.keys[1] := K&amp;lt;/math&amp;gt;.&lt;br /&gt;
## Terminate the algorithm.&lt;br /&gt;
# Let &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt; point to the root of the tree.&lt;br /&gt;
# If &amp;lt;math&amp;gt;p.n = 2M - 1&amp;lt;/math&amp;gt;:&lt;br /&gt;
## Create a new, empty node and let &amp;lt;math&amp;gt;p'&amp;lt;/math&amp;gt; point to it.&lt;br /&gt;
## Set &amp;lt;math&amp;gt;p'&amp;lt;/math&amp;gt;.children[0] &amp;lt;math&amp;gt;:= p&amp;lt;/math&amp;gt;.&lt;br /&gt;
## Call [[B-Trees|B-tree: split node into two siblings]] with pointer &amp;lt;math&amp;gt;p'&amp;lt;/math&amp;gt; and index &amp;lt;math&amp;gt;0&amp;lt;/math&amp;gt;.&lt;br /&gt;
## Set &amp;lt;math&amp;gt; p := p'&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
'''Proof:'''&lt;br /&gt;
Basically, we have to verify that the [[B-Trees|implementation invariants of the B-tree data structure]] and the above-mentioned loop invariants of the insert procedure are fulfilled after the preprocessing.&lt;br /&gt;
&lt;br /&gt;
If the tree is empty or, otherwise, the root is not full, all invariants are trivially fulfilled by Step 1 and Step 2, respectively. So consider the case that the tree is not empty and the root is full, that is, Step 3 is executed.&lt;br /&gt;
&lt;br /&gt;
Implementation invariants #1, #2, and #8 are trivially fulfilled. Maintenance of the implementation invariants #3-#7 is guaranteed by the split procedure.&lt;br /&gt;
Finally, the loop invariants result from Step 3.4.&lt;br /&gt;
&lt;br /&gt;
== Induction Step ==&lt;br /&gt;
&lt;br /&gt;
=== Abstract view: ===&lt;br /&gt;
&lt;br /&gt;
# If the current node '''''N''''' is a leaf, insert the new key in '''''N''''' and terminate the algorithm.&lt;br /&gt;
# Otherwise, let '''''N'''''' be the child of '''''N''''' such that the key to be inserted is in the [[Directed Tree#Ranges of Search Tree Nodes|range]] of that child (ties arbitrarily broken).&lt;br /&gt;
# If '''''N'''''' is full, '''''N'''''' is '''split''' into two siblings.&lt;br /&gt;
# The new current node is the child of '''''N''''' such that the key to be inserted is in the [[Directed Tree#Ranges of Search Tree Nodes|range]] of that child (one of these two siblings, in fact; ties arbitrarily broken).&lt;br /&gt;
&lt;br /&gt;
=== Implementation: ===&lt;br /&gt;
# If &amp;lt;math&amp;gt;p.children[0] = void&amp;lt;/math&amp;gt; (that is, &amp;lt;math&amp;gt;N&amp;lt;/math&amp;gt; is a leaf):&lt;br /&gt;
## If &amp;lt;math&amp;gt; K \geq p.keys[p.n]&amp;lt;/math&amp;gt;, insert the key at position &amp;lt;math&amp;gt;p.n + 1&amp;lt;/math&amp;gt;; otherwise:&lt;br /&gt;
### Let &amp;lt;math&amp;gt;k \in \{1,\dots,p.n\}&amp;lt;/math&amp;gt; be the minimal position such that &amp;lt;math&amp;gt;K &amp;lt; p.keys[k]&amp;lt;/math&amp;gt;.&lt;br /&gt;
### For &amp;lt;math&amp;gt;j = p.n,\dots,k&amp;lt;/math&amp;gt; (in that order), set &amp;lt;math&amp;gt;p.keys[j+1] := p.keys[j]&amp;lt;/math&amp;gt;.&lt;br /&gt;
### Insert the new key at position &amp;lt;math&amp;gt;k&amp;lt;/math&amp;gt;.&lt;br /&gt;
## Set &amp;lt;math&amp;gt;p.n := p.n + 1&amp;lt;/math&amp;gt;.&lt;br /&gt;
## Terminate the algorithm.&lt;br /&gt;
# If &amp;lt;math&amp;gt;K &amp;lt; p.keys[1]&amp;lt;/math&amp;gt; set &amp;lt;math&amp;gt;k := 0&amp;lt;/math&amp;gt;; otherwise, let &amp;lt;math&amp;gt;k \in \{1,\dots,p.n\}&amp;lt;/math&amp;gt; be maximal such that &amp;lt;math&amp;gt;K \geq p.keys[k]&amp;lt;/math&amp;gt;.&lt;br /&gt;
# If &amp;lt;math&amp;gt;p.children[k].n = 2M - 1&amp;lt;/math&amp;gt;: call [[B-Trees|B-tree: split node into two siblings]] with pointer &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt; and index &amp;lt;math&amp;gt;k&amp;lt;/math&amp;gt;.&lt;br /&gt;
# If &amp;lt;math&amp;gt;K &amp;lt; p.keys[1]&amp;lt;/math&amp;gt; set &amp;lt;math&amp;gt;p := p.children[0]&amp;lt;/math&amp;gt;; otherwise, set &amp;lt;math&amp;gt;p := p.children[k]&amp;lt;/math&amp;gt; where &amp;lt;math&amp;gt;k \in \{1,\dots,p.n\}&amp;lt;/math&amp;gt; is maximal such that &amp;lt;math&amp;gt;K \geq p.keys[k]&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
=== Correctness: ===&lt;br /&gt;
Analogously to the induction basis, we have to verify that the [[B-Trees|implementation invariants of the B-tree data structure]] and the above-mentioned loop invariants of the insert procedure are maintained through an iteration of the main loop.&lt;br /&gt;
&lt;br /&gt;
Again, the implementation invariants #1, #2 and #8 are trivially maintained. If the node is not full, the other implementation invariants and the above-mentioned loop invariants of the insert procedure are maintained as well. So consider the case that the node is full.&lt;br /&gt;
&lt;br /&gt;
Analogously to the induction basis, the implementation invariants #3-#7 are guaranteed by the split operation.&lt;br /&gt;
&lt;br /&gt;
Finally, the loop invariants of the insert procedure result from Step 4.&lt;br /&gt;
&lt;br /&gt;
== Pseudocode ==&lt;br /&gt;
&lt;br /&gt;
=== B-TREE-INSERT(''T'',''k'') ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
 B-TREE-INSERT(''T'',''k'')&lt;br /&gt;
  1 ''r'' = ''T.root''&lt;br /&gt;
  2 '''if''' ''r.n'' == 2''t'' - 1&lt;br /&gt;
  3      ''s'' = ALLOCATE-NODE()&lt;br /&gt;
  4      ''T.root'' = ''s''&lt;br /&gt;
  5      ''s.leaf'' = FALSE&lt;br /&gt;
  6      ''s.n'' = 0&lt;br /&gt;
  7      ''s.c&amp;lt;sub&amp;gt;1&amp;lt;/sub&amp;gt;'' = ''r''&lt;br /&gt;
  8      B-TREE-SPLIT-CHILD(''s'', 1)&lt;br /&gt;
  9      B-TREE-INSERT-NONFULL(''s, k'')&lt;br /&gt;
 10 '''else''' B-TREE-INSERT-NONFULL(''r, k'')   &lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== B-TREE-INSERT-NONFULL(''x'',''k'') ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
 B-TREE-INSERT-NONFULL(''x'',''k'')&lt;br /&gt;
  1 ''i'' = ''x.n''&lt;br /&gt;
  2 '''if''' ''x.leaf''&lt;br /&gt;
  3      '''while''' ''i'' &amp;amp;ge; 1 and ''k'' &amp;lt; ''x.key''&amp;lt;sub&amp;gt;i&amp;lt;/sub&amp;gt;&lt;br /&gt;
  4              ''x.key''&amp;lt;sub&amp;gt;i+1&amp;lt;/sub&amp;gt;= ''x.key''&amp;lt;sub&amp;gt;i&amp;lt;/sub&amp;gt;&lt;br /&gt;
  5              ''i'' = ''i'' - 1&lt;br /&gt;
  6      ''x.key''&amp;lt;sub&amp;gt;i+1&amp;lt;/sub&amp;gt; = ''k''&lt;br /&gt;
  7      ''x.n'' = ''x.n'' + 1&lt;br /&gt;
  8      DISK-WRITE(''x'')&lt;br /&gt;
  9 '''else while''' ''i'' &amp;amp;ge; 1 and ''k'' &amp;lt; ''x.key''&amp;lt;sub&amp;gt;i&amp;lt;/sub&amp;gt;&lt;br /&gt;
 10             ''i'' = ''i'' - 1&lt;br /&gt;
 11        ''i'' = ''i'' + 1&lt;br /&gt;
 12        DISK-READ(''x.c''&amp;lt;sub&amp;gt;i&amp;lt;/sub&amp;gt;)&lt;br /&gt;
 13        '''if''' ''x.c&amp;lt;sub&amp;gt;i&amp;lt;/sub&amp;gt;.n'' == 2''t'' - 1&lt;br /&gt;
 14               B-TREE-SPLIT-CHILD(''x, i'')&lt;br /&gt;
 15               '''if''' ''k'' &amp;gt; ''x.key''&amp;lt;sub&amp;gt;i&amp;lt;/sub&amp;gt;&lt;br /&gt;
 16                       ''i'' = ''i'' + 1&lt;br /&gt;
 17         B-TREE-INSERT-NONFULL(''x.c&amp;lt;sub&amp;gt;i&amp;lt;/sub&amp;gt;, k)   &lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
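As a cross-check, the pseudocode above can be transcribed into executable Python. This is a sketch, not a canonical implementation: the class layout, the minimum degree ''t'' (a node is full with 2''t'' - 1 keys), and the in-memory node representation are assumptions, and the DISK-READ/DISK-WRITE steps are omitted.&lt;br /&gt;

```python
class BTreeNode:
    def __init__(self, leaf=True):
        self.keys = []       # sorted keys, at most 2t - 1 of them
        self.children = []   # one more child than keys on inner nodes
        self.leaf = leaf

class BTree:
    def __init__(self, t=2):
        self.t = t                       # minimum degree (assumed fixed)
        self.root = BTreeNode(leaf=True)

    def split_child(self, x, i):
        """Split the full child x.children[i] into two siblings,
        moving its median key up into x (cf. B-TREE-SPLIT-CHILD)."""
        t = self.t
        y = x.children[i]
        z = BTreeNode(leaf=y.leaf)
        median = y.keys[t - 1]
        z.keys = y.keys[t:]              # upper t - 1 keys move to z
        y.keys = y.keys[:t - 1]          # lower t - 1 keys stay in y
        if not y.leaf:
            z.children = y.children[t:]
            y.children = y.children[:t]
        x.keys.insert(i, median)
        x.children.insert(i + 1, z)

    def insert(self, k):
        r = self.root
        if len(r.keys) == 2 * self.t - 1:    # full root: grow in height
            s = BTreeNode(leaf=False)
            s.children.append(r)
            self.root = s
            self.split_child(s, 0)
            self._insert_nonfull(s, k)
        else:
            self._insert_nonfull(r, k)

    def _insert_nonfull(self, x, k):
        i = len(x.keys) - 1
        if x.leaf:
            x.keys.append(None)              # make room, shift larger keys right
            while i >= 0 and x.keys[i] > k:
                x.keys[i + 1] = x.keys[i]
                i -= 1
            x.keys[i + 1] = k
        else:
            while i >= 0 and x.keys[i] > k:  # find the child whose range holds k
                i -= 1
            i += 1
            if len(x.children[i].keys) == 2 * self.t - 1:
                self.split_child(x, i)       # split full child before descending
                if k > x.keys[i]:
                    i += 1
            self._insert_nonfull(x.children[i], k)

    def inorder(self):
        """Collect all keys in sorted order (for checking the result)."""
        out = []
        def walk(x):
            for j, key in enumerate(x.keys):
                if not x.leaf:
                    walk(x.children[j])
                out.append(key)
            if not x.leaf:
                walk(x.children[-1])
        walk(self.root)
        return out
```

Inserting a permutation of keys and reading them back in order exercises both the root split in the basis and the preemptive splits of the induction step.&lt;br /&gt;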
&lt;br /&gt;
== Complexity ==&lt;br /&gt;
'''Statement:''' The asymptotic complexity is in &amp;lt;math&amp;gt;\Theta(T\cdot\log n)&amp;lt;/math&amp;gt; in the worst case, where &amp;lt;math&amp;gt;T&amp;lt;/math&amp;gt; is the complexity of the comparison.&lt;br /&gt;
&lt;br /&gt;
'''Proof:''' Follows immediately from the facts that&lt;br /&gt;
&lt;br /&gt;
# &amp;lt;math&amp;gt;M&amp;lt;/math&amp;gt; is assumed to be fixed, and&lt;br /&gt;
# the height of a B-tree with &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; nodes is in &amp;lt;math&amp;gt;\Theta(\log n)&amp;lt;/math&amp;gt; (cf. the remark clause of the [[B-Trees|B-tree]] page).&lt;br /&gt;
&lt;br /&gt;
== Further Information ==&lt;br /&gt;
In the above specification, each full node is split into two when visited. This is done for precautionary reasons only. With a slight modification, this can be avoided: when the leaf is reached, go back along the traversed path and split full nodes until the first non-full node is reached. Evidently, this reduces the number of splits. However, chances are high that these nodes have to be split sooner or later, so the true benefit is not clear. The version presented here has primarily been selected because its loop invariant is simpler and more intuitive.&lt;/div&gt;</summary>
		<author><name>Weihe</name></author>
	</entry>
	<entry>
		<id>https://wiki.algo.informatik.tu-darmstadt.de/index.php?title=B-tree:_find&amp;diff=3865</id>
		<title>B-tree: find</title>
		<link rel="alternate" type="text/html" href="https://wiki.algo.informatik.tu-darmstadt.de/index.php?title=B-tree:_find&amp;diff=3865"/>
		<updated>2017-03-03T13:46:49Z</updated>

		<summary type="html">&lt;p&gt;Weihe: /* Abstract View */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Videos]]&lt;br /&gt;
{{#ev:youtube|https://www.youtube.com/watch?v=vbRZ8h6ROYc|500|right||frame}}&lt;br /&gt;
[[Category:B-Tree]]&lt;br /&gt;
[[Category:Algorithm]]&lt;br /&gt;
== General Information ==&lt;br /&gt;
'''Algorithmic problem:''' [[Sorted sequence#Find|Sorted sequence: find]]&lt;br /&gt;
&lt;br /&gt;
'''Type of algorithm:''' loop&lt;br /&gt;
&lt;br /&gt;
'''Auxiliary data:''' A pointer &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt; of type &amp;quot;pointer to a B-tree node of key type &amp;lt;math&amp;gt;\mathcal{K}&amp;lt;/math&amp;gt;&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
== Abstract View ==&lt;br /&gt;
'''Invariant:''' After &amp;lt;math&amp;gt;i\geq 0&amp;lt;/math&amp;gt; iterations:&lt;br /&gt;
# pointer &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt; points to some node of the B-tree on height level &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt; and&lt;br /&gt;
# the searched key is in the [[Directed Tree#Ranges of Search Tree Nodes|range]] of that node.&lt;br /&gt;
&lt;br /&gt;
'''Variant:''' &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt; is increased by &amp;lt;math&amp;gt;1&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
'''Break condition:'''&lt;br /&gt;
# &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt; points to a leaf of the B-tree or (that is, inclusive-or)&lt;br /&gt;
# the searched key is in the node to which &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt; points.&lt;br /&gt;
&lt;br /&gt;
'''Remark:''' For example, the height of the subtree rooted at the node to which &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt; points may be chosen as the induction parameter. For conciseness, the induction parameter is omitted in the following.&lt;br /&gt;
&lt;br /&gt;
== Induction Basis ==&lt;br /&gt;
'''Abstract view:''' '''''p''''' is initialized so as to point to the root of the B-tree.&lt;br /&gt;
&lt;br /&gt;
'''Implementation:''' Obvious.&lt;br /&gt;
&lt;br /&gt;
'''Proof:''' Obvious.&lt;br /&gt;
&lt;br /&gt;
== Induction Step ==&lt;br /&gt;
&lt;br /&gt;
'''Abstract view:'''&lt;br /&gt;
# Let &amp;lt;math&amp;gt;N&amp;lt;/math&amp;gt; denote the node to which &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt; currently points.&lt;br /&gt;
# If the searched key is in &amp;lt;math&amp;gt;N&amp;lt;/math&amp;gt;, terminate the algorithm and return '''true'''.&lt;br /&gt;
# Otherwise, if &amp;lt;math&amp;gt;N&amp;lt;/math&amp;gt; is a leaf, terminate the algorithm and return '''false'''.&lt;br /&gt;
# Otherwise, let &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt; point the child of &amp;lt;math&amp;gt;N&amp;lt;/math&amp;gt; such that the searched key is in the [[Directed Tree#Ranges of Search Tree Nodes|range]] of that child.&lt;br /&gt;
&lt;br /&gt;
'''Implementation:'''&lt;br /&gt;
# If &amp;lt;math&amp;gt;K&amp;lt;/math&amp;gt; is one of the values &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;.keys&amp;lt;math&amp;gt;[1],\dots,p&amp;lt;/math&amp;gt;.keys&amp;lt;math&amp;gt;[p.n]&amp;lt;/math&amp;gt;, terminate the algorithm and return '''true'''.&lt;br /&gt;
# If &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;.children&amp;lt;math&amp;gt;[0] =&amp;lt;/math&amp;gt; void (that is, the current node is a leaf), terminate the algorithm and return '''false'''.&lt;br /&gt;
# If &amp;lt;math&amp;gt;K &amp;lt; p&amp;lt;/math&amp;gt;.keys&amp;lt;math&amp;gt;[1]&amp;lt;/math&amp;gt; set &amp;lt;math&amp;gt;p := p&amp;lt;/math&amp;gt;.children&amp;lt;math&amp;gt;[0]&amp;lt;/math&amp;gt;.&lt;br /&gt;
# Otherwise, if &amp;lt;math&amp;gt;K &amp;gt; p&amp;lt;/math&amp;gt;.keys&amp;lt;math&amp;gt;[p.n]&amp;lt;/math&amp;gt; set &amp;lt;math&amp;gt;p := p&amp;lt;/math&amp;gt;.children&amp;lt;math&amp;gt;[p.n]&amp;lt;/math&amp;gt;.&lt;br /&gt;
# Otherwise, there is exactly one &amp;lt;math&amp;gt;i \in \{1,\dots,p.n-1\}&amp;lt;/math&amp;gt; such that &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;.keys&amp;lt;math&amp;gt;[i] &amp;lt; K &amp;lt; p&amp;lt;/math&amp;gt;.keys&amp;lt;math&amp;gt;[i+1]&amp;lt;/math&amp;gt;.&lt;br /&gt;
# Set &amp;lt;math&amp;gt;p := p&amp;lt;/math&amp;gt;.children&amp;lt;math&amp;gt;[i]&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
'''Correctness:'''&lt;br /&gt;
Obvious.&lt;br /&gt;
&lt;br /&gt;
== Complexity ==&lt;br /&gt;
&lt;br /&gt;
'''Statement:''' The asymptotic complexity is in &amp;lt;math&amp;gt;\Theta(T\cdot\log n)&amp;lt;/math&amp;gt; in the worst case, where &amp;lt;math&amp;gt;T&amp;lt;/math&amp;gt; is the complexity of the comparison and &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; the total number of keys in the B-tree.&lt;br /&gt;
&lt;br /&gt;
'''Proof:''' Follows immediately from the [[B-tree#Depth of a B-tree|maximum height]] of the B-tree.&lt;br /&gt;
&lt;br /&gt;
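The search steps above admit a short executable sketch in Python. The dict-based node layout (''keys'', ''children'') is an illustrative assumption, disk reads are omitted, and the sketch returns a boolean as in the abstract view rather than a node/index pair.&lt;br /&gt;

```python
def btree_find(x, k):
    """Return True iff key k occurs in the B-tree rooted at node x."""
    i = 0
    n = len(x['keys'])
    # advance past all keys smaller than k
    while i != n and k > x['keys'][i]:
        i += 1
    if i != n and k == x['keys'][i]:
        return True                    # key found in the current node
    if not x['children']:
        return False                   # leaf reached without finding k
    return btree_find(x['children'][i], k)   # descend into the matching range

# hand-built example: root keys (10, 20) with three leaf children
leaf = lambda *ks: {'keys': list(ks), 'children': []}
root = {'keys': [10, 20],
        'children': [leaf(2, 5), leaf(12, 17), leaf(25, 30)]}
```

Each recursive call descends exactly one height level, matching the loop invariant.&lt;br /&gt;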
== Pseudocode == &lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
 B-TREE-FIND(''x,k'')&lt;br /&gt;
 1 ''i'' = 1&lt;br /&gt;
 2 '''while''' ''i'' &amp;amp;le; ''x.n'' and ''k'' &amp;amp;gt; ''x.key&amp;lt;sub&amp;gt;i&amp;lt;/sub&amp;gt;'' &lt;br /&gt;
 3        ''i'' = ''i'' + 1&lt;br /&gt;
 4 '''if''' ''i'' &amp;amp;le; ''x.n'' and ''k'' == ''x.key&amp;lt;sub&amp;gt;i&amp;lt;/sub&amp;gt;'' &lt;br /&gt;
 5        '''return''' (''x'', ''i'')&lt;br /&gt;
 6 '''elseif''' '' x.leaf''&lt;br /&gt;
 7        '''return''' NIL&lt;br /&gt;
 8 '''else''' DISK-READ(''x.c&amp;lt;sub&amp;gt;i&amp;lt;/sub&amp;gt;'') &lt;br /&gt;
 9        '''return''' B-TREE-FIND(''x.c&amp;lt;sub&amp;gt;i&amp;lt;/sub&amp;gt;,k'')&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;/div&gt;</summary>
		<author><name>Weihe</name></author>
	</entry>
	<entry>
		<id>https://wiki.algo.informatik.tu-darmstadt.de/index.php?title=Binary_search_tree:_traverse&amp;diff=3864</id>
		<title>Binary search tree: traverse</title>
		<link rel="alternate" type="text/html" href="https://wiki.algo.informatik.tu-darmstadt.de/index.php?title=Binary_search_tree:_traverse&amp;diff=3864"/>
		<updated>2017-03-03T13:41:12Z</updated>

		<summary type="html">&lt;p&gt;Weihe: /* Abstract View */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Videos]]&lt;br /&gt;
[[Category:Algorithms]]&lt;br /&gt;
[[Category:Search Algorithms]]&lt;br /&gt;
[[Category:Tree Algorithms]]&lt;br /&gt;
[[Category:Binary_Search_Tree]]&lt;br /&gt;
{{#ev:youtube|https://www.youtube.com/watch?v=PXqM9q57BMk|500|right|Binary search tree: traverse|frame}}&lt;br /&gt;
== General Information ==&lt;br /&gt;
'''Algorithmic problem:''' [[Sorted sequence#Traverse|Sorted sequence: traverse]]&lt;br /&gt;
&lt;br /&gt;
'''Type of algorithm:''' loop&lt;br /&gt;
&lt;br /&gt;
'''Auxiliary data:'''&lt;br /&gt;
# A [[Sets and sequences#Stacks and queues|stack]] '''''S''''' whose elements are pairs consisting of&lt;br /&gt;
## a natural number '''''seenChildren''''' in the range &amp;lt;math&amp;gt;\{0,1,2\}&amp;lt;/math&amp;gt; and&lt;br /&gt;
## a binary search tree node '''''node'''''.&lt;br /&gt;
# Pointers '''''elem''''' and '''''elem'''''' to stack elements.&lt;br /&gt;
&lt;br /&gt;
== Abstract View ==&lt;br /&gt;
'''Invariant:''' Before and after each iteration:&lt;br /&gt;
# &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; contains all nodes on the path (called the '''current path''') from the root to some binary search tree node (called the '''current node''') in the order from the root to the current node (in particular, the current node is the top element of &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;).&lt;br /&gt;
# Let '''''elem''''' be some element of the stack:&lt;br /&gt;
## If &amp;lt;math&amp;gt;elem.seenChildren = 0&amp;lt;/math&amp;gt;, neither &amp;lt;math&amp;gt;elem.node.key&amp;lt;/math&amp;gt; nor one of the keys in the [[Directed Tree|subtree]] rooted at &amp;lt;math&amp;gt;elem.node.right&amp;lt;/math&amp;gt; has been appended to &amp;lt;math&amp;gt;L&amp;lt;/math&amp;gt; so far; possibly, some or all keys in the subtree rooted at &amp;lt;math&amp;gt;elem.node.left&amp;lt;/math&amp;gt; have already been appended.&lt;br /&gt;
## If &amp;lt;math&amp;gt;elem.seenChildren = 1&amp;lt;/math&amp;gt;, all keys in the [[Directed Tree|subtree]] rooted at &amp;lt;math&amp;gt;elem.node.left&amp;lt;/math&amp;gt; and, afterwards, &amp;lt;math&amp;gt;elem.node.key&amp;lt;/math&amp;gt; have already been appended to &amp;lt;math&amp;gt;L&amp;lt;/math&amp;gt;; possibly, some or all keys in the subtree rooted at &amp;lt;math&amp;gt;elem.node.right&amp;lt;/math&amp;gt; have been appended as well.&lt;br /&gt;
## If &amp;lt;math&amp;gt;elem.seenChildren = 2&amp;lt;/math&amp;gt;, all keys in the [[Directed Tree|subtree]] rooted at &amp;lt;math&amp;gt;elem.node&amp;lt;/math&amp;gt; have been appended to &amp;lt;math&amp;gt;L&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Variant:''' Identify the current content of &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; with the string of length &amp;lt;math&amp;gt;|S|&amp;lt;/math&amp;gt; over the alphabet &amp;lt;math&amp;gt;\{0,1,2\}&amp;lt;/math&amp;gt; that is formed by the &amp;lt;math&amp;gt;seenChildren&amp;lt;/math&amp;gt; values of the stack elements (in the order from the root to the current node). Then the current string immediately '''after''' the iteration&lt;br /&gt;
# is either empty (which, clearly, is tantamount to &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; being empty),&lt;br /&gt;
# or it is lexicographically larger than the string immediately '''before''' the iteration.&lt;br /&gt;
&lt;br /&gt;
'''Break condition:''' &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; is empty.&lt;br /&gt;
&lt;br /&gt;
'''Remark:''' The number of iterations accomplished so far is the natural induction parameter. For conciseness, the induction parameter is omitted in the following.&lt;br /&gt;
&lt;br /&gt;
== Induction Basis ==&lt;br /&gt;
'''Abstract view:''' Initialize &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; with the root.&lt;br /&gt;
&lt;br /&gt;
'''Implementation:'''&lt;br /&gt;
# Create a new stack element &amp;lt;math&amp;gt;elem&amp;lt;/math&amp;gt;.&lt;br /&gt;
# Set &amp;lt;math&amp;gt; elem.node := root&amp;lt;/math&amp;gt;.&lt;br /&gt;
# Set &amp;lt;math&amp;gt; elem.seenChildren := 0&amp;lt;/math&amp;gt;.&lt;br /&gt;
# Ensure that &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; is empty.&lt;br /&gt;
# &amp;lt;math&amp;gt;S.push(elem)&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
'''Proof:''' All invariants are trivially fulfilled.&lt;br /&gt;
&lt;br /&gt;
== Induction Step ==&lt;br /&gt;
'''Abstract view:'''&lt;br /&gt;
# If the left child of the current node has not yet been examined:&lt;br /&gt;
## If the left child of the current node exists, proceed to the left child.&lt;br /&gt;
#Otherwise, if the right child of the current node has not yet been examined:&lt;br /&gt;
## Append the key of the current node to &amp;lt;math&amp;gt;L&amp;lt;/math&amp;gt;.&lt;br /&gt;
## If the right child of the current node exists, proceed to the right child.&lt;br /&gt;
#Otherwise (that is, left and right child examined), proceed to the parent of the current node.&lt;br /&gt;
&lt;br /&gt;
'''Implementation:'''&lt;br /&gt;
# Set &amp;lt;math&amp;gt;elem := S.top()&amp;lt;/math&amp;gt;.&lt;br /&gt;
# Set &amp;lt;math&amp;gt;node := elem.node&amp;lt;/math&amp;gt;&lt;br /&gt;
# If &amp;lt;math&amp;gt;elem.seenChildren = 0&amp;lt;/math&amp;gt;:&lt;br /&gt;
## If &amp;lt;math&amp;gt;node.left \neq void&amp;lt;/math&amp;gt;:&lt;br /&gt;
### Create a new element &amp;lt;math&amp;gt;elem'&amp;lt;/math&amp;gt;.&lt;br /&gt;
### Set &amp;lt;math&amp;gt;elem'.node := node.left&amp;lt;/math&amp;gt;.&lt;br /&gt;
### Set &amp;lt;math&amp;gt;elem'.seenChildren := 0&amp;lt;/math&amp;gt;.&lt;br /&gt;
### &amp;lt;math&amp;gt;S.push(elem')&amp;lt;/math&amp;gt;.&lt;br /&gt;
## Otherwise:&lt;br /&gt;
### Set &amp;lt;math&amp;gt;elem.seenChildren := 1&amp;lt;/math&amp;gt;.&lt;br /&gt;
### &amp;lt;math&amp;gt;L.append(elem.node.key)&amp;lt;/math&amp;gt;.&lt;br /&gt;
# Otherwise, if &amp;lt;math&amp;gt;elem.seenChildren = 1&amp;lt;/math&amp;gt;:&lt;br /&gt;
## If &amp;lt;math&amp;gt;node.right \neq void&amp;lt;/math&amp;gt;:&lt;br /&gt;
### Create a new element &amp;lt;math&amp;gt;elem'&amp;lt;/math&amp;gt;.&lt;br /&gt;
### Set &amp;lt;math&amp;gt;elem'.node := node.right&amp;lt;/math&amp;gt;.&lt;br /&gt;
### Set &amp;lt;math&amp;gt;elem'.seenChildren := 0&amp;lt;/math&amp;gt;.&lt;br /&gt;
### &amp;lt;math&amp;gt;S.push(elem')&amp;lt;/math&amp;gt;.&lt;br /&gt;
## Otherwise, set &amp;lt;math&amp;gt;elem.seenChildren := 2&amp;lt;/math&amp;gt;.&lt;br /&gt;
# Otherwise (that is, &amp;lt;math&amp;gt;elem.seenChildren = 2&amp;lt;/math&amp;gt;):&lt;br /&gt;
## &amp;lt;math&amp;gt;S.pop()&amp;lt;/math&amp;gt;.&lt;br /&gt;
## If &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; is empty, terminate the algorithm.&lt;br /&gt;
## Set &amp;lt;math&amp;gt;elem := S.top()&amp;lt;/math&amp;gt;&lt;br /&gt;
## Set &amp;lt;math&amp;gt;elem.seenChildren := elem.seenChildren + 1&amp;lt;/math&amp;gt;.&lt;br /&gt;
## If &amp;lt;math&amp;gt;elem.seenChildren = 1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;elem.node.left \neq void&amp;lt;/math&amp;gt; then: &amp;lt;math&amp;gt;L.append(elem.node.key)&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
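The implementation steps above can be exercised with the following Python sketch. The explicit stack of records with fields ''node'' and ''seen'' mirrors the stack elements with their &amp;lt;math&amp;gt;seenChildren&amp;lt;/math&amp;gt; counters; the minimal Node class is an illustrative assumption.&lt;br /&gt;

```python
class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def traverse(root):
    """Append all keys to L in ascending order via the seenChildren scheme."""
    L = []
    if root is None:
        return L
    S = [{'node': root, 'seen': 0}]          # induction basis: push the root
    while S:
        elem = S[-1]
        node = elem['node']
        if elem['seen'] == 0:                # left subtree not yet examined
            if node.left is not None:
                S.append({'node': node.left, 'seen': 0})
            else:
                elem['seen'] = 1
                L.append(node.key)           # no left child: emit key now
        elif elem['seen'] == 1:              # key emitted, right subtree next
            if node.right is not None:
                S.append({'node': node.right, 'seen': 0})
            else:
                elem['seen'] = 2
        else:                                # both subtrees done: pop (Step 3)
            S.pop()
            if not S:
                break                        # break condition: S is empty
            parent = S[-1]
            parent['seen'] += 1
            # returning from a non-empty left subtree: emit the parent's key
            if parent['seen'] == 1 and parent['node'].left is not None:
                L.append(parent['node'].key)
    return L
```

Note how the two append sites correspond exactly to Steps 3.2.2 and 5.4, so each key is appended once.&lt;br /&gt;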
'''Correctness:''' Invariant #1 follows immediately from the fact that each push operation simply extends the current path and each pop operation cuts off the end node of the current path. Next consider Invariant #2.&lt;br /&gt;
&lt;br /&gt;
Invariant #2 is trivially maintained both for &amp;lt;math&amp;gt;elem&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;elem'&amp;lt;/math&amp;gt; in Steps 3.1 and 4.1 and &amp;lt;math&amp;gt;elem&amp;lt;/math&amp;gt; in Step 3.2 and 4.2 (note the append operation in Step 2.2, which reflects the inclusion of the current node in Invariant 2.2). In Step 5, the algorithm returns from a non-empty subtree, so the increase of &amp;lt;math&amp;gt;seenChildren&amp;lt;/math&amp;gt; and the append in case &amp;lt;math&amp;gt;seenChildren = 1&amp;lt;/math&amp;gt; are correct. Note that each key is appended only once because Step 3.2.2 applies only if the left child is empty, and Step 5.4, only if the left child is '''not''' empty.&lt;br /&gt;
Finally, consider the variant. Whenever a new element is pushed, the new string is an extension of the old string, which is clearly lexicographically larger. On the other hand, whenever an element is removed from &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;, Step 5.4 ensures that the new string is not just a prefix of the old string but is larger at the last position of the new string (which is the second last position of the old string).&lt;br /&gt;
&lt;br /&gt;
== Complexity ==&lt;br /&gt;
'''Statement:''' Linear in the length of the sequence in the worst case.&lt;br /&gt;
&lt;br /&gt;
'''Proof:''' Each iteration of the loop takes constant time. Each node is visited in at most three iterations, viz. once with &amp;lt;math&amp;gt;seenChildren = 0,1,2&amp;lt;/math&amp;gt;, respectively. This observation proves the claim.&lt;br /&gt;
&lt;br /&gt;
== Pseudocode ==&lt;br /&gt;
:INORDER-TREE-WALK(x)&lt;br /&gt;
::if x ≠ NULL&lt;br /&gt;
:::INORDER-TREE-WALK(left[x])&lt;br /&gt;
:::print key[x]&lt;br /&gt;
:::INORDER-TREE-WALK(right[x])&lt;/div&gt;</summary>
		<author><name>Weihe</name></author>
	</entry>
	<entry>
		<id>https://wiki.algo.informatik.tu-darmstadt.de/index.php?title=Binary_search_tree:_traverse&amp;diff=3863</id>
		<title>Binary search tree: traverse</title>
		<link rel="alternate" type="text/html" href="https://wiki.algo.informatik.tu-darmstadt.de/index.php?title=Binary_search_tree:_traverse&amp;diff=3863"/>
		<updated>2017-03-03T13:40:55Z</updated>

		<summary type="html">&lt;p&gt;Weihe: /* Abstract View */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Videos]]&lt;br /&gt;
[[Category:Algorithms]]&lt;br /&gt;
[[Category:Search Algorithms]]&lt;br /&gt;
[[Category:Tree Algorithms]]&lt;br /&gt;
[[Category:Binary_Search_Tree]]&lt;br /&gt;
{{#ev:youtube|https://www.youtube.com/watch?v=PXqM9q57BMk|500|right|Binary search tree: traverse|frame}}&lt;br /&gt;
== General Information ==&lt;br /&gt;
'''Algorithmic problem:''' [[Sorted sequence#Traverse|Sorted sequence: traverse]]&lt;br /&gt;
&lt;br /&gt;
'''Type of algorithm:''' loop&lt;br /&gt;
&lt;br /&gt;
'''Auxiliary data:'''&lt;br /&gt;
# A [[Sets and sequences#Stacks and queues|stack]] '''''S''''' whose elements are pairs consisting of&lt;br /&gt;
## a natural number '''''seenChildren''''' in the range &amp;lt;math&amp;gt;\{0,1,2\}&amp;lt;/math&amp;gt; and&lt;br /&gt;
## a binary search tree node '''''node'''''.&lt;br /&gt;
# Pointers '''''elem''''' and '''''elem'''''' to stack elements.&lt;br /&gt;
&lt;br /&gt;
== Abstract View ==&lt;br /&gt;
'''Invariant:''' Before and after each iteration:&lt;br /&gt;
# &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; contains all nodes on the path (called the '''current path''') from the root to some binary search tree node (called the '''current node''') in the order from the root to the current node (in particular, the current node is the top element of &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;).&lt;br /&gt;
# Let '''''elem''''' be some element of the stack:&lt;br /&gt;
## If &amp;lt;math&amp;gt;elem.seenChildren = 0&amp;lt;/math&amp;gt;, neither &amp;lt;math&amp;gt;elem.node.key&amp;lt;/math&amp;gt; nor one of the keys in the [[Directed Tree|subtree]] rooted at &amp;lt;math&amp;gt;elem.node.right&amp;lt;/math&amp;gt; has been appended to &amp;lt;math&amp;gt;L&amp;lt;/math&amp;gt; so far; possibly, some or all keys in the subtree rooted at &amp;lt;math&amp;gt;elem.node.left&amp;lt;/math&amp;gt; have already been appended.&lt;br /&gt;
## If &amp;lt;math&amp;gt;elem.seenChildren = 1&amp;lt;/math&amp;gt;, all keys in the [[Directed Tree|subtree]] rooted at &amp;lt;math&amp;gt;elem.node.left&amp;lt;/math&amp;gt; and, afterwards, &amp;lt;math&amp;gt;elem.node.key&amp;lt;/math&amp;gt; have already been appended to &amp;lt;math&amp;gt;L&amp;lt;/math&amp;gt;; possibly, some or all keys in the subtree rooted at &amp;lt;math&amp;gt;elem.node.right&amp;lt;/math&amp;gt; have been appended as well.&lt;br /&gt;
## If &amp;lt;math&amp;gt;elem.seenChildren = 2&amp;lt;/math&amp;gt;, all keys in the [[Directed Tree|subtree]] rooted at &amp;lt;math&amp;gt;elem.node&amp;lt;/math&amp;gt; have been appended to &amp;lt;math&amp;gt;L&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Variant:''' Identify the current content of &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; with the string of length &amp;lt;math&amp;gt;|S|&amp;lt;/math&amp;gt; over the alphabet &amp;lt;math&amp;gt;\{0,1,2\}&amp;lt;/math&amp;gt; that is formed by the &amp;lt;math&amp;gt;seenChildren&amp;lt;/math&amp;gt; values of the stack elements (in the order from the root to the current node). Then the current string immediately '''after''' the iteration&lt;br /&gt;
# is either empty (which, clearly, is tantamount to &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; being empty),&lt;br /&gt;
# or it is lexicographically larger than the string immediately '''before''' the iteration.&lt;br /&gt;
&lt;br /&gt;
'''Break condition:''' &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; is empty.&lt;br /&gt;
&lt;br /&gt;
'''Remark:''' The number of iterations accomplished so far is the induction parameter. For conciseness, the induction parameter is omitted in the following.&lt;br /&gt;
&lt;br /&gt;
== Induction Basis ==&lt;br /&gt;
'''Abstract view:''' Initialize &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; with the root.&lt;br /&gt;
&lt;br /&gt;
'''Implementation:'''&lt;br /&gt;
# Create a new stack element &amp;lt;math&amp;gt;elem&amp;lt;/math&amp;gt;.&lt;br /&gt;
# Set &amp;lt;math&amp;gt; elem.node := root&amp;lt;/math&amp;gt;.&lt;br /&gt;
# Set &amp;lt;math&amp;gt; elem.seenChildren := 0&amp;lt;/math&amp;gt;.&lt;br /&gt;
# Ensure that &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; is empty.&lt;br /&gt;
# &amp;lt;math&amp;gt;S.push(elem)&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
'''Proof:''' All invariants are trivially fulfilled.&lt;br /&gt;
&lt;br /&gt;
== Induction Step ==&lt;br /&gt;
'''Abstract view:'''&lt;br /&gt;
# If the left child of the current node has not yet been examined:&lt;br /&gt;
## If the left child of the current node exists, proceed to the left child.&lt;br /&gt;
#Otherwise, if the right child of the current node has not yet been examined:&lt;br /&gt;
## Append the key of the current node to &amp;lt;math&amp;gt;L&amp;lt;/math&amp;gt;.&lt;br /&gt;
## If the right child of the current node exists, proceed to the right child.&lt;br /&gt;
#Otherwise (that is, left and right child examined), proceed to the parent of the current node.&lt;br /&gt;
&lt;br /&gt;
'''Implementation:'''&lt;br /&gt;
# Set &amp;lt;math&amp;gt;elem := S.top()&amp;lt;/math&amp;gt;.&lt;br /&gt;
# Set &amp;lt;math&amp;gt;node := elem.node&amp;lt;/math&amp;gt;&lt;br /&gt;
# If &amp;lt;math&amp;gt;elem.seenChildren = 0&amp;lt;/math&amp;gt;:&lt;br /&gt;
## If &amp;lt;math&amp;gt;node.left \neq void&amp;lt;/math&amp;gt;:&lt;br /&gt;
### Create a new element &amp;lt;math&amp;gt;elem'&amp;lt;/math&amp;gt;.&lt;br /&gt;
### Set &amp;lt;math&amp;gt;elem'.node := node.left&amp;lt;/math&amp;gt;.&lt;br /&gt;
### Set &amp;lt;math&amp;gt;elem'.seenChildren := 0&amp;lt;/math&amp;gt;.&lt;br /&gt;
### &amp;lt;math&amp;gt;S.push(elem')&amp;lt;/math&amp;gt;.&lt;br /&gt;
## Otherwise:&lt;br /&gt;
### Set &amp;lt;math&amp;gt;elem.seenChildren := 1&amp;lt;/math&amp;gt;.&lt;br /&gt;
### &amp;lt;math&amp;gt;L.append(elem.node.key)&amp;lt;/math&amp;gt;.&lt;br /&gt;
# Otherwise, if &amp;lt;math&amp;gt;elem.seenChildren = 1&amp;lt;/math&amp;gt;:&lt;br /&gt;
## If &amp;lt;math&amp;gt;node.right \neq void&amp;lt;/math&amp;gt;:&lt;br /&gt;
### Create a new element &amp;lt;math&amp;gt;elem'&amp;lt;/math&amp;gt;.&lt;br /&gt;
### Set &amp;lt;math&amp;gt;elem'.node := node.right&amp;lt;/math&amp;gt;.&lt;br /&gt;
### Set &amp;lt;math&amp;gt;elem'.seenChildren := 0&amp;lt;/math&amp;gt;.&lt;br /&gt;
### &amp;lt;math&amp;gt;S.push(elem')&amp;lt;/math&amp;gt;.&lt;br /&gt;
## Otherwise, set &amp;lt;math&amp;gt;elem.seenChildren := 2&amp;lt;/math&amp;gt;.&lt;br /&gt;
# Otherwise (that is, &amp;lt;math&amp;gt;elem.seenChildren = 2&amp;lt;/math&amp;gt;):&lt;br /&gt;
## &amp;lt;math&amp;gt;S.pop()&amp;lt;/math&amp;gt;.&lt;br /&gt;
## If &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; is empty, terminate the algorithm.&lt;br /&gt;
## Set &amp;lt;math&amp;gt;elem := S.top()&amp;lt;/math&amp;gt;.&lt;br /&gt;
## Set &amp;lt;math&amp;gt;elem.seenChildren := elem.seenChildren + 1&amp;lt;/math&amp;gt;.&lt;br /&gt;
## If &amp;lt;math&amp;gt;elem.seenChildren = 1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;elem.node.left \neq void&amp;lt;/math&amp;gt; then: &amp;lt;math&amp;gt;L.append(elem.node.key)&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
'''Correctness:''' Invariant #1 follows immediately from the fact that each push operation simply extends the current path and each pop operation cuts off the end node of the current path. Next consider Invariant #2.&lt;br /&gt;
&lt;br /&gt;
Invariant #2 is trivially maintained for &amp;lt;math&amp;gt;elem&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;elem'&amp;lt;/math&amp;gt; in Steps 3.1 and 4.1 and for &amp;lt;math&amp;gt;elem&amp;lt;/math&amp;gt; in Steps 3.2 and 4.2 (note the append operation in Step 3.2.2, which reflects the inclusion of the current node in Invariant #2). In Step 5, the algorithm returns from a non-empty subtree, so the increase of &amp;lt;math&amp;gt;seenChildren&amp;lt;/math&amp;gt; and the append in case &amp;lt;math&amp;gt;seenChildren = 1&amp;lt;/math&amp;gt; are correct. Note that each key is appended only once because Step 3.2.2 applies only if the left child is empty, and Step 5.4 applies only if the left child is '''not''' empty.&lt;br /&gt;
Finally, consider the variant. Whenever a new element is pushed, the new string is an extension of the old string, which is clearly lexicographically larger. On the other hand, whenever an element is removed from &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;, Step 5.4 ensures that the new string is not just a prefix of the old string but is larger at the last position of the new string (which is the second last position of the old string).&lt;br /&gt;
&lt;br /&gt;
== Complexity ==&lt;br /&gt;
'''Statement:''' Linear in the length of the sequence in the worst case.&lt;br /&gt;
&lt;br /&gt;
'''Proof:''' Each iteration of the loop takes constant time. Each node is visited in at most three iterations, viz. once with &amp;lt;math&amp;gt;seenChildren = 0,1,2&amp;lt;/math&amp;gt;, respectively. This observation proves the claim.&lt;br /&gt;
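For concreteness, the loop above can be sketched in Python. The Node class and the list-of-pairs stack are illustrative assumptions, not part of this wiki's data model; each pair plays the role of (elem.node, elem.seenChildren) from the induction step.

```python
class Node:
    def __init__(self, key, left=None, right=None):
        self.key = key
        self.left = left
        self.right = right

def inorder_keys(root):
    """Return the keys of the tree in in-order using an explicit stack."""
    L = []
    if root is None:
        return L
    S = [[root, 0]]  # stack of [node, seenChildren] pairs
    while S:
        elem = S[-1]
        node, seen = elem
        if seen == 0:
            if node.left is not None:
                S.append([node.left, 0])   # proceed to the left child
            else:
                elem[1] = 1                # left child is empty:
                L.append(node.key)         # append the key now (Step 3.2)
        elif seen == 1:
            if node.right is not None:
                S.append([node.right, 0])  # proceed to the right child
            else:
                elem[1] = 2
        else:                              # both children examined
            S.pop()
            if S:
                parent = S[-1]
                parent[1] += 1
                # returning from a non-empty left subtree: append now (Step 5.4)
                if parent[1] == 1 and parent[0].left is not None:
                    L.append(parent[0].key)
    return L
```

As in the correctness argument, each node's key is appended exactly once: either when its left child is found empty, or upon return from its non-empty left subtree.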
&lt;br /&gt;
== Pseudocode ==&lt;br /&gt;
:INORDER-TREE-WALK(x)&lt;br /&gt;
::if x ≠ NULL&lt;br /&gt;
:::INORDER-TREE-WALK(left[x])&lt;br /&gt;
:::print key[x]&lt;br /&gt;
:::INORDER-TREE-WALK(right[x])&lt;/div&gt;</summary>
		<author><name>Weihe</name></author>
	</entry>
	<entry>
		<id>https://wiki.algo.informatik.tu-darmstadt.de/index.php?title=Binary_search_tree:_remove_node&amp;diff=3862</id>
		<title>Binary search tree: remove node</title>
		<link rel="alternate" type="text/html" href="https://wiki.algo.informatik.tu-darmstadt.de/index.php?title=Binary_search_tree:_remove_node&amp;diff=3862"/>
		<updated>2017-03-03T13:39:53Z</updated>

		<summary type="html">&lt;p&gt;Weihe: /* Abstract View */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Algorithm]]&lt;br /&gt;
[[Category:Binary Search Tree]]&lt;br /&gt;
&amp;lt;div class=&amp;quot;plainlinks&amp;quot; style=&amp;quot;float:right;margin:0 0 5px 5px; border:1px solid #AAAAAA; width:auto; padding:1em; margin: 0px 0px 1em 1em;&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;font-size: 1.8em;font-weight:bold;text-align: center;margin:0.2em 0 1em 0&amp;quot;&amp;gt;Binary Search Tree&amp;lt;br&amp;gt;Remove node&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;font-size: 1.2em; margin:.5em 0 1em 0; text-align:center&amp;quot;&amp;gt;[[Sorted sequence]]&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;font-size: 1.2em; margin:.5em 0 .5em 0;text-align:center&amp;quot;&amp;gt;[[File:olw_logo1.png|20px]][https://openlearnware.tu-darmstadt.de/#!/resource/binary-search-tree-1938 Openlearnware]&amp;lt;br&amp;gt;See Chapter 5&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
== General Information ==&lt;br /&gt;
'''Algorithmic problem:''' See the [[Binary Search Tree#Remark|remark clause]] of [[Binary Search Tree]]; pointer &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt; as defined there is the input.&lt;br /&gt;
&lt;br /&gt;
'''Prerequisites:''' &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;.left &amp;lt;math&amp;gt;\neq&amp;lt;/math&amp;gt; void.&lt;br /&gt;
&lt;br /&gt;
'''Type of algorithm:''' loop&lt;br /&gt;
&lt;br /&gt;
'''Auxiliary data:''' A pointer &amp;lt;math&amp;gt;p'&amp;lt;/math&amp;gt; of type &amp;quot;pointer to a binary search tree node of type &amp;lt;math&amp;gt;\mathcal{K}&amp;lt;/math&amp;gt;.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== Abstract View ==&lt;br /&gt;
'''Invariant:'''&lt;br /&gt;
# The [[Directed Tree#Immediate Predecessor and Successor|immediate predecessor]] of '''''K''''' is in the [[Directed Tree#Ranges of Search Tree Nodes|range]] of &amp;lt;math&amp;gt;p'&amp;lt;/math&amp;gt;.&lt;br /&gt;
# It is &amp;lt;math&amp;gt;p'&amp;lt;/math&amp;gt;.right &amp;lt;math&amp;gt;\neq&amp;lt;/math&amp;gt; void.&lt;br /&gt;
&lt;br /&gt;
'''Variant:''' The pointer &amp;lt;math&amp;gt;p'&amp;lt;/math&amp;gt; descends one level deeper, namely to &amp;lt;math&amp;gt;p'&amp;lt;/math&amp;gt;.right.&lt;br /&gt;
&lt;br /&gt;
'''Break condition:''' It is &amp;lt;math&amp;gt;p'&amp;lt;/math&amp;gt;.right.right = void.&lt;br /&gt;
&lt;br /&gt;
'''Remark:''' For example, the height of the subtree rooted at the node to which &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt; points may be chosen as the induction parameter. For conciseness, the induction parameter is omitted in the following.&lt;br /&gt;
&lt;br /&gt;
== Induction Basis ==&lt;br /&gt;
&lt;br /&gt;
'''Abstract view:''' If &amp;lt;math&amp;gt;p.left&amp;lt;/math&amp;gt; is the immediate predecessor of '''''K''''', overwrite '''''K''''' by its immediate predecessor and terminate; otherwise, initialize &amp;lt;math&amp;gt;p'&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
'''Implementation:'''&lt;br /&gt;
# If &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;.left.right = void:&lt;br /&gt;
## Set &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;.key := p.left.key.&lt;br /&gt;
## Set &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;.left := p.left.left.&lt;br /&gt;
## Terminate the algorithm.&lt;br /&gt;
#Otherwise, set &amp;lt;math&amp;gt;p'&amp;lt;/math&amp;gt; := p.left.&lt;br /&gt;
&lt;br /&gt;
'''Proof:''' Obvious.&lt;br /&gt;
&lt;br /&gt;
== Induction Step ==&lt;br /&gt;
'''Abstract view:''' If &amp;lt;math&amp;gt;p'&amp;lt;/math&amp;gt;.right.key is the immediate predecessor of &amp;lt;math&amp;gt;K&amp;lt;/math&amp;gt;, overwrite &amp;lt;math&amp;gt;K&amp;lt;/math&amp;gt; by its immediate predecessor and terminate; otherwise, let &amp;lt;math&amp;gt;p'&amp;lt;/math&amp;gt; descend one level deeper.&lt;br /&gt;
&lt;br /&gt;
'''Implementation:'''&lt;br /&gt;
# If &amp;lt;math&amp;gt;p'&amp;lt;/math&amp;gt;.right.right = void:&lt;br /&gt;
## Set &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;.key := &amp;lt;math&amp;gt;p'&amp;lt;/math&amp;gt;.right.key.&lt;br /&gt;
## Set &amp;lt;math&amp;gt;p'&amp;lt;/math&amp;gt;.right := &amp;lt;math&amp;gt;p'&amp;lt;/math&amp;gt;.right.left.&lt;br /&gt;
## Terminate the algorithm.&lt;br /&gt;
# Set &amp;lt;math&amp;gt;p':=p'&amp;lt;/math&amp;gt;.right.&lt;br /&gt;
&lt;br /&gt;
'''Correctness:''' Obvious.&lt;br /&gt;
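The induction basis and step above can be sketched in Python as follows; the Node class and field names are illustrative assumptions. The variable q plays the role of the pointer p' from the loop.

```python
class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def remove_node(p):
    """Overwrite p.key by its immediate predecessor (the maximum of the
    left subtree) and splice the predecessor's node out of the tree.
    Precondition (as in the prerequisites): p.left is not None."""
    if p.left.right is None:           # induction basis: p.left holds the predecessor
        p.key = p.left.key
        p.left = p.left.left
        return
    q = p.left                         # q plays the role of p'
    while q.right.right is not None:   # break condition: q.right.right = void
        q = q.right                    # variant: descend one level deeper
    p.key = q.right.key                # induction step: overwrite and splice out
    q.right = q.right.left
```

The predecessor node has no right child by construction, so splicing it out replaces it by its (possibly empty) left subtree.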
&lt;br /&gt;
== Complexity ==&lt;br /&gt;
'''Statement:''' The complexity is in &amp;lt;math&amp;gt;\mathcal{O}(T\cdot h)\subseteq\mathcal{O}(T\cdot n)&amp;lt;/math&amp;gt; in the worst case, where &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; is the length of the sequence, &amp;lt;math&amp;gt;h&amp;lt;/math&amp;gt; the height of the tree, and &amp;lt;math&amp;gt;T&amp;lt;/math&amp;gt; the complexity of the comparison.&lt;br /&gt;
&lt;br /&gt;
'''Proof:''' Obvious.&lt;/div&gt;</summary>
		<author><name>Weihe</name></author>
	</entry>
	<entry>
		<id>https://wiki.algo.informatik.tu-darmstadt.de/index.php?title=Binary_search_tree:_remove&amp;diff=3861</id>
		<title>Binary search tree: remove</title>
		<link rel="alternate" type="text/html" href="https://wiki.algo.informatik.tu-darmstadt.de/index.php?title=Binary_search_tree:_remove&amp;diff=3861"/>
		<updated>2017-03-03T13:39:30Z</updated>

		<summary type="html">&lt;p&gt;Weihe: /* Abstract view */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Checkup]]&lt;br /&gt;
&lt;br /&gt;
[[Category:Binary Search Tree]]&lt;br /&gt;
[[Category:Algorithm]]&lt;br /&gt;
&amp;lt;div class=&amp;quot;plainlinks&amp;quot; style=&amp;quot;float:right;margin:0 0 5px 5px; border:1px solid #AAAAAA; width:auto; padding:1em; margin: 0px 0px 1em 1em;&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;font-size: 1.8em;font-weight:bold;text-align: center;margin:0.2em 0 1em 0&amp;quot;&amp;gt;Binary Search Tree&amp;lt;br&amp;gt;Remove&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;font-size: 1.2em; margin:.5em 0 1em 0; text-align:center&amp;quot;&amp;gt;[[Sorted sequence]]&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;font-size: 1.2em; margin:.5em 0 .5em 0;text-align:center&amp;quot;&amp;gt;[[File:olw_logo1.png|20px]][https://openlearnware.tu-darmstadt.de/#!/resource/binary-search-tree-1938 Openlearnware]&amp;lt;br&amp;gt;See Chapter 5&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
== General Information ==&lt;br /&gt;
&lt;br /&gt;
'''Algorithmic Problem:''' [[Sorted sequence#Remove|Sorted sequence:remove]]&lt;br /&gt;
&lt;br /&gt;
'''Type of algorithm:''' loop&lt;br /&gt;
&lt;br /&gt;
'''Auxiliary data:'''  A pointer &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt; of type &amp;quot;pointer to binary search tree node of type &amp;lt;math&amp;gt;\mathcal{K}&amp;lt;/math&amp;gt;.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== Abstract view ==&lt;br /&gt;
'''Invariant:''' After &amp;lt;math&amp;gt;i \geq 0&amp;lt;/math&amp;gt; iterations:&lt;br /&gt;
# The pointer &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt; points to a tree node &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; on height level &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt;.&lt;br /&gt;
# The key &amp;lt;math&amp;gt;K&amp;lt;/math&amp;gt; is in [[Directed Tree#Ranges of Search Tree Nodes|range]] of &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;, but &amp;lt;math&amp;gt;p.key \neq K&amp;lt;/math&amp;gt;.&lt;br /&gt;
'''Variant:''' &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt; increased by &amp;lt;math&amp;gt;1&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
'''Break Condition:''' One of the following two conditions is fulfilled:&lt;br /&gt;
# It is &amp;lt;math&amp;gt;K &amp;lt; p&amp;lt;/math&amp;gt;.key and either &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;.left = void or &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;.left.key &amp;lt;math&amp;gt;= K&amp;lt;/math&amp;gt;.&lt;br /&gt;
# It is &amp;lt;math&amp;gt;K &amp;gt; p&amp;lt;/math&amp;gt;.key and either &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;.right = void or &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;.right.key = &amp;lt;math&amp;gt;K&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
'''Remark:''' For example, the height of the subtree rooted at the node to which &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt; points may be chosen as the induction parameter. For conciseness, the induction parameter is omitted in the following.&lt;br /&gt;
&lt;br /&gt;
== Induction basis==&lt;br /&gt;
'''Abstract view:'''&lt;br /&gt;
# If the root contains &amp;lt;math&amp;gt;K&amp;lt;/math&amp;gt;, remove this occurrence of &amp;lt;math&amp;gt;K&amp;lt;/math&amp;gt;. &lt;br /&gt;
# Otherwise, initialize &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt; so as to point to the root.&lt;br /&gt;
&lt;br /&gt;
'''Implementation:'''&lt;br /&gt;
# If root.key &amp;lt;math&amp;gt;= K&amp;lt;/math&amp;gt;:&lt;br /&gt;
## If root.left = void, set root := root.right.&lt;br /&gt;
## Otherwise, if root.right = void, set root := root.left.&lt;br /&gt;
## Otherwise, call method [[Binary Search Tree:Remove node|remove node]] with pointer root. &lt;br /&gt;
## Terminate the algorithm and return '''true'''. &lt;br /&gt;
&lt;br /&gt;
# Otherwise, set &amp;lt;math&amp;gt;p :=&amp;lt;/math&amp;gt; root.&lt;br /&gt;
&lt;br /&gt;
'''Proof:''' Obvious.&lt;br /&gt;
&lt;br /&gt;
== Induction step==&lt;br /&gt;
'''Abstract View:'''&lt;br /&gt;
# If the next node where to go does not exist or contains &amp;lt;math&amp;gt;K&amp;lt;/math&amp;gt;, terminate the algorithm (and in the latter case, remove that node appropriately). &lt;br /&gt;
# Otherwise, descend to that node.&lt;br /&gt;
'''Implementation:'''&lt;br /&gt;
# If &amp;lt;math&amp;gt;K &amp;lt; p&amp;lt;/math&amp;gt;.key:&lt;br /&gt;
## If &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;.left = void, terminate the algorithm and return '''false'''. &lt;br /&gt;
## Otherwise if &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;.left.key &amp;lt;math&amp;gt;= K&amp;lt;/math&amp;gt;:&lt;br /&gt;
### If &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;.left.left = void, set &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;.left := p.left.right.&lt;br /&gt;
### Otherwise, if &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;.left.right = void, set &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;.left := p.left.left.&lt;br /&gt;
### Otherwise, call method [[Binary Search Tree:Remove node|remove node]] with pointer &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;.left. &lt;br /&gt;
### Terminate the algorithm and return '''true'''. &lt;br /&gt;
## Otherwise (that is, &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;.left &amp;lt;math&amp;gt;\neq&amp;lt;/math&amp;gt; void and &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;.left.key &amp;lt;math&amp;gt;\neq K&amp;lt;/math&amp;gt;), set &amp;lt;math&amp;gt;p := p&amp;lt;/math&amp;gt;.left. &lt;br /&gt;
# Otherwise (that is, &amp;lt;math&amp;gt;K &amp;gt; p&amp;lt;/math&amp;gt;.key): &lt;br /&gt;
## If &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;.right = void, terminate the algorithm and return '''false'''. &lt;br /&gt;
## Otherwise, if &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;.right.key &amp;lt;math&amp;gt;= K&amp;lt;/math&amp;gt;: &lt;br /&gt;
### If &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;.right.left = void, set &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;.right := &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;.right.right. &lt;br /&gt;
### Otherwise, if &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;.right.right = void, set &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;.right := &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;.right.left. &lt;br /&gt;
### Otherwise, call method [[Binary Search Tree:Remove node|remove node]] with pointer &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;.right. &lt;br /&gt;
### Terminate the algorithm and return '''true'''. &lt;br /&gt;
## Otherwise (that is, &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;.right &amp;lt;math&amp;gt;\neq&amp;lt;/math&amp;gt; void and &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;.right.key &amp;lt;math&amp;gt;\neq K&amp;lt;/math&amp;gt;), set &amp;lt;math&amp;gt;p:= p&amp;lt;/math&amp;gt;.right.&lt;br /&gt;
&lt;br /&gt;
'''Correctness:''' Nothing to show.&lt;br /&gt;
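The basis and step above can be sketched in Python; the BST/Node classes below are illustrative assumptions, and the internal _remove_node helper stands in for the linked [[Binary Search Tree:Remove node|remove node]] method (predecessor replacement), here inlined so the sketch is self-contained.

```python
class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

class BST:
    def __init__(self, root=None):
        self.root = root

    def _remove_node(self, p):
        # replace p.key by its immediate predecessor and splice that node out;
        # callers guarantee p.left and p.right are both non-void
        if p.left.right is None:
            p.key, p.left = p.left.key, p.left.left
            return
        q = p.left
        while q.right.right is not None:
            q = q.right
        p.key, q.right = q.right.key, q.right.left

    def remove(self, K):
        """Remove one occurrence of K; return True iff K was present."""
        if self.root is None:
            return False
        if self.root.key == K:                 # induction basis
            if self.root.left is None:
                self.root = self.root.right
            elif self.root.right is None:
                self.root = self.root.left
            else:
                self._remove_node(self.root)
            return True
        p = self.root
        while True:                            # induction step
            c = p.left if p.key > K else p.right
            if c is None:                      # next node does not exist
                return False
            if c.key == K:                     # next node contains K
                if c.left is None:
                    spliced = c.right
                elif c.right is None:
                    spliced = c.left
                else:
                    self._remove_node(c)
                    return True
                if p.key > K:
                    p.left = spliced
                else:
                    p.right = spliced
                return True
            p = c                              # otherwise, descend
```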
&lt;br /&gt;
== Pseudocode ==&lt;br /&gt;
TREE-DELETE(T,z)&lt;br /&gt;
:if left[z] = NULL or right[z] = NULL&lt;br /&gt;
::then y = z&lt;br /&gt;
::else y = TREE-SUCCESSOR(z)&lt;br /&gt;
:if left[y] ≠ NULL&lt;br /&gt;
::then x = left[y]&lt;br /&gt;
::else x = right[y]&lt;br /&gt;
:if x ≠ NULL&lt;br /&gt;
::then p[x] = p [y]&lt;br /&gt;
:if p[y] = NULL&lt;br /&gt;
::then root[T] = x&lt;br /&gt;
::else if y = left[p[y]]&lt;br /&gt;
:::then left[p[y]] = x&lt;br /&gt;
:::else right[p[y]] = x&lt;br /&gt;
:if y ≠ z&lt;br /&gt;
::then key[z] = key[y]&lt;br /&gt;
:::copy y's satellite data into z&lt;br /&gt;
:return y&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Complexity==&lt;br /&gt;
'''Statement:''' The complexity is in &amp;lt;math&amp;gt;\mathcal{O}(T\cdot h)\subseteq\mathcal{O}(T\cdot n)&amp;lt;/math&amp;gt; in the worst case, where &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; is the length of the sequence, &amp;lt;math&amp;gt;h&amp;lt;/math&amp;gt; the height of the tree, and &amp;lt;math&amp;gt;T&amp;lt;/math&amp;gt; the complexity of the comparison.&lt;br /&gt;
&lt;br /&gt;
'''Proof:''' Obvious.&lt;/div&gt;</summary>
		<author><name>Weihe</name></author>
	</entry>
	<entry>
		<id>https://wiki.algo.informatik.tu-darmstadt.de/index.php?title=Binary_search_tree:_insert&amp;diff=3860</id>
		<title>Binary search tree: insert</title>
		<link rel="alternate" type="text/html" href="https://wiki.algo.informatik.tu-darmstadt.de/index.php?title=Binary_search_tree:_insert&amp;diff=3860"/>
		<updated>2017-03-03T13:39:00Z</updated>

		<summary type="html">&lt;p&gt;Weihe: /* Abstract View */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Binary Search Tree]]&lt;br /&gt;
&amp;lt;div class=&amp;quot;plainlinks&amp;quot; style=&amp;quot;float:right;margin:0 0 5px 5px; border:1px solid #AAAAAA; width:auto; padding:1em; margin: 0px 0px 1em 1em;&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;font-size: 1.8em;font-weight:bold;text-align: center;margin:0.2em 0 1em 0&amp;quot;&amp;gt;Binary Search Tree&amp;lt;br&amp;gt;Insert&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;font-size: 1.2em; margin:.5em 0 1em 0; text-align:center&amp;quot;&amp;gt;[[Sorted sequence]]&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;font-size: 1.2em; margin:.5em 0 .5em 0;text-align:center&amp;quot;&amp;gt;[[File:olw_logo1.png|20px]][https://openlearnware.tu-darmstadt.de/#!/resource/binary-search-tree-1938 Openlearnware]&amp;lt;br&amp;gt;See Chapter 4&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
== General Information ==&lt;br /&gt;
'''Algorithmic problem:''' [[Sorted sequence#Insert|Sorted sequence: insert]]&lt;br /&gt;
&lt;br /&gt;
'''Type of algorithm:''' loop&lt;br /&gt;
&lt;br /&gt;
'''Auxiliary data:''' A pointer &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt; of type &amp;quot;pointer to binary search tree node of type &amp;lt;math&amp;gt;\mathcal{K}&amp;lt;/math&amp;gt;.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== Abstract View ==&lt;br /&gt;
'''Invariant:''' After &amp;lt;math&amp;gt;i \geq 0&amp;lt;/math&amp;gt; iterations:&lt;br /&gt;
# The pointer &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt; points to a tree node &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; on height level &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt;.&lt;br /&gt;
# The Key &amp;lt;math&amp;gt;K&amp;lt;/math&amp;gt; is in the [[Directed Tree#Ranges of Search Tree Nodes|range]] of '''''v'''''.&lt;br /&gt;
&lt;br /&gt;
'''Variant:''' &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt; is increased by 1.&lt;br /&gt;
&lt;br /&gt;
'''Break condition:''' One of the following two conditions is fulfilled:&lt;br /&gt;
# It is &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;.key &amp;lt;math&amp;gt;\geq K&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;.left &amp;lt;math&amp;gt;=&amp;lt;/math&amp;gt;void.&lt;br /&gt;
# It is &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;.key &amp;lt;math&amp;gt;\leq K&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;.right &amp;lt;math&amp;gt;=&amp;lt;/math&amp;gt;void.&lt;br /&gt;
&lt;br /&gt;
'''Remark:''' For example, the height of the subtree rooted at the node to which &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt; points may be chosen as the induction parameter. For conciseness, the induction parameter is omitted in the following.&lt;br /&gt;
&lt;br /&gt;
== Induction Basis ==&lt;br /&gt;
'''Abstract view:''' If the tree is empty, a new root with key &amp;lt;math&amp;gt;K&amp;lt;/math&amp;gt; is created; otherwise, &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt; is initialized so as to point to the root.&lt;br /&gt;
&lt;br /&gt;
'''Implementation:'''&lt;br /&gt;
# If root = void:&lt;br /&gt;
## Create a new binary search tree node and let root point to it.&lt;br /&gt;
## Set root.key&amp;lt;math&amp;gt;:= K&amp;lt;/math&amp;gt;, root.left &amp;lt;math&amp;gt;:=&amp;lt;/math&amp;gt; void, and root.right &amp;lt;math&amp;gt;:=&amp;lt;/math&amp;gt; void.&lt;br /&gt;
# Otherwise, set &amp;lt;math&amp;gt;p := root&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
'''Proof:''' Obvious.&lt;br /&gt;
&lt;br /&gt;
== Induction Step ==&lt;br /&gt;
'''Abstract view:''' If the direction where to go next is void, insert &amp;lt;math&amp;gt;K&amp;lt;/math&amp;gt; in that empty slot and terminate the algorithm. Otherwise, proceed in that direction.&lt;br /&gt;
&lt;br /&gt;
'''Implementation:'''&lt;br /&gt;
# If &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;.key &amp;lt;math&amp;gt;= K&amp;lt;/math&amp;gt;:&lt;br /&gt;
## If &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;.left = void:&lt;br /&gt;
### Create a new node and let &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;.left point to it.&lt;br /&gt;
### Set &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;.left.key &amp;lt;math&amp;gt;:= K&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;.left.left &amp;lt;math&amp;gt;:=&amp;lt;/math&amp;gt;void, and &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;.left.right &amp;lt;math&amp;gt;:=&amp;lt;/math&amp;gt; void.&lt;br /&gt;
### Terminate the algorithm.&lt;br /&gt;
## Otherwise, if &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;.right = void:&lt;br /&gt;
### Create a new node and let &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;.right point to it.&lt;br /&gt;
### Set &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;.right.key &amp;lt;math&amp;gt;:= K&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;.right.left := void, and &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;.right.right := void.&lt;br /&gt;
### Terminate the algorithm.&lt;br /&gt;
## Otherwise, set &amp;lt;math&amp;gt;p := p&amp;lt;/math&amp;gt;.left.&lt;br /&gt;
# Otherwise, if &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;.key &amp;lt;math&amp;gt;&amp;gt; K&amp;lt;/math&amp;gt;:&lt;br /&gt;
## If &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;.left = void:&lt;br /&gt;
### Create a new node and let &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;.left point to it.&lt;br /&gt;
### Set &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;.left.key &amp;lt;math&amp;gt;:= K&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;.left.left &amp;lt;math&amp;gt;:=&amp;lt;/math&amp;gt;void, and &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;.left.right &amp;lt;math&amp;gt;:=&amp;lt;/math&amp;gt;void.&lt;br /&gt;
### Terminate the algorithm.&lt;br /&gt;
## Otherwise, set &amp;lt;math&amp;gt;p := p&amp;lt;/math&amp;gt;.left.&lt;br /&gt;
# Otherwise (that is, &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;.key &amp;lt;math&amp;gt;&amp;lt; K&amp;lt;/math&amp;gt;):&lt;br /&gt;
## If &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;.right = void:&lt;br /&gt;
### Create a new node and let &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;.right point to it.&lt;br /&gt;
### Set &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;.right.key &amp;lt;math&amp;gt;:= K&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;.right.left &amp;lt;math&amp;gt;:=&amp;lt;/math&amp;gt;void, and &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;.right.right &amp;lt;math&amp;gt;:=&amp;lt;/math&amp;gt;void.&lt;br /&gt;
### Terminate the algorithm.&lt;br /&gt;
## Otherwise, set &amp;lt;math&amp;gt;p := p&amp;lt;/math&amp;gt;.right.&lt;br /&gt;
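The basis and step above can be sketched in Python; the BST/Node classes are illustrative assumptions. As in the implementation, a key equal to &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;.key first tries the left slot, then the right slot, and otherwise descends to the left.

```python
class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

class BST:
    def __init__(self):
        self.root = None

    def insert(self, K):
        if self.root is None:          # induction basis: empty tree
            self.root = Node(K)
            return
        p = self.root
        while True:                    # induction step
            if p.key == K:
                if p.left is None:
                    p.left = Node(K)
                    return
                if p.right is None:
                    p.right = Node(K)
                    return
                p = p.left
            elif p.key > K:
                if p.left is None:     # empty slot found: insert K here
                    p.left = Node(K)
                    return
                p = p.left
            else:
                if p.right is None:
                    p.right = Node(K)
                    return
                p = p.right
```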
&lt;br /&gt;
== Complexity ==&lt;br /&gt;
'''Statement:''' The complexity is in &amp;lt;math&amp;gt;\mathcal{O}(T\cdot h)\subseteq\mathcal{O}(T\cdot n)&amp;lt;/math&amp;gt; in the worst case, where &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; is the length of the sequence, &amp;lt;math&amp;gt;h&amp;lt;/math&amp;gt; the height of the tree, and &amp;lt;math&amp;gt;T&amp;lt;/math&amp;gt; the complexity of the comparison.&lt;br /&gt;
&lt;br /&gt;
'''Proof:''' Obvious.&lt;br /&gt;
&lt;br /&gt;
== Pseudocode ==&lt;br /&gt;
TREE-INSERT(T, z)&lt;br /&gt;
:y = NULL&lt;br /&gt;
:x = root(T)&lt;br /&gt;
:while x ≠ NULL&lt;br /&gt;
::y = x&lt;br /&gt;
::if key[z] &amp;lt; key[x]&lt;br /&gt;
:::then x = left[x]&lt;br /&gt;
:::else x = right[x]&lt;br /&gt;
:p[z] = y&lt;br /&gt;
:if y = NULL&lt;br /&gt;
::then root[T] = z //Tree was empty&lt;br /&gt;
::else if key[z] &amp;lt; key[y]&lt;br /&gt;
:::then left[y] = z&lt;br /&gt;
:::else right[y] = z&lt;br /&gt;
[[Category:Binary Search Tree]]&lt;/div&gt;</summary>
		<author><name>Weihe</name></author>
	</entry>
	<entry>
		<id>https://wiki.algo.informatik.tu-darmstadt.de/index.php?title=Binary_search_tree:_find&amp;diff=3859</id>
		<title>Binary search tree: find</title>
		<link rel="alternate" type="text/html" href="https://wiki.algo.informatik.tu-darmstadt.de/index.php?title=Binary_search_tree:_find&amp;diff=3859"/>
		<updated>2017-03-03T13:38:32Z</updated>

		<summary type="html">&lt;p&gt;Weihe: /* Abstract view */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Binary_Search_Tree]]&lt;br /&gt;
[[Category:Algorithm]]&lt;br /&gt;
&amp;lt;div class=&amp;quot;plainlinks&amp;quot; style=&amp;quot;float:right;margin:0 0 5px 5px; border:1px solid #AAAAAA; width:auto; padding:1em; margin: 0px 0px 1em 1em;&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;font-size: 1.8em;font-weight:bold;text-align: center;margin:0.2em 0 1em 0&amp;quot;&amp;gt;Binary Search Tree&amp;lt;br&amp;gt;Find&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;font-size: 1.2em; margin:.5em 0 1em 0; text-align:center&amp;quot;&amp;gt;[[Sorted sequence]]&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;font-size: 1.2em; margin:.5em 0 .5em 0;text-align:center&amp;quot;&amp;gt;[[File:olw_logo1.png|20px]][https://openlearnware.tu-darmstadt.de/#!/resource/binary-search-tree-1938 Openlearnware]&amp;lt;br&amp;gt;See Chapter 2,3&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
== General Information ==&lt;br /&gt;
&lt;br /&gt;
'''Algorithmic Problem:''' [[Sorted sequence#Find|Sorted Sequence:find]]&lt;br /&gt;
&lt;br /&gt;
'''Type of algorithm:''' loop&lt;br /&gt;
&lt;br /&gt;
'''Auxiliary data:'''  A pointer &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt; of type &amp;quot;pointer to binary search tree node of type &amp;lt;math&amp;gt;\mathcal{K}&amp;lt;/math&amp;gt;.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== Abstract view ==&lt;br /&gt;
'''Invariant:''' After &amp;lt;math&amp;gt;i\geq 0&amp;lt;/math&amp;gt; iterations:&lt;br /&gt;
# The pointer &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt; points to a tree node &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; on height level &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt; (or is void). &lt;br /&gt;
# The key &amp;lt;math&amp;gt;K&amp;lt;/math&amp;gt; is in the [[Directed Tree#Ranges of Search Tree Nodes|range]] of &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt;.&lt;br /&gt;
'''Variant:''' &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt; is increased by 1.&lt;br /&gt;
&lt;br /&gt;
'''Break condition:''' Either it is &amp;lt;math&amp;gt;p =&amp;lt;/math&amp;gt;void or, otherwise,  it is &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;.key &amp;lt;math&amp;gt;= K&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
'''Remark:''' For example, the height of the subtree rooted at the node to which &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt; points may be chosen as the induction parameter. For conciseness, the induction parameter is omitted in the following.&lt;br /&gt;
&lt;br /&gt;
== Induction basis ==&lt;br /&gt;
'''Abstract view:''' Set &amp;lt;math&amp;gt;p:=&amp;lt;/math&amp;gt; root.&lt;br /&gt;
&lt;br /&gt;
'''Implementation:''' Obvious.&lt;br /&gt;
&lt;br /&gt;
'''Proof:''' Nothing to show.&lt;br /&gt;
&lt;br /&gt;
== Induction step ==&lt;br /&gt;
'''Abstract view:''' If &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt; points to a node but not with key &amp;lt;math&amp;gt;K&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt; descends in the appropriate direction, left or right.&lt;br /&gt;
&lt;br /&gt;
'''Implementation:'''&lt;br /&gt;
# If &amp;lt;math&amp;gt;p =&amp;lt;/math&amp;gt; void, terminate the algorithm and return '''false'''.&lt;br /&gt;
# Otherwise, if &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;.key &amp;lt;math&amp;gt;= K&amp;lt;/math&amp;gt;, terminate the algorithm and return '''true'''.&lt;br /&gt;
# Otherwise:&lt;br /&gt;
## If &amp;lt;math&amp;gt;K &amp;lt; p&amp;lt;/math&amp;gt;.key, set &amp;lt;math&amp;gt;p := p&amp;lt;/math&amp;gt;.left.&lt;br /&gt;
## Otherwise (that is, &amp;lt;math&amp;gt;K &amp;gt; p&amp;lt;/math&amp;gt;.key), set &amp;lt;math&amp;gt;p := p&amp;lt;/math&amp;gt;.right.&lt;br /&gt;
&lt;br /&gt;
'''Correctness:''' Obvious.&lt;br /&gt;
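The loop above can be sketched in Python; the Node class is an illustrative assumption.

```python
class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def find(root, K):
    """Descend from the root; stop on a hit (True) or when p becomes void (False)."""
    p = root
    while p is not None:
        if p.key == K:
            return True
        # descend in the appropriate direction, left or right
        p = p.left if p.key > K else p.right
    return False
```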
&lt;br /&gt;
== Complexity ==&lt;br /&gt;
'''Statement:''' The complexity is in &amp;lt;math&amp;gt;\mathcal{O}(T\cdot h)\subseteq\mathcal{O}(T\cdot n)&amp;lt;/math&amp;gt; in the worst case, where &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; is the length of the sequence, &amp;lt;math&amp;gt;h&amp;lt;/math&amp;gt; the height of the tree, and &amp;lt;math&amp;gt;T&amp;lt;/math&amp;gt; the complexity of the comparison.&lt;br /&gt;
&lt;br /&gt;
'''Proof:''' Obvious.&lt;br /&gt;
&lt;br /&gt;
== Pseudocode ==&lt;br /&gt;
TREE-SEARCH (x, k)&lt;br /&gt;
:if x= NIL or k = key[x]&lt;br /&gt;
::then return x&lt;br /&gt;
:if k &amp;lt; key[x]&lt;br /&gt;
::then return TREE-SEARCH(left[x], k)&lt;br /&gt;
:else return TREE-SEARCH(right[x], k)&lt;/div&gt;</summary>
		<author><name>Weihe</name></author>
	</entry>
	<entry>
		<id>https://wiki.algo.informatik.tu-darmstadt.de/index.php?title=Mergesort&amp;diff=3858</id>
		<title>Mergesort</title>
		<link rel="alternate" type="text/html" href="https://wiki.algo.informatik.tu-darmstadt.de/index.php?title=Mergesort&amp;diff=3858"/>
		<updated>2017-03-03T13:36:20Z</updated>

		<summary type="html">&lt;p&gt;Weihe: /* Abstract View */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Videos]]&lt;br /&gt;
[[Category:Sorting Algorithms]]&lt;br /&gt;
[[Category:Divide and Conquer]]&lt;br /&gt;
{{#ev:youtube|https://www.youtube.com/watch?v=7kdQwh-WvhA|500|right|Chapters&lt;br /&gt;
#[00:00] Mergesort&lt;br /&gt;
#[02:36] Fragen&lt;br /&gt;
#[02:44] Wie funktioniert der Algorithmus?&lt;br /&gt;
#[03:04] Was ist die asymptotische Komplexität des Algorithmus?&lt;br /&gt;
#[03:23] Was macht Merge?&lt;br /&gt;
#[03:34] Wie lautet die Invariante?&lt;br /&gt;
#[03:58] Warum ist der Algorithmus korrekt?&lt;br /&gt;
#[04:10] Wie wird die Invariante sichergestellt?&lt;br /&gt;
#[04:31] Was ist die asymptotische Komplexität des Algorithmus?&lt;br /&gt;
|frame}}&lt;br /&gt;
== General Information ==&lt;br /&gt;
'''Algorithmic problem:''' [[Sorting based on pairwise comparison]]&lt;br /&gt;
&lt;br /&gt;
'''Type of algorithm:''' recursion&lt;br /&gt;
&lt;br /&gt;
== Abstract View ==&lt;br /&gt;
'''Invariant:''' After a recursive call, the input sequence of this recursive call is sorted.&lt;br /&gt;
&lt;br /&gt;
'''Variant:''' For a recursive call on a subsequence &amp;lt;math&amp;gt;S'&amp;lt;/math&amp;gt; of &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;, let &amp;lt;math&amp;gt;S_1'&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;S_2'&amp;lt;/math&amp;gt; denote the subsequences of &amp;lt;math&amp;gt;S'&amp;lt;/math&amp;gt; with which Mergesort is called recursively from that call. Then it is &amp;lt;math&amp;gt;|S_1'| \leq \lceil|S'| /2\rceil&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;|S_2'| \leq \lceil|S'| /2\rceil&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
'''Break condition:''' The current subsequence of the recursive call is a [[Sets and sequences#Singleton, pair, triple, quadruple|singleton]].&lt;br /&gt;
&lt;br /&gt;
'''Remark:''' For a particular recursive call &amp;lt;math&amp;gt;C&amp;lt;/math&amp;gt;, we may, for example, choose the height of the recursion subtree with root &amp;lt;math&amp;gt;C&amp;lt;/math&amp;gt; as the induction parameter. For conciseness, the induction parameter is omitted in the following.&lt;br /&gt;
&lt;br /&gt;
== Induction Basis ==&lt;br /&gt;
'''Abstract view:''' Nothing to do on a singleton.&lt;br /&gt;
&lt;br /&gt;
'''Implementation:''' Ditto.&lt;br /&gt;
&lt;br /&gt;
'''Proof:''' A singleton is trivially sorted.&lt;br /&gt;
&lt;br /&gt;
== Induction Step ==&lt;br /&gt;
'''Abstract view:''' The sequence is divided into two subsequences of approximately half size; it does not matter in which way this is done. Both subsequences are sorted recursively by Mergesort. The sorted subsequences are then &amp;quot;merged&amp;quot; into a single sorted sequence using the algorithm [[Merge]].&lt;br /&gt;
&lt;br /&gt;
'''Implementation:''' Obvious.&lt;br /&gt;
&lt;br /&gt;
'''Correctness:''' By induction hypothesis, the recursive calls sort correctly. So, correctness of [[Merge]] implies correctness of Mergesort.&lt;br /&gt;
&lt;br /&gt;
== Complexity ==&lt;br /&gt;
'''Statement:''' The complexity is in &amp;lt;math&amp;gt;O(T\cdot n \log n)&amp;lt;/math&amp;gt; in the best and worst case, where &amp;lt;math&amp;gt;T&amp;lt;/math&amp;gt; is the complexity of the comparison.&lt;br /&gt;
&lt;br /&gt;
'''Proof:''' Obviously, the variant is correct. So, the lengths of &amp;lt;math&amp;gt;S_1'&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;S_2'&amp;lt;/math&amp;gt; are at most &amp;lt;math&amp;gt;\lceil|S'| /2\rceil&amp;lt;/math&amp;gt;. Consequently, the lengths of &amp;lt;math&amp;gt;S_1'&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;S_2'&amp;lt;/math&amp;gt; are at least &amp;lt;math&amp;gt;\lfloor|S'| /2\rfloor&amp;lt;/math&amp;gt;. In summary, the overall recursion depth is in &amp;lt;math&amp;gt;\Theta(\log n)&amp;lt;/math&amp;gt; in the best and worst case. Next consider the run time of a single recursive call, which receives some &amp;lt;math&amp;gt;S'&amp;lt;/math&amp;gt; as input and calls Mergesort recursively with two subsequences &amp;lt;math&amp;gt;S_1'&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;S_2'&amp;lt;/math&amp;gt;. The run time of this recursive call (excluding the run times of the recursive calls with &amp;lt;math&amp;gt;S_1'&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;S_2'&amp;lt;/math&amp;gt;) is linear in the length of &amp;lt;math&amp;gt;S'&amp;lt;/math&amp;gt;. Since all recursive calls on the same recursion level operate on pairwise disjoint subsequences, the total run time of all calls on the same recursion level is linear in the length of the original sequence. Multiplying the &amp;lt;math&amp;gt;\Theta(\log n)&amp;lt;/math&amp;gt; recursion levels by the &amp;lt;math&amp;gt;O(T\cdot n)&amp;lt;/math&amp;gt; work per level gives the claimed bound.&lt;br /&gt;
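&lt;br /&gt;
Restated as a recurrence (with &amp;lt;math&amp;gt;R(n)&amp;lt;/math&amp;gt; denoting the run time on a sequence of length &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt;; this is only a compact summary of the argument above, not an additional assumption):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;R(n) = R(\lceil n/2\rceil) + R(\lfloor n/2\rfloor) + \Theta(T\cdot n)&amp;lt;/math&amp;gt;,&lt;br /&gt;
&lt;br /&gt;
which resolves to &amp;lt;math&amp;gt;R(n) \in \Theta(T\cdot n \log n)&amp;lt;/math&amp;gt;.&lt;br /&gt;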
&lt;br /&gt;
==Example implementations==&lt;br /&gt;
===Java===&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;java&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
	public static &amp;lt;T&amp;gt; void mergesort(List&amp;lt;T&amp;gt; liste, Comparator&amp;lt;T&amp;gt; cmp) {&lt;br /&gt;
		if (liste.size() &amp;lt;= 1)&lt;br /&gt;
			return;&lt;br /&gt;
		LinkedList&amp;lt;T&amp;gt; teilliste1 = new LinkedList&amp;lt;T&amp;gt;(); // empty&lt;br /&gt;
		LinkedList&amp;lt;T&amp;gt; teilliste2 = new LinkedList&amp;lt;T&amp;gt;();&lt;br /&gt;
		zerlegeInTeillisten(liste, teilliste1, teilliste2);&lt;br /&gt;
		mergesort(teilliste1, cmp);&lt;br /&gt;
		mergesort(teilliste2, cmp);&lt;br /&gt;
		liste.clear();&lt;br /&gt;
		merge(teilliste1, teilliste2, liste, cmp);&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// -----------------&lt;br /&gt;
	public static &amp;lt;T&amp;gt; void zerlegeInTeillisten(List&amp;lt;T&amp;gt; liste,&lt;br /&gt;
			List&amp;lt;T&amp;gt; teilliste1, List&amp;lt;T&amp;gt; teilliste2) {&lt;br /&gt;
		ListIterator&amp;lt;T&amp;gt; it = liste.listIterator();&lt;br /&gt;
		for (int i = 0; i &amp;lt; liste.size(); i++) {&lt;br /&gt;
			T elem = it.next();&lt;br /&gt;
			if (i &amp;lt; liste.size() / 2) // strict: both sublists stay non-empty&lt;br /&gt;
				teilliste1.add(elem); // appends elem at the end of teilliste1&lt;br /&gt;
			else&lt;br /&gt;
				teilliste2.add(elem); // ditto for teilliste2&lt;br /&gt;
		}&lt;br /&gt;
	}&lt;br /&gt;
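&lt;br /&gt;
	// -----------------&lt;br /&gt;
	// The merge step referenced above belongs to the page [[Merge]]; the&lt;br /&gt;
	// following is only a minimal sketch of it (an assumption, not the&lt;br /&gt;
	// wiki's reference implementation): repeatedly move the smaller head&lt;br /&gt;
	// element into the target list.&lt;br /&gt;
	public static &amp;lt;T&amp;gt; void merge(List&amp;lt;T&amp;gt; teilliste1, List&amp;lt;T&amp;gt; teilliste2,&lt;br /&gt;
			List&amp;lt;T&amp;gt; ziel, Comparator&amp;lt;T&amp;gt; cmp) {&lt;br /&gt;
		while (true) {&lt;br /&gt;
			if (teilliste1.isEmpty()) { ziel.addAll(teilliste2); return; }&lt;br /&gt;
			if (teilliste2.isEmpty()) { ziel.addAll(teilliste1); return; }&lt;br /&gt;
			if (cmp.compare(teilliste1.get(0), teilliste2.get(0)) &amp;lt;= 0)&lt;br /&gt;
				ziel.add(teilliste1.remove(0)); // smaller head first; keeps the sort stable&lt;br /&gt;
			else&lt;br /&gt;
				ziel.add(teilliste2.remove(0));&lt;br /&gt;
		}&lt;br /&gt;
	}&lt;br /&gt;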
&amp;lt;/syntaxhighlight&amp;gt;&lt;/div&gt;</summary>
		<author><name>Weihe</name></author>
	</entry>
	<entry>
		<id>https://wiki.algo.informatik.tu-darmstadt.de/index.php?title=Quicksort&amp;diff=3857</id>
		<title>Quicksort</title>
		<link rel="alternate" type="text/html" href="https://wiki.algo.informatik.tu-darmstadt.de/index.php?title=Quicksort&amp;diff=3857"/>
		<updated>2017-03-03T13:36:04Z</updated>

		<summary type="html">&lt;p&gt;Weihe: /* Abstract view */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Sorting Algorithms]]&lt;br /&gt;
[[Category:Divide and Conquer]]&lt;br /&gt;
&amp;lt;div class=&amp;quot;plainlinks&amp;quot; style=&amp;quot;float:right;margin:0 0 5px 5px; border:1px solid #AAAAAA; width:auto; padding:1em; margin: 0px 0px 1em 1em;&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;font-size: 1.8em;font-weight:bold;text-align: center;margin:0.2em 0 1em 0&amp;quot;&amp;gt;Quick Sort&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;font-size: 1.2em; margin:.5em 0 1em 0; text-align:center&amp;quot;&amp;gt;whatever&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;font-size: 1.2em; margin:.5em 0 .5em 0;text-align:center&amp;quot;&amp;gt;[[File:olw_logo1.png|20px]][https://openlearnware.tu-darmstadt.de/#!/resource/quick-sort-1945 Openlearnware]&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== General information ==&lt;br /&gt;
'''Algorithmic problem:''' [[Sorting based on pairwise comparison]]&lt;br /&gt;
&lt;br /&gt;
'''Type of algorithm:''' recursion&lt;br /&gt;
&lt;br /&gt;
== Abstract view ==&lt;br /&gt;
&lt;br /&gt;
'''Invariant:''' After a recursive call, the input sequence of this recursive call is sorted.&lt;br /&gt;
&lt;br /&gt;
'''Variant:''' In each recursive call, the sequence of the callee is strictly shorter than that of the caller.&lt;br /&gt;
&lt;br /&gt;
'''Break condition:''' The sequence is empty or a [[Sets and sequences#Singleton, pair, triple, quadruple|singleton]].&lt;br /&gt;
&lt;br /&gt;
'''Remark:''' For a particular recursive call &amp;lt;math&amp;gt;C&amp;lt;/math&amp;gt;, we may, for example, choose the height of the recursion subtree with root &amp;lt;math&amp;gt;C&amp;lt;/math&amp;gt; as the induction parameter. For conciseness, the induction parameter is omitted in the following.&lt;br /&gt;
&lt;br /&gt;
== Induction basis ==&lt;br /&gt;
&lt;br /&gt;
'''Abstract view:''' Nothing to do on an empty sequence or a singleton.&lt;br /&gt;
&lt;br /&gt;
'''Implementation:''' Ditto.&lt;br /&gt;
&lt;br /&gt;
'''Proof:''' Empty sequences and singletons are trivially sorted.&lt;br /&gt;
&lt;br /&gt;
== Induction step ==&lt;br /&gt;
&lt;br /&gt;
=== Abstract view: ===&lt;br /&gt;
# Choose a pivot value &amp;lt;math&amp;gt;p \in [min\{x|x \in S\},\dots,max\{x|x \in S\}]&amp;lt;/math&amp;gt; (note that &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt; is not required to be an element of &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;).&lt;br /&gt;
# Partition &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; into sequences, &amp;lt;math&amp;gt;S_1&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;S_2&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;S_3&amp;lt;/math&amp;gt;, such that &amp;lt;math&amp;gt;x &amp;lt; p&amp;lt;/math&amp;gt; for all &amp;lt;math&amp;gt;x \in S_1&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;x = p&amp;lt;/math&amp;gt; for all &amp;lt;math&amp;gt;x \in S_2&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;x &amp;gt; p&amp;lt;/math&amp;gt; for all &amp;lt;math&amp;gt;x \in S_3&amp;lt;/math&amp;gt;.&lt;br /&gt;
# Sort &amp;lt;math&amp;gt;S_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;S_3&amp;lt;/math&amp;gt; recursively.&lt;br /&gt;
# The concatenation of all three lists, &amp;lt;math&amp;gt;S_1 + S_2 + S_3&amp;lt;/math&amp;gt;, is the result of the algorithm.&lt;br /&gt;
&lt;br /&gt;
=== Implementation: ===&lt;br /&gt;
&lt;br /&gt;
# Choose &amp;lt;math&amp;gt;p \in [min\{x|x \in S\},\dots,max\{x|x \in S\}]&amp;lt;/math&amp;gt; according to some pivoting rule.&lt;br /&gt;
# &amp;lt;math&amp;gt;S_1 := S_2 := S_3 := \emptyset&amp;lt;/math&amp;gt;.&lt;br /&gt;
# For all &amp;lt;math&amp;gt;x \in S&amp;lt;/math&amp;gt;, append &amp;lt;math&amp;gt;x&amp;lt;/math&amp;gt; to&lt;br /&gt;
## &amp;lt;math&amp;gt;S_1&amp;lt;/math&amp;gt; if &amp;lt;math&amp;gt;x &amp;lt; p&amp;lt;/math&amp;gt;,&lt;br /&gt;
## &amp;lt;math&amp;gt;S_2&amp;lt;/math&amp;gt; if &amp;lt;math&amp;gt;x = p&amp;lt;/math&amp;gt;,&lt;br /&gt;
## &amp;lt;math&amp;gt;S_3&amp;lt;/math&amp;gt; if &amp;lt;math&amp;gt;x &amp;gt; p&amp;lt;/math&amp;gt;.&lt;br /&gt;
# Call Quicksort on &amp;lt;math&amp;gt;S_1&amp;lt;/math&amp;gt; giving &amp;lt;math&amp;gt;S_1'&amp;lt;/math&amp;gt;&lt;br /&gt;
# Call Quicksort on &amp;lt;math&amp;gt;S_3&amp;lt;/math&amp;gt; giving &amp;lt;math&amp;gt;S_3'&amp;lt;/math&amp;gt;&lt;br /&gt;
# Return &amp;lt;math&amp;gt;S_1' + S_2 + S_3'&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
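The abstract steps and the implementation above can be sketched in Java as follows (only an illustrative sketch with list copying and an arbitrary middle-element pivoting rule; the names are not taken from this wiki):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;java&amp;quot;&amp;gt;&lt;br /&gt;
	public static &amp;lt;T&amp;gt; List&amp;lt;T&amp;gt; quicksort(List&amp;lt;T&amp;gt; s, Comparator&amp;lt;T&amp;gt; cmp) {&lt;br /&gt;
		if (s.size() &amp;lt;= 1)&lt;br /&gt;
			return s; // break condition: empty sequence or singleton&lt;br /&gt;
		T p = s.get(s.size() / 2); // pivoting rule: middle element (arbitrary)&lt;br /&gt;
		List&amp;lt;T&amp;gt; s1 = new LinkedList&amp;lt;T&amp;gt;(); // x &amp;lt; p&lt;br /&gt;
		List&amp;lt;T&amp;gt; s2 = new LinkedList&amp;lt;T&amp;gt;(); // x = p&lt;br /&gt;
		List&amp;lt;T&amp;gt; s3 = new LinkedList&amp;lt;T&amp;gt;(); // x &amp;gt; p&lt;br /&gt;
		for (T x : s) {&lt;br /&gt;
			int c = cmp.compare(x, p);&lt;br /&gt;
			if (c &amp;lt; 0) s1.add(x);&lt;br /&gt;
			else if (c == 0) s2.add(x);&lt;br /&gt;
			else s3.add(x);&lt;br /&gt;
		}&lt;br /&gt;
		List&amp;lt;T&amp;gt; result = new LinkedList&amp;lt;T&amp;gt;(quicksort(s1, cmp)); // S_1'&lt;br /&gt;
		result.addAll(s2); // S_2 needs no recursive call&lt;br /&gt;
		result.addAll(quicksort(s3, cmp)); // S_3'&lt;br /&gt;
		return result; // S_1' + S_2 + S_3'&lt;br /&gt;
	}&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;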
=== Correctness: ===&lt;br /&gt;
&lt;br /&gt;
By induction hypothesis, &amp;lt;math&amp;gt;S_1'&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;S_3'&amp;lt;/math&amp;gt; are sorted permutations of &amp;lt;math&amp;gt;S_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;S_3&amp;lt;/math&amp;gt;, respectively. In particular &amp;lt;math&amp;gt;S_1' + S_2 + S_3'&amp;lt;/math&amp;gt; is a permutation of &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;. To see that this permutation is sorted, let &amp;lt;math&amp;gt;x&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;y&amp;lt;/math&amp;gt; be two members of &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; such that &amp;lt;math&amp;gt;y&amp;lt;/math&amp;gt; immediately succeeds &amp;lt;math&amp;gt;x&amp;lt;/math&amp;gt; in the resulting sequence &amp;lt;math&amp;gt;S_1' + S_2 + S_3'&amp;lt;/math&amp;gt;. We have to show &amp;lt;math&amp;gt;x \leq y&amp;lt;/math&amp;gt;.&lt;br /&gt;
# If &amp;lt;math&amp;gt;x,y \in S_1'&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;x,y \in S_3'&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;x \leq y&amp;lt;/math&amp;gt; follows from the induction hypothesis.&lt;br /&gt;
# If &amp;lt;math&amp;gt;x,y \in S_2&amp;lt;/math&amp;gt;, then &amp;lt;math&amp;gt;x = y = p&amp;lt;/math&amp;gt;, which trivially implies &amp;lt;math&amp;gt;x \leq y&amp;lt;/math&amp;gt;.&lt;br /&gt;
# Finally, for the following cases, &amp;lt;math&amp;gt;x \leq y&amp;lt;/math&amp;gt; is implied by the specific way of partitioning &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; into &amp;lt;math&amp;gt;S_1'&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;S_2&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;S_3'&amp;lt;/math&amp;gt;:&lt;br /&gt;
## &amp;lt;math&amp;gt;x \in S_1'&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;y \in S_2&amp;lt;/math&amp;gt;&lt;br /&gt;
## &amp;lt;math&amp;gt;x \in S_2&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;y \in S_3'&amp;lt;/math&amp;gt;&lt;br /&gt;
## &amp;lt;math&amp;gt;x \in S_1'&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;y \in S_3'&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Obviously, this case distinction covers all potential cases, so the claim is proved.&lt;br /&gt;
&lt;br /&gt;
== Complexity ==&lt;br /&gt;
&lt;br /&gt;
=== Statement: ===&lt;br /&gt;
[[File:Quicksortrecursion.png|350px|thumb|right|recursion depth of quick sort : [A] best case, [B] avg. case, [C] worst case]]&lt;br /&gt;
In the worst case, the complexity is &amp;lt;math&amp;gt;\Theta(T\cdot n^2)&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;T&amp;lt;/math&amp;gt; is the complexity of the comparison.&lt;br /&gt;
&lt;br /&gt;
If the pivoting rule ensures for some &amp;lt;math&amp;gt;\alpha &amp;lt; 1&amp;lt;/math&amp;gt; that the lengths of &amp;lt;math&amp;gt;S_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;S_3&amp;lt;/math&amp;gt; are at most &amp;lt;math&amp;gt;\alpha&amp;lt;/math&amp;gt; times the size of &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;, then it is even &amp;lt;math&amp;gt;\Theta(T\cdot n \log n)&amp;lt;/math&amp;gt; in the worst case.&lt;br /&gt;
&lt;br /&gt;
If each pivot value is chosen uniformly randomly from members of the respective sequence and if all selections of pivot values are stochastically independent, the average-case complexity is &amp;lt;math&amp;gt;\Theta(T\cdot n \log n)&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
=== Proof: ===&lt;br /&gt;
First note that the complexity for a single recursive call on &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; (excluding the complexity for the recursive descents on &amp;lt;math&amp;gt;S_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;S_3&amp;lt;/math&amp;gt;) is in &amp;lt;math&amp;gt;\Theta(T\cdot|S|)&amp;lt;/math&amp;gt;. On each recursion level, all calls operate on pairwise disjoint subsets of &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;. Therefore, the number of recursive calls with non-empty sequences on one recursion level is in &amp;lt;math&amp;gt;O(n)&amp;lt;/math&amp;gt;. The number of calls with empty sequences on one level is at most twice the total number of calls with non-empty sequences on the previous level. Hence, the number of calls with empty sequences on one recursion level is in &amp;lt;math&amp;gt;O(n)&amp;lt;/math&amp;gt; as well. In summary, the total complexity on a recursion level is &amp;lt;math&amp;gt;O(T\cdot n)&amp;lt;/math&amp;gt;. So, for the total complexity, it remains to estimate the number of recursion levels.&lt;br /&gt;
&lt;br /&gt;
Now consider the first statement. The recursion variant implies that the deepest recursive level is &amp;lt;math&amp;gt;O(n)&amp;lt;/math&amp;gt;. On the other hand, &amp;lt;math&amp;gt;\Omega(n)&amp;lt;/math&amp;gt; in the very worst case is obvious. This gives the claimed &amp;lt;math&amp;gt;\Theta(T\cdot n^2)&amp;lt;/math&amp;gt; in the worst case.&lt;br /&gt;
&lt;br /&gt;
Next assume there is a fixed &amp;lt;math&amp;gt;\alpha &amp;lt; 1&amp;lt;/math&amp;gt; such that &amp;lt;math&amp;gt;|S_1| \leq \alpha \cdot|S|&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;|S_3| \leq \alpha \cdot|S|&amp;lt;/math&amp;gt; is guaranteed in each recursive call. Then the length of any sequence on recursion level &amp;lt;math&amp;gt;\#i&amp;lt;/math&amp;gt; is at most &amp;lt;math&amp;gt;\alpha ^ i \cdot |S|&amp;lt;/math&amp;gt;. Therefore, the maximal recursion depth is &amp;lt;math&amp;gt;\lceil \log_{\alpha^{-1}}(n)\rceil&amp;lt;/math&amp;gt;. Since &amp;lt;math&amp;gt;\alpha^{-1} &amp;gt; 1&amp;lt;/math&amp;gt;, the total complexity is in &amp;lt;math&amp;gt;O(T\cdot n \log n)&amp;lt;/math&amp;gt; in the worst case.&lt;br /&gt;
&lt;br /&gt;
For the last statement, the average-case analysis, first note that the number of comparisons alone has the same asymptotic complexity as the algorithm as a whole. Next note that two elements &amp;lt;math&amp;gt;x,y \in S&amp;lt;/math&amp;gt; are compared at most once throughout the entire algorithm, namely if, and only if, &amp;lt;math&amp;gt;x&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;y&amp;lt;/math&amp;gt; is chosen as the pivot value for a subsequence to which both elements belong. For &amp;lt;math&amp;gt;x,y \in S&amp;lt;/math&amp;gt;, let &amp;lt;math&amp;gt;Pr(x,y)&amp;lt;/math&amp;gt; denote the probability that &amp;lt;math&amp;gt;x&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;y&amp;lt;/math&amp;gt; are indeed compared. Since each pair is compared at most once, the total number of comparisons is a sum of 0/1-valued indicator variables, one for each pair, so by linearity of expectation the expected number of comparisons is&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\sum_{x,y \in S, x \neq y} Pr(x,y)&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Let &amp;lt;math&amp;gt;n := |S|&amp;lt;/math&amp;gt;, and for &amp;lt;math&amp;gt;i,j \in \{1,\dots,n\}&amp;lt;/math&amp;gt;, let &amp;lt;math&amp;gt;Pr(i,j)&amp;lt;/math&amp;gt; denote the probability that the &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt;-th and the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;-th element of the eventual sorted sequence are compared throughout the algorithm. Using this notation, we may rewrite the above summation as follows:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\sum_{x,y \in S, x \neq y} Pr(x,y) = \sum_{i=1}^{n-1} \sum_{j = i+1}^{n} Pr(i,j)&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For &amp;lt;math&amp;gt;i,j \in \{1,\dots,n\}&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;i &amp;lt; j&amp;lt;/math&amp;gt;, let &amp;lt;math&amp;gt; S_{ij}&amp;lt;/math&amp;gt; denote the subsequence of the eventual sorted sequence that starts with element &amp;lt;math&amp;gt;\#i&amp;lt;/math&amp;gt; and ends with element &amp;lt;math&amp;gt;\#j&amp;lt;/math&amp;gt;. The elements &amp;lt;math&amp;gt;\#i&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\#j&amp;lt;/math&amp;gt; are compared if, and only if, &amp;lt;math&amp;gt;\#i&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;\#j&amp;lt;/math&amp;gt; is the very first element of &amp;lt;math&amp;gt;S_{ij}&amp;lt;/math&amp;gt; to be chosen as a pivot. The probability of this event is &amp;lt;math&amp;gt;\frac{2}{|S_{ij}|} = \frac{2}{j - i + 1}&amp;lt;/math&amp;gt;, so we obtain&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\sum_{i=1}^{n-1} \sum_{j = i+1}^{n} \frac{2}{j-i+1}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Substituting &amp;lt;math&amp;gt;k := j-i&amp;lt;/math&amp;gt;, this gives&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\sum_{i=1}^{n-1} \sum_{j = i+1}^{n} \frac{2}{j-i+1} = \sum_{i=1}^{n-1} \sum_{k=1}^{n-i} \frac{2}{k+1} \leq 2(n-1) \cdot \sum_{k=1}^{n-1} \frac{1}{k+1} \leq 2(n-1) \cdot \sum_{k=1}^{n} \frac{1}{k}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The [http://en.wikipedia.org/wiki/Harmonic_series_(mathematics)#Rate_of_divergence asymptotic behavior] of the [http://en.wikipedia.org/wiki/Harmonic_series_(mathematics) harmonic series] is &amp;lt;math&amp;gt;\Theta(\log n)&amp;lt;/math&amp;gt;, so the expected number of comparisons is in &amp;lt;math&amp;gt;O(n \log n)&amp;lt;/math&amp;gt;, and the average-case complexity is in &amp;lt;math&amp;gt;O(T\cdot n \log n)&amp;lt;/math&amp;gt;. The matching lower bound follows from the general &amp;lt;math&amp;gt;\Omega(n \log n)&amp;lt;/math&amp;gt; bound for [[Sorting based on pairwise comparison]].&lt;br /&gt;
&lt;br /&gt;
== Further information ==&lt;br /&gt;
&lt;br /&gt;
If &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; is an array, &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; cannot be decomposed into subsequences. We would like to avoid the need for additional arrays and copy operations. Instead, the array should be sorted in-place, that is, by swap operations on pairs of elements. The auxiliary procedure [[Pivot partitioning by scanning]] is designed exactly for that: it permutes the array such that each of &amp;lt;math&amp;gt;S_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;S_2&amp;lt;/math&amp;gt; is a contiguous subarray. Then each recursive call of Quicksort operates on a subarray of the input array, which is specified by two index pointers.&lt;/div&gt;</summary>
		<author><name>Weihe</name></author>
	</entry>
	<entry>
		<id>https://wiki.algo.informatik.tu-darmstadt.de/index.php?title=Quicksort&amp;diff=3856</id>
		<title>Quicksort</title>
		<link rel="alternate" type="text/html" href="https://wiki.algo.informatik.tu-darmstadt.de/index.php?title=Quicksort&amp;diff=3856"/>
		<updated>2017-03-03T13:35:41Z</updated>

		<summary type="html">&lt;p&gt;Weihe: /* Abstract view */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Sorting Algorithms]]&lt;br /&gt;
[[Category:Divide and Conquer]]&lt;br /&gt;
&amp;lt;div class=&amp;quot;plainlinks&amp;quot; style=&amp;quot;float:right;margin:0 0 5px 5px; border:1px solid #AAAAAA; width:auto; padding:1em; margin: 0px 0px 1em 1em;&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;font-size: 1.8em;font-weight:bold;text-align: center;margin:0.2em 0 1em 0&amp;quot;&amp;gt;Quick Sort&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;font-size: 1.2em; margin:.5em 0 1em 0; text-align:center&amp;quot;&amp;gt;whatever&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;font-size: 1.2em; margin:.5em 0 .5em 0;text-align:center&amp;quot;&amp;gt;[[File:olw_logo1.png|20px]][https://openlearnware.tu-darmstadt.de/#!/resource/quick-sort-1945 Openlearnware]&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== General information ==&lt;br /&gt;
'''Algorithmic problem:''' [[Sorting based on pairwise comparison]]&lt;br /&gt;
&lt;br /&gt;
'''Type of algorithm:''' recursion&lt;br /&gt;
&lt;br /&gt;
== Abstract view ==&lt;br /&gt;
&lt;br /&gt;
'''Invariant:''' After a recursive call, the input sequence of this recursive call is sorted.&lt;br /&gt;
&lt;br /&gt;
'''Variant:''' In each recursive call, the sequence of the callee is strictly shorter than that of the caller.&lt;br /&gt;
&lt;br /&gt;
'''Break condition:''' The sequence is empty or a [[Sets and sequences#Singleton, pair, triple, quadruple|singleton]].&lt;br /&gt;
&lt;br /&gt;
'''Remark:''' For a particular recursive call &amp;lt;math&amp;gt;C&amp;lt;/math&amp;gt;, we may, for example, choose the height of the recursion subtree with root &amp;lt;math&amp;gt;C&amp;lt;/math&amp;gt; as the induction parameter. For conciseness, the induction parameter is omitted in the following.&lt;br /&gt;
&lt;br /&gt;
== Induction basis ==&lt;br /&gt;
&lt;br /&gt;
'''Abstract view:''' Nothing to do on an empty sequence or a singleton.&lt;br /&gt;
&lt;br /&gt;
'''Implementation:''' Ditto.&lt;br /&gt;
&lt;br /&gt;
'''Proof:''' Empty sequences and singletons are trivially sorted.&lt;br /&gt;
&lt;br /&gt;
== Induction step ==&lt;br /&gt;
&lt;br /&gt;
=== Abstract view: ===&lt;br /&gt;
# Choose a pivot value &amp;lt;math&amp;gt;p \in [min\{x|x \in S\},\dots,max\{x|x \in S\}]&amp;lt;/math&amp;gt; (note that &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt; is not required to be an element of &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;).&lt;br /&gt;
# Partition &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; into sequences, &amp;lt;math&amp;gt;S_1&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;S_2&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;S_3&amp;lt;/math&amp;gt;, such that &amp;lt;math&amp;gt;x &amp;lt; p&amp;lt;/math&amp;gt; for all &amp;lt;math&amp;gt;x \in S_1&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;x = p&amp;lt;/math&amp;gt; for all &amp;lt;math&amp;gt;x \in S_2&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;x &amp;gt; p&amp;lt;/math&amp;gt; for all &amp;lt;math&amp;gt;x \in S_3&amp;lt;/math&amp;gt;.&lt;br /&gt;
# Sort &amp;lt;math&amp;gt;S_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;S_3&amp;lt;/math&amp;gt; recursively.&lt;br /&gt;
# The concatenation of all three lists, &amp;lt;math&amp;gt;S_1 + S_2 + S_3&amp;lt;/math&amp;gt;, is the result of the algorithm.&lt;br /&gt;
&lt;br /&gt;
=== Implementation: ===&lt;br /&gt;
&lt;br /&gt;
# Choose &amp;lt;math&amp;gt;p \in [min\{x|x \in S\},\dots,max\{x|x \in S\}]&amp;lt;/math&amp;gt; according to some pivoting rule.&lt;br /&gt;
# &amp;lt;math&amp;gt;S_1 := S_2 := S_3 := \emptyset&amp;lt;/math&amp;gt;.&lt;br /&gt;
# For all &amp;lt;math&amp;gt;x \in S&amp;lt;/math&amp;gt;, append &amp;lt;math&amp;gt;x&amp;lt;/math&amp;gt; to&lt;br /&gt;
## &amp;lt;math&amp;gt;S_1&amp;lt;/math&amp;gt; if &amp;lt;math&amp;gt;x &amp;lt; p&amp;lt;/math&amp;gt;,&lt;br /&gt;
## &amp;lt;math&amp;gt;S_2&amp;lt;/math&amp;gt; if &amp;lt;math&amp;gt;x = p&amp;lt;/math&amp;gt;,&lt;br /&gt;
## &amp;lt;math&amp;gt;S_3&amp;lt;/math&amp;gt; if &amp;lt;math&amp;gt;x &amp;gt; p&amp;lt;/math&amp;gt;.&lt;br /&gt;
# Call Quicksort on &amp;lt;math&amp;gt;S_1&amp;lt;/math&amp;gt; giving &amp;lt;math&amp;gt;S_1'&amp;lt;/math&amp;gt;&lt;br /&gt;
# Call Quicksort on &amp;lt;math&amp;gt;S_3&amp;lt;/math&amp;gt; giving &amp;lt;math&amp;gt;S_3'&amp;lt;/math&amp;gt;&lt;br /&gt;
# Return &amp;lt;math&amp;gt;S_1' + S_2 + S_3'&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
=== Correctness: ===&lt;br /&gt;
&lt;br /&gt;
By induction hypothesis, &amp;lt;math&amp;gt;S_1'&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;S_3'&amp;lt;/math&amp;gt; are sorted permutations of &amp;lt;math&amp;gt;S_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;S_3&amp;lt;/math&amp;gt;, respectively. In particular &amp;lt;math&amp;gt;S_1' + S_2 + S_3'&amp;lt;/math&amp;gt; is a permutation of &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;. To see that this permutation is sorted, let &amp;lt;math&amp;gt;x&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;y&amp;lt;/math&amp;gt; be two members of &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; such that &amp;lt;math&amp;gt;y&amp;lt;/math&amp;gt; immediately succeeds &amp;lt;math&amp;gt;x&amp;lt;/math&amp;gt; in the resulting sequence &amp;lt;math&amp;gt;S_1' + S_2 + S_3'&amp;lt;/math&amp;gt;. We have to show &amp;lt;math&amp;gt;x \leq y&amp;lt;/math&amp;gt;.&lt;br /&gt;
# If &amp;lt;math&amp;gt;x,y \in S_1'&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;x,y \in S_3'&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;x \leq y&amp;lt;/math&amp;gt; follows from the induction hypothesis.&lt;br /&gt;
# If &amp;lt;math&amp;gt;x,y \in S_2&amp;lt;/math&amp;gt;, then &amp;lt;math&amp;gt;x = y = p&amp;lt;/math&amp;gt;, which trivially implies &amp;lt;math&amp;gt;x \leq y&amp;lt;/math&amp;gt;.&lt;br /&gt;
# Finally, for the following cases, &amp;lt;math&amp;gt;x \leq y&amp;lt;/math&amp;gt; is implied by the specific way of partitioning &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; into &amp;lt;math&amp;gt;S_1'&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;S_2&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;S_3'&amp;lt;/math&amp;gt;:&lt;br /&gt;
## &amp;lt;math&amp;gt;x \in S_1'&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;y \in S_2&amp;lt;/math&amp;gt;&lt;br /&gt;
## &amp;lt;math&amp;gt;x \in S_2&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;y \in S_3'&amp;lt;/math&amp;gt;&lt;br /&gt;
## &amp;lt;math&amp;gt;x \in S_1'&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;y \in S_3'&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Obviously, this case distinction covers all potential cases, so the claim is proved.&lt;br /&gt;
&lt;br /&gt;
== Complexity ==&lt;br /&gt;
&lt;br /&gt;
=== Statement: ===&lt;br /&gt;
[[File:Quicksortrecursion.png|350px|thumb|right|recursion depth of quick sort : [A] best case, [B] avg. case, [C] worst case]]&lt;br /&gt;
In the worst case, the complexity is &amp;lt;math&amp;gt;\Theta(T\cdot n^2)&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;T&amp;lt;/math&amp;gt; is the complexity of the comparison.&lt;br /&gt;
&lt;br /&gt;
If the pivoting rule ensures for some &amp;lt;math&amp;gt;\alpha &amp;lt; 1&amp;lt;/math&amp;gt; that the lengths of &amp;lt;math&amp;gt;S_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;S_3&amp;lt;/math&amp;gt; are at most &amp;lt;math&amp;gt;\alpha&amp;lt;/math&amp;gt; times the size of &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;, then it is even &amp;lt;math&amp;gt;\Theta(T\cdot n \log n)&amp;lt;/math&amp;gt; in the worst case.&lt;br /&gt;
&lt;br /&gt;
If each pivot value is chosen uniformly randomly from members of the respective sequence and if all selections of pivot values are stochastically independent, the average-case complexity is &amp;lt;math&amp;gt;\Theta(T\cdot n \log n)&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
=== Proof: ===&lt;br /&gt;
First note that the complexity for a single recursive call on &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; (excluding the complexity for the recursive descents on &amp;lt;math&amp;gt;S_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;S_3&amp;lt;/math&amp;gt;) is in &amp;lt;math&amp;gt;\Theta(T\cdot|S|)&amp;lt;/math&amp;gt;. On each recursion level, all calls operate on pairwise disjoint subsets of &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;. Therefore, the number of recursive calls with non-empty sequences on one recursion level is in &amp;lt;math&amp;gt;O(n)&amp;lt;/math&amp;gt;. The number of calls with empty sequences on one level is at most twice the total number of calls with non-empty sequences on the previous level. Hence, the number of calls with empty sequences on one recursion level is in &amp;lt;math&amp;gt;O(n)&amp;lt;/math&amp;gt; as well. In summary, the total complexity on a recursion level is &amp;lt;math&amp;gt;O(T\cdot n)&amp;lt;/math&amp;gt;. So, for the total complexity, it remains to estimate the number of recursion levels.&lt;br /&gt;
&lt;br /&gt;
Now consider the first statement. The recursion variant implies that the deepest recursive level is &amp;lt;math&amp;gt;O(n)&amp;lt;/math&amp;gt;. On the other hand, &amp;lt;math&amp;gt;\Omega(n)&amp;lt;/math&amp;gt; in the very worst case is obvious. This gives the claimed &amp;lt;math&amp;gt;\Theta(T\cdot n^2)&amp;lt;/math&amp;gt; in the worst case.&lt;br /&gt;
&lt;br /&gt;
Next assume there is a fixed &amp;lt;math&amp;gt;\alpha &amp;lt; 1&amp;lt;/math&amp;gt; such that &amp;lt;math&amp;gt;|S_1| \leq \alpha \cdot|S|&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;|S_3| \leq \alpha \cdot|S|&amp;lt;/math&amp;gt; is guaranteed in each recursive call. Then the length of any sequence on recursion level &amp;lt;math&amp;gt;\#i&amp;lt;/math&amp;gt; is at most &amp;lt;math&amp;gt;\alpha ^ i \cdot |S|&amp;lt;/math&amp;gt;. Therefore, the maximal recursion depth is &amp;lt;math&amp;gt;\lceil \log_{\alpha^{-1}}(n)\rceil&amp;lt;/math&amp;gt;. Since &amp;lt;math&amp;gt;\alpha^{-1} &amp;gt; 1&amp;lt;/math&amp;gt;, the total complexity is in &amp;lt;math&amp;gt;O(T\cdot n \log n)&amp;lt;/math&amp;gt; in the worst case.&lt;br /&gt;
&lt;br /&gt;
For the last statement, the average-case analysis, first note that the number of comparisons alone has the same asymptotic complexity as the algorithm as a whole. Next note that two elements &amp;lt;math&amp;gt;x,y \in S&amp;lt;/math&amp;gt; are compared at most once throughout the entire algorithm, namely if, and only if, &amp;lt;math&amp;gt;x&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;y&amp;lt;/math&amp;gt; is chosen as the pivot value for a subsequence to which both elements belong. For &amp;lt;math&amp;gt;x,y \in S&amp;lt;/math&amp;gt;, let &amp;lt;math&amp;gt;Pr(x,y)&amp;lt;/math&amp;gt; denote the probability that &amp;lt;math&amp;gt;x&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;y&amp;lt;/math&amp;gt; are indeed compared. Since each pair is compared at most once, the total number of comparisons is a sum of 0/1-valued indicator variables, one for each pair, so by linearity of expectation the expected number of comparisons is&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\sum_{x,y \in S, x \neq y} Pr(x,y)&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Let &amp;lt;math&amp;gt;n := |S|&amp;lt;/math&amp;gt;, and for &amp;lt;math&amp;gt;i,j \in \{1,\dots,n\}&amp;lt;/math&amp;gt;, let &amp;lt;math&amp;gt;Pr(i,j)&amp;lt;/math&amp;gt; denote the probability that the &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt;-th and the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;-th element of the eventual sorted sequence are compared throughout the algorithm. Using this notation, we may rewrite the above summation as follows:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\sum_{x,y \in S, x \neq y} Pr(x,y) = \sum_{i=1}^{n-1} \sum_{j = i+1}^{n} Pr(i,j)&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For &amp;lt;math&amp;gt;i,j \in \{1,\dots,n\}&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;i &amp;lt; j&amp;lt;/math&amp;gt;, let &amp;lt;math&amp;gt; S_{ij}&amp;lt;/math&amp;gt; denote the subsequence of the eventual sorted sequence that starts with element &amp;lt;math&amp;gt;\#i&amp;lt;/math&amp;gt; and ends with element &amp;lt;math&amp;gt;\#j&amp;lt;/math&amp;gt;. The elements &amp;lt;math&amp;gt;\#i&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\#j&amp;lt;/math&amp;gt; are compared if, and only if, &amp;lt;math&amp;gt;\#i&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;\#j&amp;lt;/math&amp;gt; is the very first element of &amp;lt;math&amp;gt;S_{ij}&amp;lt;/math&amp;gt; to be chosen as a pivot. The probability of this event is &amp;lt;math&amp;gt;\frac{2}{|S_{ij}|} = \frac{2}{j - i + 1}&amp;lt;/math&amp;gt;, so we obtain&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\sum_{i=1}^{n-1} \sum_{j = i+1}^{n} \frac{2}{j-i+1}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Substituting &amp;lt;math&amp;gt;k := j-i&amp;lt;/math&amp;gt;, this gives&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\sum_{i=1}^{n-1} \sum_{j = i+1}^{n} \frac{2}{j-i+1} = \sum_{i=1}^{n-1} \sum_{k=1}^{n-i} \frac{2}{k+1} \leq \sum_{i=1}^{n-1} \sum_{k=1}^{n} \frac{2}{k} = 2(n-1) \sum_{k=1}^{n} \frac{1}{k}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The [http://en.wikipedia.org/wiki/Harmonic_series_(mathematics)#Rate_of_divergence asymptotic behavior] of the [http://en.wikipedia.org/wiki/Harmonic_series_(mathematics) harmonic series] &amp;lt;math&amp;gt;\sum_{k=1}^{n} \frac{1}{k}&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;\Theta(\log n)&amp;lt;/math&amp;gt;, so the expected number of comparisons is in &amp;lt;math&amp;gt;O(n \log n)&amp;lt;/math&amp;gt;, and the average-case complexity is in &amp;lt;math&amp;gt;O(T\cdot n \log n)&amp;lt;/math&amp;gt;. Together with the general &amp;lt;math&amp;gt;\Omega(n \log n)&amp;lt;/math&amp;gt; average-case lower bound for comparison-based sorting, this yields &amp;lt;math&amp;gt;\Theta(T\cdot n \log n)&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
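The analysis above can be illustrated empirically. The following Java sketch (not from the original article; all names are made up here) implements randomized in-place Quicksort with a Lomuto-style pivot partitioning by scanning and counts the comparisons:&lt;br /&gt;

```java
import java.util.Arrays;
import java.util.Random;

// Hypothetical illustration: randomized in-place Quicksort counting comparisons.
public class QuicksortCount {
    static long comparisons = 0;
    static final Random rng = new Random(42);

    public static void quicksort(int[] a) {
        comparisons = 0;
        sort(a, 0, a.length - 1);
    }

    private static void sort(int[] a, int lo, int hi) {
        if (lo >= hi) return;            // break condition: singleton or empty subarray
        int p = partition(a, lo, hi);    // pivot ends up at index p
        sort(a, lo, p - 1);              // recurse on the elements smaller than the pivot
        sort(a, p + 1, hi);              // recurse on the elements larger than the pivot
    }

    // Pivot partitioning by scanning (Lomuto style) with a uniformly random pivot.
    private static int partition(int[] a, int lo, int hi) {
        swap(a, lo + rng.nextInt(hi - lo + 1), hi); // move random pivot to the end
        int pivot = a[hi], i = lo;
        for (int j = lo; j < hi; j++) {
            comparisons++;                           // one comparison per scanned element
            if (a[j] < pivot) swap(a, i++, j);
        }
        swap(a, i, hi);                              // place pivot between the two parts
        return i;
    }

    private static void swap(int[] a, int i, int j) {
        int t = a[i]; a[i] = a[j]; a[j] = t;
    }

    public static void main(String[] args) {
        int n = 1000;
        int[] a = new int[n];
        for (int i = 0; i < n; i++) a[i] = rng.nextInt();
        quicksort(a);
        int[] b = a.clone();
        Arrays.sort(b);
        System.out.println(Arrays.equals(a, b) + " comparisons=" + comparisons);
    }
}
```

On random inputs the observed count should stay close to the expected &amp;lt;math&amp;gt;2(n-1)\sum_{k=1}^{n}1/k \approx 2n\ln n&amp;lt;/math&amp;gt; comparisons, while it can never exceed the worst-case bound &amp;lt;math&amp;gt;n(n-1)/2&amp;lt;/math&amp;gt;.&lt;br /&gt;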
== Further information ==&lt;br /&gt;
&lt;br /&gt;
If &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; is an array, &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; cannot be decomposed into subsequences. We would like to avoid the need for additional arrays and copy operations. Instead, the array should be sorted in place, that is, by swap operations on pairs of elements. The auxiliary procedure [[Pivot partitioning by scanning]] is designed exactly for that: it permutes the array such that each of &amp;lt;math&amp;gt;S_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;S_2&amp;lt;/math&amp;gt; is a subarray. Then each recursive call of Quicksort operates on a subarray of the input array, which is specified by two index pointers.&lt;/div&gt;</summary>
		<author><name>Weihe</name></author>
	</entry>
	<entry>
		<id>https://wiki.algo.informatik.tu-darmstadt.de/index.php?title=Mergesort&amp;diff=3855</id>
		<title>Mergesort</title>
		<link rel="alternate" type="text/html" href="https://wiki.algo.informatik.tu-darmstadt.de/index.php?title=Mergesort&amp;diff=3855"/>
		<updated>2017-03-03T13:35:13Z</updated>

		<summary type="html">&lt;p&gt;Weihe: /* Abstract View */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Videos]]&lt;br /&gt;
[[Category:Sorting Algorithms]]&lt;br /&gt;
[[Category:Divide and Conquer]]&lt;br /&gt;
{{#ev:youtube|https://www.youtube.com/watch?v=7kdQwh-WvhA|500|right|Chapters&lt;br /&gt;
#[00:00] Mergesort&lt;br /&gt;
#[02:36] Questions&lt;br /&gt;
#[02:44] How does the algorithm work?&lt;br /&gt;
#[03:04] What is the asymptotic complexity of the algorithm?&lt;br /&gt;
#[03:23] What does Merge do?&lt;br /&gt;
#[03:34] What is the invariant?&lt;br /&gt;
#[03:58] Why is the algorithm correct?&lt;br /&gt;
#[04:10] How is the invariant ensured?&lt;br /&gt;
#[04:31] What is the asymptotic complexity of the algorithm?&lt;br /&gt;
|frame}}&lt;br /&gt;
== General Information ==&lt;br /&gt;
'''Algorithmic problem:''' [[Sorting based on pairwise comparison]]&lt;br /&gt;
&lt;br /&gt;
'''Type of algorithm:''' recursion&lt;br /&gt;
&lt;br /&gt;
== Abstract View ==&lt;br /&gt;
'''Invariant:''' After a recursive call, the input sequence of this recursive call is sorted.&lt;br /&gt;
&lt;br /&gt;
'''Variant:''' For a recursive call on a subsequence &amp;lt;math&amp;gt;S'&amp;lt;/math&amp;gt; of &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;, let &amp;lt;math&amp;gt;S_1'&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;S_2'&amp;lt;/math&amp;gt; denote the subsequences of &amp;lt;math&amp;gt;S'&amp;lt;/math&amp;gt; with which Mergesort is called recursively from that call. Then it is &amp;lt;math&amp;gt;|S_1'| \leq \lceil|S'| /2\rceil&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;|S_2'| \leq \lceil|S'| /2\rceil&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
'''Break condition:''' The current subsequence of the recursive call is a [[Sets and sequences#Singleton, pair, triple, quadruple|singleton]].&lt;br /&gt;
&lt;br /&gt;
'''Remark:''' For a particular recursive call &amp;lt;math&amp;gt;C&amp;lt;/math&amp;gt;, we may, for example, choose the height of the recursion subtree with root &amp;lt;math&amp;gt;C&amp;lt;/math&amp;gt; as the induction parameter. For conciseness, the induction parameter is omitted in the following.&lt;br /&gt;
&lt;br /&gt;
== Induction Basis ==&lt;br /&gt;
'''Abstract view:''' Nothing to do on a singleton.&lt;br /&gt;
&lt;br /&gt;
'''Implementation:''' Ditto.&lt;br /&gt;
&lt;br /&gt;
'''Proof:''' A singleton is trivially sorted.&lt;br /&gt;
&lt;br /&gt;
== Induction Step ==&lt;br /&gt;
'''Abstract view:''' The sequence is divided into two subsequences of approximately half size; it does not matter at all in which way this is done. Both subsequences are sorted recursively using Mergesort. The sorted subsequences are &amp;quot;merged&amp;quot; into one using algorithm [[Merge]].&lt;br /&gt;
&lt;br /&gt;
'''Implementation:''' Obvious.&lt;br /&gt;
&lt;br /&gt;
'''Correctness:''' By induction hypothesis, the recursive calls sort correctly. So, correctness of [[Merge]] implies correctness of Mergesort.&lt;br /&gt;
&lt;br /&gt;
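The merging step can be sketched in Java as follows; this is a hypothetical helper (the actual routine is specified on the [[Merge]] page), shown only to make the induction step concrete:&lt;br /&gt;

```java
import java.util.Comparator;
import java.util.LinkedList;
import java.util.List;

// Hypothetical sketch of the merge step: repeatedly move the smaller head element.
public class MergeSketch {
    public static <T> void merge(List<T> in1, List<T> in2, List<T> out, Comparator<T> cmp) {
        LinkedList<T> a = new LinkedList<>(in1);   // working copies of the sorted inputs
        LinkedList<T> b = new LinkedList<>(in2);
        while (!a.isEmpty() && !b.isEmpty()) {
            // "<= 0" keeps elements from the first list first, so the merge is stable.
            if (cmp.compare(a.peekFirst(), b.peekFirst()) <= 0)
                out.add(a.removeFirst());
            else
                out.add(b.removeFirst());
        }
        out.addAll(a);                             // at most one of the two is nonempty
        out.addAll(b);
    }
}
```

Each element is moved exactly once, so the merge runs in time linear in the total input length, as the complexity proof below requires.&lt;br /&gt;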
== Complexity ==&lt;br /&gt;
'''Statement:''' The complexity is in &amp;lt;math&amp;gt;O(T\cdot n \log n)&amp;lt;/math&amp;gt; in the best and worst case, where &amp;lt;math&amp;gt;T&amp;lt;/math&amp;gt; is the complexity of the comparison.&lt;br /&gt;
&lt;br /&gt;
'''Proof:''' Obviously, the variant is correct. So, the lengths of &amp;lt;math&amp;gt;S_1'&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;S_2'&amp;lt;/math&amp;gt; are at most &amp;lt;math&amp;gt;\lceil|S'|/2\rceil&amp;lt;/math&amp;gt;. Consequently, the lengths of &amp;lt;math&amp;gt;S_1'&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;S_2'&amp;lt;/math&amp;gt; are at least &amp;lt;math&amp;gt;\lfloor|S'|/2\rfloor&amp;lt;/math&amp;gt;. In summary, the overall recursion depth is in &amp;lt;math&amp;gt;\Theta(\log n)&amp;lt;/math&amp;gt; in the best and worst case. Next consider the run time of a single recursive call, which receives some &amp;lt;math&amp;gt;S'&amp;lt;/math&amp;gt; as input and calls Mergesort recursively with two subsequences &amp;lt;math&amp;gt;S_1'&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;S_2'&amp;lt;/math&amp;gt;. The run time of this recursive call (excluding the run times of the recursive calls with &amp;lt;math&amp;gt;S_1'&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;S_2'&amp;lt;/math&amp;gt;) is linear in the length of &amp;lt;math&amp;gt;S'&amp;lt;/math&amp;gt;. Since all recursive calls on the same recursion level operate on pairwise disjoint subsequences, the total run time of all calls on the same recursion level is linear in the length of the original sequence, that is, in &amp;lt;math&amp;gt;O(T\cdot n)&amp;lt;/math&amp;gt;. Multiplying by the &amp;lt;math&amp;gt;\Theta(\log n)&amp;lt;/math&amp;gt; recursion levels yields &amp;lt;math&amp;gt;O(T\cdot n \log n)&amp;lt;/math&amp;gt; in total.&lt;br /&gt;
&lt;br /&gt;
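The argument above can be condensed into the usual divide-and-conquer recurrence (a sketch, assuming for simplicity that both subsequences have exactly half the size; &amp;lt;math&amp;gt;R(n)&amp;lt;/math&amp;gt; denotes the run time and &amp;lt;math&amp;gt;T&amp;lt;/math&amp;gt; the comparison cost as above):&lt;br /&gt;

```latex
R(n) = 2\,R\!\left(\frac{n}{2}\right) + \Theta(T \cdot n), \qquad R(1) = \Theta(1),
\;\Longrightarrow\;
R(n) = \sum_{\ell=0}^{\log_2 n - 1} 2^{\ell}\,\Theta\!\left(T \cdot \frac{n}{2^{\ell}}\right) + n \cdot \Theta(1)
     = \Theta(T \cdot n \log n).
```

Each of the &amp;lt;math&amp;gt;\Theta(\log n)&amp;lt;/math&amp;gt; recursion levels contributes &amp;lt;math&amp;gt;\Theta(T\cdot n)&amp;lt;/math&amp;gt;, which matches the level-by-level argument in the proof.&lt;br /&gt;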
==Example implementations==&lt;br /&gt;
===Java===&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;java&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
	public static &amp;lt;T&amp;gt; void mergesort(List&amp;lt;T&amp;gt; liste, Comparator&amp;lt;T&amp;gt; cmp) {&lt;br /&gt;
		if (liste.size() &amp;lt;= 1)&lt;br /&gt;
			return;&lt;br /&gt;
		LinkedList&amp;lt;T&amp;gt; teilliste1 = new LinkedList&amp;lt;T&amp;gt;(); // empty&lt;br /&gt;
		LinkedList&amp;lt;T&amp;gt; teilliste2 = new LinkedList&amp;lt;T&amp;gt;();&lt;br /&gt;
		zerlegeInTeillisten(liste, teilliste1, teilliste2);&lt;br /&gt;
		mergesort(teilliste1, cmp);&lt;br /&gt;
		mergesort(teilliste2, cmp);&lt;br /&gt;
		liste.clear();&lt;br /&gt;
		merge(teilliste1, teilliste2, liste, cmp);&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// -----------------&lt;br /&gt;
	public static &amp;lt;T&amp;gt; void zerlegeInTeillisten(List&amp;lt;T&amp;gt; liste,&lt;br /&gt;
			List&amp;lt;T&amp;gt; teilliste1, List&amp;lt;T&amp;gt; teilliste2) {&lt;br /&gt;
		ListIterator&amp;lt;T&amp;gt; it = liste.listIterator();&lt;br /&gt;
		for (int i = 0; i &amp;lt; liste.size(); i++) {&lt;br /&gt;
			T elem = it.next();&lt;br /&gt;
			if (i &amp;lt; liste.size() / 2)&lt;br /&gt;
				teilliste1.add(elem); // appends elem at the end of teilliste1&lt;br /&gt;
			else&lt;br /&gt;
				teilliste2.add(elem); // ditto for teilliste2&lt;br /&gt;
		}&lt;br /&gt;
	}&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;/div&gt;</summary>
		<author><name>Weihe</name></author>
	</entry>
	<entry>
		<id>https://wiki.algo.informatik.tu-darmstadt.de/index.php?title=Asymptotic_comparison_of_functions&amp;diff=3854</id>
		<title>Asymptotic comparison of functions</title>
		<link rel="alternate" type="text/html" href="https://wiki.algo.informatik.tu-darmstadt.de/index.php?title=Asymptotic_comparison_of_functions&amp;diff=3854"/>
		<updated>2016-05-10T15:38:26Z</updated>

		<summary type="html">&lt;p&gt;Weihe: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== One-dimensional case ==&lt;br /&gt;
&lt;br /&gt;
Let &amp;lt;math&amp;gt;f:\mathbb{R}\rightarrow\mathbb{R}^+_0&amp;lt;/math&amp;gt; be a function. The following sets (a.k.a. '''classes''') of functions are defined for &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;:&lt;br /&gt;
# &amp;lt;math&amp;gt;\mathcal{O}(f)&amp;lt;/math&amp;gt; consists of all functions &amp;lt;math&amp;gt;g:\mathbb{R}\rightarrow\mathbb{R}^+_0&amp;lt;/math&amp;gt; such that there are &amp;lt;math&amp;gt;N_g,\,c_g\in\mathbb{R}&amp;lt;/math&amp;gt; that fulfill &amp;lt;math&amp;gt;g(n)\leq c_g\cdot f(n)&amp;lt;/math&amp;gt; for all &amp;lt;math&amp;gt;n\geq N_g&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;n\in\mathbb{R}&amp;lt;/math&amp;gt;;&lt;br /&gt;
# &amp;lt;math&amp;gt;\Omega(f)&amp;lt;/math&amp;gt; consists of all functions &amp;lt;math&amp;gt;g:\mathbb{R}\rightarrow\mathbb{R}^+_0&amp;lt;/math&amp;gt; such that there are &amp;lt;math&amp;gt;N_g,\,c_g\in\mathbb{R}&amp;lt;/math&amp;gt; that fulfill &amp;lt;math&amp;gt;g(n)\geq\frac{1}{c_g}\cdot f(n)&amp;lt;/math&amp;gt; for all &amp;lt;math&amp;gt;n\geq N_g&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;n\in\mathbb{R}&amp;lt;/math&amp;gt;;&lt;br /&gt;
# &amp;lt;math&amp;gt;\Theta(f):=\mathcal{O}(f)\cap\Omega(f)&amp;lt;/math&amp;gt;;&lt;br /&gt;
# &amp;lt;math&amp;gt;o(f):=\mathcal{O}(f)\setminus\Theta(f)&amp;lt;/math&amp;gt;.&lt;br /&gt;
# &amp;lt;math&amp;gt;\omega(f):=\Omega(f)\setminus\Theta(f)&amp;lt;/math&amp;gt;.&lt;br /&gt;
'''Remark:''' This notation is usually called the '''[https://en.wikipedia.org/wiki/Big_O_notation big O notation]''' or '''asymptotic notation''' and is also known as the '''Landau symbols''' or '''Landau-Bachmann symbols'''.&lt;br /&gt;
&lt;br /&gt;
== Mathematical rules for asymptotic comparison ==&lt;br /&gt;
&lt;br /&gt;
Let &amp;lt;math&amp;gt;f,g,h:\mathbb{R}\rightarrow\mathbb{R}^+_0&amp;lt;/math&amp;gt; be three functions.&lt;br /&gt;
# Transpose symmetry: If &amp;lt;math&amp;gt;f\in\mathcal{O}(g)&amp;lt;/math&amp;gt;, then &amp;lt;math&amp;gt;g\in\Omega(f)&amp;lt;/math&amp;gt;, and vice versa.&lt;br /&gt;
# Transitivity: If &amp;lt;math&amp;gt;f\in\oplus(g)&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;g\in\oplus(h)&amp;lt;/math&amp;gt; then &amp;lt;math&amp;gt;f\in\oplus(h)&amp;lt;/math&amp;gt;, where &amp;quot;&amp;lt;math&amp;gt;\oplus&amp;lt;/math&amp;gt;&amp;quot; is any one of &amp;quot;&amp;lt;math&amp;gt;\mathcal{O}&amp;lt;/math&amp;gt;&amp;quot;, &amp;quot;&amp;lt;math&amp;gt;\Omega&amp;lt;/math&amp;gt;&amp;quot;, &amp;quot;&amp;lt;math&amp;gt;\Theta&amp;lt;/math&amp;gt;&amp;quot;, &amp;quot;&amp;lt;math&amp;gt;o&amp;lt;/math&amp;gt;&amp;quot;, and &amp;quot;&amp;lt;math&amp;gt;\omega&amp;lt;/math&amp;gt;&amp;quot;.&lt;br /&gt;
# It holds that &amp;lt;math&amp;gt;\mathcal{O}(f)\cup\mathcal{O}(g)\subseteq\mathcal{O}(f+g)&amp;lt;/math&amp;gt;.&lt;br /&gt;
# If &amp;lt;math&amp;gt;f\in\mathcal{O}(g)&amp;lt;/math&amp;gt;, then &amp;lt;math&amp;gt;\oplus(f+g)=\oplus(g)&amp;lt;/math&amp;gt;, where &amp;quot;&amp;lt;math&amp;gt;\oplus&amp;lt;/math&amp;gt;&amp;quot; is any one of &amp;quot;&amp;lt;math&amp;gt;\mathcal{O}&amp;lt;/math&amp;gt;&amp;quot;, &amp;quot;&amp;lt;math&amp;gt;\Omega&amp;lt;/math&amp;gt;&amp;quot;, and &amp;quot;&amp;lt;math&amp;gt;\Theta&amp;lt;/math&amp;gt;&amp;quot;.&lt;br /&gt;
# It holds that &amp;lt;math&amp;gt;f\in\mathcal{O}(g)&amp;lt;/math&amp;gt; if, and only if, the [http://en.wikipedia.org/wiki/Limit_superior_and_limit_inferior limit superior] of &amp;lt;math&amp;gt;f(n)/g(n)&amp;lt;/math&amp;gt; for &amp;lt;math&amp;gt;n\rightarrow+\infty&amp;lt;/math&amp;gt; is finite.&lt;br /&gt;
# It holds that &amp;lt;math&amp;gt;f\in o(g)&amp;lt;/math&amp;gt; if, and only if, this limit superior is zero. Note that, due to nonnegativity, this is equivalent to the statement that &amp;lt;math&amp;gt;\lim_{n\rightarrow+\infty}f(n)/g(n)&amp;lt;/math&amp;gt; exists and equals zero.&lt;br /&gt;
# For &amp;lt;math&amp;gt;a,b\in\mathbb{R}&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;a,b&amp;gt;1&amp;lt;/math&amp;gt;, it holds that &amp;lt;math&amp;gt;\oplus(\log_a(n))=\oplus(\log_b(n))&amp;lt;/math&amp;gt;, where &amp;quot;&amp;lt;math&amp;gt;\oplus&amp;lt;/math&amp;gt;&amp;quot; is any one of &amp;quot;&amp;lt;math&amp;gt;\mathcal{O}&amp;lt;/math&amp;gt;&amp;quot;, &amp;quot;&amp;lt;math&amp;gt;\Omega&amp;lt;/math&amp;gt;&amp;quot;, &amp;quot;&amp;lt;math&amp;gt;\Theta&amp;lt;/math&amp;gt;&amp;quot;, &amp;quot;&amp;lt;math&amp;gt;o&amp;lt;/math&amp;gt;&amp;quot;, and &amp;quot;&amp;lt;math&amp;gt;\omega&amp;lt;/math&amp;gt;&amp;quot; (follows immediately from the basic rule &amp;lt;math&amp;gt;\log_a(n)/\log_b(n)=\log_a(b)=&amp;lt;/math&amp;gt; const). In particular, the base of a logarithm function may be omitted: &amp;lt;math&amp;gt;\oplus(\log(n))=\oplus(\log_a(n))&amp;lt;/math&amp;gt;.&lt;br /&gt;
# For all &amp;lt;math&amp;gt;k,\ell\in\mathbb{R}&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;k&amp;lt;\ell&amp;lt;/math&amp;gt;, it holds that &amp;lt;math&amp;gt;n^k\in o(n^\ell)&amp;lt;/math&amp;gt;.&lt;br /&gt;
# For all &amp;lt;math&amp;gt;k\in\mathbb{R}^+&amp;lt;/math&amp;gt;, it holds that &amp;lt;math&amp;gt;\log^k(n)\in o(n)&amp;lt;/math&amp;gt;.&lt;br /&gt;
# For all &amp;lt;math&amp;gt;k,a\in\mathbb{R}&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;a&amp;gt;1&amp;lt;/math&amp;gt;, it holds that &amp;lt;math&amp;gt;n^k\in o(a^n)&amp;lt;/math&amp;gt;.&lt;br /&gt;
# For all &amp;lt;math&amp;gt;a,b\in\mathbb{R}&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;1&amp;lt;a&amp;lt;b&amp;lt;/math&amp;gt;, it holds that &amp;lt;math&amp;gt;a^n\in o(b^n)&amp;lt;/math&amp;gt;.&lt;br /&gt;
# For all &amp;lt;math&amp;gt;a\in\mathbb{R}&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;a&amp;gt;1&amp;lt;/math&amp;gt;, it holds that &amp;lt;math&amp;gt;a^n\in o(n!)&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
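As a worked example (an illustration, not part of the original list), rules 4 and 9 together show that a function such as &amp;lt;math&amp;gt;f(n)=3n^2+5n\log(n)&amp;lt;/math&amp;gt; is dominated by its quadratic term:&lt;br /&gt;

```latex
\log(n) \in o(n)
\;\Rightarrow\; 5n\log(n) \in o(3n^2) \subseteq \mathcal{O}(3n^2)
\;\Rightarrow\; \Theta\bigl(3n^2 + 5n\log(n)\bigr) = \Theta(3n^2) = \Theta(n^2).
```

Hence &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt; is quadratic in the sense of the classification below.&lt;br /&gt;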
== Comparison with specific functions ==&lt;br /&gt;
&lt;br /&gt;
A function &amp;lt;math&amp;gt;f:\mathbb{R}\rightarrow\mathbb{R}^+_0&amp;lt;/math&amp;gt; is said to be&lt;br /&gt;
# '''linear''' if &amp;lt;math&amp;gt;f\in\Theta(n)&amp;lt;/math&amp;gt;;&lt;br /&gt;
# '''quadratic''' if &amp;lt;math&amp;gt;f\in\Theta(n^2)&amp;lt;/math&amp;gt;;&lt;br /&gt;
# '''cubic''' if &amp;lt;math&amp;gt;f\in\Theta(n^3)&amp;lt;/math&amp;gt;;&lt;br /&gt;
# '''logarithmic''' if &amp;lt;math&amp;gt;f\in\Theta(\log(n))&amp;lt;/math&amp;gt;;&lt;br /&gt;
# '''&amp;quot;n-log-n&amp;quot;''' if &amp;lt;math&amp;gt;f\in\Theta(n\cdot\log(n))&amp;lt;/math&amp;gt;;&lt;br /&gt;
# '''polynomial''' if there is a polynomial &amp;lt;math&amp;gt;p:\mathbb{R}\rightarrow\mathbb{R}&amp;lt;/math&amp;gt; such that &amp;lt;math&amp;gt;f\in\mathcal{O}(p)&amp;lt;/math&amp;gt;;&lt;br /&gt;
# '''subexponential''' if &amp;lt;math&amp;gt;f\in o(a^n)&amp;lt;/math&amp;gt; for every &amp;lt;math&amp;gt;a\in\mathbb{R}&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;a&amp;gt;1&amp;lt;/math&amp;gt;;&lt;br /&gt;
# '''exponential''' if there are &amp;lt;math&amp;gt;a,b\in\mathbb{R}&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;a&amp;gt;b&amp;gt;1&amp;lt;/math&amp;gt;, such that &amp;lt;math&amp;gt;f\in\mathcal{O}(a^n)&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;f\in\Omega(b^n)&amp;lt;/math&amp;gt;;&lt;br /&gt;
# '''factorial''' if &amp;lt;math&amp;gt;f\in\Theta(n!)&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
'''Remark:''' Note that the notion of polynomial is based on an &amp;quot;&amp;lt;math&amp;gt;\mathcal{O}&amp;lt;/math&amp;gt;&amp;quot;, not on a &amp;quot;&amp;lt;math&amp;gt;\Theta&amp;lt;/math&amp;gt;&amp;quot;. In fact, in this context, &amp;quot;polynomial&amp;quot; is usually used as shorthand for &amp;quot;polynomially bounded from above&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
== Multidimensional case ==&lt;br /&gt;
&lt;br /&gt;
Let &amp;lt;math&amp;gt;k\in\mathbb{N}&amp;lt;/math&amp;gt; and let &amp;lt;math&amp;gt;f:\mathbb{R}^k\rightarrow\mathbb{R}^+_0&amp;lt;/math&amp;gt;. The following sets (a.k.a. '''classes''') are defined for &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;:&lt;br /&gt;
# &amp;lt;math&amp;gt;\mathcal{O}(f)&amp;lt;/math&amp;gt; consists of all functions &amp;lt;math&amp;gt;g:\mathbb{R}^k\rightarrow\mathbb{R}^+_0&amp;lt;/math&amp;gt; such that there are &amp;lt;math&amp;gt;N_g,\,c_g\in\mathbb{R}&amp;lt;/math&amp;gt;  that fulfill &amp;lt;math&amp;gt;g(n_1,\ldots,n_k)\leq c_g\cdot f(n_1,\ldots,n_k)&amp;lt;/math&amp;gt; for all &amp;lt;math&amp;gt;n_1,\ldots,n_k\in\mathbb{R}&amp;lt;/math&amp;gt; such that &amp;lt;math&amp;gt;n_1,\ldots,n_k\geq N_g&amp;lt;/math&amp;gt;;&lt;br /&gt;
# &amp;lt;math&amp;gt;\Omega(f)&amp;lt;/math&amp;gt; consists of all functions &amp;lt;math&amp;gt;g:\mathbb{R}^k\rightarrow\mathbb{R}^+_0&amp;lt;/math&amp;gt; such that there are &amp;lt;math&amp;gt;N_g,\,c_g\in\mathbb{R}&amp;lt;/math&amp;gt;  that fulfill &amp;lt;math&amp;gt;g(n_1,\ldots,n_k)\geq\frac{1}{c_g}\cdot f(n_1,\ldots,n_k)&amp;lt;/math&amp;gt; for all &amp;lt;math&amp;gt;n_1,\ldots,n_k\in\mathbb{R}&amp;lt;/math&amp;gt; such that &amp;lt;math&amp;gt;n_1,\ldots,n_k\geq N_g&amp;lt;/math&amp;gt;;&lt;br /&gt;
# &amp;lt;math&amp;gt;\Theta(f):=\mathcal{O}(f)\cap\Omega(f)&amp;lt;/math&amp;gt;;&lt;br /&gt;
# &amp;lt;math&amp;gt;o(f):=\mathcal{O}(f)\setminus\Theta(f)&amp;lt;/math&amp;gt;.&lt;br /&gt;
# &amp;lt;math&amp;gt;\omega(f):=\Omega(f)\setminus\Theta(f)&amp;lt;/math&amp;gt;.&lt;/div&gt;</summary>
		<author><name>Weihe</name></author>
	</entry>
	<entry>
		<id>https://wiki.algo.informatik.tu-darmstadt.de/index.php?title=Asymptotic_comparison_of_functions&amp;diff=3853</id>
		<title>Asymptotic comparison of functions</title>
		<link rel="alternate" type="text/html" href="https://wiki.algo.informatik.tu-darmstadt.de/index.php?title=Asymptotic_comparison_of_functions&amp;diff=3853"/>
		<updated>2016-05-10T15:37:36Z</updated>

		<summary type="html">&lt;p&gt;Weihe: /* Multidimensional case */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== One-dimensional case ==&lt;br /&gt;
&lt;br /&gt;
Let &amp;lt;math&amp;gt;f:\mathbb{R}\rightarrow\mathbb{R}^+_0&amp;lt;/math&amp;gt; be a function. The following sets (a.k.a. '''classes''') of functions are defined for &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;:&lt;br /&gt;
# &amp;lt;math&amp;gt;\mathcal{O}(f)&amp;lt;/math&amp;gt; consists of all functions &amp;lt;math&amp;gt;g:\mathbb{R}\rightarrow\mathbb{R}^+_0&amp;lt;/math&amp;gt; such that there are &amp;lt;math&amp;gt;N_g,\,c_g\in\mathbb{R}&amp;lt;/math&amp;gt; that fulfill &amp;lt;math&amp;gt;g(n)\leq c_g\cdot f(n)&amp;lt;/math&amp;gt; for all &amp;lt;math&amp;gt;n\geq N_g&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;n\in\mathbb{R}&amp;lt;/math&amp;gt;;&lt;br /&gt;
# &amp;lt;math&amp;gt;\Omega(f)&amp;lt;/math&amp;gt; consists of all functions &amp;lt;math&amp;gt;g:\mathbb{R}\rightarrow\mathbb{R}^+_0&amp;lt;/math&amp;gt; such that there are &amp;lt;math&amp;gt;N_g,\,c_g\in\mathbb{R}&amp;lt;/math&amp;gt; that fulfill &amp;lt;math&amp;gt;g(n)\geq\frac{1}{c_g}\cdot f(n)&amp;lt;/math&amp;gt; for all &amp;lt;math&amp;gt;n\geq N_g&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;n\in\mathbb{R}&amp;lt;/math&amp;gt;;&lt;br /&gt;
# &amp;lt;math&amp;gt;\Theta(f):=\mathcal{O}(f)\cap\Omega(f)&amp;lt;/math&amp;gt;;&lt;br /&gt;
# &amp;lt;math&amp;gt;o(f):=\mathcal{O}(f)\setminus\Theta(f)&amp;lt;/math&amp;gt;.&lt;br /&gt;
# &amp;lt;math&amp;gt;\omega(f):=\Omega(f)\setminus\Theta(f)&amp;lt;/math&amp;gt;.&lt;br /&gt;
'''Remark:''' This notation is usually called the '''[https://en.wikipedia.org/wiki/Big_O_notation big O notation]''' or '''asymptotic notation''' and is also known as the '''Landau symbols''' or '''Landau-Bachmann symbols'''.&lt;br /&gt;
&lt;br /&gt;
== Mathematical rules for asymptotic comparison ==&lt;br /&gt;
&lt;br /&gt;
Let &amp;lt;math&amp;gt;f,g,h:\mathbb{R}\rightarrow\mathbb{R}^+_0&amp;lt;/math&amp;gt; be three functions.&lt;br /&gt;
# Anti-reflexivity: If &amp;lt;math&amp;gt;f\in\mathcal{O}(g)&amp;lt;/math&amp;gt;, it is &amp;lt;math&amp;gt;g\in\Omega(f)&amp;lt;/math&amp;gt;, and vice versa.&lt;br /&gt;
# Transitivity: If &amp;lt;math&amp;gt;f\in\oplus(g)&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;g\in\oplus(h)&amp;lt;/math&amp;gt; then &amp;lt;math&amp;gt;f\in\oplus(h)&amp;lt;/math&amp;gt;, where &amp;quot;&amp;lt;math&amp;gt;\oplus&amp;lt;/math&amp;gt;&amp;quot; is anyone of &amp;quot;&amp;lt;math&amp;gt;\mathcal{O}&amp;lt;/math&amp;gt;&amp;quot;, &amp;quot;&amp;lt;math&amp;gt;\Omega&amp;lt;/math&amp;gt;&amp;quot;, &amp;quot;&amp;lt;math&amp;gt;\Theta&amp;lt;/math&amp;gt;&amp;quot;, &amp;quot;&amp;lt;math&amp;gt;o&amp;lt;/math&amp;gt;&amp;quot;, and &amp;quot;&amp;lt;math&amp;gt;\omega&amp;lt;/math&amp;gt;&amp;quot;.&lt;br /&gt;
# It is &amp;lt;math&amp;gt;\mathcal{O}(f)\cup\mathcal{O}(g)\subseteq\mathcal{O}(f+g)&amp;lt;/math&amp;gt;.&lt;br /&gt;
# If &amp;lt;math&amp;gt;f\in\mathcal{O}(g)&amp;lt;/math&amp;gt;, it is &amp;lt;math&amp;gt;\oplus(f+g)=\oplus(g)&amp;lt;/math&amp;gt;, where &amp;quot;&amp;lt;math&amp;gt;\oplus&amp;lt;/math&amp;gt;&amp;quot; is anyone of &amp;quot;&amp;lt;math&amp;gt;\mathcal{O}&amp;lt;/math&amp;gt;&amp;quot;, &amp;quot;&amp;lt;math&amp;gt;\Omega&amp;lt;/math&amp;gt;&amp;quot;, and &amp;quot;&amp;lt;math&amp;gt;\Theta&amp;lt;/math&amp;gt;&amp;quot;.&lt;br /&gt;
# It is &amp;lt;math&amp;gt;f\in\mathcal{O}(g)&amp;lt;/math&amp;gt; if, and only if, the [http://en.wikipedia.org/wiki/Limit_superior_and_limit_inferior limit superior] of the series &amp;lt;math&amp;gt;f(n)/g(n)&amp;lt;/math&amp;gt; for &amp;lt;math&amp;gt;n\rightarrow+\infty&amp;lt;/math&amp;gt; is finite.&lt;br /&gt;
# It is &amp;lt;math&amp;gt;f\in o(g)&amp;lt;/math&amp;gt; if, and only if, this limit superior is zero. Note that, due to nonnegativity, this is equivalent to the statement that &amp;lt;math&amp;gt;\lim_{n\rightarrow+\infty}f(n)/g(n)&amp;lt;/math&amp;gt; exists and equals zero.&lt;br /&gt;
# For &amp;lt;math&amp;gt;a,b\in\mathbb{R}&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;a,b&amp;gt;1&amp;lt;/math&amp;gt;, it is &amp;lt;math&amp;gt;\oplus(\log_a(n))=\oplus(\log_b(n))&amp;lt;/math&amp;gt;, where &amp;quot;&amp;lt;math&amp;gt;\oplus&amp;lt;/math&amp;gt;&amp;quot; is anyone of &amp;quot;&amp;lt;math&amp;gt;\mathcal{O}&amp;lt;/math&amp;gt;&amp;quot;, &amp;quot;&amp;lt;math&amp;gt;\Omega&amp;lt;/math&amp;gt;&amp;quot;, &amp;quot;&amp;lt;math&amp;gt;\Theta&amp;lt;/math&amp;gt;&amp;quot;, &amp;quot;&amp;lt;math&amp;gt;o&amp;lt;/math&amp;gt;&amp;quot;, and &amp;quot;&amp;lt;math&amp;gt;\omega&amp;lt;/math&amp;gt;&amp;quot; (follows immediately from the basic rule &amp;lt;math&amp;gt;\log_a(n)/\log_b(n)=\log_a(b)=&amp;lt;/math&amp;gt; const). In particular, the base of a logarithm function may be omitted: &amp;lt;math&amp;gt;\oplus(\log(n))=\oplus(\log_a(n))&amp;lt;/math&amp;gt;.&lt;br /&gt;
# For all &amp;lt;math&amp;gt;k,\ell\in\mathbb{R}&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;k&amp;lt;\ell&amp;lt;/math&amp;gt;, it is &amp;lt;math&amp;gt;n^k\in o(n^\ell)&amp;lt;/math&amp;gt;.&lt;br /&gt;
# For all &amp;lt;math&amp;gt;k\in\mathbb{R}^+&amp;lt;/math&amp;gt;, it is &amp;lt;math&amp;gt;\log^k(n)\in o(n)&amp;lt;/math&amp;gt;.&lt;br /&gt;
# For all &amp;lt;math&amp;gt;k,a\in\mathbb{R}&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;a&amp;gt;1&amp;lt;/math&amp;gt;, it is &amp;lt;math&amp;gt;n^k\in o(a^n)&amp;lt;/math&amp;gt;.&lt;br /&gt;
# For all &amp;lt;math&amp;gt;a,b\in\mathbb{R}&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;1&amp;lt;a&amp;lt;b&amp;lt;/math&amp;gt;, it is &amp;lt;math&amp;gt;a^n\in o(b^n)&amp;lt;/math&amp;gt;.&lt;br /&gt;
# For all &amp;lt;math&amp;gt;a\in\mathbb{R}&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;a&amp;gt;1&amp;lt;/math&amp;gt;, it is &amp;lt;math&amp;gt;a^n\in o(n!)&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
== Comparison with specific functions ==&lt;br /&gt;
&lt;br /&gt;
A function &amp;lt;math&amp;gt;f:\mathbb{N}\rightarrow\mathbb{R}^+_0&amp;lt;/math&amp;gt; is said to be&lt;br /&gt;
# '''linear''' if &amp;lt;math&amp;gt;f\in\Theta(n)&amp;lt;/math&amp;gt;;&lt;br /&gt;
# '''quadratic''' if &amp;lt;math&amp;gt;f\in\Theta(n^2)&amp;lt;/math&amp;gt;;&lt;br /&gt;
# '''cubic''' if &amp;lt;math&amp;gt;f\in\Theta(n^3)&amp;lt;/math&amp;gt;;&lt;br /&gt;
# '''logarithmic''' if &amp;lt;math&amp;gt;f\in\Theta(\log(n))&amp;lt;/math&amp;gt;;&lt;br /&gt;
# '''&amp;quot;n-log-n&amp;quot;''' if &amp;lt;math&amp;gt;f\in\Theta(n\cdot\log(n))&amp;lt;/math&amp;gt;;&lt;br /&gt;
# '''polynomial''' if there is a polynomial &amp;lt;math&amp;gt;p:\mathbb{N}\rightarrow\mathbb{R}&amp;lt;/math&amp;gt; such that &amp;lt;math&amp;gt;f\in\mathcal{O}(p)&amp;lt;/math&amp;gt;;&lt;br /&gt;
# '''subexponential''' if &amp;lt;math&amp;gt;f\in o(a^n)&amp;lt;/math&amp;gt; for every &amp;lt;math&amp;gt;a\in\mathbb{R}&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;a&amp;gt;1&amp;lt;/math&amp;gt;;&lt;br /&gt;
# '''exponential''' if there are &amp;lt;math&amp;gt;a,b\in\mathbb{R}&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;a&amp;gt;b&amp;gt;1&amp;lt;/math&amp;gt;, such that &amp;lt;math&amp;gt;f\in\mathcal{O}(a^n)&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;f\in\Omega(b^n)&amp;lt;/math&amp;gt;;&lt;br /&gt;
# '''factorial''' if &amp;lt;math&amp;gt;f\in\Theta(n!)&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
'''Remark:''' Note that the notion of polynomial is based on an &amp;quot;&amp;lt;math&amp;gt;\mathcal{O}&amp;lt;/math&amp;gt;&amp;quot;, not on a &amp;quot;&amp;lt;math&amp;gt;\Theta&amp;lt;/math&amp;gt;&amp;quot;. In fact, in this context, &amp;quot;polynomial&amp;quot; is usually used short for &amp;quot;polynomially bounded from above&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
== Multidimensional case ==&lt;br /&gt;
&lt;br /&gt;
Let &amp;lt;math&amp;gt;k\in\mathbb{N}&amp;lt;/math&amp;gt; and let &amp;lt;math&amp;gt;f:\mathbb{R}^k\rightarrow\mathbb{R}^+_0&amp;lt;/math&amp;gt;. The following sets (a.k.a. '''classes''') are defined for &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;:&lt;br /&gt;
# &amp;lt;math&amp;gt;\mathcal{O}(f)&amp;lt;/math&amp;gt; consists of all functions &amp;lt;math&amp;gt;g:\mathbb{R}^k\rightarrow\mathbb{R}^+_0&amp;lt;/math&amp;gt; such that there are &amp;lt;math&amp;gt;N_g,\,c_g\in\mathbb{R}&amp;lt;/math&amp;gt;  that fulfill &amp;lt;math&amp;gt;g(n_1,\ldots,n_k)\leq c_g\cdot f(n_1,\ldots,n_k)&amp;lt;/math&amp;gt; for all &amp;lt;math&amp;gt;n_1,\ldots,n_k\in\mathbb{N}&amp;lt;/math&amp;gt; such that &amp;lt;math&amp;gt;n_1,\ldots,n_k\geq N_g&amp;lt;/math&amp;gt;;&lt;br /&gt;
# &amp;lt;math&amp;gt;\Omega(f)&amp;lt;/math&amp;gt; consists of all functions &amp;lt;math&amp;gt;g:\mathbb{N}^k\rightarrow\mathbb{R}^+_0&amp;lt;/math&amp;gt; such that there are &amp;lt;math&amp;gt;N_g,\,c_g\in\mathbb{N}&amp;lt;/math&amp;gt;  that fulfill &amp;lt;math&amp;gt;g(n_1,\ldots,n_k)\geq c_g\cdot f(n_1,\ldots,n_k)&amp;lt;/math&amp;gt; for all &amp;lt;math&amp;gt;n_1,\ldots,n_k\in\mathbb{N}&amp;lt;/math&amp;gt; such that &amp;lt;math&amp;gt;n_1,\ldots,n_k\geq N_g&amp;lt;/math&amp;gt;;&lt;br /&gt;
# &amp;lt;math&amp;gt;\Theta(f):=\mathcal{O}(f)\cap\Omega(f)&amp;lt;/math&amp;gt;;&lt;br /&gt;
# &amp;lt;math&amp;gt;o(f):=\mathcal{O}(f)\setminus\Theta(f)&amp;lt;/math&amp;gt;.&lt;br /&gt;
# &amp;lt;math&amp;gt;\omega(f):=\Omega(f)\setminus\Theta(f)&amp;lt;/math&amp;gt;.&lt;/div&gt;</summary>
		<author><name>Weihe</name></author>
	</entry>
	<entry>
		<id>https://wiki.algo.informatik.tu-darmstadt.de/index.php?title=Asymptotic_comparison_of_functions&amp;diff=3852</id>
		<title>Asymptotic comparison of functions</title>
		<link rel="alternate" type="text/html" href="https://wiki.algo.informatik.tu-darmstadt.de/index.php?title=Asymptotic_comparison_of_functions&amp;diff=3852"/>
		<updated>2016-05-10T15:37:19Z</updated>

		<summary type="html">&lt;p&gt;Weihe: /* Multidimensional case */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== One-dimensional case ==&lt;br /&gt;
&lt;br /&gt;
Let &amp;lt;math&amp;gt;f:\mathbb{R}\rightarrow\mathbb{R}^+_0&amp;lt;/math&amp;gt; be a function. The following sets (a.k.a. '''classes''') of functions are defined for &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;:&lt;br /&gt;
# &amp;lt;math&amp;gt;\mathcal{O}(f)&amp;lt;/math&amp;gt; consists of all functions &amp;lt;math&amp;gt;g:\mathbb{R}\rightarrow\mathbb{R}^+_0&amp;lt;/math&amp;gt; such that there are &amp;lt;math&amp;gt;N_g,\,c_g\in\mathbb{R}&amp;lt;/math&amp;gt; that fulfill &amp;lt;math&amp;gt;g(n)\leq c_g\cdot f(n)&amp;lt;/math&amp;gt; for all &amp;lt;math&amp;gt;n\geq N_g&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;n\in\mathbb{R}&amp;lt;/math&amp;gt;;&lt;br /&gt;
# &amp;lt;math&amp;gt;\Omega(f)&amp;lt;/math&amp;gt; consists of all functions &amp;lt;math&amp;gt;g:\mathbb{R}\rightarrow\mathbb{R}^+_0&amp;lt;/math&amp;gt; such that there are &amp;lt;math&amp;gt;N_g,\,c_g\in\mathbb{R}&amp;lt;/math&amp;gt; that fulfill &amp;lt;math&amp;gt;g(n)\geq\frac{1}{c_g}\cdot f(n)&amp;lt;/math&amp;gt; for all &amp;lt;math&amp;gt;n\geq N_g&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;n\in\mathbb{R}&amp;lt;/math&amp;gt;;&lt;br /&gt;
# &amp;lt;math&amp;gt;\Theta(f):=\mathcal{O}(f)\cap\Omega(f)&amp;lt;/math&amp;gt;;&lt;br /&gt;
# &amp;lt;math&amp;gt;o(f):=\mathcal{O}(f)\setminus\Theta(f)&amp;lt;/math&amp;gt;.&lt;br /&gt;
# &amp;lt;math&amp;gt;\omega(f):=\Omega(f)\setminus\Theta(f)&amp;lt;/math&amp;gt;.&lt;br /&gt;
'''Remark:''' This notation is usually called the '''[https://en.wikipedia.org/wiki/Big_O_notation big O notation]''' or '''asymptotic notation''' and is also known as the '''Landau symbols''' or '''Landau-Bachmann symbols'''.&lt;br /&gt;
&lt;br /&gt;
== Mathematical rules for asymptotic comparison ==&lt;br /&gt;
&lt;br /&gt;
Let &amp;lt;math&amp;gt;f,g,h:\mathbb{R}\rightarrow\mathbb{R}^+_0&amp;lt;/math&amp;gt; be three functions.&lt;br /&gt;
# Anti-reflexivity: If &amp;lt;math&amp;gt;f\in\mathcal{O}(g)&amp;lt;/math&amp;gt;, it is &amp;lt;math&amp;gt;g\in\Omega(f)&amp;lt;/math&amp;gt;, and vice versa.&lt;br /&gt;
# Transitivity: If &amp;lt;math&amp;gt;f\in\oplus(g)&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;g\in\oplus(h)&amp;lt;/math&amp;gt;, then &amp;lt;math&amp;gt;f\in\oplus(h)&amp;lt;/math&amp;gt;, where &amp;quot;&amp;lt;math&amp;gt;\oplus&amp;lt;/math&amp;gt;&amp;quot; is any one of &amp;quot;&amp;lt;math&amp;gt;\mathcal{O}&amp;lt;/math&amp;gt;&amp;quot;, &amp;quot;&amp;lt;math&amp;gt;\Omega&amp;lt;/math&amp;gt;&amp;quot;, &amp;quot;&amp;lt;math&amp;gt;\Theta&amp;lt;/math&amp;gt;&amp;quot;, &amp;quot;&amp;lt;math&amp;gt;o&amp;lt;/math&amp;gt;&amp;quot;, and &amp;quot;&amp;lt;math&amp;gt;\omega&amp;lt;/math&amp;gt;&amp;quot;.&lt;br /&gt;
# We have &amp;lt;math&amp;gt;\mathcal{O}(f)\cup\mathcal{O}(g)\subseteq\mathcal{O}(f+g)&amp;lt;/math&amp;gt;.&lt;br /&gt;
# If &amp;lt;math&amp;gt;f\in\mathcal{O}(g)&amp;lt;/math&amp;gt;, then &amp;lt;math&amp;gt;\oplus(f+g)=\oplus(g)&amp;lt;/math&amp;gt;, where &amp;quot;&amp;lt;math&amp;gt;\oplus&amp;lt;/math&amp;gt;&amp;quot; is any one of &amp;quot;&amp;lt;math&amp;gt;\mathcal{O}&amp;lt;/math&amp;gt;&amp;quot;, &amp;quot;&amp;lt;math&amp;gt;\Omega&amp;lt;/math&amp;gt;&amp;quot;, and &amp;quot;&amp;lt;math&amp;gt;\Theta&amp;lt;/math&amp;gt;&amp;quot;.&lt;br /&gt;
# We have &amp;lt;math&amp;gt;f\in\mathcal{O}(g)&amp;lt;/math&amp;gt; if, and only if, the [http://en.wikipedia.org/wiki/Limit_superior_and_limit_inferior limit superior] of the sequence &amp;lt;math&amp;gt;f(n)/g(n)&amp;lt;/math&amp;gt; for &amp;lt;math&amp;gt;n\rightarrow+\infty&amp;lt;/math&amp;gt; is finite.&lt;br /&gt;
# We have &amp;lt;math&amp;gt;f\in o(g)&amp;lt;/math&amp;gt; if, and only if, this limit superior is zero. Note that, due to nonnegativity, this is equivalent to the statement that &amp;lt;math&amp;gt;\lim_{n\rightarrow+\infty}f(n)/g(n)&amp;lt;/math&amp;gt; exists and equals zero.&lt;br /&gt;
# For &amp;lt;math&amp;gt;a,b\in\mathbb{R}&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;a,b&amp;gt;1&amp;lt;/math&amp;gt;, we have &amp;lt;math&amp;gt;\oplus(\log_a(n))=\oplus(\log_b(n))&amp;lt;/math&amp;gt;, where &amp;quot;&amp;lt;math&amp;gt;\oplus&amp;lt;/math&amp;gt;&amp;quot; is any one of &amp;quot;&amp;lt;math&amp;gt;\mathcal{O}&amp;lt;/math&amp;gt;&amp;quot;, &amp;quot;&amp;lt;math&amp;gt;\Omega&amp;lt;/math&amp;gt;&amp;quot;, &amp;quot;&amp;lt;math&amp;gt;\Theta&amp;lt;/math&amp;gt;&amp;quot;, &amp;quot;&amp;lt;math&amp;gt;o&amp;lt;/math&amp;gt;&amp;quot;, and &amp;quot;&amp;lt;math&amp;gt;\omega&amp;lt;/math&amp;gt;&amp;quot; (this follows immediately from the identity &amp;lt;math&amp;gt;\log_a(n)/\log_b(n)=\log_a(b)=&amp;lt;/math&amp;gt; const). In particular, the base of a logarithm may be omitted: &amp;lt;math&amp;gt;\oplus(\log(n))=\oplus(\log_a(n))&amp;lt;/math&amp;gt;.&lt;br /&gt;
# For all &amp;lt;math&amp;gt;k,\ell\in\mathbb{R}&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;k&amp;lt;\ell&amp;lt;/math&amp;gt;, we have &amp;lt;math&amp;gt;n^k\in o(n^\ell)&amp;lt;/math&amp;gt;.&lt;br /&gt;
# For all &amp;lt;math&amp;gt;k\in\mathbb{R}^+&amp;lt;/math&amp;gt;, we have &amp;lt;math&amp;gt;\log^k(n)\in o(n)&amp;lt;/math&amp;gt;.&lt;br /&gt;
# For all &amp;lt;math&amp;gt;k,a\in\mathbb{R}&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;a&amp;gt;1&amp;lt;/math&amp;gt;, we have &amp;lt;math&amp;gt;n^k\in o(a^n)&amp;lt;/math&amp;gt;.&lt;br /&gt;
# For all &amp;lt;math&amp;gt;a,b\in\mathbb{R}&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;1&amp;lt;a&amp;lt;b&amp;lt;/math&amp;gt;, we have &amp;lt;math&amp;gt;a^n\in o(b^n)&amp;lt;/math&amp;gt;.&lt;br /&gt;
# For all &amp;lt;math&amp;gt;a\in\mathbb{R}&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;a&amp;gt;1&amp;lt;/math&amp;gt;, we have &amp;lt;math&amp;gt;a^n\in o(n!)&amp;lt;/math&amp;gt;.&lt;br /&gt;
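The limit-superior criterion (rule 5) and the change-of-base rule (rule 7) can be checked numerically. A minimal sketch; the concrete functions (3n² + 5n versus n², bases 2 and 10) are assumed for illustration, not taken from the article:

```python
import math

# Rule 5 (illustrative functions): f is in O(g) exactly when the
# ratios f(n)/g(n) stay bounded as n grows.
f = lambda n: 3 * n**2 + 5 * n
g = lambda n: n**2

ratios = [f(n) / g(n) for n in (10, 100, 1000, 10000)]
print(ratios)  # the ratios approach the constant 3, so the limit superior is finite

# Rule 7: log_a(n)/log_b(n) equals the constant log_a(b), here with a=2, b=10,
# so the base of the logarithm is irrelevant inside O, Omega, Theta, o, omega.
quotient = math.log(10**6, 2) / math.log(10**6, 10)
print(math.isclose(quotient, math.log(10, 2)))
```

Since the ratio sequence converges to 3, the sketch also illustrates rule 4: the lower-order term 5n may be dropped, as O(3n² + 5n) = O(n²).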
&lt;br /&gt;
== Comparison with specific functions ==&lt;br /&gt;
&lt;br /&gt;
A function &amp;lt;math&amp;gt;f:\mathbb{N}\rightarrow\mathbb{R}^+_0&amp;lt;/math&amp;gt; is said to be&lt;br /&gt;
# '''linear''' if &amp;lt;math&amp;gt;f\in\Theta(n)&amp;lt;/math&amp;gt;;&lt;br /&gt;
# '''quadratic''' if &amp;lt;math&amp;gt;f\in\Theta(n^2)&amp;lt;/math&amp;gt;;&lt;br /&gt;
# '''cubic''' if &amp;lt;math&amp;gt;f\in\Theta(n^3)&amp;lt;/math&amp;gt;;&lt;br /&gt;
# '''logarithmic''' if &amp;lt;math&amp;gt;f\in\Theta(\log(n))&amp;lt;/math&amp;gt;;&lt;br /&gt;
# '''&amp;quot;n-log-n&amp;quot;''' if &amp;lt;math&amp;gt;f\in\Theta(n\cdot\log(n))&amp;lt;/math&amp;gt;;&lt;br /&gt;
# '''polynomial''' if there is a polynomial &amp;lt;math&amp;gt;p:\mathbb{N}\rightarrow\mathbb{R}&amp;lt;/math&amp;gt; such that &amp;lt;math&amp;gt;f\in\mathcal{O}(p)&amp;lt;/math&amp;gt;;&lt;br /&gt;
# '''subexponential''' if &amp;lt;math&amp;gt;f\in o(a^n)&amp;lt;/math&amp;gt; for every &amp;lt;math&amp;gt;a\in\mathbb{R}&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;a&amp;gt;1&amp;lt;/math&amp;gt;;&lt;br /&gt;
# '''exponential''' if there are &amp;lt;math&amp;gt;a,b\in\mathbb{R}&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;a&amp;gt;b&amp;gt;1&amp;lt;/math&amp;gt;, such that &amp;lt;math&amp;gt;f\in\mathcal{O}(a^n)&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;f\in\Omega(b^n)&amp;lt;/math&amp;gt;;&lt;br /&gt;
# '''factorial''' if &amp;lt;math&amp;gt;f\in\Theta(n!)&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
'''Remark:''' Note that the notion of &amp;quot;polynomial&amp;quot; is based on &amp;quot;&amp;lt;math&amp;gt;\mathcal{O}&amp;lt;/math&amp;gt;&amp;quot;, not on &amp;quot;&amp;lt;math&amp;gt;\Theta&amp;lt;/math&amp;gt;&amp;quot;. In fact, in this context, &amp;quot;polynomial&amp;quot; is usually used as shorthand for &amp;quot;polynomially bounded from above&amp;quot;.&lt;br /&gt;
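The separations between these growth classes (rules 9 to 11 above) can be illustrated numerically. The sketch below compares log f(n) - log g(n), which tends to minus infinity exactly when f(n)/g(n) tends to zero, i.e. when f is in o(g); the particular exponents and bases (log²(n) vs n, n³ vs 1.1ⁿ, 2ⁿ vs n!) are assumed examples:

```python
import math

# Log-ratio check (assumed example functions): log(f(n)) - log(g(n))
# tends to minus infinity exactly when f(n)/g(n) tends to zero.
# Working with logarithms avoids floating-point overflow for 1.1**n and n!.
def log_ratios(log_f, log_g, ns):
    return [log_f(n) - log_g(n) for n in ns]

ns = [10, 100, 1000, 10000]
# log^2(n) versus n (rule 9)
r_log = log_ratios(lambda n: 2 * math.log(math.log(n)), math.log, ns)
# n^3 versus 1.1^n (rule 10)
r_poly = log_ratios(lambda n: 3 * math.log(n), lambda n: n * math.log(1.1), ns)
# 2^n versus n! (rule 11); math.lgamma(n + 1) computes log(n!)
r_exp = log_ratios(lambda n: n * math.log(2), lambda n: math.lgamma(n + 1), ns)

for name, r in [("log^2 vs n", r_log), ("n^3 vs 1.1^n", r_poly), ("2^n vs n!", r_exp)]:
    # each list is eventually decreasing without bound
    print(name, [round(x, 1) for x in r])
```

Note that the middle comparison is positive for small n (n³ exceeds 1.1ⁿ up to fairly large n); the asymptotic statements only constrain behavior for sufficiently large n.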
&lt;br /&gt;
== Multidimensional case ==&lt;br /&gt;
&lt;br /&gt;
Let &amp;lt;math&amp;gt;k\in\mathbb{N}&amp;lt;/math&amp;gt; and let &amp;lt;math&amp;gt;f:\mathbb{N}^k\rightarrow\mathbb{R}^+_0&amp;lt;/math&amp;gt;. The following sets (a.k.a. '''classes''') are defined for &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;:&lt;br /&gt;
# &amp;lt;math&amp;gt;\mathcal{O}(f)&amp;lt;/math&amp;gt; consists of all functions &amp;lt;math&amp;gt;g:\mathbb{N}^k\rightarrow\mathbb{R}^+_0&amp;lt;/math&amp;gt; such that there are &amp;lt;math&amp;gt;N_g,\,c_g\in\mathbb{N}&amp;lt;/math&amp;gt;  that fulfill &amp;lt;math&amp;gt;g(n_1,\ldots,n_k)\leq c_g\cdot f(n_1,\ldots,n_k)&amp;lt;/math&amp;gt; for all &amp;lt;math&amp;gt;n_1,\ldots,n_k\in\mathbb{N}&amp;lt;/math&amp;gt; such that &amp;lt;math&amp;gt;n_1,\ldots,n_k\geq N_g&amp;lt;/math&amp;gt;;&lt;br /&gt;
# &amp;lt;math&amp;gt;\Omega(f)&amp;lt;/math&amp;gt; consists of all functions &amp;lt;math&amp;gt;g:\mathbb{N}^k\rightarrow\mathbb{R}^+_0&amp;lt;/math&amp;gt; such that there are &amp;lt;math&amp;gt;N_g,\,c_g\in\mathbb{N}&amp;lt;/math&amp;gt; that fulfill &amp;lt;math&amp;gt;g(n_1,\ldots,n_k)\geq\frac{1}{c_g}\cdot f(n_1,\ldots,n_k)&amp;lt;/math&amp;gt; for all &amp;lt;math&amp;gt;n_1,\ldots,n_k\in\mathbb{N}&amp;lt;/math&amp;gt; such that &amp;lt;math&amp;gt;n_1,\ldots,n_k\geq N_g&amp;lt;/math&amp;gt;;&lt;br /&gt;
# &amp;lt;math&amp;gt;\Theta(f):=\mathcal{O}(f)\cap\Omega(f)&amp;lt;/math&amp;gt;;&lt;br /&gt;
# &amp;lt;math&amp;gt;o(f):=\mathcal{O}(f)\setminus\Theta(f)&amp;lt;/math&amp;gt;.&lt;br /&gt;
# &amp;lt;math&amp;gt;\omega(f):=\Omega(f)\setminus\Theta(f)&amp;lt;/math&amp;gt;.&lt;/div&gt;</summary>
		<author><name>Weihe</name></author>
	</entry>
	<entry>
		<id>https://wiki.algo.informatik.tu-darmstadt.de/index.php?title=Asymptotic_comparison_of_functions&amp;diff=3851</id>
		<title>Asymptotic comparison of functions</title>
		<link rel="alternate" type="text/html" href="https://wiki.algo.informatik.tu-darmstadt.de/index.php?title=Asymptotic_comparison_of_functions&amp;diff=3851"/>
		<updated>2016-05-10T15:36:24Z</updated>

		<summary type="html">&lt;p&gt;Weihe: /* Mathematical rules for asymptotic comparison */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== One-dimensional case ==&lt;br /&gt;
&lt;br /&gt;
Let &amp;lt;math&amp;gt;f:\mathbb{R}\rightarrow\mathbb{R}^+_0&amp;lt;/math&amp;gt; be a function. The following sets (a.k.a. '''classes''') of functions are defined for &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;:&lt;br /&gt;
# &amp;lt;math&amp;gt;\mathcal{O}(f)&amp;lt;/math&amp;gt; consists of all functions &amp;lt;math&amp;gt;g:\mathbb{R}\rightarrow\mathbb{R}^+_0&amp;lt;/math&amp;gt; such that there are &amp;lt;math&amp;gt;N_g,\,c_g\in\mathbb{R}&amp;lt;/math&amp;gt; that fulfill &amp;lt;math&amp;gt;g(n)\leq c_g\cdot f(n)&amp;lt;/math&amp;gt; for all &amp;lt;math&amp;gt;n\geq N_g&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;n\in\mathbb{R}&amp;lt;/math&amp;gt;;&lt;br /&gt;
# &amp;lt;math&amp;gt;\Omega(f)&amp;lt;/math&amp;gt; consists of all functions &amp;lt;math&amp;gt;g:\mathbb{R}\rightarrow\mathbb{R}^+_0&amp;lt;/math&amp;gt; such that there are &amp;lt;math&amp;gt;N_g,\,c_g\in\mathbb{R}&amp;lt;/math&amp;gt; that fulfill &amp;lt;math&amp;gt;g(n)\geq\frac{1}{c_g}\cdot f(n)&amp;lt;/math&amp;gt; for all &amp;lt;math&amp;gt;n\geq N_g&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;n\in\mathbb{R}&amp;lt;/math&amp;gt;;&lt;br /&gt;
# &amp;lt;math&amp;gt;\Theta(f):=\mathcal{O}(f)\cap\Omega(f)&amp;lt;/math&amp;gt;;&lt;br /&gt;
# &amp;lt;math&amp;gt;o(f):=\mathcal{O}(f)\setminus\Theta(f)&amp;lt;/math&amp;gt;.&lt;br /&gt;
# &amp;lt;math&amp;gt;\omega(f):=\Omega(f)\setminus\Theta(f)&amp;lt;/math&amp;gt;.&lt;br /&gt;
'''Remark:''' This notation is usually called the '''[https://en.wikipedia.org/wiki/Big_O_notation big O notation]''' or '''asymptotic notation''' and is also known as the '''Landau symbols''' or '''Bachmann-Landau symbols'''.&lt;br /&gt;
&lt;br /&gt;
== Mathematical rules for asymptotic comparison ==&lt;br /&gt;
&lt;br /&gt;
Let &amp;lt;math&amp;gt;f,g,h:\mathbb{R}\rightarrow\mathbb{R}^+_0&amp;lt;/math&amp;gt; be three functions.&lt;br /&gt;
# Duality: We have &amp;lt;math&amp;gt;f\in\mathcal{O}(g)&amp;lt;/math&amp;gt; if, and only if, &amp;lt;math&amp;gt;g\in\Omega(f)&amp;lt;/math&amp;gt;.&lt;br /&gt;
# Transitivity: If &amp;lt;math&amp;gt;f\in\oplus(g)&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;g\in\oplus(h)&amp;lt;/math&amp;gt;, then &amp;lt;math&amp;gt;f\in\oplus(h)&amp;lt;/math&amp;gt;, where &amp;quot;&amp;lt;math&amp;gt;\oplus&amp;lt;/math&amp;gt;&amp;quot; is any one of &amp;quot;&amp;lt;math&amp;gt;\mathcal{O}&amp;lt;/math&amp;gt;&amp;quot;, &amp;quot;&amp;lt;math&amp;gt;\Omega&amp;lt;/math&amp;gt;&amp;quot;, &amp;quot;&amp;lt;math&amp;gt;\Theta&amp;lt;/math&amp;gt;&amp;quot;, &amp;quot;&amp;lt;math&amp;gt;o&amp;lt;/math&amp;gt;&amp;quot;, and &amp;quot;&amp;lt;math&amp;gt;\omega&amp;lt;/math&amp;gt;&amp;quot;.&lt;br /&gt;
# We have &amp;lt;math&amp;gt;\mathcal{O}(f)\cup\mathcal{O}(g)\subseteq\mathcal{O}(f+g)&amp;lt;/math&amp;gt;.&lt;br /&gt;
# If &amp;lt;math&amp;gt;f\in\mathcal{O}(g)&amp;lt;/math&amp;gt;, then &amp;lt;math&amp;gt;\oplus(f+g)=\oplus(g)&amp;lt;/math&amp;gt;, where &amp;quot;&amp;lt;math&amp;gt;\oplus&amp;lt;/math&amp;gt;&amp;quot; is any one of &amp;quot;&amp;lt;math&amp;gt;\mathcal{O}&amp;lt;/math&amp;gt;&amp;quot;, &amp;quot;&amp;lt;math&amp;gt;\Omega&amp;lt;/math&amp;gt;&amp;quot;, and &amp;quot;&amp;lt;math&amp;gt;\Theta&amp;lt;/math&amp;gt;&amp;quot;.&lt;br /&gt;
# We have &amp;lt;math&amp;gt;f\in\mathcal{O}(g)&amp;lt;/math&amp;gt; if, and only if, the [http://en.wikipedia.org/wiki/Limit_superior_and_limit_inferior limit superior] of the sequence &amp;lt;math&amp;gt;f(n)/g(n)&amp;lt;/math&amp;gt; for &amp;lt;math&amp;gt;n\rightarrow+\infty&amp;lt;/math&amp;gt; is finite.&lt;br /&gt;
# We have &amp;lt;math&amp;gt;f\in o(g)&amp;lt;/math&amp;gt; if, and only if, this limit superior is zero. Note that, due to nonnegativity, this is equivalent to the statement that &amp;lt;math&amp;gt;\lim_{n\rightarrow+\infty}f(n)/g(n)&amp;lt;/math&amp;gt; exists and equals zero.&lt;br /&gt;
# For &amp;lt;math&amp;gt;a,b\in\mathbb{R}&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;a,b&amp;gt;1&amp;lt;/math&amp;gt;, we have &amp;lt;math&amp;gt;\oplus(\log_a(n))=\oplus(\log_b(n))&amp;lt;/math&amp;gt;, where &amp;quot;&amp;lt;math&amp;gt;\oplus&amp;lt;/math&amp;gt;&amp;quot; is any one of &amp;quot;&amp;lt;math&amp;gt;\mathcal{O}&amp;lt;/math&amp;gt;&amp;quot;, &amp;quot;&amp;lt;math&amp;gt;\Omega&amp;lt;/math&amp;gt;&amp;quot;, &amp;quot;&amp;lt;math&amp;gt;\Theta&amp;lt;/math&amp;gt;&amp;quot;, &amp;quot;&amp;lt;math&amp;gt;o&amp;lt;/math&amp;gt;&amp;quot;, and &amp;quot;&amp;lt;math&amp;gt;\omega&amp;lt;/math&amp;gt;&amp;quot; (this follows immediately from the identity &amp;lt;math&amp;gt;\log_a(n)/\log_b(n)=\log_a(b)=&amp;lt;/math&amp;gt; const). In particular, the base of a logarithm may be omitted: &amp;lt;math&amp;gt;\oplus(\log(n))=\oplus(\log_a(n))&amp;lt;/math&amp;gt;.&lt;br /&gt;
# For all &amp;lt;math&amp;gt;k,\ell\in\mathbb{R}&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;k&amp;lt;\ell&amp;lt;/math&amp;gt;, we have &amp;lt;math&amp;gt;n^k\in o(n^\ell)&amp;lt;/math&amp;gt;.&lt;br /&gt;
# For all &amp;lt;math&amp;gt;k\in\mathbb{R}^+&amp;lt;/math&amp;gt;, we have &amp;lt;math&amp;gt;\log^k(n)\in o(n)&amp;lt;/math&amp;gt;.&lt;br /&gt;
# For all &amp;lt;math&amp;gt;k,a\in\mathbb{R}&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;a&amp;gt;1&amp;lt;/math&amp;gt;, we have &amp;lt;math&amp;gt;n^k\in o(a^n)&amp;lt;/math&amp;gt;.&lt;br /&gt;
# For all &amp;lt;math&amp;gt;a,b\in\mathbb{R}&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;1&amp;lt;a&amp;lt;b&amp;lt;/math&amp;gt;, we have &amp;lt;math&amp;gt;a^n\in o(b^n)&amp;lt;/math&amp;gt;.&lt;br /&gt;
# For all &amp;lt;math&amp;gt;a\in\mathbb{R}&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;a&amp;gt;1&amp;lt;/math&amp;gt;, we have &amp;lt;math&amp;gt;a^n\in o(n!)&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
== Comparison with specific functions ==&lt;br /&gt;
&lt;br /&gt;
A function &amp;lt;math&amp;gt;f:\mathbb{N}\rightarrow\mathbb{R}^+_0&amp;lt;/math&amp;gt; is said to be&lt;br /&gt;
# '''linear''' if &amp;lt;math&amp;gt;f\in\Theta(n)&amp;lt;/math&amp;gt;;&lt;br /&gt;
# '''quadratic''' if &amp;lt;math&amp;gt;f\in\Theta(n^2)&amp;lt;/math&amp;gt;;&lt;br /&gt;
# '''cubic''' if &amp;lt;math&amp;gt;f\in\Theta(n^3)&amp;lt;/math&amp;gt;;&lt;br /&gt;
# '''logarithmic''' if &amp;lt;math&amp;gt;f\in\Theta(\log(n))&amp;lt;/math&amp;gt;;&lt;br /&gt;
# '''&amp;quot;n-log-n&amp;quot;''' if &amp;lt;math&amp;gt;f\in\Theta(n\cdot\log(n))&amp;lt;/math&amp;gt;;&lt;br /&gt;
# '''polynomial''' if there is a polynomial &amp;lt;math&amp;gt;p:\mathbb{N}\rightarrow\mathbb{R}&amp;lt;/math&amp;gt; such that &amp;lt;math&amp;gt;f\in\mathcal{O}(p)&amp;lt;/math&amp;gt;;&lt;br /&gt;
# '''subexponential''' if &amp;lt;math&amp;gt;f\in o(a^n)&amp;lt;/math&amp;gt; for every &amp;lt;math&amp;gt;a\in\mathbb{R}&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;a&amp;gt;1&amp;lt;/math&amp;gt;;&lt;br /&gt;
# '''exponential''' if there are &amp;lt;math&amp;gt;a,b\in\mathbb{R}&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;a&amp;gt;b&amp;gt;1&amp;lt;/math&amp;gt;, such that &amp;lt;math&amp;gt;f\in\mathcal{O}(a^n)&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;f\in\Omega(b^n)&amp;lt;/math&amp;gt;;&lt;br /&gt;
# '''factorial''' if &amp;lt;math&amp;gt;f\in\Theta(n!)&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
'''Remark:''' Note that the notion of &amp;quot;polynomial&amp;quot; is based on &amp;quot;&amp;lt;math&amp;gt;\mathcal{O}&amp;lt;/math&amp;gt;&amp;quot;, not on &amp;quot;&amp;lt;math&amp;gt;\Theta&amp;lt;/math&amp;gt;&amp;quot;. In fact, in this context, &amp;quot;polynomial&amp;quot; is usually used as shorthand for &amp;quot;polynomially bounded from above&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
== Multidimensional case ==&lt;br /&gt;
&lt;br /&gt;
Let &amp;lt;math&amp;gt;k\in\mathbb{N}&amp;lt;/math&amp;gt; and let &amp;lt;math&amp;gt;f:\mathbb{N}^k\rightarrow\mathbb{R}^+_0&amp;lt;/math&amp;gt;. The following sets (a.k.a. '''classes''') are defined for &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;:&lt;br /&gt;
# &amp;lt;math&amp;gt;\mathcal{O}(f)&amp;lt;/math&amp;gt; consists of all functions &amp;lt;math&amp;gt;g:\mathbb{N}^k\rightarrow\mathbb{R}^+_0&amp;lt;/math&amp;gt; such that there are &amp;lt;math&amp;gt;N_g,\,c_g\in\mathbb{N}&amp;lt;/math&amp;gt;  that fulfill &amp;lt;math&amp;gt;g(n_1,\ldots,n_k)\leq c_g\cdot f(n_1,\ldots,n_k)&amp;lt;/math&amp;gt; for all &amp;lt;math&amp;gt;n_1,\ldots,n_k\in\mathbb{N}&amp;lt;/math&amp;gt; such that &amp;lt;math&amp;gt;n_1,\ldots,n_k\geq N_g&amp;lt;/math&amp;gt;;&lt;br /&gt;
# &amp;lt;math&amp;gt;\Omega(f)&amp;lt;/math&amp;gt; consists of all functions &amp;lt;math&amp;gt;g:\mathbb{N}^k\rightarrow\mathbb{R}^+_0&amp;lt;/math&amp;gt; such that there are &amp;lt;math&amp;gt;N_g,\,c_g\in\mathbb{N}&amp;lt;/math&amp;gt; that fulfill &amp;lt;math&amp;gt;g(n_1,\ldots,n_k)\geq\frac{1}{c_g}\cdot f(n_1,\ldots,n_k)&amp;lt;/math&amp;gt; for all &amp;lt;math&amp;gt;n_1,\ldots,n_k\in\mathbb{N}&amp;lt;/math&amp;gt; such that &amp;lt;math&amp;gt;n_1,\ldots,n_k\geq N_g&amp;lt;/math&amp;gt;;&lt;br /&gt;
# &amp;lt;math&amp;gt;\Theta(f):=\mathcal{O}(f)\cap\Omega(f)&amp;lt;/math&amp;gt;;&lt;br /&gt;
# &amp;lt;math&amp;gt;o(f):=\mathcal{O}(f)\setminus\Theta(f)&amp;lt;/math&amp;gt;.&lt;br /&gt;
# &amp;lt;math&amp;gt;\omega(f):=\Omega(f)\setminus\Theta(f)&amp;lt;/math&amp;gt;.&lt;/div&gt;</summary>
		<author><name>Weihe</name></author>
	</entry>
	<entry>
		<id>https://wiki.algo.informatik.tu-darmstadt.de/index.php?title=Asymptotic_comparison_of_functions&amp;diff=3850</id>
		<title>Asymptotic comparison of functions</title>
		<link rel="alternate" type="text/html" href="https://wiki.algo.informatik.tu-darmstadt.de/index.php?title=Asymptotic_comparison_of_functions&amp;diff=3850"/>
		<updated>2016-05-10T15:35:56Z</updated>

		<summary type="html">&lt;p&gt;Weihe: /* One-dimensional case */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== One-dimensional case ==&lt;br /&gt;
&lt;br /&gt;
Let &amp;lt;math&amp;gt;f:\mathbb{R}\rightarrow\mathbb{R}^+_0&amp;lt;/math&amp;gt; be a function. The following sets (a.k.a. '''classes''') of functions are defined for &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;:&lt;br /&gt;
# &amp;lt;math&amp;gt;\mathcal{O}(f)&amp;lt;/math&amp;gt; consists of all functions &amp;lt;math&amp;gt;g:\mathbb{R}\rightarrow\mathbb{R}^+_0&amp;lt;/math&amp;gt; such that there are &amp;lt;math&amp;gt;N_g,\,c_g\in\mathbb{R}&amp;lt;/math&amp;gt; that fulfill &amp;lt;math&amp;gt;g(n)\leq c_g\cdot f(n)&amp;lt;/math&amp;gt; for all &amp;lt;math&amp;gt;n\geq N_g&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;n\in\mathbb{R}&amp;lt;/math&amp;gt;;&lt;br /&gt;
# &amp;lt;math&amp;gt;\Omega(f)&amp;lt;/math&amp;gt; consists of all functions &amp;lt;math&amp;gt;g:\mathbb{R}\rightarrow\mathbb{R}^+_0&amp;lt;/math&amp;gt; such that there are &amp;lt;math&amp;gt;N_g,\,c_g\in\mathbb{R}&amp;lt;/math&amp;gt; that fulfill &amp;lt;math&amp;gt;g(n)\geq\frac{1}{c_g}\cdot f(n)&amp;lt;/math&amp;gt; for all &amp;lt;math&amp;gt;n\geq N_g&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;n\in\mathbb{R}&amp;lt;/math&amp;gt;;&lt;br /&gt;
# &amp;lt;math&amp;gt;\Theta(f):=\mathcal{O}(f)\cap\Omega(f)&amp;lt;/math&amp;gt;;&lt;br /&gt;
# &amp;lt;math&amp;gt;o(f):=\mathcal{O}(f)\setminus\Theta(f)&amp;lt;/math&amp;gt;.&lt;br /&gt;
# &amp;lt;math&amp;gt;\omega(f):=\Omega(f)\setminus\Theta(f)&amp;lt;/math&amp;gt;.&lt;br /&gt;
'''Remark:''' This notation is usually called the '''[https://en.wikipedia.org/wiki/Big_O_notation big O notation]''' or '''asymptotic notation''' and is also known as the '''Landau symbols''' or '''Bachmann-Landau symbols'''.&lt;br /&gt;
&lt;br /&gt;
== Mathematical rules for asymptotic comparison ==&lt;br /&gt;
&lt;br /&gt;
Let &amp;lt;math&amp;gt;f,g,h:\mathbb{R}\rightarrow\mathbb{R}^+_0&amp;lt;/math&amp;gt; be three functions.&lt;br /&gt;
# Duality: We have &amp;lt;math&amp;gt;f\in\mathcal{O}(g)&amp;lt;/math&amp;gt; if, and only if, &amp;lt;math&amp;gt;g\in\Omega(f)&amp;lt;/math&amp;gt;.&lt;br /&gt;
# Transitivity: If &amp;lt;math&amp;gt;f\in\oplus(g)&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;g\in\oplus(h)&amp;lt;/math&amp;gt;, then &amp;lt;math&amp;gt;f\in\oplus(h)&amp;lt;/math&amp;gt;, where &amp;quot;&amp;lt;math&amp;gt;\oplus&amp;lt;/math&amp;gt;&amp;quot; is any one of &amp;quot;&amp;lt;math&amp;gt;\mathcal{O}&amp;lt;/math&amp;gt;&amp;quot;, &amp;quot;&amp;lt;math&amp;gt;\Omega&amp;lt;/math&amp;gt;&amp;quot;, &amp;quot;&amp;lt;math&amp;gt;\Theta&amp;lt;/math&amp;gt;&amp;quot;, &amp;quot;&amp;lt;math&amp;gt;o&amp;lt;/math&amp;gt;&amp;quot;, and &amp;quot;&amp;lt;math&amp;gt;\omega&amp;lt;/math&amp;gt;&amp;quot;.&lt;br /&gt;
# We have &amp;lt;math&amp;gt;\mathcal{O}(f)\cup\mathcal{O}(g)\subseteq\mathcal{O}(f+g)&amp;lt;/math&amp;gt;.&lt;br /&gt;
# If &amp;lt;math&amp;gt;f\in\mathcal{O}(g)&amp;lt;/math&amp;gt;, then &amp;lt;math&amp;gt;\oplus(f+g)=\oplus(g)&amp;lt;/math&amp;gt;, where &amp;quot;&amp;lt;math&amp;gt;\oplus&amp;lt;/math&amp;gt;&amp;quot; is any one of &amp;quot;&amp;lt;math&amp;gt;\mathcal{O}&amp;lt;/math&amp;gt;&amp;quot;, &amp;quot;&amp;lt;math&amp;gt;\Omega&amp;lt;/math&amp;gt;&amp;quot;, &amp;quot;&amp;lt;math&amp;gt;\Theta&amp;lt;/math&amp;gt;&amp;quot;, &amp;quot;&amp;lt;math&amp;gt;o&amp;lt;/math&amp;gt;&amp;quot;, and &amp;quot;&amp;lt;math&amp;gt;\omega&amp;lt;/math&amp;gt;&amp;quot;.&lt;br /&gt;
# We have &amp;lt;math&amp;gt;f\in\mathcal{O}(g)&amp;lt;/math&amp;gt; if, and only if, the [http://en.wikipedia.org/wiki/Limit_superior_and_limit_inferior limit superior] of the sequence &amp;lt;math&amp;gt;f(n)/g(n)&amp;lt;/math&amp;gt; for &amp;lt;math&amp;gt;n\rightarrow+\infty&amp;lt;/math&amp;gt; is finite.&lt;br /&gt;
# We have &amp;lt;math&amp;gt;f\in o(g)&amp;lt;/math&amp;gt; if, and only if, this limit superior is zero. Note that, due to nonnegativity, this is equivalent to the statement that &amp;lt;math&amp;gt;\lim_{n\rightarrow+\infty}f(n)/g(n)&amp;lt;/math&amp;gt; exists and equals zero.&lt;br /&gt;
# For &amp;lt;math&amp;gt;a,b\in\mathbb{R}&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;a,b&amp;gt;1&amp;lt;/math&amp;gt;, we have &amp;lt;math&amp;gt;\oplus(\log_a(n))=\oplus(\log_b(n))&amp;lt;/math&amp;gt;, where &amp;quot;&amp;lt;math&amp;gt;\oplus&amp;lt;/math&amp;gt;&amp;quot; is any one of &amp;quot;&amp;lt;math&amp;gt;\mathcal{O}&amp;lt;/math&amp;gt;&amp;quot;, &amp;quot;&amp;lt;math&amp;gt;\Omega&amp;lt;/math&amp;gt;&amp;quot;, &amp;quot;&amp;lt;math&amp;gt;\Theta&amp;lt;/math&amp;gt;&amp;quot;, &amp;quot;&amp;lt;math&amp;gt;o&amp;lt;/math&amp;gt;&amp;quot;, and &amp;quot;&amp;lt;math&amp;gt;\omega&amp;lt;/math&amp;gt;&amp;quot; (this follows immediately from the identity &amp;lt;math&amp;gt;\log_a(n)/\log_b(n)=\log_a(b)=&amp;lt;/math&amp;gt; const). In particular, the base of a logarithm may be omitted: &amp;lt;math&amp;gt;\oplus(\log(n))=\oplus(\log_a(n))&amp;lt;/math&amp;gt;.&lt;br /&gt;
# For all &amp;lt;math&amp;gt;k,\ell\in\mathbb{R}&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;k&amp;lt;\ell&amp;lt;/math&amp;gt;, we have &amp;lt;math&amp;gt;n^k\in o(n^\ell)&amp;lt;/math&amp;gt;.&lt;br /&gt;
# For all &amp;lt;math&amp;gt;k\in\mathbb{R}^+&amp;lt;/math&amp;gt;, we have &amp;lt;math&amp;gt;\log^k(n)\in o(n)&amp;lt;/math&amp;gt;.&lt;br /&gt;
# For all &amp;lt;math&amp;gt;k,a\in\mathbb{R}&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;a&amp;gt;1&amp;lt;/math&amp;gt;, we have &amp;lt;math&amp;gt;n^k\in o(a^n)&amp;lt;/math&amp;gt;.&lt;br /&gt;
# For all &amp;lt;math&amp;gt;a,b\in\mathbb{R}&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;1&amp;lt;a&amp;lt;b&amp;lt;/math&amp;gt;, we have &amp;lt;math&amp;gt;a^n\in o(b^n)&amp;lt;/math&amp;gt;.&lt;br /&gt;
# For all &amp;lt;math&amp;gt;a\in\mathbb{R}&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;a&amp;gt;1&amp;lt;/math&amp;gt;, we have &amp;lt;math&amp;gt;a^n\in o(n!)&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
== Comparison with specific functions ==&lt;br /&gt;
&lt;br /&gt;
A function &amp;lt;math&amp;gt;f:\mathbb{N}\rightarrow\mathbb{R}^+_0&amp;lt;/math&amp;gt; is said to be&lt;br /&gt;
# '''linear''' if &amp;lt;math&amp;gt;f\in\Theta(n)&amp;lt;/math&amp;gt;;&lt;br /&gt;
# '''quadratic''' if &amp;lt;math&amp;gt;f\in\Theta(n^2)&amp;lt;/math&amp;gt;;&lt;br /&gt;
# '''cubic''' if &amp;lt;math&amp;gt;f\in\Theta(n^3)&amp;lt;/math&amp;gt;;&lt;br /&gt;
# '''logarithmic''' if &amp;lt;math&amp;gt;f\in\Theta(\log(n))&amp;lt;/math&amp;gt;;&lt;br /&gt;
# '''&amp;quot;n-log-n&amp;quot;''' if &amp;lt;math&amp;gt;f\in\Theta(n\cdot\log(n))&amp;lt;/math&amp;gt;;&lt;br /&gt;
# '''polynomial''' if there is a polynomial &amp;lt;math&amp;gt;p:\mathbb{N}\rightarrow\mathbb{R}&amp;lt;/math&amp;gt; such that &amp;lt;math&amp;gt;f\in\mathcal{O}(p)&amp;lt;/math&amp;gt;;&lt;br /&gt;
# '''subexponential''' if &amp;lt;math&amp;gt;f\in o(a^n)&amp;lt;/math&amp;gt; for every &amp;lt;math&amp;gt;a\in\mathbb{R}&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;a&amp;gt;1&amp;lt;/math&amp;gt;;&lt;br /&gt;
# '''exponential''' if there are &amp;lt;math&amp;gt;a,b\in\mathbb{R}&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;a&amp;gt;b&amp;gt;1&amp;lt;/math&amp;gt;, such that &amp;lt;math&amp;gt;f\in\mathcal{O}(a^n)&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;f\in\Omega(b^n)&amp;lt;/math&amp;gt;;&lt;br /&gt;
# '''factorial''' if &amp;lt;math&amp;gt;f\in\Theta(n!)&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
'''Remark:''' Note that the notion of &amp;quot;polynomial&amp;quot; is based on &amp;quot;&amp;lt;math&amp;gt;\mathcal{O}&amp;lt;/math&amp;gt;&amp;quot;, not on &amp;quot;&amp;lt;math&amp;gt;\Theta&amp;lt;/math&amp;gt;&amp;quot;. In fact, in this context, &amp;quot;polynomial&amp;quot; is usually used as shorthand for &amp;quot;polynomially bounded from above&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
== Multidimensional case ==&lt;br /&gt;
&lt;br /&gt;
Let &amp;lt;math&amp;gt;k\in\mathbb{N}&amp;lt;/math&amp;gt; and let &amp;lt;math&amp;gt;f:\mathbb{N}^k\rightarrow\mathbb{R}^+_0&amp;lt;/math&amp;gt;. The following sets (a.k.a. '''classes''') are defined for &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;:&lt;br /&gt;
# &amp;lt;math&amp;gt;\mathcal{O}(f)&amp;lt;/math&amp;gt; consists of all functions &amp;lt;math&amp;gt;g:\mathbb{N}^k\rightarrow\mathbb{R}^+_0&amp;lt;/math&amp;gt; such that there are &amp;lt;math&amp;gt;N_g,\,c_g\in\mathbb{N}&amp;lt;/math&amp;gt;  that fulfill &amp;lt;math&amp;gt;g(n_1,\ldots,n_k)\leq c_g\cdot f(n_1,\ldots,n_k)&amp;lt;/math&amp;gt; for all &amp;lt;math&amp;gt;n_1,\ldots,n_k\in\mathbb{N}&amp;lt;/math&amp;gt; such that &amp;lt;math&amp;gt;n_1,\ldots,n_k\geq N_g&amp;lt;/math&amp;gt;;&lt;br /&gt;
# &amp;lt;math&amp;gt;\Omega(f)&amp;lt;/math&amp;gt; consists of all functions &amp;lt;math&amp;gt;g:\mathbb{N}^k\rightarrow\mathbb{R}^+_0&amp;lt;/math&amp;gt; such that there are &amp;lt;math&amp;gt;N_g,\,c_g\in\mathbb{N}&amp;lt;/math&amp;gt; that fulfill &amp;lt;math&amp;gt;g(n_1,\ldots,n_k)\geq\frac{1}{c_g}\cdot f(n_1,\ldots,n_k)&amp;lt;/math&amp;gt; for all &amp;lt;math&amp;gt;n_1,\ldots,n_k\in\mathbb{N}&amp;lt;/math&amp;gt; such that &amp;lt;math&amp;gt;n_1,\ldots,n_k\geq N_g&amp;lt;/math&amp;gt;;&lt;br /&gt;
# &amp;lt;math&amp;gt;\Theta(f):=\mathcal{O}(f)\cap\Omega(f)&amp;lt;/math&amp;gt;;&lt;br /&gt;
# &amp;lt;math&amp;gt;o(f):=\mathcal{O}(f)\setminus\Theta(f)&amp;lt;/math&amp;gt;.&lt;br /&gt;
# &amp;lt;math&amp;gt;\omega(f):=\Omega(f)\setminus\Theta(f)&amp;lt;/math&amp;gt;.&lt;/div&gt;</summary>
		<author><name>Weihe</name></author>
	</entry>
	<entry>
		<id>https://wiki.algo.informatik.tu-darmstadt.de/index.php?title=Asymptotic_comparison_of_functions&amp;diff=3849</id>
		<title>Asymptotic comparison of functions</title>
		<link rel="alternate" type="text/html" href="https://wiki.algo.informatik.tu-darmstadt.de/index.php?title=Asymptotic_comparison_of_functions&amp;diff=3849"/>
		<updated>2016-05-10T15:35:12Z</updated>

		<summary type="html">&lt;p&gt;Weihe: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== One-dimensional case ==&lt;br /&gt;
&lt;br /&gt;
Let &amp;lt;math&amp;gt;f:\mathbb{R}\rightarrow\mathbb{R}^+_0&amp;lt;/math&amp;gt; be a function. The following sets (a.k.a. '''classes''') of functions are defined for &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;:&lt;br /&gt;
# &amp;lt;math&amp;gt;\mathcal{O}(f)&amp;lt;/math&amp;gt; consists of all functions &amp;lt;math&amp;gt;g:\mathbb{R}\rightarrow\mathbb{R}^+_0&amp;lt;/math&amp;gt; such that there are &amp;lt;math&amp;gt;N_g,\,c_g\in\mathbb{N}&amp;lt;/math&amp;gt; that fulfill &amp;lt;math&amp;gt;g(n)\leq c_g\cdot f(n)&amp;lt;/math&amp;gt; for all &amp;lt;math&amp;gt;n\geq N_g&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;n\in\mathbb{R}&amp;lt;/math&amp;gt;;&lt;br /&gt;
# &amp;lt;math&amp;gt;\Omega(f)&amp;lt;/math&amp;gt; consists of all functions &amp;lt;math&amp;gt;g:\mathbb{R}\rightarrow\mathbb{R}^+_0&amp;lt;/math&amp;gt; such that there are &amp;lt;math&amp;gt;N_g,\,c_g\in\mathbb{R}&amp;lt;/math&amp;gt; that fulfill &amp;lt;math&amp;gt;g(n)\geq\frac{1}{c_g}\cdot f(n)&amp;lt;/math&amp;gt; for all &amp;lt;math&amp;gt;n\geq N_g&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;n\in\mathbb{R}&amp;lt;/math&amp;gt;;&lt;br /&gt;
# &amp;lt;math&amp;gt;\Theta(f):=\mathcal{O}(f)\cap\Omega(f)&amp;lt;/math&amp;gt;;&lt;br /&gt;
# &amp;lt;math&amp;gt;o(f):=\mathcal{O}(f)\setminus\Theta(f)&amp;lt;/math&amp;gt;.&lt;br /&gt;
# &amp;lt;math&amp;gt;\omega(f):=\Omega(f)\setminus\Theta(f)&amp;lt;/math&amp;gt;.&lt;br /&gt;
'''Remark:''' This notation is usually called the '''[https://en.wikipedia.org/wiki/Big_O_notation big O notation]''' or '''asymptotic notation''' and is also known as the '''Landau symbols''' or '''Bachmann-Landau symbols'''.&lt;br /&gt;
&lt;br /&gt;
== Mathematical rules for asymptotic comparison ==&lt;br /&gt;
&lt;br /&gt;
Let &amp;lt;math&amp;gt;f,g,h:\mathbb{R}\rightarrow\mathbb{R}^+_0&amp;lt;/math&amp;gt; be three functions.&lt;br /&gt;
# Duality: We have &amp;lt;math&amp;gt;f\in\mathcal{O}(g)&amp;lt;/math&amp;gt; if, and only if, &amp;lt;math&amp;gt;g\in\Omega(f)&amp;lt;/math&amp;gt;.&lt;br /&gt;
# Transitivity: If &amp;lt;math&amp;gt;f\in\oplus(g)&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;g\in\oplus(h)&amp;lt;/math&amp;gt;, then &amp;lt;math&amp;gt;f\in\oplus(h)&amp;lt;/math&amp;gt;, where &amp;quot;&amp;lt;math&amp;gt;\oplus&amp;lt;/math&amp;gt;&amp;quot; is any one of &amp;quot;&amp;lt;math&amp;gt;\mathcal{O}&amp;lt;/math&amp;gt;&amp;quot;, &amp;quot;&amp;lt;math&amp;gt;\Omega&amp;lt;/math&amp;gt;&amp;quot;, &amp;quot;&amp;lt;math&amp;gt;\Theta&amp;lt;/math&amp;gt;&amp;quot;, &amp;quot;&amp;lt;math&amp;gt;o&amp;lt;/math&amp;gt;&amp;quot;, and &amp;quot;&amp;lt;math&amp;gt;\omega&amp;lt;/math&amp;gt;&amp;quot;.&lt;br /&gt;
# We have &amp;lt;math&amp;gt;\mathcal{O}(f)\cup\mathcal{O}(g)\subseteq\mathcal{O}(f+g)&amp;lt;/math&amp;gt;.&lt;br /&gt;
# If &amp;lt;math&amp;gt;f\in\mathcal{O}(g)&amp;lt;/math&amp;gt;, then &amp;lt;math&amp;gt;\oplus(f+g)=\oplus(g)&amp;lt;/math&amp;gt;, where &amp;quot;&amp;lt;math&amp;gt;\oplus&amp;lt;/math&amp;gt;&amp;quot; is any one of &amp;quot;&amp;lt;math&amp;gt;\mathcal{O}&amp;lt;/math&amp;gt;&amp;quot;, &amp;quot;&amp;lt;math&amp;gt;\Omega&amp;lt;/math&amp;gt;&amp;quot;, &amp;quot;&amp;lt;math&amp;gt;\Theta&amp;lt;/math&amp;gt;&amp;quot;, &amp;quot;&amp;lt;math&amp;gt;o&amp;lt;/math&amp;gt;&amp;quot;, and &amp;quot;&amp;lt;math&amp;gt;\omega&amp;lt;/math&amp;gt;&amp;quot;.&lt;br /&gt;
# We have &amp;lt;math&amp;gt;f\in\mathcal{O}(g)&amp;lt;/math&amp;gt; if, and only if, the [http://en.wikipedia.org/wiki/Limit_superior_and_limit_inferior limit superior] of the sequence &amp;lt;math&amp;gt;f(n)/g(n)&amp;lt;/math&amp;gt; for &amp;lt;math&amp;gt;n\rightarrow+\infty&amp;lt;/math&amp;gt; is finite.&lt;br /&gt;
# We have &amp;lt;math&amp;gt;f\in o(g)&amp;lt;/math&amp;gt; if, and only if, this limit superior is zero. Note that, due to nonnegativity, this is equivalent to the statement that &amp;lt;math&amp;gt;\lim_{n\rightarrow+\infty}f(n)/g(n)&amp;lt;/math&amp;gt; exists and equals zero.&lt;br /&gt;
# For &amp;lt;math&amp;gt;a,b\in\mathbb{R}&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;a,b&amp;gt;1&amp;lt;/math&amp;gt;, we have &amp;lt;math&amp;gt;\oplus(\log_a(n))=\oplus(\log_b(n))&amp;lt;/math&amp;gt;, where &amp;quot;&amp;lt;math&amp;gt;\oplus&amp;lt;/math&amp;gt;&amp;quot; is any one of &amp;quot;&amp;lt;math&amp;gt;\mathcal{O}&amp;lt;/math&amp;gt;&amp;quot;, &amp;quot;&amp;lt;math&amp;gt;\Omega&amp;lt;/math&amp;gt;&amp;quot;, &amp;quot;&amp;lt;math&amp;gt;\Theta&amp;lt;/math&amp;gt;&amp;quot;, &amp;quot;&amp;lt;math&amp;gt;o&amp;lt;/math&amp;gt;&amp;quot;, and &amp;quot;&amp;lt;math&amp;gt;\omega&amp;lt;/math&amp;gt;&amp;quot; (this follows immediately from the identity &amp;lt;math&amp;gt;\log_a(n)/\log_b(n)=\log_a(b)=&amp;lt;/math&amp;gt; const). In particular, the base of a logarithm may be omitted: &amp;lt;math&amp;gt;\oplus(\log(n))=\oplus(\log_a(n))&amp;lt;/math&amp;gt;.&lt;br /&gt;
# For all &amp;lt;math&amp;gt;k,\ell\in\mathbb{R}&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;k&amp;lt;\ell&amp;lt;/math&amp;gt;, we have &amp;lt;math&amp;gt;n^k\in o(n^\ell)&amp;lt;/math&amp;gt;.&lt;br /&gt;
# For all &amp;lt;math&amp;gt;k\in\mathbb{R}^+&amp;lt;/math&amp;gt;, we have &amp;lt;math&amp;gt;\log^k(n)\in o(n)&amp;lt;/math&amp;gt;.&lt;br /&gt;
# For all &amp;lt;math&amp;gt;k,a\in\mathbb{R}&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;a&amp;gt;1&amp;lt;/math&amp;gt;, we have &amp;lt;math&amp;gt;n^k\in o(a^n)&amp;lt;/math&amp;gt;.&lt;br /&gt;
# For all &amp;lt;math&amp;gt;a,b\in\mathbb{R}&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;1&amp;lt;a&amp;lt;b&amp;lt;/math&amp;gt;, we have &amp;lt;math&amp;gt;a^n\in o(b^n)&amp;lt;/math&amp;gt;.&lt;br /&gt;
# For all &amp;lt;math&amp;gt;a\in\mathbb{R}&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;a&amp;gt;1&amp;lt;/math&amp;gt;, we have &amp;lt;math&amp;gt;a^n\in o(n!)&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
== Comparison with specific functions ==&lt;br /&gt;
&lt;br /&gt;
A function &amp;lt;math&amp;gt;f:\mathbb{N}\rightarrow\mathbb{R}^+_0&amp;lt;/math&amp;gt; is said to be&lt;br /&gt;
# '''linear''' if &amp;lt;math&amp;gt;f\in\Theta(n)&amp;lt;/math&amp;gt;;&lt;br /&gt;
# '''quadratic''' if &amp;lt;math&amp;gt;f\in\Theta(n^2)&amp;lt;/math&amp;gt;;&lt;br /&gt;
# '''cubic''' if &amp;lt;math&amp;gt;f\in\Theta(n^3)&amp;lt;/math&amp;gt;;&lt;br /&gt;
# '''logarithmic''' if &amp;lt;math&amp;gt;f\in\Theta(\log(n))&amp;lt;/math&amp;gt;;&lt;br /&gt;
# '''&amp;quot;n-log-n&amp;quot;''' if &amp;lt;math&amp;gt;f\in\Theta(n\cdot\log(n))&amp;lt;/math&amp;gt;;&lt;br /&gt;
# '''polynomial''' if there is a polynomial &amp;lt;math&amp;gt;p:\mathbb{N}\rightarrow\mathbb{R}&amp;lt;/math&amp;gt; such that &amp;lt;math&amp;gt;f\in\mathcal{O}(p)&amp;lt;/math&amp;gt;;&lt;br /&gt;
# '''subexponential''' if &amp;lt;math&amp;gt;f\in o(a^n)&amp;lt;/math&amp;gt; for every &amp;lt;math&amp;gt;a\in\mathbb{R}&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;a&amp;gt;1&amp;lt;/math&amp;gt;;&lt;br /&gt;
# '''exponential''' if there are &amp;lt;math&amp;gt;a,b\in\mathbb{R}&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;a&amp;gt;b&amp;gt;1&amp;lt;/math&amp;gt;, such that &amp;lt;math&amp;gt;f\in\mathcal{O}(a^n)&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;f\in\Omega(b^n)&amp;lt;/math&amp;gt;;&lt;br /&gt;
# '''factorial''' if &amp;lt;math&amp;gt;f\in\Theta(n!)&amp;lt;/math&amp;gt;.&lt;br /&gt;
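A rough empirical classification can be sketched in Python. The helper below is purely heuristic (a finite sample cannot establish a &amp;lt;math&amp;gt;\Theta&amp;lt;/math&amp;gt;-statement); the function names and the tolerance are illustrative choices:

```python
import math

def looks_like_theta(f, g, samples=(10**3, 10**4, 10**5, 10**6)):
    """Heuristic check that f could be in Theta(g): the ratio f(n)/g(n)
    stays within fixed positive bounds on the sampled range."""
    ratios = [f(n) / g(n) for n in samples]
    return min(ratios) > 0 and max(ratios) / min(ratios) < 10

f = lambda n: 5 * n * math.log(n) + 3 * n             # an "n-log-n" function
assert looks_like_theta(f, lambda n: n * math.log(n)) # ratio tends to 5
assert not looks_like_theta(f, lambda n: n**2)        # ratio tends to 0
```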
&lt;br /&gt;
'''Remark:''' Note that the notion of polynomial is based on an &amp;quot;&amp;lt;math&amp;gt;\mathcal{O}&amp;lt;/math&amp;gt;&amp;quot;, not on a &amp;quot;&amp;lt;math&amp;gt;\Theta&amp;lt;/math&amp;gt;&amp;quot;. In fact, in this context, &amp;quot;polynomial&amp;quot; is usually used short for &amp;quot;polynomially bounded from above&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
== Multidimensional case ==&lt;br /&gt;
&lt;br /&gt;
Let &amp;lt;math&amp;gt;k\in\mathbb{N}&amp;lt;/math&amp;gt; and let &amp;lt;math&amp;gt;f:\mathbb{N}^k\rightarrow\mathbb{R}^+_0&amp;lt;/math&amp;gt;. The following sets (a.k.a. '''classes''') are defined for &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;:&lt;br /&gt;
# &amp;lt;math&amp;gt;\mathcal{O}(f)&amp;lt;/math&amp;gt; consists of all functions &amp;lt;math&amp;gt;g:\mathbb{N}^k\rightarrow\mathbb{R}^+_0&amp;lt;/math&amp;gt; such that there are &amp;lt;math&amp;gt;N_g,\,c_g\in\mathbb{N}&amp;lt;/math&amp;gt;  that fulfill &amp;lt;math&amp;gt;g(n_1,\ldots,n_k)\leq c_g\cdot f(n_1,\ldots,n_k)&amp;lt;/math&amp;gt; for all &amp;lt;math&amp;gt;n_1,\ldots,n_k\in\mathbb{N}&amp;lt;/math&amp;gt; such that &amp;lt;math&amp;gt;n_1,\ldots,n_k\geq N_g&amp;lt;/math&amp;gt;;&lt;br /&gt;
# &amp;lt;math&amp;gt;\Omega(f)&amp;lt;/math&amp;gt; consists of all functions &amp;lt;math&amp;gt;g:\mathbb{N}^k\rightarrow\mathbb{R}^+_0&amp;lt;/math&amp;gt; such that there are &amp;lt;math&amp;gt;N_g,\,c_g\in\mathbb{N}&amp;lt;/math&amp;gt;  that fulfill &amp;lt;math&amp;gt;g(n_1,\ldots,n_k)\geq c_g\cdot f(n_1,\ldots,n_k)&amp;lt;/math&amp;gt; for all &amp;lt;math&amp;gt;n_1,\ldots,n_k\in\mathbb{N}&amp;lt;/math&amp;gt; such that &amp;lt;math&amp;gt;n_1,\ldots,n_k\geq N_g&amp;lt;/math&amp;gt;;&lt;br /&gt;
# &amp;lt;math&amp;gt;\Theta(f):=\mathcal{O}(f)\cap\Omega(f)&amp;lt;/math&amp;gt;;&lt;br /&gt;
# &amp;lt;math&amp;gt;o(f):=\mathcal{O}(f)\setminus\Theta(f)&amp;lt;/math&amp;gt;;&lt;br /&gt;
# &amp;lt;math&amp;gt;\omega(f):=\Omega(f)\setminus\Theta(f)&amp;lt;/math&amp;gt;.&lt;/div&gt;</summary>
		<author><name>Weihe</name></author>
	</entry>
	<entry>
		<id>https://wiki.algo.informatik.tu-darmstadt.de/index.php?title=Algorithms_and_correctness&amp;diff=3848</id>
		<title>Algorithms and correctness</title>
		<link rel="alternate" type="text/html" href="https://wiki.algo.informatik.tu-darmstadt.de/index.php?title=Algorithms_and_correctness&amp;diff=3848"/>
		<updated>2016-04-28T12:49:57Z</updated>

		<summary type="html">&lt;p&gt;Weihe: /* Invariant */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Algorithmic problem ==&lt;br /&gt;
&lt;br /&gt;
An algorithmic problem is described by:&lt;br /&gt;
# a set of feasible '''inputs''' (a.k.a. '''instances''');&lt;br /&gt;
# for each input a set of '''outputs''' (a.k.a. '''feasible solutions''' or '''solutions''', for short), which may be empty;&lt;br /&gt;
# optionally, an '''objective function''', which assigns a real number to each feasible solution (the ''quality'' of the solution). If an objective function is specified, it is also specified whether the objective function is to be ''minimized'' or to be ''maximized''.&lt;br /&gt;
&lt;br /&gt;
'''Remark:'''&lt;br /&gt;
Typically, sets such as the set of all feasible inputs to a problem and the set of all feasible outputs to an input are given by a [https://en.wikipedia.org/wiki/Genus%E2%80%93differentia_definition genus-differentia definition], which is an example of [https://en.wikipedia.org/wiki/Intensional_definition intensional definitions]. More specifically, an [https://en.wikipedia.org/wiki/Abstract_data_type abstract data type] is given along with the restrictions that must be fulfilled by feasible inputs and outputs, respectively. Sometimes, the abstract data type is called the set of all inputs / outputs, and the members of the respective set that fulfill the restrictions are then the ''feasible'' inputs / outputs.&lt;br /&gt;
&lt;br /&gt;
== Instructions, operations and subroutines ==&lt;br /&gt;
&lt;br /&gt;
[http://en.wikipedia.org/wiki/Imperative_programming Imperative programming] languages offer possibilities to specify '''instructions''' (a.k.a. '''statements'''). An instruction specifies a sequence of '''(machine) operations'''; executing the instruction on a machine means that the machine performs this sequence of operations. Instructions may be bundled as '''subroutines''' (a.k.a. '''procedures''', '''functions''', '''methods''').&lt;br /&gt;
&lt;br /&gt;
== (Source) programs and processes ==&lt;br /&gt;
&lt;br /&gt;
A '''program''' is a sequence of instructions in some programming language. A program written in a higher programming language, which is to be compiled or interpreted, is often called a '''source program'''. A '''process''' means the execution of a program on a real or virtual machine.&lt;br /&gt;
&lt;br /&gt;
== Termination ==&lt;br /&gt;
&lt;br /&gt;
# A loop has at least one '''break condition'''; this is a part of the (source) program of the loop. '''Termination''' of a loop refers to the process and means that the process evaluates the break condition and this evaluation yields '''true'''.&lt;br /&gt;
# '''Termination''' of a subroutine means that the process runs into an unconditional return-statement or, alternatively, runs into a conditional return-statement and this condition yields '''true''' (common high-level languages only provide unconditional return-statements; a return statement at the end of a void-subroutine may be implicit in many languages).&lt;br /&gt;
# '''Termination''' of a recursion means that every branch of the recursion has a finite depth, that is, runs into a recursive call that does not call the recursive subroutine any further.&lt;br /&gt;
&lt;br /&gt;
== Algorithm ==&lt;br /&gt;
&lt;br /&gt;
# An algorithm is associated with an [[#Algorithmic problem|algorithmic problem]].&lt;br /&gt;
# An algorithm is an abstract description of a process and can be formulated (a.k.a. '''implemented''') as a subroutine. This subroutine is required to compute some feasible output for any given input of the associated algorithmic problem.&lt;br /&gt;
# If an objective function is given, the objective function value of the generated solution is a criterion for the quality of an algorithm. More specifically:&lt;br /&gt;
## in case of ''minimization'': a low objective function value is favored;&lt;br /&gt;
## in case of ''maximization'': a high objective function value is favored.&lt;br /&gt;
&lt;br /&gt;
== Iterative and recursive algorithms ==&lt;br /&gt;
&lt;br /&gt;
# Basically, a non-trivial algorithm is a loop or recursion plus some '''preprocessing''' (a.k.a. '''initialization''') and/or some '''postprocessing'''.&lt;br /&gt;
# If an algorithm consists of two or more loops/recursions that are strictly disjoint parts of an implementation of the algorithm (and thus executed strictly after each other), it may be viewed as two or more algorithms that are executed after each other. Therefore, without loss of generality, an algorithm may be viewed as ''one'' loop or ''one'' recursion plus some pre/postprocessing.&lt;br /&gt;
# An iteration of a loop may contain another loop or recursion. Analogously, a recursive call may contain another loop or recursion. We say that a loop/recursion inside another loop/recursion is '''nested''' or the '''inner''' loop/recursion. Correspondingly, the '''nesting''' loop/recursion is the '''outer''' one.&lt;br /&gt;
&lt;br /&gt;
'''Remarks:'''&lt;br /&gt;
# Clearly, a loop may be transformed into a recursion and vice versa. So, every algorithm may be formulated either as a loop or as a recursion. However, in most cases, one of these two options looks simpler and &amp;quot;more natural&amp;quot; than the other one.&lt;br /&gt;
# Formulating an algorithm as a loop might be favorable in many cases because a loop allows more control than a recursion. More specifically, a loop may be implemented as an [http://en.wikipedia.org/wiki/Iterator iterator] whose method for going one step forward implements the execution of one iteration of the loop. Such an implementation allows one to terminate the loop early, to suspend execution and resume execution later on, and to execute some additional instructions between two iterations (e.g. for visualization or for testing purposes). The crucial point is that, for all of these purposes, the code of the algorithm need not be modified (the source need not even be available).&lt;br /&gt;
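The iterator idea in the second remark can be sketched with a Python generator; the algorithm chosen here (the Collatz iteration) is purely illustrative:

```python
def collatz_steps(n):
    """One yielded value per iteration of the loop; the caller regains
    control between iterations without modifying this code."""
    while n != 1:  # break condition of the loop
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        yield n

# The caller can inspect state between iterations (e.g. for visualization
# or testing) and can terminate early -- the algorithm's code is untouched.
states = []
for state in collatz_steps(6):
    states.append(state)
    if len(states) >= 100:  # early termination is possible at any point
        break

assert states == [3, 10, 5, 16, 8, 4, 2, 1]
```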
&lt;br /&gt;
== Correctness ==&lt;br /&gt;
&lt;br /&gt;
An algorithm is correct if three conditions are fulfilled for every feasible input:&lt;br /&gt;
# All instructions are '''well-defined'''. For example, an instruction that divides by zero, exceeds the range of a numerical type (''overflow''), accesses an array component outside the array's index range, or accesses an attribute or method of a '''void''' / '''null''' pointer is ill-defined.&lt;br /&gt;
# Each loop and recursion '''terminates''', that is, the break condition is fulfilled after a finite number of iterative / recursive steps. In case of a recursion with more than one recursive call inside an execution of the recursive routine, this means that every branch of the recursion tree must reach the break condition after a finite number of recursive descents.&lt;br /&gt;
# If the given input admits no feasible solution, this information is delivered by the algorithm. Otherwise, the algorithm delivers a feasible output.&lt;br /&gt;
&lt;br /&gt;
== Invariant ==&lt;br /&gt;
&lt;br /&gt;
# The '''invariant''' of a loop consists of all assertions that are true immediately before the first iteration, immediately after the last iteration, and between two iterations. These assertions are called the '''invariant assertions''' of this loop.&lt;br /&gt;
# The '''invariant''' of a recursion consists of all assertions that are fulfilled immediately before '''or''' after each recursive call (see [[#Induction in case of a recursion|below]]). These assertions are called the '''invariant assertions''' of this recursion.&lt;br /&gt;
&lt;br /&gt;
'''Remarks:'''&lt;br /&gt;
# Typically, an invariant assertion is ''parameterized'', for example:&lt;br /&gt;
## in case of an iteration: usually by the number of iterations performed so far;&lt;br /&gt;
## in case of a recursion: the recursion depth or a dynamically changing value in the auxiliary data (e.g. the length of a sequence).&lt;br /&gt;
# If an algorithm only accesses its input and its own local data, the invariant assertions are assertions about its input and these local data and nothing else.&lt;br /&gt;
# The invariant of an algorithm has a specific function in the [[#Correctness proofs|correctness proof]] for the algorithm. Typically, in a documentation, only those invariant assertions are stated that are relevant for this purpose. &lt;br /&gt;
# The invariant is also essential for testing purposes: a [http://en.wikipedia.org/wiki/White-box_testing white-box test] of a loop/recursion should definitely check maintenance of all relevant invariant assertions.&lt;br /&gt;
# Last but not least, the invariant is the very &amp;quot;essence&amp;quot; of the algorithmic idea; deeply understanding the algorithm amounts to understanding the invariant.&lt;br /&gt;
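A minimal sketch of a loop invariant, parameterized by the number of iterations performed so far (the assertions are written out explicitly only for illustration; in production code they would typically be omitted or confined to tests):

```python
def sum_of(values):
    """Iterative summation with its invariant checked explicitly."""
    total = 0
    for i in range(len(values)):
        # Invariant, parameterized by i (iterations performed so far):
        # total == values[0] + ... + values[i-1]
        assert total == sum(values[:i])
        total += values[i]
    # The invariant after the last iteration yields correctness:
    assert total == sum(values)
    return total

assert sum_of([2, 5, 7]) == 14
```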
&lt;br /&gt;
== Variant ==&lt;br /&gt;
&lt;br /&gt;
# The '''variant''' of a loop consists of all differences between the state immediately before an iteration and the state immediately after that iteration (typically, but not exclusively, the values of integral loop variables).&lt;br /&gt;
# The '''variant''' of a recursion consists of all differences between the input of a recursive call &amp;lt;math&amp;gt;X&amp;lt;/math&amp;gt; and the inputs of all recursive calls that are directly called in &amp;lt;math&amp;gt;X&amp;lt;/math&amp;gt; (typically, but not exclusively, some integral measure of the size of the input).&lt;br /&gt;
&lt;br /&gt;
'''Remarks:'''&lt;br /&gt;
# If an algorithm only accesses its input and its own local data, the variant describes changes to the contents of its input and these local data, and nothing else.&lt;br /&gt;
# The variant of an algorithm has a specific function in the [[#Correctness proofs|correctness proof]] for the algorithm. Typically, in a documentation, only those variant assertions are stated that are relevant for this purpose. &lt;br /&gt;
# The variant is also essential for testing purposes: a [http://en.wikipedia.org/wiki/White-box_testing white-box test] of a loop/recursion should definitely check all relevant changes.&lt;br /&gt;
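A minimal sketch of a loop variant, using Euclid's algorithm as the illustrative example: the relevant difference between the states before and after an iteration is that the nonnegative integer &amp;lt;math&amp;gt;b&amp;lt;/math&amp;gt; strictly decreases, which already proves termination.

```python
def gcd(a, b):
    """Euclid's algorithm; the variant is the strict decrease of b."""
    while b != 0:  # break condition
        old_b = b
        a, b = b, a % b
        # Variant: b is a nonnegative integer and strictly decreases in
        # every iteration, so the break condition must eventually hold.
        assert 0 <= b < old_b
    return a

assert gcd(48, 18) == 6
```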
&lt;br /&gt;
== Correctness proofs ==&lt;br /&gt;
&lt;br /&gt;
# Assuming the algorithm is correct:&lt;br /&gt;
## The variant implies that the break condition will be fulfilled,&lt;br /&gt;
### in case of a loop: after a finite number of iterations;&lt;br /&gt;
### in case of a recursion: after a finite number of recursive calls in each branch of the recursion tree.&lt;br /&gt;
## Correctness of the output follows from what the invariant says about the state immediately after the last iteration.&lt;br /&gt;
# Proving the invariant amounts to an induction on the number of iterations performed so far / the recursion parameter over which the variant of the recursion is defined.&lt;br /&gt;
## The invariant is the induction hypothesis.&lt;br /&gt;
## Proving the induction basis amounts to proving that the preprocessing (initialization) establishes the invariant.&lt;br /&gt;
## Proving the induction step amounts to proving that an iteration / a recursive call maintains the invariant.&lt;br /&gt;
&lt;br /&gt;
'''Caveat:'''&lt;br /&gt;
Quite frequently, validity of the invariant for the state immediately before the first iteration is based on a separate definition of the &amp;quot;null case&amp;quot;. For example, the empty sum (sum over no summands) is 0 by definition, the empty product is 1, the factorial of 0 is 1, the 0-th power of a number is 1, etc. To be on the safe side in such a case, the particular induction step from 0 to 1 should be checked separately.&lt;br /&gt;
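These null-case conventions match how Python's standard library defines the corresponding operations, which makes them easy to check:

```python
import math

assert sum([]) == 0            # empty sum
assert math.prod([]) == 1      # empty product (Python 3.8+)
assert math.factorial(0) == 1  # 0! = 1
assert 5 ** 0 == 1             # 0-th power
```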
&lt;br /&gt;
'''Remarks:'''&lt;br /&gt;
# In a nutshell, the variant proves termination, and the invariant proves that the output is correct.&lt;br /&gt;
# Typically, well-definedness of all operations is not considered explicitly in correctness proofs. The reason is that well-definedness is non-obvious in rare cases only.&lt;br /&gt;
# In many cases, correctness of the output follows immediately from what the invariant says about the state immediately after the last iteration; in other cases, additional arguments are necessary to prove correctness of the output.&lt;br /&gt;
# The variant is also essential to estimate, asymptotically, the number of iterations / the recursion depth.&lt;br /&gt;
&lt;br /&gt;
== Induction in case of a recursion ==&lt;br /&gt;
In principle, there are two ways to define the induction, in a sense mutually opposite. Typically, only one of these two ways is viable:&lt;br /&gt;
# The depth of a recursive call is the induction variable; the original call to this subroutine has to ensure the induction basis; the induction step has to be ensured by every descent in the recursion tree. Example: [[Binary search|binary search]].&lt;br /&gt;
# The induction variable is some parameter of the input. The invariant simply says that the output of a recursive call is correct. In a descent in the recursion tree, that parameter must strictly decrease. The recursion anchor must ensure the induction basis. Examples: [[Mergesort|mergesort]] and [[Quicksort|quicksort]]; in these examples, the size of the sequence is the induction variable.&lt;/div&gt;</summary>
		<author><name>Weihe</name></author>
	</entry>
	<entry>
		<id>https://wiki.algo.informatik.tu-darmstadt.de/index.php?title=Algorithms_and_correctness&amp;diff=3847</id>
		<title>Algorithms and correctness</title>
		<link rel="alternate" type="text/html" href="https://wiki.algo.informatik.tu-darmstadt.de/index.php?title=Algorithms_and_correctness&amp;diff=3847"/>
		<updated>2016-04-28T11:57:25Z</updated>

		<summary type="html">&lt;p&gt;Weihe: /* Correctness proofs */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Algorithmic problem ==&lt;br /&gt;
&lt;br /&gt;
An algorithmic problem is described by:&lt;br /&gt;
# a set of feasible '''inputs''' (a.k.a. '''instances''');&lt;br /&gt;
# for each input a set of '''outputs''' (a.k.a. '''feasible solutions''' or '''solutions''', for short), which may be empty;&lt;br /&gt;
# optionally, an '''objective function''', which assigns a real number to each feasible solution (the ''quality'' of the solution). If an objective function is specified, it is also specified whether the objective function is to be ''minimized'' or to be ''maximized''.&lt;br /&gt;
&lt;br /&gt;
'''Remark:'''&lt;br /&gt;
Typically, sets such as the set of all feasible inputs to a problem and the set of all feasible outputs to an input are given by a [https://en.wikipedia.org/wiki/Genus%E2%80%93differentia_definition genus-differentia definition], which is an example of [https://en.wikipedia.org/wiki/Intensional_definition intensional definitions]. More specifically, an [https://en.wikipedia.org/wiki/Abstract_data_type abstract data type] is given along with the restrictions that must be fulfilled by feasible inputs and outputs, respectively. Sometimes, the abstract data type is called the set of all inputs / outputs, and the members of the respective set that fulfill the restrictions are then the ''feasible'' inputs / outputs.&lt;br /&gt;
&lt;br /&gt;
== Instructions, operations and subroutines ==&lt;br /&gt;
&lt;br /&gt;
[http://en.wikipedia.org/wiki/Imperative_programming Imperative programming] languages offer possibilities to specify '''instructions''' (a.k.a. '''statements'''). An instruction specifies a sequence of '''(machine) operations'''; executing the instruction on a machine means that the machine performs this sequence of operations. Instructions may be bundled as '''subroutines''' (a.k.a. '''procedures''', '''functions''', '''methods''').&lt;br /&gt;
&lt;br /&gt;
== (Source) programs and processes ==&lt;br /&gt;
&lt;br /&gt;
A '''program''' is a sequence of instructions in some programming language. A program written in a higher programming language, which is to be compiled or interpreted, is often called a '''source program'''. A '''process''' means the execution of a program on a real or virtual machine.&lt;br /&gt;
&lt;br /&gt;
== Termination ==&lt;br /&gt;
&lt;br /&gt;
# A loop has at least one '''break condition'''; this is a part of the (source) program of the loop. '''Termination''' of a loop refers to the process and means that the process evaluates the break condition and this evaluation yields '''true'''.&lt;br /&gt;
# '''Termination''' of a subroutine means that the process runs into an unconditional return-statement or, alternatively, runs into a conditional return-statement and this condition yields '''true''' (common high-level languages only provide unconditional return-statements; a return statement at the end of a void-subroutine may be implicit in many languages).&lt;br /&gt;
# '''Termination''' of a recursion means that every branch of the recursion has a finite depth, that is, runs into a recursive call that does not call the recursive subroutine any further.&lt;br /&gt;
&lt;br /&gt;
== Algorithm ==&lt;br /&gt;
&lt;br /&gt;
# An algorithm is associated with an [[#Algorithmic problem|algorithmic problem]].&lt;br /&gt;
# An algorithm is an abstract description of a process and can be formulated (a.k.a. '''implemented''') as a subroutine. This subroutine is required to compute some feasible output for any given input of the associated algorithmic problem.&lt;br /&gt;
# If an objective function is given, the objective function value of the generated solution is a criterion for the quality of an algorithm. More specifically:&lt;br /&gt;
## in case of ''minimization'': a low objective function value is favored;&lt;br /&gt;
## in case of ''maximization'': a high objective function value is favored.&lt;br /&gt;
&lt;br /&gt;
== Iterative and recursive algorithms ==&lt;br /&gt;
&lt;br /&gt;
# Basically, a non-trivial algorithm is a loop or recursion plus some '''preprocessing''' (a.k.a. '''initialization''') and/or some '''postprocessing'''.&lt;br /&gt;
# If an algorithm consists of two or more loops/recursions that are strictly disjoint parts of an implementation of the algorithm (and thus executed strictly after each other), it may be viewed as two or more algorithms that are executed after each other. Therefore, without loss of generality, an algorithm may be viewed as ''one'' loop or ''one'' recursion plus some pre/postprocessing.&lt;br /&gt;
# An iteration of a loop may contain another loop or recursion. Analogously, a recursive call may contain another loop or recursion. We say that a loop/recursion inside another loop/recursion is '''nested''' or the '''inner''' loop/recursion. Correspondingly, the '''nesting''' loop/recursion is the '''outer''' one.&lt;br /&gt;
&lt;br /&gt;
'''Remarks:'''&lt;br /&gt;
# Clearly, a loop may be transformed into a recursion and vice versa. So, every algorithm may be formulated either as a loop or as a recursion. However, in most cases, one of these two options looks simpler and &amp;quot;more natural&amp;quot; than the other one.&lt;br /&gt;
# Formulating an algorithm as a loop might be favorable in many cases because a loop allows more control than a recursion. More specifically, a loop may be implemented as an [http://en.wikipedia.org/wiki/Iterator iterator] whose method for going one step forward implements the execution of one iteration of the loop. Such an implementation allows one to terminate the loop early, to suspend execution and resume execution later on, and to execute some additional instructions between two iterations (e.g. for visualization or for testing purposes). The crucial point is that, for all of these purposes, the code of the algorithm need not be modified (the source need not even be available).&lt;br /&gt;
&lt;br /&gt;
== Correctness ==&lt;br /&gt;
&lt;br /&gt;
An algorithm is correct if three conditions are fulfilled for every feasible input:&lt;br /&gt;
# All instructions are '''well-defined'''. For example, an instruction that divides by zero, exceeds the range of a numerical type (''overflow''), accesses an array component outside the array's index range, or accesses an attribute or method of a '''void''' / '''null''' pointer is ill-defined.&lt;br /&gt;
# Each loop and recursion '''terminates''', that is, the break condition is fulfilled after a finite number of iterative / recursive steps. In case of a recursion with more than one recursive call inside an execution of the recursive routine, this means that every branch of the recursion tree must reach the break condition after a finite number of recursive descents.&lt;br /&gt;
# If the given input admits no feasible solution, this information is delivered by the algorithm. Otherwise, the algorithm delivers a feasible output.&lt;br /&gt;
&lt;br /&gt;
== Invariant ==&lt;br /&gt;
&lt;br /&gt;
# The '''invariant''' of a loop consists of all assertions that are true immediately before the first iteration, immediately after the last iteration, and between two iterations. These assertions are called the '''invariant assertions''' of this loop.&lt;br /&gt;
# The '''invariant''' of a recursion consists of all assertions that are fulfilled immediately before '''or''' after each recursive call (see [[#Induction in case of a recursion|below]]). These assertions are called the '''invariant assertions''' of this recursion.&lt;br /&gt;
&lt;br /&gt;
'''Remarks:'''&lt;br /&gt;
# If an algorithm only accesses its input and its own local data, the invariant assertions are assertions about its input and these local data and nothing else.&lt;br /&gt;
# The invariant of an algorithm has a specific function in the [[#Correctness proofs|correctness proof]] for the algorithm. Typically, in a documentation, only those invariant assertions are stated that are relevant for this purpose. &lt;br /&gt;
# The invariant is also essential for testing purposes: a [http://en.wikipedia.org/wiki/White-box_testing white-box test] of a loop/recursion should definitely check maintenance of all relevant invariant assertions.&lt;br /&gt;
# Last but not least, the invariant is the very &amp;quot;essence&amp;quot; of the algorithmic idea; deeply understanding the algorithm amounts to understanding the invariant.&lt;br /&gt;
&lt;br /&gt;
== Variant ==&lt;br /&gt;
&lt;br /&gt;
# The '''variant''' of a loop consists of all differences between the state immediately before an iteration and the state immediately after that iteration (typically, but not exclusively, the values of integral loop variables).&lt;br /&gt;
# The '''variant''' of a recursion consists of all differences between the input of a recursive call &amp;lt;math&amp;gt;X&amp;lt;/math&amp;gt; and the inputs of all recursive calls that are directly called in &amp;lt;math&amp;gt;X&amp;lt;/math&amp;gt; (typically, but not exclusively, some integral measure of the size of the input).&lt;br /&gt;
&lt;br /&gt;
'''Remarks:'''&lt;br /&gt;
# If an algorithm only accesses its input and its own local data, the variant describes changes to the contents of its input and these local data, and nothing else.&lt;br /&gt;
# The variant of an algorithm has a specific function in the [[#Correctness proofs|correctness proof]] for the algorithm. Typically, in a documentation, only those variant assertions are stated that are relevant for this purpose. &lt;br /&gt;
# The variant is also essential for testing purposes: a [http://en.wikipedia.org/wiki/White-box_testing white-box test] of a loop/recursion should definitely check all relevant changes.&lt;br /&gt;
&lt;br /&gt;
== Correctness proofs ==&lt;br /&gt;
&lt;br /&gt;
# Assuming the algorithm is correct:&lt;br /&gt;
## The variant implies that the break condition will be fulfilled,&lt;br /&gt;
### in case of a loop: after a finite number of iterations;&lt;br /&gt;
### in case of a recursion: after a finite number of recursive calls in each branch of the recursion tree.&lt;br /&gt;
## Correctness of the output follows from what the invariant says about the state immediately after the last iteration.&lt;br /&gt;
# Proving the invariant amounts to an induction on the number of iterations performed so far / the recursion parameter over which the variant of the recursion is defined.&lt;br /&gt;
## The invariant is the induction hypothesis.&lt;br /&gt;
## Proving the induction basis amounts to proving that the preprocessing (initialization) establishes the invariant.&lt;br /&gt;
## Proving the induction step amounts to proving that an iteration / a recursive call maintains the invariant.&lt;br /&gt;
&lt;br /&gt;
'''Caveat:'''&lt;br /&gt;
Quite frequently, validity of the invariant for the state immediately before the first iteration is based on a separate definition of the &amp;quot;null case&amp;quot;. For example, the empty sum (sum over no summands) is 0 by definition, the empty product is 1, the factorial of 0 is 1, the 0-th power of a number is 1, etc. To be on the safe side in such a case, the particular induction step from 0 to 1 should be checked separately.&lt;br /&gt;
&lt;br /&gt;
'''Remarks:'''&lt;br /&gt;
# In a nutshell, the variant proves termination, and the invariant proves that the output is correct.&lt;br /&gt;
# Typically, well-definedness of all operations is not considered explicitly in correctness proofs. The reason is that well-definedness is non-obvious in rare cases only.&lt;br /&gt;
# In many cases, correctness of the output follows immediately from what the invariant says about the state immediately after the last iteration; in other cases, additional arguments are necessary to prove correctness of the output.&lt;br /&gt;
# The variant is also essential to estimate, asymptotically, the number of iterations / the recursion depth.&lt;br /&gt;
&lt;br /&gt;
== Induction in case of a recursion ==&lt;br /&gt;
In principle, there are two ways to define the induction, in a sense mutually opposite. Typically, only one of these two ways is viable:&lt;br /&gt;
# The depth of a recursive call is the induction variable; the original call to this subroutine has to ensure the induction basis; the induction step has to be ensured by every descent in the recursion tree. Example: [[Binary search|binary search]].&lt;br /&gt;
# The induction variable is some parameter of the input. The invariant simply says that the output of a recursive call is correct. In a descent in the recursion tree, that parameter must strictly decrease. The recursion anchor must ensure the induction basis. Examples: [[Mergesort|mergesort]] and [[Quicksort|quicksort]]; in these examples, the size of the sequence is the induction variable.&lt;/div&gt;</summary>
		<author><name>Weihe</name></author>
	</entry>
	<entry>
		<id>https://wiki.algo.informatik.tu-darmstadt.de/index.php?title=Algorithms_and_correctness&amp;diff=3846</id>
		<title>Algorithms and correctness</title>
		<link rel="alternate" type="text/html" href="https://wiki.algo.informatik.tu-darmstadt.de/index.php?title=Algorithms_and_correctness&amp;diff=3846"/>
		<updated>2016-04-28T11:56:53Z</updated>

		<summary type="html">&lt;p&gt;Weihe: /* Correctness proofs */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Algorithmic problem ==&lt;br /&gt;
&lt;br /&gt;
An algorithmic problem is described by:&lt;br /&gt;
# a set of feasible '''inputs''' (a.k.a. '''instances''');&lt;br /&gt;
# for each input a set of '''outputs''' (a.k.a. '''feasible solutions''' or '''solutions''', for short), which may be empty;&lt;br /&gt;
# optionally, an '''objective function''', which assigns a real number to each feasible solution (the ''quality'' of the solution). If an objective function is specified, it is also specified whether the objective function is to be ''minimized'' or to be ''maximized''.&lt;br /&gt;
&lt;br /&gt;
'''Remark:'''&lt;br /&gt;
Typically, sets such as the set of all feasible inputs to a problem and the set of all feasible outputs to an input are given by a [https://en.wikipedia.org/wiki/Genus%E2%80%93differentia_definition genus-differentia definition], which is an example of [https://en.wikipedia.org/wiki/Intensional_definition intensional definitions]. More specifically, an [https://en.wikipedia.org/wiki/Abstract_data_type abstract data type] is given along with the restrictions that must be fulfilled by feasible inputs and outputs, respectively. Sometimes, the abstract data type is called the set of all inputs / outputs, and the members of the respective set that fulfill the restrictions are then the ''feasible'' inputs / outputs.&lt;br /&gt;
&lt;br /&gt;
== Instructions, operations and subroutines ==&lt;br /&gt;
&lt;br /&gt;
[http://en.wikipedia.org/wiki/Imperative_programming Imperative programming] languages offer possibilities to specify '''instructions''' (a.k.a. '''statements'''). An instruction specifies a sequence of '''(machine) operations'''; executing the instruction on a machine means that the machine performs this sequence of operations. Instructions may be bundled as '''subroutines''' (a.k.a. '''procedures''', '''functions''', '''methods''').&lt;br /&gt;
&lt;br /&gt;
== (Source) programs and processes ==&lt;br /&gt;
&lt;br /&gt;
A '''program''' is a sequence of instructions in some programming language. A program written in a higher programming language, which is to be compiled or interpreted, is often called a '''source program'''. A '''process''' means the execution of a program on a real or virtual machine.&lt;br /&gt;
&lt;br /&gt;
== Termination ==&lt;br /&gt;
&lt;br /&gt;
# A loop has at least one '''break condition'''; this is a part of the (source) program of the loop. '''Termination''' of a loop refers to the process and means that the process evaluates the break condition and this evaluation yields '''true'''.&lt;br /&gt;
# '''Termination''' of a subroutine means that the process runs into an unconditional return-statement or, alternatively, runs into a conditional return-statement and this condition yields '''true''' (common high-level languages only provide unconditional return-statements; a return statement at the end of a void-subroutine may be implicit in many languages).&lt;br /&gt;
# '''Termination''' of a recursion means that every branch of the recursion has a finite depth, that is, runs into a recursive call that does not call the recursive subroutine any further.&lt;br /&gt;
&lt;br /&gt;
== Algorithm ==&lt;br /&gt;
&lt;br /&gt;
# An algorithm is associated with an [[#Algorithmic problem|algorithmic problem]].&lt;br /&gt;
# An algorithm is an abstract description of a process and can be formulated (a.k.a. '''implemented''') as a subroutine. This subroutine is required to compute some feasible output for any given input of the associated algorithmic problem.&lt;br /&gt;
# If an objective function is given, the objective function value of the generated solution is a criterion for the quality of an algorithm. More specifically:&lt;br /&gt;
## in case of ''minimization'': a low objective function value is favored;&lt;br /&gt;
## in case of ''maximization'': a high objective function value is favored.&lt;br /&gt;
&lt;br /&gt;
== Iterative and recursive algorithms ==&lt;br /&gt;
&lt;br /&gt;
# Basically, a non-trivial algorithm is a loop or recursion plus some '''preprocessing''' (a.k.a. '''initialization''') and/or some '''postprocessing'''.&lt;br /&gt;
# If an algorithm consists of two or more loops/recursions that are strictly disjoint parts of an implementation of the algorithm (and thus executed strictly after each other), it may be viewed as two or more algorithms that are executed after each other. Therefore, without loss of generality, an algorithm may be viewed as ''one'' loop or ''one'' recursion plus some pre/postprocessing.&lt;br /&gt;
# An iteration of a loop may contain another loop or recursion. Analogously, a recursive call may contain another loop or recursion. We say that a loop/recursion inside another loop/recursion is '''nested''' or the '''inner''' loop/recursion. Correspondingly, the '''nesting''' loop/recursion is the '''outer''' one.&lt;br /&gt;
&lt;br /&gt;
'''Remarks:'''&lt;br /&gt;
# Clearly, a loop may be transformed into a recursion and vice versa. So, every algorithm may be formulated either as a loop or as a recursion. However, in most cases, one of these two options looks simpler and &amp;quot;more natural&amp;quot; than the other one.&lt;br /&gt;
# Formulating an algorithm as a loop might be favorable in many cases because a loop allows more control than a recursion. More specifically, a loop may be implemented as an [http://en.wikipedia.org/wiki/Iterator iterator] whose method for going one step forward implements the execution of one iteration of the loop. Such an implementation allows one to terminate the loop early, to suspend execution and resume execution later on, and to execute some additional instructions between two iterations (e.g. for visualization or for testing purposes). The crucial point is that, for all of these purposes, the code of the algorithm need not be modified (the source need not even be available).&lt;br /&gt;
&lt;br /&gt;
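The iterator remark above can be sketched in Python (a hypothetical illustration, not part of the original article): a loop written as a generator lets the caller suspend, resume, inspect state between two iterations, or terminate early, all without modifying the algorithm's code.

```python
def collatz_steps(n):
    """Loop formulated as an iterator: each next() runs one iteration."""
    while n != 1:                        # break condition of the loop
        yield n                          # suspend between two iterations
        n = 3 * n + 1 if n % 2 else n // 2
    yield n

# The caller controls execution between iterations, e.g. for testing
# or visualization, without touching (or even having) the source:
states = list(collatz_steps(6))
print(states)  # [6, 3, 10, 5, 16, 8, 4, 2, 1]
```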
== Correctness ==&lt;br /&gt;
&lt;br /&gt;
An algorithm is correct if three conditions are fulfilled for every feasible input:&lt;br /&gt;
# All instructions are '''well-defined'''. For example, instructions that divide by zero, exceed the range of a numerical type (''overflow''), access an array component outside the array's index range, or access an attribute or method of a '''void''' / '''null''' pointer are ill-defined.&lt;br /&gt;
# Each loop and recursion '''terminates''', that is, the break condition is fulfilled after a finite number of iterative / recursive steps. In case of a recursion with more than one recursive call inside an execution of the recursive routine, this means that every branch of the recursion tree must reach the break condition after a finite number of recursive descents.&lt;br /&gt;
# If the given input admits no feasible solution, this information is delivered by the algorithm. Otherwise, the algorithm delivers a feasible output.&lt;br /&gt;
&lt;br /&gt;
== Invariant ==&lt;br /&gt;
&lt;br /&gt;
# The '''invariant''' of a loop consists of all assertions that are true immediately before the first iteration, immediately after the last iteration, and between two iterations. These assertions are called the '''invariant assertions''' of this loop.&lt;br /&gt;
# The '''invariant''' of a recursion consists of all assertions that are fulfilled immediately before '''or''' after each recursive call (see [[#Induction in case of a recursion|below]]). These assertions are called the '''invariant assertions''' of this recursion.&lt;br /&gt;
&lt;br /&gt;
'''Remarks:'''&lt;br /&gt;
# If an algorithm only accesses its input and its own local data, the invariant assertions are assertions about its input and these local data and nothing else.&lt;br /&gt;
# The invariant of an algorithm has a specific function in the [[#Correctness proofs|correctness proof]] for the algorithm. Typically, in a documentation, only those invariant assertions are stated that are relevant for this purpose. &lt;br /&gt;
# The invariant is also essential for testing purposes: a [http://en.wikipedia.org/wiki/White-box_testing white-box test] of a loop/recursion should definitely check maintenance of all relevant invariant assertions.&lt;br /&gt;
# Last but not least, the invariant is the very &amp;quot;essence&amp;quot; of the algorithmic idea; deeply understanding the algorithm amounts to understanding the invariant.&lt;br /&gt;
&lt;br /&gt;
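As a minimal hypothetical illustration in Python: a running-sum loop whose invariant assertion is that the local variable holds the sum of the elements consumed so far.

```python
def running_sum(data):
    total, i = 0, 0
    # Invariant assertion: total == sum(data[:i]).  It holds before the
    # first iteration (empty sum = 0), between any two iterations, and
    # after the last iteration.
    while i < len(data):
        assert total == sum(data[:i])
        total += data[i]
        i += 1
    assert total == sum(data)   # invariant after the last iteration
    return total
```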
== Variant ==&lt;br /&gt;
&lt;br /&gt;
# The '''variant''' of a loop consists of all differences between the state immediately before an iteration and the state immediately after that iteration (typically, but not exclusively, the values of integral loop variables).&lt;br /&gt;
# The '''variant''' of a recursion consists of all differences between the input of a recursive call &amp;lt;math&amp;gt;X&amp;lt;/math&amp;gt; and the inputs of all recursive calls that are directly called in &amp;lt;math&amp;gt;X&amp;lt;/math&amp;gt; (typically, but not exclusively, some integral measure of the size of the input).&lt;br /&gt;
&lt;br /&gt;
'''Remarks:'''&lt;br /&gt;
# If an algorithm only accesses its input and its own local data, the variant states changes of the contents of its input and these local data and nothing else.&lt;br /&gt;
# The variant of an algorithm has a specific function in the [[#Correctness proofs|correctness proof]] for the algorithm. Typically, in a documentation, only those variant assertions are stated that are relevant for this purpose. &lt;br /&gt;
# The variant is also essential for testing purposes: a [http://en.wikipedia.org/wiki/White-box_testing white-box test] of a loop/recursion should definitely check all relevant changes.&lt;br /&gt;
&lt;br /&gt;
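A sketch of a variant (hypothetical example, Euclid's algorithm): the strict decrease of the nonnegative value <code>b</code> from one iteration to the next is the relevant variant and proves termination.

```python
def gcd(a, b):
    """Euclid's algorithm for nonnegative integers (not both zero)."""
    while b != 0:               # break condition
        previous_b = b
        a, b = b, a % b
        # Variant: b strictly decreases and stays nonnegative, so the
        # break condition is reached after finitely many iterations.
        assert 0 <= b < previous_b
    return a
```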
== Correctness proofs ==&lt;br /&gt;
&lt;br /&gt;
# Assuming the algorithm is correct:&lt;br /&gt;
## The variant implies that the break condition will be fulfilled,&lt;br /&gt;
### in case of a loop: after a finite number of iterations;&lt;br /&gt;
### in case of a recursion: after a finite number of recursive calls in each branch of the recursion tree.&lt;br /&gt;
## Correctness of the output follows from what the invariant says about the state immediately after the last iteration.&lt;br /&gt;
# Proving the invariant amounts to an induction on the number of iterations performed so far / the recursion parameter over which the variant of the recursion is defined.&lt;br /&gt;
## The invariant is the induction hypothesis.&lt;br /&gt;
## Proving the induction basis amounts to proving that the preprocessing (initialization) establishes the invariant.&lt;br /&gt;
## Proving the induction step amounts to proving that an iteration / a recursive call maintains the invariant.&lt;br /&gt;
&lt;br /&gt;
'''Caveat:'''&lt;br /&gt;
Quite frequently, validity of the invariant for the state immediately before the first iteration is based on a separate definition of the &amp;quot;null case&amp;quot;. For example, the empty sum (the sum over no summands) is 0 by definition, the empty product is 1, the factorial of 0 is 1, the 0-th power of a number is 1, etc. To be on the safe side in such a case, the particular induction step from 0 to 1 should be checked separately.&lt;br /&gt;
&lt;br /&gt;
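The null cases mentioned in the caveat can be checked directly; Python's standard library follows exactly these definitions:

```python
import math

# Empty sum and empty product, by definition:
assert sum([]) == 0
assert math.prod([]) == 1
# Factorial of 0 and the 0-th power of a number:
assert math.factorial(0) == 1
assert 5 ** 0 == 1
```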
'''Remarks:'''&lt;br /&gt;
# In a nutshell, the variant proves termination, and the invariant proves that the output is correct.&lt;br /&gt;
# Typically, well-definedness of all operations is not considered explicitly in correctness proofs. The reason is that well-definedness is non-obvious in rare cases only.&lt;br /&gt;
# In many cases, correctness of the output follows immediately from what the invariant says about the state immediately after the last iteration; in other cases, additional arguments are necessary to prove correctness of the output.&lt;br /&gt;
# The variant is also essential to estimate, asymptotically, the number of iterations / the recursion depth.&lt;br /&gt;
&lt;br /&gt;
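For a recursion, the same proof pattern can be sketched on a hypothetical example: in <code>factorial</code>, the input parameter <code>n</code> is the induction variable, the recursion anchor establishes the induction basis, and the strict decrease of <code>n</code> (the variant) gives termination.

```python
def factorial(n):
    # Induction basis (recursion anchor): factorial(0) == 1 == 0!.
    if n == 0:
        return 1
    # Induction step: assuming factorial(n - 1) == (n - 1)! is correct,
    # n * factorial(n - 1) == n!.  The variant is the strict decrease
    # of n, which bounds the recursion depth by n.
    return n * factorial(n - 1)
```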
== Induction in case of a recursion ==&lt;br /&gt;
In principle, there are two ways, in a sense mutually opposite, to define the induction. Typically, only one of these two ways is viable:&lt;br /&gt;
# The depth of a recursive call is the induction variable; the original call to this subroutine has to ensure the induction basis; the induction step has to be ensured by every descent in the recursion tree. Example: [[Binary search|binary search]].&lt;br /&gt;
# The induction variable is some parameter of the input. The invariant simply says that the output of a recursive call is correct. In a descent in the recursion tree, that parameter must strictly decrease. The recursion anchor must ensure the induction basis. Examples: [[Mergesort|mergesort]] and [[Quicksort|quicksort]]; in these examples, the size of the sequence is the induction variable.&lt;/div&gt;</summary>
		<author><name>Weihe</name></author>
	</entry>
	<entry>
		<id>https://wiki.algo.informatik.tu-darmstadt.de/index.php?title=Algorithms_and_correctness&amp;diff=3845</id>
		<title>Algorithms and correctness</title>
		<link rel="alternate" type="text/html" href="https://wiki.algo.informatik.tu-darmstadt.de/index.php?title=Algorithms_and_correctness&amp;diff=3845"/>
		<updated>2016-04-28T11:56:31Z</updated>

		<summary type="html">&lt;p&gt;Weihe: /* Invariant */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Algorithmic problem ==&lt;br /&gt;
&lt;br /&gt;
An algorithmic problem is described by:&lt;br /&gt;
# a set of feasible '''inputs''' (a.k.a. '''instances''');&lt;br /&gt;
# for each input a set of '''outputs''' (a.k.a. '''feasible solutions''' or '''solutions''', for short), which may be empty;&lt;br /&gt;
# optionally, an '''objective function''', which assigns a real number to each feasible solution (the ''quality'' of the solution). If an objective function is specified, it is also specified whether the objective function is to be ''minimized'' or to be ''maximized''.&lt;br /&gt;
&lt;br /&gt;
'''Remark:'''&lt;br /&gt;
Typically, sets such as the set of all feasible inputs to a problem and the set of all feasible outputs to an input are given by a [https://en.wikipedia.org/wiki/Genus%E2%80%93differentia_definition genus-differentia definition], which is an example of [https://en.wikipedia.org/wiki/Intensional_definition intensional definitions]. More specifically, an [https://en.wikipedia.org/wiki/Abstract_data_type abstract data type] is given along with the restrictions that must be fulfilled by feasible inputs and outputs, respectively. Sometimes, the abstract data type is called the set of all inputs / outputs, and the members of the respective set that fulfill the restrictions are then the ''feasible'' inputs / outputs.&lt;br /&gt;
&lt;br /&gt;
== Instructions, operations and subroutines ==&lt;br /&gt;
&lt;br /&gt;
[http://en.wikipedia.org/wiki/Imperative_programming Imperative programming] languages offer possibilities to specify '''instructions''' (a.k.a. '''statements'''). An instruction specifies a sequence of '''(machine) operations'''; executing the instruction on a machine means that the machine performs this sequence of operations. Instructions may be bundled as '''subroutines''' (a.k.a. '''procedures''', '''functions''', '''methods''').&lt;br /&gt;
&lt;br /&gt;
== (Source) programs and processes ==&lt;br /&gt;
&lt;br /&gt;
A '''program''' is a sequence of instructions in some programming language. A program written in a higher programming language, which is to be compiled or interpreted, is often called a '''source program'''. A '''process''' means the execution of a program on a real or virtual machine.&lt;br /&gt;
&lt;br /&gt;
== Termination ==&lt;br /&gt;
&lt;br /&gt;
# A loop has at least one '''break condition'''; this is a part of the (source) program of the loop. '''Termination''' of a loop refers to the process and means that the process evaluates the break condition and this evaluation yields '''true'''.&lt;br /&gt;
# '''Termination''' of a subroutine means that the process runs into an unconditional return-statement or, alternatively, runs into a conditional return-statement and this condition yields '''true''' (common high-level languages only provide unconditional return-statements; a return statement at the end of a void-subroutine may be implicit in many languages).&lt;br /&gt;
# '''Termination''' of a recursion means that every branch of the recursion has a finite depth, that is, runs into a recursive call that does not call the recursive subroutine any further.&lt;br /&gt;
&lt;br /&gt;
== Algorithm ==&lt;br /&gt;
&lt;br /&gt;
# An algorithm is associated with an [[#Algorithmic problem|algorithmic problem]].&lt;br /&gt;
# An algorithm is an abstract description of a process and can be formulated (a.k.a. '''implemented''') as a subroutine. This subroutine is required to compute some feasible output for any given input of the associated algorithmic problem.&lt;br /&gt;
# If an objective function is given, the objective function value of the generated solution is a criterion for the quality of an algorithm. More specifically:&lt;br /&gt;
## in case of ''minimization'': a low objective function value is favored;&lt;br /&gt;
## in case of ''maximization'': a high objective function value is favored.&lt;br /&gt;
&lt;br /&gt;
== Iterative and recursive algorithms ==&lt;br /&gt;
&lt;br /&gt;
# Basically, a non-trivial algorithm is a loop or recursion plus some '''preprocessing''' (a.k.a. '''initialization''') and/or some '''postprocessing'''.&lt;br /&gt;
# If an algorithm consists of two or more loops/recursions that are strictly disjoint parts of an implementation of the algorithm (and thus executed strictly after each other), it may be viewed as two or more algorithms that are executed after each other. Therefore, without loss of generality, an algorithm may be viewed as ''one'' loop or ''one'' recursion plus some pre/postprocessing.&lt;br /&gt;
# An iteration of a loop may contain another loop or recursion. Analogously, a recursive call may contain another loop or recursion. We say that a loop/recursion inside another loop/recursion is '''nested''' or the '''inner''' loop/recursion. Correspondingly, the '''nesting''' loop/recursion is the '''outer''' one.&lt;br /&gt;
&lt;br /&gt;
'''Remarks:'''&lt;br /&gt;
# Clearly, a loop may be transformed into a recursion and vice versa. So, every algorithm may be formulated either as a loop or as a recursion. However, in most cases, one of these two options looks simpler and &amp;quot;more natural&amp;quot; than the other one.&lt;br /&gt;
# Formulating an algorithm as a loop might be favorable in many cases because a loop allows more control than a recursion. More specifically, a loop may be implemented as an [http://en.wikipedia.org/wiki/Iterator iterator] whose method for going one step forward implements the execution of one iteration of the loop. Such an implementation allows one to terminate the loop early, to suspend execution and resume execution later on, and to execute some additional instructions between two iterations (e.g. for visualization or for testing purposes). The crucial point is that, for all of these purposes, the code of the algorithm need not be modified (the source need not even be available).&lt;br /&gt;
&lt;br /&gt;
== Correctness ==&lt;br /&gt;
&lt;br /&gt;
An algorithm is correct if three conditions are fulfilled for every feasible input:&lt;br /&gt;
# All instructions are '''well-defined'''. For example, instructions that divide by zero, exceed the range of a numerical type (''overflow''), access an array component outside the array's index range, or access an attribute or method of a '''void''' / '''null''' pointer are ill-defined.&lt;br /&gt;
# Each loop and recursion '''terminates''', that is, the break condition is fulfilled after a finite number of iterative / recursive steps. In case of a recursion with more than one recursive call inside an execution of the recursive routine, this means that every branch of the recursion tree must reach the break condition after a finite number of recursive descents.&lt;br /&gt;
# If the given input admits no feasible solution, this information is delivered by the algorithm. Otherwise, the algorithm delivers a feasible output.&lt;br /&gt;
&lt;br /&gt;
== Invariant ==&lt;br /&gt;
&lt;br /&gt;
# The '''invariant''' of a loop consists of all assertions that are true immediately before the first iteration, immediately after the last iteration, and between two iterations. These assertions are called the '''invariant assertions''' of this loop.&lt;br /&gt;
# The '''invariant''' of a recursion consists of all assertions that are fulfilled immediately before '''or''' after each recursive call (see [[#Induction in case of a recursion|below]]). These assertions are called the '''invariant assertions''' of this recursion.&lt;br /&gt;
&lt;br /&gt;
'''Remarks:'''&lt;br /&gt;
# If an algorithm only accesses its input and its own local data, the invariant assertions are assertions about its input and these local data and nothing else.&lt;br /&gt;
# The invariant of an algorithm has a specific function in the [[#Correctness proofs|correctness proof]] for the algorithm. Typically, in a documentation, only those invariant assertions are stated that are relevant for this purpose. &lt;br /&gt;
# The invariant is also essential for testing purposes: a [http://en.wikipedia.org/wiki/White-box_testing white-box test] of a loop/recursion should definitely check maintenance of all relevant invariant assertions.&lt;br /&gt;
# Last but not least, the invariant is the very &amp;quot;essence&amp;quot; of the algorithmic idea; deeply understanding the algorithm amounts to understanding the invariant.&lt;br /&gt;
&lt;br /&gt;
== Variant ==&lt;br /&gt;
&lt;br /&gt;
# The '''variant''' of a loop consists of all differences between the state immediately before an iteration and the state immediately after that iteration (typically, but not exclusively, the values of integral loop variables).&lt;br /&gt;
# The '''variant''' of a recursion consists of all differences between the input of a recursive call &amp;lt;math&amp;gt;X&amp;lt;/math&amp;gt; and the inputs of all recursive calls that are directly called in &amp;lt;math&amp;gt;X&amp;lt;/math&amp;gt; (typically, but not exclusively, some integral measure of the size of the input).&lt;br /&gt;
&lt;br /&gt;
'''Remarks:'''&lt;br /&gt;
# If an algorithm only accesses its input and its own local data, the variant states changes of the contents of its input and these local data and nothing else.&lt;br /&gt;
# The variant of an algorithm has a specific function in the [[#Correctness proofs|correctness proof]] for the algorithm. Typically, in a documentation, only those variant assertions are stated that are relevant for this purpose. &lt;br /&gt;
# The variant is also essential for testing purposes: a [http://en.wikipedia.org/wiki/White-box_testing white-box test] of a loop/recursion should definitely check all relevant changes.&lt;br /&gt;
&lt;br /&gt;
== Correctness proofs ==&lt;br /&gt;
&lt;br /&gt;
# Assuming the algorithm is correct:&lt;br /&gt;
## The variant implies that the break condition will be fulfilled,&lt;br /&gt;
### in case of a loop: after a finite number of iterations;&lt;br /&gt;
### in case of a recursion: after a finite number of recursive calls in each branch of the recursion tree.&lt;br /&gt;
## Correctness of the output follows from what the invariant says about the state immediately after the last iteration.&lt;br /&gt;
# Proving the invariant amounts to an induction on the number of iterations performed so far / the recursion parameter over which the variant of the recursion is defined.&lt;br /&gt;
## The invariant is the induction hypothesis.&lt;br /&gt;
## Proving the induction basis amounts to proving that the preprocessing (initialization) establishes the invariant.&lt;br /&gt;
## Proving the induction step amounts to proving that an iteration / a recursive call maintains the invariant.&lt;br /&gt;
&lt;br /&gt;
'''Remarks:'''&lt;br /&gt;
# In a nutshell, the variant proves termination, and the invariant proves that the output is correct.&lt;br /&gt;
# Typically, well-definedness of all operations is not considered explicitly in correctness proofs. The reason is that well-definedness is non-obvious in rare cases only.&lt;br /&gt;
# In many cases, correctness of the output follows immediately from what the invariant says about the state immediately after the last iteration; in other cases, additional arguments are necessary to prove correctness of the output.&lt;br /&gt;
# The variant is also essential to estimate, asymptotically, the number of iterations / the recursion depth.&lt;br /&gt;
&lt;br /&gt;
== Induction in case of a recursion ==&lt;br /&gt;
In principle, there are two ways, in a sense mutually opposite, to define the induction. Typically, only one of these two ways is viable:&lt;br /&gt;
# The depth of a recursive call is the induction variable; the original call to this subroutine has to ensure the induction basis; the induction step has to be ensured by every descent in the recursion tree. Example: [[Binary search|binary search]].&lt;br /&gt;
# The induction variable is some parameter of the input. The invariant simply says that the output of a recursive call is correct. In a descent in the recursion tree, that parameter must strictly decrease. The recursion anchor must ensure the induction basis. Examples: [[Mergesort|mergesort]] and [[Quicksort|quicksort]]; in these examples, the size of the sequence is the induction variable.&lt;/div&gt;</summary>
		<author><name>Weihe</name></author>
	</entry>
	<entry>
		<id>https://wiki.algo.informatik.tu-darmstadt.de/index.php?title=Algorithms_and_correctness&amp;diff=3844</id>
		<title>Algorithms and correctness</title>
		<link rel="alternate" type="text/html" href="https://wiki.algo.informatik.tu-darmstadt.de/index.php?title=Algorithms_and_correctness&amp;diff=3844"/>
		<updated>2016-04-28T11:56:14Z</updated>

		<summary type="html">&lt;p&gt;Weihe: /* Invariant */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Algorithmic problem ==&lt;br /&gt;
&lt;br /&gt;
An algorithmic problem is described by:&lt;br /&gt;
# a set of feasible '''inputs''' (a.k.a. '''instances''');&lt;br /&gt;
# for each input a set of '''outputs''' (a.k.a. '''feasible solutions''' or '''solutions''', for short), which may be empty;&lt;br /&gt;
# optionally, an '''objective function''', which assigns a real number to each feasible solution (the ''quality'' of the solution). If an objective function is specified, it is also specified whether the objective function is to be ''minimized'' or to be ''maximized''.&lt;br /&gt;
&lt;br /&gt;
'''Remark:'''&lt;br /&gt;
Typically, sets such as the set of all feasible inputs to a problem and the set of all feasible outputs to an input are given by a [https://en.wikipedia.org/wiki/Genus%E2%80%93differentia_definition genus-differentia definition], which is an example of [https://en.wikipedia.org/wiki/Intensional_definition intensional definitions]. More specifically, an [https://en.wikipedia.org/wiki/Abstract_data_type abstract data type] is given along with the restrictions that must be fulfilled by feasible inputs and outputs, respectively. Sometimes, the abstract data type is called the set of all inputs / outputs, and the members of the respective set that fulfill the restrictions are then the ''feasible'' inputs / outputs.&lt;br /&gt;
&lt;br /&gt;
== Instructions, operations and subroutines ==&lt;br /&gt;
&lt;br /&gt;
[http://en.wikipedia.org/wiki/Imperative_programming Imperative programming] languages offer possibilities to specify '''instructions''' (a.k.a. '''statements'''). An instruction specifies a sequence of '''(machine) operations'''; executing the instruction on a machine means that the machine performs this sequence of operations. Instructions may be bundled as '''subroutines''' (a.k.a. '''procedures''', '''functions''', '''methods''').&lt;br /&gt;
&lt;br /&gt;
== (Source) programs and processes ==&lt;br /&gt;
&lt;br /&gt;
A '''program''' is a sequence of instructions in some programming language. A program written in a higher programming language, which is to be compiled or interpreted, is often called a '''source program'''. A '''process''' means the execution of a program on a real or virtual machine.&lt;br /&gt;
&lt;br /&gt;
== Termination ==&lt;br /&gt;
&lt;br /&gt;
# A loop has at least one '''break condition'''; this is a part of the (source) program of the loop. '''Termination''' of a loop refers to the process and means that the process evaluates the break condition and this evaluation yields '''true'''.&lt;br /&gt;
# '''Termination''' of a subroutine means that the process runs into an unconditional return-statement or, alternatively, runs into a conditional return-statement and this condition yields '''true''' (common high-level languages only provide unconditional return-statements; a return statement at the end of a void-subroutine may be implicit in many languages).&lt;br /&gt;
# '''Termination''' of a recursion means that every branch of the recursion has a finite depth, that is, runs into a recursive call that does not call the recursive subroutine any further.&lt;br /&gt;
&lt;br /&gt;
== Algorithm ==&lt;br /&gt;
&lt;br /&gt;
# An algorithm is associated with an [[#Algorithmic problem|algorithmic problem]].&lt;br /&gt;
# An algorithm is an abstract description of a process and can be formulated (a.k.a. '''implemented''') as a subroutine. This subroutine is required to compute some feasible output for any given input of the associated algorithmic problem.&lt;br /&gt;
# If an objective function is given, the objective function value of the generated solution is a criterion for the quality of an algorithm. More specifically:&lt;br /&gt;
## in case of ''minimization'': a low objective function value is favored;&lt;br /&gt;
## in case of ''maximization'': a high objective function value is favored.&lt;br /&gt;
&lt;br /&gt;
== Iterative and recursive algorithms ==&lt;br /&gt;
&lt;br /&gt;
# Basically, a non-trivial algorithm is a loop or recursion plus some '''preprocessing''' (a.k.a. '''initialization''') and/or some '''postprocessing'''.&lt;br /&gt;
# If an algorithm consists of two or more loops/recursions that are strictly disjoint parts of an implementation of the algorithm (and thus executed strictly after each other), it may be viewed as two or more algorithms that are executed after each other. Therefore, without loss of generality, an algorithm may be viewed as ''one'' loop or ''one'' recursion plus some pre/postprocessing.&lt;br /&gt;
# An iteration of a loop may contain another loop or recursion. Analogously, a recursive call may contain another loop or recursion. We say that a loop/recursion inside another loop/recursion is '''nested''' or the '''inner''' loop/recursion. Correspondingly, the '''nesting''' loop/recursion is the '''outer''' one.&lt;br /&gt;
&lt;br /&gt;
'''Remarks:'''&lt;br /&gt;
# Clearly, a loop may be transformed into a recursion and vice versa. So, every algorithm may be formulated either as a loop or as a recursion. However, in most cases, one of these two options looks simpler and &amp;quot;more natural&amp;quot; than the other one.&lt;br /&gt;
# Formulating an algorithm as a loop might be favorable in many cases because a loop allows more control than a recursion. More specifically, a loop may be implemented as an [http://en.wikipedia.org/wiki/Iterator iterator] whose method for going one step forward implements the execution of one iteration of the loop. Such an implementation allows one to terminate the loop early, to suspend execution and resume execution later on, and to execute some additional instructions between two iterations (e.g. for visualization or for testing purposes). The crucial point is that, for all of these purposes, the code of the algorithm need not be modified (the source need not even be available).&lt;br /&gt;
&lt;br /&gt;
== Correctness ==&lt;br /&gt;
&lt;br /&gt;
An algorithm is correct if three conditions are fulfilled for every feasible input:&lt;br /&gt;
# All instructions are '''well-defined'''. For example, instructions that divide by zero, exceed the range of a numerical type (''overflow''), access an array component outside the array's index range, or access an attribute or method of a '''void''' / '''null''' pointer are ill-defined.&lt;br /&gt;
# Each loop and recursion '''terminates''', that is, the break condition is fulfilled after a finite number of iterative / recursive steps. In case of a recursion with more than one recursive call inside an execution of the recursive routine, this means that every branch of the recursion tree must reach the break condition after a finite number of recursive descents.&lt;br /&gt;
# If the given input admits no feasible solution, this information is delivered by the algorithm. Otherwise, the algorithm delivers a feasible output.&lt;br /&gt;
&lt;br /&gt;
== Invariant ==&lt;br /&gt;
&lt;br /&gt;
# The '''invariant''' of a loop consists of all assertions that are true immediately before the first iteration, immediately after the last iteration, and between two iterations. These assertions are called the '''invariant assertions''' of this loop.&lt;br /&gt;
# The '''invariant''' of a recursion consists of all assertions that are fulfilled immediately before '''or''' after each recursive call (see [[#Induction in case of a recursion|below]]). These assertions are called the '''invariant assertions''' of this recursion.&lt;br /&gt;
&lt;br /&gt;
'''Caveat:'''&lt;br /&gt;
Quite frequently, validity of the invariant immediately before the first iteration rests on a separate definition of the &amp;quot;null case&amp;quot;. For example, the empty sum (a sum over no summands) is 0 by definition, the empty product is 1, the factorial of 0 is 1, the 0-th power of a number is 1, etc. To be on the safe side in such cases, the particular induction step from 0 to 1 should be checked separately.&lt;br /&gt;
&lt;br /&gt;
'''Remarks:'''&lt;br /&gt;
# If an algorithm only accesses its input and its own local data, the invariant assertions are assertions about its input and these local data and nothing else.&lt;br /&gt;
# The invariant of an algorithm has a specific function in the [[#Correctness proofs|correctness proof]] for the algorithm. Typically, in documentation, only those invariant assertions are stated that are relevant for this purpose.&lt;br /&gt;
# The invariant is also essential for testing purposes: a [http://en.wikipedia.org/wiki/White-box_testing white-box test] of a loop/recursion should definitely check maintenance of all relevant invariant assertions.&lt;br /&gt;
# Last but not least, the invariant is the very &amp;quot;essence&amp;quot; of the algorithmic idea; deeply understanding the algorithm amounts to understanding the invariant.&lt;br /&gt;
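As a minimal sketch of a loop invariant (a hypothetical example, not from the article; `total` is an invented name), the summation loop below asserts its invariant before every iteration. Note that the invariant holds before the first iteration precisely because the empty sum is 0 by definition:

```python
def total(values):
    """Sum a list while checking the loop invariant (hypothetical sketch).

    Invariant: s equals the sum of values[0..i-1]. It holds before the
    first iteration (i == 0) because the empty sum is 0 by definition.
    """
    s, i = 0, 0
    while i != len(values):
        assert s == sum(values[:i])  # invariant assertion before the iteration
        s += values[i]
        i += 1
    assert s == sum(values)  # invariant immediately after the last iteration
    return s


print(total([2, 3, 5]))  # 10
```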
&lt;br /&gt;
== Variant ==&lt;br /&gt;
&lt;br /&gt;
# The '''variant''' of a loop consists of all differences between the state immediately before an iteration and the state immediately after that iteration (typically, but not exclusively, the values of integral loop variables).&lt;br /&gt;
# The '''variant''' of a recursion consists of all differences between the input of a recursive call &amp;lt;math&amp;gt;X&amp;lt;/math&amp;gt; and the inputs of all recursive calls that are directly called in &amp;lt;math&amp;gt;X&amp;lt;/math&amp;gt; (typically, but not exclusively, some integral measure of the size of the input).&lt;br /&gt;
&lt;br /&gt;
'''Remarks:'''&lt;br /&gt;
# If an algorithm only accesses its input and its own local data, the variant describes changes to its input and these local data and nothing else.&lt;br /&gt;
# The variant of an algorithm has a specific function in the [[#Correctness proofs|correctness proof]] for the algorithm. Typically, in documentation, only those variant assertions are stated that are relevant for this purpose.&lt;br /&gt;
# The variant is also essential for testing purposes: a [http://en.wikipedia.org/wiki/White-box_testing white-box test] of a loop/recursion should definitely check all relevant changes.&lt;br /&gt;
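A minimal sketch of a variant (a hypothetical example, not from the article; `find_zero` is an invented name): in the linear search below, the quantity n - i strictly decreases in every iteration, which is exactly the kind of change a white-box test could check:

```python
def find_zero(values):
    """Return the index of the first 0 in values, or -1 (hypothetical sketch).

    Variant: n - i strictly decreases in every iteration, so the loop
    terminates after at most n iterations.
    """
    i, n = 0, len(values)
    while i != n:
        before = n - i               # value of the variant before this iteration
        if values[i] == 0:
            return i
        i += 1
        assert n - i == before - 1   # the variant strictly decreased
    return -1


print(find_zero([7, 0, 3]))  # 1
```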
&lt;br /&gt;
== Correctness proofs ==&lt;br /&gt;
&lt;br /&gt;
# Provided the invariant and the variant have been established:&lt;br /&gt;
## The variant implies that the break condition will be fulfilled,&lt;br /&gt;
### in case of a loop: after a finite number of iterations;&lt;br /&gt;
### in case of a recursion: after a finite number of recursive calls in each branch of the recursion tree.&lt;br /&gt;
## Correctness of the output follows from what the invariant says about the state immediately after the last iteration.&lt;br /&gt;
# Proving the invariant amounts to an induction on the number of iterations performed so far / the recursion parameter over which the variant of the recursion is defined.&lt;br /&gt;
## The invariant is the induction hypothesis.&lt;br /&gt;
## Proving the induction basis amounts to proving that the preprocessing (initialization) establishes the invariant.&lt;br /&gt;
## Proving the induction step amounts to proving that an iteration / a recursive call maintains the invariant.&lt;br /&gt;
&lt;br /&gt;
'''Remarks:'''&lt;br /&gt;
# In a nutshell, the variant proves termination, and the invariant proves that the output is correct.&lt;br /&gt;
# Typically, well-definedness of all operations is not considered explicitly in correctness proofs, because well-definedness is non-obvious only in rare cases.&lt;br /&gt;
# In many cases, correctness of the output follows immediately from what the invariant says about the state immediately after the last iteration; in other cases, additional arguments are necessary to prove correctness of the output.&lt;br /&gt;
# The variant is also essential to estimate, asymptotically, the number of iterations / the recursion depth.&lt;br /&gt;
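The proof scheme can be illustrated in code (a hypothetical example, not from the article; `power` is an invented name): the variant k proves termination, and the invariant, instantiated after the last iteration (k == 0), proves that the output is correct:

```python
def power(base, n):
    """Compute base ** n by repeated multiplication (hypothetical sketch).

    Invariant (induction hypothesis): acc * base ** k == base ** n.
    Variant: k strictly decreases toward 0, so the loop terminates.
    """
    acc, k = 1, n
    while k != 0:
        assert acc * base ** k == base ** n  # invariant before the iteration
        acc *= base
        k -= 1                               # variant: k strictly decreases
    # The invariant at k == 0 yields acc == base ** n, i.e. the output is correct.
    return acc


print(power(2, 10))  # 1024
```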
&lt;br /&gt;
== Induction in case of a recursion ==&lt;br /&gt;
In principle, there are two ways to define the induction, which are in a sense mutually opposite. Typically, only one of these two ways is viable:&lt;br /&gt;
# The depth of a recursive call is the induction variable; the original call to this subroutine has to ensure the induction basis; the induction step has to be ensured by every descent in the recursion tree. Example: [[Binary search|binary search]].&lt;br /&gt;
# The induction variable is some parameter of the input. The invariant simply says that the output of a recursive call is correct. In a descent in the recursion tree, that parameter must strictly decrease. The recursion anchor must ensure the induction basis. Examples: [[Mergesort|mergesort]] and [[Quicksort|quicksort]]; in these examples, the size of the sequence is the induction variable.&lt;/div&gt;</summary>
		<author><name>Weihe</name></author>
	</entry>
	<entry>
		<id>https://wiki.algo.informatik.tu-darmstadt.de/index.php?title=Algorithms_and_correctness&amp;diff=3843</id>
		<title>Algorithms and correctness</title>
		<link rel="alternate" type="text/html" href="https://wiki.algo.informatik.tu-darmstadt.de/index.php?title=Algorithms_and_correctness&amp;diff=3843"/>
		<updated>2016-04-27T11:08:14Z</updated>

		<summary type="html">&lt;p&gt;Weihe: /* Invariant */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Algorithmic problem ==&lt;br /&gt;
&lt;br /&gt;
An algorithmic problem is described by:&lt;br /&gt;
# a set of feasible '''inputs''' (a.k.a. '''instances''');&lt;br /&gt;
# for each input a set of '''outputs''' (a.k.a. '''feasible solutions''' or '''solutions''', for short), which may be empty;&lt;br /&gt;
# optionally, an '''objective function''', which assigns a real number to each feasible solution (the ''quality'' of the solution). If an objective function is specified, it is also specified whether the objective function is to be ''minimized'' or to be ''maximized''.&lt;br /&gt;
&lt;br /&gt;
'''Remark:'''&lt;br /&gt;
Typically, sets such as the set of all feasible inputs to a problem and the set of all feasible outputs to an input are given by a [https://en.wikipedia.org/wiki/Genus%E2%80%93differentia_definition genus-differentia definition], which is an example of [https://en.wikipedia.org/wiki/Intensional_definition intensional definitions]. More specifically, an [https://en.wikipedia.org/wiki/Abstract_data_type abstract data type] is given along with the restrictions that must be fulfilled by feasible inputs and outputs, respectively. Sometimes, the abstract data type is called the set of all inputs / outputs, and the members of the respective set that fulfill the restrictions are then the ''feasible'' inputs / outputs.&lt;br /&gt;
&lt;br /&gt;
== Instructions, operations and subroutines ==&lt;br /&gt;
&lt;br /&gt;
[http://en.wikipedia.org/wiki/Imperative_programming Imperative programming] languages provide means to specify '''instructions''' (a.k.a. '''statements'''). An instruction specifies a sequence of '''(machine) operations'''; executing the instruction on a machine means that the machine performs this sequence of operations. Instructions may be bundled into '''subroutines''' (a.k.a. '''procedures''', '''functions''', '''methods''').&lt;br /&gt;
&lt;br /&gt;
== (Source) programs and processes ==&lt;br /&gt;
&lt;br /&gt;
A '''program''' is a sequence of instructions in some programming language. A program written in a high-level programming language, which is to be compiled or interpreted, is often called a '''source program'''. A '''process''' means the execution of a program on a real or virtual machine.&lt;br /&gt;
&lt;br /&gt;
== Termination ==&lt;br /&gt;
&lt;br /&gt;
# A loop has at least one '''break condition'''; this is a part of the (source) program of the loop. '''Termination''' of a loop refers to the process and means that the process evaluates the break condition and this evaluation yields '''true'''.&lt;br /&gt;
# '''Termination''' of a subroutine means that the process runs into an unconditional return-statement or, alternatively, runs into a conditional return-statement and this condition yields '''true''' (common high-level languages only provide unconditional return-statements; a return statement at the end of a void-subroutine may be implicit in many languages).&lt;br /&gt;
# '''Termination''' of a recursion means that every branch of the recursion has a finite depth, that is, runs into a recursive call that does not call the recursive subroutine any further.&lt;br /&gt;
&lt;br /&gt;
== Algorithm ==&lt;br /&gt;
&lt;br /&gt;
# An algorithm is associated with an [[#Algorithmic problem|algorithmic problem]].&lt;br /&gt;
# An algorithm is an abstract description of a process and can be formulated (a.k.a. '''implemented''') as a subroutine. This subroutine is required to compute some feasible output for any given input of the associated algorithmic problem.&lt;br /&gt;
# If an objective function is given, the objective function value of the generated solution is a criterion for the quality of an algorithm. More specifically:&lt;br /&gt;
## in case of ''minimization'': a low objective function value is favored;&lt;br /&gt;
## in case of ''maximization'': a high objective function value is favored.&lt;br /&gt;
&lt;br /&gt;
== Iterative and recursive algorithms ==&lt;br /&gt;
&lt;br /&gt;
# Basically, a non-trivial algorithm is a loop or recursion plus some '''preprocessing''' (a.k.a. '''initialization''') and/or some '''postprocessing'''.&lt;br /&gt;
# If an algorithm consists of two or more loops/recursions that are strictly disjoint parts of an implementation of the algorithm (and thus executed strictly after each other), it may be viewed as two or more algorithms that are executed after each other. Therefore, without loss of generality, an algorithm may be viewed as ''one'' loop or ''one'' recursion plus some pre/postprocessing.&lt;br /&gt;
# An iteration of a loop may contain another loop or recursion. Analogously, a recursive call may contain another loop or recursion. We say that a loop/recursion inside another loop/recursion is '''nested''' or the '''inner''' loop/recursion. Correspondingly, the '''nesting''' loop/recursion is the '''outer''' one.&lt;br /&gt;
&lt;br /&gt;
'''Remarks:'''&lt;br /&gt;
# Clearly, a loop may be transformed into a recursion and vice versa. So, every algorithm may be formulated either as a loop or as a recursion. However, in most cases, one of these two options looks simpler and &amp;quot;more natural&amp;quot; than the other one.&lt;br /&gt;
# Formulating an algorithm as a loop might be favorable in many cases because a loop allows more control than a recursion. More specifically, a loop may be implemented as an [http://en.wikipedia.org/wiki/Iterator iterator] whose method for going one step forward implements the execution of one iteration of the loop. Such an implementation allows one to terminate the loop early, to suspend execution and resume execution later on, and to execute some additional instructions between two iterations (e.g. for visualization or for testing purposes). The crucial point is that, for all of these purposes, the code of the algorithm need not be modified (the source need not even be available).&lt;br /&gt;
&lt;br /&gt;
== Correctness ==&lt;br /&gt;
&lt;br /&gt;
An algorithm is correct if three conditions are fulfilled for every feasible input:&lt;br /&gt;
# All instructions are '''well-defined'''. For example, instructions that divide by zero, exceed the range of a numerical type (''overflow''), access an array component outside the array's index range, or access an attribute or method of a '''void''' / '''null''' pointer are ill-defined.&lt;br /&gt;
# Each loop and recursion '''terminates''', that is, the break condition is fulfilled after a finite number of iterative / recursive steps. In case of a recursion with more than one recursive call inside an execution of the recursive routine, this means that every branch of the recursion tree must reach the break condition after a finite number of recursive descents.&lt;br /&gt;
# If the given input admits no feasible solution, this information is delivered by the algorithm. Otherwise, the algorithm delivers a feasible output.&lt;br /&gt;
&lt;br /&gt;
== Invariant ==&lt;br /&gt;
&lt;br /&gt;
# The '''invariant''' of a loop consists of all assertions that are true immediately before the first iteration, immediately after the last iteration, and between two iterations. These assertions are called the '''invariant assertions''' of this loop.&lt;br /&gt;
# The '''invariant''' of a recursion consists of all assertions that are fulfilled immediately before '''or''' after each recursive call (see [[#Induction in case of a recursion|below]]). These assertions are called the '''invariant assertions''' of this recursion.&lt;br /&gt;
&lt;br /&gt;
'''Remarks:'''&lt;br /&gt;
# If an algorithm only accesses its input and its own local data, the invariant assertions are assertions about its input and these local data and nothing else.&lt;br /&gt;
# The invariant of an algorithm has a specific function in the [[#Correctness proofs|correctness proof]] for the algorithm. Typically, in documentation, only those invariant assertions are stated that are relevant for this purpose.&lt;br /&gt;
# The invariant is also essential for testing purposes: a [http://en.wikipedia.org/wiki/White-box_testing white-box test] of a loop/recursion should definitely check maintenance of all relevant invariant assertions.&lt;br /&gt;
# Last but not least, the invariant is the very &amp;quot;essence&amp;quot; of the algorithmic idea; deeply understanding the algorithm amounts to understanding the invariant.&lt;br /&gt;
&lt;br /&gt;
== Variant ==&lt;br /&gt;
&lt;br /&gt;
# The '''variant''' of a loop consists of all differences between the state immediately before an iteration and the state immediately after that iteration (typically, but not exclusively, the values of integral loop variables).&lt;br /&gt;
# The '''variant''' of a recursion consists of all differences between the input of a recursive call &amp;lt;math&amp;gt;X&amp;lt;/math&amp;gt; and the inputs of all recursive calls that are directly called in &amp;lt;math&amp;gt;X&amp;lt;/math&amp;gt; (typically, but not exclusively, some integral measure of the size of the input).&lt;br /&gt;
&lt;br /&gt;
'''Remarks:'''&lt;br /&gt;
# If an algorithm only accesses its input and its own local data, the variant describes changes to its input and these local data and nothing else.&lt;br /&gt;
# The variant of an algorithm has a specific function in the [[#Correctness proofs|correctness proof]] for the algorithm. Typically, in documentation, only those variant assertions are stated that are relevant for this purpose.&lt;br /&gt;
# The variant is also essential for testing purposes: a [http://en.wikipedia.org/wiki/White-box_testing white-box test] of a loop/recursion should definitely check all relevant changes.&lt;br /&gt;
&lt;br /&gt;
== Correctness proofs ==&lt;br /&gt;
&lt;br /&gt;
# Provided the invariant and the variant have been established:&lt;br /&gt;
## The variant implies that the break condition will be fulfilled,&lt;br /&gt;
### in case of a loop: after a finite number of iterations;&lt;br /&gt;
### in case of a recursion: after a finite number of recursive calls in each branch of the recursion tree.&lt;br /&gt;
## Correctness of the output follows from what the invariant says about the state immediately after the last iteration.&lt;br /&gt;
# Proving the invariant amounts to an induction on the number of iterations performed so far / the recursion parameter over which the variant of the recursion is defined.&lt;br /&gt;
## The invariant is the induction hypothesis.&lt;br /&gt;
## Proving the induction basis amounts to proving that the preprocessing (initialization) establishes the invariant.&lt;br /&gt;
## Proving the induction step amounts to proving that an iteration / a recursive call maintains the invariant.&lt;br /&gt;
&lt;br /&gt;
'''Remarks:'''&lt;br /&gt;
# In a nutshell, the variant proves termination, and the invariant proves that the output is correct.&lt;br /&gt;
# Typically, well-definedness of all operations is not considered explicitly in correctness proofs, because well-definedness is non-obvious only in rare cases.&lt;br /&gt;
# In many cases, correctness of the output follows immediately from what the invariant says about the state immediately after the last iteration; in other cases, additional arguments are necessary to prove correctness of the output.&lt;br /&gt;
# The variant is also essential to estimate, asymptotically, the number of iterations / the recursion depth.&lt;br /&gt;
&lt;br /&gt;
== Induction in case of a recursion ==&lt;br /&gt;
In principle, there are two ways to define the induction, which are in a sense mutually opposite. Typically, only one of these two ways is viable:&lt;br /&gt;
# The depth of a recursive call is the induction variable; the original call to this subroutine has to ensure the induction basis; the induction step has to be ensured by every descent in the recursion tree. Example: [[Binary search|binary search]].&lt;br /&gt;
# The induction variable is some parameter of the input. The invariant simply says that the output of a recursive call is correct. In a descent in the recursion tree, that parameter must strictly decrease. The recursion anchor must ensure the induction basis. Examples: [[Mergesort|mergesort]] and [[Quicksort|quicksort]]; in these examples, the size of the sequence is the induction variable.&lt;/div&gt;</summary>
		<author><name>Weihe</name></author>
	</entry>
	<entry>
		<id>https://wiki.algo.informatik.tu-darmstadt.de/index.php?title=Algorithms_and_correctness&amp;diff=3842</id>
		<title>Algorithms and correctness</title>
		<link rel="alternate" type="text/html" href="https://wiki.algo.informatik.tu-darmstadt.de/index.php?title=Algorithms_and_correctness&amp;diff=3842"/>
		<updated>2016-04-27T06:40:55Z</updated>

		<summary type="html">&lt;p&gt;Weihe: /* Induction in case of a recursion */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Algorithmic problem ==&lt;br /&gt;
&lt;br /&gt;
An algorithmic problem is described by:&lt;br /&gt;
# a set of feasible '''inputs''' (a.k.a. '''instances''');&lt;br /&gt;
# for each input a set of '''outputs''' (a.k.a. '''feasible solutions''' or '''solutions''', for short), which may be empty;&lt;br /&gt;
# optionally, an '''objective function''', which assigns a real number to each feasible solution (the ''quality'' of the solution). If an objective function is specified, it is also specified whether the objective function is to be ''minimized'' or to be ''maximized''.&lt;br /&gt;
&lt;br /&gt;
'''Remark:'''&lt;br /&gt;
Typically, sets such as the set of all feasible inputs to a problem and the set of all feasible outputs to an input are given by a [https://en.wikipedia.org/wiki/Genus%E2%80%93differentia_definition genus-differentia definition], which is an example of [https://en.wikipedia.org/wiki/Intensional_definition intensional definitions]. More specifically, an [https://en.wikipedia.org/wiki/Abstract_data_type abstract data type] is given along with the restrictions that must be fulfilled by feasible inputs and outputs, respectively. Sometimes, the abstract data type is called the set of all inputs / outputs, and the members of the respective set that fulfill the restrictions are then the ''feasible'' inputs / outputs.&lt;br /&gt;
&lt;br /&gt;
== Instructions, operations and subroutines ==&lt;br /&gt;
&lt;br /&gt;
[http://en.wikipedia.org/wiki/Imperative_programming Imperative programming] languages provide means to specify '''instructions''' (a.k.a. '''statements'''). An instruction specifies a sequence of '''(machine) operations'''; executing the instruction on a machine means that the machine performs this sequence of operations. Instructions may be bundled into '''subroutines''' (a.k.a. '''procedures''', '''functions''', '''methods''').&lt;br /&gt;
&lt;br /&gt;
== (Source) programs and processes ==&lt;br /&gt;
&lt;br /&gt;
A '''program''' is a sequence of instructions in some programming language. A program written in a high-level programming language, which is to be compiled or interpreted, is often called a '''source program'''. A '''process''' means the execution of a program on a real or virtual machine.&lt;br /&gt;
&lt;br /&gt;
== Termination ==&lt;br /&gt;
&lt;br /&gt;
# A loop has at least one '''break condition'''; this is a part of the (source) program of the loop. '''Termination''' of a loop refers to the process and means that the process evaluates the break condition and this evaluation yields '''true'''.&lt;br /&gt;
# '''Termination''' of a subroutine means that the process runs into an unconditional return-statement or, alternatively, runs into a conditional return-statement and this condition yields '''true''' (common high-level languages only provide unconditional return-statements; a return statement at the end of a void-subroutine may be implicit in many languages).&lt;br /&gt;
# '''Termination''' of a recursion means that every branch of the recursion has a finite depth, that is, runs into a recursive call that does not call the recursive subroutine any further.&lt;br /&gt;
&lt;br /&gt;
== Algorithm ==&lt;br /&gt;
&lt;br /&gt;
# An algorithm is associated with an [[#Algorithmic problem|algorithmic problem]].&lt;br /&gt;
# An algorithm is an abstract description of a process and can be formulated (a.k.a. '''implemented''') as a subroutine. This subroutine is required to compute some feasible output for any given input of the associated algorithmic problem.&lt;br /&gt;
# If an objective function is given, the objective function value of the generated solution is a criterion for the quality of an algorithm. More specifically:&lt;br /&gt;
## in case of ''minimization'': a low objective function value is favored;&lt;br /&gt;
## in case of ''maximization'': a high objective function value is favored.&lt;br /&gt;
&lt;br /&gt;
== Iterative and recursive algorithms ==&lt;br /&gt;
&lt;br /&gt;
# Basically, a non-trivial algorithm is a loop or recursion plus some '''preprocessing''' (a.k.a. '''initialization''') and/or some '''postprocessing'''.&lt;br /&gt;
# If an algorithm consists of two or more loops/recursions that are strictly disjoint parts of an implementation of the algorithm (and thus executed strictly after each other), it may be viewed as two or more algorithms that are executed after each other. Therefore, without loss of generality, an algorithm may be viewed as ''one'' loop or ''one'' recursion plus some pre/postprocessing.&lt;br /&gt;
# An iteration of a loop may contain another loop or recursion. Analogously, a recursive call may contain another loop or recursion. We say that a loop/recursion inside another loop/recursion is '''nested''' or the '''inner''' loop/recursion. Correspondingly, the '''nesting''' loop/recursion is the '''outer''' one.&lt;br /&gt;
&lt;br /&gt;
'''Remarks:'''&lt;br /&gt;
# Clearly, a loop may be transformed into a recursion and vice versa. So, every algorithm may be formulated either as a loop or as a recursion. However, in most cases, one of these two options looks simpler and &amp;quot;more natural&amp;quot; than the other one.&lt;br /&gt;
# Formulating an algorithm as a loop might be favorable in many cases because a loop allows more control than a recursion. More specifically, a loop may be implemented as an [http://en.wikipedia.org/wiki/Iterator iterator] whose method for going one step forward implements the execution of one iteration of the loop. Such an implementation allows one to terminate the loop early, to suspend execution and resume execution later on, and to execute some additional instructions between two iterations (e.g. for visualization or for testing purposes). The crucial point is that, for all of these purposes, the code of the algorithm need not be modified (the source need not even be available).&lt;br /&gt;
&lt;br /&gt;
== Correctness ==&lt;br /&gt;
&lt;br /&gt;
An algorithm is correct if three conditions are fulfilled for every feasible input:&lt;br /&gt;
# All instructions are '''well-defined'''. For example, instructions that divide by zero, exceed the range of a numerical type (''overflow''), access an array component outside the array's index range, or access an attribute or method of a '''void''' / '''null''' pointer are ill-defined.&lt;br /&gt;
# Each loop and recursion '''terminates''', that is, the break condition is fulfilled after a finite number of iterative / recursive steps. In case of a recursion with more than one recursive call inside an execution of the recursive routine, this means that every branch of the recursion tree must reach the break condition after a finite number of recursive descents.&lt;br /&gt;
# If the given input admits no feasible solution, this information is delivered by the algorithm. Otherwise, the algorithm delivers a feasible output.&lt;br /&gt;
&lt;br /&gt;
== Invariant ==&lt;br /&gt;
&lt;br /&gt;
# The '''invariant''' of a loop consists of all assertions that are true immediately before the first iteration, immediately after the last iteration, and between two iterations. These assertions are called the '''invariant assertions''' of this loop.&lt;br /&gt;
# The '''invariant''' of a recursion consists of all assertions that are fulfilled immediately after each recursive call. These assertions are called the '''invariant assertions''' of this recursion.&lt;br /&gt;
&lt;br /&gt;
'''Remarks:'''&lt;br /&gt;
# If an algorithm only accesses its input and its own local data, the invariant assertions are assertions about its input and these local data and nothing else.&lt;br /&gt;
# The invariant of an algorithm has a specific function in the [[#Correctness proofs|correctness proof]] for the algorithm. Typically, in documentation, only those invariant assertions are stated that are relevant for this purpose.&lt;br /&gt;
# The invariant is also essential for testing purposes: a [http://en.wikipedia.org/wiki/White-box_testing white-box test] of a loop/recursion should definitely check maintenance of all relevant invariant assertions.&lt;br /&gt;
# Last but not least, the invariant is the very &amp;quot;essence&amp;quot; of the algorithmic idea; deeply understanding the algorithm amounts to understanding the invariant.&lt;br /&gt;
&lt;br /&gt;
== Variant ==&lt;br /&gt;
&lt;br /&gt;
# The '''variant''' of a loop consists of all differences between the state immediately before an iteration and the state immediately after that iteration (typically, but not exclusively, the values of integral loop variables).&lt;br /&gt;
# The '''variant''' of a recursion consists of all differences between the input of a recursive call &amp;lt;math&amp;gt;X&amp;lt;/math&amp;gt; and the inputs of all recursive calls that are directly called in &amp;lt;math&amp;gt;X&amp;lt;/math&amp;gt; (typically, but not exclusively, some integral measure of the size of the input).&lt;br /&gt;
&lt;br /&gt;
'''Remarks:'''&lt;br /&gt;
# If an algorithm only accesses its input and its own local data, the variant describes changes to its input and these local data and nothing else.&lt;br /&gt;
# The variant of an algorithm has a specific function in the [[#Correctness proofs|correctness proof]] for the algorithm. Typically, in documentation, only those variant assertions are stated that are relevant for this purpose.&lt;br /&gt;
# The variant is also essential for testing purposes: a [http://en.wikipedia.org/wiki/White-box_testing white-box test] of a loop/recursion should definitely check all relevant changes.&lt;br /&gt;
&lt;br /&gt;
== Correctness proofs ==&lt;br /&gt;
&lt;br /&gt;
# Provided the invariant and the variant have been established:&lt;br /&gt;
## The variant implies that the break condition will be fulfilled,&lt;br /&gt;
### in case of a loop: after a finite number of iterations;&lt;br /&gt;
### in case of a recursion: after a finite number of recursive calls in each branch of the recursion tree.&lt;br /&gt;
## Correctness of the output follows from what the invariant says about the state immediately after the last iteration.&lt;br /&gt;
# Proving the invariant amounts to an induction on the number of iterations performed so far / the recursion parameter over which the variant of the recursion is defined.&lt;br /&gt;
## The invariant is the induction hypothesis.&lt;br /&gt;
## Proving the induction basis amounts to proving that the preprocessing (initialization) establishes the invariant.&lt;br /&gt;
## Proving the induction step amounts to proving that an iteration / a recursive call maintains the invariant.&lt;br /&gt;
&lt;br /&gt;
'''Remarks:'''&lt;br /&gt;
# In a nutshell, the variant proves termination, and the invariant proves that the output is correct.&lt;br /&gt;
# Typically, well-definedness of all operations is not considered explicitly in correctness proofs, because well-definedness is non-obvious only in rare cases.&lt;br /&gt;
# In many cases, correctness of the output follows immediately from what the invariant says about the state immediately after the last iteration; in other cases, additional arguments are necessary to prove correctness of the output.&lt;br /&gt;
# The variant is also essential to estimate, asymptotically, the number of iterations / the recursion depth.&lt;br /&gt;
&lt;br /&gt;
== Induction in case of a recursion ==&lt;br /&gt;
In principle, there are two ways to define the induction, which are in a sense mutually opposite. Typically, only one of these two ways is viable:&lt;br /&gt;
# The depth of a recursive call is the induction variable; the original call to this subroutine has to ensure the induction basis; the induction step has to be ensured by every descent in the recursion tree. Example: [[Binary search|binary search]].&lt;br /&gt;
# The induction variable is some parameter of the input. The invariant simply says that the output of a recursive call is correct. In a descent in the recursion tree, that parameter must strictly decrease. The recursion anchor must ensure the induction basis. Examples: [[Mergesort|mergesort]] and [[Quicksort|quicksort]]; in these examples, the size of the sequence is the induction variable.&lt;/div&gt;</summary>
		<author><name>Weihe</name></author>
	</entry>
	<entry>
		<id>https://wiki.algo.informatik.tu-darmstadt.de/index.php?title=Algorithms_and_correctness&amp;diff=3841</id>
		<title>Algorithms and correctness</title>
		<link rel="alternate" type="text/html" href="https://wiki.algo.informatik.tu-darmstadt.de/index.php?title=Algorithms_and_correctness&amp;diff=3841"/>
		<updated>2016-04-27T06:40:33Z</updated>

		<summary type="html">&lt;p&gt;Weihe: /* Induction in case of a recursion */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Algorithmic problem ==&lt;br /&gt;
&lt;br /&gt;
An algorithmic problem is described by:&lt;br /&gt;
# a set of feasible '''inputs''' (a.k.a. '''instances''');&lt;br /&gt;
# for each input a set of '''outputs''' (a.k.a. '''feasible solutions''' or '''solutions''', for short), which may be empty;&lt;br /&gt;
# optionally, an '''objective function''', which assigns a real number to each feasible solution (the ''quality'' of the solution). If an objective function is specified, it is also specified whether the objective function is to be ''minimized'' or to be ''maximized''.&lt;br /&gt;
&lt;br /&gt;
'''Remark:'''&lt;br /&gt;
Typically, sets such as the set of all feasible inputs to a problem and the set of all feasible outputs to an input are given by a [https://en.wikipedia.org/wiki/Genus%E2%80%93differentia_definition genus-differentia definition], which is an example of [https://en.wikipedia.org/wiki/Intensional_definition intensional definitions]. More specifically, an [https://en.wikipedia.org/wiki/Abstract_data_type abstract data type] is given along with the restrictions that must be fulfilled by feasible inputs and outputs, respectively. Sometimes, the abstract data type is called the set of all inputs / outputs, and the members of the respective set that fulfill the restrictions are then the ''feasible'' inputs / outputs.&lt;br /&gt;
&lt;br /&gt;
== Instructions, operations and subroutines ==&lt;br /&gt;
&lt;br /&gt;
[http://en.wikipedia.org/wiki/Imperative_programming Imperative programming] languages offer possibilities to specify '''instructions''' (a.k.a. '''statements'''). An instruction specifies a sequence of '''(machine) operations'''; executing the instruction on a machine means that the machine performs this sequence of operations. Instructions may be bundled as '''subroutines''' (a.k.a. '''procedures''', '''functions''', '''methods''').&lt;br /&gt;
&lt;br /&gt;
== (Source) programs and processes ==&lt;br /&gt;
&lt;br /&gt;
A '''program''' is a sequence of instructions in some programming language. A program written in a higher programming language, which is to be compiled or interpreted, is often called a '''source program'''. A '''process''' means the execution of a program on a real or virtual machine.&lt;br /&gt;
&lt;br /&gt;
== Termination ==&lt;br /&gt;
&lt;br /&gt;
# A loop has at least one '''break condition'''; this is a part of the (source) program of the loop. '''Termination''' of a loop refers to the process and means that the process evaluates the break condition and this evaluation yields '''true'''.&lt;br /&gt;
# '''Termination''' of a subroutine means that the process runs into an unconditional return-statement or, alternatively, runs into a conditional return-statement and this condition yields '''true''' (common high-level languages only provide unconditional return-statements; a return statement at the end of a void-subroutine may be implicit in many languages).&lt;br /&gt;
# '''Termination''' of a recursion means that every branch of the recursion has a finite depth, that is, runs into a recursive call that does not call the recursive subroutine any further.&lt;br /&gt;
&lt;br /&gt;
== Algorithm ==&lt;br /&gt;
&lt;br /&gt;
# An algorithm is associated with an [[#Algorithmic problem|algorithmic problem]].&lt;br /&gt;
# An algorithm is an abstract description of a process and can be formulated (a.k.a. '''implemented''') as a subroutine. This subroutine is required to compute some feasible output for any given input of the associated algorithmic problem.&lt;br /&gt;
# If an objective function is given, the objective function value of the generated solution is a criterion for the quality of an algorithm. More specifically:&lt;br /&gt;
## in case of ''minimization'': a low objective function value is favored;&lt;br /&gt;
## in case of ''maximization'': a high objective function value is favored.&lt;br /&gt;
&lt;br /&gt;
== Iterative and recursive algorithms ==&lt;br /&gt;
&lt;br /&gt;
# Basically, a non-trivial algorithm is a loop or recursion plus some '''preprocessing''' (a.k.a. '''initialization''') and/or some '''postprocessing'''.&lt;br /&gt;
# If an algorithm consists of two or more loops/recursions that are strictly disjoint parts of an implementation of the algorithm (and thus executed strictly after each other), it may be viewed as two or more algorithms that are executed after each other. Therefore, without loss of generality, an algorithm may be viewed as ''one'' loop or ''one'' recursion plus some pre/postprocessing.&lt;br /&gt;
# An iteration of a loop may contain another loop or recursion. Analogously, a recursive call may contain another loop or recursion. We say that a loop/recursion inside another loop/recursion is '''nested''' or the '''inner''' loop/recursion. Correspondingly, the '''nesting''' loop/recursion is the '''outer''' one.&lt;br /&gt;
&lt;br /&gt;
'''Remarks:'''&lt;br /&gt;
# Clearly, a loop may be transformed into a recursion and vice versa. So, every algorithm may be formulated either as a loop or as a recursion. However, in most cases, one of these two options looks simpler and &amp;quot;more natural&amp;quot; than the other one.&lt;br /&gt;
# Formulating an algorithm as a loop might be favorable in many cases because a loop allows more control than a recursion. More specifically, a loop may be implemented as an [http://en.wikipedia.org/wiki/Iterator iterator] whose method for going one step forward implements the execution of one iteration of the loop. Such an implementation allows one to terminate the loop early, to suspend execution and resume execution later on, and to execute some additional instructions between two iterations (e.g. for visualization or for testing purposes). The crucial point is that, for all of these purposes, the code of the algorithm need not be modified (the source need not even be available).&lt;br /&gt;
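As a minimal illustrative sketch (not from the article itself), the loop-as-iterator idea can be realized in Python; the class name CountdownLoop and its behavior are our own assumptions:

```python
class CountdownLoop:
    # A loop implemented as an iterator: each call to __next__ executes
    # exactly one iteration, so a client can suspend, resume, or stop the
    # loop early without modifying (or even seeing) the algorithm's code.
    def __init__(self, n):
        self.n = n

    def __iter__(self):
        return self

    def __next__(self):
        if self.n == 0:     # break condition of the loop
            raise StopIteration
        self.n -= 1         # one iteration of the loop body
        return self.n
```

Running the loop to completion is `list(CountdownLoop(3))`, which yields `[2, 1, 0]`; calling `next` by hand executes exactly one iteration at a time, which is the control the remark describes.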
&lt;br /&gt;
== Correctness ==&lt;br /&gt;
&lt;br /&gt;
An algorithm is correct if three conditions are fulfilled for every feasible input:&lt;br /&gt;
# All instructions are '''well-defined'''. For example, instructions that divide by zero, exceed the range of a numerical type (''overflow''), access an array component outside the array's index range, or access an attribute or method of a '''void''' / '''null''' pointer are ill-defined.&lt;br /&gt;
# Each loop and recursion '''terminates''', that is, the break condition is fulfilled after a finite number of iterative / recursive steps. In case of a recursion with more than one recursive call inside an execution of the recursive routine, this means that every branch of the recursion tree must reach the break condition after a finite number of recursive descents.&lt;br /&gt;
# If the given input admits no feasible solution, this information is delivered by the algorithm. Otherwise, the algorithm delivers a feasible output.&lt;br /&gt;
&lt;br /&gt;
== Invariant ==&lt;br /&gt;
&lt;br /&gt;
# The '''invariant''' of a loop consists of all assertions that are true immediately before the first iteration, immediately after the last iteration, and between two iterations. These assertions are called the '''invariant assertions''' of this loop.&lt;br /&gt;
# The '''invariant''' of a recursion consists of all assertions that are fulfilled immediately after each recursive call. These assertions are called the '''invariant assertions''' of this recursion.&lt;br /&gt;
&lt;br /&gt;
'''Remarks:'''&lt;br /&gt;
# If an algorithm only accesses its input and its own local data, the invariant assertions are assertions about its input and these local data and nothing else.&lt;br /&gt;
# The invariant of an algorithm has a specific function in the [[#Correctness proofs|correctness proof]] for the algorithm. Typically, in a documentation, only those invariant assertions are stated that are relevant for this purpose. &lt;br /&gt;
# The invariant is also essential for testing purposes: a [http://en.wikipedia.org/wiki/White-box_testing white-box test] of a loop/recursion should definitely check maintenance of all relevant invariant assertions.&lt;br /&gt;
# Last but not least, the invariant is the very &amp;quot;essence&amp;quot; of the algorithmic idea; deeply understanding the algorithm amounts to understanding the invariant.&lt;br /&gt;
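As a minimal sketch of these remarks (our own example, not from the article), the invariant of a simple summation loop can be stated and checked directly with assertions:

```python
def running_sum(values):
    # Invariant assertion: before iteration i, total equals the sum of the
    # first i elements. It holds before the first iteration (total == 0),
    # between any two iterations, and after the last one.
    total = 0
    for i, v in enumerate(values):
        assert total == sum(values[:i])   # invariant before iteration i
        total += v
    assert total == sum(values)           # invariant after the last iteration
    return total
```

The final assertion is exactly what a correctness proof reads off the invariant after the last iteration, and the in-loop assertion is what a white-box test would check.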
&lt;br /&gt;
== Variant ==&lt;br /&gt;
&lt;br /&gt;
# The '''variant''' of a loop consists of all differences between the state immediately before an iteration and the state immediately after that iteration (typically, but not exclusively, the values of integral loop variables).&lt;br /&gt;
# The '''variant''' of a recursion consists of all differences between the input of a recursive call &amp;lt;math&amp;gt;X&amp;lt;/math&amp;gt; and the inputs of all recursive calls that are directly called in &amp;lt;math&amp;gt;X&amp;lt;/math&amp;gt; (typically, but not exclusively, some integral measure of the size of the input).&lt;br /&gt;
&lt;br /&gt;
'''Remarks:'''&lt;br /&gt;
# If an algorithm only accesses its input and its own local data, the variant describes changes to the contents of its input and these local data, and nothing else.&lt;br /&gt;
# The variant of an algorithm has a specific function in the [[#Correctness proofs|correctness proof]] for the algorithm. Typically, in a documentation, only those variant assertions are stated that are relevant for this purpose. &lt;br /&gt;
# The variant is also essential for testing purposes: a [http://en.wikipedia.org/wiki/White-box_testing white-box test] of a loop/recursion should definitely check all relevant changes.&lt;br /&gt;
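A minimal sketch of a variant (our own illustration, not from the article): the loop variable strictly decreases in every iteration, which bounds the number of iterations and hence proves termination.

```python
def countdown_steps(n):
    # Variant: n strictly decreases by 1 in every iteration, and the loop
    # breaks when n reaches 0, so at most n iterations are executed.
    steps = 0
    while n > 0:        # break condition: n == 0
        old_n = n
        n -= 1          # the variant: n decreases strictly
        steps += 1
        assert n == old_n - 1   # relevant change a white-box test would check
    return steps
```

The assertion checks exactly the "relevant change" between the state before and after an iteration that the remark above refers to.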
&lt;br /&gt;
== Correctness proofs ==&lt;br /&gt;
&lt;br /&gt;
# Assuming the variant and the invariant have been proved:&lt;br /&gt;
## The variant implies that the break condition will be fulfilled,&lt;br /&gt;
### in case of a loop: after a finite number of iterations;&lt;br /&gt;
### in case of a recursion: after a finite number of recursive calls in each branch of the recursion tree.&lt;br /&gt;
## Correctness of the output follows from what the invariant says about the state immediately after the last iteration.&lt;br /&gt;
# Proving the invariant amounts to an induction on the number of iterations performed so far / the recursion parameter over which the variant of the recursion is defined.&lt;br /&gt;
## The invariant is the induction hypothesis.&lt;br /&gt;
## Proving the induction basis amounts to proving that the preprocessing (initialization) establishes the invariant.&lt;br /&gt;
## Proving the induction step amounts to proving that an iteration / a recursive call maintains the invariant.&lt;br /&gt;
&lt;br /&gt;
'''Remarks:'''&lt;br /&gt;
# In a nutshell, the variant proves termination, and the invariant proves that the output is correct.&lt;br /&gt;
# Typically, well-definedness of all operations is not considered explicitly in correctness proofs. The reason is that well-definedness is non-obvious in rare cases only.&lt;br /&gt;
# In many cases, correctness of the output follows immediately from what the invariant says about the state immediately after the last iteration; in other cases, additional arguments are necessary to prove correctness of the output.&lt;br /&gt;
# The variant is also essential to estimate, asymptotically, the number of iterations / the recursion depth.&lt;br /&gt;
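The proof scheme above (basis established by the initialization, step maintained by each iteration, correctness read off the invariant after the last iteration) can be illustrated with insertion sort; this sketch and its assertion names are our own, not part of the article:

```python
def insertion_sort(a):
    # Invariant (induction hypothesis): before iteration i, the prefix
    # a[0..i-1] is sorted.
    # Basis: trivially established for i == 1 (a one-element prefix is sorted).
    # Step: inserting a[i] into the sorted prefix maintains the invariant.
    a = list(a)
    for i in range(1, len(a)):
        assert a[:i] == sorted(a[:i])     # invariant before iteration i
        j = i
        while j > 0 and a[j - 1] > a[j]:  # variant: j strictly decreases
            a[j - 1], a[j] = a[j], a[j - 1]
            j -= 1
    assert a == sorted(a)  # invariant after the last iteration: output correct
    return a
```

Here the variant of the inner loop (`j` strictly decreases toward 0) also bounds the number of iterations, which is the asymptotic estimate mentioned in the last remark.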
&lt;br /&gt;
== Induction in case of a recursion ==&lt;br /&gt;
In principle, there are two, in a sense mutually opposite, ways to define the induction. Typically, only one of these two ways is viable:&lt;br /&gt;
# The depth of a recursive call is the induction variable; the original call to this subroutine has to ensure the induction basis; the induction step has to be ensured by every descent in the recursion tree. Example: [[Binary search|binary search]].&lt;br /&gt;
# The induction variable is some parameter of the input. The invariant simply says that the output of a recursive call is correct. In a descent in the recursion tree, that parameter must strictly decrease. The recursion anchor must ensure the induction basis. Examples: [[Mergesort|mergesort]] and [[Quicksort|quicksort]]; in these examples, the size of the sequence is the induction variable.&lt;/div&gt;</summary>
		<author><name>Weihe</name></author>
	</entry>
	<entry>
		<id>https://wiki.algo.informatik.tu-darmstadt.de/index.php?title=Algorithms_and_correctness&amp;diff=3840</id>
		<title>Algorithms and correctness</title>
		<link rel="alternate" type="text/html" href="https://wiki.algo.informatik.tu-darmstadt.de/index.php?title=Algorithms_and_correctness&amp;diff=3840"/>
		<updated>2016-04-27T06:40:23Z</updated>

		<summary type="html">&lt;p&gt;Weihe: /* Induction in case of a recursion */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Algorithmic problem ==&lt;br /&gt;
&lt;br /&gt;
An algorithmic problem is described by:&lt;br /&gt;
# a set of feasible '''inputs''' (a.k.a. '''instances''');&lt;br /&gt;
# for each input a set of '''outputs''' (a.k.a. '''feasible solutions''' or '''solutions''', for short), which may be empty;&lt;br /&gt;
# optionally, an '''objective function''', which assigns a real number to each feasible solution (the ''quality'' of the solution). If an objective function is specified, it is also specified whether the objective function is to be ''minimized'' or to be ''maximized''.&lt;br /&gt;
&lt;br /&gt;
'''Remark:'''&lt;br /&gt;
Typically, sets such as the set of all feasible inputs to a problem and the set of all feasible outputs to an input are given by a [https://en.wikipedia.org/wiki/Genus%E2%80%93differentia_definition genus-differentia definition], which is an example of [https://en.wikipedia.org/wiki/Intensional_definition intensional definitions]. More specifically, an [https://en.wikipedia.org/wiki/Abstract_data_type abstract data type] is given along with the restrictions that must be fulfilled by feasible inputs and outputs, respectively. Sometimes, the abstract data type is called the set of all inputs / outputs, and the members of the respective set that fulfill the restrictions are then the ''feasible'' inputs / outputs.&lt;br /&gt;
&lt;br /&gt;
== Instructions, operations and subroutines ==&lt;br /&gt;
&lt;br /&gt;
[http://en.wikipedia.org/wiki/Imperative_programming Imperative programming] languages offer possibilities to specify '''instructions''' (a.k.a. '''statements'''). An instruction specifies a sequence of '''(machine) operations'''; executing the instruction on a machine means that the machine performs this sequence of operations. Instructions may be bundled as '''subroutines''' (a.k.a. '''procedures''', '''functions''', '''methods''').&lt;br /&gt;
&lt;br /&gt;
== (Source) programs and processes ==&lt;br /&gt;
&lt;br /&gt;
A '''program''' is a sequence of instructions in some programming language. A program written in a higher programming language, which is to be compiled or interpreted, is often called a '''source program'''. A '''process''' means the execution of a program on a real or virtual machine.&lt;br /&gt;
&lt;br /&gt;
== Termination ==&lt;br /&gt;
&lt;br /&gt;
# A loop has at least one '''break condition'''; this is a part of the (source) program of the loop. '''Termination''' of a loop refers to the process and means that the process evaluates the break condition and this evaluation yields '''true'''.&lt;br /&gt;
# '''Termination''' of a subroutine means that the process runs into an unconditional return-statement or, alternatively, runs into a conditional return-statement and this condition yields '''true''' (common high-level languages only provide unconditional return-statements; a return statement at the end of a void-subroutine may be implicit in many languages).&lt;br /&gt;
# '''Termination''' of a recursion means that every branch of the recursion has a finite depth, that is, runs into a recursive call that does not call the recursive subroutine any further.&lt;br /&gt;
&lt;br /&gt;
== Algorithm ==&lt;br /&gt;
&lt;br /&gt;
# An algorithm is associated with an [[#Algorithmic problem|algorithmic problem]].&lt;br /&gt;
# An algorithm is an abstract description of a process and can be formulated (a.k.a. '''implemented''') as a subroutine. This subroutine is required to compute some feasible output for any given input of the associated algorithmic problem.&lt;br /&gt;
# If an objective function is given, the objective function value of the generated solution is a criterion for the quality of an algorithm. More specifically:&lt;br /&gt;
## in case of ''minimization'': a low objective function value is favored;&lt;br /&gt;
## in case of ''maximization'': a high objective function value is favored.&lt;br /&gt;
&lt;br /&gt;
== Iterative and recursive algorithms ==&lt;br /&gt;
&lt;br /&gt;
# Basically, a non-trivial algorithm is a loop or recursion plus some '''preprocessing''' (a.k.a. '''initialization''') and/or some '''postprocessing'''.&lt;br /&gt;
# If an algorithm consists of two or more loops/recursions that are strictly disjoint parts of an implementation of the algorithm (and thus executed strictly after each other), it may be viewed as two or more algorithms that are executed after each other. Therefore, without loss of generality, an algorithm may be viewed as ''one'' loop or ''one'' recursion plus some pre/postprocessing.&lt;br /&gt;
# An iteration of a loop may contain another loop or recursion. Analogously, a recursive call may contain another loop or recursion. We say that a loop/recursion inside another loop/recursion is '''nested''' or the '''inner''' loop/recursion. Correspondingly, the '''nesting''' loop/recursion is the '''outer''' one.&lt;br /&gt;
&lt;br /&gt;
'''Remarks:'''&lt;br /&gt;
# Clearly, a loop may be transformed into a recursion and vice versa. So, every algorithm may be formulated either as a loop or as a recursion. However, in most cases, one of these two options looks simpler and &amp;quot;more natural&amp;quot; than the other one.&lt;br /&gt;
# Formulating an algorithm as a loop might be favorable in many cases because a loop allows more control than a recursion. More specifically, a loop may be implemented as an [http://en.wikipedia.org/wiki/Iterator iterator] whose method for going one step forward implements the execution of one iteration of the loop. Such an implementation allows one to terminate the loop early, to suspend execution and resume execution later on, and to execute some additional instructions between two iterations (e.g. for visualization or for testing purposes). The crucial point is that, for all of these purposes, the code of the algorithm need not be modified (the source need not even be available).&lt;br /&gt;
&lt;br /&gt;
== Correctness ==&lt;br /&gt;
&lt;br /&gt;
An algorithm is correct if three conditions are fulfilled for every feasible input:&lt;br /&gt;
# All instructions are '''well-defined'''. For example, instructions that divide by zero, exceed the range of a numerical type (''overflow''), access an array component outside the array's index range, or access an attribute or method of a '''void''' / '''null''' pointer are ill-defined.&lt;br /&gt;
# Each loop and recursion '''terminates''', that is, the break condition is fulfilled after a finite number of iterative / recursive steps. In case of a recursion with more than one recursive call inside an execution of the recursive routine, this means that every branch of the recursion tree must reach the break condition after a finite number of recursive descents.&lt;br /&gt;
# If the given input admits no feasible solution, this information is delivered by the algorithm. Otherwise, the algorithm delivers a feasible output.&lt;br /&gt;
&lt;br /&gt;
== Invariant ==&lt;br /&gt;
&lt;br /&gt;
# The '''invariant''' of a loop consists of all assertions that are true immediately before the first iteration, immediately after the last iteration, and between two iterations. These assertions are called the '''invariant assertions''' of this loop.&lt;br /&gt;
# The '''invariant''' of a recursion consists of all assertions that are fulfilled immediately after each recursive call. These assertions are called the '''invariant assertions''' of this recursion.&lt;br /&gt;
&lt;br /&gt;
'''Remarks:'''&lt;br /&gt;
# If an algorithm only accesses its input and its own local data, the invariant assertions are assertions about its input and these local data and nothing else.&lt;br /&gt;
# The invariant of an algorithm has a specific function in the [[#Correctness proofs|correctness proof]] for the algorithm. Typically, in a documentation, only those invariant assertions are stated that are relevant for this purpose. &lt;br /&gt;
# The invariant is also essential for testing purposes: a [http://en.wikipedia.org/wiki/White-box_testing white-box test] of a loop/recursion should definitely check maintenance of all relevant invariant assertions.&lt;br /&gt;
# Last but not least, the invariant is the very &amp;quot;essence&amp;quot; of the algorithmic idea; deeply understanding the algorithm amounts to understanding the invariant.&lt;br /&gt;
&lt;br /&gt;
== Variant ==&lt;br /&gt;
&lt;br /&gt;
# The '''variant''' of a loop consists of all differences between the state immediately before an iteration and the state immediately after that iteration (typically, but not exclusively, the values of integral loop variables).&lt;br /&gt;
# The '''variant''' of a recursion consists of all differences between the input of a recursive call &amp;lt;math&amp;gt;X&amp;lt;/math&amp;gt; and the inputs of all recursive calls that are directly called in &amp;lt;math&amp;gt;X&amp;lt;/math&amp;gt; (typically, but not exclusively, some integral measure of the size of the input).&lt;br /&gt;
&lt;br /&gt;
'''Remarks:'''&lt;br /&gt;
# If an algorithm only accesses its input and its own local data, the variant describes changes to the contents of its input and these local data, and nothing else.&lt;br /&gt;
# The variant of an algorithm has a specific function in the [[#Correctness proofs|correctness proof]] for the algorithm. Typically, in a documentation, only those variant assertions are stated that are relevant for this purpose. &lt;br /&gt;
# The variant is also essential for testing purposes: a [http://en.wikipedia.org/wiki/White-box_testing white-box test] of a loop/recursion should definitely check all relevant changes.&lt;br /&gt;
&lt;br /&gt;
== Correctness proofs ==&lt;br /&gt;
&lt;br /&gt;
# Assuming the variant and the invariant have been proved:&lt;br /&gt;
## The variant implies that the break condition will be fulfilled,&lt;br /&gt;
### in case of a loop: after a finite number of iterations;&lt;br /&gt;
### in case of a recursion: after a finite number of recursive calls in each branch of the recursion tree.&lt;br /&gt;
## Correctness of the output follows from what the invariant says about the state immediately after the last iteration.&lt;br /&gt;
# Proving the invariant amounts to an induction on the number of iterations performed so far / the recursion parameter over which the variant of the recursion is defined.&lt;br /&gt;
## The invariant is the induction hypothesis.&lt;br /&gt;
## Proving the induction basis amounts to proving that the preprocessing (initialization) establishes the invariant.&lt;br /&gt;
## Proving the induction step amounts to proving that an iteration / a recursive call maintains the invariant.&lt;br /&gt;
&lt;br /&gt;
'''Remarks:'''&lt;br /&gt;
# In a nutshell, the variant proves termination, and the invariant proves that the output is correct.&lt;br /&gt;
# Typically, well-definedness of all operations is not considered explicitly in correctness proofs. The reason is that well-definedness is non-obvious in rare cases only.&lt;br /&gt;
# In many cases, correctness of the output follows immediately from what the invariant says about the state immediately after the last iteration; in other cases, additional arguments are necessary to prove correctness of the output.&lt;br /&gt;
# The variant is also essential to estimate, asymptotically, the number of iterations / the recursion depth.&lt;br /&gt;
&lt;br /&gt;
== Induction in case of a recursion ==&lt;br /&gt;
In principle, there are two, in a sense mutually opposite, ways to define the induction. Typically, only one of these two ways is viable:&lt;br /&gt;
# The depth of a recursive call is the induction variable; the original call to this subroutine has to ensure the induction basis; the induction step has to be ensured by every descent in the recursion tree. Example: [[Binary search|binary search]].&lt;br /&gt;
# The induction variable is some parameter of the input. The invariant simply says that the output of a recursive call is correct. In a descent in the recursion tree, that parameter must strictly decrease. The recursion anchor must ensure the induction basis. Examples: [[Mergesort|mergesort]] and [[Quicksort|quicksort]]; in these examples, the size of the sequence is the induction variable.&lt;/div&gt;</summary>
		<author><name>Weihe</name></author>
	</entry>
	<entry>
		<id>https://wiki.algo.informatik.tu-darmstadt.de/index.php?title=Algorithms_and_correctness&amp;diff=3839</id>
		<title>Algorithms and correctness</title>
		<link rel="alternate" type="text/html" href="https://wiki.algo.informatik.tu-darmstadt.de/index.php?title=Algorithms_and_correctness&amp;diff=3839"/>
		<updated>2016-04-27T06:35:09Z</updated>

		<summary type="html">&lt;p&gt;Weihe: /* Induction in case of a recursion */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Algorithmic problem ==&lt;br /&gt;
&lt;br /&gt;
An algorithmic problem is described by:&lt;br /&gt;
# a set of feasible '''inputs''' (a.k.a. '''instances''');&lt;br /&gt;
# for each input a set of '''outputs''' (a.k.a. '''feasible solutions''' or '''solutions''', for short), which may be empty;&lt;br /&gt;
# optionally, an '''objective function''', which assigns a real number to each feasible solution (the ''quality'' of the solution). If an objective function is specified, it is also specified whether the objective function is to be ''minimized'' or to be ''maximized''.&lt;br /&gt;
&lt;br /&gt;
'''Remark:'''&lt;br /&gt;
Typically, sets such as the set of all feasible inputs to a problem and the set of all feasible outputs to an input are given by a [https://en.wikipedia.org/wiki/Genus%E2%80%93differentia_definition genus-differentia definition], which is an example of [https://en.wikipedia.org/wiki/Intensional_definition intensional definitions]. More specifically, an [https://en.wikipedia.org/wiki/Abstract_data_type abstract data type] is given along with the restrictions that must be fulfilled by feasible inputs and outputs, respectively. Sometimes, the abstract data type is called the set of all inputs / outputs, and the members of the respective set that fulfill the restrictions are then the ''feasible'' inputs / outputs.&lt;br /&gt;
&lt;br /&gt;
== Instructions, operations and subroutines ==&lt;br /&gt;
&lt;br /&gt;
[http://en.wikipedia.org/wiki/Imperative_programming Imperative programming] languages offer possibilities to specify '''instructions''' (a.k.a. '''statements'''). An instruction specifies a sequence of '''(machine) operations'''; executing the instruction on a machine means that the machine performs this sequence of operations. Instructions may be bundled as '''subroutines''' (a.k.a. '''procedures''', '''functions''', '''methods''').&lt;br /&gt;
&lt;br /&gt;
== (Source) programs and processes ==&lt;br /&gt;
&lt;br /&gt;
A '''program''' is a sequence of instructions in some programming language. A program written in a higher programming language, which is to be compiled or interpreted, is often called a '''source program'''. A '''process''' means the execution of a program on a real or virtual machine.&lt;br /&gt;
&lt;br /&gt;
== Termination ==&lt;br /&gt;
&lt;br /&gt;
# A loop has at least one '''break condition'''; this is a part of the (source) program of the loop. '''Termination''' of a loop refers to the process and means that the process evaluates the break condition and this evaluation yields '''true'''.&lt;br /&gt;
# '''Termination''' of a subroutine means that the process runs into an unconditional return-statement or, alternatively, runs into a conditional return-statement and this condition yields '''true''' (common high-level languages only provide unconditional return-statements; a return statement at the end of a void-subroutine may be implicit in many languages).&lt;br /&gt;
# '''Termination''' of a recursion means that every branch of the recursion has a finite depth, that is, runs into a recursive call that does not call the recursive subroutine any further.&lt;br /&gt;
&lt;br /&gt;
== Algorithm ==&lt;br /&gt;
&lt;br /&gt;
# An algorithm is associated with an [[#Algorithmic problem|algorithmic problem]].&lt;br /&gt;
# An algorithm is an abstract description of a process and can be formulated (a.k.a. '''implemented''') as a subroutine. This subroutine is required to compute some feasible output for any given input of the associated algorithmic problem.&lt;br /&gt;
# If an objective function is given, the objective function value of the generated solution is a criterion for the quality of an algorithm. More specifically:&lt;br /&gt;
## in case of ''minimization'': a low objective function value is favored;&lt;br /&gt;
## in case of ''maximization'': a high objective function value is favored.&lt;br /&gt;
&lt;br /&gt;
== Iterative and recursive algorithms ==&lt;br /&gt;
&lt;br /&gt;
# Basically, a non-trivial algorithm is a loop or recursion plus some '''preprocessing''' (a.k.a. '''initialization''') and/or some '''postprocessing'''.&lt;br /&gt;
# If an algorithm consists of two or more loops/recursions that are strictly disjoint parts of an implementation of the algorithm (and thus executed strictly after each other), it may be viewed as two or more algorithms that are executed after each other. Therefore, without loss of generality, an algorithm may be viewed as ''one'' loop or ''one'' recursion plus some pre/postprocessing.&lt;br /&gt;
# An iteration of a loop may contain another loop or recursion. Analogously, a recursive call may contain another loop or recursion. We say that a loop/recursion inside another loop/recursion is '''nested''' or the '''inner''' loop/recursion. Correspondingly, the '''nesting''' loop/recursion is the '''outer''' one.&lt;br /&gt;
&lt;br /&gt;
'''Remarks:'''&lt;br /&gt;
# Clearly, a loop may be transformed into a recursion and vice versa. So, every algorithm may be formulated either as a loop or as a recursion. However, in most cases, one of these two options looks simpler and &amp;quot;more natural&amp;quot; than the other one.&lt;br /&gt;
# Formulating an algorithm as a loop might be favorable in many cases because a loop allows more control than a recursion. More specifically, a loop may be implemented as an [http://en.wikipedia.org/wiki/Iterator iterator] whose method for going one step forward implements the execution of one iteration of the loop. Such an implementation allows one to terminate the loop early, to suspend execution and resume execution later on, and to execute some additional instructions between two iterations (e.g. for visualization or for testing purposes). The crucial point is that, for all of these purposes, the code of the algorithm need not be modified (the source need not even be available).&lt;br /&gt;
&lt;br /&gt;
== Correctness ==&lt;br /&gt;
&lt;br /&gt;
An algorithm is correct if three conditions are fulfilled for every feasible input:&lt;br /&gt;
# All instructions are '''well-defined'''. For example, instructions that divide by zero, exceed the range of a numerical type (''overflow''), access an array component outside the array's index range, or access an attribute or method of a '''void''' / '''null''' pointer are ill-defined.&lt;br /&gt;
# Each loop and recursion '''terminates''', that is, the break condition is fulfilled after a finite number of iterative / recursive steps. In case of a recursion with more than one recursive call inside an execution of the recursive routine, this means that every branch of the recursion tree must reach the break condition after a finite number of recursive descents.&lt;br /&gt;
# If the given input admits no feasible solution, this information is delivered by the algorithm. Otherwise, the algorithm delivers a feasible output.&lt;br /&gt;
&lt;br /&gt;
== Invariant ==&lt;br /&gt;
&lt;br /&gt;
# The '''invariant''' of a loop consists of all assertions that are true immediately before the first iteration, immediately after the last iteration, and between two iterations. These assertions are called the '''invariant assertions''' of this loop.&lt;br /&gt;
# The '''invariant''' of a recursion consists of all assertions that are fulfilled immediately after each recursive call. These assertions are called the '''invariant assertions''' of this recursion.&lt;br /&gt;
&lt;br /&gt;
'''Remarks:'''&lt;br /&gt;
# If an algorithm only accesses its input and its own local data, the invariant assertions are assertions about its input and these local data and nothing else.&lt;br /&gt;
# The invariant of an algorithm has a specific function in the [[#Correctness proofs|correctness proof]] for the algorithm. Typically, in a documentation, only those invariant assertions are stated that are relevant for this purpose. &lt;br /&gt;
# The invariant is also essential for testing purposes: a [http://en.wikipedia.org/wiki/White-box_testing white-box test] of a loop/recursion should definitely check maintenance of all relevant invariant assertions.&lt;br /&gt;
# Last but not least, the invariant is the very &amp;quot;essence&amp;quot; of the algorithmic idea; deeply understanding the algorithm amounts to understanding the invariant.&lt;br /&gt;
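A hypothetical sketch of a loop invariant (the function and variable names are illustrative): the invariant assertion is checked before the first iteration and after every iteration, exactly as defined above.&lt;br /&gt;

```python
def running_sum(values):
    partial = 0
    i = 0
    # Invariant assertion: partial == sum of the first i values.
    assert partial == sum(values[:i])      # true before the first iteration
    while i < len(values):
        partial += values[i]
        i += 1
        assert partial == sum(values[:i])  # maintained after every iteration
    # After the last iteration, the invariant with i == len(values)
    # yields correctness of the output.
    return partial
```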
&lt;br /&gt;
== Variant ==&lt;br /&gt;
&lt;br /&gt;
# The '''variant''' of a loop consists of all differences between the state immediately before an iteration and the state immediately after that iteration (typically, but not exclusively, the values of integral loop variables).&lt;br /&gt;
# The '''variant''' of a recursion consists of all differences between the input of a recursive call &amp;lt;math&amp;gt;X&amp;lt;/math&amp;gt; and the inputs of all recursive calls that are directly called in &amp;lt;math&amp;gt;X&amp;lt;/math&amp;gt; (typically, but not exclusively, some integral measure of the size of the input).&lt;br /&gt;
&lt;br /&gt;
'''Remarks:'''&lt;br /&gt;
# If an algorithm only accesses its input and its own local data, the variant describes changes to its input and these local data and nothing else.&lt;br /&gt;
# The variant of an algorithm has a specific function in the [[#Correctness proofs|correctness proof]] for the algorithm. Typically, in a documentation, only those variant assertions are stated that are relevant for this purpose. &lt;br /&gt;
# The variant is also essential for testing purposes: a [http://en.wikipedia.org/wiki/White-box_testing white-box test] of a loop/recursion should definitely check all relevant changes.&lt;br /&gt;
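A hypothetical sketch of a variant (Euclid's algorithm is used here purely as an illustration): the integral measure b strictly decreases in every iteration while staying nonnegative, which is exactly what a white-box test of the variant should check.&lt;br /&gt;

```python
def gcd(a, b):
    """Greatest common divisor of two nonnegative integers."""
    while b != 0:
        old_b = b
        a, b = b, a % b
        # The variant: b strictly decreases and remains nonnegative,
        # so the break condition b == 0 is reached after finitely many steps.
        assert 0 <= b < old_b
    return a
```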
&lt;br /&gt;
== Correctness proofs ==&lt;br /&gt;
&lt;br /&gt;
# Assuming the algorithm is correct:&lt;br /&gt;
## The variant implies that the break condition will be fulfilled,&lt;br /&gt;
### in case of a loop: after a finite number of iterations;&lt;br /&gt;
### in case of a recursion: after a finite number of recursive calls in each branch of the recursion tree.&lt;br /&gt;
## Correctness of the output follows from what the invariant says about the state immediately after the last iteration.&lt;br /&gt;
# Proving the invariant amounts to an induction on the number of iterations performed so far / the recursion parameter over which the variant of the recursion is defined.&lt;br /&gt;
## The invariant is the induction hypothesis.&lt;br /&gt;
## Proving the induction basis amounts to proving that the preprocessing (initialization) establishes the invariant.&lt;br /&gt;
## Proving the induction step amounts to proving that an iteration / a recursive call maintains the invariant.&lt;br /&gt;
&lt;br /&gt;
'''Remarks:'''&lt;br /&gt;
# In a nutshell, the variant proves termination, and the invariant proves that the output is correct.&lt;br /&gt;
# Typically, well-definedness of all operations is not considered explicitly in correctness proofs. The reason is that well-definedness is non-obvious in rare cases only.&lt;br /&gt;
# In many cases, correctness of the output follows immediately from what the invariant says about the state immediately after the last iteration; in other cases, additional arguments are necessary to prove correctness of the output.&lt;br /&gt;
# The variant is also essential to estimate, asymptotically, the number of iterations / the recursion depth.&lt;br /&gt;
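The two ingredients can be combined in one hypothetical sketch of binary search (names are illustrative): the invariant &amp;quot;if x is present, it lies in values[lo:hi]&amp;quot; proves correctness of the output, and the variant &amp;quot;hi - lo strictly decreases&amp;quot; proves termination and bounds the number of iterations.&lt;br /&gt;

```python
def binary_search(values, x):
    """Return an index of x in the sorted list values, or None if absent."""
    lo, hi = 0, len(values)
    while lo < hi:
        old_width = hi - lo
        mid = (lo + hi) // 2
        if values[mid] == x:
            return mid
        elif values[mid] < x:
            lo = mid + 1   # invariant maintained: x cannot lie left of mid
        else:
            hi = mid       # invariant maintained: x cannot lie right of mid
        assert hi - lo < old_width  # the variant: the search window shrinks
    # Break condition plus invariant: the window is empty, so x is absent.
    return None
```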
&lt;br /&gt;
== Induction in case of a recursion ==&lt;br /&gt;
In principle, there are two ways, in a sense mutually opposite, to define the induction. Typically, only one of these two ways is viable:&lt;br /&gt;
# The depth of a recursive call is the induction variable; the original call to this subroutine has to ensure the induction basis; the induction step has to be ensured by every descent in the recursion tree. Example: [[Binary search|binary search]].&lt;br /&gt;
# The induction variable is some parameter of the input. The invariant simply says that the output of a recursive call is correct. In a descent in the recursion tree, that parameter must strictly decrease. The recursion anchor must ensure the induction basis. Example: [[Mergesort|mergesort]]; the size of the sequence is the induction variable.&lt;/div&gt;</summary>
		<author><name>Weihe</name></author>
	</entry>
	<entry>
		<id>https://wiki.algo.informatik.tu-darmstadt.de/index.php?title=Algorithms_and_correctness&amp;diff=3838</id>
		<title>Algorithms and correctness</title>
		<link rel="alternate" type="text/html" href="https://wiki.algo.informatik.tu-darmstadt.de/index.php?title=Algorithms_and_correctness&amp;diff=3838"/>
		<updated>2016-04-27T06:34:42Z</updated>

		<summary type="html">&lt;p&gt;Weihe: /* Induction in case of a recursion */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Algorithmic problem ==&lt;br /&gt;
&lt;br /&gt;
An algorithmic problem is described by:&lt;br /&gt;
# a set of feasible '''inputs''' (a.k.a. '''instances''');&lt;br /&gt;
# for each input a set of '''outputs''' (a.k.a. '''feasible solutions''' or '''solutions''', for short), which may be empty;&lt;br /&gt;
# optionally, an '''objective function''', which assigns a real number to each feasible solution (the ''quality'' of the solution). If an objective function is specified, it is also specified whether the objective function is to be ''minimized'' or to be ''maximized''.&lt;br /&gt;
&lt;br /&gt;
'''Remark:'''&lt;br /&gt;
Typically, sets such as the set of all feasible inputs to a problem and the set of all feasible outputs to an input are given by a [https://en.wikipedia.org/wiki/Genus%E2%80%93differentia_definition genus-differentia definition], which is an example of [https://en.wikipedia.org/wiki/Intensional_definition intensional definitions]. More specifically, an [https://en.wikipedia.org/wiki/Abstract_data_type abstract data type] is given along with the restrictions that must be fulfilled by feasible inputs and outputs, respectively. Sometimes, the abstract data type is called the set of all inputs / outputs, and the members of the respective set that fulfill the restrictions are then the ''feasible'' inputs / outputs.&lt;br /&gt;
&lt;br /&gt;
== Instructions, operations and subroutines ==&lt;br /&gt;
&lt;br /&gt;
[http://en.wikipedia.org/wiki/Imperative_programming Imperative programming] languages offer possibilities to specify '''instructions''' (a.k.a. '''statements'''). An instruction specifies a sequence of '''(machine) operations'''; executing the instruction on a machine means that the machine performs this sequence of operations. Instructions may be bundled as '''subroutines''' (a.k.a. '''procedures''', '''functions''', '''methods''').&lt;br /&gt;
&lt;br /&gt;
== (Source) programs and processes ==&lt;br /&gt;
&lt;br /&gt;
A '''program''' is a sequence of instructions in some programming language. A program written in a high-level programming language, which is to be compiled or interpreted, is often called a '''source program'''. A '''process''' means the execution of a program on a real or virtual machine.&lt;br /&gt;
&lt;br /&gt;
== Termination ==&lt;br /&gt;
&lt;br /&gt;
# A loop has at least one '''break condition'''; this is a part of the (source) program of the loop. '''Termination''' of a loop refers to the process and means that the process evaluates the break condition and this evaluation yields '''true'''.&lt;br /&gt;
# '''Termination''' of a subroutine means that the process runs into an unconditional return-statement or, alternatively, runs into a conditional return-statement and this condition yields '''true''' (common high-level languages only provide unconditional return-statements; a return statement at the end of a void-subroutine may be implicit in many languages).&lt;br /&gt;
# '''Termination''' of a recursion means that every branch of the recursion has a finite depth, that is, runs into a recursive call that does not call the recursive subroutine any further.&lt;br /&gt;
&lt;br /&gt;
== Algorithm ==&lt;br /&gt;
&lt;br /&gt;
# An algorithm is associated with an [[#Algorithmic problem|algorithmic problem]].&lt;br /&gt;
# An algorithm is an abstract description of a process and can be formulated (a.k.a. '''implemented''') as a subroutine. This subroutine is required to compute some feasible output for any given input of the associated algorithmic problem.&lt;br /&gt;
# If an objective function is given, the objective function value of the generated solution is a criterion for the quality of an algorithm. More specifically:&lt;br /&gt;
## in case of ''minimization'': a low objective function value is favored;&lt;br /&gt;
## in case of ''maximization'': a high objective function value is favored.&lt;br /&gt;
&lt;br /&gt;
== Iterative and recursive algorithms ==&lt;br /&gt;
&lt;br /&gt;
# Basically, a non-trivial algorithm is a loop or recursion plus some '''preprocessing''' (a.k.a. '''initialization''') and/or some '''postprocessing'''.&lt;br /&gt;
# If an algorithm consists of two or more loops/recursions that are strictly disjoint parts of an implementation of the algorithm (and thus executed strictly after each other), it may be viewed as two or more algorithms that are executed after each other. Therefore, without loss of generality, an algorithm may be viewed as ''one'' loop or ''one'' recursion plus some pre/postprocessing.&lt;br /&gt;
# An iteration of a loop may contain another loop or recursion. Analogously, a recursive call may contain another loop or recursion. We say that a loop/recursion inside another loop/recursion is '''nested''' or the '''inner''' loop/recursion. Correspondingly, the '''nesting''' loop/recursion is the '''outer''' one.&lt;br /&gt;
&lt;br /&gt;
'''Remarks:'''&lt;br /&gt;
# Clearly, a loop may be transformed into a recursion and vice versa. So, every algorithm may be formulated either as a loop or as a recursion. However, in most cases, one of these two options looks simpler and &amp;quot;more natural&amp;quot; than the other one.&lt;br /&gt;
# Formulating an algorithm as a loop might be favorable in many cases because a loop allows more control than a recursion. More specifically, a loop may be implemented as an [http://en.wikipedia.org/wiki/Iterator iterator] whose method for going one step forward implements the execution of one iteration of the loop. Such an implementation allows one to terminate the loop early, to suspend execution and resume execution later on, and to execute some additional instructions between two iterations (e.g. for visualization or for testing purposes). The crucial point is that, for all of these purposes, the code of the algorithm need not be modified (the source need not even be available).&lt;br /&gt;
&lt;br /&gt;
== Correctness ==&lt;br /&gt;
&lt;br /&gt;
An algorithm is correct if three conditions are fulfilled for every feasible input:&lt;br /&gt;
# All instructions are '''well-defined'''. For example, instructions that divide by zero, exceed the range of a numerical type (''overflow''), access an array component outside the array's index range, or access an attribute or method of a '''void''' / '''null''' pointer are ill-defined.&lt;br /&gt;
# Each loop and recursion '''terminates''', that is, the break condition is fulfilled after a finite number of iterative / recursive steps. In case of a recursion with more than one recursive call inside an execution of the recursive routine, this means that every branch of the recursion tree must reach the break condition after a finite number of recursive descents.&lt;br /&gt;
# If the given input admits no feasible solution, this information is delivered by the algorithm. Otherwise, the algorithm delivers a feasible output.&lt;br /&gt;
&lt;br /&gt;
== Invariant ==&lt;br /&gt;
&lt;br /&gt;
# The '''invariant''' of a loop consists of all assertions that are true immediately before the first iteration, immediately after the last iteration, and between two iterations. These assertions are called the '''invariant assertions''' of this loop.&lt;br /&gt;
# The '''invariant''' of a recursion consists of all assertions that are fulfilled immediately after each recursive call. These assertions are called the '''invariant assertions''' of this recursion.&lt;br /&gt;
&lt;br /&gt;
'''Remarks:'''&lt;br /&gt;
# If an algorithm only accesses its input and its own local data, the invariant assertions are assertions about its input and these local data and nothing else.&lt;br /&gt;
# The invariant of an algorithm has a specific function in the [[#Correctness proofs|correctness proof]] for the algorithm. Typically, in a documentation, only those invariant assertions are stated that are relevant for this purpose. &lt;br /&gt;
# The invariant is also essential for testing purposes: a [http://en.wikipedia.org/wiki/White-box_testing white-box test] of a loop/recursion should definitely check maintenance of all relevant invariant assertions.&lt;br /&gt;
# Last but not least, the invariant is the very &amp;quot;essence&amp;quot; of the algorithmic idea; deeply understanding the algorithm amounts to understanding the invariant.&lt;br /&gt;
&lt;br /&gt;
== Variant ==&lt;br /&gt;
&lt;br /&gt;
# The '''variant''' of a loop consists of all differences between the state immediately before an iteration and the state immediately after that iteration (typically, but not exclusively, the values of integral loop variables).&lt;br /&gt;
# The '''variant''' of a recursion consists of all differences between the input of a recursive call &amp;lt;math&amp;gt;X&amp;lt;/math&amp;gt; and the inputs of all recursive calls that are directly called in &amp;lt;math&amp;gt;X&amp;lt;/math&amp;gt; (typically, but not exclusively, some integral measure of the size of the input).&lt;br /&gt;
&lt;br /&gt;
'''Remarks:'''&lt;br /&gt;
# If an algorithm only accesses its input and its own local data, the variant describes changes to its input and these local data and nothing else.&lt;br /&gt;
# The variant of an algorithm has a specific function in the [[#Correctness proofs|correctness proof]] for the algorithm. Typically, in a documentation, only those variant assertions are stated that are relevant for this purpose. &lt;br /&gt;
# The variant is also essential for testing purposes: a [http://en.wikipedia.org/wiki/White-box_testing white-box test] of a loop/recursion should definitely check all relevant changes.&lt;br /&gt;
&lt;br /&gt;
== Correctness proofs ==&lt;br /&gt;
&lt;br /&gt;
# Assuming the algorithm is correct:&lt;br /&gt;
## The variant implies that the break condition will be fulfilled,&lt;br /&gt;
### in case of a loop: after a finite number of iterations;&lt;br /&gt;
### in case of a recursion: after a finite number of recursive calls in each branch of the recursion tree.&lt;br /&gt;
## Correctness of the output follows from what the invariant says about the state immediately after the last iteration.&lt;br /&gt;
# Proving the invariant amounts to an induction on the number of iterations performed so far / the recursion parameter over which the variant of the recursion is defined.&lt;br /&gt;
## The invariant is the induction hypothesis.&lt;br /&gt;
## Proving the induction basis amounts to proving that the preprocessing (initialization) establishes the invariant.&lt;br /&gt;
## Proving the induction step amounts to proving that an iteration / a recursive call maintains the invariant.&lt;br /&gt;
&lt;br /&gt;
'''Remarks:'''&lt;br /&gt;
# In a nutshell, the variant proves termination, and the invariant proves that the output is correct.&lt;br /&gt;
# Typically, well-definedness of all operations is not considered explicitly in correctness proofs. The reason is that well-definedness is non-obvious in rare cases only.&lt;br /&gt;
# In many cases, correctness of the output follows immediately from what the invariant says about the state immediately after the last iteration; in other cases, additional arguments are necessary to prove correctness of the output.&lt;br /&gt;
# The variant is also essential to estimate, asymptotically, the number of iterations / the recursion depth.&lt;br /&gt;
&lt;br /&gt;
== Induction in case of a recursion ==&lt;br /&gt;
In principle, there are two ways, in a sense mutually opposite, to define the induction. Typically, only one of these two ways is viable:&lt;br /&gt;
# The depth of a recursive call is the induction variable; the original call to this subroutine has to ensure the induction basis; the induction step has to be ensured by every descent in the recursion tree. Example: [[Binary search|binary search]].&lt;br /&gt;
# The induction variable is some parameter of the input. The invariant simply says that the output of a recursive call is correct. In a descent in the recursion tree, that parameter must strictly decrease. The recursion anchor must ensure the induction basis. Example: [[Mergesort|mergesort]]; the size of the sequence is the induction variable.&lt;/div&gt;</summary>
		<author><name>Weihe</name></author>
	</entry>
	<entry>
		<id>https://wiki.algo.informatik.tu-darmstadt.de/index.php?title=Algorithms_and_correctness&amp;diff=3837</id>
		<title>Algorithms and correctness</title>
		<link rel="alternate" type="text/html" href="https://wiki.algo.informatik.tu-darmstadt.de/index.php?title=Algorithms_and_correctness&amp;diff=3837"/>
		<updated>2016-04-27T06:30:10Z</updated>

		<summary type="html">&lt;p&gt;Weihe: /* Induction in case of a recursion */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Algorithmic problem ==&lt;br /&gt;
&lt;br /&gt;
An algorithmic problem is described by:&lt;br /&gt;
# a set of feasible '''inputs''' (a.k.a. '''instances''');&lt;br /&gt;
# for each input a set of '''outputs''' (a.k.a. '''feasible solutions''' or '''solutions''', for short), which may be empty;&lt;br /&gt;
# optionally, an '''objective function''', which assigns a real number to each feasible solution (the ''quality'' of the solution). If an objective function is specified, it is also specified whether the objective function is to be ''minimized'' or to be ''maximized''.&lt;br /&gt;
&lt;br /&gt;
'''Remark:'''&lt;br /&gt;
Typically, sets such as the set of all feasible inputs to a problem and the set of all feasible outputs to an input are given by a [https://en.wikipedia.org/wiki/Genus%E2%80%93differentia_definition genus-differentia definition], which is an example of [https://en.wikipedia.org/wiki/Intensional_definition intensional definitions]. More specifically, an [https://en.wikipedia.org/wiki/Abstract_data_type abstract data type] is given along with the restrictions that must be fulfilled by feasible inputs and outputs, respectively. Sometimes, the abstract data type is called the set of all inputs / outputs, and the members of the respective set that fulfill the restrictions are then the ''feasible'' inputs / outputs.&lt;br /&gt;
&lt;br /&gt;
== Instructions, operations and subroutines ==&lt;br /&gt;
&lt;br /&gt;
[http://en.wikipedia.org/wiki/Imperative_programming Imperative programming] languages offer possibilities to specify '''instructions''' (a.k.a. '''statements'''). An instruction specifies a sequence of '''(machine) operations'''; executing the instruction on a machine means that the machine performs this sequence of operations. Instructions may be bundled as '''subroutines''' (a.k.a. '''procedures''', '''functions''', '''methods''').&lt;br /&gt;
&lt;br /&gt;
== (Source) programs and processes ==&lt;br /&gt;
&lt;br /&gt;
A '''program''' is a sequence of instructions in some programming language. A program written in a high-level programming language, which is to be compiled or interpreted, is often called a '''source program'''. A '''process''' means the execution of a program on a real or virtual machine.&lt;br /&gt;
&lt;br /&gt;
== Termination ==&lt;br /&gt;
&lt;br /&gt;
# A loop has at least one '''break condition'''; this is a part of the (source) program of the loop. '''Termination''' of a loop refers to the process and means that the process evaluates the break condition and this evaluation yields '''true'''.&lt;br /&gt;
# '''Termination''' of a subroutine means that the process runs into an unconditional return-statement or, alternatively, runs into a conditional return-statement and this condition yields '''true''' (common high-level languages only provide unconditional return-statements; a return statement at the end of a void-subroutine may be implicit in many languages).&lt;br /&gt;
# '''Termination''' of a recursion means that every branch of the recursion has a finite depth, that is, runs into a recursive call that does not call the recursive subroutine any further.&lt;br /&gt;
&lt;br /&gt;
== Algorithm ==&lt;br /&gt;
&lt;br /&gt;
# An algorithm is associated with an [[#Algorithmic problem|algorithmic problem]].&lt;br /&gt;
# An algorithm is an abstract description of a process and can be formulated (a.k.a. '''implemented''') as a subroutine. This subroutine is required to compute some feasible output for any given input of the associated algorithmic problem.&lt;br /&gt;
# If an objective function is given, the objective function value of the generated solution is a criterion for the quality of an algorithm. More specifically:&lt;br /&gt;
## in case of ''minimization'': a low objective function value is favored;&lt;br /&gt;
## in case of ''maximization'': a high objective function value is favored.&lt;br /&gt;
&lt;br /&gt;
== Iterative and recursive algorithms ==&lt;br /&gt;
&lt;br /&gt;
# Basically, a non-trivial algorithm is a loop or recursion plus some '''preprocessing''' (a.k.a. '''initialization''') and/or some '''postprocessing'''.&lt;br /&gt;
# If an algorithm consists of two or more loops/recursions that are strictly disjoint parts of an implementation of the algorithm (and thus executed strictly after each other), it may be viewed as two or more algorithms that are executed after each other. Therefore, without loss of generality, an algorithm may be viewed as ''one'' loop or ''one'' recursion plus some pre/postprocessing.&lt;br /&gt;
# An iteration of a loop may contain another loop or recursion. Analogously, a recursive call may contain another loop or recursion. We say that a loop/recursion inside another loop/recursion is '''nested''' or the '''inner''' loop/recursion. Correspondingly, the '''nesting''' loop/recursion is the '''outer''' one.&lt;br /&gt;
&lt;br /&gt;
'''Remarks:'''&lt;br /&gt;
# Clearly, a loop may be transformed into a recursion and vice versa. So, every algorithm may be formulated either as a loop or as a recursion. However, in most cases, one of these two options looks simpler and &amp;quot;more natural&amp;quot; than the other one.&lt;br /&gt;
# Formulating an algorithm as a loop might be favorable in many cases because a loop allows more control than a recursion. More specifically, a loop may be implemented as an [http://en.wikipedia.org/wiki/Iterator iterator] whose method for going one step forward implements the execution of one iteration of the loop. Such an implementation allows one to terminate the loop early, to suspend execution and resume execution later on, and to execute some additional instructions between two iterations (e.g. for visualization or for testing purposes). The crucial point is that, for all of these purposes, the code of the algorithm need not be modified (the source need not even be available).&lt;br /&gt;
&lt;br /&gt;
== Correctness ==&lt;br /&gt;
&lt;br /&gt;
An algorithm is correct if three conditions are fulfilled for every feasible input:&lt;br /&gt;
# All instructions are '''well-defined'''. For example, instructions that divide by zero, exceed the range of a numerical type (''overflow''), access an array component outside the array's index range, or access an attribute or method of a '''void''' / '''null''' pointer are ill-defined.&lt;br /&gt;
# Each loop and recursion '''terminates''', that is, the break condition is fulfilled after a finite number of iterative / recursive steps. In case of a recursion with more than one recursive call inside an execution of the recursive routine, this means that every branch of the recursion tree must reach the break condition after a finite number of recursive descents.&lt;br /&gt;
# If the given input admits no feasible solution, this information is delivered by the algorithm. Otherwise, the algorithm delivers a feasible output.&lt;br /&gt;
&lt;br /&gt;
== Invariant ==&lt;br /&gt;
&lt;br /&gt;
# The '''invariant''' of a loop consists of all assertions that are true immediately before the first iteration, immediately after the last iteration, and between two iterations. These assertions are called the '''invariant assertions''' of this loop.&lt;br /&gt;
# The '''invariant''' of a recursion consists of all assertions that are fulfilled immediately after each recursive call. These assertions are called the '''invariant assertions''' of this recursion.&lt;br /&gt;
&lt;br /&gt;
'''Remarks:'''&lt;br /&gt;
# If an algorithm only accesses its input and its own local data, the invariant assertions are assertions about its input and these local data and nothing else.&lt;br /&gt;
# The invariant of an algorithm has a specific function in the [[#Correctness proofs|correctness proof]] for the algorithm. Typically, in a documentation, only those invariant assertions are stated that are relevant for this purpose. &lt;br /&gt;
# The invariant is also essential for testing purposes: a [http://en.wikipedia.org/wiki/White-box_testing white-box test] of a loop/recursion should definitely check maintenance of all relevant invariant assertions.&lt;br /&gt;
# Last but not least, the invariant is the very &amp;quot;essence&amp;quot; of the algorithmic idea; deeply understanding the algorithm amounts to understanding the invariant.&lt;br /&gt;
&lt;br /&gt;
== Variant ==&lt;br /&gt;
&lt;br /&gt;
# The '''variant''' of a loop consists of all differences between the state immediately before an iteration and the state immediately after that iteration (typically, but not exclusively, the values of integral loop variables).&lt;br /&gt;
# The '''variant''' of a recursion consists of all differences between the input of a recursive call &amp;lt;math&amp;gt;X&amp;lt;/math&amp;gt; and the inputs of all recursive calls that are directly called in &amp;lt;math&amp;gt;X&amp;lt;/math&amp;gt; (typically, but not exclusively, some integral measure of the size of the input).&lt;br /&gt;
&lt;br /&gt;
'''Remarks:'''&lt;br /&gt;
# If an algorithm only accesses its input and its own local data, the variant describes changes to its input and these local data and nothing else.&lt;br /&gt;
# The variant of an algorithm has a specific function in the [[#Correctness proofs|correctness proof]] for the algorithm. Typically, in a documentation, only those variant assertions are stated that are relevant for this purpose. &lt;br /&gt;
# The variant is also essential for testing purposes: a [http://en.wikipedia.org/wiki/White-box_testing white-box test] of a loop/recursion should definitely check all relevant changes.&lt;br /&gt;
&lt;br /&gt;
== Correctness proofs ==&lt;br /&gt;
&lt;br /&gt;
# Assuming the algorithm is correct:&lt;br /&gt;
## The variant implies that the break condition will be fulfilled,&lt;br /&gt;
### in case of a loop: after a finite number of iterations;&lt;br /&gt;
### in case of a recursion: after a finite number of recursive calls in each branch of the recursion tree.&lt;br /&gt;
## Correctness of the output follows from what the invariant says about the state immediately after the last iteration.&lt;br /&gt;
# Proving the invariant amounts to an induction on the number of iterations performed so far / the recursion parameter over which the variant of the recursion is defined.&lt;br /&gt;
## The invariant is the induction hypothesis.&lt;br /&gt;
## Proving the induction basis amounts to proving that the preprocessing (initialization) establishes the invariant.&lt;br /&gt;
## Proving the induction step amounts to proving that an iteration / a recursive call maintains the invariant.&lt;br /&gt;
&lt;br /&gt;
'''Remarks:'''&lt;br /&gt;
# In a nutshell, the variant proves termination, and the invariant proves that the output is correct.&lt;br /&gt;
# Typically, well-definedness of all operations is not considered explicitly in correctness proofs. The reason is that well-definedness is non-obvious in rare cases only.&lt;br /&gt;
# In many cases, correctness of the output follows immediately from what the invariant says about the state immediately after the last iteration; in other cases, additional arguments are necessary to prove correctness of the output.&lt;br /&gt;
# The variant is also essential to estimate, asymptotically, the number of iterations / the recursion depth.&lt;br /&gt;
&lt;br /&gt;
== Induction in case of a recursion ==&lt;br /&gt;
In principle, there are two ways, in a sense mutually opposite, to define the induction. Typically, only one of these two ways is viable:&lt;br /&gt;
# The depth of a recursive call is the induction variable; the original call to this subroutine has to ensure the induction basis; the induction step has to be ensured by every descent in the recursion tree. Example: [[Binary search|binary search]].&lt;br /&gt;
&lt;br /&gt;
#&lt;/div&gt;</summary>
		<author><name>Weihe</name></author>
	</entry>
</feed>