Preflow-push

Abstract view

Algorithmic problem: max-flow problem (standard version)

Type of algorithm: loop.

Definition:

  1. For [math]\displaystyle{ a\in A }[/math], let [math]\displaystyle{ f(a) }[/math] be a real number (these values need not form a flow). For [math]\displaystyle{ v\in V\setminus\{s,t\} }[/math], the excess of [math]\displaystyle{ v }[/math] with respect to [math]\displaystyle{ f }[/math] is defined by [math]\displaystyle{ e_f(v):=\sum_{w:(w,v)\in A}f(w,v)-\sum_{w:(v,w)\in A}f(v,w) }[/math].
  2. A node [math]\displaystyle{ v\in V\setminus\{s,t\} }[/math] is called active if its excess is positive.
  3. Let [math]\displaystyle{ \Delta }[/math] denote the sum of the distance labels [math]\displaystyle{ d(v) }[/math] of all active nodes [math]\displaystyle{ v\in V\setminus\{s,t\} }[/math].
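
These definitions translate directly into code. The following minimal Python sketch is illustrative only: it assumes the flow values are stored in a dictionary flow keyed by arcs (pairs of nodes), the distance labels in a dictionary d, and the node set in nodes; these names are not part of the algorithm.

 def excess(flow, v):
     """e_f(v): total inflow minus total outflow of node v with respect to f."""
     inflow = sum(val for (x, y), val in flow.items() if y == v)
     outflow = sum(val for (x, y), val in flow.items() if x == v)
     return inflow - outflow
 
 def active_nodes(flow, nodes, s, t):
     """All nodes other than s and t whose excess is positive."""
     return {v for v in nodes if v not in (s, t) and excess(flow, v) > 0}
 
 def delta(flow, d, nodes, s, t):
     """Delta: the sum of the distance labels d(v) of all active nodes."""
     return sum(d[v] for v in active_nodes(flow, nodes, s, t))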

Invariant: Before and after each iteration:

  1. For each arc [math]\displaystyle{ a\in A }[/math], we have [math]\displaystyle{ 0\leq f(a)\leq u(a) }[/math].
  2. For each node [math]\displaystyle{ v\in V\setminus\{s,t\} }[/math], we have [math]\displaystyle{ e_f(v)\geq 0 }[/math].
  3. The node labels [math]\displaystyle{ d }[/math] form a valid distance labeling, and [math]\displaystyle{ d(s)=|V| }[/math].
  4. The currently active nodes are stored in a set [math]\displaystyle{ S }[/math].
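
For concreteness, here is a sketch of how the four points could be checked, reusing the helpers above. It assumes the usual definition of a valid distance labeling, namely [math]\displaystyle{ d(t)=0 }[/math] and [math]\displaystyle{ d(v)\leq d(w)+1 }[/math] for every arc [math]\displaystyle{ (v,w) }[/math] of the residual network, and that the capacities are stored in a dictionary u keyed by arcs.

 def residual_arcs(u, flow):
     """Arcs of the residual network: forward arcs with free capacity and
     reverse arcs of arcs carrying positive flow."""
     arcs = set()
     for (v, w), cap in u.items():
         if flow[(v, w)] < cap:
             arcs.add((v, w))
         if flow[(v, w)] > 0:
             arcs.add((w, v))
     return arcs
 
 def invariant_holds(u, flow, d, S, nodes, s, t):
     """Literal (not efficient) check of points 1-4 of the invariant."""
     point1 = all(0 <= flow[a] <= u[a] for a in u)
     point2 = all(excess(flow, v) >= 0 for v in nodes if v not in (s, t))
     point3 = (d[t] == 0 and d[s] == len(nodes)
               and all(d[v] <= d[w] + 1 for (v, w) in residual_arcs(u, flow)))
     point4 = (S == active_nodes(flow, nodes, s, t))
     return point1 and point2 and point3 and point4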

Variant: In each iteration, at least one of the following cases will take place:

  1. The label [math]\displaystyle{ d(v) }[/math] of at least one node [math]\displaystyle{ v\in V\setminus\{s,t\} }[/math] is increased.
  2. A saturating push is performed.
  3. The value of [math]\displaystyle{ \Delta }[/math] decreases (in case of a non-saturating push).

Break condition: [math]\displaystyle{ S=\emptyset }[/math].

Induction basis

Abstract view:

  1. For all arcs [math]\displaystyle{ a\in A }[/math], set [math]\displaystyle{ f(a):=0 }[/math].
  2. For each arc [math]\displaystyle{ (s,v)\in A }[/math], overwrite this value by [math]\displaystyle{ f(s,v):=u(s,v) }[/math].
  3. Compute a valid distance labeling [math]\displaystyle{ d }[/math], for example, the true distances from all nodes to [math]\displaystyle{ t }[/math] in the residual network.
  4. Set [math]\displaystyle{ d(s):=n }[/math], where [math]\displaystyle{ n:=|V| }[/math].

Proof: Obvious.
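
A minimal Python sketch of this initialization, reusing the helpers from the sketches above; adj[s] is an assumed list of the arcs leaving [math]\displaystyle{ s }[/math], and the backward breadth-first search is just one possible way to obtain a valid labeling.

 from collections import deque
 
 def initialize(u, adj, nodes, s, t):
     """Steps 1-4 of the induction basis (a sketch, not tuned for efficiency)."""
     n = len(nodes)
     flow = {a: 0 for a in u}              # step 1: zero flow on every arc
     for a in adj[s]:                      # step 2: saturate all arcs leaving s
         flow[a] = u[a]
     # Step 3: true distances to t in the residual network, by backward BFS from t;
     # the label n is valid for nodes that cannot reach t at all.
     d = {v: n for v in nodes}
     d[t] = 0
     queue = deque([t])
     while queue:
         w = queue.popleft()
         for (x, y) in residual_arcs(u, flow):
             if y == w and d[x] > d[w] + 1:
                 d[x] = d[w] + 1
                 queue.append(x)
     d[s] = n                              # step 4
     S = active_nodes(flow, nodes, s, t)   # the heads of the saturated source arcs
     return flow, d, S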

Induction step

Abstract view:

  1. Choose an active node [math]\displaystyle{ v }[/math] from [math]\displaystyle{ S }[/math].
  2. If there is at least one admissible arc leaving [math]\displaystyle{ v }[/math]:
    1. Choose one such arc [math]\displaystyle{ (v,w) }[/math], say.
    2. If [math]\displaystyle{ w\notin\{s,t\} }[/math] and [math]\displaystyle{ e_f(w)=0 }[/math], insert [math]\displaystyle{ w }[/math] into [math]\displaystyle{ S }[/math].
    3. Increase the flow over [math]\displaystyle{ (v,w) }[/math] by the minimum of [math]\displaystyle{ e_f(v) }[/math] and the residual capacity of [math]\displaystyle{ (v,w) }[/math].
    4. If [math]\displaystyle{ e_f(v)=0 }[/math] now, extract [math]\displaystyle{ v }[/math] from [math]\displaystyle{ S }[/math].
  3. Otherwise: increase [math]\displaystyle{ d(v) }[/math] to one more than the minimum of [math]\displaystyle{ d(w) }[/math] over all arcs [math]\displaystyle{ (v,w) }[/math] in the residual network.
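
One iteration of this induction step might look as follows in Python. This is a sketch under the same illustrative representation as above (capacities in u, flow values in flow, labels in d, active nodes in the set S, node set in nodes), reusing the excess helper; the rules for choosing the active node and the admissible arc are left unspecified, just as in the abstract view.

 def residual_capacity(u, flow, v, w):
     """Residual capacity of (v, w): free capacity of the forward arc
     plus the flow on the reverse arc, if these arcs exist."""
     cap = 0
     if (v, w) in u:
         cap += u[(v, w)] - flow[(v, w)]
     if (w, v) in u:
         cap += flow[(w, v)]
     return cap
 
 def push(u, flow, v, w, amount):
     """Send 'amount' units from v to w: first cancel flow on (w, v), then use (v, w)."""
     if (w, v) in u:
         back = min(amount, flow[(w, v)])
         flow[(w, v)] -= back
         amount -= back
     if amount > 0:
         flow[(v, w)] += amount
 
 def iterate(u, flow, d, S, nodes, s, t):
     """One iteration of the main loop: push over an admissible arc, or relabel."""
     v = next(iter(S))                                        # step 1: an active node
     # Residual arcs leaving v; an active node always has at least one,
     # because some arc must have carried flow into it.
     targets = [w for w in nodes if w != v and residual_capacity(u, flow, v, w) > 0]
     admissible = [w for w in targets if d[w] == d[v] - 1]
     if admissible:                                           # step 2
         w = admissible[0]                                    # step 2.1
         if w not in (s, t) and excess(flow, w) == 0:
             S.add(w)                                         # step 2.2
         amount = min(excess(flow, v), residual_capacity(u, flow, v, w))
         push(u, flow, v, w, amount)                          # step 2.3
         if excess(flow, v) == 0:
             S.discard(v)                                     # step 2.4
     else:                                                    # step 3: relabel
         d[v] = 1 + min(d[w] for w in targets)

Recomputing excesses and residual capacities from scratch keeps the sketch short; an actual implementation stores and updates these values incrementally.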

Remark: The preflow-push algorithm is also known as the push-relabel algorithm. The push operation is step 2.3; the relabel operation is step 3.

Implementation of step 2.1: Before and after each iteration, each node [math]\displaystyle{ v\in V\setminus\{s,t\} }[/math] has a current arc, which is either void or points to one of the outgoing arcs of [math]\displaystyle{ v }[/math]. Whenever the node label of [math]\displaystyle{ v }[/math] is reset (including initialization), the current arc is initialized as the first arc in the list of outgoing arcs of [math]\displaystyle{ v }[/math]. In each execution of step 2.1, the current arc is checked for admissibility; if it is not admissible, it is moved forward until either the end of the list is reached or an admissible arc is found. In the latter case, the condition in step 2 is fulfilled.
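
A sketch of this mechanism, with illustrative names that are not part of the algorithm: out_arcs[v] is an assumed fixed list of all arcs that may ever leave [math]\displaystyle{ v }[/math] in the residual network, and current[v] an index into that list, reset to 0 whenever [math]\displaystyle{ v }[/math] is relabelled. It reuses residual_capacity from the sketch above.

 def find_admissible(out_arcs, current, u, flow, d, v):
     """Advance the current arc of v until it is admissible or the list is exhausted.
     Returns an admissible arc (v, w), or None if v has to be relabelled (step 3)."""
     arcs = out_arcs[v]
     while current[v] < len(arcs):
         (_, w) = arcs[current[v]]
         if residual_capacity(u, flow, v, w) > 0 and d[w] == d[v] - 1:
             return (v, w)          # condition of step 2 is fulfilled
         current[v] += 1            # not admissible: move the current arc forward
     return None                    # end of the list reached

The current arc only moves forward between two relabels of [math]\displaystyle{ v }[/math]; an arc skipped as inadmissible cannot become admissible again before the next relabel of [math]\displaystyle{ v }[/math], because labels never decrease.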

Proof: Points 1, 2, and 4 of the invariant are obviously fulfilled. For point 3, note that a push over an admissible arc [math]\displaystyle{ (v,w) }[/math] may create the residual arc [math]\displaystyle{ (w,v) }[/math]; since [math]\displaystyle{ d(w)=d(v)-1 }[/math], this arc satisfies the validity condition [math]\displaystyle{ d(w)\leq d(v)+1 }[/math], so pushes preserve point 3. Otherwise, point 3 is affected by step 3 only; the minimal (most conservative) increase chosen there keeps [math]\displaystyle{ d(v)\leq d(w)+1 }[/math] for all arcs [math]\displaystyle{ (v,w) }[/math] leaving [math]\displaystyle{ v }[/math] in the residual network, and increasing [math]\displaystyle{ d(v) }[/math] cannot violate the condition for residual arcs entering [math]\displaystyle{ v }[/math].

To prove the variant, consider an iteration in which a push (step 2.3) is performed but the arc [math]\displaystyle{ (v,w) }[/math] is not saturated by it. Potentially, [math]\displaystyle{ w }[/math] becomes active. However, [math]\displaystyle{ v }[/math] definitely becomes inactive, because a non-saturating push sends the full excess [math]\displaystyle{ e_f(v) }[/math]. Now the variant follows from the fact that [math]\displaystyle{ d(w)=d(v)-1 }[/math] for an admissible arc [math]\displaystyle{ (v,w) }[/math].
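
Spelled out for case 3 of the variant: [math]\displaystyle{ v }[/math] leaves the set of active nodes and, at worst, [math]\displaystyle{ w }[/math] joins it, so

[math]\displaystyle{ \Delta_{\mathrm{new}}-\Delta_{\mathrm{old}}\;\leq\;-d(v)+d(w)\;=\;-d(v)+\bigl(d(v)-1\bigr)\;=\;-1. }[/math]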

It remains to show termination; this is proved by the following considerations.

Complexity

Statement: The asymptotic complexity is in [math]\displaystyle{ \mathcal{O}(n^2m) }[/math], where [math]\displaystyle{ n=|V| }[/math] and [math]\displaystyle{ m=|A| }[/math].

Proof: First we show that the total number of relabels (step 3 of the main loop) is in [math]\displaystyle{ \mathcal{O}(n^2) }[/math]. To see that, let [math]\displaystyle{ v\in V\setminus\{s,t\} }[/math] be an active node before/after an iteration of the main loop. A straightforward induction over the number of push operations shows that there is at least one simple [math]\displaystyle{ (s,v) }[/math]-path [math]\displaystyle{ p }[/math] with positive flow on all arcs. The reverse of [math]\displaystyle{ p }[/math] is a path from [math]\displaystyle{ v }[/math] to [math]\displaystyle{ s }[/math] in the residual network. Due to the validity of [math]\displaystyle{ d }[/math] (induction hypothesis), [math]\displaystyle{ d(v)-d(s)=d(v)-n }[/math] cannot be larger than the number of arcs on [math]\displaystyle{ p }[/math], which is not larger than [math]\displaystyle{ n-1 }[/math]. Therefore, no node label can be larger than [math]\displaystyle{ 2n-1 }[/math]. Since node labels are nonnegative and increase by at least one in each relabel operation, the first claim follows.
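
In summary: each of the at most [math]\displaystyle{ n-2 }[/math] nodes in [math]\displaystyle{ V\setminus\{s,t\} }[/math] can be relabelled at most [math]\displaystyle{ 2n-1 }[/math] times, so

[math]\displaystyle{ \#\{\text{relabel operations}\}\;\leq\;(n-2)\cdot(2n-1)\;\in\;\mathcal{O}(n^2). }[/math]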

The argument from the proof of correctness of the Ahuja-Orlin algorithm, which shows that the total number of saturating push operations is in [math]\displaystyle{ \mathcal{O}(nm) }[/math], applies here as well.

Finally, we consider the non-saturating push operations. First note that [math]\displaystyle{ \Delta\geq 0 }[/math] before and after each iteration. Due to the variant, the value of [math]\displaystyle{ \Delta }[/math] is decreased by at least one in each non-saturating push operation. The value of [math]\displaystyle{ \Delta }[/math] is increased in each relabel operation exactly by the amount by which the label of the current node is increased. Since node labels never decrease and are bounded from above by [math]\displaystyle{ 2n-1 }[/math], [math]\displaystyle{ \Delta }[/math] increases by less than [math]\displaystyle{ 2n^2 }[/math] in total over all relabel operations. On the other hand, a saturating push operation may increase [math]\displaystyle{ \Delta }[/math] by at most [math]\displaystyle{ 2n-1 }[/math] (namely, in the case that [math]\displaystyle{ w }[/math] was not active immediately before the push). Therefore, the number of non-saturating push operations is in [math]\displaystyle{ \mathcal{O}(n^2m) }[/math], which dominates the total work and yields the claimed bound.
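
The counting can be summarized as follows, where [math]\displaystyle{ \Delta_{\mathrm{init}}\in\mathcal{O}(n^2) }[/math] denotes the value of [math]\displaystyle{ \Delta }[/math] immediately after the induction basis:

[math]\displaystyle{ \#\{\text{non-saturating pushes}\}\;\leq\;\Delta_{\mathrm{init}}+2n^2+(2n-1)\cdot\#\{\text{saturating pushes}\}\;\in\;\mathcal{O}(n^2m). }[/math]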