Model computer: Difference between revisions

From Algowiki
[[Category:Algorithms]]
[[Category:Other]]
[[Category:Search Algorithms]]
[[Category:Tree Algorithms]]
[[Category:Graph Traversal]]


== General information ==
 
== Definition ==
 
The model computer consists of a '''storage''' and a '''processing''' unit.
 
The storage is a large array of elementary storage units, each of which stores one bit of information.
 
The processing unit is a [http://en.wikipedia.org/wiki/Boolean_circuit Boolean circuit], that is, a number of logical gates connected by wires. Both the input and the output pins are attached to the bits in the storage, in both cases in a one-to-one correspondence.
 
In each clock cycle of the computer, the current values of the storage bits are imported into the combinatorial circuit via the input pins, and the values exported at the output pins are the new values of the storage bits.
 
In each clock cycle, only a single '''machine instruction''' is executed, that is, a single bit is copied or two bits are processed logically/arithmetically (giving the values of one or two other bits). In particular, in each clock cycle, a constant, instruction-specific number of gates is involved in the update of the bit values in the storage.
 
For simplicity, interfaces to background devices, terminals, networks, etc., are ignored here. These details are not necessary for the analysis of algorithms and data structures.
 
It may be useful to regard the storage units as being organized in larger units (machine words) of fixed size, as in a real-world computer. However, this is not necessary for most considerations. In fact, an elementary instruction on fixed-size machine words (as on [http://en.wikipedia.org/wiki/Reduced_instruction_set_computing RISC] machines) amounts to a constant number of elementary operations on bits. A constant factor on the number of execution steps makes no difference with respect to asymptotic complexity. In other words, a machine model with elementary instructions on bits and an analogously organized machine model with elementary instructions on machine words are equivalent and indistinguishable with respect to asymptotic complexity.
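The gate-level model can be illustrated with a short Python sketch (the function and the storage layout are our own illustration, not part of the model): one clock cycle of a fixed Boolean circuit, here a full adder, whose input pins read three storage bits and whose output pins overwrite two others. Each Boolean operator corresponds to one logic gate, so the update involves a constant, circuit-specific number of gates.

```python
def clock_cycle(storage):
    """Simulate one clock cycle of a fixed Boolean circuit (a full adder).

    The input pins read storage bits 0 (a), 1 (b), and 2 (carry-in);
    the output pins write the sum to bit 3 and the carry-out to bit 4.
    Every operator below is one elementary gate (XOR, AND, OR)."""
    a, b, cin = storage[0], storage[1], storage[2]
    half_sum = a ^ b                          # XOR gate
    storage[3] = half_sum ^ cin               # XOR gate: sum bit
    storage[4] = (a & b) | (half_sum & cin)   # two AND gates, one OR gate
    return storage

# 1 + 1 with carry-in 1 gives sum bit 1 and carry-out 1 (binary 11 = 3).
```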


'''Algorithmic problem:''' [[Graph traversal]]

'''Type of algorithm:''' loop
== Definitions ==

# For each node, an arbitrary but fixed ordering of the outgoing arcs is assumed. An arc <math>(v,w)</math> preceding an arc <math>(v,w')</math> in this ordering is called '''lexicographically smaller''' than <math>(v,w')</math>.
# Let <math>p</math> and <math>p'</math> be two paths that start from the same node <math>v\in V</math>, but may or may not have the same end node. Let <math>w</math> be the last common node such that the subpaths of <math>p</math> and <math>p'</math> from <math>v</math> up to <math>w</math> are identical (possibly <math>v=w</math>). If the next arc of <math>p</math> from <math>w</math> onwards is lexicographically smaller than the next arc of <math>p'</math> from <math>w</math> onwards, <math>p</math> is said to be '''lexicographically smaller''' than <math>p'</math>. Note that the lexicographically smallest path from <math>v\in V</math> to <math>w\in V</math> is well defined and unique.
# With respect to a start node <math>s\in V</math>, a node <math>v\in V</math> is '''lexicographically smaller''' than <math>w\in V</math> if the lexicographically smallest path from <math>s</math> to <math>v</math> is lexicographically smaller than the lexicographically smallest path from <math>s</math> to <math>w</math>.
# In all of the above cases, the reverse relation is called '''lexicographically larger'''.
# A node <math>v\in V</math> is '''lexicographically smaller''' (resp., '''lexicographically larger''') than a path <math>p</math> if <math>v</math> does not belong to <math>p</math> and the lexicographically smallest path from the start node of <math>p</math> to <math>v</math> is lexicographically smaller (resp., larger) than <math>p</math>. (Note the asymmetry: in both cases, the lexicographically smallest path to <math>v</math> is used.)
# In all cases, we also say '''precedes''' and '''succeeds''', respectively, instead of "is lexicographically smaller/larger".


== Abstract view ==
 
'''Additional output:'''
# Each node has two Boolean labels with the semantics "is seen" and "is finished".
# An [[Basic graph definitions#Forests, trees, branchings, arborescences|arborescence]] <math>A=(V',A')</math> rooted at <math>s</math> such that <math>V'\subseteq V</math> is the set of all nodes reachable from <math>s</math> (including <math>s</math>). For each node <math>v\in V'</math>, the path from <math>s</math> to <math>v</math> in <math>A</math> is the lexicographically smallest <math>(s,v)</math>-path in <math>G</math>.
 
'''Specific characteristic:'''
The nodes may be returned either in lexicographic order or (alternatively or simultaneously) in '''parenthetical order''', that is:
Let <math>v,w\in V</math> such that <math>v</math> is seen before <math>w</math>. If there is a path from <math>v</math> to <math>w</math>, <math>w</math> is finished prior to <math>v</math>.
 
'''Remark:'''
''Parenthetical'' refers to the following intuition: Whenever a node is seen, open a parenthesis, and close it once the node is finished. The result is a correct parenthetical expression: two parentheses are either disjoint, or one is nested in the other.
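This property can be checked mechanically. The following Python sketch is our own (function and parameter names are hypothetical; ''reachable'' is a caller-supplied predicate telling whether the graph contains a path from one node to another):

```python
def is_parenthetical(seen_order, finish_order, reachable):
    """Verify the specific characteristic: if v is seen before w and
    there is a path from v to w, then w must be finished prior to v.

    seen_order / finish_order: nodes in the order they were seen / finished.
    reachable(v, w): True if the graph contains a path from v to w."""
    seen_rank = {v: i for i, v in enumerate(seen_order)}
    finish_rank = {v: i for i, v in enumerate(finish_order)}
    for v in seen_order:
        for w in seen_order:
            if v != w and seen_rank[v] < seen_rank[w] and reachable(v, w):
                if finish_rank[w] > finish_rank[v]:
                    return False  # w outlived v: the parentheses cross
    return True
```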
 
'''Auxiliary data:'''
# A [[Sets and sequences#Stacks and queues|stack]] <math>S</math> whose elements are nodes in <math>V</math>.
# Each node has a '''current arc''' <math>a_v\in A</math>, which is either void or an outgoing arc <math>a_v=(v,w)</math> of <math>v</math>. (May be viewed as an iterator over the list of all outgoing arcs.)
 
'''Invariant:'''
Before and after each iteration:
# <math>S</math> forms a path <math>p</math> from the start node <math>s</math> to some other node, that is, the order of the nodes on <math>p</math> is the order in which they appear in <math>S</math> (start node <math>s</math> at the bottom of <math>S</math>).
# For each node not yet seen, the current arc is the first arc (or void if the node has no outgoing arcs).
# For each node <math>v</math> on <math>p</math>:
## If there are arcs <math>(v,w)\in A</math> such that <math>w</math> is not yet seen, the current arc equals or precedes the first such arc.
## The subpath of <math>p</math> from the start node <math>s</math> to <math>v</math> is the lexicographically first <math>(s,v)</math>-path.
# The nodes on <math>p</math> are seen but not finished. Let <math>p+a</math> denote the concatenation of <math>p</math> with the current arc <math>a</math> of the last node of <math>p</math>. The nodes that are lexicographically smaller than <math>p+a</math> are seen and finished, and the nodes that lexicographically succeed <math>p+a</math> are neither seen nor finished. (Note that nothing is said about the head of <math>a</math>).
 
'''Variant:''' Either one node is finished or the current arc of one node is moved forward.
 
'''Break condition:''' <math>S=\emptyset</math>.
 
== Induction basis ==
 
'''Abstract view:''' No node is finished. The start node <math>s</math> is seen, no other node is seen. The start node is the only element of <math>S</math>. The current arc of each node is its first outgoing arc. If the nodes are to be returned in lexicographic order, the start node <math>s</math> is, initially, the only member of the output sequence; otherwise, the initial output sequence is empty. Arborescence <math>A</math> is initialized so as to contain <math>s</math> and nothing else.
 
'''Implementation:''' Obvious.
 
'''Proof:''' Obvious.
 
== Induction step ==
 
'''Abstract view:'''
# Let <math>v</math> be the last node of <math>p</math> (=the top element of <math>S</math>).
# While the current arc of <math>v</math> is not void and while the head of the current arc is labeled as seen: Move the current arc one step forward.
# If the current arc of <math>v</math> is not void, say, <math>a_v=(v,w)</math>:
## Insert <math>w</math> and <math>(v,w)</math> in <math>A</math>.
## Push <math>w</math> on <math>S</math>.
## Label <math>w</math> as seen.
## If the output order is the lexicographical one: Append <math>w</math> to the output sequence.
# Otherwise:
## Remove <math>v</math> from <math>S</math>.
## Label <math>v</math> as finished.
## If the output order is the parenthetical one: Put <math>v</math> in the output sequence.
 
'''Implementation:''' Obvious.
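The "obvious" implementation can be made concrete with a Python sketch (all names are ours; adjacency lists provide the fixed ordering of the outgoing arcs, and their iterators play the role of the current arcs):

```python
def dfs_traversal(adj, s):
    """Iterative DFS following the abstract view above.

    adj: dict mapping every node to the ordered list of heads of its
    outgoing arcs (this list order is the fixed arc ordering).
    Returns (lex_order, paren_order, arborescence), where arborescence
    maps each node reached from s (except s) to the arc over which it
    was seen for the first time."""
    current_arc = {v: iter(adj[v]) for v in adj}  # current-arc iterators
    seen, finished = {s}, set()
    arborescence = {}
    lex_order, paren_order = [s], []
    stack = [s]
    while stack:
        v = stack[-1]
        # Step 2: move the current arc past heads that are already seen.
        w = next((u for u in current_arc[v] if u not in seen), None)
        if w is not None:          # step 3: see w via the arc (v, w)
            arborescence[w] = (v, w)
            seen.add(w)
            lex_order.append(w)
            stack.append(w)
        else:                      # step 4: current arc void, finish v
            stack.pop()
            finished.add(v)
            paren_order.append(v)
    return lex_order, paren_order, arborescence
```

For example, on the graph with arcs <math>(s,a)</math>, <math>(s,b)</math>, <math>(a,b)</math> in that ordering, the nodes are seen in the order <math>s,a,b</math> and finished in the order <math>b,a,s</math>.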
 
'''Proof:'''
The loop ''variant'' is obviously fulfilled.
 
The first point of the ''invariant'' is obviously fulfilled, too. The second point follows from the fact that the current arc of a node is initialized to be the node's very first outgoing arc and only changed after the node is labeled as ''seen''. Point 3.1 follows from the fact that, in step 2, the current arc never skips an arc that points to an unseen node.
 
For point 3.2, let <math>p'</math> be the lexicographically smallest <math>(s,v)</math>-path. Moreover, let <math>w\neq v</math> be the last node on <math>p</math> and <math>p'</math> such that both paths are identical from <math>s</math> to <math>w</math> (possibly, <math>w=s</math>). Further, let <math>u</math> and <math>u'</math> be the immediate successors of <math>w</math> on <math>p</math> and <math>p'</math>, respectively. Then <math>u'</math> has been seen before <math>u</math> because <math>(w,u)</math> is the arc over which <math>u</math> was seen for the first time, and <math>(w,u')</math> precedes <math>(w,u)</math>. Note that <math>v</math> has not been seen earlier than <math>u</math> (in fact, later than <math>u</math>, unless <math>v=u</math>). In summary, <math>u'</math> has been seen before <math>v</math>. Since there is a path from <math>u'</math> to <math>v</math>, the correctness proof [[#Correctness|below]] will prove that <math>v</math> was finished before <math>u'</math>. This contradicts the induction hypothesis (point 4 of the invariant).
 
When a node is pushed on <math>S</math>, it is neither seen nor finished immediately before that iteration and then labeled as seen in that iteration. The node is finished when it leaves <math>S</math>. Both facts together give the first sentence of point 4. The other statements of point 4 follow from the observation that the concatenation of <math>p</math> with the current arc of the endnode of <math>p</math> increases lexicographically in each iteration.
 
== Correctness ==
 
It is easy to see that each operation of the algorithm is well defined. Due to the variant, the loop terminates after a finite number of steps. Immediately before the last iteration, <math>p</math> consists of the start node <math>s</math> only, and the current arc of <math>s</math> is void. Therefore, ''all'' nodes reachable from <math>s</math> except for <math>s</math> itself are lexicographically smaller than <math>p</math> at that moment. Due to point 4 of the invariant, all of these nodes are finished. In the last iteration, <math>s</math> is finished as well.
 
So, it remains to show that the specific characteristic is fulfilled in both cases.
Due to point 3.2 of the invariant, a node is seen for the first time via its lexicographically smallest path from <math>s</math>. Since the current path increases lexicographically in each iteration, the nodes are labeled as seen in lexicographic order. In summary, the lexicographical order is correct.
 
So consider the second, the parenthetical case.
Let <math>v</math> be seen before <math>w</math> and assume there is a path from <math>v</math> to <math>w</math>. We have to show that <math>w</math> is finished prior to <math>v</math>. Let <math>p'</math> denote the lexicographically smallest <math>(v,w)</math>-path. There is a stage in which this path is a subpath of the current path <math>p</math>. Clearly, <math>v</math> cannot be removed from <math>S</math> before <math>w</math>. This proves the claim.
 
== Complexity ==
 
'''Statement:''' The asymptotic complexity is in <math>\Theta(|V|+|A|)</math> in the worst case.
 
'''Proof:'''
For every node reachable from <math>s</math> (including <math>s</math>), the algorithm processes each of its outgoing arcs exactly once. And from each of these nodes, the algorithm goes backwards exactly once. Obviously, each of these steps requires a constant number of operations.
 
== Remark ==
 
Alternatively, DFS could be implemented as a recursive procedure. However, this excludes the option to implement DFS as an iterator, which means to turn the loop inside-out (cf. remarks on [[Graph traversal|graph traversal]]).
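In a language with generators, the loop can indeed be turned inside-out, so that the caller pulls one traversal event at a time and may stop early. A Python sketch under the same assumptions as above (naming is ours):

```python
def dfs_events(adj, s):
    """Yield DFS events lazily: ('seen', v) in lexicographic order and
    ('finished', v) in parenthetical order.

    adj: dict mapping every node to the ordered list of heads of its
    outgoing arcs."""
    current_arc = {v: iter(adj[v]) for v in adj}
    seen = {s}
    stack = [s]
    yield ('seen', s)
    while stack:
        v = stack[-1]
        w = next((u for u in current_arc[v] if u not in seen), None)
        if w is not None:
            seen.add(w)
            stack.append(w)
            yield ('seen', w)      # control returns to the caller here
        else:
            stack.pop()
            yield ('finished', v)
```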
 
== Pseudocode recursive implementation ==
 
 
==== DFS(''G'') ====
:'''for''' each vertex ''u'' &isin; ''V''[''G'']
::'''do''' ''color''[''u''] &larr; WHITE
::: ''&pi;''[''u''] &larr; NIL
:''time'' &larr; 0
:'''for''' each vertex ''u'' &isin; ''V''[''G'']
:: '''do if''' ''color''[''u''] = WHITE
::: '''then''' DFS-VISIT(''u'')
 
 
 
====DFS-VISIT(''u'')====
: ''color''[''u''] &larr; GRAY 
: ''time'' &larr; ''time'' + 1
: ''d''[''u''] &larr; ''time''
: '''for''' each ''v'' &isin; ''Adj''[''u'']
:: '''do if''' ''color''[''v''] = WHITE
::: '''then''' ''&pi;''[''v''] &larr; ''u''
:::: DFS-VISIT(''v'')
:''color''[''u''] &larr; BLACK
:''f''[''u''] &larr; ''time'' &larr; ''time'' + 1
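The recursive pseudocode above translates almost line by line into Python. The following sketch (dictionary-based bookkeeping is our choice) returns the discovery times ''d'', finishing times ''f'', and predecessors ''&pi;'':

```python
WHITE, GRAY, BLACK = 0, 1, 2

def dfs_recursive(adj):
    """Recursive DFS over all vertices, following the pseudocode above.

    adj: dict mapping every vertex to the ordered list of its adjacent
    vertices. Returns discovery times d, finishing times f, and the
    predecessor map pi."""
    color = {u: WHITE for u in adj}
    pi = {u: None for u in adj}
    d, f = {}, {}
    time = 0

    def visit(u):                       # DFS-VISIT(u)
        nonlocal time
        color[u] = GRAY
        time += 1
        d[u] = time
        for v in adj[u]:
            if color[v] == WHITE:
                pi[v] = u
                visit(v)
        color[u] = BLACK
        time += 1
        f[u] = time

    for u in adj:                       # DFS(G): visit every white vertex
        if color[u] == WHITE:
            visit(u)
    return d, f, pi
```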


== Pseudocode stack implementation ==

====DFS(''s'')====
: ''S'' = new Stack()
: ''s.IsSeen'' = ''true''
: ''S.push(s)''
: '''while''' ''S'' &ne; &empty;
:: ''n'' = ''S.peek()''
:: ''a'' = ''(v, w)'' = ''n.nextArc()''
:: '''if''' ''a'' == ''null''
::: ''n.IsFinished'' = ''true''
::: ''S.pop()''
:: '''else if''' ''w.IsSeen'' == ''false''
::: ''w.IsSeen'' = ''true''
::: ''S.push(w)''

=== Known Related Topics ===

All descriptions of algorithms and data structures and algorithmic problems in this wiki shall be based on this model of computation (unless clearly stated otherwise).

=== Remark ===

This model computer is equivalent to the [http://en.wikipedia.org/wiki/Turing_machine Turing machine] model with respect to computability and asymptotic complexity of algorithmic problems. Therefore, all computability and complexity results in this wiki are identical to those in Theoretical Computer Science (but derived differently).

=== Reference ===

General introduction to [http://en.wikipedia.org/wiki/Abstract_machine abstract machines].

Revision as of 14:25, 13 January 2015

