Chapter 6 Transform-and-Conquer. Copyright © 2007 Pearson Addison-Wesley. All rights reserved.


Transform-and-Conquer
This group of techniques solves a problem by a transformation
- to a simpler/more convenient instance of the same problem (instance simplification)
- to a different representation of the same instance (representation change)
- to a different problem for which an algorithm is already available (problem reduction)

Instance simplification – Presorting
Solve a problem's instance by transforming it into another simpler/easier instance of the same problem.
Presorting: many problems involving lists are easier when the list is sorted:
- searching
- computing the median (selection problem)
- checking if all elements are distinct (element uniqueness)
Also: topological sorting helps solve some problems for dags; presorting is used in many geometric algorithms.
Presorting is a special case of preprocessing.


How fast can we sort?
Efficiency of algorithms involving sorting depends on the efficiency of sorting.
Theorem (see Sec. 11.2): ⌈log2 n!⌉ ≈ n log2 n comparisons are necessary in the worst case to sort a list of size n by any comparison-based algorithm.
Note: About n log2 n comparisons are also sufficient to sort an array of size n (by mergesort).

Searching with presorting
Problem: Search for a given K in A[0..n-1]
Presorting-based algorithm:
Stage 1: Sort the array by an efficient sorting algorithm
Stage 2: Apply binary search
Efficiency: Θ(n log n) + O(log n) = Θ(n log n)
Good or bad? Why do we have our dictionaries, telephone directories, etc. sorted?

Element Uniqueness with presorting
Presorting-based algorithm:
Stage 1: Sort by an efficient sorting algorithm (e.g. mergesort)
Stage 2: Scan the array to check pairs of adjacent elements
Efficiency: Θ(n log n) + O(n) = Θ(n log n)
Brute-force algorithm: compare all pairs of elements. Efficiency: O(n^2)
Another algorithm: hashing, which works well on average.
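The two-stage element-uniqueness algorithm above can be sketched in a few lines (the function name `all_distinct` is illustrative; Python's built-in sort plays the role of the efficient Θ(n log n) sorting algorithm):

```python
def all_distinct(items):
    """Presorting-based element uniqueness check."""
    a = sorted(items)  # Stage 1: sort in O(n log n)
    # Stage 2: a single O(n) scan -- duplicates, if any, are now adjacent
    return all(a[i] != a[i + 1] for i in range(len(a) - 1))
```

Compare this Θ(n log n) total with the O(n^2) brute-force comparison of all pairs.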

Instance simplification – Gaussian Elimination
Given: A system of n linear equations in n unknowns with an arbitrary coefficient matrix.
Transform to: An equivalent system of n linear equations in n unknowns with an upper-triangular coefficient matrix.
Solve the latter by substitutions, starting with the last equation and moving up to the first one.

a11x1 + a12x2 + … + a1nxn = b1            a11x1 + a12x2 + … + a1nxn = b1
a21x1 + a22x2 + … + a2nxn = b2     →             a22x2 + … + a2nxn = b2
  …                                                      …
an1x1 + an2x2 + … + annxn = bn                               annxn = bn

Gaussian Elimination (cont.)
The transformation is accomplished by a sequence of elementary operations on the system's coefficient matrix (which don't change the system's solution):
for i ← 1 to n-1 do
    replace each of the subsequent rows (i.e., rows i+1, …, n) by the difference between that row and an appropriate multiple of the i-th row, to make the new coefficient in the i-th column of that row 0

Example of Gaussian Elimination
Solve   2x1 – 4x2 + x3 = 6
        3x1 –  x2 + x3 = 11
         x1 +  x2 – x3 = -3

Gaussian elimination:
2  -4     1     6                           2  -4     1     6
3  -1     1    11   row2 – (3/2)row1   →    0   5  -1/2     2
1   1    -1    -3   row3 – (1/2)row1   →    0   3  -3/2    -6   row3 – (3/5)row2

2  -4     1     6
0   5  -1/2     2
0   0  -6/5 -36/5

Backward substitution:
x3 = (-36/5) / (-6/5) = 6
x2 = (2 + (1/2)·6) / 5 = 1
x1 = (6 – 6 + 4·1) / 2 = 2

Pseudocode and Efficiency of Gaussian Elimination
Stage 1: Reduction to the upper-triangular matrix
for i ← 1 to n-1 do
    for j ← i+1 to n do
        for k ← i to n+1 do
            A[j, k] ← A[j, k] – A[i, k] * A[j, i] / A[i, i]   //improve!
(The "improve!" note: compute the multiple A[j, i] / A[i, i] once before the innermost loop; as written, A[j, i] is overwritten at k = i, so later iterations would use the wrong value.)

Stage 2: Backward substitution
for j ← n downto 1 do
    t ← 0
    for k ← j+1 to n do
        t ← t + A[j, k] * x[k]
    x[j] ← (A[j, n+1] – t) / A[j, j]

Efficiency: Θ(n^3) + Θ(n^2) = Θ(n^3)

Searching Problem
Problem: Given a (multi)set S of keys and a search key K, find an occurrence of K in S, if any.
Searching must be considered in the context of:
- file size (internal vs. external)
- dynamics of data (static vs. dynamic)
Dictionary operations (dynamic data): find (search), insert, delete

Taxonomy of Searching Algorithms
List searching (good for static data): sequential search, binary search, interpolation search
Tree searching (good for dynamic data): binary search tree; binary balanced trees: AVL trees, red-black trees; multiway balanced trees: 2-3 trees, 2-3-4 trees, B-trees
Hashing (good on average): open hashing (separate chaining), closed hashing (open addressing)
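The two stages of the pseudocode above translate directly to Python. This is a minimal sketch (the name `gaussian_solve` is illustrative): it assumes all pivots A[i][i] are nonzero and does no partial pivoting, and it applies the "improve!" fix by computing each row's multiple once.

```python
def gaussian_solve(A, b):
    """Solve Ax = b by Gaussian elimination with backward substitution."""
    n = len(A)
    # Work on a copy of the augmented matrix [A | b].
    M = [row[:] + [b[i]] for i, row in enumerate(A)]

    # Stage 1: reduce to upper-triangular form.
    for i in range(n - 1):
        for j in range(i + 1, n):
            factor = M[j][i] / M[i][i]        # compute the multiple ONCE ("improve!")
            for k in range(i, n + 1):
                M[j][k] -= factor * M[i][k]

    # Stage 2: backward substitution, from the last equation up.
    x = [0.0] * n
    for j in range(n - 1, -1, -1):
        t = sum(M[j][k] * x[k] for k in range(j + 1, n))
        x[j] = (M[j][n] - t) / M[j][j]
    return x
```

On the worked example above, `gaussian_solve([[2, -4, 1], [3, -1, 1], [1, 1, -1]], [6, 11, -3])` yields x1 = 2, x2 = 1, x3 = 6, matching the backward-substitution steps on the earlier slide.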

Binary Search Tree
Arrange keys in a binary tree with the binary search tree property: for every node, all keys in its left subtree are less than the node's key, and all keys in its right subtree are greater.
Example: 5, 3, 1, 10, 12, 7, 9

Dictionary Operations on Binary Search Trees
Searching – straightforward
Insertion – search for the key, insert at the leaf where the search terminated
Deletion – 3 cases:
- deleting a key at a leaf
- deleting a key at a node with a single child
- deleting a key at a node with two children
Efficiency depends on the tree's height: log2 n ≤ h ≤ n-1, with the average height (random files) about 3 log2 n.
Thus all three operations have:
- worst-case efficiency: Θ(n)
- average-case efficiency: Θ(log n)  (CLRS, Ch. 12)
Bonus: inorder traversal produces a sorted list

Balanced Search Trees
The attractiveness of the binary search tree is marred by its bad (linear) worst-case efficiency. Two ideas to overcome it are:
- rebalance the binary search tree when a new insertion makes the tree "too unbalanced": AVL trees, red-black trees
- allow more than one key and two children per node: 2-3 trees, 2-3-4 trees, B-trees
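The insertion and search operations above, and the "bonus" inorder property, can be sketched as follows (class and function names are illustrative):

```python
class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def bst_insert(root, key):
    """Insert key at the leaf position where an unsuccessful search ends."""
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = bst_insert(root.left, key)
    else:
        root.right = bst_insert(root.right, key)
    return root

def bst_search(root, key):
    while root is not None and root.key != key:
        root = root.left if key < root.key else root.right
    return root is not None

def inorder(root):
    """Bonus property: inorder traversal yields the keys in sorted order."""
    return [] if root is None else inorder(root.left) + [root.key] + inorder(root.right)
```

Building the tree from the slide's example list 5, 3, 1, 10, 12, 7, 9 and traversing it inorder produces the sorted sequence 1, 3, 5, 7, 9, 10, 12.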

Balanced trees: AVL trees
Definition: An AVL tree is a binary search tree in which, for every node, the difference between the heights of its left and right subtrees, called the balance factor, is at most 1 (with the height of an empty tree defined as -1).
Tree (a) is an AVL tree; tree (b) is not an AVL tree.

Rotations
If a key insertion violates the balance requirement at some node, the subtree rooted at that node is transformed via one of the four rotations. (The rotation is always performed for a subtree rooted at the "unbalanced" node closest to the new leaf.)
Single R-rotation
Double LR-rotation
General case: Single R-rotation
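The balance condition in the definition can be checked directly. This sketch (hypothetical helper names `height` and `is_avl`; trees represented as nested `(key, left, right)` tuples with `None` for the empty tree) verifies the AVL property without performing any rotations:

```python
def height(t):
    """Height of a (key, left, right) tuple tree; the empty tree has height -1."""
    return -1 if t is None else 1 + max(height(t[1]), height(t[2]))

def is_avl(t):
    """Check |balance factor| <= 1 at every node (BST key order is assumed, not re-checked)."""
    if t is None:
        return True
    bf = height(t[1]) - height(t[2])
    return abs(bf) <= 1 and is_avl(t[1]) and is_avl(t[2])
```

A degenerate right chain fails this check at the root, which is exactly the situation the rotations are designed to repair.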

General case: Double LR-rotation

AVL tree construction – an example
Construct an AVL tree for the list 5, 6, 8, 3, 2, 4, 7

AVL tree construction – an example (cont.)

Analysis of AVL trees
h < 1.4404 log2(n + 2) – 1.3277
Average height: ≈ 1.01 log2 n + 0.1 for large n (found empirically)
Search and insertion are O(log n)
Deletion is more complicated but is also O(log n)
Disadvantages: frequent rotations; complexity
A similar idea: red-black trees (the heights of subtrees are allowed to differ by up to a factor of 2)

Multiway Search Trees
Definition: A multiway search tree is a search tree that allows more than one key in the same node of the tree.
Definition: A node of a search tree is called an n-node if it contains n-1 ordered keys k1 < k2 < … < kn-1, which divide the entire key range into n intervals pointed to by the node's n links to its children: keys < k1; [k1, k2); …; keys ≥ kn-1.
Note: Every node in a classical binary search tree is a 2-node.

2-3 Tree
Definition: A 2-3 tree is a search tree that
- may have 2-nodes and 3-nodes
- is height-balanced (all leaves are on the same level)
A 2-3 tree is constructed by successive insertions of the keys given, with a new key always inserted into a leaf of the tree. If the leaf is a 3-node, it is split into two, with the middle key promoted to the parent.

2-3 tree construction – an example
Construct a 2-3 tree for the list 9, 5, 8, 3, 2, 4, 7

Analysis of 2-3 trees
log3(n + 1) – 1 ≤ h ≤ log2(n + 1) – 1
Search, insertion, and deletion are in Θ(log n)
The idea of the 2-3 tree can be generalized by allowing more keys per node: 2-3-4 trees, B-trees

Heaps and Heapsort
Definition: A heap is a binary tree with keys at its nodes (one key per node) such that:
- It is essentially complete, i.e., all its levels are full except possibly the last level, where only some rightmost keys may be missing
- The key at each node is ≥ the keys at its children (this is called a max-heap)

Illustration of the heap's definition: [figure: one tree that is a heap, two that are not]
Note: A heap's elements are ordered top down (along any path down from its root), but they are not ordered left to right.

Some Important Properties of a Heap
- Given n, there exists a unique binary tree with n nodes that is essentially complete, with h = ⌊log2 n⌋
- The root contains the largest key
- The subtree rooted at any node of a heap is also a heap
- A heap can be represented as an array

Heap's Array Representation
Store the heap's elements in an array (whose elements are indexed, for convenience, 1 to n) in top-down, left-to-right order.
Example: [figure: a heap with keys 9, 5, 3, 1, 4, 2 and its array]
- The left child of node j is at 2j
- The right child of node j is at 2j+1
- The parent of node j is at ⌊j/2⌋
- Parental nodes are represented in the first ⌊n/2⌋ locations
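The array representation above can be exercised with a bottom-up heap-construction sketch (function names are illustrative). It uses the slide's 1-based indexing, so `h[0]` is an unused placeholder and the children of node j sit at 2j and 2j+1:

```python
def sift_down(h, j, n):
    """Restore the max-heap property in the subtree rooted at index j."""
    while 2 * j <= n:                      # while j has at least a left child
        c = 2 * j
        if c + 1 <= n and h[c + 1] > h[c]:
            c += 1                         # pick the larger of the two children
        if h[j] >= h[c]:
            break                          # heap property already holds here
        h[j], h[c] = h[c], h[j]
        j = c

def heapify(h):
    """Build a max-heap in place by sifting down every parental node, bottom up.
    Parental nodes occupy the first n // 2 positions, as noted on the slide."""
    n = len(h) - 1                         # h[0] is unused padding
    for j in range(n // 2, 0, -1):
        sift_down(h, j, n)
```

After `heapify`, the largest key is at index 1 (the root), consistent with the heap properties listed above.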

Examples of Solving Problems by Reduction
Counting the number of paths of length n in a graph by raising the graph's adjacency matrix to the n-th power: if a (directed) graph G has adjacency matrix A, then for any k, the (i, j) entry of A^k gives the number of paths of length k from vertex i to vertex j. (Levitin, Ch. 6.6)
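This reduction can be sketched as follows (`path_counts` and `mat_mult` are illustrative names; plain repeated multiplication is used here, though repeated squaring would reduce the number of matrix products to O(log k)):

```python
def mat_mult(A, B):
    """Multiply two n x n matrices given as lists of lists."""
    n = len(A)
    return [[sum(A[i][t] * B[t][j] for t in range(n)) for j in range(n)]
            for i in range(n)]

def path_counts(A, k):
    """Return A^k; entry (i, j) counts the length-k paths from vertex i to vertex j."""
    R = A
    for _ in range(k - 1):
        R = mat_mult(R, A)
    return R
```

For instance, for the two-vertex graph with arcs 0→1 and 1→0, the (0, 0) entry of A^2 is 1: the single length-2 path 0→1→0.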
