
Wednesday, October 12, 2011

Chapter 8: Sorting in Linear Time

8.1.1. The smallest possible depth of a leaf in the decision tree of a comparison sort on n elements is n-1.

8.2.4. Make an array Count[0...k] which stores the frequency of each element in the range 0 to k. Now make another array Cumulative[0...k+1] which stores the cumulative sums of the Count array, with Cumulative[0] = 0.
Then the answer to every query [a, b] is Cumulative[b+1] - Cumulative[a].

We can also save space by building the cumulative sums in the Count array itself.
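As a runnable sketch in Python (the names Count and Cumulative follow the description above; elements are assumed to lie in 0..k):

```python
def build_cumulative(arr, k):
    """Count[v] = frequency of v; Cumulative[i] = number of elements < i."""
    count = [0] * (k + 1)
    for x in arr:
        count[x] += 1
    cumulative = [0] * (k + 2)
    for i in range(k + 1):
        cumulative[i + 1] = cumulative[i] + count[i]
    return cumulative

def range_count(cumulative, a, b):
    """Number of input elements in the closed range [a, b], O(1) per query."""
    return cumulative[b + 1] - cumulative[a]
```

Preprocessing is O(n + k); after that each query is a single subtraction.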

8.3.2. Insertion Sort and Merge Sort are stable sorting algorithms, whereas Quicksort's stability depends upon the implementation and Heap Sort isn't stable.

Maintaining Stability: We can maintain stability by storing each element's original index along with its value and breaking ties in favor of the element with the lower index.
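A small Python sketch of this decorate-with-index trick (stable_sort and the key parameter are illustrative names, not from the text):

```python
def stable_sort(arr, key=lambda x: x):
    # Pair each element with its original index; tuple comparison then
    # breaks key ties by that index, so equal keys keep their order.
    decorated = [(key(x), i, x) for i, x in enumerate(arr)]
    decorated.sort()  # compares (key, original index); indices are unique
    return [x for _, _, x in decorated]
```

Because the indices are unique, the comparison never falls through to the payload itself, so even non-comparable elements can be sorted this way.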

8.3.4. The numbers lie in the range 0 to n^2 - 1, so counting sort directly would cost O(n^2); instead treat each number as a two-digit base-n number and apply radix sort, using counting sort as the stable per-digit sort, to achieve O(n) time complexity.
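Assuming the exercise is CLRS 8.3-4 (sort n integers in the range 0 to n^2 - 1 in O(n) time), a minimal Python sketch: each value is viewed as a two-digit base-n number, and a stable counting sort is run on each digit, least significant first.

```python
def radix_sort_base_n(arr):
    """Sort n integers in the range 0 .. n^2 - 1 in O(n) total time."""
    n = len(arr)
    if n <= 1:
        return arr[:]

    def stable_digit_sort(a, digit):  # digit 0 = low, 1 = high (base n)
        count = [0] * n
        for x in a:
            count[(x // n ** digit) % n] += 1
        for d in range(1, n):          # prefix sums -> final positions
            count[d] += count[d - 1]
        out = [0] * len(a)
        for x in reversed(a):          # reversed scan keeps the sort stable
            d = (x // n ** digit) % n
            count[d] -= 1
            out[count[d]] = x
        return out

    return stable_digit_sort(stable_digit_sort(arr, 0), 1)
```

Each pass is O(n + n) = O(n), and two passes suffice because every value has at most two base-n digits.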

8.2. (a.) Counting Sort
(b.) Counting Sort and Quicksort both can be used. (Think about the implementation using quicksort.)

(c.) Insertion Sort

(d.)

(e.) To sort the numbers in place we can use a counting-sort-style method in which the first k+1 indices of the array itself hold the count of each value in the input array.
Algorithm: (i.) Traverse the input array from left to right.
(ii.) For each index i, add (k+1) to A[A[i] % (k+1)]; taking A[i] % (k+1) recovers the original value at i even after that slot has been incremented.
(iii.) Now traverse A from index 0 to k and print the value i, A[i]/(k+1) (integer division) times.

Thus we have the sorted sequence as output.
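A runnable Python sketch of steps (i)-(iii) above (it assumes n >= k+1 so that the first k+1 slots exist; the counts are encoded in place and the sorted sequence is read off at the end):

```python
def counting_sort_encoded(a, k):
    """Sort values in 0..k stored in a[0..n-1], n >= k+1.
    Slot v gains (k+1) per occurrence of v, while the original value
    stays recoverable as a[v] % (k+1)."""
    n = len(a)
    assert n >= k + 1, "the first k+1 slots must exist to hold the counts"
    for i in range(n):
        v = a[i] % (k + 1)        # recovers a[i]'s original value
        a[v] += k + 1             # record one occurrence of v
    result = []
    for v in range(k + 1):
        result.extend([v] * (a[v] // (k + 1)))  # occurrences of v
    return result
```

The final read-off plays the role of the "print" in step (iii).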

Wednesday, August 3, 2011

Chapter 7: Quicksort

7.1.2. The value of q returned will be r.
We can modify PARTITION by keeping a variable count which stores the number of elements equal to the pivot; if count = r-p+1 (all elements are equal) we return ⌊(p+r)/2⌋ instead.

7.1.4. We can simply change the comparison in PARTITION: instead of A[j] <= x we compare A[j] >= x. This way all the elements which are greater than x are put on the left side of x, so quicksort produces nonincreasing order.
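In Python, with the flipped comparison (the names partition_desc and quicksort_desc are mine, not CLRS's):

```python
def partition_desc(A, p, r):
    """Lomuto partition with the comparison flipped (A[j] >= x), so
    elements >= the pivot end up on its left."""
    x = A[r]
    i = p - 1
    for j in range(p, r):
        if A[j] >= x:
            i += 1
            A[i], A[j] = A[j], A[i]
    A[i + 1], A[r] = A[r], A[i + 1]
    return i + 1

def quicksort_desc(A, p, r):
    """Quicksort built on the flipped partition: nonincreasing order."""
    if p < r:
        q = partition_desc(A, p, r)
        quicksort_desc(A, p, q - 1)
        quicksort_desc(A, q + 1, r)
```

Nothing else about quicksort changes; only the partition's comparison direction matters.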

7.2.2. When all the elements have the same value the running time will be O(n^2) if we use the partition function given in the book.

7.2.3. When the array elements are sorted in decreasing order, every call to PARTITION produces a maximally unbalanced split (one side is empty), which is the worst case for quicksort. Thus the running time is Θ(n^2).

7.3.2. In the worst case we make Θ(n) calls to RANDOM, which follows from the worst-case arguments about PARTITION in the previous question.
For the best case we might guess that the number of calls to RANDOM is O(log n), but this is wrong: in the best case, too, Θ(n) calls are made.
Proof: The best case occurs when the partitioning produces two subproblems each of size at most n/2 (one has ⌊n/2⌋ elements and the other ⌈n/2⌉ − 1). The recurrence for the number of calls to RANDOMIZED-PARTITION is then
T(n) ≤ 2T(n/2) + Θ(1)
By case 1 of the master theorem, T(n) has the solution
T(n) = Θ(n) in the best case.
So, Θ(n) calls are made to RANDOM in both the best and worst cases.

7.4.5. The value of k should be chosen so that insertion sort on an array of size at most k runs faster than quicksort on it; in practice k is determined empirically for the machine at hand.


Tuesday, August 2, 2011

Chapter 6: Heapsort

6.1.1 The maximum number of elements in a heap of height h is 2^(h+1) - 1.
The minimum number of elements in a heap of height h is 2^h.

6.1.2 This is not a complete proof, but a heap forms a nearly complete binary tree, and from 6.1.1 we have:
2^h <= n <= 2^(h+1) - 1
=> h <= log(n) < h + 1, i.e. h = ⌊log(n)⌋

6.1.3 There is a simple argument behind this proof. Since it is a max-heap, every child's value is at most its parent's value. Now, if we suppose that the largest value is a child of some node, then there is a node whose value is smaller than its child's (as no node has a value greater than the largest value). This violates the max-heap property. So the largest value must lie at the root itself.

6.1.4 In a max-heap the smallest element must lie at a leaf (assuming the elements are distinct).

6.1.5 Yes, an array which is sorted in increasing order is a min-heap.

6.1.6 No.

6.1.7 will be posted later.

6.2.1 practice yourself.

6.2.2 Pseudo Code:
Min-Heapify(i, A[])
{
    l = (i << 1) + 1 ;
    r = (i << 1) + 2 ;
    smallest = i ;
    if(l < heap-size(A) && A[l] < A[smallest])
        smallest = l ;
    if(r < heap-size(A) && A[r] < A[smallest])
        smallest = r ;
    if(smallest != i)
    {
        swap(A[i], A[smallest]) ;
        Min-Heapify(smallest, A) ;
    }
}
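The pseudocode above translates directly into runnable Python (0-based indices; heap_size is passed explicitly in place of heap-size(A)):

```python
def min_heapify(A, i, heap_size):
    """Push A[i] down until the subtree rooted at i is a min-heap,
    assuming both child subtrees already are."""
    l = 2 * i + 1
    r = 2 * i + 2
    smallest = i
    if l < heap_size and A[l] < A[smallest]:
        smallest = l
    if r < heap_size and A[r] < A[smallest]:
        smallest = r
    if smallest != i:
        A[i], A[smallest] = A[smallest], A[i]
        min_heapify(A, smallest, heap_size)
```

Each call descends one level, so the running time is O(h) for a node of height h.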

6.2.3 Nothing happens: A[i] is already at least as large as its children, so MAX-HEAPIFY returns without changing the array.

6.2.4 Nothing is done, as these indices correspond to leaves of the max-heap.

6.2.5 Pseudo Code:
Iterative-Max-Heapify(i, A[])
{
    while(1)
    {
        l = (i << 1) + 1 ;
        r = (i << 1) + 2 ;
        largest = i ;
        if(l < heap-size(A) && A[l] > A[largest])
            largest = l ;
        if(r < heap-size(A) && A[r] > A[largest])
            largest = r ;
        if(largest != i)
        {
            swap(A[i], A[largest]) ;
            i = largest ;
        }
        else
            break ;
    }
}

6.2.6. MAX-HEAPIFY walks down a single path from the root towards a leaf, and such a path has length at most log(n), so the worst-case running time is Θ(log(n)).

6.3.2. The loop of BUILD-MAX-HEAP must run downward from ⌊n/2⌋ to 1 so that a node's children are already roots of max-heaps when MAX-HEAPIFY is called on it. If the loop instead ran upward, it would fail on, e.g., A = {4,3,2,5,6,7,8,9} (check out why).

6.4.3. The running time of heapsort is independent of the order of the elements in the input array, so in both cases the running time is O(n log(n)).

6.5.3. Do it yourself.

6.5.4. Since the heap data structure is represented by an array and deletions are implemented by reducing the size of the array, there may be undefined values in the array past the end of the heap. Therefore it is essential that MAX-HEAP-INSERT sets the key of the inserted node to -infinity, so that HEAP-INCREASE-KEY does not fail.

6.5.6. FIFO order can be maintained in a priority queue by attaching a counter to each element at insertion time, incremented with every insertion. The counter serves as the priority: the element with the minimum counter value (the oldest) has the highest priority.

A STACK can be implemented using similar logic, except that the element with the largest counter value (the newest) is given the highest priority.
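A sketch of both ideas in Python, using the standard library's heapq as the min-priority queue (class and method names are mine):

```python
import heapq

class FIFOQueue:
    """FIFO from a min-priority queue: a global insertion counter is the
    key, so the smallest counter (oldest element) has highest priority."""
    def __init__(self):
        self._heap = []
        self._counter = 0
    def enqueue(self, x):
        heapq.heappush(self._heap, (self._counter, x))
        self._counter += 1
    def dequeue(self):
        return heapq.heappop(self._heap)[1]

class Stack:
    """LIFO: negate the counter so the newest element wins in a min-heap."""
    def __init__(self):
        self._heap = []
        self._counter = 0
    def push(self, x):
        heapq.heappush(self._heap, (-self._counter, x))
        self._counter += 1
    def pop(self):
        return heapq.heappop(self._heap)[1]
```

Every queue/stack operation costs one heap operation, i.e. O(log n).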

6.5.7. The solution is very similar to HEAP-EXTRACT-MAX: swap A[i] with A[heap_size-1] and decrement heap_size. If the element moved into position i is smaller than its children, call MAX-HEAPIFY with A[] and i as parameters; if instead it is larger than its parent, float it up as in HEAP-INCREASE-KEY.

6.5.8. Maintain a min-heap initially containing the first element of each of the k sorted lists, tagged with the list it came from. Whenever an element is extracted, check which list it belongs to and insert the next element of that list (if any) into the heap. At all times the heap contains at most k elements, so every operation takes O(log(k)) time, and about 2*n operations (n insertions and n extractions) are performed. So the complexity of the algorithm is O(n log(k)).
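The k-way merge just described, sketched in Python with heapq (merge_k_sorted is an illustrative name):

```python
import heapq

def merge_k_sorted(lists):
    """Merge k sorted lists in O(n log k): the heap holds at most one
    (value, list index, element index) entry per list."""
    heap = [(lst[0], li, 0) for li, lst in enumerate(lists) if lst]
    heapq.heapify(heap)
    out = []
    while heap:
        val, li, ei = heapq.heappop(heap)
        out.append(val)
        if ei + 1 < len(lists[li]):          # refill from the same list
            heapq.heappush(heap, (lists[li][ei + 1], li, ei + 1))
    return out
```

The list index in each tuple both breaks ties and tells us which list to refill from.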

6.1. (a.) No, they can form different max-heaps.
Example: A = {1,2,3}.

(b.) Not a full proof, but just an argument: each insertion may have to traverse a path of height up to log(n), so in the worst case BUILD-MAX-HEAP' takes O(n log(n)) time (and the bound is tight).

6.2. (a.) A d-ary heap can be represented in a 1-dimensional array as follows (0-based indexing). The root is kept in A[0], its d children are kept in order in A[1] through A[d], their children are kept in order in A[d+1] through A[d^2 + d], and so on. The following two procedures map a node with index i to its parent and to its j-th child (for 0 <= j <= d-1), respectively.

D-ARY-PARENT(i)
return (i - 1)/d ;    // integer division

D-ARY-CHILD(i, j)
return d*i + j + 1 ;  // where j belongs to {0, ..., d-1}
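The two index maps as runnable Python, with a round-trip check that child and parent are inverses:

```python
def d_ary_parent(i, d):
    """0-based parent index of node i in a d-ary heap."""
    return (i - 1) // d

def d_ary_child(i, j, d):
    """0-based index of the j-th child (0 <= j <= d-1) of node i."""
    return d * i + j + 1
```

For any node i and child slot j, d_ary_parent(d_ary_child(i, j, d), d) returns i again, which is a quick sanity check of the formulas.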

(b.) The height is Θ(log n / log d), i.e. Θ(log_d n).

(c.) The procedure HEAP-EXTRACT-MAX given in the text for binary heaps works fine for d-ary heaps too. The change needed to support d-ary heaps is in MAX-HEAPIFY, which must compare the argument node to all d children instead of just 2 children. The running time of HEAP-EXTRACT-MAX is still the running time of MAX-HEAPIFY, but that now takes worst-case time proportional to the product of the height of the heap and the number of children examined at each node (at most d), namely O(d log_d n) = O(d lg n / lg d).

(d.) The procedure MAX-HEAP-INSERT given in the text for binary heaps works fine for d-ary heaps too. The worst-case running time is still O(h), where h is the height of the heap. (Since only parent pointers are followed, the number of children a node has is irrelevant.) For a d-ary heap, this is O(log_d n) = O(lg n / lg d).

(e.) D-ARY-HEAP-INCREASE-KEY(A, i, k)
    A[i] ← max(A[i], k)
    while i > 0 and A[D-ARY-PARENT(i)] < A[i]
        do exchange A[i] ↔ A[D-ARY-PARENT(i)]
           i ← D-ARY-PARENT(i)

6.3. (c.) This Extract-Min procedure for a Young tableau is very similar to Extract-Min in a min-heap. In a Young tableau the minimum element lies at index (0,0) (just as the minimum of a min-heap lies at index 0). So for the extract-min procedure, replace A[0][0] with the last element present in the matrix (setting that cell to infinity) and then push the new A[0][0] down, always exchanging it with the smaller of its right and below neighbors; this is the analogue of MIN-HEAPIFY (not MAX-HEAPIFY, since the structure is min-ordered).
Pseudo Code:
Min-Heapify-Young(A[][], i, j)
{
    (sx, sy) = (i, j) ;
    if(j+1 < n && A[i][j+1] < A[sx][sy])
        (sx, sy) = (i, j+1) ;
    if(i+1 < m && A[i+1][j] < A[sx][sy])
        (sx, sy) = (i+1, j) ;
    if((sx, sy) != (i, j))
    {
        swap(A[i][j], A[sx][sy]) ;
        Min-Heapify-Young(A, sx, sy) ;
    }
}
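A runnable Python sketch of extract-min on a full m x n Young tableau. This is a slightly simpler variant that replaces the minimum with infinity instead of swapping in the last element; the infinity cells then migrate toward the bottom-right on their own, so repeated extractions work without tracking the last occupied cell (function names are mine):

```python
INF = float('inf')

def youngify_down(Y, i, j):
    """Push the element at (i, j) down/right by swapping with the
    smaller of its right and below neighbors (min-heapify analogue)."""
    m, n = len(Y), len(Y[0])
    si, sj = i, j
    if j + 1 < n and Y[i][j + 1] < Y[si][sj]:
        si, sj = i, j + 1
    if i + 1 < m and Y[i + 1][j] < Y[si][sj]:
        si, sj = i + 1, j
    if (si, sj) != (i, j):
        Y[i][j], Y[si][sj] = Y[si][sj], Y[i][j]
        youngify_down(Y, si, sj)

def extract_min_young(Y):
    """Remove and return the minimum, which sits at (0, 0)."""
    mn = Y[0][0]
    Y[0][0] = INF          # the hole sinks to the bottom-right
    youngify_down(Y, 0, 0)
    return mn
```

Each extraction follows one monotone path through the tableau, so it costs O(m + n).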

(d.) To insert a key, set A[m-1][n-1] (an empty slot, holding infinity) to the new key, then bubble it up-left, exchanging it with the larger of its up and left neighbors; this is the analogue of the decrease-key procedure for a min-heap.
PSEUDO CODE:
YOUNGIFY(Y, i, j)
{
    largestx = i ;
    largesty = j ;
    if(i-1 >= 0 && Y[i-1][j] > Y[i][j])
    {
        largestx = i-1 ;
        largesty = j ;
    }
    if(j-1 >= 0 && Y[i][j-1] > Y[largestx][largesty])
    {
        largestx = i ;
        largesty = j-1 ;
    }
    if(largestx != i || largesty != j)
    {
        exchange Y[i][j] , Y[largestx][largesty] ;
        YOUNGIFY(Y, largestx, largesty) ;
    }
}
(e.) To sort n^2 numbers, first insert them one by one into an n x n Young tableau; each insertion costs O(n + n) = O(n), so building the tableau takes O(n^2 * n) = O(n^3).
Then repeatedly apply the Extract-Min procedure to read the elements off in sorted order, which again costs O(n^3) in total. So the total cost of sorting n^2 elements is O(n^3).

(f.) Start from the top-right index (0, m-1); at each step the comparison eliminates a whole row or a whole column, so the search takes O(n + m) time. (Here the value searched for is called key, since n already denotes the number of rows.)
PSEUDO CODE:
find(i, j, key)
{
    if(j < 0 || i == n)
        return "false" ;
    if(T[i][j] > key)
        return find(i, j-1, key) ;
    else if(T[i][j] < key)
        return find(i+1, j, key) ;
    else
        return "true" ;
}
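The same search written iteratively in Python (find_young is an illustrative name; the tableau's dimensions are taken from the matrix itself):

```python
def find_young(T, key):
    """Search an m x n Young tableau from the top-right corner:
    step left when the entry is too big, down when too small.
    Runs in O(m + n) since each step discards a row or a column."""
    m, n = len(T), len(T[0])
    i, j = 0, n - 1
    while i < m and j >= 0:
        if T[i][j] > key:
            j -= 1
        elif T[i][j] < key:
            i += 1
        else:
            return True
    return False
```

The iterative form avoids the recursion depth of the pseudocode while following exactly the same path.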

(g.) One more question: given n distinct elements, how many Young tableaux can you make from them?
Ans: