
Sunday, February 16, 2014

Chapter 12: Binary Search Trees

12.1.1. Try it yourself.

12.1.2. The BST property and the min-heap property differ in the relationship between a node's key and its children's keys. They also differ in how the nodes are organized: a min-heap (or, for that matter, a max-heap; we're dealing with binary heaps only) is always filled on all levels except possibly the last, so it can be viewed as a complete binary tree, whereas a BST may be an arbitrary (incomplete) binary tree.
            BST: for every node x with left child 'l' and right child 'r', key[l] <= key[x] <= key[r] (more generally, every key in x's left subtree is <= key[x] and every key in its right subtree is >= key[x]).
            Min-heap: for every node x with left child 'l' and right child 'r', key[x] <= key[l] and key[x] <= key[r].
            We can certainly use a min-heap to print the keys in sorted order by repeatedly extracting the minimum, but that takes O(nlog(n)) time, not O(n). Heapsort is built on exactly this idea (CLRS presents it with a max-heap).
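
           Just to make the O(nlog(n)) claim concrete, here is a small C++ sketch (my own illustration, using std::priority_queue as the min-heap): every extraction costs O(log n), so printing all n keys costs O(nlog(n)) in total.

           #include <cstdio>
           #include <queue>
           #include <vector>
           using namespace std;

           int main() {
               // a min-heap: each pop returns the smallest remaining key in O(log n)
               priority_queue<int, vector<int>, greater<int>> minHeap;
               int keys[] = {5, 1, 9, 3, 7};
               for (int k : keys) minHeap.push(k);
               while (!minHeap.empty()) {
                   printf("%d ", minHeap.top());
                   minHeap.pop();
               }
               printf("\n");                    // prints: 1 3 5 7 9
           }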

12.1.3. I'm just writing a pseudo-code:
      1. Create an empty stack.
      2. Initialize current node as root.
      3. Push the current node to stack and set current = current->left until current is NULL.
      4. If current is NULL and stack is not empty then
               a. Pop the top item from stack.
               b. Print the popped item, set current = current->right
               c. Go to step 3.
       5. If current is NULL and stack is empty then we are done.
       
      There is also another way of doing in-order walk without a stack. I'll try to post this as well.
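
      Here is a C++ version of the stack-based walk above (my own sketch; the node fields key, left and right are assumptions):

           #include <cstdio>
           #include <stack>

           struct Node { int key; Node *left, *right; };

           void InorderWalk(Node* root) {
               std::stack<Node*> s;
               Node* current = root;
               while (current != nullptr || !s.empty()) {
                   while (current != nullptr) {        // step 3: keep going left
                       s.push(current);
                       current = current->left;
                   }
                   current = s.top(); s.pop();         // step 4a: pop the top item
                   printf("%d ", current->key);        // step 4b: print it
                   current = current->right;           // then move into the right subtree
               }
           }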

12.1.4. PRE-ORDER-WALK:
            
           Pre-Order-Walk(x)
                  if(x != NULL)
                        then
                              print key[x]
                              Pre-Order-Walk(x->left)
                              Pre-Order-Walk(x->right)

           POST-ORDER-WALK:
            
           Post-Order-Walk(x)
                  if(x != NULL)
                        then
                              Post-Order-Walk(x->left)
                              Post-Order-Walk(x->right)
                              print key[x]

12.1.5. This is just a plain argument. The exercise asks us to show that building a BST from an arbitrary list of n elements takes Ω(n lg n) time in the worst case in the comparison model. Notice that once a BST is built, an in-order walk outputs its keys in sorted order in O(n) time; so if we could always build a BST in o(n lg n) comparisons, we could sort in o(n lg n) comparisons, contradicting the Ω(n lg n) lower bound for comparison sorting.
            For the matching upper bound, first sort the n elements ( O(nlogn) worst case with, say, merge sort ). From a sorted array we can construct a BST in O(n) time by always choosing the middle element as the root, dividing the array into two halves, and recursively making the middle of the left half the left subtree and the middle of the right half the right subtree (see the sketch below). Things would be a little more complicated if the sorted elements were stored in a linked list instead of an array.
             Thus the overall complexity is O(nlogn) + O(n), which is O(nlogn), since the sorting term dominates.
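
                 A small C++ sketch of the sorted-array-to-BST construction (my own illustration; the field names are assumptions):

                 struct Node { int key; Node *left, *right; };

                 // builds a height-balanced BST from the sorted array a[lo..hi-1] in O(n):
                 // the middle element becomes the root and the two halves become the subtrees
                 Node* BuildBalanced(const int a[], int lo, int hi) {
                     if (lo >= hi) return nullptr;
                     int mid = lo + (hi - lo) / 2;
                     Node* root = new Node{a[mid], nullptr, nullptr};
                     root->left  = BuildBalanced(a, lo, mid);
                     root->right = BuildBalanced(a, mid + 1, hi);
                     return root;
                 }

                 Each element is visited exactly once, hence the O(n) bound.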
                 
12.2.1. Try it yourself.

12.2.2. 
           Tree-Minimum(x)
                 if (x->left)
                     return Tree-Minimum(x->left);
                 return x;

           Tree-Maximum(x)
                 if (x->right)
                     return Tree-Maximum(x->right);
                 return x;

12.2.3. 
           Tree-Predecessor(x)
                if (x->left)
                    return Tree-Maximum(x->left);
                y = x->parent;
                while (y != NULL && x == y->left)
                {
                     x = y;
                     y = y->parent;
                }
                return y;
12.2.4. 
                                    5
                                      \
                                       8
                                      / \
                                     7   9

                 Search for 9 in this tree: the search path is 5, 8, 9. Then A = {7}, B = {5, 8, 9} and C = {}. Since 7 is in A and 5 is in B but 7 > 5, this breaks the professor's claim.

12.2.5. Argument for the successor:
           Suppose 'x' is the node and its successor is 's'. Then key[s] must be the smallest key that is larger than key[x]. Since node 'x' has two children, the successor of 'x' is the node with the smallest key in x's right subtree. The node with the smallest key in any subtree has no left child, as you can verify by looking at the Tree-Minimum function; hence the successor has no left child. A symmetric argument shows that the predecessor has no right child.

12.2.6. 

12.2.7. The in-order tree walk effectively does the same thing: it exhausts the left subtree to find the minimum, and it discovers successors either by taking the minimum of a right subtree or by ascending to the first ancestor whose left subtree contains the current node, courtesy of the recursion stack.

The procedures are equivalent and both run in Theta(n). [Copied]
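
Here is a C++ sketch of that walk done explicitly with TREE-MINIMUM and TREE-SUCCESSOR (my own illustration; it assumes nodes carry a parent pointer as in the book):

#include <cstdio>

struct Node { int key; Node *left, *right, *parent; };

Node* TreeMinimum(Node* x) {
    while (x->left != nullptr) x = x->left;
    return x;
}

Node* TreeSuccessor(Node* x) {
    if (x->right != nullptr) return TreeMinimum(x->right);
    Node* y = x->parent;
    while (y != nullptr && x == y->right) { x = y; y = y->parent; }
    return y;
}

// prints the keys in sorted order: TREE-MINIMUM once, then TREE-SUCCESSOR n-1 times.
// Every edge of the tree is traversed at most twice, so the whole walk is Theta(n).
void InorderBySuccessor(Node* root) {
    if (root == nullptr) return;
    for (Node* x = TreeMinimum(root); x != nullptr; x = TreeSuccessor(x))
        printf("%d ", x->key);
}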

12.2.8. 

12.2.9. The first case is when x is the left child of y. Since x is a leaf and the keys are distinct, it implies:
            
            1. key[y] > key[x], and every ancestor of y whose key exceeds key[x] also exceeds key[y]
            2. key[y] < the key of every node in y's right subtree
           
            From 1 and 2: key[y] is the smallest key in T larger than key[x], i.e., y is x's successor.

            A similar (symmetric) argument can be given for the second case, where x is the right child of y and key[y] is the largest key in T smaller than key[x].

12.3.1. 
            public void insert(int value)
            {
                root = insert(value, root);
            }

            private BSTnode insert(int value, BSTnode node)
            {
                if (node == null)
                    return new BSTnode(value);            // empty position found: the new node goes here
                else if (value < node.value)
                    node.left = insert(value, node.left);
                else
                    node.right = insert(value, node.right);
                return node;
            }


12.3.2. When we search for a key we also examine the node that contains it, whereas when the key was inserted we stopped at the NIL pointer where that node was about to be placed. So searching for the key examines exactly one more node than its insertion did.

12.3.3. The in-order tree walk is O(n), so the complexity of sorting by this method is governed by the insertions.

           Worst case: Θ(n^2), for example when we insert the keys in already sorted order, so the tree degenerates into a chain.

          Best case: Θ(n log(n)), when the insertions happen to produce a balanced tree.

12.3.4. If node 'z' has two children, then when 'z' is deleted we copy the satellite data of its successor 'y' into z's node and splice 'y' out by linking y's right child into y's place. If this happens, any other data structure that holds a pointer to 'y' will now point to a node whose satellite data has completely changed.

          1st solution: update the other data structure so that it points to the node that now contains y's satellite data.

         2nd solution: instead of copying data, move the node 'y' itself into z's position after deleting z: make right(y) the child of parent(y), then connect 'y' to z's children and parent.

12.3.5. Example: Figure 12.4 from the CLRS book itself.
      Deletion: first delete node '10', then delete node '12'. Performing these two deletions in the reverse order gives a different tree, so deletion is not commutative.

12.3.6. 
    
12.2.             Radix-Sort(Node n, String s)              // pre-order walk of the radix tree
                     {
                           if(n->light_shaded)              // a shaded node means a stored string ends here
                                 print s;
                           if(n->left)
                                 Radix-Sort(n->left, s + "0");    // a left edge appends the bit 0
                           if(n->right)
                                 Radix-Sort(n->right, s + "1");   // a right edge appends the bit 1
                     }
           
          

Sunday, November 11, 2012

Chapter 11: Hash Tables


11.1.1. To find the maximum element in the table you have to scan every slot once, updating your current maximum as you go. The worst-case complexity is therefore O(m), where m is the size of the direct-address table.


11.1.2. Initialize a bit vector of length |U| to all 0s. When storing key k, set the k-th bit; when deleting key k, unset the k-th bit. Each of these operations takes O(1) time.
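
A tiny C++ sketch of the bit-vector dictionary (my own illustration):

#include <cstddef>
#include <vector>

// direct addressing on a bit vector of length |U|; all three operations are O(1)
struct BitDictionary {
    std::vector<bool> bits;                       // bits[k] == true iff key k is present
    explicit BitDictionary(std::size_t universe) : bits(universe, false) {}
    void insert(std::size_t k)       { bits[k] = true;  }
    void remove(std::size_t k)       { bits[k] = false; }
    bool search(std::size_t k) const { return bits[k];  }
};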

11.1.3. Each slot of the table should hold a pointer to the head of a doubly linked list containing the elements (and their satellite data) with that key. INSERT adds at the head of the list in O(1), SEARCH simply returns the head of the list in O(1), and DELETE of a given element is O(1) because the list is doubly linked.



11.1.4. Let us denote the huge array by T. We will also keep a structure S, implemented as an array of entries:
                        struct node{
                                    int key;
                                    int value;
                        };
                        struct node S[MAX];    /* MAX = the dictionary's maximum size */

The number of valid entries in S equals the number of keys actually stored in T, so S is allocated at the dictionary's maximum size but only a prefix of it (up to top[S]) is in use at any time. We could also use a linked list so that we need not allocate the space initially.
The idea of this scheme is that the entries of T and S validate each other. If key k is actually stored in T, then T[k] contains the index, say j, of a valid entry in S, and S[j] contains the value k. So we have the following validation cycle:

S[T[k]].key = k, and T[S[j].key] = j

So the stack S is not really used as a stack for searching, since we are able to access any of its valid entries, not only the top element; in effect it is a plain array with a top pointer.
Implementing the following operations:

            SEARCH: Just check if for a given input key, the validating cycle is satisfied, if yes return the value.

            INSERT: Increment top[S] to point to the next index. Now make T[k] = top[S] and S[top[S]].key = new_key and S[top[S]].value = new_value_for_the_key.

            DELETE: To delete object x with key k, assuming that this object is in the dictionary, we need to break the validating cycle. The trick is to also ensure that we don't leave a hole in the stack; we solve this by moving the top entry of the stack into the position that we are vacating and then fixing up that entry's validating cycle. The sketch below shows one way to implement this.
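
Here is a hedged C++ sketch of the whole scheme (the names and details are my own; it assumes integer keys in 0..|U|-1):

#include <vector>

struct Entry { int key; int value; };

struct HugeArrayDict {
    std::vector<int>   T;    // the huge array: T[k] is an index into S, meaningful only if the cycle validates
    std::vector<Entry> S;    // the "stack": S[0..top-1] are the live entries
    int top = 0;

    HugeArrayDict(int universeSize, int maxSize) : T(universeSize), S(maxSize) {}

    bool contains(int k) const {               // SEARCH: check the validating cycle
        int j = T[k];
        return j >= 0 && j < top && S[j].key == k;
    }
    void insert(int k, int v) {                // INSERT: push a new entry and point T[k] at it
        if (contains(k)) { S[T[k]].value = v; return; }
        S[top] = Entry{k, v};
        T[k] = top++;
    }
    void erase(int k) {                        // DELETE: move the top entry into the hole ...
        if (!contains(k)) return;
        int j = T[k];
        S[j] = S[top - 1];
        T[S[j].key] = j;                       // ... and repair that entry's validating cycle
        top--;
    }
};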

11.2.1.

11.2.3. Only insertion is affected by this modification: inserting into a sorted chain takes time proportional to the chain's length, i.e. expected Θ(1 + n/m) instead of O(1). Searches and deletions keep the same asymptotic running times.

11.2.4. Initialize all the values to a singly linked free list (flag set to false) with a head and tail pointer. On insert, use the memory pointed to by the head pointer and set the flag to true for the new element and increment the head pointer by one. On delete, set the flag to false and insert the newly freed memory at the tail of the linked list.

11.2.5. This is an application of the pigeonhole principle.

11.3.1. While searching for a key k, we first check whether the stored hash value matches h(k); only if it does do we compare the actual keys.

11.3.2. One easy approach would be adding up the values of the characters and taking the sum modulo 128. A better way is to treat the characters as the coefficients of a polynomial and evaluate it at some radix x.
Example: S[0]*x^(n-1) + S[1]*x^(n-2) + .... + S[n-1], reducing modulo 128 after every step so that only a constant amount of storage is ever needed; for better results substitute x = 33 or 37 or 39. Evaluating the polynomial this way is the famous Horner's method.
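
A small C++ sketch of the Horner-style hash (my own illustration; x = 33 and m = 128 are just the example constants from above):

#include <string>

// evaluates S[0]*x^(n-1) + S[1]*x^(n-2) + ... + S[n-1] modulo m,
// reducing after every step so only a constant amount of storage is used
unsigned HornerHash(const std::string& s, unsigned x = 33, unsigned m = 128) {
    unsigned h = 0;
    for (char c : s)
        h = (h * x + static_cast<unsigned char>(c)) % m;
    return h;
}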

11.4.2.
HASH-DELETE(T, k)
    i ← 0
    repeat j ← h(k, i)
           if T[j] = k
              then T[j] ← DELETED          // a special value (e.g. -1) distinct from NIL
                   return j
              else i ← i + 1
    until i = m or T[j] = NIL
    error "element doesn't exist"
           
HASH-INSERT(T, k)
    i ← 0
    repeat j ← h(k, i)
           if T[j] = NIL or T[j] = DELETED    // slots marked DELETED can be reused
              then T[j] ← k
                   return j
              else i ← i + 1
    until i = m
    error "hash table overflow"

Tuesday, September 18, 2012

Chapter 10: Elementary Data Structure


I am not posting solutions to all the questions, but if you have any doubt kindly post it as
a comment and I'll post the solution as soon as possible.

10.2.4. Let's first define our node structure.
            struct node{
              int value;
              struct node* next;
            };
         When we have a sentinel node we make the last node in the linked list point to the sentinel node. Whenever we need to search for a value we first store that value in the sentinel node. This way we are guaranteed to find the value somewhere in the list, so the search loop no longer needs to test for the end of the list. The code goes here:
         
         bool ListSearch(node* head, int value, node* sentinel_node){
            if(!head){
                return false;
            }
            sentinel_node->value = value;        // the search is now guaranteed to stop
            while(head->value != value){         // no end-of-list check needed, thanks to the sentinel
                head = head->next;
            }
            if(head == sentinel_node){           // we only "found" the copy stored in the sentinel
                return false;
            }
            return true;
       }
       
10.2.5. Implementing a dictionary using a linked list is very similar to storing a list of numbers
        in a linked list. Here, we just need to replace the integer values with string literals.
        Let's first define our node structure.
            struct node{
                      char* word;
                      struct node* next;
            };
        Here are the costs of some operations:
        INSERT: O(1), simply insert at the head of the list.
        DELETE: O(n), you first have to search for the element in the list, and in the worst case
       that means comparing with every node in the linked list.
        SEARCH: O(n), for the same reason as above.
         
10.2.6. Let's first define our node structure.
            struct node{
                      char* word;
                      struct node* next;
            };
            struct Set{
                    struct node* head;
                    struct node* tail;
            };
        For every list, besides the head pointer we'll also keep a tail pointer.
        Supporting the union operation in O(1) time: suppose we have two sets S1 and S2.
        
       PSEUDO-CODE:
                Set Union(Set S1, Set S2){
                      S1.tail->next = S2.head;     // splice S2 onto the end of S1
                      S1.tail = S2.tail;
                      Remove S2 from the list of sets;
                      return S1;
                }


10.4.4. void PrintNodes(node* root){
             if(!root){
                 return;
             }
             PrintNodes(root->left_child);        // recurse into the node's children
             printf("%d ", root->value);
             PrintNodes(root->right_sibling);     // then into its remaining siblings
         }


10.4.5. One very basic approach is a level-order traversal, which can be implemented using a queue.
PSEUDO-CODE:

void DoLevelTraversal(node* root){
    if(!root) return;
    Queue<node*> Q;
    Q.push(root);
    while(!Q.empty()){
        node* top = Q.front();
        Q.pop();
        printf("%d ", top->value);
        // enqueue every child of 'top': the leftmost child and then its right siblings
        node* child = top->left_child;
        while(child != NULL){
            Q.push(child);
            child = child->right_sibling;
        }
    }
}

But it uses extra space. Will think of some other approach soon.

Tuesday, August 28, 2012

Medians and Order Statistics


9.1.1. First of all, note that finding the maximum element out of 'n' elements takes 'n-1' comparisons. Now suppose the comparisons are done tournament-style, pairing up elements and repeatedly comparing the winners, so that they form a balanced knockout tree.
The second maximum element must have been compared with the maximum somewhere along the maximum's path in this tree, whose height is lg(n) since the tree is balanced. So we can walk down from the root (which holds the maximum element) and look only at the elements that were compared directly with the maximum along that path. As there are at most lg(n) such elements, the largest among them can be found in lg(n)-1 comparisons.
So, in total n + lg(n) - 2 comparisons are needed to find the maximum and the second maximum element. A small sketch of this idea follows.
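
A C++ sketch of the tournament idea (entirely my own illustration): find the maximum with n-1 comparisons while recording, for each sub-tournament winner, the elements that lost to it directly; the second maximum is then the largest of the elements that lost to the overall winner.

#include <cstdio>
#include <vector>
using namespace std;

struct Result { int best; vector<int> losers; };   // losers = elements compared directly against best

Result Tournament(const vector<int>& a, int lo, int hi) {   // works on a[lo, hi)
    if (hi - lo == 1) return Result{a[lo], {}};
    int mid = (lo + hi) / 2;
    Result L = Tournament(a, lo, mid), R = Tournament(a, mid, hi);
    if (L.best >= R.best) { L.losers.push_back(R.best); return L; }
    R.losers.push_back(L.best);
    return R;
}

int main() {
    vector<int> a = {5, 1, 9, 3, 7, 8, 2, 6};
    Result r = Tournament(a, 0, (int)a.size());
    int second = r.losers[0];
    for (int x : r.losers) if (x > second) second = x;    // at most ceil(lg n) - 1 comparisons
    printf("max = %d, second max = %d\n", r.best, second); // prints: max = 9, second max = 8
}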
 
9.1.2. This is also very similar to above problem in which we need to compare elements by making pairs. Try Yourself!!!

9.2.1. A call on a 0-length subarray would require p > r. The left recursive call on A[p..q-1] is empty only when the pivot is the smallest element of the subarray (k = 1), but then the algorithm would need i < 1, which is impossible; similarly, the right call on A[q+1..r] is empty only when the pivot is the largest element (k = r - p + 1), but then it would need i > k, also impossible. So RANDOMIZED-SELECT never makes a recursive call on a 0-length array.

9.2.3.

9.2.4. The worst case occurs when every partition reduces the problem size by only one element; the following sequence of subarrays is one such worst-case run of randomized selection:
                        (i.) 2, 9, 0, 7, 5, 4, 8, 6, 1
                        (ii.) 9, 0, 7, 5, 4, 8, 6, 1
                        (iii.) 0, 7, 5, 4, 8, 6, 1
                        (iv.) 0, 7, 5, 4, 8, 6
                        (v.) 0, 7, 5, 4, 8
                        (vi.) 0, 7, 5, 4
                        (vii.) 0, 7, 5
                        (viii.) 0, 7
                        (ix.) 0
 

9.3.3. To achieve worst-case O(nlogn) complexity for quicksort we can choose the pivot in the traditional partition step using the worst-case linear-time
       SELECT procedure from this chapter: partitioning around the median gives T(n) = 2T(n/2) + Θ(n), which is O(nlogn).

9.3.5. Given the black box, we partition around the median; after the partition we know which side of it the desired order statistic lies on, so we recurse into only one side.
       Thus we need to examine only n/2 elements in the 2nd iteration, n/4 in the 3rd, and so on. The total time is
    c*n + c*n/2 + c*n/4 + c*n/8 + ... <= 2*c*n
       so our selection algorithm has O(n) complexity given the linear-time black box.

9.3.7. First of all, find the median x of the 'n' elements using SELECT, in O(n) time. Then compute the distance |A[j] - x| of every element from the
       median, and use SELECT again to find the k-th smallest of these distances, also in O(n). The 'k' elements whose distance from x is at most that
       k-th smallest distance are exactly the 'k' elements closest to the median, and one more O(n) pass collects them.

9.3.9. In order to find the optimal placement for Professor Olay's pipeline, we need only find the median(s) of the y-coordinates of his oil wells.
       The optimal y-coordinate for Professor Olay’s east-west oil pipeline is as follows:
• If n is even, then on either the oil well whose y-coordinate is the lower median or the one whose y-coordinate is the upper median, or anywhere between them.
• If n is odd, then on the oil well whose y-coordinate is the median.
       The proof is quite simple and can be easily cracked. Try it yourself.

9.1. (a.) The best sorting algorithm has O(n lg n) complexity. After sorting, listing the i largest elements can be done in O(i).
          So, the overall complexity is O(n lg n + i).

     (b.) Building a max-priority queue is O(n). Then extracting the largest 'i' elements costs O(i lg n), since each EXTRACT-MAX is O(lg n).
          So, the overall complexity is O(n + i lg n).
  
     (c.) Use the SELECT algorithm of Section 9.3 to find the i-th largest number in Θ(n) time. Partition around that number in Θ(n) time. Sort the i largest numbers in
 Θ(i lg i) worst-case time (with merge sort or heapsort).
          So, the overall running time is Θ(n + i lg i).

Wednesday, October 12, 2011

Chapter 8:Sorting in Linear Time

8.1.1. The answer is (n-1).

8.2.4. Make an array Count[0...k] which stores the frequency of each element in the range 0 to k. Now make another array Cumulative[0...k+1] which stores the cumulative values of the Count array, with Cumulative[0] = 0.
Then the answer to every query [a, b] is Cumulative[b+1] - Cumulative[a], computed in O(1) after the O(n + k) preprocessing.

We can also save space by building the cumulative values in the Count array itself. A small sketch follows.
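
A small C++ sketch of the preprocessing and an O(1) query (my own illustration):

#include <cstdio>
#include <vector>
using namespace std;

int main() {
    int k = 9;
    vector<int> a = {3, 0, 7, 3, 9, 1, 3};                 // n integers in the range 0..k

    vector<int> Count(k + 1, 0);                           // O(n + k) preprocessing
    for (int x : a) Count[x]++;
    vector<int> Cumulative(k + 2, 0);                      // Cumulative[v] = how many elements are < v
    for (int v = 0; v <= k; ++v) Cumulative[v + 1] = Cumulative[v] + Count[v];

    int qa = 1, qb = 7;                                    // query: how many values fall in [qa, qb]?
    printf("%d\n", Cumulative[qb + 1] - Cumulative[qa]);   // O(1) per query; prints 5
}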

8.3.2. Insertion sort and merge sort are stable sorting algorithms, whereas quicksort's stability depends on the implementation (the usual in-place version is not stable) and heapsort isn't stable.

Maintaining stability: we can make any sort stable by keeping each element's original index along with its value and breaking ties in favour of the element with the lower index.

8.3.4. Since we know the range we can employ Counting Sort algorithm to achieve O(n) time complexity.

8.2. (a.) Counting sort.
(b.) Counting sort and quicksort can both be used. (Think about how to make the quicksort-style partitioning work here.)

(c.) Insertion sort.

(d.)

(e.) For sorting the numbers in place we can use a counting-sort-like trick in which the first k+1 indices of the array itself hold the counts of each value in the input (this assumes the values lie in 0..k and that n > k, so those indices exist).
Algorithm: (i.) Traverse the input array from left to right.
(ii.) For each element add (k+1) to A[A[i]%(k+1)], where i is the loop index; taking a value modulo (k+1) always recovers the original entry stored at that position.
(iii.) Now traverse A from index 0 to k and print each value v exactly A[v]/(k+1) times.

Thus we have the sorted output. A small sketch follows.
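
Here is a hedged C++ sketch of the trick in (e.) (my own illustration; it only works when every key lies in 0..k and n >= k+1, so that the counting indices 0..k exist):

#include <cstdio>

int main() {
    const int k = 3;
    int A[] = {2, 0, 3, 1, 2, 0, 2};                 // keys in 0..k, n = 7 >= k+1
    const int n = sizeof(A) / sizeof(A[0]);

    // A[v] % (k+1) still recovers the original key stored at position v,
    // while A[v] / (k+1) accumulates the count of the value v
    for (int i = 0; i < n; ++i)
        A[A[i] % (k + 1)] += (k + 1);

    for (int v = 0; v <= k; ++v)                     // print each value count-many times
        for (int c = 0; c < A[v] / (k + 1); ++c)
            printf("%d ", v);
    printf("\n");                                    // prints: 0 0 1 2 2 2 3
}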

Wednesday, August 3, 2011

Chapter 7:Quicksort

7.1.2. The value of q returned will be r.
We can modify the PARTITION function by keeping a variable count that stores the number of elements equal to the pivot; if count = r-p+1 (i.e., all elements are equal) we return (p+r)/2 instead, so the split is balanced.

7.1.4. We can simply change the partition function: instead of the comparison A[i] <= x we make the comparison A[i] >= x. This way all the elements that are greater than or equal to x are put on the left side of x, so the array ends up sorted in nonincreasing order.

7.2.2. When all the elements have the same value, the running time is Θ(n^2) if we use the PARTITION function given in the book: every comparison A[j] <= x succeeds, so the pivot always ends up at the far end and each split is maximally unbalanced.

7.2.3. When the array elements are sorted in decreasing order, every call to the partition function produces a maximally unbalanced split (the pivot ends up at one end of the subarray), which is the worst case for quicksort. Thus the running time is Θ(n^2).

7.3.2. In the worst case we have to make Θ(n) calls to RANDOM, which follows from the worst-case arguments about PARTITION given in the previous question.
For the best case we might think that the number of calls made to the RANDOM function is O(log n), but this is wrong: in the best case the number of calls made is also Θ(n).
Proof: The best case occurs when the partitioning always produces two subproblems of size at most n/2 each (one has ⌊n/2⌋ elements and the other has ⌈n/2⌉ − 1). The recurrence for the number of calls to RANDOMIZED-PARTITION is then
T(n) ≤ 2T(n/2) + Θ(1)
By case 1 of the master theorem, T(n) has the solution
T(n) = Θ(n) in the best case.
So, Θ(n) calls are made to RANDOM in the best and worst cases.

7.4.5. The value of 'k' picked will be such that insertion sort on an array of size 'k' performs better than the quicksort.


Tuesday, August 2, 2011

Chapter 6:Heapsort

6.1.1 maximum number of elements in a heap of height 'h' is : 2^(h+1) - 1
minimum number of elements in a heap of height 'h' is : 2^h

6.1.2 I am not giving a concrete proof, but since a heap forms a (nearly) complete binary tree, which has height Θ(lg n) for 'n' nodes, the proof is the same.
Just a rough idea from 6.1.1: 2^h <= n <= 2^(h+1) - 1
=> h <= lg n < h + 1, i.e. h = ⌊lg n⌋

6.1.3 There is a simple logic behind this proof. Since it is a max-heap, for every node the values of the child nodes must be smaller than or equal to the parent node's value. Following any path upward, values never decrease, so the root's value is at least as large as every other value in the heap. Hence the largest value must lie at the root itself.

6.1.4 In a max-heap (with distinct elements) the smallest element must lie at one of the leaves.

6.1.5 Yes, an array which is sorted in increasing order is a min-heap.

6.1.6 No.

6.1.7 will be posted later.

6.2.1 practice yourself.

6.2.2 Pseudo Code:
Min-Heapify(i, A[])
{
    l = (i << 1) + 1 ;                    // left child (0-indexed array)
    r = (i << 1) + 2 ;                    // right child
    smallest = i ;
    if(l < heap-size(A) && A[l] < A[i])
        smallest = l ;
    if(r < heap-size(A) && A[r] < A[smallest])
        smallest = r ;
    if(smallest != i)
    {
        swap(A[i], A[smallest]) ;
        Min-Heapify(smallest, A) ;
    }
}

6.2.3 Nothing is done.

6.2.4 Nothing is done, as these indices correspond to the leaves of the max-heap.

6.2.5 Pseudo Code:
Iterative-Max-Heapify(i, A[])
{
    while(1)
    {
        l = (i << 1) + 1 ;
        r = (i << 1) + 2 ;
        largest = i ;
        if(l < heap-size(A) && A[l] > A[i])
            largest = l ;
        if(r < heap-size(A) && A[r] > A[largest])
            largest = r ;
        if(largest == i)
            break ;                       // heap property restored
        swap(A[i], A[largest]) ;
        i = largest ;
    }
}

6.2.6. MAX-HEAPIFY may be called recursively along an entire path from the root down to a leaf, and such a path can have length lg(n), so the worst-case running time is Ω(lg n).

6.3.2. If the loop index i increased from 1 to ⌊length/2⌋ instead of decreasing, BUILD-MAX-HEAP could fail, because MAX-HEAPIFY(A, i) would be called before the subtrees rooted at i's children are themselves heaps.
      For example, try it on the array {4,3,2,5,6,7,8,9} and check what goes wrong.

6.4.3. The running time of heapsort is asymptotically independent of the order of the elements in the input array, so the running time is O(nlog(n)) in both cases.

6.5.3. Do it yourself.

6.5.4. Since the heap data structure is represented by an array, and deletions are implemented by reducing the size of the heap, positions of the array past the end of the heap may contain undefined values. Therefore it is essential that MAX-HEAP-INSERT first sets the key of the inserted node to -∞, so that HEAP-INCREASE-KEY (which requires the new key to be no smaller than the current one) does not fail.

6.5.6. FIFO order can be maintained in the priority queue using a counter that is stored with each element and incremented on every insertion. The priority of an element is its counter value, and the element with the minimum counter value has the highest priority.

A STACK can be implemented using the same idea, except that the element with the largest counter value is given the highest priority.

6.5.7. The solution is very similar to the EXTRACT-MAX algorithm: swap A[i] with A[heap_size-1], decrement heap_size, and then restore the heap property at position i. If the key that moved into position i is smaller than one of its children, call MAX-HEAPIFY(A, i); if instead it is larger than its parent, bubble it up as in HEAP-INCREASE-KEY.

6.5.8. Maintain a min-heap initially containing the first element of each of the k sorted lists, tagged with the list it belongs to. Whenever an element is extracted, check which list it came from and insert the next element of that list into the heap. At all times the heap contains at most k elements, so every operation takes O(log k) time, and about 2*n heap operations are performed in total. So the complexity of the algorithm is O(nlog(k)).
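
A small C++ sketch of the k-way merge (my own illustration, using std::priority_queue as the min-heap):

#include <cstdio>
#include <queue>
#include <vector>
using namespace std;

int main() {
    vector<vector<int>> lists = {{1, 4, 9}, {2, 3, 8}, {0, 5, 7}};

    // each heap entry is (value, (list index, position within that list)),
    // so the heap never holds more than k elements
    typedef pair<int, pair<int, int>> Item;
    priority_queue<Item, vector<Item>, greater<Item>> heap;
    for (int i = 0; i < (int)lists.size(); ++i)
        if (!lists[i].empty()) heap.push(Item(lists[i][0], make_pair(i, 0)));

    while (!heap.empty()) {
        Item top = heap.top(); heap.pop();            // O(log k) per extraction
        printf("%d ", top.first);
        int li = top.second.first, pos = top.second.second + 1;
        if (pos < (int)lists[li].size())              // push the next element of the same list
            heap.push(Item(lists[li][pos], make_pair(li, pos)));
    }
    printf("\n");                                     // prints: 0 1 2 3 4 5 7 8 9
}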

6.1. (a.) No, they may form different max-heaps.
An example is A = {1,2,3}: BUILD-MAX-HEAP produces {3,2,1}, while inserting the elements one at a time produces {3,1,2}.

(b.) It's not a full proof but just an argument: every insertion may have to travel up a path of height lg(n), so in the worst case building the heap by repeated insertion takes Θ(nlog(n)) time.

6.2. (a.) A d-ary heap can be represented in a 1-dimensional array as follows. The root is kept in A[1], its d children are kept in order in A[2] through A[d + 1], their children are kept in order in A[d + 2] through A[d^2 + d + 1], and so on. The following two procedures map a node with index i to its parent and to its j-th child (for 0 ≤ j ≤ d-1), respectively.

D-ARY-PARENT(i)
return (i − 1)/d ;

D-ARY-CHILD(i, j)
return d*i + j + 1 ;          where j belongs to {0, ..., d-1}.

This relation is for a 0-index-based array.

(b.) The height will be O(log n / log d), i.e. O(log_d n).

(c.) The procedure HEAP-EXTRACT-MAX given in the text for binary heaps works fine for d-ary heaps too. The change needed to support d-ary heaps is in MAX-HEAPIFY, which must compare the argument node to all d children instead of just 2 children. The running time of HEAP-EXTRACT-MAX is still dominated by MAX-HEAPIFY, but that now takes worst-case time proportional to the product of the height of the heap and the number of children examined at each node (at most d), namely O(d log_d n) = O(d lg n / lg d).

(d.) The procedure MAX-HEAP-INSERT given in the text for binary heaps works fine for d-ary heaps too. The worst-case running time is still O(h), where h is the height of the heap. (Since only parent pointers are followed, the number of children a node has is irrelevant.) For a d-ary heap, this is O(log_d n) = O(lg n / lg d).

(e.) D-ARY-HEAP-INCREASE-KEY(A, i, k)
A[i] ← max(A[i], k)
while i > 1 and A[D-ARY-PARENT(i)] < A[i]
    do exchange A[i] ↔ A[D-ARY-PARENT(i)]
       i ← D-ARY-PARENT(i)

6.3. (c.) This Extract-Min-Young procedure is very similar to Extract-Min in heaps. In the Young tableau we are sure that the minimum element lies at index (0,0) (similar to the min-heap, where the minimum element lies at index 0). So, for the extract-min procedure, save A[0][0], move the last element present in the tableau into A[0][0] (leaving infinity in its old slot), and then run a Heapify-like "sink" procedure that repeatedly swaps the element with the smaller of its right and down neighbours until the tableau order is restored.
Pseudo Code:
Min-Heapify-Young(A[][], i, j)
    (rx, ry) = (i, j+1) ;                    // right neighbour
    (dx, dy) = (i+1, j) ;                    // down neighbour
    (smallx, smally) = (i, j) ;
    if((rx, ry) < (m, n) && A[rx][ry] < A[smallx][smally])
        (smallx, smally) = (rx, ry) ;
    if((dx, dy) < (m, n) && A[dx][dy] < A[smallx][smally])
        (smallx, smally) = (dx, dy) ;
    if((smallx, smally) != (i, j))
        swap(A[smallx][smally], A[i][j]) ;
        Min-Heapify-Young(A, smallx, smally) ;

(d.) The empty slot A[n-1][m-1] holds infinity; place the new key there and then bubble it up towards (0,0), exactly like the decrease-key procedure for a min-heap.
PSEUDO CODE:
YOUNGIFY(Y, i, j)
    largestx = i ;
    largesty = j ;
    if i-1 >= 0 and Y[i][j] < Y[i-1][j]
        then largestx = i-1 ;
             largesty = j ;
    if j-1 >= 0 and Y[largestx][largesty] < Y[i][j-1]
        then largestx = i ;
             largesty = j-1 ;
    if largestx != i or largesty != j
        then exchange Y[i][j] and Y[largestx][largesty]
             YOUNGIFY(Y, largestx, largesty) ;
(e.) To sort n^2 numbers we can first insert them one by one into a Young tableau; each insertion costs O(n + n), so all insertions cost O(n^2 * n) = O(n^3).
Then we call the Extract-Min-Young procedure n^2 times to extract the elements in sorted order, which is again O(n^3). So the total cost of sorting n^2 elements is O(n^3).

(f.) Start from the index (0, m-1), i.e. the top-right corner of the n x m tableau, and search for the key k.
PSEUDO CODE :
find(i, j, k)
    if(j < 0 || i == n)
        return "false" ;
    if(T[i][j] > k)
        return find(i, j-1, k) ;       // column j can be ruled out: everything below is even larger
    else if(T[i][j] < k)
        return find(i+1, j, k) ;       // row i can be ruled out: everything to the left is even smaller
    else
        return "true" ;

(g.) One more question: given n distinct elements, how many Young tableaux can you make from them?
Ans: