Computer Science Questions and Answers

Brooklyn College Graduate Computer Science Comprehensive Exam - Algorithms

Problem 1 - Discrete Math
Spring 2011

Let $n \ge 1$ be an integer, and let $a, b \gt 0$ be positive real numbers. Find a closed form for the following expression:

$$ \sum_{k=1}^n (ak + b)$$

In other words, you are to eliminate the summation and write the expression as a function of $a$, $b$, and $n$. Explain how you computed your answer.

Answer: Remember the formula for summation of a series:

$$ \sum_{k=1}^n k = \frac{n(n+1)}{2}$$

An easy mistake is to try to substitute something into this formula directly, but read the expression carefully: $\sum_{k=1}^n ak = a\sum_{k=1}^n k$, so the first part is $a \times \frac{n(n+1)}{2}$. But what about $b$? It is added once per term, $n$ times in total, contributing $bn$. So the final answer is:

$$ \frac{an(n+1)}{2} + bn $$
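A quick Python sanity check of the closed form (function names are my own):

```python
def summation(a, b, n):
    """Brute-force sum of (a*k + b) for k = 1..n."""
    return sum(a * k + b for k in range(1, n + 1))

def closed_form(a, b, n):
    """The derived closed form: a*n*(n+1)/2 + b*n."""
    return a * n * (n + 1) / 2 + b * n

# The two agree for arbitrary positive a, b and n >= 1.
for a, b, n in [(1, 1, 5), (2.5, 0.5, 10), (3, 7, 100)]:
    assert summation(a, b, n) == closed_form(a, b, n)
```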

Problem 2 - Discrete Math
Fall 2011

Given a string $S$ of length $n$, $S = s_1 \cdots s_n$. A substring of $S$ is a contiguous sequence of characters in $S$.

1. How many substrings of length 3 are there in $S$?
2. Let $m$ be an integer such that $1 \le m \le n$. How many substrings of length $m$ are there in $S$?
3. How many substrings of $S$ are there? (Hint: One way is to sum the number of substrings for all values of $m$.)

Answer:

1. $n-3+1$

2. $n-m+1$

3. $\frac{n(n+1)}{2}$
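These counts are easy to verify in Python by enumerating substring start positions (a sketch; names are my own):

```python
def substrings_of_length(n, m):
    """Number of length-m substrings of a length-n string:
    one per start position 1..n-m+1."""
    return n - m + 1

def total_substrings(n):
    """Total over all lengths m = 1..n; should equal n(n+1)/2."""
    return sum(substrings_of_length(n, m) for m in range(1, n + 1))

assert substrings_of_length(10, 3) == 8
assert total_substrings(10) == 10 * 11 // 2
```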

Problem 3 - Discrete Math
Spring 2012

Solve the following recursive formula. Prove the correctness of your solution.

$$\begin{align} T(1) &= 1 \\ T(n) &= 1 + 2T(n − 1) \end{align}$$

Answer:

First, plug in some values for $n$ and get the results:

  • $n=1:\ T(1) = 1$ (given)

  • $n=2:\ T(2) = 1 + 2T(1) = 1 + 2(1) = 3$

  • $n=3:\ T(3) = 1 + 2T(2) = 1 + 2(3) = 7$

  • $n=4:\ T(4) = 1 + 2T(3) = 1 + 2(7) = 15$

  • Guess and check method. The values grow fast, so think exponential. Compare against the powers of $2$: each value is one less than the corresponding power. Guess: $T(n) = 2^n-1$.

    Proof by induction

    Base case: $2^1-1=1 \quad\checkmark$

    Assume true $T(n) = 2^n-1$

    Prove $T(n+1) = 2^{n+1}-1$: by the recurrence, $T(n+1) = 1 + 2T(n) = 1+2(2^n-1)$

    $=1 + 2^{n+1}-2$

    $=2^{n+1}-1 \quad\checkmark$
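The recurrence and the closed form can also be checked directly in Python:

```python
def T(n):
    """T(1) = 1, T(n) = 1 + 2*T(n-1)."""
    return 1 if n == 1 else 1 + 2 * T(n - 1)

# The closed form 2^n - 1 matches for small n.
for n in range(1, 16):
    assert T(n) == 2**n - 1
```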

Problem 4 - Discrete Math
Fall 2012

Prove the following identity for positive integers $n$ and $k$ where $k \le n$:

$$k{n \choose k} = n{n-1 \choose k-1}$$

Answer:

This can be solved via direct proof, using the formula for combinations (the binomial coefficient):

$$ {n \choose k} = \frac{n!}{k!(n-k)!}$$

Direct proof:

$$ k{n \choose k} = \frac{k \cdot n!}{k!(n-k)!} = \frac{n!}{(k-1)!(n-k)!}$$ $$ n{n-1 \choose k-1} = \frac{n \cdot (n-1)!}{(k-1)!((n-1)-(k-1))!} = \frac{n!}{(k-1)!(n-k)!}$$ Both sides reduce to the same expression, proving the identity.
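The identity can also be confirmed numerically with `math.comb`:

```python
from math import comb

# k * C(n, k) == n * C(n-1, k-1) for all 1 <= k <= n.
for n in range(1, 13):
    for k in range(1, n + 1):
        assert k * comb(n, k) == n * comb(n - 1, k - 1)
```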

Problem 5 - Discrete Math
Spring 2013

Given a string $S$ of length $n$, $S = s_1 \cdots s_n$. A prefix of $S$ is a substring of $S$ beginning at $s_1$.

1. How many distinct prefixes of $S$ are there?
2. Write pseudocode that will print all prefixes of $S$, each on its own line.
3. Count the number of characters that your algorithm prints.
4. Using big-$O$ notation, state the time complexity of your algorithm in terms of n, where the basic operation is printing a single character.

Answer:

1. $n$: one prefix of each length $1$ through $n$ (prefixes of different lengths are distinct strings)

2.
var prefix = empty
for (i = 0; i < n; i++)
    prefix += S[i]
    print prefix + newline

3. Not including newline characters, $\frac{n(n+1)}{2}$

4. $O(n^2)$
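A Python version of the prefix-printing loop (a sketch) that also counts the characters printed:

```python
def print_prefixes(S):
    """Print every non-empty prefix of S on its own line; return the
    number of characters printed, excluding newlines."""
    prefix = ""
    printed = 0
    for ch in S:
        prefix += ch
        print(prefix)
        printed += len(prefix)
    return printed

n = 6
assert print_prefixes("abcdef") == n * (n + 1) // 2  # 1 + 2 + ... + 6 = 21
```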

Problem 6 - Discrete Math
Fall 2013

Assume $n = 3^k$ is a power of 3 for $k \ge 0$. Accurately solve the following recursion.

If you cannot find the exact solution, use the big-O notation.

$$\begin{align} T(1) &= 0 \\ T(n) &= T(n/3) + 2 \end{align}$$

Answer:

First, plug in some values:

$T(1) = 0$ (given)

$T(3) = T(3/3) + 2 = 2$

$T(9) = 4$

$T(27) = 6$

Because the values are growing slowly, we can guess a log function will be needed.

$$ T(n) = 2\;log_3(n)$$

Problem 7 - Discrete Math
Spring 2014

Given a sequence over a four letter alphabet $\Sigma = \{a, g, c, t\}$. A trigram is a contiguous subsequence of length 3, and an $n$-gram is a contiguous subsequence of length $n$.

1. How many different possible trigrams are there?
2. Given an integer $n$, state in terms of $n$ how many different possible $n$-grams there are. Is this function polynomial or exponential?
3. How many different n-grams are there for all values of $n$, ranging from 3 to some value $k$?

Answer: Order matters and letters may repeat, so count sequences, not combinations: each position has $4$ independent choices.

1. $$4^3 = 64$$

2. $$4^n$$, which is exponential in $n$.

3. $$\sum_{n=3}^k 4^n = \frac{4^{k+1} - 4^3}{3}$$
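Brute-force enumeration over the alphabet confirms the count of distinct sequences (a sketch using `itertools.product`):

```python
from itertools import product

alphabet = "agct"

# Every position has 4 independent choices, so there are 4^3 = 64 trigrams.
trigrams = {"".join(p) for p in product(alphabet, repeat=3)}
assert len(trigrams) == 4**3 == 64

# Total n-grams for n = 3..k is a geometric sum: (4^(k+1) - 4^3) / 3.
k = 6
assert sum(4**n for n in range(3, k + 1)) == (4**(k + 1) - 64) // 3
```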

Problem 8 - Discrete Math
Fall 2014

Let $c$ be a constant. Solve the following recursion accurately by finding a closed form for $T(n)$ as a function of $n$ and $c$. Prove that your solution is correct.

If you use induction, note that it has to be done twice since the recursion is different for odd and even $n$.

$$\begin{align} T(1) & = c \\ T(2k) & = 2T(k) \\ T(2k + 1) & = 2T(k) + c \end{align}$$

Answer:

First, plug in some values:

$$ k=1, T(2) = 2c \quad T(3) = 3c$$ $$ k=2, T(4) = 4c \quad T(5) = 5c$$ $$ k=3, T(6) = 6c \quad T(7) = 7c$$

Guess $T(n)=nc$

Proof by strong induction on $n$, assuming $T(m) = mc$ for all $m \lt n$:

Base case: $T(1) = c \quad \checkmark$

Even case $n = 2k$: $T(2k) = 2T(k) = 2(kc) = (2k)c = nc \quad \checkmark$

Odd case $n = 2k+1$: $T(2k+1) = 2T(k) + c = 2kc + c = (2k+1)c = nc \quad \checkmark$
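Checking the guess against the odd/even recurrence for one choice of $c$ (a sketch; the constant is arbitrary):

```python
C = 5  # the constant c; any value exhibits the same pattern

def T(n):
    """T(1) = c, T(2k) = 2T(k), T(2k+1) = 2T(k) + c."""
    if n == 1:
        return C
    if n % 2 == 0:
        return 2 * T(n // 2)
    return 2 * T(n // 2) + C  # n = 2k+1 means k = n // 2

# T(n) = n*c for every n.
for n in range(1, 65):
    assert T(n) == n * C
```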

Problem 9 - Discrete Math
Spring 2015

The Fibonacci sequence is a sequence of numbers in which each number is the sum of the two preceding numbers. More formally, $\{F_k | k = 0, 1, 2, \cdots\}$ is defined as follows:

$$F_0 = 0$$ $$F_1 = 1$$ $$\forall\;k \ge 2, F_k = F_{k−1} + F_{k−2}$$

The infinite sequence is: $0, 1, 1, 2, 3, 5, 8, 13, 21, 34, \cdots$

Based on the definition, prove that $2F_k = F_{k+1} + F_{k−2}$ for $k \ge 2$.

Note: it is not necessary to use induction.

Answer:

If $F_k = F_{k-1} + F_{k-2}$, then $F_{k+1} = F_k + F_{k-1}$

By substitution:

$$ F_{k+1} + F_{k-2} = (F_k + F_{k-1}) + F_{k-2} = F_k + (F_{k-1} + F_{k-2}) = F_k + F_k = 2F_k \quad \checkmark$$

Problem 10 - Discrete Math
Fall 2015

Prove (by induction or by a direct proof) the following identity for any $n \ge 1$.

$$ \sum_{i=1}^n (i * i!) = (n + 1)! - 1$$

Answer:

Base case $1 \quad (1*1!) = (1+1)!-1, 1 = 1 \quad \checkmark$

Assume:

$$ \sum_{i=1}^n (i*i!) = (n+1)!-1$$

Prove:

$$ \sum_{i=1}^{n+1} (i*i!) = (n+1+1)!-1$$ $$ \left(\sum_{i=1}^{n} i*i!\right) + (n+1)(n+1)! = (n+2)!-1$$ $$ (n+1)!-1+(n+1)(n+1)! = (n+2)!-1 $$ $$ (n+1)!(1 + n + 1) = (n+2)! $$ $$ (n+2)(n+1)! = (n+2)! \quad \checkmark$$
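A numeric check of the identity:

```python
from math import factorial

# sum_{i=1}^{n} i * i! == (n+1)! - 1 for small n.
for n in range(1, 10):
    assert sum(i * factorial(i) for i in range(1, n + 1)) == factorial(n + 1) - 1
```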

Problem 11 - Discrete Math
Spring 2016

Prove the following identity:

$$ \sum_{i=0}^k 3^i = \frac{3^{k+1} -1}{2}$$

Answer:

Base case:

$$ \frac{3^{0+1}-1}{2} = \frac{2}{2} = 1 \quad \checkmark $$

Assume true:

$$ \sum_{i=0}^k 3^i = \frac{3^{k+1} -1}{2}$$

Prove:

$$ \sum_{i=0}^{k+1} 3^i = \frac{3^{k+2} -1}{2}$$ $$ \left(\sum_{i=0}^{k} 3^i\right) + 3^{k+1} = \frac{3^{k+2}-1}{2}$$ $$ \sum_{i=0}^{k} 3^i = \frac{3*3^{k+1}-1}{2} - 3^{k+1} = \frac{3*3^{k+1}-1-2(3^{k+1})}{2}$$ $$ \sum_{i=0}^{k} 3^i = \frac{3^{k+1} -1}{2} \quad \checkmark $$
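A numeric check:

```python
# sum_{i=0}^{k} 3^i == (3^(k+1) - 1) / 2.
for k in range(0, 12):
    assert sum(3**i for i in range(k + 1)) == (3**(k + 1) - 1) // 2
```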

Problem 12 - Discrete Math
Fall 2016

Prove by induction:

$$ \sum_{i=1}^n i^3 = \left(\frac{n(n+1)}{2}\right)^2$$

Base case: $1^3 = \left(\frac{1(1+1)}{2}\right)^2 = 1 \quad \checkmark$

Assume true:

$$ \sum_{i=1}^n i^3 = \left(\frac{n(n+1)}{2}\right)^2$$

Prove:

$$ \sum_{i=1}^{n+1} i^3 = \left(\frac{(n+1)(n+2)}{2}\right)^2$$ $$ \left(\sum_{i=1}^{n} i^3\right) + (n+1)^3 = \left(\frac{(n+1)(n+2)}{2}\right)^2 $$ $$ \left(\frac{n(n+1)}{2}\right)^2 + (n+1)^3 = \left(\frac{(n+1)(n+2)}{2}\right)^2 $$ $$ \frac{n^2(n+1)^2 + 4(n+1)^3}{4} = \frac{(n+1)^2(n+2)^2}{4} $$ Dividing both sides by $\frac{(n+1)^2}{4}$: $$ n^2 + 4(n+1) = (n+2)^2 $$ $$ n^2 + 4n + 4 = (n+2)^2 $$ $$ (n+2)^2 = (n+2)^2 \quad \checkmark$$
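A numeric check:

```python
# sum_{i=1}^{n} i^3 == (n(n+1)/2)^2.
for n in range(1, 20):
    assert sum(i**3 for i in range(1, n + 1)) == (n * (n + 1) // 2) ** 2
```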

Problem 13 - Discrete Math
Spring 2017

Find the sum of the following $n$ terms. Note that the last term is $n$ and not $2n$.

$$\left(\sum_{i=1}^{n-1} 2i\right) + n = 2 + 4 + 6 + \cdots + (2n-4) + (2n-2) + n$$

Hint: You may use the identity $1 + 2 + \cdots + n = n(n+1)/2$

Answer: The secret to making this problem an easy problem is recognizing that the $2$ in the $2i$ can be factored out of the summation series, allowing you to use the formula provided and multiply the entire formula by $2$.

$$2\left(\sum_{i=1}^{n-1} i\right) + n = 2 + 4 + 6 + \cdots + (2n-4) + (2n-2) + n$$ $$2\left(\frac{(n-1)(n-1+1)}{2}\right) + n = (n-1)n + n$$ $$n^2-n+n$$ $$n^2$$
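A quick check that the series really sums to $n^2$ (the function name is my own):

```python
def series(n):
    """2 + 4 + ... + (2n-2), with n as the final term."""
    return sum(2 * i for i in range(1, n)) + n

for n in range(1, 25):
    assert series(n) == n * n
```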

Problem 14 - Discrete Math
Fall 2017

Let $a_n = qa_{n-1}$ for some constant $q$, and let $a_0 = 4$. Prove by induction that $a_n = 4q^n$ for all $n \ge 1$.

Answer:

Base case: $a_0 = 4q^0 = 4 \quad \checkmark$

Assume true $a_n = 4q^n$

Prove $a_{n+1} = 4q^{n+1}$:

Since $a_n = qa_{n-1}$ holds for every $n$, in particular $a_{n+1} = qa_n$.

By substitution:

$$ a_{n+1} = qa_n = q(4q^n) = 4q^{n+1} \quad \checkmark $$
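A numeric check for one choice of $q$ (a sketch; the constant is arbitrary):

```python
Q = 3  # the constant q; any value works

def a(n):
    """a_0 = 4, a_n = q * a_{n-1}."""
    return 4 if n == 0 else Q * a(n - 1)

for n in range(0, 10):
    assert a(n) == 4 * Q**n
```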

Problem 15 - Big-O Notation
Spring 2011

Let $A$ be an un-sorted array with $n$ distinct integers and let $B$ be a sorted array with $n^7$ distinct integers. Let $x$ be a number that appears in both $A$ and $B$. Alice is checking if $x \in A$ and Bob is checking if $x \in B$. It costs one dollar per comparison between $x$ and an integer in an array. Both Alice and Bob attempt to spend the least possible.

Who will asymptotically pay less? Justify your answer.

Answer:

For small values of $n$ $(\lt n_0)$, Alice may pay less, but for large values of $n$ (beyond some threshold $n_0$) Bob will pay less: he can use binary search on his sorted array, which worst case costs $O(log(n^7)) = O(7\;log(n)) = O(log(n))$ comparisons, while Alice must use sequential search on her unsorted array, which worst case costs $O(n)$ comparisons.

Problem 16 - Big-O Notation
Fall 2011

In each of the following, state whether the statement is true or false. If true, provide justification, and if false, give the correct relation.

1. $f(n) = log(n), g(n) = n$ and $f = O(g)$.
2. $f(n) = 2^n, g(n) = 4^n$ and $f = \Omega(g)$.
3. $f(n) = n^3 − n^2 , g(n) = n^2$ and $f = O(g)$.
4. $f(n) = 2n + log(n), g(n) = 2n\;log(n)$ and $f = \Theta(g)$.
5. $f(n) = log_2(n), g(n) = log_{10}(n)$ and $f = \Theta(g)$.

Answer:

1. True, because $n$ grows faster than $log(n)$, e.g. $log_{10}(100) = 2$ vs. $n = 100$

2. False. $4^n = (2^2)^n = 2^{2n}$, which is not a constant factor times $2^n$: the ratio $4^n/2^n = 2^n$ is unbounded. The correct relation is $f = O(g)$.

3. False. $n^3 - n^2$ grows faster than $n^2$, so $g$ cannot be an upper bound for $f$. The correct relation is $f = \Omega(g)$.

4. False. $2n + log(n) = \Theta(n)$ while $2n\;log(n) = \Theta(n\;log(n))$; no constant factor can bound one by the other in both directions. The correct relation is $f = O(g)$.

5. True. The rate of growth is the same regardless of the base, since $log_2(n) = \frac{log_{10}(n)}{log_{10}(2)}$, a constant factor difference.

Problem 17 - Big-O Notation
Spring 2012

True or false? Justify your answers.

1. Is $2^{n+1} = O(2^n)$?
2. Is $2^{2n} = O(2^n)$?
3. Is $log(n^2) = O(log(n))$?
4. Is $log(2^n) = O(log(n))$?

Answer:

1. True, because $2^{n+1} = 2*2^n$ and in asymptotic notation constant factors are ignored.

2. False: no constant factor can make $2^n$ an upper bound for $2^{2n}$, since $2^{2n}/2^n = 2^n$ is unbounded.

3. True: an exponent inside a logarithm becomes a constant factor, $log(n^2) = 2\;log(n)$.

4. False: $log(2^n) = n$ is linear while $log(n)$ is sublinear.

Problem 18 - Big-O Notation
Fall 2012

A problem $P$ has an upper bound complexity $O(n^2)$ and a lower bound complexity $\Omega(n\;log(n))$.

1. Could someone design an algorithm that solves the problem with complexity $n^2$? Justify your answer.

2. Could someone design an algorithm that solves the problem with complexity $0.5n\;log(n)$? Justify your answer.

3. Could someone design an algorithm that solves the problem with complexity $100n$? Justify your answer.

Answer:

1. Yes. An $n^2$ algorithm matches the known upper bound and does not violate the lower bound. For example, if $P$ is sorting, quicksort runs in $n^2$ in the worst case.

2. No, because that's faster than the lower bound, $n\;log(n)$

3. No: $100n$ is asymptotically faster than the lower bound $\Omega(n\;log(n))$, since $n = o(n\;log(n))$, and no algorithm can beat a proven lower bound.

Problem 19 - Big-O Notation
Spring 2013

Given the following recurrence formula:

$$ \begin{align} T(1) & = 1 \\ T(n) & = 2T(n/2) + \sqrt{n} \end{align}$$

1. Iterate three levels of the recursion either by using the substitution method, or by drawing the top three levels of the recursion tree.

2. Use the Master Theorem to find a closed form for the above formula.

Answer:

1.

$$ \sqrt{n} \\ \overbrace{\sqrt{\frac{n}{2}}\qquad\sqrt{\frac{n}{2}}} \\ \overbrace{\sqrt{\frac{n}{4}}\qquad\sqrt{\frac{n}{4}}}\qquad\overbrace{\sqrt{\frac{n}{4}}\qquad\sqrt{\frac{n}{4}}} $$

2. $T(n) = 2T(n/2) + \sqrt{n}$

Case 1 of the Master Theorem applies: $a = 2$, $b = 2$, $f(n) = \sqrt{n} = n^{1/2}$, and $n^{log_2 2} = n^1$, with $\frac{1}{2} \lt 1$.

$T(n) = \Theta(n^{log_2 2}) = \Theta(n)$

Problem 20 - Big-O Notation
Fall 2013

Suppose that you have two different algorithms to solve a given problem. Algorithm $A$ has worst-case time complexity $\Theta(n^2)$ and Algorithm $B$ has worst-case time complexity $\Theta(n\;log(n))$. Which of the following statements are true and which are false? Justify your answers.

1. Algorithm $B$ runs faster than algorithm $A$ on all possible inputs of length $n$, for $n \ge 1$.

2. There exists $n_0$ such that for any $n \gt n_0$, algorithm $B$ runs faster than algorithm $A$ on all inputs of length $n$.

3. It is possible that for $n = 100$, algorithm $A$ runs faster than algorithm $B$ on all possible inputs of length $100$.

Answer:

1. False. Only for $n \gt n_0$. E.g. Bubble sort ($n^2$) is better than merge sort ($n\;log(n)$) for small inputs $\lt n_0$.

2. True. Above a certain input threshold, $B$ will always be faster than $A$.

3. True, if $100 \lt n_0$.

Problem 21 - Big-O Notation
Spring 2014

Suppose that an algorithm divides a file into three equal size pieces, using time $n$ for a file of size n. It then processes (recursively) each piece with size greater than 1. You may assume that the size of the file is such that each time it is possible to divide into three equal size pieces.

1. Write a recurrence relation for the running time of the algorithm.

2. Solve the recurrence relation.

Answer:

1. $T(n) = 3T(n/3) + n$ and $T(1) = 0$: dividing a file of size $n$ costs $n$, and each of the three pieces is processed recursively.

2. Plug in values for $n$:

$$T(1) = 0 \\ T(3) = 3 + 3T(1) = 3 \\ T(9) = 9 + 3T(3) = 18 \\ T(27) = 27 + 3T(9) = 81$$

Guess $T(n) = n\;log_3(n)$ (check: $3 \times 1 = 3$, $9 \times 2 = 18$, $27 \times 3 = 81$), i.e. $\Theta(n\;log(n))$; prove by induction, or apply Case 2 of the Master Theorem.
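A check of the recurrence implied by the problem statement (splitting a size-$n$ file costs $n$, and all three thirds recurse):

```python
def T(n):
    """T(1) = 0; T(n) = n + 3*T(n/3) for n a power of 3."""
    return 0 if n == 1 else n + 3 * T(n // 3)

# Closed form: T(n) = n * log_3(n), i.e. T(3^k) = 3^k * k.
for k in range(0, 9):
    n = 3**k
    assert T(n) == n * k
```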

Problem 22 - Big-O Notation
Fall 2014

Let $A$ be an algorithm that solves problem $P$ whose worst-case time complexity is $\Theta(n^2)$.

1. Is it possible that for some instances of $P$ the running time of $A$ is $n$?

2. Is it possible that for some instances of $P$ the running time of $A$ is $n^3$?

Justify your answers.

Answer:

1. Yes. $\Theta(n^2)$ describes the worst case; particular instances may be solved faster, e.g. in time $n$.

2. No. For sufficiently large $n$, an instance taking time $n^3$ would exceed the $\Theta(n^2)$ worst case.

Problem 23 - Big-O Notation
Spring 2015

Match the following 10 functions into 5 pairs. If $f(n)$ is paired with $g(n)$ then $f(n) = \Theta(g(n))$.

$$2^{n+1} \quad n \quad log_2(n^2) \quad log_2^2(n) \quad n^2/100 \quad 2^n \quad log_2(n) \quad 100n^2 − 500n \quad log(2^n) \quad log_{10}^2(n)$$

Answer:

$$ \begin{align} 2^{n+1} & \rightarrow 2^n \\ n & \rightarrow log(2^n) \\ log_2(n^2) & \rightarrow log_2(n) \\ log_2^2(n) & \rightarrow log_{10}^2(n) \\ \frac{n^2}{100} &\rightarrow 100n^2-500n \end{align}$$

Problem 24 - Big-O Notation
Fall 2015

For two integers $k$ and $n (1 \le k \le n)$, let $X$ be an un-sorted array containing $k$ distinct positive integers and let $Y$ be a sorted array containing $n$ distinct positive integers. The goal is to determine if all the $k$ numbers from $X$ appear in $Y$.

Algorithm $A$ performs $k$ binary searches in array $Y$ one per each number from array $X$.

Algorithm $B$ first sorts array $X$. Then it performs a sequential search of $X[1], X[2], \cdots, X[k]$ in array $Y$ but starts the sequential search of $X[i]$ at the index at which the sequential search of $X[i − 1]$ stopped.

Which algorithm is asymptotically better for $k = \sqrt{n}$? Justify your answer.

Answer:

$$ A = k\;log(n) = \sqrt{n}\;log(n) \\ B = k\;log(k) + (n + k) = O(\sqrt{n}\;log(n)) + O(n) = O(n) $$

Algorithm $A$ is asymptotically better: its $k$ binary searches cost $O(\sqrt{n}\;log(n))$, which is $o(n)$, while $B$'s sequential pass over $Y$ alone already costs $\Theta(n)$.

Problem 25 - Big-O Notation
Spring 2016

Order the following 10 functions such that if $f(n)$ appears before $g(n)$ then $f(n) = O(g(n))$.

$$2^n \quad n \quad log(n) \quad log^2(n) \quad n^2 \quad 3^n \quad n! \quad \sqrt{n} \quad n^{100} \quad n/log(n) $$

Answer:

1. $log(n)$

2. $log^2(n)$

3. $\sqrt{n}$

4. $\frac{n}{log(n)}$

5. $n$

6. $n^2$

7. $n^{100}$

8. $2^n$

9. $3^n$

10. $n!$

Problem 26 - Big-O Notation
Fall 2016

For each of the following questions, give an example of a function that satisfies the criteria, or state that none exist. There is no need to justify your answers.

1. A function that is $O(log(n))$.

2. A function that is both $\Omega(n\;log(n))$ and $O(n^2)$.

3. A function that is both $O(2^n)$ and $\Omega(3^n)$.

4. A function that is $O(n)$ but not $\Theta(n)$.

Answer:

1. $f(n) = log(n)$ (e.g. the running time of binary search)

2. $f(n) = n\;log(n)$ (e.g. the running time of mergesort; it is trivially $\Omega(n\;log(n))$ and also $O(n^2)$)

3. None exist: such a function would require $3^n = O(2^n)$, but $3^n/2^n = (3/2)^n$ is unbounded.

4. $f(n) = \sqrt{n}$, which is $O(n)$ but not $\Omega(n)$

Problem 27 - Big-O Notation
Spring 2017

Let $f(n) = 1000n$ and $g(n) = n^2/1000$. For each one of the following 4 parts, find a function $h(n)$ that satisfies the conditions of this part or explain why such a function does not exist.

(a) $h = O(f)$ and $h = O(g)$
(b) $h = O(f)$ and $h= \Omega(g)$
(c) $h = \Omega(f)$ and $h = O(g)$
(d) $h = \Omega(f)$ and $h = \Omega(g)$

Answer:

(a) Here $h$ must be bounded above by both $n$ and $n^2$, which reduces to $h = O(n)$. Hence $h(n) = log(n)$ works; in fact any function that is $O(n)$ does, including $f$ itself.

(b) Here $h$ must be $O(n)$ and $\Omega(n^2)$ at the same time. No such function exists, since a function cannot grow at least as fast as $n^2$ while staying bounded above by a multiple of $n$.

(c) Here $h$ must be $\Omega(n)$ and $O(n^2)$; $h(n) = n\;log(n)$ is a function whose growth lies between the two.

(d) Here $h$ must be bounded below by both; $h(n) = n^3$ grows at least as fast as either (and $h(n) = n^2$ would also work).

Problem 28 - Big-O Notation
Fall 2017

Express each of the following functions in $\Theta$ notation. For example, if the function is $7n^2$, it should be expressed as $\Theta(n^2)$.

(a) $100 + 5\;log(n) + \frac{n^2}{47}$
(b) $n/3 + 3^n + 3n + n^3 + 3/n$
(c) $log_2(8)$
(d) $log(n) + \sqrt n$

Answer:

(a) $\Theta(n^2)$

(b) $\Theta(3^n)$

(c) $\Theta(1)$

(d) $\Theta(\sqrt n)$, since $\sqrt n$ dominates $log(n)$

Problem 29 - Sorting, Search
Spring 2011

Let $A$ be an unsorted array with $n \ge 2$ distinct positive integers where at least one of them is odd and one of them is even. The numbers are very large and therefore it requires complexity $\Omega(n\;log(n))$ to sort $A$.

Design a linear time algorithm to determine if ALL the odd numbers in $A$ are smaller than ALL the even numbers in $A$. Justify your answer.

Answer:

You need the maximum of the odd numbers and the minimum of the even numbers: the answer is yes iff $max(odd) \lt min(even)$. One pass over $A$ suffices: test each element's parity ($n$ tests), update a running odd-maximum or even-minimum (at most $n-2$ comparisons), then make one final comparison:

$$ n + (n - 2) + 1 = 2n - 1 $$

$$O(n)$$
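A linear-time sketch of the check (names are my own):

```python
def odds_below_evens(A):
    """True iff every odd number in A is smaller than every even number,
    i.e. max(odds) < min(evens); found in one linear pass."""
    odds = [v for v in A if v % 2 == 1]
    evens = [v for v in A if v % 2 == 0]
    return max(odds) < min(evens)

assert odds_below_evens([3, 1, 7, 10, 8])
assert not odds_below_evens([3, 1, 9, 4, 8])
```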

Problem 30 - Sorting, Search
Fall 2011

You are given a sorted array $A$ with $k$ elements, and an unsorted array $B$ with $log(k)$ elements. The problem is to combine the arrays into one sorted array (with $n$ elements, where $n = k + log\;k)$.

1. Give an efficient algorithm to solve this problem.

2. Analyze the time complexity of your algorithm, in terms of the number of comparisons performed.

3. How much space does your algorithm use?

Answer:

1. Sort the small array $B$ (any simple sort will do, since it has only $log\;k$ elements), then merge the two sorted arrays as in the merge step of merge sort.

2. Sorting $B$ with insertion sort takes $O(log^2 k)$ comparisons; the merge takes at most $n - 1 = k + log\;k - 1$ comparisons. The total is $O(k)$.

3. $O(n) = O(k+log\;k)$, for the merged output array.

Problem 31 - Sorting, Search
Spring 2012

Let $A = A[1] \lt \cdots \lt A[n]$ be a sorted array of $n$ distinct integers, in which each integer is in the range $[1 \cdots n + 1]$. That is, exactly one integer out of $\{1, \cdots, n + 1\}$ is missing from $A$. Describe an efficient algorithm to find the missing integer. Analyze the worst case complexity (number of accesses to array $A$) of your algorithm.

Answer:

Sum the integers while scanning $A$, then subtract that total from $\frac{(n+1)(n+2)}{2}$, the sum of all integers from $1$ to $n+1$. The difference is the missing integer.

Runtime is linear, $O(n)$, since it traverses the array once. (Because $A$ is sorted, binary searching for the first index with $A[i] \ne i$ would even find the missing integer in $O(log(n))$ accesses.)
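A sketch of the summation approach:

```python
def missing_integer(A):
    """A holds n distinct integers drawn from 1..n+1 with one missing."""
    n = len(A)
    expected = (n + 1) * (n + 2) // 2  # sum of 1..n+1
    return expected - sum(A)

assert missing_integer([1, 2, 4, 5]) == 3
assert missing_integer([2, 3, 4, 5]) == 1
```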

Problem 32 - Sorting, Search
Fall 2012

Suppose $A_1 , \cdots, A_k$ are $k\;(k \gt 1)$ sorted arrays each of size $n$ containing a total of $kn$ distinct numbers. Describe an efficient algorithm to find the smallest and largest numbers among all the $kn$ numbers.

What is the complexity of your algorithm? Justify your answer.

Answer:

Only the first and last values of each sorted array can be the global min and max. Initialize min and max to the first array's first and last entries, then scan the remaining arrays, updating as necessary. This takes $2(k-1)$ comparisons, so the complexity is $O(k)$, independent of $n$.
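A sketch in Python; only the first and last entry of each sorted array is ever touched:

```python
def min_and_max(arrays):
    """Global min and max of k sorted arrays, examining only each
    array's first and last entry: O(k) comparisons."""
    lo = min(A[0] for A in arrays)
    hi = max(A[-1] for A in arrays)
    return lo, hi

assert min_and_max([[2, 5, 9], [1, 4, 6], [3, 7, 8]]) == (1, 9)
```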

Problem 33 - Sorting, Search
Spring 2013

Suppose you are given an array $A$ of $n$ sorted numbers that has been circularly shifted $k$ positions to the right. For example, $[35, 42, 5, 15, 27, 29]$ is a sorted array that has been circularly shifted $k = 2$ positions, while $[27, 29, 35, 42, 5, 15]$ has been shifted $k = 4$ positions.

1. Suppose you know the value of $k$. Give an $O(1)$ algorithm to find the largest number in $A$.

2. Suppose you do not know the value of $k$. Give an $O(log(n))$ algorithm to find the largest number in $A$. For partial credit, you may give an $O(n)$ algorithm. Justify the time complexity of your algorithm.

Answer:

1. The largest number sits just before the "wrap" point: it is $A[k]$, where $A[0]$ is taken to mean $A[n]$. This is an $O(1)$ lookup.

2. Modified binary search. Compare $A[1]$ to the midpoint value. If $A[1] \gt A[mid]$, the wrap point (and with it the largest number) lies in the left half; otherwise it lies in the right half. Repeat on that half, halving the range each time: $O(log(n))$.
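A sketch of the $O(log(n))$ search (0-indexed; it locates the minimum, i.e. the wrap point, then returns the value just before it):

```python
def largest_in_shifted(A):
    """Maximum of a sorted array of distinct values that was circularly
    shifted: binary-search for the minimum, then step back one (cyclically)."""
    lo, hi = 0, len(A) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if A[mid] > A[hi]:   # the wrap point is strictly right of mid
            lo = mid + 1
        else:                # the wrap point is at mid or to its left
            hi = mid
    return A[lo - 1]         # lo indexes the minimum; lo-1 wraps to -1

assert largest_in_shifted([35, 42, 5, 15, 27, 29]) == 42
assert largest_in_shifted([27, 29, 35, 42, 5, 15]) == 42
assert largest_in_shifted([5, 15, 27, 29, 35, 42]) == 42
```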

Problem 34 - Sorting, Search
Fall 2013

Let $A = [A[1] \lt A[2] \lt \cdots \lt A[n]]$ be a sorted array containing $n$ distinct positive integers. Let $x$ be a positive integer such that both $x$ and $2x$ are not in $A$. Describe an efficient algorithm to find the number of integers in $A$ that are larger than $x$ and smaller than $2x$. What is the complexity of your algorithm? Justify your answer.

Answer:

Modified binary search.

1. Run binary search once to find the index of the first number greater than $x$, and again to find the index of the last number less than $2x$. The count of values in the range is the difference of the two indices plus one. Two binary searches cost $O(2\;log(n)) = O(log(n))$.
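With Python's `bisect` module the two searches are one-liners (a sketch; the function name is my own):

```python
from bisect import bisect_left, bisect_right

def count_strictly_between(A, x):
    """Entries of sorted A strictly between x and 2x; assumes neither
    x nor 2x appears in A."""
    return bisect_left(A, 2 * x) - bisect_right(A, x)

assert count_strictly_between([1, 4, 6, 7, 9, 14], 5) == 3  # 6, 7, 9
```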

Problem 35 - Sorting, Search
Spring 2014

The predecessor query is defined as follows. Given a sorted array $A$ of $n$ distinct positive real numbers, and a real number $x$ (not necessarily in $A$), find the largest element in the array that is strictly less than $x$.

1. Write pseudocode for an efficient algorithm to answer a predecessor query. You can define the predecessor of the first element in the array to be zero.

2. What is the time complexity of your algorithm?

Answer:

Modified binary search. Keep narrowing until you reach the position where $x$ would be inserted, then return the value immediately to its left (or zero if there is none). Complexity is $O(log(n))$.
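A sketch using `bisect` (the predecessor of the first element is defined as zero, per the problem):

```python
from bisect import bisect_left

def predecessor(A, x):
    """Largest element of sorted A strictly less than x, or 0 if none."""
    i = bisect_left(A, x)  # first index with A[i] >= x
    return A[i - 1] if i > 0 else 0

A = [1.5, 3.0, 7.2, 9.9]
assert predecessor(A, 7.2) == 3.0
assert predecessor(A, 100) == 9.9
assert predecessor(A, 1.0) == 0
```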

Problem 36 - Sorting, Search
Fall 2014

Let $A$ and $B$ be two unsorted arrays of numbers. We say that array $A$ is greater than array $B$, if all the numbers in $A$ are larger than all the numbers in $B$. To clarify the definition we give the following examples.

The array $[20, 18, 14, 9, 10]$ is greater than the array $[3, 8, 2, 5]$ since all of the numbers in the first array are greater than all the numbers in the second array. However, the array $[7, 4, 6, 9]$ is not greater than the array $[3, 5, 1]$ since the $4$ in the first array is smaller than the $5$ in the second array. Given two arrays, $X$ and $Y$ , both of size $n$, describe an efficient algorithm that decides whether $X$ is greater than $Y$. Explain why your algorithm is correct, and provide an analysis of the time complexity of your algorithm.

Answer:

Find the min of $X$ and the max of $Y$: $X$ is greater than $Y$ exactly when $min(X) \gt max(Y)$. Each extreme takes $n - 1$ comparisons to find, plus one final comparison:

$$ (n - 1) + (n - 1) + 1 = 2n - 1 = O(n) $$
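A sketch of the comparison:

```python
def is_greater(X, Y):
    """True iff every number in X is larger than every number in Y,
    i.e. min(X) > max(Y)."""
    return min(X) > max(Y)

assert is_greater([20, 18, 14, 9, 10], [3, 8, 2, 5])
assert not is_greater([7, 4, 6, 9], [3, 5, 1])
```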

Problem 37 - Sorting, Search
Spring 2015

For $k \le n$, let $A$ be a $k \times n$ matrix with $k$ rows and $n$ columns containing $kn$ distinct integers. For $1 \le i \le k$ and $1 \le j \le n$, denote by $A[i, j]$ the value of $A$ at row $i$ and column $j$. Assume that each row is sorted. That is, $A[i, j] \lt A[i, j']$ for $1 \le i \le k$ and $1 \le j \lt j' \le n$. Describe an efficient algorithm to find the second smallest number in $A$. Explain why your algorithm is correct.

What is the worst-case number of comparisons among entries from A that are performed by your algorithm? Justify your answer.

Answer:

1. The smallest number in $A$ is the minimum of the first column (each row's smallest entry is its first); find it with $k - 1$ comparisons. Say it lies in row $i$.

2. The second smallest is either one of the other first-column entries or $A[i, 2]$, the next value in row $i$. Find the minimum of those $k$ candidates with $k - 1$ more comparisons.

3. Worst case is $(k - 1) + (k - 1) = 2k - 2 = O(k)$ comparisons.

Problem 38 - Sorting, Search
Fall 2015

The range of an array $X$ is defined as $[min(X) \cdots max(X)]$ where $min(X)$ is the minimum number in the array and $max(X)$ is the maximum number in the array.

Define $Y \bullet Z$ for two arrays $Y$ and $Z$ if the range of $Z$ contains the range of $Y$. That is, both inequalities $min(Z) \le min(Y)$ and $max(Z) \ge max(Y)$ hold.

Let $Y$ and $Z$ be two unsorted arrays each containing $n (n \ge 2)$ distinct numbers. Describe an efficient algorithm that determines if $Y \bullet Z$.

What is the worst-case number of comparisons made by your algorithm as a function of $n$? Justify your answer.

Answer:

Find the min and max of each array. The pairing ("tournament") method finds both extremes of $n$ elements in $\frac{3n}{2} - 2$ comparisons, so for the two arrays:

$$ 2\left(\frac{3n}{2} - 2\right) = 3n - 4 $$

Then verify the two inequalities $min(Z) \le min(Y)$ and $max(Z) \ge max(Y)$ with $2$ more comparisons, $3n - 2$ in total.

Complexity is $O(n)$

Problem 39 - Sorting, Search
Spring 2016

Let $A = A[1] \lt A[2] \lt \cdots \lt A[n]$ and $B = B[1] \lt B[2] \lt \cdots \lt B[m]$ be two sorted arrays containing $n + m$ distinct integers. $B$ separates $A$ if for any two entries $A[i] \lt A[j]$ in $A$ there exists an entry $B[k]$ in $B$ such that $A[i] \lt B[k] \lt A[j]$.

Describe an efficient algorithm that checks if $B$ separates $A$. What is the time complexity of your algorithm as a function of $n$ and $m$? Justify your answer.

Answer:

Dual sequential scan, like the merge step of merge sort. For every consecutive pair $A[i] \lt A[i+1]$, advance a pointer through $B$ past all values $\le A[i]$. If $B$ is exhausted, or its next value is $\ge A[i+1]$, then nothing in $B$ falls strictly between the pair, so $B$ does not separate $A$. If every consecutive pair of $A$ gets a separator this way, $B$ separates $A$.

Worst case, every value in both arrays is examined once, so $O(m+n)$.
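A sketch of the merge-style scan:

```python
def separates(B, A):
    """True iff for every consecutive pair A[i] < A[i+1] some entry of B
    falls strictly between them; both arrays sorted, O(n + m)."""
    j = 0
    for i in range(len(A) - 1):
        while j < len(B) and B[j] <= A[i]:
            j += 1           # skip entries too small to separate this pair
        if j == len(B) or B[j] >= A[i + 1]:
            return False     # no entry of B lands strictly inside the gap
    return True

assert separates([2, 5, 9], [1, 3, 7, 10])
assert not separates([5], [1, 3, 7])
```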

Problem 40 - Sorting, Search
Fall 2016

Let $A = A[1] \lt A[2] \lt \cdots \lt A[n]$ be a sorted array of $n$ distinct integers. Describe a $\Theta(log(n))$ algorithm that determines if there exists an index $1 \le i \le n$ such that $A[i] = i$. If such an index exists, the algorithm should return it. If not, it should return the number $−1$.

What is the time complexity of your algorithm as a function of $n$? You will earn partial credit for a correct but non-efficient algorithm.

Answer:

Modified binary search. Since the entries are distinct integers, $A[i] - i$ is non-decreasing. Look at the midpoint: if $A[mid] = mid$, return it; if $A[mid] \gt mid$, candidates can only lie on the left-hand side; if $A[mid] \lt mid$, only on the right-hand side. Repeat as necessary, returning $-1$ if the range empties. Complexity is $\Theta(log(n))$.
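A sketch of the search (1-indexed to match the problem statement):

```python
def fixed_point(A):
    """Index i (1-based) with A[i] = i in a sorted array of distinct
    integers, or -1 if none; O(log n)."""
    lo, hi = 1, len(A)
    while lo <= hi:
        mid = (lo + hi) // 2
        if A[mid - 1] == mid:
            return mid
        if A[mid - 1] > mid:  # A[i] - i is non-decreasing: look left
            hi = mid - 1
        else:                 # look right
            lo = mid + 1
    return -1

assert fixed_point([-3, 0, 3, 5, 9]) == 3
assert fixed_point([2, 3, 4]) == -1
```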

Problem 41 - Sorting, Search
Spring 2017

Bob selects an integer $x$ in the range $[1 \cdots n]$ for $n = 2^k$ and $k \gt 1$. Alice is trying to find $x$ using the binary search procedure. After exactly $k$ questions of the type: "is $x$ less than $i$?" for some $1 \lt i \le n$, she announces that Bob has selected $y$. Unfortunately $x \ne y$ because Bob lied in one of his answers while giving the correct answers to all the other $k - 1$ questions.

Justify your answers to the following three questions.

(a) If the first answer was a lie, what are the possible values for $y$?
(b) If the first answer was a lie, how many more questions does Alice need to find $x$ assuming Bob will not lie again?
(c) If the last answer was a lie, how many more questions does Alice need to find $x$ assuming Bob will not lie again?

Answer: The important idea is that binary search divides an array in half upon each iteration, but also note that the top end of the range is $2^k$, which affects the computation.

(a) The first question splits $[1 \cdots 2^k]$ at the midpoint $2^{k-1}$. If Bob's first answer was a lie, Alice searched the wrong half, and since the remaining $k-1$ answers were truthful, $y$ can be any value in that wrong half: if $x \le 2^{k-1}$ then $2^{k-1} \lt y \le 2^k$, and if $x \gt 2^{k-1}$ then $1 \le y \le 2^{k-1}$.

(b) Once Alice knows the first answer was the lie, she knows $x$ lies in the other half, a range of $2^{k-1}$ values; binary search resolves it in $k - 1$ more questions (the later truthful answers told her nothing about that half).

(c) None. Before the last question only two candidates remained, and the lie picked the wrong one, so $x$ must be the other candidate; Alice just flips the last answer.

Problem 42 - Sorting, Search
Fall 2017

Let $A$ and $B$ be two arrays of integers, both of size $n$, and let $x$ be an integer. Describe a $\Theta(n\;log\;n)$ algorithm that determines if there exist two elements, one from each array, that add up to $x$. That is, the algorithm should return a pair of indices $(j, k)$ such that $A[j] + B[k] = x$ if such a pair exists.

You do not need to formally prove the running time of your algorithm but you will receive only partial credit for a $\Theta(n^2)$ algorithm.

Answer:

The fastest way would be to use a hash table. Scan the first array $A$ into a hash table, then subtract each value of B from $x$ and see if the difference exists in the hash table. This would be a linear $O(n)$ operation.

The prompt asks for a $\Theta(n\;log\;n)$ approach, which implies sorting; however, if you sort the arrays directly you lose their original locations, and the prompt asks you to return the indices.

First copy each array $A$ and $B$ into new arrays, $A'$ and $B'$, pairing each value with its original index. So if $A[0] = 45$, then $A'[0] = [45, 0]$. Then merge sort both new arrays by value (the first item in each pair). Next scan $A'$, subtracting each value from $x$, and binary search $B'$ for the difference; on a hit, return the two stored indices. That is $2n$ to build the new arrays, $2n\;log\;n$ to sort them, and $n$ binary searches at $log\;n$ each, hence $2n + 2n\;log\;n + n\;log\;n = O(n\;log\;n)$.
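A sketch of the sort-and-binary-search version (names are my own):

```python
from bisect import bisect_left

def pair_summing_to(A, B, x):
    """Indices (j, k) with A[j] + B[k] == x, or None. Pairs B's values
    with their original indices, sorts by value, then binary-searches
    for each needed complement: O(n log n)."""
    b_pairs = sorted((v, i) for i, v in enumerate(B))
    b_values = [v for v, _ in b_pairs]
    for j, a in enumerate(A):
        k = bisect_left(b_values, x - a)
        if k < len(b_values) and b_values[k] == x - a:
            return (j, b_pairs[k][1])
    return None

assert pair_summing_to([1, 9, 5], [4, 12, 2], 7) == (2, 2)  # 5 + 2 == 7
assert pair_summing_to([1, 2], [1, 2], 10) is None
```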

Problem 43 - Graph Theory
Spring 2011

Let $G = (X, Y, E)$ be a complete bipartite graph. $X$ and $Y$ are sets of vertices and $E$ is the set of all possible edges $(x, y)$ where $x \in X$ and $y \in Y$. Assume that $X$ has $k$ vertices and that $Y$ has $h$ vertices for $k \gt h \gt 0$.

A set of vertices $I$ is an independent set of $G$ if for any two vertices $u, v \in I$, the edge $(u, v)$ does not exist.

Identify the largest possible independent set in the complete bipartite graph $G$? What is the size of this set? Justify your answer.

Answer:

It's $X$, and its size is $k$. No two vertices on the same side of a bipartite graph are ever adjacent, so each side is an independent set, and $X$ is the larger side since $k \gt h$.

Problem 44 - Graph Theory
Fall 2011

Let $G$ be a complete undirected graph (all possible edges exist) with $n$ vertices. Assume that you run both a Breadth-first (BFS) and a Depth-first (DFS) search on $G$.

1. What is the height of the BFS-tree? Justify.

2. What is the height of the DFS-tree? Justify. Note that the height of the DFS-tree is equivalent to the depth of recursion of the algorithm.

3. For a general connected graph (i.e. not necessarily complete) what would be the maximum possible height for the BFS-tree?

4. For a general connected graph (i.e. not necessarily complete) what would be the maximum possible height for the DFS-tree?

Answer:

1. $1$. Since it's a complete graph, all vertices are connected to all other vertices, and BFS will return all vertices no matter where you start from.

2. $n-1$. DFS descends as deep as possible before backtracking. In a complete graph every vertex is adjacent to every other vertex, so from each newly visited vertex DFS can always step to some unvisited vertex; the DFS-tree is a path and the depth of recursion is $n-1$.

3. $n-1$, achieved when $G$ is a simple path and BFS starts at one end.

4. $n-1$, again achieved when the DFS-tree forms a path, e.g. on a complete graph or a simple path.

Problem 45 - Graph Theory
Spring 2012

A lonely edge in a simple undirected graph is an edge $e = (u, v)$ for which the edge $e$ is the only edge adjacent to the vertices $u$ and $v$. For a given graph $G = (V, E)$, describe an efficient algorithm to identify a lonely edge if such exists.

What is the complexity of your algorithm? Justify your answer.

Answer:

Using an adjacency list, scan each vertex's list to see if it has more than one neighbor. If it does, that vertex is on no lonely edge; move on. If it has exactly one neighbor $v$, check $v$'s list: if $v$ also has exactly one neighbor, the edge is lonely, so stop and report it. Otherwise continue traversing. Worst case makes two list-length checks per vertex, so $O(|V|)$, i.e. at most $2|V|$ checks.
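A minimal sketch of this check in Python, assuming the graph is given as adjacency lists indexed by vertex (the function name is mine):

```python
def find_lonely_edge(adj):
    """Return a lonely edge (u, v), or None.

    An edge (u, v) is lonely exactly when deg(u) == deg(v) == 1.
    One pass over the vertices with O(1) work each: O(|V|).
    """
    for u, nbrs in enumerate(adj):
        if len(nbrs) == 1:          # u has exactly one neighbor
            v = nbrs[0]
            if len(adj[v]) == 1:    # and v's only neighbor is u
                return (u, v)
    return None
```

For example, in a graph with the isolated edge 0-1 plus a path 3-2-4, `find_lonely_edge([[1], [0], [3, 4], [2], [2]])` returns `(0, 1)`.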

Problem 46 - Graph Theory
Fall 2012

The following algorithm is applied to a simple undirected graph $G$ with $n$ vertices.

let $S$ be a set containing an arbitrary vertex $u$ of $G$
let $R$ be the set of all the neighbors of $u$
while $R$ is not empty do
&nbsp;&nbsp;&nbsp;&nbsp;let $v \in R$ be an arbitrary vertex from $R$
&nbsp;&nbsp;&nbsp;&nbsp;add $v$ to $S$
&nbsp;&nbsp;&nbsp;&nbsp;omit $v$ from $R$
&nbsp;&nbsp;&nbsp;&nbsp;omit from $R$ all the vertices that are not neighbors of $v$
end-while

What could you say about set $S$ when the algorithm terminates?

Answer:

It's a clique, a complete subgraph. The invariant is that every vertex remaining in $R$ is adjacent to all vertices of $S$: $R$ starts as the neighbors of $u$, and each time a vertex $v$ joins $S$, all non-neighbors of $v$ are removed from $R$. Hence each added vertex is adjacent to everything already in $S$.
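The pseudocode translates to a short Python sketch (the function name is mine; `adj` is a dict of adjacency lists):

```python
def grow_clique(adj, u):
    """Run the problem's algorithm starting from vertex u.

    S stays a clique throughout, because every vertex still in R
    is adjacent to all vertices of S.
    """
    S = {u}
    R = set(adj[u])            # all neighbors of u
    while R:
        v = R.pop()            # pick and remove an arbitrary vertex of R
        S.add(v)
        R &= set(adj[v])       # drop the vertices that are not neighbors of v
    return S
```

On a triangle {0, 1, 2} with a pendant vertex 3 attached to 2, starting at 0 the algorithm returns the clique `{0, 1, 2}` regardless of which vertex is popped first.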

Problem 47 - Graph Theory
Spring 2013

Let $G$ be a simple and undirected graph with $n$ vertices. Assume $G$ is represented by the adjacency matrix $A$. A set of vertices $D$ is a dominating set if for every vertex $u$ not in $D$ there exists a vertex $v$ in $D$ such that the edge $(u, v)$ belongs to $G$. Let $F$ be a set of vertices of size $k$.

1. Describe an efficient algorithm to determine if $F$ is a dominating set in $G$.

2. What is the time-complexity of your algorithm as a function of $n$ and $k$?

Answer:

1. Start with a checklist of all $n$ vertices of $G$. For each vertex $v$ in $F$: cross $v$ off the list, then traverse row $v$ of the adjacency matrix and cross off every vertex $u$ with $A[v][u] = 1$. After processing all of $F$, if any vertex remains uncrossed, $F$ is not a dominating set; otherwise it is.

2. $O(nk)$: each of the $k$ vertices of $F$ requires an $O(n)$ row scan. Since $k \le n$, this is at most $O(n^2)$.
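A sketch of the check in Python (the function name is mine; `A` is the $n \times n$ 0/1 adjacency matrix as a list of lists):

```python
def is_dominating_set(A, F):
    """Return True iff F is a dominating set of the graph with matrix A.

    Mark every vertex of F and every neighbor of a vertex of F;
    F dominates iff all n vertices end up marked. O(nk) time.
    """
    n = len(A)
    covered = [False] * n
    for v in F:                   # k iterations
        covered[v] = True         # v itself needs no dominator
        for u in range(n):        # scan row v: O(n)
            if A[v][u]:
                covered[u] = True
    return all(covered)
```

On a star with center 0 and leaves 1, 2, 3, the set `{0}` dominates but `{1}` does not.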

Problem 48 - Graph Theory
Fall 2013

The union of two graphs with the same set of vertices is a graph with the same vertex set, and all of the edges of both graphs. A graph contains another graph if its set of edges includes all of the edges of the other graph. Let $F$, $G$, and $H$ be three graphs on the same set of $n$ vertices. Assume these graphs are represented by their adjacency matrices. Describe an $O(n^2)$ algorithm to decide whether $G$ contains the union of $F$ and $H\;(F \cup H \subseteq G)$.

Answer:

Compute the entrywise OR of the matrices of $F$ and $H$, producing a $1$ wherever either graph has an edge. Then compare with $G$: wherever the union has a $1$, $G$ must have a $1$ too. The OR is $O(n^2)$, as is the comparison, so $O(2n^2)$ or just $O(n^2)$.
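The two passes can be fused into a single $O(n^2)$ sweep, sketched below (the function name is mine; all arguments are $n \times n$ 0/1 matrices):

```python
def contains_union(F, H, G):
    """Return True iff every edge of F or H is also an edge of G.

    Entry (i, j) of the union is 1 iff F[i][j] or H[i][j] is 1;
    a single O(n^2) sweep checks each such entry against G.
    """
    n = len(G)
    for i in range(n):
        for j in range(n):
            if (F[i][j] or H[i][j]) and not G[i][j]:
                return False      # an edge of the union is missing from G
    return True
```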

Problem 49 - Graph Theory
Spring 2014

A bipartite graph is a graph whose vertices can be partitioned into two subsets such that there is no edge between any two vertices in the same subset. Give an efficient algorithm to determine if a graph $G$ is bipartite.

1. What is the time complexity of your algorithm given the adjacency list representation of the graph?

2. What is the time complexity of your algorithm given the adjacency matrix representation of the graph?

Answer:

Use the 2-coloring algorithm: run BFS, color the start vertex one color, and give every vertex the color opposite to its parent's. If any edge joins two vertices of the same color, the graph is not bipartite.

1. $O(n + m)$: BFS visits every vertex once and scans every adjacency-list entry once.

2. $O(n^2)$: with a matrix, BFS must scan a full row of length $n$ to find each vertex's neighbors.
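A sketch of the BFS 2-coloring check in Python, assuming the graph is given as a dict of adjacency lists (the function name is mine); the outer loop over start vertices handles disconnected graphs:

```python
from collections import deque

def is_bipartite(adj):
    """2-color the graph; return False on any same-color edge. O(n + m)."""
    color = {}
    for s in adj:                        # restart in every component
        if s in color:
            continue
        color[s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in color:
                    color[v] = 1 - color[u]   # opposite of the parent
                    q.append(v)
                elif color[v] == color[u]:    # conflict: odd cycle
                    return False
    return True
```

A 4-cycle is bipartite; a triangle is not.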

Problem 50 - Graph Theory
Fall 2014

Let $G$ be an undirected simple graph with $n$ vertices. For each of the following graphs determine the exact number of edges in $G$ as a function of $n$ and/or other parameters (e.g. $d$, $r$, and $s$). Justify your answers.

1. $G$ is a clique.
2. $G$ is a tree.
3. $G$ is a cycle.
4. $G$ is a $d$-regular graph (the degree of each vertex is exactly $d$).
5. $G$ is a complete bipartite graph that has $r$ vertices in one side and s vertices in the other side such that $r + s = n$. (In a complete bipartite graph, each vertex on one side is adjacent to every vertex on the other side.)

Answer:

1. $\frac{n(n-1)}{2}$. Each of the $n$ vertices is joined to the other $n-1$, and each edge is counted twice. Note this equals $\sum_{i=1}^{n-1} i$, not the $\frac{n(n+1)}{2}$ of the standard series formula.

2. $n-1$

3. $n$

4. $\frac{nd}{2} $

5. $rs $

Problem 51 - Graph Theory
Spring 2015

Let $H$ be a sub-graph of a graph $G$ that contains all the vertices of $G$. That is, every edge in $H$ is also an edge in $G$ but there might be edges in $G$ that are not in $H$. Which of the following statements are always TRUE. Justify your answers.

1. The number of edges in $G$ is greater than or equal to the number of edges in $H$.
2. The maximum degree in $G$ is greater than or equal to the maximum degree in $H$.
3. If $G$ has a cycle then $H$ has a cycle.
4. If $G$ is connected then $H$ is connected.
5. If $G$ is bipartite then $H$ is bipartite.

Answer:

1. Always true. By definition every edge of $H$ is also an edge of $G$, so $H$ cannot have more edges than $G$.

2. Always true. Since $H$'s edges are a subset of $G$'s, $deg_H(v) \le deg_G(v)$ for every vertex $v$, so the maximum degree cannot increase.

3. No. $H$ may omit an edge of the cycle; removing any one edge of $G$'s only cycle leaves $H$ acyclic.

4. No. $H$ may omit edges that $G$ needs for connectivity; in the extreme, $H$ could have no edges at all.

5. Always true. The same two-sided partition of the vertices that works for $G$ works for $H$: removing edges can never create an edge inside one side.

Problem 52 - Graph Theory
Fall 2015

Let $G'$ be the family of all undirected simple graphs $G$ with $n$ vertices for which it is possible to color the vertices of $G$ with the three colors Yellow, Red, and Blue as follows:

For each of the following three statements determine if for all graphs $G \in G'$ (i) the statement is always true, (ii) the statement is sometimes true and sometimes false, or (iii) the statement is always false. Justify your answers.

1. $G$ has a triangle: there exist three vertices $u$, $v$, $w$ such that the edges $(u, v), (v, w), (w, u)$ exist.

2. $G$ is a tree.

3. $G$ is a bipartite graph.

Answer:

1. No

2. Yes

3. Yes

Problem 53 - Graph Theory
Spring 2016

Let $G$ be an undirected graph with $n$ vertices and $m$ edges. Let $C$ be a coloring of the vertices of $G$ with the $k$ colors $1, 2, \cdots, k$ (if $C(v) = j$ then the color of vertex $v$ is $j$). $C$ is a legal coloring of $G$ if $C(u) \ne C(v)$ for every edge $(u, v)$ of $G$.

Describe an efficient algorithm that checks if a coloring $C$ is a legal coloring. What is the time complexity of your algorithm as a function of $n$ and $m$? Justify your answer. Specify the data structure you use to represent the graph.

Answer:

Represent $G$ with adjacency lists. For each vertex $u$, scan its list and check that $C(u) \ne C(v)$ for every neighbor $v$; reject on the first violation. Every list entry is examined once and each edge appears in two lists, so the runtime is $O(n + m)$.
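The edge-by-edge check is a few lines of Python (the function name is mine; `adj` is a list of adjacency lists and `C` maps each vertex to its color):

```python
def is_legal_coloring(adj, C):
    """Return True iff C(u) != C(v) for every edge (u, v). O(n + m):
    each edge is seen twice, once from each endpoint's list."""
    for u in range(len(adj)):
        for v in adj[u]:
            if C[u] == C[v]:
                return False
    return True
```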

Problem 54 - Graph Theory
Fall 2016

The reverse of a directed graph $G = (V, E)$ is another directed graph, $G^R = (V, E^R)$ where $E^R = \{(v \rightarrow u)|(u \rightarrow v) \in E\}$. That is, the set of vertices is the same but the edges are reversed.

Describe an efficient algorithm that, given a representation of $G$ with adjacency lists, outputs the adjacency list representation of $G^R$ . Your algorithm should run in time linear of $n$ and $m$, where $n$ denotes the number of vertices and $m$ denotes the number of edges. Be precise in your answer.

Answer:

Allocate $n$ empty output lists. Scan the input lists once: for every edge $u \rightarrow v$ (i.e. every entry $v$ in $u$'s list), append $u$ to the output list of $v$. Each vertex and each edge is handled exactly once, so the runtime is $O(m + n)$.
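A sketch of this single pass in Python (the function name is mine; `adj[u]` lists the heads of $u$'s outgoing edges):

```python
def reverse_graph(adj):
    """Build the adjacency lists of G^R in O(n + m).

    For each edge u -> v of G, record the edge v -> u in the output.
    """
    n = len(adj)
    rev = [[] for _ in range(n)]   # n empty lists: O(n)
    for u in range(n):
        for v in adj[u]:           # each edge touched once: O(m)
            rev[v].append(u)
    return rev
```

For example, `reverse_graph([[1, 2], [2], []])` returns `[[], [0], [0, 1]]`.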

Problem 55 - Graph Theory
Spring 2017

Two simple (no self loops and no parallel edges) undirected graphs $G$ and $H$ are isomorphic if there exists a 1-1 function $f$ from the vertices of $G$ to the vertices of $H$ such that an edge $(u,v)$ exists in $G$ if and only if the edge $(f(u),f(v))$ exists in $H$.

For each one of the following 5 parts say if $G$ and $H$ (i) are always isomorphic, (ii) could never be isomorphic, or (iii) are sometimes isomorphic. Justify your answers.

(a) Both graphs have the same number of edges. However, $G$ has $n$ vertices while $H$ has $n + 1$ vertices for $n \ge 1$.

(b) Both graphs have $n \ge 2$ vertices and exactly one edge.

(c) Both graphs have $n \ge 2$ vertices and exactly two edges.

(d) All the vertices have exactly one neighbor.

(e) All the vertices have exactly two neighbors.

Answer:

(a) Never isomorphic. Since $|H| \gt |G|$ there's no bijection of vertices and hence no isomorphism.

(b) Always isomorphic. With the same number of vertices and exactly one edge, map the edge's endpoints to the other edge's endpoints and the remaining isolated vertices arbitrarily.

(c) Sometimes isomorphic. It depends which pairs of vertices are connected.

(d) Sometimes isomorphic. The prompt here neglects to mention $n$, the number of vertices. If both graphs have the same number of vertices then yes (each is a perfect matching); otherwise no.

(e) Sometimes isomorphic. Each graph is a disjoint union of cycles; even with the same number of vertices the cycle lengths may differ (e.g. one $6$-cycle versus two triangles).

Problem 56 - Graph Theory
Fall 2017

Let $G = (V,E)$ be a directed graph with $n$ vertices and $m$ edges. For a vertex $v$, the $indegree(v)$ is the number of edges $(u \rightarrow v)$ into $v$ and the $outdegree(v)$ is the number of edges ($v \rightarrow u$) exiting v.

For the following two representations of $G$, give an efficient algorithm to compute $indegree(v)$ of a given vertex $v$. State the running time of each of your algorithms as a function of $n$ and $m$.

(a) $G$ is represented by adjacency lists, where each vertex is represented by a linked list of its outgoing edges.
(b) $G$ is represented by an $n \times n$ adjacency matrix $A$, where $A[i,j] = 1$ if and only if the directed edge $(i \rightarrow j)$ is in G.

Answer:

(a) Traverse the array of adjacency lists, iterating through the nodes in each list (skipping $v$'s own list) and counting the entries equal to $v$; return the total. Runtime is $O(m+n)$.

(b) Normally traversing an adjacency matrix is an $O(n^2)$ operation, but in this case only the edges pointing into $v$ matter. Scan column $v$ of the matrix, count the ones, and return that number; a single column scan makes the runtime $O(n)$.
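Both variants are one-liners in Python (the function names are mine):

```python
def indegree_lists(adj, v):
    """Adjacency lists: count occurrences of v across all lists. O(n + m)."""
    return sum(1 for u in range(len(adj)) for w in adj[u] if w == v)

def indegree_matrix(A, v):
    """Adjacency matrix: sum column v. O(n)."""
    return sum(A[u][v] for u in range(len(A)))
```

For the graph with edges $0 \rightarrow 1$, $0 \rightarrow 2$, $1 \rightarrow 2$, both functions report that vertex 2 has indegree 2.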

Problem 57 - P, NP and NP-Complete
Spring 2011

The Travelling Salesperson Problem (TSP) is a well-known NP-Complete problem. If you prove either that $TSP \in P$ or that $TSP \notin P$ you will get the equivalent of the Nobel prize in Computer Science. Why?

Answer:

Because TSP is NP-Complete, either result settles the $P$ vs. $NP$ question. If $TSP \in P$ then $P = NP$, and all NP problems could be solved in polynomial time: there would be easy/fast algorithms for hard problems like those underlying encryption. If $TSP \notin P$ then $P \ne NP$. It's strongly believed that $P \ne NP$, but there has been no successful formal proof.

Problem 58 - P, NP and NP-Complete
Fall 2011

Let $G$ be a directed graph with $n$ vertices. A Hamiltonian path in the graph is a path of length $n$ that includes each vertex exactly once. Deciding whether a graph has a Hamiltonian path is an NP-Complete problem.

Consider a variation of this problem in which the start and end vertices are told to you. That is, v and w are given, and the question is whether $G$ has a Hamiltonian path from $v$ to $w$.

1. Prove that this variation of the problem is in the class NP.

2. Is this problem NP-Complete?

3. If you answered 'yes', prove it by doing a reduction, and if you answered 'no' outline an algorithm for the problem.

Answer:

1. A problem is in NP if a proposed solution can be verified in polynomial time. Given a claimed Hamiltonian path from $v$ to $w$, we can check in polynomial time that it starts at $v$, ends at $w$, visits every vertex exactly once, and uses only edges of $G$. Hence this variation is in NP.

2. Yes. HAMPATH is a known NP-Complete problem and it reduces to this variation in polynomial time (part 3). Combined with membership in NP from part 1, that makes this variation NP-Complete.

3. Reduction from HAMPATH: given a graph $G$, query the fixed-endpoints problem once for every ordered pair $(v, w)$ of distinct vertices, which is $n(n-1)$ pairs. $G$ has a Hamiltonian path if and only if at least one query answers yes, and the reduction makes only polynomially many calls.

Problem 59 - P, NP and NP-Complete
Spring 2012

Assume that there are proofs for the following four statements. Can you infer anything about $A$, $B$, $C$, and $D$? Explain your answer.

1. $X$ is a polynomially solvable problem and $A$ is polynomially reducible to $X$.

2. $X$ is an NP-Hard problem and $B$ is polynomially reducible to $X$.

3. $X$ is a polynomially solvable problem and $X$ is polynomially reducible to $C$.

4. $X$ is an NP-Hard problem and $X$ is polynomially reducible to $D$.

Answer:

1. $A$ can also be solved in polynomial time: translate an instance of $A$ to $X$ and solve $X$.

2. Nothing. Reducing $B$ to a hard problem places no lower bound on $B$; any problem, however easy, reduces to an NP-Hard one.

3. Nothing. $X$ being easy and reducible to $C$ does not bound $C$ from above; $C$ could still be arbitrarily hard.

4. $D$ is NP-Hard: it is at least as hard as $X$, since solving $D$ would solve $X$.

Problem 60 - P, NP and NP-Complete
Fall 2012

Let $G$ be a simple undirected graph. An independent set is a set of vertices with no edges among them. A clique is a set of vertices that contains all the possible edges among them. Let $A$ be an algorithm whose complexity is $T(n)$ that finds the maximum independent set in a graph.

Describe an efficient algorithm to find the maximum clique in the graph based on $A$. What is the complexity of your algorithm as a function of $n$ and $T(n)$? Explain your answer. The complexity should be polynomial with both $n$ and $T(n)$.

Answer:

A maximum clique in $G$ is exactly a maximum independent set in the complement graph $\bar{G}$, the graph with an edge wherever $G$ has none. Build $\bar{G}$ by inverting the adjacency matrix off the diagonal, which is $O(n^2)$, then run $A$ on $\bar{G}$. Total runtime $O(n^2 + T(n))$, polynomial in both $n$ and $T(n)$.
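A sketch of the reduction, treating the maximum-independent-set algorithm $A$ as a black-box function passed in (both names are mine; `A_matrix` is the 0/1 adjacency matrix of $G$):

```python
def max_clique_via_independent_set(A_matrix, max_independent_set):
    """Reduce maximum clique to maximum independent set.

    A set is a clique in G iff it is independent in the complement
    graph. Building the complement costs O(n^2); total O(n^2 + T(n)).
    """
    n = len(A_matrix)
    # Complement: flip every off-diagonal entry, keep the diagonal 0.
    comp = [[0 if i == j else 1 - A_matrix[i][j] for j in range(n)]
            for i in range(n)]
    return max_independent_set(comp)
```

For the path 0-1-2, the complement has the single edge 0-2, whose maximum independent set {0, 1} is indeed a maximum clique of the path.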

Problem 61 - P, NP and NP-Complete
Spring 2013

The longest-simple-cycle problem is the problem of finding a simple cycle (i.e. no repeated vertices) of maximum length in a graph.

1. State the decision problem of the longest-simple-cycle problem.

2. Prove that the decision problem in part (a) is in NP.

3. Assume you know how to reduce the Hamiltonian Cycle Problem to the longest-simple-cycle problem. What can you conclude?

Answer:

1. Given an undirected graph $G$ and an integer $k$, does $G$ have a simple cycle length of at least $k$?

2. Let $(G,k)$ be an instance of the longest simple cycle (LSC). Given a certificate of proof $y$, which is a sequence of vertices, we can simply scan through the graph $G$ in $P$ time to verify that $y$ is a cycle, that no vertex appears more than once, and that $|y| \ge k$.

3. That LSC is NP-Hard, since HAMCYC is a known NP-Complete problem; together with the NP membership from part 2, LSC is NP-Complete.

Problem 62 - P, NP and NP-Complete
Fall 2013

Consider the sets P, NP, and NPC. Mark each of the following six statements as true, false, or unknown.

$$P \subseteq NP$$ $$NP \subseteq P$$ $$P \subseteq NPC$$ $$NPC \subseteq P$$ $$NP \subseteq NPC$$ $$NPC \subseteq NP$$

Answer:

$P \subseteq NP$ true

$NP \subseteq P$ unknown (false if $P \ne NP$, true if $P = NP$)

$P \subseteq NPC$ unknown (false if $P \ne NP$, true if $P = NP$)

$NPC \subseteq P$ unknown (false if $P \ne NP$, true if $P = NP$)

$NP \subseteq NPC$ unknown (false if $P \ne NP$, true if $P = NP$)

$NPC \subseteq NP$ true

Problem 63 - P, NP and NP-Complete
Spring 2014

If I would describe a new problem $Q$ to you, explain how you would prove whether $Q$ belongs to each of the following classes: (i) P, (ii) NP, and (iii) NP-Complete.

Answer:

1. Construct an algorithm that can solve it in worst-case $P$ time.

2. Validate a solution in $P$ time.

3. Show $Q$ is in NP (as in part ii) and reduce a known NP-Complete problem like SAT to $Q$ in polynomial time.

Problem 64 - P, NP and NP-Complete
Fall 2014

Alice claims that she can solve the SAT problem with an algorithm whose running time is $O(n^{12})$. Bob claims that he has a proof that the Travelling Salesperson Problem (TSP) cannot be solved with a polynomial time algorithm. Can both claims be correct? Can both claims be incorrect? Explain.

Answer:

Both can't be correct: SAT and TSP are both NP-Complete, and a polynomial-time algorithm for one NP-Complete problem yields a polynomial-time algorithm for all of them, contradicting Bob's proof. Both can be incorrect: it is consistent that SAT has no $O(n^{12})$ algorithm while TSP is still solvable by some higher-degree polynomial time algorithm.

Problem 65 - P, NP and NP-Complete
Spring 2015

Let $A$, $B$, and $C$ be three decision problems. Suppose that there exist polynomial time reductions from $A$ to $B (A \le_P B)$ and from $B$ to $C (B \le_P C)$. Let $P$ be the class of all polynomial time problems and let NPH be the class of all NP-Hard problems.

Which of the following eight combinations are impossible? Justify your answer.

1. $A \in P, B \in P, C \in P$.

2. $A \in P, B \in P, C \in NPH$.

3. $A \in P, B \in NPH, C \in P$.

4. $A \in P, B \in NPH, C \in NPH$.

5. $A \in NPH, B \in P, C \in P$.

6. $A \in NPH, B \in P, C \in NPH$.

7. $A \in NPH, B \in NPH, C \in P$.

8. $A \in NPH, B \in NPH, C \in NPH$.

Answer:

Assuming $P \ne NP$, combinations 3, 5, 6, and 7 are impossible. A polynomial-time reduction from an NP-Hard problem to a problem in $P$ would make that NP-Hard problem polynomially solvable, implying $P = NP$: in 5 and 6 this happens through $A \le_P B$ ($A$ NP-Hard, $B \in P$), and in 3 and 7 through $B \le_P C$ ($B$ NP-Hard, $C \in P$). The other combinations, including 2 and 4, are possible, since reducing an easy problem to a harder one raises no contradiction.

Problem 66 - P, NP and NP-Complete
Fall 2015

Justify your answers for both parts of the problem.

1. Problem $A$ has a polynomial time solution. Can you learn something about Problem $B$ by finding a polynomial-time reduction from $A$ to $B$ or from $B$ to $A$?

2. Problem $C$ is an NP-Hard problem. Can you learn something about Problem D by finding a polynomial-time reduction from $C$ to $D$ or from $D$ to $C$?

Answer:

1. A reduction $B \le_P A$ shows that $B$ is also solvable in polynomial time. A reduction $A \le_P B$ tells you nothing new about $B$, since an easy problem reduces to any problem at least as hard.

2. A reduction $C \le_P D$ shows that $D$ is NP-Hard: $D$ is at least as hard as $C$. A reduction $D \le_P C$ tells you nothing new about $D$.

Problem 67 - P, NP and NP-Complete
Spring 2016

Let $P$ and $Q$ be two NP-Complete problems.

1. Is it possible to find a polynomial time algorithm for $P$ and at the same time prove that it is impossible to solve Problem $Q$ with a polynomial time algorithm? Justify your answer.

2. Is it possible to prove that there is no polynomial time reduction from Problem $P$ to Problem $Q$ or from Problem $Q$ to Problem $P$? Justify your answer.

Answer:

1. No, because if there is a $P$ time algorithm for one $NPC$ problem there is a $P$ time algorithm for all $NPC$ problems.

2. No. All NP-Complete problems are polynomially reducible to one another, so reductions exist in both directions; one cannot prove the nonexistence of a reduction that actually exists.

Problem 68 - P, NP and NP-Complete
Fall 2016

1. Circle all that we know to be true about the Traveling Salesman problem.

in $P$

in $NP$

$NP-complete$

2. Suppose that someone gives you a polynomial-time (correct) algorithm for the SAT problem. What do we now know about the Traveling Salesman problem? Circle all that apply.

in $P$

in $NP$

$NP-complete$

3. Suppose that (instead) someone gives you a polynomial-time (correct) algorithm for the Minimum Spanning Tree problem. What do we now know about the Traveling Salesman problem? Circle all that apply.

in $P$

in $NP$

$NP-complete$

Justify your answers.

Answer:

1. $NP$, $NPC$. TSP is NP-Complete, hence also in NP; membership in $P$ is unknown unless $P = NP$.

2. $P$, $NP$, $NPC$. SAT is NP-Complete, so a polynomial-time algorithm for it would prove $P = NP$, and then TSP (like every NP problem) is in all three classes.

3. $NP$, $NPC$. Minimum Spanning Tree is already known to be in $P$ (e.g. Kruskal's or Prim's algorithm), so a polynomial-time MST algorithm tells us nothing new about TSP.

Problem 69 - P, NP and NP-Complete
Spring 2017

Problem $A$ is an NP-Complete problem and problem $B$ can be solved with a polynomial time algorithm. Alice and Bob do not know these facts. Instead, Alice is trying to find a polynomial time reduction from problem $A$ to problem $B$ while Bob is trying to find a polynomial time reduction from problem $B$ to problem $A$. Who has a fair chance to succeed and who will probably fail? Justify your answers.

Answer: Alice: $A_{NPC} \le_P B_P$; Bob: $B_P \le_P A_{NPC}$.

Bob has a fair chance: reducing an easy problem to a hard one is always possible, and since $B$ is polynomially solvable, the reduction can even solve $B$ outright and map its instances to fixed yes- and no-instances of $A$. Alice will probably fail: a polynomial-time reduction from an NP-Complete problem to a polynomially solvable one would imply $P = NP$. Of course, if $P = NP$ then $P = NP = NPC$ and both reductions exist.

Problem 70 - P, NP and NP-Complete
Fall 2017

You can win a million dollars if you resolve the open problem "Is $P = NP$ or not?"

In a few sentences describe two strategies to win the prize. (Think about both of the possible answers to the question.)

Answer:

Strategy one: Find a $P$ time algorithm for an $NPC$ problem. This would prove that $P = NP$ because you've found a way to solve a "hard" problem quickly. This would be the easier way to prove it, but most believe $P \ne NP$ so it's unlikely to be accomplished.

Strategy two: Prove it's impossible to find a $P$ time algorithm for $NPC$ problems. This is much harder because you have to rule out the possibility of any method or strategy ever solving a hard problem quickly.