Chris Pollett > Old Classes > CS254
HW1 Solutions Page

    1. `n^{100} = o(2^n)`: To see this, notice that `n^{100} = 2^{100 \log n} < 2^{n/2}` for `n > 2^{13} = 8192`. Next, notice `2^{n/2} = o(2^n)`: this means that for any `\epsilon > 0` there is an `N` such that for `n > N`, `2^{n/2} < \epsilon 2^n`. Rewriting this inequality, we have `1/\epsilon < 2^{n/2}`, which holds for `n > 2\log(1/\epsilon)`. So for `n > \max(2\log(1/\epsilon), 8192)` we have `n^{100} < 2^{n/2} < \epsilon 2^n`, which is what we need to show `n^{100} = o(2^n)`.
    2. `\sum_{i=1}^n i = \Omega(n^2)`: `\sum_{i=1}^n i = \Omega(n^2)` means `n^2 = O(\sum_{i=1}^n i)`, so we need to show there exist a `c > 0` and an `N` such that `n^2 < c (\sum_{i=1}^n i)` for all `n > N`. Notice `\sum_{i=1}^n i \geq \sum_{i= \lfloor n/2 \rfloor}^n i \geq \sum_{i=\lfloor n/2 \rfloor}^n \lfloor n/2 \rfloor \geq (\lfloor n/2 \rfloor)^2 \geq n^2/16`, as `\lfloor n/2 \rfloor \geq (n-1)/2 \geq n/4` for `n > 1`. So for `n > 1`, `16(\sum_{i=1}^n i) > n^2`.
    3. `n^2+n+1 = \Theta(3n^2)`: To see `n^2+n+1 = O(3n^2)`, notice for `n > 1`, `1 < n < n^2`, so `n^2 + n + 1 < n^2 + n^2 + n^2 = 3n^2`. To see `3n^2 = O(n^2+n+1)`, notice `3n^2 < 3(n^2+n+1)` for `n > 0`.
    4. `\sum_{i=1}^n \log i = O(n \log n)`: This holds as `\sum_{i=1}^n \log i \leq \sum_{i=1}^n \log n = n \log n` for all `n > 0`.
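The four bounds above can be spot-checked numerically with Python's exact big integers. This is not a proof, of course, just a check at the thresholds the arguments use; the ranges tested are our own choice.

```python
# Numeric spot checks (not proofs) of the four asymptotic bounds above,
# using Python's exact arbitrary-precision integers.
import math

# 1. n^100 < 2^(n/2) at the threshold n = 2^13 = 8192 used in the proof.
n = 8192
assert n**100 < 2**(n // 2)

# 2. 16 * sum_{i=1}^n i > n^2 for n > 1.
for n in range(2, 200):
    assert 16 * sum(range(1, n + 1)) > n * n

# 3. n^2 + n + 1 < 3n^2 for n > 1, and 3n^2 < 3(n^2 + n + 1) for n > 0.
for n in range(2, 200):
    assert n * n + n + 1 < 3 * n * n
for n in range(1, 200):
    assert 3 * n * n < 3 * (n * n + n + 1)

# 4. sum_{i=1}^n log i <= n log n for n > 0 (log is increasing).
for n in range(1, 200):
    assert sum(math.log(i) for i in range(1, n + 1)) <= n * math.log(n) + 1e-9
```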
  1. A two-tape Turing machine that computes the function from `{0,1}^{\star}` to `{0,1}` which has value 1 if the string contains exactly three 0's and 0 otherwise can be defined as: `M = (Q, \Gamma, \delta)` where `Q = {q_{start}, q_1, q_2, q_3, q_{halt}}` (to make the notation simpler we also call `q_{start}` `q_0`), `\Gamma = {\Delta, \square, 0, 1}`, and `\delta: Q \times \Gamma^2 \rightarrow Q \times \Gamma \times \{L,S,R\}^2` is defined by
    `\delta(q, \Delta, s) = (q, s, R, R)` where `q` is any of `q_0, q_1, q_2, q_3`, and `s` is any of `\Delta, \square, 0, 1`.
    `\delta(q, 1, s) = (q, s, R, S)` where `q` is any of `q_0, q_1, q_2, q_3` and `s` is any of `\Delta, \square, 0, 1`.
    `\delta(q_i, 0, s) = (q_{i+1}, s, R, S)` where `q_i` is any of `q_0, q_1, q_2` and `s` is any of `\Delta, \square, 0, 1`.
    `\delta(q_i, \square, s) = (q_{halt}, 0, R, S)` where `q_i` is any of `q_0, q_1, q_2` and `s` is any of `\Delta, \square, 0, 1`.
    `\delta(q_3, 0, s) = (q_{halt}, 0, R, S)` where `s` is any of `\Delta, \square, 0, 1`.
    `\delta(q_3, \square, s) = (q_{halt}, 1, R, S)` where `s` is any of `\Delta, \square, 0, 1`.
    The basic idea is that we use the state `q_i` to indicate that we have currently read `i` 0's from the input. We scan the input tape left to right; if we see more than three 0's, or we see the first `\square` after the input before seeing three 0's, we enter the halt state and write a 0 to the output tape. Otherwise, if we are in state `q_3` when we see the first `\square` after the input, we write 1 to the output tape and enter the halt state.
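The behavior of this transition table can be exercised with a short simulator. The function below is our own modeling of the machine as a left-to-right sweep (the tape-2 head bookkeeping is folded into the return value); the name `count_three_zeros` and the test strings are illustrative, not part of the formal definition.

```python
# A small simulator (our own encoding) of the two-tape machine above.
# States q0..q3 count how many 0's have been read so far; the returned
# character is what the machine writes to the output tape before halting.

def count_three_zeros(x):
    """Return '1' if x in {0,1}* contains exactly three 0's, else '0'."""
    state = 0  # q_0 = q_start
    for c in x:  # sweep the input tape left to right
        if c == '1':
            continue  # delta(q, 1, s) = (q, s, R, S): state unchanged
        # c == '0'
        if state == 3:
            return '0'  # delta(q_3, 0, s) -> q_halt, write 0
        state += 1  # delta(q_i, 0, s) = (q_{i+1}, s, R, S)
    # first blank square after the input:
    # delta(q_3, square, s) writes 1; delta(q_i, square, s), i < 3, writes 0
    return '1' if state == 3 else '0'

assert count_three_zeros('101010') == '1'
assert count_three_zeros('000') == '1'
assert count_three_zeros('0000') == '0'
assert count_three_zeros('11') == '0'
```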
  2. To show `n \log n` is time constructible, we need to exhibit a machine `M` such that on input `x`, where `n = |x|`, (i) runs exactly `n \log n` steps and (ii) writes `n \log n` on its output tape. To achieve both (i) and (ii), we break the construction of an 8-tape TM `M` into the following sub-tasks:
    (a) Compute `\log n` in binary onto tape 5, using tapes 2, 3, 4 as auxiliary tapes for this process. We will use exactly `4n` time to do this.
    (b) While doing the above, compute `\log n - 6` in unary onto tape 6.
    (c) Use an expanded alphabet to compress `m` squares into one, as we did in the linear speed-up, and copy `\log n` from tape 5 to tape 2 in this compressed alphabet.
    (d) While copying tape 5 to tape 2, also copy tape 4 to tape 3; do this so that this step and the previous step together take exactly `n` time.
    (e) Add the value of tape 2, the compressed `\log n`, `n` times, to compute `n \log n` in compressed binary on tape 7. To count the `n` additions, we can do a single sweep of tape 3. We can do this in fewer than `\epsilon n \log n < n \log n - 6n` steps, using the linear speed-up techniques from class and our arguments about the time needed to do repeated adding from our time constructibility of `31(n+2)` result from class.
    (f) Copy the compressed `n \log n` to the output tape 8 in `n` steps.
    (g) We have an additional routine that runs while (e)-(f) above are running: we sweep tape 6 once right-to-left, and for each move on tape 6, we do a sweep of tape 4, alternating between right-to-left and left-to-right. This takes exactly `n(\log n - 6) = n \log n - 6n` steps. When this sweeping is done we halt.

    To complete the proof, we need to describe how to do (a) and (b) in more detail. To perform (a), we assume that for each symbol `y \in {\Delta, \square, 0, 1}`, the symbol `\bar{y}` belongs to our alphabet. We first move right off of the start of tape symbols for tapes 1 and 2. Then we move left-to-right on the input tape; for every two steps that we move right on the input tape, we write a 1 on tape 2 (the very first time, though, we write `\bar{1}`). While we are doing this, for every right move on the input tape, we write a 1 on tape 4 and move right. After a fixed number `m` of `1`'s have been written -- where we specify `m` in a moment -- we write `\bar{1}` instead of `1`. If we determine that the input tape has length less than `m` (this can only happen for constantly many strings), then our machine goes into a "lookup mode" to write `n \log n` on the output tape in exactly the remaining time it has. When the input tape reaches a `\square`, we change the last `1`'s written to tapes 2 and 4 to `\bar{1}`. At the end of what has been described so far, the read head is at the first square after the input, tape 2 has `\lfloor n/2 \rfloor` `1` or `\bar{1}` symbols on it, only the first and last of which are `\bar{1}`'s, and tape 4 has `n` `1` or `\bar{1}` symbols on it, only the `m`th and last of which are `\bar{1}`'s. When we see a `\square` on the input tape, we initialize tape 5 so that it has `1` on it.

    We next sweep right-to-left on tape 4 until we get to the square at location `m`. When we get to this location, we change directions and move left-to-right towards the rightmost `\bar{1}`, and finally return right-to-left towards the leftmost `\bar{1}`. While we are doing this, we move right-to-left between and including the `\bar{1}` symbols on tape 2; for every second move of tape 2's head, we move right on tape 3 and write a `1`. As tape 2's head moves left, we replace `1`'s and `\bar{1}`'s with `\square`'s, erasing tape 2. The first and last time we write a `1` on tape 3, we instead write a `\bar{1}`. This process takes `\lfloor n/2 \rfloor` steps, after which tape 2's head is on a blank square next to the start of tape symbol, and tape 3 has `\lfloor n/4 \rfloor` `1`'s or `\bar{1}`'s on it, only the first and last of which are `\bar{1}`'s. When we read the leftmost `\bar{1}` from tape 2, we add 1 in binary to tape 5.

    We can interchange the roles of tape 2 and tape 3 and do the same kind of process to halve the length of the string on tape 3 onto tape 2, then add 1 to tape 5. Continuing this process until the strings get to length 1, we will have added 1 to tape 5 `\log n` times, so tape 5 will have `\log n` in binary written on it. The total time for performing (a) so far is the time for the back-and-forth movements of tapes 1, 2, 3, plus the time to compute carries when adding 1 `\log n` times. The first term has the form `n + \lfloor n/2 \rfloor + \lfloor n/4 \rfloor + \cdots + 2 + 1 + c = 2n + c'` where `c`, `c'` are positive integer constants. The second term is of size less than `\log^2 n`. For sufficiently large `n`, the total of these two terms is less than `4n`. So this computation will finish before we finish our sweeping on tape 4, and we can use tape 4 and the choice of `m` to time exactly `4n` steps.

    We can calculate (b) while computing (a). When we are about to add 1 to tape 5 for the sixth time, we also write a `\bar{1}` to tape 6. Thereafter, whenever we add a 1 to tape 5, we add a 1 to tape 6 and move right. Rather than write the last `1` of `\log n - 6` in unary, we write `\bar{1}`.

    We have now completed our description of how to do (a)-(g) above. Adding the runtimes of these steps gives `6n + (n \log n - 6n) = n \log n`, and the above operations write the desired `n \log n` to the output tape.
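Stripped of the tape-motion bookkeeping that makes the timing exact, the arithmetic idea behind steps (a)-(c) is just: repeatedly halve a unary string of length `n`, adding 1 to a binary counter per halving, until the string has length 1. A minimal sketch of that idea (the function name is ours):

```python
# Sketch of the counting idea behind step (a): halve a unary string,
# incrementing a binary counter (tape 5) each time, until length 1.
# This ignores the sweeping and compression that make the timing exact.

def unary_log(n):
    length = n   # length of the current unary string (tapes 2 / 3)
    count = 0    # binary counter on tape 5
    while length > 1:
        length //= 2   # keep every second symbol when copying across
        count += 1     # add 1 in binary to tape 5
    return count

assert unary_log(8) == 3      # 8 -> 4 -> 2 -> 1
assert unary_log(1024) == 10  # exactly log2(1024)
```

For `n` not a power of 2 this computes `\lfloor \log n \rfloor`, which is the usual convention when writing `\log n` for time bounds.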

  3. Let `M` be a k-tape RAM TM. Let `x` be an input and let `n = |x|`. We can simulate `M`'s actions with a `(k+2)`-tape offline TM (our usual definition of TM from class), called `N`, as follows. We will use tapes `3, ..., k+2` of `N` to simulate the behavior of `M`'s `k` tapes. In particular, tape `k+2` will correspond to `M`'s tape `k`, `M`'s output tape, and will be used as `N`'s output tape. Tape 2 of `N` will be used as a counter, and tape 1 will be a read-only input tape. At the start of simulating a step of `M`, we ensure the counter tape is initialized to 0 and `N`'s first tape head is on the start of tape symbol. To simulate one step of `M`, `N` looks at its tape heads on tapes `3, ..., k+2` and does the same action `M` would do if its tape heads had the corresponding data on tapes `1, ..., k`. After simulating this step, `N` resets its counter tape to 0 and rewinds its read-only input tape. In the case where `M` does not enter its special query state, simulating one step involves no action for `N` on tapes 1 and 2, and only involves doing the corresponding step on tapes `3, ..., k+2`, so takes 1 step of `N`. In the case where `M` enters the special query state, `N` (a) moves right on its input tape, and (b) compares the value on its counter tape with the value on tape 3 (which corresponds to tape 1 of `M`, `M`'s query tape). While these are not equal, `N` adds 1 in binary to its counter tape and redoes (a) and (b). When the counter tape matches the query tape, `N` copies the value from its tape 1 as the query response to tape 3, the query tape, and then does the end-of-step reset mentioned above. In the worst case, simulating a step has to add 1 in binary `|x|` times, together with a constant amount of additional work. Using the same argument as we did in class to show that `31(n+2)` is time constructible, we can upper bound the time to do this by `15(|x|+2)`. So simulating `T(|x|)` steps of `M` will take at most `15(|x|+2)T(|x|)` steps of `N`.
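The heart of the simulation, answering one random-access query by a sequential scan with a counter, can be sketched as follows. This is our own modeling (the function name, the `'>'` start-of-tape marker, and the move counting are illustrative), not the formal machine:

```python
# Sketch of how N answers one random-access query of M: rewind the input
# head, then step right while incrementing a counter until the counter
# equals the queried index (held on M's query tape).

def answer_query(x, query_index):
    """Return the queried input symbol and the number of head moves used."""
    moves = 0
    counter = 0    # N's counter tape, reset to 0 before the query
    head = 0       # input head, rewound to the start of tape symbol
    while counter != query_index:  # (b) compare counter tape to query tape
        head += 1                  # (a) move right on the input tape
        counter += 1               # add 1 in binary to the counter tape
        moves += 1
    return x[head], moves          # copy x[head] back to the query tape

symbol, moves = answer_query('>0110', 3)   # '>' marks the start of tape
assert symbol == '1'                       # 3rd input symbol of 0110
assert moves <= len('0110') + 1            # at most |x|+1 head moves
```

Each of the at most `|x|` counter increments is a binary add-1, which is what the `15(|x|+2)` per-step bound in the proof accounts for.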
  4. Let `ONE` denote the language consisting of encodings `\alpha'` of TMs which accept only one string (by only one string, I mean exactly one string). Suppose `ONE` were computable. We will show this implies `HALT` is computable. To see this, define `f` to be the function which, when given input `\langle \alpha, x \rangle`, outputs the code for a TM `\alpha'` which operates as follows: First, it checks if its input is the string 1; if not, it rejects. If its input is the string 1, then `\alpha'` simulates `\alpha` on input `x` and outputs 1 if this simulation halts. The mapping of `\langle \alpha, x \rangle` to `\alpha'` involves a fixed number of tweaks to the string `\langle \alpha, x \rangle` and so is computable. Further, `\langle \alpha, x \rangle \in HALT` iff `f(\langle \alpha, x \rangle) = \alpha' \in ONE`. So if `ONE` were computable by some `M_{ONE}`, then `M_{ONE}(f(\langle \alpha, x \rangle))` could be used to compute `HALT`, a contradiction, as we showed in class `HALT` is not computable.
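The construction of `\alpha'` can be mirrored in code by modeling machines as functions; the Python closure below is our own illustration of the reduction (it shows the shape of `f`, not an actual TM-code transformation):

```python
# Illustration of the reduction: machines modeled as Python functions.
# f, alpha, x, alpha' are the names from the proof; the modeling is ours.
# Given (alpha, x), f builds alpha', which accepts only the string '1',
# and only if alpha halts on x.

def f(alpha, x):
    def alpha_prime(w):
        if w != '1':
            return 0     # reject any input other than the string 1
        alpha(x)         # simulate alpha on x; this may run forever
        return 1         # alpha halted on x, so accept the string 1
    return alpha_prime

# If alpha halts on x, alpha' accepts exactly the one string '1', so
# alpha' is in ONE; if alpha runs forever on x, alpha' accepts nothing.
halting_alpha = lambda x: 0          # a machine that always halts
alpha_prime = f(halting_alpha, '101')
assert alpha_prime('1') == 1
assert alpha_prime('0') == 0
```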

Return to homework page.