HW4 Solutions Page

  1. Suppose a `k`-tape machine `M` recognizes a language `L` using at most `c \cdot f(n)` time steps for some constant `c`, so `L in DTIME(f(n))`. We want to show there is a block-respecting TM `M'` that also recognizes `L` using at most `c' \cdot f(n)` time steps for some constant `c' \geq c`. To define `M'`'s alphabet `Gamma'`, let `Gamma` be `M`'s alphabet. Define `Gamma''` as `Gamma \cup {ul{a} | a in Gamma}`, where `ul{a}` denotes an underlined copy of `a`. Then define `Gamma' := Gamma'' \cup Gamma'' \times Gamma'' \times Gamma''`, so a tape square of `M'` can store either a single symbol or a triple of symbols from `M`'s tape alphabet. Our simulating machine `M'` will have `2k+1` tapes: the first `k` correspond to the `k` tapes of `M`, the `k+1`st tape serves as a clock, and the remaining `k` tapes are used for copying.

    As a first step, imagine that the original machine `M` had been working over the expanded alphabet `Gamma''`, with the same transitions on each underlined symbol `ul{a}` as on `a`. This modified machine `M''`, on input `x`, would produce exactly the same output as `M`. In fact, it would produce the same output as `M` on `x` even if the input tape held `x` with its `sqrt(f(n))`th symbol underlined. We will show how `M'`, a block-respecting machine, can simulate `M` on this kind of input.

    `M'` will spend its first `2 \cdot sqrt(f(n))` steps locating the underlined symbol on the input, writing an underlined blank symbol at the `sqrt(f(n))`th tape square of its `k+1`st tape, and then rewinding both tapes. During the simulation, the head on the `k+1`st tape will sweep right to the underlined symbol, then left to the start-of-tape symbol, and so on. The times when this head hits these boundaries are the only times at which `M'` is allowed to cross from one block of `sqrt(f(n))` tape squares on a tape to another.

    Let `m = sqrt(f(n))`. We imagine `M`'s and `M'`'s work tapes split into blocks of size `m`. Let `b_i` denote the contents of the `i`th such block on one of the original machine `M`'s tapes at some time step of the simulation, and let `b_{i,j}` denote the `j`th square in such a block. Our simulation ensures that at the simulated time steps of `M` that are multiples of `m`, the tape squares of the block containing `M'`'s tape head hold the triples `(b_{i-1, m-j}, b_{i,j}, b_{i+1, m-j})`. In `m` steps, the original machine `M` can move at most into the `i-1`th or the `i+1`st block on this tape, so for the next `m` steps `M'`'s `i`th block has all the data needed to carry out the simulation without moving into adjacent blocks. `M'`'s simulation of `m` steps is thus as follows: for each tape `j` with current block `b_v`, simulate `m` steps of `M` entirely within block `v` of `M'`; then spend `c'' \cdot m` steps (waiting for the `k+1`st tape to finish its sweep if we finish early) copying the top coordinates of the `v`th block onto the `k+1+j`th auxiliary tape and from there into `M'`'s `v-1`th block, and copying the bottom coordinates onto the auxiliary tape and from there into the `v+1`st block. Finally, at the end of the simulation of these `m` steps, we move each of `M'`'s tape heads to the same block and position that the corresponding head of `M` is in. Hence, simulating `m` steps of `M` takes some constant times `m` steps of `M'`, so simulating all `c \cdot f(n)` steps of `M` takes at most a constant factor longer, i.e., time `c' \cdot f(n)`.
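
    The following toy Python sketch (illustrative only; the block size `m = 4` and the integer "tape" stand in for `sqrt(f(n))` and `M`'s actual tape contents) checks the key invariant: when square `j` of the current block stores the triple `(b_{i-1, m-j}, b_{i,j}, b_{i+1, m-j})`, every square the head could touch in `m` steps is available inside that block.

      # 0-indexed version of the triple invariant from the proof above:
      # square j of block i holds (b_{i-1, m-1-j}, b_{i,j}, b_{i+1, m-1-j});
      # the neighbor blocks are stored in reversed order, as in the construction.
      m = 4                                  # block size, standing in for sqrt(f(n))
      tape = list(range(3 * m))              # blocks i-1, i, i+1 of one of M's tapes

      def blocks(tape, m):
          return [tape[k:k + m] for k in range(0, len(tape), m)]

      b = blocks(tape, m)                    # b[0], b[1], b[2]
      augmented = [(b[0][m - 1 - j], b[1][j], b[2][m - 1 - j]) for j in range(m)]

      # Starting inside block i, m steps reach at most blocks i-1, i, i+1,
      # and every one of those squares is stored somewhere in the block:
      stored = {s for triple in augmented for s in triple}
      assert stored == set(tape)
      print("m steps of M never need data outside the augmented block")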

  2. This problem had two parts.
    1. We want to show that if `L_1` is logspace reducible to `L_2` then `L_1` is Karp reducible to `L_2` (i.e., the reduction is p-time computable). There are at most `2^{c \log |x|} = |x|^c` configurations of a logarithmic space machine on some input `x`, for a suitable constant `c`, and a terminating computation never repeats a configuration. So if a logspace reduction terminates, it must do so in polynomial time. Hence a logspace reduction is computable in polynomial time.
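      To spell out the count (a sketch assuming, for concreteness, a binary work alphabet): a configuration is determined by the machine's state, its input-head position, its work-head position, and its work-tape contents, so the number of configurations is at most `|Q| \cdot (|x|+2) \cdot c \log |x| \cdot 2^{c \log |x|} = O(|x|^{c+2})`, still polynomial in `|x|`.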
    2. We want to show the reduction of any NP language to TMSAT is logspace computable. Suppose `L in NP`; then there is a deterministic machine `M` running in `p(n)` time such that `x in L` iff there exists a `y in {0,1}^{q(|x|)}` with `M(x,y) = 1`. `M`'s encoding `|~M~|` is finite, so it can be stored in a finite number of states of a TM. Given `x`, the values `p(|x|)` and `q(|x|)` can be computed in logspace using the grade-school algorithm for multiplication, as the number of rows we have to add together to get the `i`th bit is at most the length of `|x|` written in binary, which is logarithmic in `|x|`; the unary outputs `1^{p(|x|)}` and `1^{q(|x|)}` go to the write-only output tape, which does not count toward the space bound. Thus the whole map from `x` to an instance of TMSAT, `\langle |~M~|, x, 1^{p(|x|)}, 1^{q(|x|)} \rangle`, i.e., a simple concatenation of these pieces, is logspace computable, as sketched below.
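      A hedged Python sketch of the space argument (the bit-list encoding and function names are invented for illustration): to emit the bits of a product one at a time, the grade-school method only needs the loop indices and a small carry, never the whole product.

        def mult_bits(a_bits, b_bits):
            """Yield bits of a*b, low-order first, using only O(log)-size state."""
            carry = 0                          # never exceeds the number of rows
            for i in range(len(a_bits) + len(b_bits)):
                s = carry + sum(a_bits[j] * b_bits[i - j]
                                for j in range(len(a_bits))
                                if 0 <= i - j < len(b_bits))
                yield s & 1                    # the ith bit of the product
                carry = s >> 1

        def to_bits(v):                        # binary, low-order bit first
            return [int(c) for c in bin(v)[2:][::-1]]

        a, b = 13, 11
        bits = list(mult_bits(to_bits(a), to_bits(b)))
        assert sum(bit << i for i, bit in enumerate(bits)) == a * b
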
  3. For this problem, we are trying to show the language `L_{min} = {(x,y) | (x,y) in A ^^ forall y' in {0,1}^{p(|x|)}( y' < y => (x,y') !in A)}` is contained in some level of the polynomial hierarchy. Rewriting this in terms of a verifier machine `M_A` for `A`, this becomes:
    `{(x,y) | \exists z in {0,1}^{q(|x|+|y|)} M_A(x,y,z) = 1 ^^ forall y' in {0,1}^{p(|x|)}( y' < y => \neg \exists z' in {0,1}^{q(|x|+|y'|)} M_A(x,y',z') = 1)}`.
    Using the fact that `\neg \exists z'` is the same as `\forall z' \neg`, this becomes:
    `{(x,y) | \exists z in {0,1}^{q(|x|+|y|)} M_A(x,y,z) = 1 ^^ forall y' in {0,1}^{p(|x|)}( y' < y => \forall z' in {0,1}^{q(|x|+|y'|)} M_A(x,y',z') = 0)}`.
    Pulling the quantifiers to the front, this is:
    `{(x,y) | \exists z in {0,1}^{q(|x|+|y|)} forall y' in {0,1}^{p(|x|)} \forall z' in {0,1}^{q(|x|+|y'|)} ( (M_A(x,y,z) = 1) ^^ ( y' < y => M_A(x,y',z') = 0))}`.
    Using pairing, the `y'` and `z'` universal quantifiers can be combined into a single universal quantifier. The predicate `(M_A(x,y,z) = 1) ^^ ( y' < y => M_A(x,y',z') = 0)` is a Boolean combination of p-time computable checks, so is itself p-time computable. Hence the last line above shows `L_{min}` is a `Sigma_2^p` language; a toy evaluation of this quantifier pattern appears below.
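    The following brute-force Python sketch just makes the `\exists z forall (y',z')` pattern concrete on a toy instance. The verifier `M_A` and the widths `p = 2`, `q = 4` are invented for illustration, standing in for `p(|x|)` and `q(|x|+|y|)`; a real instance would of course be exponentially expensive to evaluate this way.

      from itertools import product

      def M_A(x, y, z):
          # hypothetical verifier: z witnesses (x, y) iff z encodes int(x) + int(y)
          return int(int(z, 2) == int(x, 2) + int(y, 2))

      def strings(k):
          # all length-k bit strings; fixed width makes y' < y numeric comparison
          return (''.join(b) for b in product('01', repeat=k))

      def in_L_min(x, y, p=2, q=4):
          # exists z: M_A(x,y,z) = 1  and  forall y' < y, forall z': M_A(x,y',z') = 0
          return any(M_A(x, y, z) == 1 and
                     all(not (yp < y and M_A(x, yp, zp) == 1)
                         for yp in strings(p) for zp in strings(q))
                     for z in strings(q))

      print(in_L_min('10', '00'))  # True: y = 00 is minimal with (x, y) in A
      print(in_L_min('10', '01'))  # False: y' = 00 < 01 also puts (x, y') in A
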
    1. We want to prove by induction on `k` that `TISP(n^{k(1-epsilon)}, n^{epsilon}) \subseteq \Sigma_{2k-1}-TIME(n)` for any `epsilon` with `0 < epsilon < 1`.

      When `k=1`, the above statement is `TISP(n^{(1-epsilon)}, n^{epsilon}) \subseteq \Sigma_{1}-TIME(n) = NTIME(n)`, so the statement trivially holds: a language recognized in less than linear time and less than linear space can certainly be recognized nondeterministically in linear time. Hence, the base case of the induction holds.

      Assume that `TISP(n^{k(1-epsilon)}, n^{epsilon}) \subseteq \Sigma_{2k-1}-TIME(n)` holds for some fixed `k`; we want to show `TISP(n^{(k+1)(1-epsilon)}, n^{epsilon}) \subseteq \Sigma_{2k+1}-TIME(n)`. Let `L in TISP(n^{(k+1)(1-epsilon)}, n^{epsilon})` be recognized by some machine `M` such that on input `x`, `M` runs for fewer than `c \cdot n^{(k+1)(1-epsilon)}` steps and uses fewer than `c' \cdot n^{epsilon}` squares. We can express this as a `\Sigma_{2k+1}-TIME(n)` computation as follows:

      1. Using existential moves, guess a sequence of `n^{(1-epsilon)}` configurations of `M`, `C_1, ..., C_{n^{1-epsilon}}`, each of maximum size `n^{epsilon}`. Given the space constraint on `M`, the size of this guess will be `O(n^{(1-epsilon)} \cdot n^{epsilon}) = O(n)`.
      2. Check in time less than `n` that `C_1` is the initial configuration of `M` on `x`.
      3. Using universal moves, check for each `j in [1, n^{1-epsilon})` that `M`, started in configuration `C_j`, ends in configuration `C_{j+1}` after at most `n^{k(1-epsilon)}` steps using at most `n^{epsilon}` space. By the induction hypothesis, for a particular `j`, this check can be done in `\Sigma_{2k-1}-TIME(n)`.
      4. Check that the final configuration `C_{n^{1-epsilon}}` is accepting, i.e., has output 1.

      The whole process has two more alternations than step 3, so it will be in `\Sigma_{2k+1}-TIME(n) = \Sigma_{2(k+1)-1}-TIME(n)`, completing the induction step and the proof.
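
      To make the alternation count explicit (this restates the steps above rather than adding anything new): the overall predicate has the form `\exists C_1, ..., C_{n^{1-epsilon}} forall j Phi(C_j, C_{j+1})`, where `Phi` is the `\Sigma_{2k-1}-TIME(n)` check from step 3. A universal block in front of a `\Sigma_{2k-1}` predicate gives a `\Pi_{2k}` predicate, and the leading existential block then gives `\Sigma_{2k+1}`, matching the induction statement at `k+1`.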

    2. Next we want to show that logspace is contained in `LINH`. Any logspace machine `M` uses at most space `c \cdot log n < n^{epsilon}` for sufficiently large `n`. Further, such a machine has fewer than `2^{c \log n} = n^c \leq n^{k(1-epsilon)}` configurations, where `k` is chosen suitably larger than `c` (say `k \geq c/(1-epsilon)`). Since a terminating computation never repeats a configuration, if `M` stops it will do so in time `n^{k(1-epsilon)}`. Hence, the language recognized by `M` is in `TISP(n^{k(1-epsilon)}, n^{epsilon})`, which by the previous part is contained in `\Sigma_{2k-1}-TIME(n)`, and so in `LINH`.
  4. We need to give a circuit family `{C_n}` such that for any `n`, `C_n` uses AND, OR, NOT gates of fan-in at most 2, the size of `C_n` is bounded by some polynomial `p(n)`, and its depth is at most `k \cdot log n` for some fixed `k`. Further, when given three `n`-bit numbers `i`, `x`, `y` as inputs, `C_n` outputs the `i`th bit of `x+y`. Let `x_i`, `y_i` denote the `i`th input lines for `x` and `y` respectively. First, if we knew the `i-1`th carry bit `c_{i-1}`, then the output of `C_n` would be `(x_i + y_i + c_{i-1}) mod 2` if `i \leq n` and `c_n` if `i = n+1`. These can be computed by the formulas (and hence circuits):
    `(x_i ^^ neg y_i ^^ neg c_{i-1}) vv (neg x_i ^^ y_i ^^ neg c_{i-1}) vv (neg x_i ^^ neg y_i ^^ c_{i-1}) vv (x_i ^^ y_i ^^ c_{i-1})`
    and just `c_{i-1}` respectively. So it suffices to give an `NC^1` circuit for `c_i`, as we could then supply its output as an input to these formulas. We note a carry is "started" at `j` if `x_j = y_j = 1`, and it is propagated to line `i` if for every `j'` with `j < j' \leq i` at least one of `x_{j'}` or `y_{j'}` is 1. So `c_i` can be expressed as:
    `vv_{j=1}^{i}[(x_j ^^ y_j) ^^ (^^_{j'=j+1}^{i}(x_{j'} vv y_{j'}))]`
    So each `c_i` can be computed by a circuit of size `O(n^2)` (there are at most `n` disjuncts, each a conjunction of at most `n` terms), and so our overall circuit, which uses the sum-bit formula at each position together with a standard multiplexer to select the `i`th output bit, will be polynomial size. If we implement `vv_{j=1}^{i}` and `^^_{j'=j+1}^{i}` using balanced trees of fan-in 2 gates, then the resulting circuit will have depth logarithmic in `n`, as each of these involves at most `n` subcircuits AND'd or OR'd together; the multiplexer is likewise implementable in `O(log n)` depth.
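
    The following Python sketch (illustrative only; bit lists are 1-indexed to match the proof's notation) checks the carry formula against ordinary integer addition on all pairs of 5-bit operands.

      def carry(xb, yb, i):
          """c_i from the formula: a carry starts at some j <= i and propagates."""
          return any((xb[j] & yb[j]) and
                     all(xb[jp] | yb[jp] for jp in range(j + 1, i + 1))
                     for j in range(1, i + 1))

      def bits(v, n):
          """1-indexed list of the n low-order bits of v (index 0 unused)."""
          return [0] + [(v >> k) & 1 for k in range(n)]

      n = 5
      for x in range(2 ** n):
          for y in range(2 ** n):
              xb, yb, s = bits(x, n), bits(y, n), x + y
              for i in range(1, n + 1):      # sum bit i = x_i XOR y_i XOR c_{i-1}
                  assert (s >> (i - 1)) & 1 == xb[i] ^ yb[i] ^ carry(xb, yb, i - 1)
              assert (s >> n) & 1 == carry(xb, yb, n)   # bit n+1 of x+y is c_n
      print("carry formula agrees with integer addition")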
