The directed reachability problem on surface graphs can be solved in polynomial time and sublinear space (!). See the survey by Vinodchandran Variyam.
Hindman’s theorem (a special case). Suppose we color the natural numbers using $k$ colors. Then there exist a color $c$ and an infinite set $S$, all of whose elements have color $c$, such that every finite sum of elements of $S$ has color $c$ as well.
Also, this theorem is claimed to be rather unpleasant to prove if we insist on not using ultrafilters.
Recently I have been spending a lot of time studying for the qualifying exam and writing notes, so there was no time to blog… it’s rare to have a piece of news worth writing about!
Once again the exponent of the fastest matrix multiplication algorithm has been improved. This time the magic number is 2.3728639, which is 0.0000630 better than Williams’ 2.3729269. (Last time it went from the famous CW bound 2.3754770 to Stothers’ 2.3736898 and then to Williams’ bound, a 0.0025501 improvement.)
Powers of Tensors and Fast Matrix Multiplication, François Le Gall, 2014.
* Results related to the Pumping Lemma
Theorem. A language $L$ accepted by a DFA with $n$ states is
- nonempty if and only if the DFA accepts a sentence of length less than $n$.
- infinite if and only if the DFA accepts a sentence of length $\ell$ with $n \le \ell < 2n$.
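Both conditions can be checked directly by walking the state graph level by level, looking for an accepting state at the right distances. A minimal sketch in Python (the dict-based DFA encoding, with `delta` keyed by `(state, letter)` pairs, is my own assumption, not from the notes):

```python
def accepts_length_in(delta, start, accept, alphabet, lo, hi):
    """Does the DFA accept some string of length l with lo <= l < hi?"""
    reachable = {start}
    if lo == 0 and start in accept:  # the empty string
        return True
    for length in range(1, hi):
        # states reachable by reading exactly `length` letters
        reachable = {delta[(q, a)] for q in reachable for a in alphabet}
        if length >= lo and reachable & accept:
            return True
    return False

def is_nonempty(n, delta, start, accept, alphabet):
    # nonempty iff some accepted string has length < n (n = number of states)
    return accepts_length_in(delta, start, accept, alphabet, 0, n)

def is_infinite(n, delta, start, accept, alphabet):
    # infinite iff some accepted string has length l with n <= l < 2n
    return accepts_length_in(delta, start, accept, alphabet, n, 2 * n)
```

For example, the three-state DFA for the finite language $\{a\}$ is reported nonempty but not infinite, while the two-state DFA for odd-length strings of $a$'s is reported infinite.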
The union of the following two languages over $\{0,1,2,3\}$ is non-regular, but nevertheless this cannot be proven using the pumping lemma:
$\{uvwxy : u, y \in \{0,1,2,3\}^*,\ v, w, x \in \{0,1,2,3\},\ \text{and either } v = w,\ v = x,\ \text{or } w = x\}$
$\{w \in \{0,1,2,3\}^* : \text{precisely } 1/7 \text{ of the symbols in } w \text{ are } 3\text{'s}\}$
* Regular languages are precisely constant-space languages
Theorem. A language recognized by a two-way read-only Turing machine (a 2DFA) is regular.
Proof sketch. Record the configuration of the machine each time we first visit the $i$-th cell. Define a (one-way) finite automaton that simulates the same behavior.
Theorem. $\mathsf{SPACE}(O(1))$ is the same as $\mathsf{SPACE}(o(\log\log n))$; both are the class of regular languages. That is, having an algorithm that uses $o(\log\log n)$ space is no better than not using any space.
Proof sketch. If a Turing machine uses $s(n)$ space, there are $2^{O(s(n))}$ possible configurations and $2^{2^{O(s(n))}}$ possible crossing sequences. If $s(n) = o(\log\log n)$, then on a long enough input two crossing sequences must be the same, and we can further shorten a shortest string accepted by the machine by the following lemma:
Lemma. Let $M$ be a machine and $x$ be the input. Suppose the crossing sequences at positions $i$ and $j$ are equal (assume $i < j$); then by removing the substring of $x$ from index $i+1$ to $j$, we get another string that is accepted by the machine $M$.
This contradicts the minimality of the chosen string.
The following language is non-regular but can be decided in $O(\log\log n)$ space:
$\{b(1)\#b(2)\#\cdots\#b(n) : n \ge 1\}$, where $b(i)$ is the binary representation of $i$.
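For concreteness, here is a plain membership checker for this counter language; this is an ordinary linear-space Python sketch of what the language contains, not the small-space algorithm the statement refers to:

```python
def in_counter_language(s):
    """Check s == bin(1) + '#' + bin(2) + '#' + ... + bin(n) for some n >= 1."""
    parts = s.split('#')
    # the i-th block (1-indexed) must be the binary representation of i
    return all(p == format(i, 'b') for i, p in enumerate(parts, start=1))
```

A small-space machine instead checks consecutive blocks increment correctly, needing only a counter of $O(\log\log n)$ bits for the bit positions.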
* Generating functions of regular languages
For a given language $L$,
$f_L(x) = \sum_{n \ge 0} s_n x^n$, where $s_n$ is the number of strings of length $n$ in $L$,
is the generating function of $L$. Using analytic combinatorics we can prove the following useful facts.
Theorem. For a regular language $L$, the generating function $f_L(x)$ is rational.
Theorem. Let $A$ be a finite automaton for a language $L$. Then the generating function of $L$ is the rational function determined in matrix form by
$f_L(x) = u^{\mathsf T} (I - xM)^{-1} v,$
where $M_{ij}$ is the number of different labels between state $i$ and state $j$, and $u$, $v$ are the characteristic vectors of the initial state and the accepting states.
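The coefficients of this rational function can be read off by iterating the transfer matrix, since the number of accepted words of length $n$ is $u^{\mathsf T} M^n v$, where $M$ counts the labeled transitions between states. A small pure-Python sketch (the list-of-lists matrix encoding is assumed):

```python
def word_counts(M, u, v, max_len):
    """Return [u^T M^n v for n in 0..max_len-1]: accepted words per length."""
    counts, x = [], list(u)
    for _ in range(max_len):
        counts.append(sum(xi * vi for xi, vi in zip(x, v)))
        # advance the row vector: x <- x M
        x = [sum(xi * M[i][j] for i, xi in enumerate(x)) for j in range(len(M))]
    return counts
```

For the two-state DFA over $\{a,b\}$ accepting words with an even number of $a$'s, $M = \begin{pmatrix}1&1\\1&1\end{pmatrix}$, and the counts come out $1, 1, 2, 4, 8, \ldots$, i.e. $2^{n-1}$ for $n \ge 1$.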
* Minimizing finite automata
Brzozowski’s algorithm uses only the power set construction and edge reversal. One can observe that reversing the edges of a DFA gives an NFA for the reverse language, and the power set construction on this NFA gives a minimal DFA for the reverse language; applying the two operations twice yields a minimal DFA for the original language. This algorithm takes exponential time in the worst case.
Hopcroft’s algorithm is the fastest algorithm known; it runs in $O(n \log n)$ time. It partitions the states into the classes of the Myhill-Nerode equivalence relation by iterative refinement.
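Brzozowski's algorithm is short enough to sketch in full. The encoding below (a DFA as a dict from `(state, letter)` pairs, NFA values as sets) is my own choice for illustration:

```python
def determinize(alphabet, ndelta, starts, accepts):
    """Subset construction; ndelta maps (state, letter) -> set of states."""
    start = frozenset(starts)
    delta, seen, todo = {}, {start}, [start]
    while todo:
        S = todo.pop()
        for a in alphabet:
            T = frozenset(q for s in S for q in ndelta.get((s, a), ()))
            delta[(S, a)] = T
            if T not in seen:
                seen.add(T)
                todo.append(T)
    return delta, start, {S for S in seen if S & set(accepts)}

def reverse(delta, start, accepts):
    """Reverse every edge of a DFA; accepting states become the initial set."""
    rdelta = {}
    for (q, a), t in delta.items():
        rdelta.setdefault((t, a), set()).add(q)
    return rdelta, accepts, {start}

def brzozowski(alphabet, delta, start, accepts):
    """Minimal DFA: determinize the reversal, then do it again."""
    d, s, f = determinize(alphabet, *reverse(delta, start, accepts))
    return determinize(alphabet, *reverse(d, s, f))
```

Running it on a redundant three-state DFA for $a^+$ (states $0 \to 1 \to 2 \to 1$, accepting $\{1,2\}$) collapses it to the two-state minimal DFA.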
* Relation with $\mathsf{AC}^0$
Theorem. The regular languages and $\mathsf{AC}^0$ are incomparable.
Proof. Parity is regular but not in $\mathsf{AC}^0$. Palindromes and addition are in $\mathsf{AC}^0$ but not regular.
Kleene-Büchi Theorem. For a language $L$, the following are equivalent:
(1) $L$ is recognized by a finite automaton,
(2) $L$ is defined by a regular expression,
(3) $L$ is definable in monadic second-order logic,
(4) $L$ is recognized by a finite monoid.
The syntactic monoid of $L$ is defined by the equivalence $\equiv_L$ with
$x \equiv_L y$ if and only if $uxv \in L \Leftrightarrow uyv \in L$ for every $u, v$.
Myhill Theorem. A language $L$ is regular if and only if its syntactic monoid is finite.
If we define another equivalence relation $\sim_L$ with
$x \sim_L y$ if and only if $xv \in L \Leftrightarrow yv \in L$ for every $v$,
the quotient may not be a monoid in general, but we have the following:
Myhill-Nerode Theorem. A language $L$ can be recognized by a finite automaton with $n$ states if and only if the relation $\sim_L$ has index at most $n$; that is, $\sim_L$ has at most $n$ equivalence classes.
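The "finite monoid" point of view is easy to make concrete: from a DFA one can compute its transition monoid, the set of all maps $Q \to Q$ induced by input words, which is a finite monoid recognizing the language. A sketch (the choice of the mod-3 divisibility DFA in the test is my own):

```python
def transition_monoid(states, alphabet, delta):
    """All maps Q -> Q induced by words over the alphabet, as tuples."""
    states = sorted(states)
    idx = {q: i for i, q in enumerate(states)}
    identity = tuple(states)  # the map induced by the empty word
    gens = [tuple(delta[(q, a)] for q in states) for a in alphabet]
    monoid, todo = {identity}, [identity]
    while todo:
        f = todo.pop()
        for g in gens:
            # compose: first apply f, then the generator g
            h = tuple(g[idx[f[i]]] for i in range(len(states)))
            if h not in monoid:
                monoid.add(h)
                todo.append(h)
    return monoid
```

For the DFA tracking the value of a binary string mod 3 (states $\{0,1,2\}$, reading the most significant bit first), the monoid that comes out has six elements, all permutations of the three states.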
Recently I realized that the analyses of approximation algorithms for covering-type problems are unintuitive to me. I don’t know whether it’s the analytic nature of the proofs that always gets me, or I’m just not familiar enough with them. In any case, this is my attempt at decoding the proofs. Let me take the maximum coverage problem as an example: given a set system and a number $k$, we ask for the largest number of elements we can cover using $k$ sets.
Theorem. Greedy covering gives a $(1 - 1/e)$-approximation to the maximum coverage problem.
Proof. Let $x_i$ be the number of elements newly covered at step $i$, and let $y_i = x_1 + \cdots + x_i$ be the number of elements covered from step $1$ to step $i$.
Let $z_i = \mathrm{opt} - y_i$ be the number of elements we still need to cover in order to reach $\mathrm{opt}$ elements; therefore $z_0 = \mathrm{opt}$.
Key observation. At step $i+1$, there is always a set that covers at least a $1/k$ fraction of the $z_i$ still-needed elements.
Proof. This is exactly the part I feel uncomfortable with; I find the following formulation helps my intuition:
“No matter which elements have already been removed (covered), the $k$ sets in the optimal cover still cover the remaining $z_i$ elements of the optimal solution; and because $k$ sets are enough to cover this remaining part, one set among the $k$ must cover at least a $1/k$ fraction of it.”
Therefore $x_{i+1} \ge z_i / k$; from this point on things become easy:
$z_{i+1} \le z_i (1 - 1/k)$, so each step shrinks the gap by a factor of $(1 - 1/k)$. After $k$ steps the gap has size at most a $(1 - 1/k)^k \le 1/e$ fraction of $\mathrm{opt}$, which proves the statement.
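The greedy algorithm itself is only a few lines. A minimal sketch in Python (the list-of-sets encoding is assumed):

```python
def greedy_max_coverage(sets, k):
    """Repeatedly pick the set covering the most not-yet-covered elements."""
    covered, chosen = set(), []
    for _ in range(k):
        best = max(range(len(sets)), key=lambda i: len(sets[i] - covered))
        chosen.append(best)
        covered |= sets[best]
    return chosen, covered
```

For instance, with sets $\{1,2,3,4\}, \{3,4,5\}, \{5,6\}, \{7\}$ and $k = 2$, greedy picks the first set and then $\{5,6\}$ (which adds two new elements, versus one each for the others).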
Dilworth’s Theorem. If $P$ is a finite poset, then the maximum size of an antichain in $P$ equals the minimum number of chains needed to cover the elements of $P$.
Proof. Induction on $|P|$. We separate into two cases.
(a) Some maximum antichain $A$ omits both a maximal element and a minimal element of $P$. Let $D = \{x : x \le a \text{ for some } a \in A\}$ and $U = \{x : x \ge a \text{ for some } a \in A\}$; then neither $D$ nor $U$ contains all of $P$, and the maximality of $A$ implies $D \cup U = P$. Both $D$ and $U$ have width $|A|$ because they contain $A$, and they intersect only in $A$. Applying the induction hypothesis to both sets, we obtain $|A|$ chains covering $D$ and $|A|$ chains covering $U$; gluing them along $A$, these form $|A|$ chains covering $P$.
(b) Every maximum antichain of $P$ consists of all the maximal elements or all the minimal elements. Pick a minimal element $x$ and a maximal element $y$ above $x$; removing both decreases the width by one. Applying the induction hypothesis to $P \setminus \{x, y\}$ gives a covering by one fewer chain; adding the chain $x < y$ gives a covering of $P$ by width-many chains.
Theorem. Dilworth’s Theorem is equivalent to the König-Egerváry Theorem.
Proof. Dilworth to König-Egerváry: View an $n$-node bipartite graph $G$ as a height-2 poset: the nodes of one part are the maximal elements, and the nodes of the other part are the minimal elements. A covering of the poset by $n - m$ chains uses $m$ chains of size 2, which is exactly a matching of size $m$. An antichain of size $n - m$ corresponds to an independent set in $G$, and the remaining $m$ nodes form a vertex cover. So Dilworth’s equality becomes “maximum matching equals minimum vertex cover.”
König-Egerváry to Dilworth: Consider an arbitrary poset $P$ of size $n$. Define a bipartite graph $G$ as follows: for each element $x \in P$, create two nodes $x_-$ and $x_+$. The two parts of the graph are $\{x_- : x \in P\}$ and $\{x_+ : x \in P\}$, and the edge set is $\{(x_-, y_+) : x < y\}$.
Every matching in $G$ yields a chain covering of $P$ as follows: we start with $n$ chains, each containing a unique element. If $(x_-, y_+)$ is in the matching, then the chains of $x$ and $y$ are combined into one. Because each node of $G$ appears in at most one edge of the matching, a matching of $m$ edges yields a covering by $n - m$ chains.
Given a vertex cover $C$ of $G$ of size $m$, define a corresponding antichain $A = \{x : x_- \notin C \text{ and } x_+ \notin C\}$; this is indeed an antichain because if there were a relation $x < y$ with $x, y \in A$, then the edge $(x_-, y_+)$ would be present, and the vertex cover would need to contain at least one of $x_-, y_+$.
Claim. No minimum vertex cover of $G$ uses both $x_-$ and $x_+$: by transitivity the sets $\{z_- : z < x\}$ and $\{y_+ : y > x\}$ induce a complete bipartite subgraph in $G$, and a vertex cover of a complete bipartite graph must use all of one part. Since all neighbors of $x_-$ and $x_+$ lie in these two sets, whichever of $x_-, x_+$ has its neighbors entirely inside the chosen part is redundant, and removing it decreases the size of the vertex cover.
Thus a minimum vertex cover of size $m$ yields an antichain of size $n - m$, matching the $n - m$ chains obtained from a maximum matching of size $m$.
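This direction of the equivalence can be run as an algorithm: build the split graph, find a maximum matching with augmenting paths, and glue matched pairs into chains. A Python sketch (Kuhn's augmenting-path matching; the divisibility-poset usage example is my own choice):

```python
def chain_cover(n, less):
    """Minimum chain cover of a poset on elements 0..n-1.

    `less(x, y)` decides the strict order x < y (assumed transitive).
    A maximum matching of size m in the split graph yields n - m chains.
    """
    adj = [[y for y in range(n) if less(x, y)] for x in range(n)]
    match_to = [None] * n  # match_to[y] = x when edge (x_-, y_+) is matched

    def augment(x, seen):
        for y in adj[x]:
            if y not in seen:
                seen.add(y)
                if match_to[y] is None or augment(match_to[y], seen):
                    match_to[y] = x
                    return True
        return False

    for x in range(n):
        augment(x, set())

    # succ[x] = y when (x_-, y_+) is matched; glue chains along succ
    succ = {match_to[y]: y for y in range(n) if match_to[y] is not None}
    starts = set(range(n)) - set(succ.values())
    chains = []
    for x in sorted(starts):
        chain = [x]
        while chain[-1] in succ:
            chain.append(succ[chain[-1]])
        chains.append(chain)
    return chains
```

For the divisibility poset on $\{1, \ldots, 6\}$ (encoded as indices $0..5$), the maximum antichain $\{4, 5, 6\}$ has size 3, and indeed three chains suffice, e.g. $1 \mid 2 \mid 4$, $3 \mid 6$, $5$.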