- “Tight lower bound for the channel assignment problem” by Arkadiusz Socała: No $2^{o(n \log n)}$-time algorithm for the channel assignment problem assuming ETH.
- “Chip-firing games on Eulerian digraphs and NP-hardness of computing the rank of a divisor on a graph” by Viktor Kiss and Lilla Tothmeresz: Computing the rank of a divisor on a graph is NP-hard.
- “3SUM Hardness in (Dynamic) Data Structures” by Tsvi Kopelowitz, Seth Pettie, and Ely Porat.
- “Faster Separators for Shallow Minor-Free Graphs via Dynamic Approximate Distance Oracles” by Christian Wulff-Nilsen.
- “Social choice, computational complexity, Gaussian geometry, and Boolean functions” by Ryan O’Donnell.

## Complexity of reachability problem on surface graphs

The directed reachability problem on surface graphs can be solved in polynomial time and *sub-linear space (!)*. See the survey by Vinodchandran Variyam.

## Hindman’s Theorem

While reading the posts on ultrafilters by Terry Tao [1,2], I found the following result interesting:

**Hindman’s theorem** (a special case). Suppose we color the natural numbers using $k$ colors. Then there exist a color $c$ and an infinite set $S$, all of whose elements are colored $c$, such that every finite sum of distinct elements of $S$ has color $c$ as well.

Also, this theorem is claimed to be *rather unpleasant to prove* if we insist on not using ultrafilters.

## New improvement to fast matrix multiplication

Recently I have spent a lot of time preparing for my qualifying exam and writing quite a few notes, leaving no time for the blog… It is rare to have news worth writing about, so here it is!

Again the exponent of the fastest matrix multiplication algorithm has been improved. This time the magic number is 2.3728639, which is 0.0000630 better than Williams’ 2.3729269. (Last time it went from the famous Coppersmith–Winograd bound 2.3754770 to Stothers’ 2.3736898 and then to Williams’, a 0.0025501 improvement.)

“Powers of Tensors and Fast Matrix Multiplication” by François Le Gall, 2014.

## Compilation of basic results related to regular languages

**Results related to the Pumping Lemma**

**Theorem.** A language accepted by a DFA with $n$ states is

- nonempty if and only if it accepts a sentence of length less than $n$;
- infinite if and only if it accepts a sentence of length $\ell$ with $n \le \ell < 2n$.
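A minimal sketch of the two tests implied by the theorem (the example DFA is made up for illustration): a DFA with $n$ states accepts something iff it accepts a word of length less than $n$, and accepts infinitely many words iff it accepts a word of length between $n$ and $2n - 1$.

```python
from itertools import product

def accepts(delta, start, finals, word):
    """Run a DFA given as a dict delta[(state, symbol)] -> state."""
    state = start
    for ch in word:
        state = delta[(state, ch)]
    return state in finals

def is_nonempty(delta, start, finals, alphabet, n):
    # Nonempty iff some accepted word has length < n.
    return any(accepts(delta, start, finals, w)
               for l in range(n)
               for w in product(alphabet, repeat=l))

def is_infinite(delta, start, finals, alphabet, n):
    # Infinite iff some accepted word has length l with n <= l < 2n.
    return any(accepts(delta, start, finals, w)
               for l in range(n, 2 * n)
               for w in product(alphabet, repeat=l))

# Example: a 2-state DFA over {a, b} accepting words with an odd number of a's.
delta = {(0, 'a'): 1, (0, 'b'): 0, (1, 'a'): 0, (1, 'b'): 1}
print(is_nonempty(delta, 0, {1}, 'ab', 2))  # True
print(is_infinite(delta, 0, {1}, 'ab', 2))  # True
```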

The union of the following two languages over $\Sigma = \{0,1,2,3\}$ is non-regular, but nevertheless this cannot be proven using the pumping lemma:

$\{\, uvwxy : u, y \in \Sigma^*;\ v, w, x \in \Sigma;\ \text{and either } v = w,\ v = x,\ \text{or } w = x \,\}$,

$\{\, w \in \Sigma^* : \text{precisely } 1/7 \text{ of the symbols in } w \text{ are 3's} \,\}$.
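A quick membership test for this union, assuming the reconstruction above over $\Sigma = \{0,1,2,3\}$: the first part asks for three consecutive symbols among which two are equal, the second for exactly a $1/7$ fraction of 3’s.

```python
def in_first_part(w):
    # Some window of three consecutive symbols contains two equal ones.
    return any(len({w[i], w[i+1], w[i+2]}) < 3 for i in range(len(w) - 2))

def in_second_part(w):
    # Precisely 1/7 of the symbols are 3's.
    return 7 * w.count('3') == len(w)

def in_language(w):
    return in_first_part(w) or in_second_part(w)

print(in_language('0123'))     # False: all windows distinct, 1 of 4 symbols is a 3
print(in_language('0012'))     # True: the window '001' repeats '0'
print(in_language('0000003'))  # True: exactly 1 of 7 symbols is a 3
```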

**Regular languages are precisely the constant-space languages**

**Theorem.** A language is regular if and only if it can be recognized by a two-way read-only Turing machine.

Proof sketch (for the nontrivial direction). Record the configuration of the machine each time it first visits the $i$-th cell. Define a finite automaton that simulates the same behavior.

**Theorem.** $\mathsf{DSPACE}(o(\log \log n))$ is the same as $\mathsf{DSPACE}(O(1))$; both are the class of regular languages. That is, having an algorithm that uses $o(\log \log n)$ space is equivalent to not using any space at all.

Proof sketch. If a Turing machine uses $s(n)$ space, there are $2^{O(s(n))}$ possible configurations and hence $2^{2^{O(s(n))}}$ possible crossing sequences. If $s(n) = o(\log \log n)$, the number of crossing sequences is $o(n)$; so on a long enough input two crossing sequences are the same, and we can further shorten a shortest string accepted by the machine by the following lemma:

**Lemma.** Let $M$ be a machine and $x$ be the input. Suppose the two crossing sequences at positions $i$ and $j$ are equal (assume $i < j$); then by removing the substring of $x$ from index $i+1$ to index $j$, we get another string that is accepted by the machine $M$.

This contradicts the minimality of the shortest accepted string.

The following language is non-regular but can be decided in $O(\log \log n)$ space:

$\{\, \mathrm{bin}(1)\,\#\,\mathrm{bin}(2)\,\#\cdots\#\,\mathrm{bin}(n) : n \ge 1 \,\}$,

where $\mathrm{bin}(i)$ is the binary representation of $i$.

**Generating functions of regular languages**

For a given language $L$,

$f_L(z) = \sum_{n \ge 0} |L \cap \Sigma^n|\, z^n$

is the generating function of $L$. Using analytic combinatorics we can prove the following useful facts.

**Theorem.** For a regular language $L$, the generating function $f_L$ is rational.

**Theorem.** Let $A$ be a deterministic finite automaton for a language $L$. Then the generating function of $L$ is the rational function determined in matrix form by

$f_L(z) = u^{\top} (I - zM)^{-1}\, v$,

where $M_{ij}$ is the number of different labels between states $i$ and $j$, and $u$ and $v$ are the characteristic vectors of the initial and the accepting states.
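The matrix form also gives a direct way to compute coefficients: since $(I - zM)^{-1} = \sum_n M^n z^n$, the number of accepted words of length $n$ is $u^{\top} M^n v$. A small sketch (the example DFA, counting words over $\{a,b\}$ with no two consecutive b’s, is made up for illustration):

```python
def count_words(M, u, v, n):
    """Number of accepted words of length n: u^T M^n v, with plain lists."""
    vec = v[:]                       # iteratively compute M^n v
    for _ in range(n):
        vec = [sum(M[i][j] * vec[j] for j in range(len(vec)))
               for i in range(len(M))]
    return sum(ui * x for ui, x in zip(u, vec))

# DFA for words over {a,b} avoiding "bb": state 0 = last letter was not b,
# state 1 = last letter was b.  M[i][j] = number of letters from state i to j.
M = [[1, 1],
     [1, 0]]
u = [1, 0]          # characteristic vector of the initial state
v = [1, 1]          # characteristic vector of the accepting states

print([count_words(M, u, v, n) for n in range(6)])  # [1, 2, 3, 5, 8, 13]
```

The Fibonacci-like coefficients match the rational function $1/(1 - z - z^2)$ up to shifting, as the theorem predicts.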

**Minimizing finite automata**

Brzozowski’s algorithm uses only the power set construction and edge reversal. One can observe that reversing the edges of a DFA gives an NFA for the reverse language; the power set construction (keeping only reachable states) then gives a minimal DFA for the reverse language, and doing the reverse-and-determinize step twice minimizes the original DFA. This algorithm takes exponential time in the worst case.
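
A compact sketch of Brzozowski’s algorithm (the NFA encoding and the example DFA are made up for illustration): reverse, determinize, reverse, determinize, keeping only reachable subset states.

```python
def reverse(trans, starts, finals):
    """Reverse every edge of an NFA; swap start and final states."""
    rev = {}
    for p, sym_map in trans.items():
        for sym, qs in sym_map.items():
            for q in qs:
                rev.setdefault(q, {}).setdefault(sym, set()).add(p)
    return rev, set(finals), set(starts)

def determinize(trans, starts, finals, alphabet):
    """Subset construction over reachable subsets (output is NFA-shaped)."""
    start = frozenset(starts)
    dfa, stack, seen = {}, [start], {start}
    while stack:
        S = stack.pop()
        dfa[S] = {}
        for sym in alphabet:
            T = frozenset(q for p in S for q in trans.get(p, {}).get(sym, ()))
            dfa[S][sym] = {T}
            if T not in seen:
                seen.add(T)
                stack.append(T)
    return dfa, {start}, {S for S in seen if S & finals}

def brzozowski(trans, starts, finals, alphabet):
    """Minimal DFA = determinize(reverse(determinize(reverse(A))))."""
    a = reverse(trans, starts, finals)
    a = determinize(*a, alphabet)
    a = reverse(*a)
    return determinize(*a, alphabet)

# Example: a 3-state DFA over {a,b} for "words ending in b", where states
# 1 and 2 are equivalent; the minimal DFA has 2 states.
trans = {0: {'a': {0}, 'b': {1}},
         1: {'a': {0}, 'b': {2}},
         2: {'a': {0}, 'b': {2}}}
dfa, s, f = brzozowski(trans, {0}, {1, 2}, 'ab')
print(len(dfa))  # 2
```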

Hopcroft’s algorithm is the fastest algorithm known; it runs in $O(n \log n)$ time. It partitions the states by refining the Myhill–Nerode equivalence relation.

**Relation with $\mathsf{AC}^0$**

**Theorem.** $\mathsf{AC}^0$ and the class of regular languages are incomparable.

Proof. Parity is regular but not in $\mathsf{AC}^0$. Palindromes and addition are in $\mathsf{AC}^0$ but not regular.

## Finite automata, monoids, and Myhill-Nerode theorem

**Kleene–Büchi Theorem.** For a language $L$, the following are equivalent:

(1) L is recognized by a finite automaton,

(2) L is defined by a regular expression,

(3) L is definable in monadic second-order logic,

(4) L is recognized by a finite monoid.

The *syntactic monoid* of $L$ is the quotient $\Sigma^*/\!\approx_L$, where $\approx_L$ is the equivalence defined by

$u \approx_L v$ if and only if $(xuy \in L \Leftrightarrow xvy \in L)$ for every $x, y \in \Sigma^*$.

**Myhill Theorem.** A language is regular if and only if its syntactic monoid is finite.

If we define another equivalence relation $\sim_L$ with

$u \sim_L v$ if and only if $(ux \in L \Leftrightarrow vx \in L)$ for every $x \in \Sigma^*$,

then $\Sigma^*/\!\sim_L$ may not be a monoid in general, but we have the following:

**Myhill–Nerode Theorem.** A language can be recognized by a finite automaton with $n$ states if and only if the relation $\sim_L$ has index at most $n$; that is, $\sim_L$ has at most $n$ equivalence classes.
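A brute-force sketch of the index (the example language and the cutoff lengths are made up for illustration): approximate $\sim_L$ by distinguishing words only through extensions up to a fixed length, here for $L$ = words over $\{a,b\}$ ending in $ab$, whose minimal DFA has 3 states.

```python
from itertools import product

def nerode_classes(in_L, alphabet, max_word, max_ext):
    """Group words of length <= max_word by the membership pattern of
    their extensions of length <= max_ext (a finite proxy for ~_L)."""
    exts = [''.join(e) for l in range(max_ext + 1)
            for e in product(alphabet, repeat=l)]
    classes = {}
    for l in range(max_word + 1):
        for w in map(''.join, product(alphabet, repeat=l)):
            sig = tuple(in_L(w + e) for e in exts)
            classes.setdefault(sig, []).append(w)
    return list(classes.values())

# L = words over {a, b} that end in "ab".
in_L = lambda w: w.endswith('ab')
classes = nerode_classes(in_L, 'ab', 3, 3)
print(len(classes))  # 3, matching the 3-state minimal DFA
```

Note that the finite proxy can only merge classes, never split a true class, so the count is a lower bound on the index; here it already equals 3.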

## Practice on maximum coverage problem

Recently I realized that the analyses of approximation algorithms for covering-type problems are unintuitive to me. I don’t know whether it is the analysis-heavy nature that always gets me, or I am just not familiar enough with the proofs. In any case, this is my attempt at decoding them. Let me take the maximum coverage problem as an example: given a set system $(U, \mathcal{S})$ and a number $k$, we ask for the largest number of elements of $U$ we can cover using $k$ sets from $\mathcal{S}$.

**Theorem.** Greedy covering gives a $(1 - 1/e)$-approximation to the maximum coverage problem.

Proof. Let $a_i$ be the number of elements newly covered at step $i$, and let $b_i$ be the number of elements covered from step $1$ to step $i$; we have $b_i = a_1 + \cdots + a_i$.

Let $c_i = \mathrm{opt} - b_i$ be the number of elements we still need to cover in order to cover $\mathrm{opt}$ elements; therefore $c_0 = \mathrm{opt}$.

**Key observation.** At step $i+1$, there is always a set that covers at least a $1/k$ fraction of the $c_i$ elements that remain to be covered.

Proof. This is exactly the part I feel uncomfortable with; I find the following formulation helps my intuition:

*“No matter which already-covered elements are removed from $U$, the $k$ sets in the optimal cover still cover the uncovered part of the optimal solution, which has size at least $c_i$; and because $k$ sets are enough to cover this partial $\mathrm{opt}$, one set among the $k$ covers at least a $1/k$ fraction of it.”*

Therefore $a_{i+1} \ge c_i / k$; from this point on things become easy. We have

$c_{i+1} = c_i - a_{i+1} \le (1 - 1/k)\, c_i$,

so each step shrinks the gap by a factor of $(1 - 1/k)$. After $k$ steps the gap has size at most a $(1 - 1/k)^k \le 1/e$ fraction of $\mathrm{opt}$, which proves the theorem.
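The whole argument can be checked on a small instance; the sets below are made up for illustration. Greedy picks, at each step, the set covering the most yet-uncovered elements.

```python
from math import e

def greedy_max_coverage(sets, k):
    """Pick k sets, each time the one covering the most uncovered elements."""
    covered = set()
    chosen = []
    for _ in range(k):
        best = max(sets, key=lambda s: len(s - covered))
        chosen.append(best)
        covered |= best
    return chosen, covered

sets = [{1, 2, 3, 4}, {3, 4, 5, 6}, {5, 6, 7}, {1, 7, 8}]
chosen, covered = greedy_max_coverage(sets, 2)
print(len(covered))  # 7; always at least (1 - 1/e) * opt
```

Here greedy first takes $\{1,2,3,4\}$ and then $\{5,6,7\}$, covering 7 elements, which happens to match the optimum for $k = 2$ on this instance.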