
The interesting thing about graph properties (like the emergence of connectivity, giant components, Hamiltonian paths, etc.) is that they appear as "phase transitions".
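
To see the giant-component transition concretely, here is a minimal Python sketch (the function, parameters, and union-find bookkeeping are my own illustration, not anything from the article): below average degree 1 the largest component is a vanishing fraction of the graph, and above it a constant fraction.

  import random

  def largest_component_fraction(n, c):
      """Build G(n, m) with m = c*n/2 random edges (average degree c)
      and return the largest component's share of the n vertices."""
      parent = list(range(n))
      size = [1] * n

      def find(x):
          while parent[x] != x:
              parent[x] = parent[parent[x]]  # path halving
              x = parent[x]
          return x

      for _ in range(int(c * n / 2)):
          a, b = find(random.randrange(n)), find(random.randrange(n))
          if a != b:
              parent[a] = b
              size[b] += size[a]
      return max(size[find(v)] for v in range(n)) / n

  n = 100_000
  for c in (0.5, 0.9, 1.0, 1.1, 1.5, 2.0):
      print(f"avg degree {c}: largest component fraction "
            f"{largest_component_fraction(n, c):.4f}")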

A classic example is cuckoo hashing: you want to know how many edges the random graph of hashes can have before some component acquires a second cycle, since that's when an insertion fails and you need to rehash into a larger table. You might expect this number to be "pretty random", in that you sometimes get a conflict with few edges and sometimes have many edges but no conflict. It turns out, however, that it very predictably happens exactly when the graph reaches a certain size (about n/2 edges on n buckets), in the same way that 1000 coin flips very predictably yield 450-550 heads.
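
You can watch this concentration happen empirically. A minimal sketch (the union-find bookkeeping is mine; "a component acquires a second cycle" is the standard failure condition for plain two-choice cuckoo hashing): the edge count at failure lands very close to n/2 every time.

  import random

  def edges_until_failure(n):
      """Insert random edges on n vertices until some component
      contains more than one cycle (the point where a cuckoo table
      with n buckets can no longer place every key)."""
      parent = list(range(n))
      cyclic = [False] * n  # does this component already have a cycle?

      def find(x):
          while parent[x] != x:
              parent[x] = parent[parent[x]]
              x = parent[x]
          return x

      m = 0
      while True:
          a, b = find(random.randrange(n)), find(random.randrange(n))
          if a == b:
              if cyclic[a]:      # a second cycle: rehash needed
                  return m
              cyclic[a] = True   # first cycle: still placeable
          else:
              parent[a] = b
              cyclic[b] = cyclic[a] or cyclic[b]
          m += 1

  n = 100_000
  print(sorted(edges_until_failure(n) / n for _ in range(10)))
  # every entry lands within a whisker of 0.5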

What's so cool about the theorem is that it proves _any_ property you can think of has _some_ sudden threshold like that.



Surely not _any_ property? From the article I understood that it’s only those that are monotonic under adding edges?


It seems to me most graph problems that people care about are monotonic under adding edges. Actually, I can't think of a single problem in graph theory I've done where that wouldn't hold.


Being a https://en.wikipedia.org/wiki/Perfect_graph is not monotonic under adding edges, and it's certainly an interesting property for a graph to have! For instance, the path on five vertices is perfect, but adding the edge that closes it into a 5-cycle creates an odd hole. (But yes, most interesting properties are monotonic under adding edges.)


> It seems to me most graph problems that people care about are monotonic under adding edges. Actually, I can't think of a single problem in graph theory I've done where that wouldn't hold.

https://en.wikipedia.org/wiki/Braess%27s_paradox is very famous.


You are right! Luckily, that's the definition of a graph property as the theorem uses it.

Something that definitely won't work: the parity of the number of edges.

Still, it's pretty general.


> It turns out, however, that it very predictably happens exactly when the graph reaches a certain size (about n/2 edges on n buckets), in the same way that 1000 coin flips very predictably yield 450-550 heads.

> What's so cool about the theorem is that it proves _any_ property you can think of has _some_ sudden threshold like that.

...like what? The example you give, of the number of heads yielded from 1000 coin flips, doesn't have a sudden threshold at any point. What do you mean by a sudden threshold?


You are right, we have to define the notion of "sudden". It works like this: as n goes to infinity, the probability of getting 49% heads or more tends to 100%, while the probability of getting 51% or more tends to 0%.
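
A quick simulation of exactly this (estimates only; the trial count is arbitrary): as n grows, the 49% tail probability climbs toward 1 while the 51% tail drops toward 0.

  import random

  def tail_prob(n, frac, trials=2_000):
      """Estimate P(at least frac*n heads among n fair coin flips)."""
      return sum(random.getrandbits(n).bit_count() >= frac * n
                 for _ in range(trials)) / trials

  for n in (100, 1_000, 10_000, 100_000):
      print(f"n={n:>6}: P(>=49% heads)={tail_prob(n, 0.49):.3f}, "
            f"P(>=51% heads)={tail_prob(n, 0.51):.3f}")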


In the coin-flip scenario, the intrinsic properties of a coin produce a threshold at 50% which, above a certain scale, is very "sharp" or sudden in its transition. Compare this to a hash table or to queuing theory, where a certain amount of capacity works well up to a certain point and then falls apart. If you can design a random test that displays such a property, you can locate the threshold empirically, before the fact, even without a way to calculate it directly the way we can for coin tosses.
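
For the hash-table half of that claim, here is one concrete stand-in (the comment doesn't name a structure, so I'm using a linear-probing table; sizes and load factors are arbitrary choices of mine): the cost per insertion barely moves until the load factor nears 1, then it explodes.

  import random

  def mean_insert_probes(table_size, load_factor):
      """Fill a linear-probing table to load_factor with random keys
      and return the mean number of probes per insertion."""
      table = [False] * table_size
      probes = 0
      target = int(load_factor * table_size)
      for _ in range(target):
          slot = random.randrange(table_size)  # stand-in for hash(key)
          probes += 1
          while table[slot]:                   # probe until a free slot
              slot = (slot + 1) % table_size
              probes += 1
          table[slot] = True
      return probes / target

  for alpha in (0.5, 0.75, 0.9, 0.99):
      print(f"load {alpha}: {mean_insert_probes(100_000, alpha):.1f} "
            f"probes per insert")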


The cases I know of tend to be about infinite systems, where there is a critical probability below which the chance of some property holding is exactly 0. Very often you also get that above that probability the chance is exactly 1 (because of [1]).

In general, the area I'm familiar with is 'percolation theory'.

[1] https://en.wikipedia.org/wiki/Kolmogorov%27s_zero%E2%80%93one_law
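
The classic concrete case is bond percolation on the 2D square lattice, where the critical probability is exactly 1/2. A small finite-size sketch (grid size and trial count are arbitrary; union-find with virtual left/right nodes is just one way to test for spanning): the chance of a left-to-right crossing swings from near 0 to near 1 around p = 0.5.

  import random

  def spans(n, p):
      """Open each bond of an n x n grid with probability p; report
      whether an open path connects the left edge to the right edge."""
      parent = list(range(n * n + 2))
      LEFT, RIGHT = n * n, n * n + 1

      def find(x):
          while parent[x] != x:
              parent[x] = parent[parent[x]]
              x = parent[x]
          return x

      def union(a, b):
          parent[find(a)] = find(b)

      for r in range(n):
          union(r * n, LEFT)           # column 0 touches the left edge
          union(r * n + n - 1, RIGHT)  # column n-1 touches the right edge
          for c in range(n):
              v = r * n + c
              if c + 1 < n and random.random() < p:
                  union(v, v + 1)      # open horizontal bond
              if r + 1 < n and random.random() < p:
                  union(v, v + n)      # open vertical bond
      return find(LEFT) == find(RIGHT)

  n, trials = 64, 200
  for p in (0.40, 0.45, 0.50, 0.55, 0.60):
      rate = sum(spans(n, p) for _ in range(trials)) / trials
      print(f"p={p:.2f}: crossing fraction {rate:.2f}")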


This is exactly where the Cuckoo Cycle [1] family of graph-theoretic Proof-of-Work systems gets its name.

The puzzle instances are pseudo-random graphs in which edges are defined by the siphash24 hash function, and the solutions are cycles of length L, which have a chance of about 1/L of occurring.

[1] https://github.com/tromp/cuckoo
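
As a toy check of that 1/L figure (very much a sketch: blake2b stands in for the siphash24 the real system uses, the node/edge counts are tiny illustrative values rather than Cuckoo Cycle's actual parameters, and the cycle counter is my own): in random bipartite graphs with as many edges as nodes per side (average degree 1), the mean number of L-cycles comes out close to 1/L.

  import hashlib
  import random
  from collections import defaultdict

  def endpoint(seed, i, side, n):
      """Pseudo-random endpoint of edge i (blake2b standing in
      for the siphash24 that Cuckoo Cycle actually uses)."""
      h = hashlib.blake2b(f"{i}/{side}".encode(), key=seed, digest_size=8)
      return int.from_bytes(h.digest(), "big") % n

  def count_cycles(adj, L):
      """Count simple cycles on exactly L vertices, each once."""
      total = 0
      def extend(start, v, length, visited):
          nonlocal total
          if length == L:
              total += start in adj[v]  # can the path close up?
              return
          for w in adj[v]:
              if w > start and w not in visited:
                  visited.add(w)
                  extend(start, w, length + 1, visited)
                  visited.remove(w)
      for s in list(adj):
          extend(s, s, 1, {s})
      return total // 2  # each cycle is walked in both directions

  n, L, trials = 256, 4, 300  # n nodes per side, n edges: avg degree 1
  counts = []
  for _ in range(trials):
      seed, adj = random.randbytes(16), defaultdict(set)
      for i in range(n):
          u = endpoint(seed, i, 0, n)      # left side: 0 .. n-1
          v = n + endpoint(seed, i, 1, n)  # right side: n .. 2n-1
          adj[u].add(v); adj[v].add(u)
      counts.append(count_cycles(adj, L))
  print(f"mean {L}-cycles per graph: {sum(counts) / trials:.3f} "
        f"(1/L = {1 / L:.3f})")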


Cuckoo hashing is a great example. I became interested in the problem solved by this paper precisely because I learned the analysis of the time complexity of cuckoo hashing just a few months ago.


How do you make the theorem talk about random number sequences? What should the finite set X be?



