apw's comments | Hacker News

I found these slides very helpful:

http://fodava.gatech.edu/files/uploaded/DLS/Vapnik.pdf

In particular, I was unsure after reading the original article whether the additional information--for example, the poetry--was available to the learner on test inputs. The above slides explicitly state that it is not.
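
A toy sketch of that train/test asymmetry, purely for illustration (this is not Vapnik's SVM+ algorithm, and the way X_priv is used below is invented; X, X_priv, and y are assumed to be NumPy arrays):

    from sklearn.linear_model import LogisticRegression

    class PrivilegedClassifier:
        def fit(self, X, X_priv, y):
            # Privileged features (e.g., something derived from the poetic
            # descriptions) are available here, at training time only.
            # This particular weighting is made up for illustration.
            w = 1.0 / (1.0 + X_priv.std(axis=1))
            self._clf = LogisticRegression().fit(X, y, sample_weight=w)
            return self

        def predict(self, X):
            # At test time only the ordinary features X are available;
            # X_priv never appears in this signature.
            return self._clf.predict(X)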


One possibility is to use the neuromorphic chips as souped-up branch predictors -- instead of predicting one bit, as in a branch predictor, predict all bits relevant for speculative execution. This can effect large-scale automatic parallelization.

See this paper at ASPLOS '14 for details:

http://hips.seas.harvard.edu/content/asc-automatically-scala...
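
A toy sketch of the underlying idea, in case it helps (this is my own illustration, not the ASC system from the paper): predict the whole intermediate state instead of a single branch bit, execute the later portion speculatively from the prediction, and keep the result only if the prediction verifies.

    # Toy illustration: split a loop in half, predict the state at the
    # midpoint, run both halves (conceptually in parallel), and commit
    # the speculative half only if the prediction was correct.

    def run(step, state, n):
        for _ in range(n):
            state = step(state)
        return state

    def speculative_run(step, state, n, predict_midpoint):
        mid = n // 2
        guess = predict_midpoint(state, mid)   # predict *all* relevant state
        first = run(step, state, mid)          # these two calls could run concurrently
        second = run(step, guess, n - mid)     # speculative half, from the guess
        if first == guess:                     # verification
            return second                      # speculation paid off
        return run(step, first, n - mid)       # misprediction: redo the second half

    # Example with a trivially predictable state (a counter):
    step = lambda s: s + 1
    print(speculative_run(step, 0, 10, lambda s, k: s + k))   # -> 10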


The IBM Blue Gene/Q cannot in any reasonable way be described as "racks of PCs running Linux with souped up network cards".


I thought the Owhadi et al. paper was about model misspecification, i.e., the true model is not in the hypothesis space. That's fundamentally different from--and far less of a problem than--gradient descent's "sensitivity to initial conditions".


It's not clear that a nicely formatted error message is preferable to a core dump.

With a core dump one can explore the execution environment at the time of the crash.

A nice compromise is a macro that prints an error message and then calls __builtin_trap().
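
A minimal sketch of such a macro (the name and message format are my own; __builtin_trap() is a GCC/Clang builtin that executes an illegal instruction, so you still get a core dump to poke at):

    #include <stdio.h>

    /* GNU-style variadic macro: print file/line and a message, then trap.
       The process still dumps core after the message is printed. */
    #define DIE(fmt, ...)                                              \
        do {                                                           \
            fprintf(stderr, "%s:%d: " fmt "\n",                        \
                    __FILE__, __LINE__, ##__VA_ARGS__);                \
            __builtin_trap();                                          \
        } while (0)

    /* Usage:  if (len < 0) DIE("negative length: %d", len);  */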


But a nicely formatted error message, so long as it is correct, can be read by many more people than a core dump.

I don't know how I would read a core dump from a Python program to work out where an exception came from.


I don't think that "it can be emulated by computers" follows from "intelligence is a purely physical phenomenon".

Even fairly simple quantum systems (which are "purely physical phenomena") cannot be emulated by any classical computer in any meaningful sense, since the computational complexity of integrating the dynamical equations grows exponentially with the size of the system. Even if we could recruit all the atoms in the known Universe, we still couldn't build a classical computer capable of emulating many simple quantum systems.
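
A quick back-of-the-envelope version of that claim, assuming a brute-force state-vector representation (one 16-byte complex amplitude per basis state):

    # n two-level subsystems need 2**n complex amplitudes to represent
    # a general state exactly.
    for n in (30, 50, 100, 300):
        amplitudes = 2 ** n
        print(f"n = {n:3d}: {amplitudes:.2e} amplitudes, "
              f"~{16 * amplitudes:.2e} bytes")

    # With roughly 10**80 atoms in the observable Universe, even one
    # amplitude per atom is exhausted by about n = 266.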


"Even if we could recruit all the atoms in the known Universe"

In the past 233 years we have gone from the first steam engine to the iPhone 5S. I would say we have a pretty good track record of overcoming miniaturization problems.


Then again, most people don't think that quantum-level phenomena have anything to do with human intelligence.


I'm not sure it's clear that most people don't think that. There have been a lot of articles on quantum effects in the human brain lately, e.g., http://goo.gl/Ff0elU


Those theories (including the one in your link) have only one source: Roger Penrose.

If you open any neuroscience textbook, you will find no mention of quantum phenomena in relation to consciousness.


I was referring to the "recent discovery of quantum vibrations in microtubules inside brain neurons," which is a fact independent of any particular theory or interpretation thereof.


It's quite likely that he was using `perf':

    $ perf stat make
    ...
    116,222 page-faults               #    0.046 M/sec
    ...
https://perf.wiki.kernel.org/index.php/Tutorial


I'm surprised someone hasn't written a Markov chain-based Yik Yak post generator.

It would learn all the worst insults at a particular school, then apply them to the entire student body at random intervals. After a while, nobody could tell the actual malicious posts from the random posts, so all would be ignored.

Or so I would hope.
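
For what it's worth, the generator itself is close to a weekend project. A rough sketch (the posts corpus is assumed to be a list of strings scraped from the app; word-level, order-2 chains are an arbitrary choice):

    import random
    from collections import defaultdict

    def build_chain(posts, order=2):
        """Map each `order`-word prefix to the words observed after it."""
        chain = defaultdict(list)
        for post in posts:
            words = post.split()
            for i in range(len(words) - order):
                chain[tuple(words[i:i + order])].append(words[i + order])
        return chain

    def generate(chain, order=2, max_words=30):
        out = list(random.choice(list(chain)))      # random starting prefix
        while len(out) < max_words:
            followers = chain.get(tuple(out[-order:]))
            if not followers:
                break
            out.append(random.choice(followers))
        return " ".join(out)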


This is a great idea. Feed enough misdirection into the system and the whole thing collapses.


While the visualization is attractive, my fear is that it cannot convey the deeper reason why those sine and cosine waves magically sum to the desired function.

Leaving rigour aside for the moment: think of functions f : R -> R as infinite-dimensional vectors. The integer harmonics of sine and cosine form an orthonormal set of "vectors" that is a basis for these functions (some fine print about periodicity and square-integrability goes here).

Now compute the inner product of your desired function with every element of that basis. Each such inner product is a real number which we will call a coefficient. The list of nonzero coefficients, once you have computed them, is a complete description of your function.

Now it is clear why those sine and cosine functions "magically" add up to your desired function: we are simply multiplying each basis function by the coefficient we computed above and summing the results.

That visualization is no more (or less!) amazing than the fact that (1, 2, 3) = 1(1, 0, 0) + 2(0, 1, 0) + 3(0, 0, 1).
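
If a numerical check helps, here is the same idea in NumPy (the square wave is my arbitrary choice of example function):

    import numpy as np

    x = np.linspace(0, 2 * np.pi, 4096, endpoint=False)
    f = np.sign(np.sin(x))                    # the function we want to describe
    dx = x[1] - x[0]
    n_max = 25

    # Each coefficient is the inner product of f with a basis function.
    a0 = np.sum(f) * dx / (2 * np.pi)
    a = [np.sum(f * np.cos(n * x)) * dx / np.pi for n in range(1, n_max)]
    b = [np.sum(f * np.sin(n * x)) * dx / np.pi for n in range(1, n_max)]

    # Scaling each basis function by its coefficient and summing
    # reconstructs f, up to truncation error that shrinks as n_max grows.
    recon = a0 + sum(a[n - 1] * np.cos(n * x) + b[n - 1] * np.sin(n * x)
                     for n in range(1, n_max))
    print(np.sqrt(np.mean((f - recon) ** 2)))  # RMS error, ~0.1 for n_max = 25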


If you can express your workflow as a set of dependencies (granted, not all workflows are easily expressed this way), make gives you parallel and incremental computation "for free".

Imagine that you needed to download a tar archive, unpack it, then run several simulations followed by regressions followed by figure plotting on the data. You could write a shell script to do this, but it would be hard for the shell script to match the parallelism of `make -j', and you'd have to do a lot of timestamp and file-existence checking to reproduce make's incremental computation.
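
Here's roughly what that example looks like as a Makefile (the tool names, URL, and file layout are hypothetical; recipe lines must start with a tab):

    # `make -j4` runs independent simulations in parallel; touching one
    # input re-runs only the targets that depend on it.
    SIMS := a b c

    all: $(SIMS:%=plots/%.png)

    data.tar.gz:
    	curl -o $@ http://example.com/data.tar.gz

    data/unpacked.stamp: data.tar.gz
    	mkdir -p data && tar xzf $< -C data && touch $@

    results/%.csv: data/unpacked.stamp
    	mkdir -p $(@D) && ./simulate $* > $@

    regressions/%.csv: results/%.csv
    	mkdir -p $(@D) && ./regress $< > $@

    plots/%.png: regressions/%.csv
    	mkdir -p $(@D) && ./plot $< $@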

