
> program induction is in fact too general to be very interesting

Intelligence is general and is capable of coping with novel problems. AI therefore involves building machines which have that level of generality. To say this isn't interesting is to say AI isn't interesting.

If you published exactly the problems that were going to be in the test, programs would be written that would solve those problems but would be very poor at doing anything else.

Anyway here are some input-output pairs that I think would be suitable for a learning program:

(i) append 'x' to the end of the string:

  r => rx
  123 => 123x

Similarly other insertions, deletions, copies of characters or groups of characters.
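A learner for this first family of tasks can be sketched directly. Here is a minimal Python fragment (the function name is my own, not from any existing system) that checks whether a single "append this suffix" rule explains every pair, and extracts the suffix if so:

```python
def induce_suffix_rule(pairs):
    """Guess a rule of the form 'append a fixed suffix to the input'.
    Returns the suffix if one such rule fits every pair, else None."""
    suffixes = set()
    for inp, out in pairs:
        if not out.startswith(inp):
            return None  # output is not input plus a suffix
        suffixes.add(out[len(inp):])
    # a consistent rule exists only if every pair implies the same suffix
    return suffixes.pop() if len(suffixes) == 1 else None

rule = induce_suffix_rule([("r", "rx"), ("123", "123x")])
print(rule)  # x
```

Insertions, deletions and copies would each need their own rule family along the same lines; the point is only that each family is trivially searchable once you commit to it.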

(ii) learning Roman or Arabic numerals or other number-coding schemes:

  / => i
  // => ii
  /// => iii
  ///////// => ix

  3 => xxx
  7 => xxxxxxx
  9 => xxxxxxxxx
  11 => xxxxxxxxxxx

Ideally, once the program has learnt the above two functions it should have an understanding of the underlying concepts and therefore find it easier to learn functions such as:

  iii => 3
  xii => 12
  xvi => 16
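As a sketch of that kind of transfer, here is a minimal hypothesis-testing setup in Python (the hypothesis functions and all names are illustrative, not a real induction system): the learner is handed candidate decoding schemes and keeps whichever one is consistent with the pairs.

```python
# Candidate hypotheses for decoding a numeral string into an integer.
def unary(s):
    """Tally notation: the value is just the number of '/' marks."""
    return len(s) if set(s) <= {"/"} else None

def roman(s):
    """Roman numerals over i, v, x, with subtractive pairs like 'iv'."""
    vals = {"i": 1, "v": 5, "x": 10}
    if not set(s) <= set(vals):
        return None
    total = 0
    for a, b in zip(s, s[1:] + " "):
        v = vals[a]
        total += -v if vals.get(b, 0) > v else v  # subtract before a bigger numeral
    return total

def pick_hypothesis(pairs, hypotheses):
    """Return the first hypothesis consistent with every (input, output) pair."""
    for h in hypotheses:
        if all(h(i) == int(o) for i, o in pairs):
            return h
    return None

h = pick_hypothesis([("iii", "3"), ("xii", "12"), ("xvi", "16")], [unary, roman])
print(h("xiv"))  # 14
```

Once `roman` is selected it generalises to unseen numerals like `xiv`, which is the sense in which earlier learning should make the later function easier.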



> Intelligence is general and is capable of coping with novel problems. AI therefore involves building machines which have that level of generality.

There is a bias when you think of "programs" to think of programs that a human would actually write or consider useful. But in fact, this is only a tiny (and nontrivial to specify) fraction of the space of "possible programs" - see for example _Foundations of Genetic Programming_, by Langdon and Poli, for further explorations of general program space. Most possible programs are completely useless and uninteresting to humans (same with proofs, grammars, etc.).

To do AI you have to realize that in the sentences: "Humans have general intelligence and can solve problems in many domains" and "Breadth-first search is a problem-general technique that will always find a solution if one exists" the word "general" does not mean the same thing at all! (cf. the huge literature on inductive bias in human cognition).
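To make the second sense of "general" concrete, here is a toy breadth-first enumeration over a three-primitive string DSL (all names illustrative). It is guaranteed to find a program fitting the pairs whenever one exists within the depth bound, but it has no inductive bias beyond preferring shorter programs:

```python
from collections import deque

# A tiny DSL of string -> string primitives (names are illustrative).
PRIMS = {
    "append_x": lambda s: s + "x",
    "reverse":  lambda s: s[::-1],
    "double":   lambda s: s + s,
}

def bfs_induce(pairs, max_depth=4):
    """Breadth-first search over compositions of primitives; returns the
    shortest program (a list of primitive names) fitting all pairs."""
    queue = deque([[]])
    while queue:
        prog = queue.popleft()

        def run(s):
            for name in prog:
                s = PRIMS[name](s)
            return s

        if prog and all(run(i) == o for i, o in pairs):
            return prog
        if len(prog) < max_depth:
            for name in PRIMS:
                queue.append(prog + [name])
    return None

print(bfs_induce([("r", "rx"), ("123", "123x")]))  # ['append_x']
```

This is "general" in the breadth-first-search sense: complete within its bound, but blind. Human generality is the other kind - heavily biased toward the hypotheses that tend to matter.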

For a nice list of program induction problems solvable by search (with a system called ADATE), see http://www-ia.hiof.no/~rolando/Examples/index.html .


You're right that most possible programs are useless and uninteresting. So I agree with you that if a learning program were given problems to solve that were generated from random programs, that wouldn't be interesting.

I am not suggesting doing that. What I am suggesting is that the problems-to-solve be generated by humans.

Incidentally, what I am suggesting isn't quite inductive programming but something slightly more general, in that the job of the problem-solver isn't to generate a program that solves the input/output pairs, but to guess what the next output will be given its input. (The problem-solver may or may not do this by internally creating a program that solves the problem.)
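A sketch of that interface in Python (assuming, purely for illustration, a single "append a fixed suffix" hypothesis class): the predictor only exposes observe/predict and never has to return a program.

```python
class SuffixPredictor:
    """Predict the next output from past (input, output) pairs.
    Internally this one happens to fit an 'output = input + suffix'
    model, but a caller only sees predictions, never a program."""

    def __init__(self):
        self.pairs = []

    def observe(self, inp, out):
        self.pairs.append((inp, out))

    def predict(self, inp):
        # collect the suffix implied by each pair that fits the model
        suffixes = {o[len(i):] for i, o in self.pairs if o.startswith(i)}
        if len(suffixes) == 1:
            return inp + suffixes.pop()
        return None  # no single consistent rule yet, so no guess

p = SuffixPredictor()
p.observe("r", "rx")
p.observe("123", "123x")
print(p.predict("ab"))  # abx
```

The scoring is then on prediction accuracy, which leaves the solver free to represent its hypothesis however it likes.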

Anyway I've done a quick write-up of what I propose: http://www.includipedia.com/wiki/User:Cabalamat/Function_pre...






