I've only ever heard the same argument (this is the phrasing from Wikipedia, but everybody seems to say something along these lines):
>"However, in 1931, Gödel's incompleteness theorem proved definitively that PM, and in fact any other attempt, could never achieve this lofty goal; that is, for any set of axioms and inference rules proposed to encapsulate mathematics, there would in fact be some truths of mathematics which could not be deduced from them."
...so what?! The fact that there can be truths that can't be deduced from an "assembler language" just means that the system will sometimes say something like "error: no proof of 'fact x' found in the current model database", and the mathematician will then add "consider 'fact x' proven as defined in modelXYZ" (a model that can have a totally different logic than the current one; think of a model as a library written in a completely different programming language, to use a software analogy), taking responsibility for asserting the equivalence of the concepts 'fact x in the current model' and 'fact x in modelXYZ'.
The long-term goal would be to unify as many of the models as possible, preventing such "forced equivalences" (even though, as I understand Gödel, total unification is impossible; if something is proved impossible, that doesn't mean you can't get great benefits from getting asymptotically closer to it), but it would still be a working system in the meantime. And more importantly, I guess, the system would make the "forced equivalences" obvious and label them as problems for mathematicians to solve.
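To make that workflow concrete, here is a minimal sketch in Lean (Coq and Agda support the same pattern); `fact_x` is a hypothetical placeholder, not anything from the thread. Admitting an unproved statement amounts to postulating an axiom, and the checker keeps track of which results depend on it:

```lean
-- Hypothetical placeholder for the commenter's "fact x".
axiom fact_x : Prop

-- No proof of fact_x is available in the current system, so the
-- mathematician takes responsibility and postulates it: the analogue
-- of "consider 'fact x' proven as defined in modelXYZ".
axiom fact_x_holds : fact_x

-- Downstream results can use the postulate like any other lemma.
theorem uses_fact_x : fact_x := fact_x_holds

-- The "forced equivalence" stays visible: this command lists every
-- axiom the theorem ultimately depends on.
#print axioms uses_fact_x
```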
It wasn't stopped, it's just way, way harder than one would naively expect. Russell continued to work on it by developing type theory, which was later carried on by people like Alonzo Church and Per Martin-Löf. This led not just to types in programming, but also to the use of types in proof checkers like Coq and Agda, which are very much the kind of thing you're talking about. Efforts are ongoing to continue working towards the vision of automated theorem proving, but again, it is an extremely difficult problem.
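For anyone who hasn't used one of these checkers, a machine-checked proof is a small, fully explicit artifact that the tool verifies. A hedged sketch in Lean (Coq and Agda look broadly similar; `Nat.add_comm` is a standard-library lemma):

```lean
-- Commutativity of addition on the naturals, verified by the checker
-- rather than taken on trust; Nat.add_comm comes from Lean's library.
theorem add_comm_example (m n : Nat) : m + n = n + m :=
  Nat.add_comm m n

-- Concrete arithmetic facts can be discharged by pure computation.
example : 2 + 2 = 4 := rfl
```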