MathYouF's comments

"Bachelor's degrees have become essential for well-paid jobs in the US."

The lies we continue to allow ourselves to tell as a society.


The real moment of truth will be if any models start to assist massively in research in the hard sciences.

Based on the quality of outputs I get when asking for help with somewhat complex AI research problems, I think it will likely accelerate the pace of other research as well, and discovery will be limited by how fast people can run the tests it suggests and feed the results back.


The main value of using SDF shapes for 3D modeling workflows is that you don't have to worry about topology (the vertex/edge/face graph structure that must be maintained over the surface of every 3D mesh), which makes a lot of modifiers (like boolean combinations of intersecting objects) vastly less tedious (Womp calls this feature "goop").

Right now Blender work still involves a lot of tedium, mostly related to topology. A lot of upcoming 3D ML applications also work considerably better when using SDF instead of mesh representations. I wouldn't be surprised to see this form of 3D modeling take off to a significant degree because of those two factors.
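The "goop" blending mentioned above comes from how cheaply SDFs compose. A minimal sketch (function names are mine, not from any particular tool) using the widely known polynomial smooth-minimum trick: blending two distance fields gives a rounded seam where meshes would need careful boolean topology.

```python
import math

def sdf_sphere(p, center, radius):
    """Signed distance from point p to a sphere: negative inside, positive outside."""
    return math.dist(p, center) - radius

def smooth_union(d1, d2, k=0.5):
    """Polynomial smooth minimum: blends two SDFs so intersecting
    shapes merge with a rounded seam instead of a hard crease."""
    h = max(k - abs(d1 - d2), 0.0) / k
    return min(d1, d2) - h * h * k * 0.25

# Two overlapping unit spheres "gooped" together at a point between them:
d1 = sdf_sphere((0.5, 0.0, 0.0), (0.0, 0.0, 0.0), 1.0)
d2 = sdf_sphere((0.5, 0.0, 0.0), (1.0, 0.0, 0.0), 1.0)
print(smooth_union(d1, d2))  # -0.625: deeper inside than either sphere alone
```

The blend is just arithmetic on two floats, which is why SDF editors can offer it interactively; no remeshing or intersection computation happens at all.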


Blender sort of has them...

There were the original metaballs. But more recently there have also been SDF add-ons using geometry nodes [1] that mimic the same workflow, with my guess being that they use voxels to generate the final polygon mesh that Blender needs, since it's not a fully SDF editor. Although, while I was googling this, I did find someone who managed to do it with pure shaders [2], which is pretty cool.

Also, thanks for actually explaining that. I've seen a few examples of this kind of "clay-like" sculpting approach that tries to make things easier for artists. Adobe's Modeler uses SDFs, for example.

[1] https://blenderartists.org/t/geometry-nodes-in-3-3-sdf-prese...

[2] https://www.youtube.com/watch?v=sqDCPW85tuQ


Blender already has metaballs. It's just not user-friendly or multiplayer.

Interestingly, most folks think of 3D modeling as quad modeling/subdivision surfaces, but Toy Story 1 was done completely with NURBS (also supported by Blender).


You can throw a voxel remesh modifier onto your model in Blender to get the same functionality. It will convert your model from polygons to SDF and then back to polygons.


I would imagine that's a fairly lossy process with some downsides?

Ideally, the end result of an SDF pipeline is pixels. Going back to polygons throws away much of the advantage of SDFs. Raymarching is costly and rarely used in realtime engines, but Blender isn't realtime, so rendering SDFs directly would probably be viable.
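For context on what "rendering SDFs directly" means here: the standard approach is sphere tracing, where you step along a ray by exactly the distance the SDF reports, since that is the largest step guaranteed not to pass through the surface. A minimal sketch with a made-up one-sphere scene (`sdf_scene` is hypothetical, not any engine's API):

```python
import math

def sdf_scene(p):
    # Hypothetical scene: a single unit sphere at the origin.
    return math.dist(p, (0.0, 0.0, 0.0)) - 1.0

def raymarch(origin, direction, max_steps=128, eps=1e-4, max_dist=100.0):
    """Sphere tracing: advance along the ray by the SDF value until
    we are within eps of a surface (hit) or exceed max_dist (miss)."""
    t = 0.0
    for _ in range(max_steps):
        p = tuple(o + t * d for o, d in zip(origin, direction))
        d = sdf_scene(p)
        if d < eps:
            return t  # hit: distance along the ray to the surface
        t += d
        if t > max_dist:
            break
    return None  # miss

# A ray starting at z = -3 aimed straight at the unit sphere hits at t = 2.
print(raymarch((0.0, 0.0, -3.0), (0.0, 0.0, 1.0)))
```

The per-pixel cost is this whole loop, which is why raymarching is expensive relative to rasterizing triangles; an editor viewport can afford it more easily than a full game frame.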


> Raymarching is costly and rarely used in realtime engines but Blender isn't realtime so rendering SDFs directly would probably be viable.

The viewport still needs to update in real time for it to be a viable modeling workflow, though, right?


It's a lot easier to handle realtime in an editor than in a fully featured game. There's less other stuff going on, you can take extra shortcuts and usually rely on higher-end hardware.


> It's a lot easier to handle realtime in an editor than in a fully featured game. There's less other stuff going on

I don't think that's true. Most games are optimized to limit the number of draw calls and textures. The viewport of a 3D software package has no such limits. As a result, my viewport rarely runs as well as the games I play.

In any case, that's beside the original point, which was that it wouldn't need to run in realtime. My argument was that it would.

It's not a completely impossible goal. Look into what some artists are making with Dreams on the PS4. It uses raymarched SDFs for modeling.


Point taken.

Small correction: Dreams isn't purely raymarched SDFs. From recollection they ended up with a fairly hybrid approach.


I haven't read much about it, to be honest. I know they must be doing something clever under the hood to make it run so well on that hardware.


Materials, transportation, and time on an expensive CNC machine will be the major costs of sculpture. Generating 3D models of the same quality is at most 18 months away. And animating and rigging the models and giving them auto-generated RL policies will surely come very quickly after that.


If you were going to make a 3D-modeled sculpture, you would probably want to just 3D print it instead of using a CNC. In any case, sculptures aren't as simple as cutting a 3D form out of a hunk of metal; the variety of materials and techniques is arguably much more interesting than the shape at the end. And the physical nature of it by definition resists the infinite generative shit that AI throws onto the internet; there isn't space in the world to store tons and tons of nonsense sculptures, and the cost and time of making them are nontrivial as well.


> I don't think AI will be able to replace human creativity for discovering new paradigms as fast as it will replace human application of existing paradigms. And by doing the latter really well with AI, we're killing our ability to do the former. We'll end up with a sterile art trajectory.

This may actually end up making the few artists creative enough to create bold new art styles even more valuable, if they can basically not release their art and hide it behind a model.

Though I guess anyone with access to that model's output could then just generate a few samples and train on those, so maybe not.


With your expertise and the fact that they're so early, you should be able to build a competing business with your superior product right?

Then you can keep all the rewards (and find out all they had to do to be in the position to be able to hire you).

Sounds like a win-win, you should be talking to startup accelerators not startups hiring.


Sounds more like a potential lawsuit to me.


Exactly.


> Numeracy is just as valuable as literacy.

I actually think this is likely not true.

If you were to take an 18 year old who can't read, and another one who can't do addition, which do you think is less employable?

There's a lot of jobs one can do without any math. Almost none one can do without any reading.

There's more to it than all this of course, but I think literacy is the clear winner compared to numeracy.


Just living life requires basic algebraic skills. Loans, insurance, taxes, and virtually any kind of projection into the future require concepts like X = 2Y. Understanding whether something is linear or exponential is critical, and understanding when one is better than the other requires learning how we as a society express those concepts, which is algebra.

The person who can't read can be a traffic guard, server (with the right cash register), bricklayer, or any of a number of jobs. However, all of those people need to know that they worked X hours @ $15/hour and should be paid 15X. Otherwise they will never know if they were ripped off or be able to plan for the future.
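The linear-vs-exponential distinction above can be made concrete with a loan comparison; the figures here are purely illustrative, not from the comment:

```python
# Hypothetical: $10,000 borrowed at 5%/year, either at simple (linear)
# interest or at compound (exponential) interest.
principal, rate = 10_000, 0.05

for years in (1, 10, 30):
    simple = principal * (1 + rate * years)     # grows linearly with time
    compound = principal * (1 + rate) ** years  # grows exponentially with time
    print(years, round(simple, 2), round(compound, 2))
```

At one year the two are identical; by year 30 the compound balance (about $43,219) is nearly double the simple one ($25,000), which is exactly the kind of gap someone without basic algebra can't anticipate.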


I have a suspicion that we both agree that both numeracy and literacy are very important, and picking the winner doesn't change a lot.

That said, addition is a very low bar--I cannot think of a single job that would not require at least basic addition.


The tone of this suggests a more argumentative than collaborative conversation style than I'd want to engage with further (as seems common, I've noticed, amongst anti-connectionists), but I did find one point interesting for discussion.

> Parameters are just the database of the system.

Would any equation's parameters be considered just the database, then? The c in E=mc^2, or the 2 in a^2+b^2=c^2?

I suppose those numbers are basically a database, but the relationships (connections) they have to the other variables (inputs) represent a demonstrable truth about the universe.

To some degree every parameter in a NN also represents some truth about the universe. How general and compact that representation is remains unknown (likely less of both traits than we'd like).


There's a very literal sense in which NN parameters are just a database. As in, it's fairly trivial to get copyrighted output verbatim from a trained NN (e.g., Quake source code from GitHub Copilot, etc.).

"Connectionists" always want to reduce everything to formulae with no natural semantics and then equivocate this with science. Science isn't mathematics. Mathematics is just a shorthand for a description of the world made true by the semantics of that description.

E=mc^2 isn't true because it's a polynomial, it doesn't mean a polynomial, and it doesn't have "polynomial properties", because it isn't about mathematics. It's about the world.

E stands for energy, m for mass, and c for a geometric constant of spacetime. If they stood for other properties of the world, the formula would, in general, be false.

I find this "connectionist supernaturalism" about mathematics deeply irritating, it has all the hubris and numerology of religions but wandering around in a stolen lab coat. Hence the tone.

What can one say or feel in the face of the overtaking of science by pseudoscience? It seems plausible to say that now, today, more pseudoscientific papers are written than scientific ones. A generation of researchers is doing little more than analysing ink-blot patterns and calling them "models".

The insistence, without explanation, that this is a reasonable activity pushes one past tolerance on these matters. It's exasperating... from psychometrics to AI, the whole world of intellectual life has been taken over by a pseudoscientific analysis of non-experimental post-hoc datasets.


This discussion (the GP and your response) perhaps suggests that evaluating the intelligence of an AI may require more than the generation of some content: also citations and supporting work for that content. I guess I'm suggesting that the field could benefit from a shift toward explainability-first models.


I'm not anti-connectionist, but if I were to put myself in their shoes, I'd respond by pointing out that in E=mc^2, c is a value which directly correlates with empirical results. If all of humanity were to suddenly disappear, a future advanced civilization would rediscover the same constant, though maybe with different units. Their neural networks, on the other hand, would probably be meaningfully different.

Also, the c in E=mc^2 has units which define what it means in physical terms. How can you define a "unit" for a neural network's output?

Now, my thoughts on this run contrary to what I've said so far. Even though neural network outputs aren't easily defined currently, there are some experimental results showing neurons in neural networks demonstrating symbolic-like, higher-level behavior:

https://openai.com/blog/multimodal-neurons/

Part of the confusion likely comes from how neural networks represent information -- often by superimposing multiple different representations. A very nice paper from Anthropic and Harvard delved into this recently:

https://transformer-circuits.pub/2022/toy_model/index.html


Related: Polysemanticity and Capacity in Neural Networks https://arxiv.org/abs/2210.01892


If greater parameterization leads to memorization rather than generalization, it's likely a failure of our current architectures and loss formulations rather than evidence that fewer parameters inherently improve generalization. Other animals do not generalize better than humans despite having fewer neurons (or their generalizations betray a misunderstanding of the number and depth of subcategories things have, like when a dog barks at everything that passes by the window).


The team I work on is building tooling with exactly that in mind, making this part of an artist's workflow rather than any sort of replacement.

