One really should digest the manifold hypothesis. It’s the most likely explanation of how AI works.
The question is whether there are high-dimensional patterns that are the solutions to meaningful problems. I say meaningful because, so far, I’ve mainly seen AI solve problems that may be hard but aren’t really meaningful in the sense that somebody solving them would gain a lot from it.
Whether these patterns are the fundamental truth of how we solve problems, or something completely different, we don’t know, and that is the ten-trillion-dollar question.
I would hope it’s not the case, as I quite enjoy solving problems. My gut feeling also tells me it’s just using existing patterns to solve problems that nobody has tackled really hard. It would be nice to know that humans are unique in that way, but maybe this is exactly how we work too? That really leads back to a free-will discussion. Yes, very interesting.
But just to give some examples of what I mean by meaningful problems:
Can an AI start a restaurant and make it work better than a human? (Prompt: "I’m your slave, let’s start a restaurant")
Can an AI sign up as a copywriter on Upwork and make money? (Prompt: "Make money online")
Can an AI, without supervision, make a scientific breakthrough that has a provably meaningful impact on us? (Prompt: "Help humanity")
Can an AI manage geopolitics?
These are meaningful problems, and different from any coding tasks or olympiad questions. I’m aware that I’m just moving the goalposts.
One of AI’s strengths is definitely exploration, e.g. in finding bugs, but it still has a high false-positive rate. Depending on the context, that may or may not matter.
Also, one has to be aware that there are a lot of bugs that AI won’t find but humans would.
I don’t have the expertise to verify this bug actually happened, but I’m curious.
It's not even clear whether AI was used to find the bug: they mention modeling the software with an "AI-native" language, whatever that means. What is also not clear is how they came to be modeling the gyroscope software of the Apollo code in the first place.
But I do think their explanation of the lock acquisition and the failure scenario is quite clear and compelling.
Anyway, it seems it would take a dedicated professional serious work to determine whether this bug is real. And considering this looks like an ad for their business, I would be skeptical.
(Apache Drools is an open-source rule language and interpreter for declaratively formulating and executing rule-based specifications; it integrates easily with Java code.)
That does not resolve my confusion, especially since static analysis could reach the same conclusion with that language. It's not clear what role AI played at all.
The article does not explain anything about how they used AI; it just has some relation to a behavioral model a human seems to have written (and AI does not seem necessary to use it!).
Where do you think my confusion came from? All it says is that AI assists in resolving the gyroscope lock path, not why they decided to model the gyroscope lock path in the first place.
Please, keep your offensive comments to yourself when a clarifying comment might have sufficed.
The endgame is IPOing those AI companies and getting them onto indexes, forcing index funds to buy them. Index funds seemed to be an evergreen investment category, but I’m not so sure anymore.
Thank you so much. These comments help me keep my sanity in an over-hyped world.
I see how people think it’s more productive, but honestly I iterate on my code 10-15 times before it goes into production, to make sure it logs the right things, communicates intent clearly, the types are shared and defined where they should be, it’s stored in the right folder, and so on.
Whilst the lazy option of just passing it to CC is there, I feel more productive writing it on my own, because I work in small iterations, especially when I need to test things.
Let’s say I have to build an automated workflow, and for step 1 alone I need to test error handling, max concurrency, set up idempotency, proper logging, and proper intent communication to my future self. Once I’m done I never have to worry about this specific code again (okay, some errors can be tricky, to be fair); often this function is practically just my thought, available whenever I need it. This only works with good variable naming and good spacing within a function. Nobody really talks about it, but if a very unimportant part takes up a lot of space in a service, it should probably be refactored into a smaller service.
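To make that concrete, here is a minimal Python sketch of what such a step might look like (all names are hypothetical, not from any particular codebase), covering the three concerns mentioned: idempotency, a concurrency cap, and logging.

```python
import logging
import threading

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("workflow")

# Hypothetical step-1 scaffolding: processed IDs for idempotency,
# a semaphore to cap concurrency, and logging on every outcome.
_seen = set()
_seen_lock = threading.Lock()
_sem = threading.Semaphore(4)  # at most 4 concurrent executions


def run_step(request_id: str, work) -> bool:
    """Run `work()` at most once per request_id; return False on duplicates."""
    with _seen_lock:
        if request_id in _seen:
            log.info("skipping duplicate %s", request_id)
            return False
        _seen.add(request_id)
    with _sem:  # bound concurrency
        try:
            work()
            log.info("done %s", request_id)
            return True
        except Exception:
            log.exception("failed %s", request_id)
            with _seen_lock:  # forget the ID so a retry is possible
                _seen.discard(request_id)
            raise
```

The point isn’t this exact shape; it’s that each concern is visible and testable on its own, so the function can answer future questions quickly.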
The goal is a function that I probably never have to look at again, and if I do, it answers as fast as possible all the questions my future self would ask after he’s forgotten what decisions had to be made or how the external parts work. When it breaks, I know what went wrong, and when I run it in an orchestration, I get the right amount of feedback.
Like others, I could go on about this for a long time, and I’m aware of the other side of the coin, over-engineering, but I just feel that having solid composable units actually enables you to later build features and functionality that might become a moat.
Slow, flaky units are far less likely to become an asset.
And even if I let AI draft the initial flow, honestly my review will never be as good as the step-by-step work I built myself.
I have to say AI is great for improving you as a developer: it double-checks you and answers broad questions before things get too detailed and you need to experiment or read docs. It helps cover all the basics.
So don't write slow, flaky unit tests? Or better yet, have the AI make them not slow and not flaky? Or, if you wanna be old school, figure out why they're flaky yourself and then fix it? If it's a timing thing, fix that; if it's a database thing, mock the hell out of it and write integration tests. But at this point, if your tests suck, you only have yourself to blame.
Sorry, I don’t get your point, and you didn’t seem to get mine.
I’m saying I would guess I’m faster building it manually than letting AI write it; arguably AI won’t even reach the level I ultimately feel best about, i.e. the one with the best business impact on my project.
Also, the way I semantically define unit tests is that they are instant and non-flaky because they are deterministic; anything else is a service test to me.
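A tiny sketch of what I mean by deterministic (hypothetical names): keep the clock behind an injectable seam instead of reading it inside the function, and the test becomes instant and non-flaky.

```python
# Hypothetical example: injecting the clock keeps the unit test deterministic.
def is_expired(created_at: float, ttl: float, now) -> bool:
    """True if the record is older than `ttl` seconds at time `now()`."""
    return now() - created_at > ttl


def test_is_expired():
    fixed_now = lambda: 1000.0  # frozen clock, so no timing flakiness
    assert is_expired(created_at=0.0, ttl=500.0, now=fixed_now)
    assert not is_expired(created_at=900.0, ttl=500.0, now=fixed_now)
```

In production you would pass `time.time` as `now`; the test passes a frozen stub, so the same inputs always give the same result.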
I now deleted most of it and refactored it into an agile codebase with 40k fewer lines. My life is peaceful again.
I now use AI for scratch tests and the Next Edit function from JetBrains.