Not sure what counts as a "business problem" for you, but personally I couldn't have gotten as far as I have with game development without it, as I really struggle with the math and I don't know many people locally who develop games that I could get help from. GPT-4 has been instrumental in helping me understand concepts I'd tried and failed to learn before, and it helps me implement algorithms whose inner workings I don't really understand, even though I understand the value of the specific algorithm and how to use it.
In the end, it sometimes requires extensive testing, as things are wrong in subtle ways, but the same goes for the code I write myself. I'm happy to just get further than has been possible in the ~20 years I've been trying to do this on my own.
Ultimately, I want to finish games and sell them, so for me this is a "business problem", but I could totally understand that for others it isn't.
Sounds like you need to learn to search. There's tons of resources on game dev. I can sort of see the value of using GPT here, but have you tried using it in an area you're an expert in? The rate of convincing bullshit vs. correct answers is astonishing. It gets better with Phind/Bing, but then it's a roulette whether it hits valid answers in the index fast enough.
My point is - learning with GPT at this point sounds like setting yourself up for failure: you won't know when it's bullshitting you, and you're missing out on learning how to actually learn.
By the time LLMs are reliable enough to teach you, whatever you're learning is probably irrelevant, since it can be solved better by an LLM.
Of course I've searched and tried countless avenues to pick this up. I'm not saying it's absolutely impossible without GPT, just that I've found it the easiest way of learning.
And it's not "Write a function that does X" but more employing the Socratic method to help me further understand a subject, that I can then dive deeper into myself.
But having a rubber duck is of infinite worth; if you happen to be a programmer, you can probably see the value in this.
> have you tried using it in an area you're an expert in? The rate of convincing bullshit vs. correct answers is astonishing. It gets better with Phind/Bing, but then it's a roulette whether it hits valid answers in the index fast enough.
Yes, programming is my area of expertise, and I use it daily for programming; it's doing fine for me (GPT-4, that is; GPT-3.5 and earlier models are basically trash).
Bing is probably one of the worst implementations of GPT I've seen in the wild, so it seems like our experience already differs quite a bit.
> you won't know when it's bullshitting you, and you're missing out on learning how to actually learn.
Yeah, you can tell relatively easily if it's bullshitting and making things up, if you're paying any sort of attention to what it tells you.
> By the time LLMs are reliable enough to teach you, whatever you're learning is probably irrelevant, since it can be solved better by an LLM.
Disagree. I'm not learning in order to generate more money for myself or whatever; I'm learning because the process of learning is fun, and I want to be able to build games myself. An LLM will never be able to replace that, as part of the fun is that I'm the one doing it.
I have personally found the rubber-ducking to be really helpful, especially for more exploratory work. I find myself typing "So if I understand correctly, the code does this this and this because of this" and usually get some helpful feedback.
It feels a bit like pair programming with someone who knows 90% of the documentation for an older version of a relevant library: definitely more helpful than me by myself, and with somewhat less communication overhead than actually pairing with a human.
>Yeah, you can tell relatively easily if it's bullshitting and making things up, if you're paying any sort of attention to what it tells you.
It's trained on generating the most likely completion to some text, it's not at all easy to tell if it's bullshitting you if you're a newbie.
Agreed that I was condescending and dismissive in my reply. I've been dealing with people trying to use ChatGPT to get a free lunch without understanding the problem recently, so I just assumed the same here. My bad.
> It's trained on generating the most likely completion to some text, it's not at all easy to tell if it's bullshitting you if you're a newbie.
I don't think many people (at least not myself or others I know who use it) treat GPT-4 as a source of absolute truth; it's more of an "iterate together until we reach a solution" process, taking everything it says with a grain of salt.
I wouldn't make any life-or-death decisions based on just a chat with GPT-4, but I can use it to look up specific questions and surface more information that then gets verified elsewhere.
When it comes to making games (with Rust), it's pretty easy to verify when it's bullshitting as well. If I ask it to write a function, I copy-paste the function and either it compiles or it doesn't. If it compiles, I test it out in the game, and if it works correctly, I write tests to further solidify my own understanding and verify that it works correctly. Once that's done, even if I have no real idea what's happening inside the function, I know how to use it and what to expect from it.
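As a concrete sketch of that workflow (hypothetical code of my own, not something from an actual chat): say GPT-4 hands back a small geometry helper. The compiler catches one class of errors, and a cheap unit test against a case I can reason about catches most of the subtle ones, even if the internals stay opaque to me.

    // Hypothetical LLM-suggested helper: rotate a 2D point around the
    // origin by `angle` radians. The name and signature are my own.
    fn rotate_point(x: f32, y: f32, angle: f32) -> (f32, f32) {
        let (sin, cos) = angle.sin_cos();
        (x * cos - y * sin, x * sin + y * cos)
    }

    #[cfg(test)]
    mod tests {
        use super::*;

        #[test]
        fn quarter_turn_maps_x_axis_to_y_axis() {
            // No need to re-derive the rotation matrix to check this:
            // a 90-degree turn must send (1, 0) to roughly (0, 1).
            let (x, y) = rotate_point(1.0, 0.0, std::f32::consts::FRAC_PI_2);
            assert!(x.abs() < 1e-6);
            assert!((y - 1.0).abs() < 1e-6);
        }
    }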
> Sounds like you need to learn to search. There's tons of resources on game dev.
I have been making games in Flash, HTML5, Unity, and for classic consoles using ASM (NES / SNES / Game Boy): tons of resources are WRONG, tutorials are incomplete, engines are buggy, answers you find on Stack Overflow are outdated, and even official documentation can be littered with gaping holes and unmentioned gotchas.
I have found GPT incredibly valuable when it comes to spitting out exact syntax and tons of lines that I otherwise would have spent hours and hours writing, combing through dodgy forum posts, arrogant SO douchebags, and the questionable word salad that is the "official documentation"; it just does it instantly. What a godsend!
> you won't know when it's bullshitting you, and you're missing out on learning how to actually learn.
Have you tried ...compiling it? You can challenge, question, and iterate with GPT at a speed that you cannot with other resources: I doubt you're better off combing through pages and pages of Ctrl+F'd PDFs / giant repositories, or hunting for Just The Right Google Query to get exactly what you need on page 4. GPT isn't perfect, but god damn, it is a hell of a lot better and faster than anything that has ever existed before.
> whatever you're learning is probably irrelevant, since it can be solved better by an LLM.
Not true. It still makes mistakes (as of Apr '23) and still needs a decent bit of hand-holding. Can / should you take what it says as fact? No. But honestly, my experience says I can say that about any resource.
>I have found GPT incredibly valuable when it comes to spitting out exact syntax and tons of lines that I otherwise would have spent hours and hours writing, combing through dodgy forum posts, arrogant SO douchebags, and the questionable word salad that is the "official documentation"; it just does it instantly. What a godsend!
IMO if you're learning from GPT you have to double-check its answers, and then you have to go through the same song and dance. For problems that are well documented, you might as well start with those resources. If you're struggling with something, how do you know it's not bullshitting you? Especially for learning: I can see "copy-paste and test if it works" flying if you need a quick fix, but for learning I've seen it give right answers with wrong reasoning and wrong answers with right reasoning.
I'm not disagreeing with you on the code part; my no. 1 use case right now is bash scripting / short scripts / tedious model translations, where it's easy to provide all the context and easy to verify the solution.
I'd disagree on the fastest-tool part; part of the reason I'm not using it more is that it's so slow (and responses are full of pointless fluff that eats tokens, even when you ask it to be concise or give code only). Iterating on nontrivial solutions is usually slower than writing them out on my own (depending on the problem).
Funny enough, I'd been wanting to learn some assembly for my M1 MacBook, but had given up after my attempts at googling for help ran into really basic issues; I was just messing around, and had plenty of actually productive things to work on.
A few sessions with ChatGPT sorted out various platform specific things and within tens of minutes I was popping stacks and conditionally jumping to my heart’s delight.
Yup, ChatGPT is, paradoxically, MOST USEFUL in areas you already know something about. It's easy to nudge it (chat) towards the actual answers you're looking for.
Nontrivial problem solutions are wishful-thinking hallucinations. E.g., I ask it for some way to use AWS service X and it comes up with a perfect solution, which I spend 10 minutes desperately trying to uncover, only to find out that it doesn't exist and I've wasted 15 minutes of my life. "Nudging it" with follow-ups about how its described solution violates some common patterns on the platform just makes it double down on its bullshit by inventing other features that would support the functionality. It's worst when what you're trying to do can't really be done within the constraints specified.
It gives out bullshit reasoning and code. E.g., I wanted it to shorten some function I spitballed, and it made the code both subtly wrong (by switching to an unordered collection) and slower (switching from a list to a hash map with no benefit). And then it even claims its solution is faster because it avoids allocations! (Where my solution was adding a new KeyValuePair to the list, which is a value type and doesn't actually allocate anything.) I can easily see a newbie absorbing this BS; you need background knowledge to break it down. Or, another example: I wanted to check the rationale behind some lint warning, and not only was it off base, it even stated some blatantly wrong facts in the process (like the default equality comparison in C# being ordinal-ignore-case???).
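To make that list-vs-map trap concrete, here's a Rust analogue I'm sketching myself (the original was C#): pairs pushed onto a vector keep their order and live inline in one buffer, while "optimizing" into a hash map silently drops the ordering and adds hashing work per insert.

    use std::collections::HashMap;

    fn main() {
        // Pairs pushed into a Vec stay in insertion order, and the tuples
        // live inline in the Vec's buffer: no per-element allocation.
        let mut ordered: Vec<(&str, u32)> = Vec::new();
        ordered.push(("first", 1));
        ordered.push(("second", 2));

        // Rewriting this as a HashMap silently changes the semantics:
        // iteration order is unspecified, so any caller relying on
        // insertion order is now subtly broken, and each insert hashes.
        let mut unordered: HashMap<&str, u32> = HashMap::new();
        unordered.insert("first", 1);
        unordered.insert("second", 2);

        for (k, v) in &ordered {
            println!("{k} = {v}"); // deterministic order
        }
        for (k, v) in &unordered {
            println!("{k} = {v}"); // order varies from run to run
        }
    }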
In my experience working with junior/mid team members, the amount of half-assed, seemingly-working solutions I've had to catch in PR review over the last couple of months has increased, a lot (along with "shrug, ChatGPT wrote it").
Maybe in some areas, like ASM for a specific machine, there's not a lot of newbie-friendly material and ChatGPT can grok it correctly (or it's easy to tweak the outputs because you know what they should look like), but that's not the case for gamedev. Like, there are multiple books titled "math for game developers" (the OP's use case).
Oh, ChatGPT is terrible at actually doing things with ASM in general. It was just good at the boilerplate.
If anyone can get ChatGPT to write the ASM to reverse a string, please show me an example! I’m still having to get out a pad and paper or sit in lldb to figure out how to do much of anything in ASM, same as it has always been!
With respect to writing I've used it for things I know enough to write--and will have to look up some quotes, data, etc. in any case. GPT gives me a sort of 0th draft that saves me some time but I don't need to check every assertion to see if it's right or reasonable because I already know.
But it doesn't really solve a business problem for me. Just saves some time and gives me a starting point. Though on-the-fly spellchecking and, to a lesser degree, grammar checking help me a lot too--especially if I'm not ultimately going to be copyedited.
> By the time LLMs are reliable enough to teach you, whatever you're learning is probably irrelevant, since it can be solved better by an LLM.
For the really common problem of working in a new area, LLMs being unreliable isn't actually a big deal. If I just need to know what some piece of math is called, or understand how to use an equation, it's often very easy to verify an answer but hard to find it through Google. I might not know the right terms to search for, or my only options might be hard-to-locate documentation or SEO spam.
This is fair; using it as a starting point for learning could be useful if you're ready/able to do the rest of the process. Maybe I was too dismissive because it read to me like the OP couldn't do that and thought they'd found the magic trick to skip that part.
I don't have a particularly big problem with math at the level that AIs tend to be useful for, and I find that it tends to hallucinate if you ask it anything moderately difficult.
There's sort of a narrow band where, if you ask it for something fairly common but moderately complicated, like a translation matrix, it usually can come up with it and can write it in the language you specify. But guarding against hallucinations is almost as much trouble as looking it up on Wikipedia or something and writing it yourself.
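For reference, this is the kind of ask I mean (my own sketch in Rust, not model output): a homogeneous translation matrix, where the thing you still have to eyeball is the convention.

    // 4x4 homogeneous translation matrix, row-major, with the translation
    // vector (tx, ty, tz) in the last column. This is the standard
    // textbook form; the row- vs column-major convention is exactly the
    // kind of detail a hallucinated answer tends to get wrong.
    fn translation_matrix(tx: f32, ty: f32, tz: f32) -> [[f32; 4]; 4] {
        [
            [1.0, 0.0, 0.0, tx],
            [0.0, 1.0, 0.0, ty],
            [0.0, 0.0, 1.0, tz],
            [0.0, 0.0, 0.0, 1.0],
        ]
    }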
The language model really needs to be combined with the hard rules of arithmetic/algebra/calculus/dimensional analysis/etc. in a way that it can't violate them by just mashing up some equations it's been trained on, even when the result is absolute nonsense.
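As a toy illustration of what "hard rules it can't violate" could look like (my own sketch, not an existing tool): encode the dimensions in the type system, so a mashed-up equation that mixes units simply doesn't compile.

    // Newtype units: the compiler enforces dimensional analysis, so a
    // generated formula that adds meters to seconds is rejected outright.
    #[derive(Clone, Copy, Debug)]
    struct Meters(f64);
    #[derive(Clone, Copy, Debug)]
    struct Seconds(f64);
    #[derive(Clone, Copy, Debug)]
    struct MetersPerSecond(f64);

    fn velocity(d: Meters, t: Seconds) -> MetersPerSecond {
        MetersPerSecond(d.0 / t.0)
    }

    fn main() {
        let v = velocity(Meters(100.0), Seconds(9.58));
        println!("{v:?}");
        // let nonsense = Meters(1.0) + Seconds(1.0);
        // ^ type error: no `Add` impl mixes these units, so it won't compile.
    }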
This vibes with my experience as well. In terms of actual long term value, this kind of educational exploration is promising. It acts, in a way, as a domain expert who understands your language and can help you orient yourself in a domain which is otherwise difficult to penetrate.
I also am happy that language-as-gate-keeping, which is plaguing so many fields both in academia and business, is going to quickly be “democratized”, in the best sense of the word. LLMs can help you decipher text written in eg law speak, and it also can translate your own words into a form that will get grand poobah to take you seriously. Kind of like a spell/grammar checker on steroids.