Prison should either remove you from society because you don't fit (e.g. having acted on a liking for killing people), or provide forced education/correction of social behavior.
Do you think those devices boot every time you send them an input, or just when they're plugged in? Unless you're turning the TV on and off thousands of times per day this would be unnoticeable.
Certain systems on a vehicle make sense to power off. Do you really need the unit controlling your backup camera to be booted into standby while the car is turned off?
No, but I'm not taking thousands of six-inch family vacations per day, I'd walk if I wanted to travel a distance for which two milliseconds was noticeable.
I think you're mixing (bad) analogies to make some type of point but I'm not sure what it is.
The original comment you replied to mentioned cars in the context of booting devices often, so I simply pointed out that yes, actually it makes sense to boot up some devices when they go to receive input and not to leave them in standby as you suggested. In a vehicle that may sit off/idle for several weeks, it's best not to drain the battery unnecessarily.
Do we? And... do we know that we can make it significantly faster? To wit, will the code for a smarter sort be bigger, such that that itself causes problems? Or, even with a faster sort, does it still take 1.6ms?
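Those questions are empirically answerable; a throwaway micro-benchmark settles them faster than speculation. A minimal sketch (the workload size here is a made-up stand-in, since the original data isn't given):

```python
import random
import timeit

# Hypothetical stand-in workload; substitute the real data being sorted.
data = [random.random() for _ in range(10_000)]

# Average wall time per call of the current (built-in) sort.
per_call = timeit.timeit(lambda: sorted(data), number=100) / 100
print(f"built-in sort: {per_call * 1e3:.3f} ms per call")
```

If a candidate "smarter" sort can't beat this baseline on representative input, the code-size question is moot.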
For policy setting that's the wrong question. The real ones are: how likely is it to happen, how much would training/comms change that, and how acceptable is that risk. Their answers seem to have been: very, not enough, it's not.
Where the problem is and how to change it is an interesting topic, but likely irrelevant for the decision.
My company is huge (100k employees), and luckily they also see it as critical and will make it centrally available to us soon.
But this will be a problem for big companies; small ones normally care less about these types of things. This means big companies have to do something, otherwise they will be competing against ML-enhanced developers.
It's a significant advantage for developers to ask an AI to solve technical problems, get the right answer right away, and move on to the next task, versus scratching your head for the next five hours wondering "why doesn't it work?".
I think an equal advantage lies in getting multiple approaches fleshed out, even if the answer isn't right, as in, it may not compile as-is. That alone is a sufficient advantage, because most of a developer's time isn't spent typing code but figuring out how to design/combine things.
E.g. I was working on a rust problem and asked ChatGPT for a solution. What it provided didn't compile (incorrect functions etc) but it provided me information in terms of the crates to use, general outline of the functionality and the approach to combine them - that proved to be more than enough for me to get going (to be clear, I didn't blindly copy the code; I understood it first and wrote tests for the finished product). I think that is where the real advantage lies. I see it as an imperfect but very powerful assistant.
Have you ever tried to solve a difficult technical problem with ChatGPT (any version)? I have. It worked about as well as Google search. Which is to say, I had to keep refining my ask after every answer it gave failed, until it ultimately told me my task was impossible. Which isn't strictly true, as I could have contributed changes to the libraries that weren't able to solve my problem.
Funny enough, the answers gpt4 gave were basically taken wholesale from the first Google result from stackoverflow each time. It's like the return of the I'm Feeling Lucky button.
There are many developers who are unable to do that and need to be spoon-fed. This is the market for ChatGPT and that's why they heavily promote it.
I doubt though that corporations that employ these developers will have any advantage. To the contrary, their code bases will suffer and secrets will leak.
This is very much my experience too. Occasionally ChatGPT can give me something quickly that I wasn't able to find quickly (because I was looking in the wrong place, likely). But most of the time, it's just a more interactive (and excessively verbose) web search. In fact, search tends to be more optimized for code problems; I can scan results and SO answers much faster than I can scan a generated LLM answer.
Use the edit button if you get an incorrect answer. The whole conversation is fed back in as part of the prompt so leaving incorrect text in there will influence all future answers. Editing creates a new branch allowing you to refine your prompt without using up valuable context space on garbage.
That doesn't change anything in terms of the flow. It's still refining the input over and over. This is exactly how searching on google works as well.
For the case I last tested, there was no correct answer. I asked it to do something that is not currently possible within the programming framework I asked it to use. Many people had tried to solve the problem, so ChatGPT followed the same path, as that's what was in its data set, and provided solutions that did not actually solve the problem. There wasn't any problem with the prompts; it's the answers that were incorrect. Having those initial prompts influence the results was desired (and usually is, imo).
I haven't actually seen that advantage in action. That is, I haven't seen a case where an LLM has actually given a solution right away for a problem that would have stumped a dev for multiple hours.
In my workplace, two devs are using chatgpt -- and so far, neither has exhibited an increase in productivity or code quality.
That's a sample size of two, of course, so statistically meaningless. But given the hype, I expected to see something.
ChatGPT is a godsend for junior developers, but it isn't great at providing coherent answers to more complex or codebase-specific questions. Maybe that will change with time, but right now it's mostly useful as a learning aid.
For complex codebases it’s better to use Copilot, since it does the work of providing context to GPT for you. Copilot X will do a lot more, but it’s still waitlist signup. You could hack something together yourself using the API. The quickest option is just to paste the relevant code into the chat along with your prompt.
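For the "hack something together yourself" route, the gist is just assembling the relevant code into the prompt before calling the API. A rough sketch, assuming the older `openai` Python client (where `ChatCompletion.create` was the chat endpoint); the snippet and question are placeholders, and the actual call is commented out since it needs an API key:

```python
# Assemble a chat prompt that carries the relevant code as context.
def build_messages(code_snippet: str, question: str) -> list[dict]:
    """Return a messages list embedding the code the model should see."""
    return [
        {"role": "system",
         "content": "You are a senior engineer reviewing code."},
        {"role": "user",
         "content": f"Given this code:\n```\n{code_snippet}\n```\n{question}"},
    ]

messages = build_messages("def add(a, b): return a + b",
                          "Is this correct for floats?")

# Requires an API key and the `openai` package:
# import openai
# reply = openai.ChatCompletion.create(model="gpt-4", messages=messages)
print(messages[1]["content"])
```

Tools like Copilot automate exactly this step: deciding which slices of the codebase to stuff into the context window.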
I tend to use both. Copilot is vastly better for helping scaffold out code and saves me time as a fancy autocomplete, while I use ChatGPT as a "living" rubber duck debugger. But I find that ChatGPT isn't good at debugging anything that isn't a common issue that you can find answers for by Googling (it's usually faster and more tailored to my specific situation, though). That's why I think it's mostly beneficial in that way to junior devs. More experienced devs are going to find that they can't get good answers to their issues and they just aren't running into the stuff ChatGPT is good at resolving because they already know how to avoid it in the first place.
And this gets into why companies are banning it, at least for the time being; developers and especially junior developers in general think nothing of uploading the sum total contents of the internal code base to the AI if it gets them a better answer. It isn't just what it can do right this very second that has companies worried.
It isn't even about the AI itself; the problem is uploading your code base or whatever other IP anywhere not approved. If mere corporate policy seems like a fuddy-duddy reason to be concerned, there's a lot of regulations in a lot of various places too, and up to this point while employees had to be educated to some extent, there wasn't this attractive nuisance sitting out there on the internet asking to be fed swathes of data with the promise of making your job easier, so it was generally not an issue. Now there is this text box just begging to be loaded with customer medical data, or your internal finance reports, or random data that happen to have information the GDPR requires special treatment for even if that wasn't what the employee "meant" to use it for. You can break a lot of laws very quickly with this textbox.
(I mean, when it comes down to it, the companies have every motivation for you to go ahead and proactively do all the work to figure out how to replace your job with ChatGPT or a similar technology. They're not banning it out of fear or something.)
It depends heavily on what you do. When working with proprietary/non-public software stacks, or anything requiring knowledge of your internal codebase, ChatGPT is of little help.
"On two occasions I have been asked, 'Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?' I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question."
A great quote, but not applicable. The problem with LLMs is that they are non-deterministic: you can ask exactly the right question and still get a wrong answer.
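A toy illustration of where that non-determinism comes from: chat models typically sample from a temperature-scaled softmax over candidate tokens, so the same prompt (same logits) can yield different outputs from run to run. The logit values here are made up:

```python
import math
import random

def sample(logits, temperature=1.0):
    """Draw one token index from a temperature-scaled softmax over logits."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(range(len(logits)), weights=probs, k=1)[0]

# Three near-tied candidate tokens: repeated runs pick different ones.
logits = [2.0, 1.9, 0.5]
draws = {sample(logits) for _ in range(50)}
print(draws)  # typically more than one distinct token
```

Greedy decoding (temperature approaching zero) would be deterministic, but hosted chat endpoints generally sample, which is why the Babbage framing of "wrong figures in" doesn't cover the failure mode.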
Use ChatGPT in a domain you're a relative expert in and you run into a million scenarios where it offers a "solution" that will do something close to what was described, but not quite - and even as a domain expert you might not immediately notice the problem. Even worse, it may produce side effects suggesting it is working as desired when it's not.
In the not-so-secret world of Stack Exchange copy pasta, other skilled humans would point these issues out. In the world of LLMs, you risk introducing ever more code that looks perfectly correct, but isn't. What happens at scale?
The net change in efficiency from LLMs will be quite interesting to see. Because unlike past technologies, where there was only user error, here we're dealing with a calculator that will not infrequently give you an answer that's wrong but looks right. What sort of 'equilibrium' people will settle into with this is still an open question.
> This means big companies have to do something, otherwise they will be competing against ML-enhanced developers.
It's not that bad. In reality if we're looking at large tech companies, they've got senior people who know pretty much anything you want available within minutes/hours - which is something small companies just can't afford.
ML-enhanced devs may be a little bit faster and get some usually-correct help, but they won't get any wisdom.
I think the AI driven development story is more about leverage than it is about wisdom. Leverage from AI is derived from being able to do more with fewer people.
In my experience the great slowdown of growing companies comes from hitting the communication barrier on their products - the point at which the majority of effort is spent coordinating work rather than doing work. I find that the path from majority focus on product to majority focus on coordination isn't linear, but rather more of a watershed. One day you are 80/20, then seemingly overnight, after some growth, you are 20/80 the other way and never look back.
The advantage of being on the right side of that watershed is that you can maintain velocity and agility. Not only can you iterate quickly, but you're in a better position to change course and rebuild as needed. The left hand and the right hand require little effort to coordinate and get it done.
Larger companies live and die on their ability to either find a moat large enough to protect them, or build organizational structures that let them keep scaling. It takes decades to get the culture and processes right and baked in across the board for a large company to be able to maintain any velocity and reinvent itself.
This is where the gap is. Being small is easy, you simply don't have the coordination problems. But the moment you hit success and need to grow, you immediately are at a disadvantage compared to the big incumbents who have had decades to refine their coordination systems.
To the extent that AI can provide more leverage to smaller companies, allowing them to "grow" without actually crossing that coordination watershed, they will be in a much better position to take on the incumbents.
"I think the AI driven development story is more about leverage than it is about wisdom."
One of the things I'm going to be looking for over the next few years is whether extensive use of AI assistance will enhance the development of "wisdom" or inhibit it.
I suspect the latter, based on our existing experiences with leaning too much on help, but only time will tell. If it accelerates the development of this wisdom, it will be an invaluable tool; if it inhibits it, it will be the career equivalent of taking hard drugs: fun now, deadly over the long term. I'd advise those who are dabbling with it now to 1. keep an eye on whether or not your own skills are developing and 2. consider whether there's a way to use the tool such that your own skills do continue to develop.
My company just got its own instance which prevents their data from being exposed outside of that container, so it won’t be used for (public) training. Surely companies like Apple can do the same.
If your instance is hosted by Microsoft, I doubt Apple would do the same.
My suspicion is that a company like Apple would want it on-prem, or not at all. A hosted instance with a pinky promise not to peek is not attractive to a lot of large businesses out there.
Your material point may still be valid. Apple could buy it, assuming MS was willing to give Apple a full copy and let them run it internally. (This would require divulging the details of the model to Apple, so I have no clue whether either of them is interested in dealing on that basis. It would seem like a good deal to me if MS could get the right price from Apple, but people a lot smarter than me make that call.)
MS could certainly make a version that runs on-prem (while still restricting inspection of the model), if it wanted to. Whether it does or not, remains to be seen.
Not sure how you would do that. As soon as you load the model onto the card, all the weights are visible. You need a bit more than that, but not much, to reverse engineer the whole thing.
If I had to bet, they won't allow non hosted instances of a model until they are well on their way to completing the next model. That's just my gut feeling. But again, smarter people make those calls.
I wouldn't worry about that. Our software contains so much natural stupidity that artificial intelligence, even if it existed, wouldn't have a chance in hell.
I hear your killer features and don't care for any of them enough to carry a tiny computer around for them.
I only need real navigation on holiday, and I'm certainly not taking some expensive pocket computer when I travel, neither when hiking nor in a foreign country. I just print out the directions from MapQuest. K.I.S.S.
And all the other info? I don't even use my desktop PC for that. Why would I carry an expensive handheld computer to compare the price of a €3 product?
>I only need real navigation on holiday, and I'm certainly not taking some expensive pocket computer when I travel, neither when hiking nor in a foreign country. I just print out the directions from MapQuest. K.I.S.S.
This is actually what happened historically. Various pocket devices existed for navigation (e.g. from Garmin), but they never got very popular until the smartphone took off for entirely different reasons. Only after smartphones became really common were they reused for existing applications where the benefit existed but wasn't that big compared to the alternatives. AR/VR will need a killer application first; only then might it be reused for navigation.