The more people you have, the more process you need. With 5 people, the same person usually does the same things, or they've watched whoever usually does it. So they do it the same way, know the tricks, and know who to ask if it goes wrong. The working environment is small and communication is fluid, so people just know what goes on and how it works.
As you get larger you get exactly what you're seeing: not everyone is in the meetings, knowledge in one person's head isn't shared, and people don't know who to ask.
I worked on a simple barcode database years ago. Originally it was just a spreadsheet one person 'owned'. Then I made it into a webapp that I and the original owner could both add items to, enforcing unique entries and validating checksums (plus an API I could query). More process, but a lot was still in our heads. When a 3rd person started using it, they added an entry in lower-case (we always wrote refs in caps) and a variety of case-sensitivity bugs surfaced. I had to fix the DB manually and started adding hints and validation.
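For flavour, the fix amounted to normalising refs before they hit the DB plus actually checking the barcode's check digit. A minimal sketch of that kind of validation (the function names and the EAN-13 format are my assumptions here, not what the real app used):

```python
def normalize_ref(ref: str) -> str:
    """Uppercase and strip a reference so ' ab-101 ' and 'AB-101' collide."""
    return ref.strip().upper()

def ean13_is_valid(code: str) -> bool:
    """Validate an EAN-13 barcode's check digit."""
    if len(code) != 13 or not code.isdigit():
        return False
    digits = [int(c) for c in code]
    # EAN-13: from the left, odd positions weigh 1, even positions weigh 3.
    checksum = sum(d * (3 if i % 2 else 1) for i, d in enumerate(digits[:12]))
    # Check digit is whatever brings the weighted sum up to a multiple of 10.
    return (10 - checksum % 10) % 10 == digits[12]
```

Boring code, but it's exactly the stuff that lives in people's heads until user number 3 arrives.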
The more people you have using something the more problems you will have and the less feasible it is to walk everyone through the system in person!
Write documentation, make process, make efforts to share knowledge. There's no way around it.
What's the best car? If you're trying to go fast it's one answer, if you're trying to carry as much load as possible it's another, if you're buying for your just-qualified teen it's another. But best is obviously subjective, so what about safest? I don't know the specifics, but the "safest" car in the EU would be very different from the "safest" in the US, because their safety studies measure very different things.
Which is the issue with almost all studies and statistics: what a result means depends entirely on what you're measuring.
I can program very, very fast if I only consider the happy path, hard-code everything and don't bother with things like writing tests, defining types or worrying about performance at expected scale. It's all much faster right up until the point it isn't, and then it's much slower. AI isn't quite so obviously bad, but it can still hide long-term problems behind short-term gains, and the long term is what studies tend to focus on, as the short term doesn't usually require a study to observe.
I think AI is similar to outsourcing staff to cheaper countries, replacing ingredients with cheaper alternatives and other MBA-style ideas. It's almost always instantly beneficial, but the long-term issues are harder to predict, and can have far more varied outcomes depending on weird specifics of the business.
But the AI has the work to derive from already. I just went to Gemini and said "make me a picture of a cartoon plumber for a game design".
Based on your logic the image it made me of a tubby character with a red cap, blue dungarees, red top and a big bushy mustache is not a derivative work...
(Interestingly, asking it to make him some friends got me more 'original' ideas, but asking it to give him a brother... I can hear the big N's lawyers writing a letter already.)
So long as you can use the slow port for charging, I think it’s an entirely tolerable trade-off. Remember, this is a machine for people with low technical requirements. It’s not a machine for someone who needs lots of high speed ports.
Outside of packages, I doubt many of my codebases would fit into this. But the individual domain areas would. I don't care about users in an orders context, I don't care about payments when dealing with imports, and there's no reason an AI should care either. It shouldn't care about implementations if there's an interface referenced, it shouldn't worry about the front end when it's dealing with the back, etc.
Scoping the AI to only the things you'd use seems far wiser than shrinking your codebase so it can look at the whole thing when 90% of it is irrelevant.
The problem with that is that the benefit of inspiring children does little to nothing for the business, while the risk of frivolous but expensive legal action, because you decide you should get millions for inventing the self-service checkout, is not insignificant.
I'd suspect many places would still respond positively though, especially in the more creative worlds. Almost every creative was that kid once.
Different type of creator, different type of bugs. I'd assume a human giving me a way to delete merged branches has probably had the same issue, solved the same problem and understands unspecified context around it (e.g. protect local data). They probably run it themselves, so bugs are most likely to occur in edge cases around non-standard use, as it works for them.
AIs give you what they get from common patterns, parsed documentation, etc. Depending on what you're asking, this might be an entirely novel combination of commands never run before. And depending on the model/prompt, it might solve it in a way any human would balk at (push main to origin, delete .git, re-clone from origin. Merged local branches are gone!)
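For contrast, here's roughly what I'd expect the human version of that tool to look like: ask git which local branches are already merged, then delete only those, with the flag that refuses to throw away unmerged work. A sketch, assuming Python and a branch called main (names are mine, not any particular project's):

```python
import subprocess

def merged_branches(base: str = "main") -> list[str]:
    """List local branches git considers fully merged into `base`."""
    out = subprocess.run(
        ["git", "branch", "--merged", base, "--format=%(refname:short)"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [b for b in out.splitlines() if b and b != base]

def delete_merged(base: str = "main") -> None:
    for branch in merged_branches(base):
        # `-d` (not `-D`) makes git refuse to delete unmerged work:
        # that's the "protect local data" context a human bakes in.
        subprocess.run(["git", "branch", "-d", branch], check=True)
```

Nothing clever, but every line encodes a lesson someone learned the hard way, which is exactly what the common-pattern remix can't guarantee.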
It's like the AI art issues: people struggle with relative proportions and tones and making it look real. AI has no issue with tones, but will add extra fingers or arms, which humans rarely struggle with. You have to look for different things, and AI bugs are definitely more dangerous than (most) human bugs.
(Depends a little, it's pretty easy to tell if a human knows what they're talking about. There's for sure humans who could write super destructive code, but other elements usually make you suspicious and worried about the code before that)
A utility meant for viewing data? I don't think you understand what a text editor is.
I'd agree that recent features feel a bit unnecessary, but it does need to edit and write files - including system ones (going through however that is authorised). You could sandbox a lot of apps with limited impact, but it would make a text editor really useless. Least privilege principles work best when you don't need many privileges.
I’m not sure I understand what you’re trying to say. You could always edit system files with notepad; that was something the program always excelled at thanks to its simplicity in both how it looked and behaved. And I fail to see the new features as anything but useless bloat.