Not necessarily: C#'s incremental source generators mean you could simply slap an attribute onto a class you want the non-boxing pattern for, and the generator will emit the pattern for you.
Yeah I also don't really write commit messages. If your pull request becomes associated with that commit, and the history gets squashed, then that one commit becomes a link to the pull request where all the necessary info is. I just write "commit" for all my messages.
On my local copy of the repo, commits are notes to myself. I don't use the `--message` switch; I let git bring up my $EDITOR, where I type what I did since the last commit. This helps when I'm writing the PR description and when I'm rebasing the branch on top of the main trunk. And sometimes I need to do a bit of git-fu and split the changes into different PRs. That's hard to do with generic messages.
But I use magit, and I can commit specific lines and hunks as easily as files. That helps with grouping changes meaningfully.
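A minimal sketch of that workflow in a throwaway repo (file names and messages here are purely illustrative): each local commit carries a note about one logical change, which makes it easy later to find and cherry-pick just that change into its own branch/PR.

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q repo && cd repo
git config user.email "you@example.com"
git config user.name "you"

# First logical change, with a message that's a note to yourself, not "commit"
echo 'handle empty input' > parser.txt
git add parser.txt
git commit -q -m "parser: handle empty input"

# Second, unrelated logical change
echo 'add /health' > api.txt
git add api.txt
git commit -q -m "api: add /health endpoint"

# With meaningful messages, splitting the api change into its own PR is
# just a matter of spotting it in the log and cherry-picking it:
git log --oneline
```

With generic "commit" messages, the log above would give you nothing to grep for when untangling a branch.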
Agreed, the thing I'd be most interested in is the isolated execution environment you mentioned. Agents running on autopilot are powerful. Agents running unsupervised on a machine with developer permissions and certificates, where anything could influence the agent to act on an attacker's behalf, are terrifying.
I recommend running the agent harness outside of the computer. The mental model I like to use is the computer is a tool the agent is using, and anything in the computer is untrusted.
Kind of. The chat logs of the agent are trustworthy, as is any telemetry you have on it or coming out of the VM. Its behavior should be treated as probabilistic and therefore untrustworthy.
It’s untrustworthy because its context can be poisoned and then the agent is capable of harm to the extent of whatever the “computer” you give it is capable of.
The mitigation is to keep what it can do to “just the things I want it to do” (e.g. branch protection and the like, whitelisted domains/paths). And to keep all the credentials off its box and inject them inline as needed via a proxy/gateway.
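A minimal sketch of the credential-injection half of that, assuming outbound requests from the agent's box pass through a gateway you control; all names here (`ALLOWED_HOSTS`, `SECRETS`, `gateway`) are illustrative, not a real library.

```python
from urllib.parse import urlparse

# Whitelisted domains the agent may reach; everything else is blocked.
ALLOWED_HOSTS = {"api.github.com", "pypi.org"}

# Credentials live on the gateway, never on the agent's box.
SECRETS = {"api.github.com": "token <redacted>"}

def gateway(url: str, headers: dict) -> dict:
    """Return headers for an outbound request, with credentials injected
    inline, or raise if the destination is not whitelisted."""
    host = urlparse(url).hostname
    if host not in ALLOWED_HOSTS:
        raise PermissionError(f"blocked outbound host: {host}")
    out = dict(headers)
    if host in SECRETS:
        # The agent never sees this value; the gateway adds it in transit.
        out["Authorization"] = SECRETS[host]
    return out
```

The point of the design is that a poisoned agent can, at worst, make whitelisted calls with scoped credentials it never possessed, rather than exfiltrate a token sitting in its environment.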
I mean, that’s already something you can do for humans also.
I would recommend not giving an agent the full run of any computing environment. Do you handle fine-grained internet access controls and credential injection like OpenShell does?
I used to believe this, but I think the next generation of agents is much more autonomous and just needs a computer.
The work of a developer is open ended, so we use a computer for it. We don't try to box developers into small granular screwdrivers for each small thing.
That's what's coming to all agents: they might want to run some analysis with Python, generate a website/document in TypeScript, and store data in markdown files or in MongoDB. I expect them to get much more autonomous and, with that, to end up just needing computers like us.
The difference is that I am not always legally liable for what a rogue developer does with their computer - if I had no knowledge of what they were up to and had clear policies they violated then I'm probably fine. But I'm definitely always liable for anything an agent I created does with the computer I gave it.
And while they are getting better I see them doing some spectacularly stupid shit sometimes that just about no person would ever do. If you tell an agent to do something and it can't do what it thinks you want in the most straightforward way, there is really no way to put a limit on what it might try to do to fulfill its understanding of its assignment.
That is not the same situation. Writing is a thing we do to communicate with other people, and to engage our own thinking. It's creative, it's exploratory, and it's a human-to-human practice. It is a top-level abstraction. The only higher you could possibly go is beaming your thoughts directly into someone else's brain.
Also it irks me to compare writing to a calculator's log function or a self-driving car. There are absolute correct/perfect outcomes in those situations (the log function produces the correct number, the car drives you to your destination without injury or unnecessary danger). That is not the same for most things AI is attempting to be used for.
Creating graphic art is also a form of communication, but Procreate makes it easier, even for novices, to create amazing art. Consider an aircraft: the pilot is given just a few knobs to fly the plane, yet it still takes you from one location to another. The aircraft is far more complex than the knobs suggest, but we can hide most of that complexity underneath them, assuming happy-path flights most of the time.

The higher abstraction I am talking about is the future jargon itself. AI will allow us to create far more complex stories. Imagine one complex piece of jargon represented by a Mandelbrot fractal (to paint a picture of the complexity involved), another represented by a burning ship fractal. What kind of operations can I do with these two complex ideas? Can I explore a complex conceptual space with them? We would just tell the AI to subtract one fractal from the other, and it would handle the details (the definitions, references, and related ideas, in a free-form manner). This is exploration itself. Procreate gives you brushes. AI gives you something similar in conceptual space.
> Let's have the robots do all the hard work and then share the wealth with everyone
That sounds fantastic, except that in our capitalist economy the wealth will not be shared by everyone, and will instead be funnelled directly to the tech oligarchy, while workers get laid off. Until we fix that part of the equation, innovations to efficiency will continue to result in working people getting screwed over by technological innovation.
Sure, but this entire thread is about hypotheticals anyway.
It will be equally politically difficult to ban AI as it would be to grab some of the wealth generated by AI for the exact same reasons - either attempt would be fought against by the same tech oligarchs, for the same reason. To protect their money.
If we are going to have to fight them anyway, let's fight for the one where we don't have to work jobs that could be done by computers instead, while still having the same income.
And we don't have to get rid of capitalism entirely to spread the wealth. UBI can be used in a capitalist society, too.
The woodworking analogy for AI is not "power tools vs. handsaw", it's "power tools vs. wood 3D printer". You don't do any of the creating; you only ideate and let the machine do all the creating. It's simply not woodworking anymore. It's something else entirely.
I cannot choose entirely hand-made code, I don't think I'll even be able to choose 50% hand-made code, because my manager will say "why aren't you just using the 3D Wood Printer 9000? Jeff is building house frames 5x faster than you, you need to get with the program or we're gonna let you go"
So what? If I want a cabinet, I care about the final product. Power tools, 3D printer, I don't care. If it gets done faster and cheaper and is good enough for my needs, then that's what I want. Someone else is free to pay for the costly artisanal approach if they want to.
This is exactly how I feel. I knew it already to an extent from my time in college, but so many people come into this industry because they want to be able to produce the end product, or just have a stable job that makes good money. Neither of those are bad reasons to get into this profession, but it does make me sad how few peers I have who do programming because they're passionate about the act of programming. The problem solving, the dance of using programming languages to communicate efficiently and robustly to both machines and humans... I'm very sad how enthusiastically so many of my peers just toss that away.
Aside from many of these things just being a layer difference, it's not unreasonable to want to work on database query optimisation and not enjoy CSS, or to enjoy building frontends but just want a db that's fast and works. The flip side of your view is that they may find it sad that you don't want to make things, you just want to solve puzzles.
> Aside from many of these things just being a layer difference, it's not unreasonable to want to work on database query optimisation and not enjoy CSS, or to enjoy building frontends but just want a db that's fast and works.
I don't mean that it's upsetting that people enjoy different parts of the job; I enjoy many of those same aspects. But it's sad to me how few people around me care about the aspect that I originally fell in love with, which was the bedrock of our profession: solving problems in the machine/human shared language of code, instead of just writing out plain-English specs of what you want to have happen.
> The flip side of your view is that they may find it sad that you don't want to make things, you just want to solve puzzles.
So what? Their "just get it done" POV is far more common in this industry than mine (apparently), and the enjoyment they get from their job isn't being actively optimized away.
It's a relative of self-hosting. If a language can be compiled in itself, the compiler also becomes one of the first tests of language compatibility. Similarly, if you can write low-level pieces like a garbage collector in the higher-level language itself, you can maybe save transitions to and from lower-level code and help prove the language is useful in low-level scenarios. It's maybe backwards to expectations, but a lot of .NET improvement over the years has just been moving low-level code from C++ to a low-level subset of C#, to avoid thread stack transitions and/or give the JIT and AOT compilers more tools to inline low-level code. There have been experimental GCs in the .NET Runtime written in C# for exactly the same reason. There likely will be again.