>The engineers who thrive will be the ones who can resist the temptation to over-engineer when the marginal cost of adding complexity drops to near zero.
I think this isn't being discussed enough in the SWE world. It wasn't too long ago that engineers on HN would describe a line of code as "not an asset but a liability". Now that code is "free" though, I'm seeing more excessively verbose PRs at work. I'm trying to call it out and rein it in a bit but until engineers on average believe there is inherent risk here, the behavior will continue.
I don't see how the rich don't understand this. Guess they just think it won't happen to them, and that they'll be smart enough to cash out of America and go elsewhere before everyone else loses.
It seems like we are a long way from robots building robots. So they still need humans. Maybe the last people with jobs will be the ones building that first generation of robots that still needs humans.
> It seems like we are long way from robots building robots.
I think it's very possible some of these folks have convinced themselves otherwise, and have largely eliminated the folks who'd say "you fucking idiot, no" from their lives over the decades.
Would you rather be a billionaire in a world of millionaires, or a millionaire in a world of penniless peasants? What we're seeing now is the billionaires' answer to this question.
They know. That’s why they’re investing in things like land and building castles. They know the future they’ve designed. They’re already laying the foundation needed to win total power.
There's an older article that gets reposted to HN occasionally, titled something like "I hate almost all software". I'm probably more cynical than the average tech user and I relate strongly to the sentiment. So so much software is inexcusably bad from a UX perspective. So I have to ask, if code will really become this dirt cheap unlimited commodity, will we actually have good software?
Depends on whether you think good software comes from good initial design (then yes, via the monkeys with typewriters path) or intentional feature evolution (then no, because that's a more artistic, skilled endeavor).
Anyone who lived through 90s OSS UX and MySpace would likely agree that design taste is unevenly distributed throughout the population.
> To be fair, I was able to get it to work pretty well after giving it extremely detailed instructions and monitoring the "thinking" output and stopping it when I see something wrong there to correct it, but at that point I felt silly for spending all that effort just driving the bot instead of doing it myself.
This is the challenge I also face: it's not always obvious when a change I want will be properly understood by the LLM. Sometimes it one-shots it; other times I go back and forth until I could have just done it myself. If we have to get super detailed in our descriptions, at what point are we just writing in some ad-hoc "programming language" that then transpiles to the actual program?
There was no chance that everyone would be running their own email server, but if it weren't for the lack of IPv6 adoption, a plug-and-go home email server solution would probably see a decent amount of use. I'd bet we'd already be seeing it as a feature in most mid-range home routers by now.
The mail server in a router is easy to host, the problem is:
1) Uptime (though this could be partially alleviated by retries)
and most of all:
2) "Trust"/"Spam score"
It's the main reason to use Sendgrid, AWS, Google, etc. Their "value" is not the email service, it's that their SMTP servers are trusted.
If tomorrow I can just send from localhost instead of going through Google it's fine for me, but in reality, my emails won't arrive due to these filters.
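The mechanics of "sending from localhost" really are the easy part; reputation is what isn't. A minimal sketch using Python's stdlib `email` module, with made-up addresses (actually delivering this would need a local MTA such as Postfix on port 25, and the receiving side's filters would still have the final say):

```python
# Composing a message with the stdlib; all addresses are placeholders.
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "me@example.com"
msg["To"] = "friend@example.net"
msg["Subject"] = "Hello from my own server"
msg.set_content("Delivery is the easy part; reputation is the hard part.")

# With a local MTA listening on localhost:25, sending is one call --
# commented out here since it needs a running server:
# import smtplib
# with smtplib.SMTP("localhost", 25) as smtp:
#     smtp.send_message(msg)

print(msg["Subject"])
```

Whether that message lands in an inbox or a spam folder is decided entirely by the recipient's trust in the sending IP and domain, not by anything in the code.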
I use a small local provider (posteo) and have 0 problems with spam.
So even a 20-pound monkey can throw around some weight. To be fair, I only use it for personal stuff; it's probably different if you need enterprise scale.
I've seen plenty of Gmail accounts over the years and they pretty much look the same.
The only Gmail accounts that are "overrun by spam" are those of people subscribing to lots of spammy newsletters and then not knowing how to unsubscribe from them (or figuring they'd stay subscribed in case the next newsletter is the Magical One™). But that's 100% self-inflicted, and you can't save those people with any technical solution.
Email spam isn't a day to day problem for Gmail (at least) since Bayesian email filtering was first implemented.
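The core idea is simple enough to sketch in a few lines. A toy Graham-style Bayesian filter with a made-up three-message corpus (real filters train on far larger data sets and handle tokenization, headers, and unknown-word priors much more carefully):

```python
# Toy Bayesian spam filter in the spirit of "A Plan for Spam".
# The training corpus is invented for illustration.
import math
from collections import Counter

spam_docs = ["win money now", "cheap pills win", "money money offer"]
ham_docs = ["meeting notes attached", "lunch tomorrow", "project notes"]

def train(docs):
    counts = Counter()
    for d in docs:
        counts.update(d.split())
    return counts

spam, ham = train(spam_docs), train(ham_docs)
vocab = set(spam) | set(ham)

def spam_score(text):
    # Sum of log-odds per word, with add-one (Laplace) smoothing.
    s_total, h_total = sum(spam.values()), sum(ham.values())
    score = 0.0
    for w in text.split():
        p_s = (spam[w] + 1) / (s_total + len(vocab))
        p_h = (ham[w] + 1) / (h_total + len(vocab))
        score += math.log(p_s / p_h)
    return score

print(spam_score("win money") > 0)          # spammy words push positive
print(spam_score("project lunch notes") < 0)  # hammy words push negative
```

Positive scores lean spam, negative lean ham; the per-word log-odds is exactly the "this word appears far more often in spam than in ham" signal the original Bayesian filters exploited.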
The specific concern around uptime & reliability was baked into email systems from almost the start - undeliverable notifications (for the sender) and retries.
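That retry behavior is just a queue with growing delays and, eventually, a bounce. A sketch of the idea with illustrative intervals (not any particular MTA's defaults):

```python
# Sketch of a classic MTA retry schedule: retry with growing delays,
# then give up and send a non-delivery report (bounce) to the sender.
# The intervals below are made up for illustration.
def retry_schedule(base_minutes=15, factor=2, max_attempts=6):
    delay, schedule = base_minutes, []
    for _ in range(max_attempts):
        schedule.append(delay)
        delay *= factor
    return schedule  # after the last attempt, the MTA bounces the mail

print(retry_schedule())  # [15, 30, 60, 120, 240, 480]
```

This is why a home server that is briefly offline mostly just delays mail rather than losing it: the sending side keeps the message queued across attempts.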
But yes, the “trust / spam score” is a legit challenge. If only device manufacturers were held liable for security flaws, but we sadly don’t live in that timeline.
It's not a device/MTA issue: SMTP just is not a secure protocol, and there is not much you can do to "secure" human communication. Things like spoofing or social engineering are near impossible to address within SMTP itself without external systems doing some sort of analysis on the messages, or without combining it with other protocols like DNS (which is where SPF, DKIM, and DMARC records live).
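SPF is a good example of that layering: it's just a TXT record published in DNS that receivers fetch and interpret. A sketch of what such a record looks like and how it splits into mechanisms (the domain and addresses are made up, and a real verifier also performs the DNS lookups and evaluates each mechanism against the connecting IP, per RFC 7208):

```python
# Illustrative only: parsing an SPF TXT record into its mechanisms.
# The record below is invented; real verification involves DNS queries.
def parse_spf(txt):
    """Split an SPF record string into its mechanism list."""
    parts = txt.split()
    if parts[0] != "v=spf1":
        raise ValueError("not an SPF record")
    return parts[1:]

record = "v=spf1 ip4:203.0.113.0/24 include:_spf.example.com -all"
print(parse_spf(record))
# ['ip4:203.0.113.0/24', 'include:_spf.example.com', '-all']
```

The trailing `-all` is the part doing the anti-spoofing work: it tells receivers to fail any mail from an IP not matched by the earlier mechanisms.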
SMTP isn't at fault, the social ecosystem is at fault. Every system where identities are cheap has a spam problem. If you think a system has cheap identities and no spam, it probably doesn't have cheap identities — examples are HN or Reddit.
Trust/spam score is the largest one, I think, with consumer ISPs blocking the ports needed to receive mail a close second.
Even if your "self hosting" is renting a $5/month VPS, some spam lists (e.g. UCEPROTECT) proactively mark any IP ranges owned by consumer ISPs and VPS hosting as potential spam. I figured paying fastmail $30/yr was worth never having to worry about it.
For "Trust", I believe patio11 described this system as the "Taxi Medallion of Email".
e.g. you spend a lot of money to show that you are a legitimate entity or you pay less money to rent something that shows you are connected to said entity.
Without some kind of federation or centralization, it seems hard to distinguish a hobbyist from a spammer if both of them are using a plug-and-go. Forcing that responsibility into the hands of Google, Zoho, and Microsoft seems like the best compromise, unfortunately.
For one, if my power goes out for an extended period of time I'd still like to be able to access my email. Communications really can't be hosted locally.
What a weird take. I was running my own email server 25 years ago on a 512 kbit ADSL line. No problem at all, would even be enough bandwidth today for most messages.
(Back then email still worked from residential IP addresses, and wasn't blocked by default)
If there's one thing both parties agree with, it's that you can't ever vote for a third party because that's effectively voting for the other major candidate. So the problem of not having more than 2 choices perpetuates indefinitely.
> If there's one thing both parties agree with, it's that you can't ever vote for a third party
Actually, both major parties (not always at the same time) have a long track record of working very hard to promote voting for third-party candidates, doing things like funneling funds covertly (or simply nudging donors) to fund their efforts, assigning party activists to support third-party efforts, etc.
Of course, they exclusively do this for third parties whose appeal is, or is expected to be, mainly to people whose preference, if choices were limited to the major parties, would be for the other major party.
Because it's not just rhetoric: as long as the electoral system isn't reformed to change this, getting people to vote for a minor party instead of your opponent is, like demoralizing them into staying home or disenfranchising them (two other things the major parties have been known to try on populations likely to vote for their opponents), a lot easier than getting them to switch to you from the other major party, and exactly half as useful per voter.
That only works if the message of the third party is more appealing to those voters. And so the major party also pays attention to which third party messages from those who would support them are getting through and changes.
It is also helped by the fact that many insiders in a major party are secretly voting for the third party when the majority of primary voters (who are rarely well informed) force someone they don't like on the party. They can't do anything this time, but they can send a message to each other about where they failed.
> That only works if the message of the third party is more appealing to those voters.
It actually works just as well if the third party fails to attract the voters with its message but provides a reason not to vote for the targeted major party candidate that would not work as well if the messenger was the major party using the third party as a stalking horse. Because discouraging voters that would otherwise vote for the other party has the exact same effect on the outcome as moving them to a minor party.
Whichever choice is least favored is the malleable one. Right now, by switching up their candidates and policies, the Democrats can't do any worse than they're already doing, which is losing. If the Democrats win next time, then the Republicans will have four years with nothing to lose.
Fortran, the language, is also older than FLOW-MATIC.
FLOW-MATIC's claim to fame was beating Fortran to a working released implementation (and having syntax that looked like English, though that's nothing to be proud of). Plankalkül, however, has not yet been implemented, so if we're only counting releases of working software, it isn't a contender.
> AI means that those 'easy' tasks can be automated away, so there's less immediate value in hiring a new grad.
Not disagreeing that this is happening in the industry, but it still feels like a missed opportunity not to hire juniors. Not only do you have the upcoming skill gap you mention, but someone needs to instruct the AI to do these menial/easy tasks. Perhaps it's only my opinion, but I think it would be prudent to see this instead as junior engineers who can now get more menial tasks done, rather than expecting to add it all to the senior dev workflow at zero cost to output.