I know it sounds extreme to dismiss that workflow, but I don't think people are talking enough about the subtle psychological consequences of LLM writing for this kind of thing.
In the same way that googling for an SEO article's superficial answer ends up meaning you never really bother to memorize it, "ask chat" seems to lead to never really bothering to think hard about it.
Of course I google things, but maybe I should be trying to learn in a way that minimizes the need. Maybe it's important to learn how to learn in a way that minimizes exposure to sycophantic average-blog-speak.
Yeah, same. I like the silo idea, I'll have to explore that.
I'm relieved to hear this because the LLM hype in this thread is seriously disorienting. I'm deeply convinced that coding "by hand" is just as defensible in the LLM age as handwriting was in the TTY age. My dopamine system is quite unconvinced, though, and it's killing me.
People yeeting a (shitty) GitHub clone with Claude in a week apparently can't imagine it, but if you know the shit out of Rails, start with good boilerplate, and have a good git library, a solo dev can also build a (shitty) GitHub clone in a week. And they'll be able to take it somewhere, unlike the LLM rat's nest that will require increasingly expensive tokens to (frustratingly) modify.
You're fooling yourself. It's very easy to get demonstrably working results in an afternoon that would take weeks at least without coding agents. Demonstrably working, as in you can prove the code actually works by then putting it to use. I had a coding agent write an entire declarative GUI library for mpv userscripts, rendering all widgets with ASS subtitles, then proceeded to prove to my satisfaction that it does in fact work by using it to make a node editor for constructing ffmpeg filter graphs and an in-mpv nonlinear video editor. All of this is stuff I already knew how to do in practice, had intended to do one day for years now, but never bit the bullet because I knew it would turn into weeks of me poring over auto-generated ASS doing things it was never intended to do to figure out why something is rendering subtly wrong. Fairly straightforward but a ton of bitch work. The LLM blasted through it like it was nothing. Fooling myself? The code works, I'm using it, you're fooling yourself.
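For anyone who hasn't seen the trick: "widgets as ASS subtitles" means emitting SubStation Alpha drawing commands instead of real UI primitives. A minimal sketch of the idea (my own hypothetical `ass_button`, not the commenter's actual library; in a real mpv userscript you'd hand strings like this to the OSD overlay API):

```python
def ass_button(x, y, w, h, label):
    """Build ASS subtitle events that draw a filled rectangle with a
    text label on top -- the core trick behind rendering GUI widgets
    as subtitles."""
    # \p1 enters vector drawing mode; m/l are move/line-to commands
    # tracing the button rectangle from the top-left corner.
    box = (r"{\an7\pos(%d,%d)\bord0\1c&H404040&\p1}"
           r"m 0 0 l %d 0 %d %d 0 %d{\p0}" % (x, y, w, w, h, h))
    # A second event layers the centered label text over the box.
    text = r"{\an5\pos(%d,%d)\bord0\1c&HFFFFFF&}%s" % (
        x + w // 2, y + h // 2, label)
    return box + "\n" + text

print(ass_button(10, 10, 120, 40, "Render"))
```

The debugging pain the commenter describes is exactly this layer: when a widget renders subtly wrong, you're staring at strings of move/line commands and override tags rather than anything resembling a layout tree.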
> Demonstrably working, as in you can prove the code actually works by then putting it to use.
That's not how you prove that code works properly and isn't going to fail due to some obscure or unforeseen corner case. You need actual proof that's driven by the code's overall structure. Humans do this at least informally when they code; AIs can't do that with any reliability, especially not for non-trivial projects (for reasons that are quite structural and hard to change), so most coding agents simply work their way iteratively to get their test results to pass. That's not a robust methodology.
> That's not how you prove that code works properly
Yes it is. What do you expect, formal verification of a toy GUI library? Get real.
> and isn't going to fail due to some obscure or unforeseen corner case.
That's called "a bug", they get fixed when they're found. This isn't aerospace software, failure is not only an option, it's an expected part of the process.
> You need actual proof that's driven by the code's overall structure.
I literally don't.
> Humans do this at least informally when they code, AI's can't do that with any reliability
Sounds like a borderline theological argument. Coding agents one-shot problems a lot more often than I ever did. Results are what matters, demonstrable results.
>That's not how you prove that code works properly and isn't going to fail due to some obscure or unforeseen corner case.
So? We didn't prove human code "isn't going to fail due to some obscure or unforeseen corner case" either (aside from the tiny niche of formal verification).
So from that aspect it's quite similar.
>so most coding agents simply work their way iteratively to get their test results to pass. That's not a robust methodology.
You seem to imply they do some sort of random iteration until the tests pass, which is not the case. Usually they can see the test failing, and describe the issue exactly in the way a human programmer would, then fix it.
>Human programmers don't usually hallucinate things out of thin air
Oh, you wouldn't believe how much they do that too, or are unreliable in similar ways. Bullshitting, thinking they tested X when they didn't, misremembering things, confidently saying that X is the bottleneck and spending weeks refactoring without measuring (only for it to turn out not to be), the list goes on.
>So no, they aren't working the exact same way.
However they work internally, most of the time current agents (of, say, the last year and newer) "describe the issue exactly in the way a human programmer would".
LLM hallucinating is not an edge case. It is how they generate output 100% of the time. Mainstream media only calls it "hallucination" when the output is wrong, but from the point of view of an LLM, it is working exactly as it is supposed to.
>LLM hallucinating is not an edge case. It is how they generate output 100% of the time
If it matches reality enough of the time -- which it does -- it doesn't matter. Especially in a coding setup, where you can verify the results, have tests you wrote yourself, and the end goal is well defined.
And conversely, if a human is a bullshitter, or ignorant, or a liar, or stupid, it doesn't matter if they end up with useless stuff "in a different way" than an LLM hallucinating. The end result regarding the low utility of their output is the same.
Besides, one theory of cognition (predating LLMs, even) is of the human brain as a prediction machine. In which case, it's not that different from an LLM in principle, even if the scope and design are better.
Does it have to be a specific number? Whatever makes you feel it's worth using it over not using it.
If I write code for medical devices I might not tolerate even one AI-induced issue. If I write glorified web apps, I could tolerate dozens of them as long as it still helps get stuff done faster when it works.
Your car fails occasionally and needs service. If most of the time it gets you there, enough that you find it worth it over NOT having a car or buying a new one, then it's useful.
And unlike the car, you can do whatever review/verification/testing of the resulting AI code you want before you deploy it. And the code failing won't kill you or others (if you write trading software, medical device firmware, or airplane code, you can always not use it).
You don't even need to let it rip on your system; you can use it with user confirmation for every action, or have it go in a sandbox.
It's interesting more people haven't talked about this. A lot of so-called agentic development is really just a very roundabout way to perform metaprogramming.
At my own firm, we generally have a rule that we do almost everything through metaprogramming.
That’s actually an amazing use case for agents. I love programming, but let's be honest: there are a lot of tasks like this that are just very time-consuming and not interesting at all.
I also did a native implementation of git so I could use an S3-compatible data store; your Rails guru can't do that.
Objectively, my GitHub clone is still shitty, BUT it got several ways GitHub is shitty out of my way and allowed me to add several features I wanted, no small one of which was GitHub not owning my data.
I don't know the shit out of Rails and I don't want to, I know the shit out of other things and I want the tools I'm using to be better and Claude is making that happen.
The skepticism is a little odd, to the level that people keep telling me I'm delusional for being satisfied that I've created something useful for myself. The opposition to AI/LLMs seems to be growing into a weird morality cult trying to convince everybody else that they're leading unhappy, immoral lives. I'm exaggerating, but it looks like things are going in that direction... and in my house, so to speak, here on HN there are factions. Like programming language zealots, but worse.
Hey I understand you've gotten something out of it. You hired a robot to 3d-print a mug that fits your hand. There's a place for that. You understand that it might poison you a little bit? You understand that this doesn't make ceramics irrelevant?
Hobby-project vibe coding is pretty cool (if I'm being honest, it's fucking miraculous; this tech is wild), but isn't it clear that there's a problem with the linkedincels, the investors, the management that are all convinced this will remove, say, 50% of programming jobs? I understand these things have legitimate uses, but I'm at my wits' end hearing about how deep understanding, craftsmanship, patience, and hard work aren't "results oriented".
There's definitely zealotry developing against AI, but I suspect it is a proportional (if unhelpful) response to the hype machine. Is it really zealotry to insist on the value of your mind and your competence? These people saying you should never "hand write" your code -- how the fuck did the discourse move so much that this isn't a laughably stupid thing to say? "I'm a CEO, and if you aren't using consultants to make your decisions you've already lost"
>isn't it clear that there's a problem with the linkedincels, the investors, the management that are all convinced this will remove say 50% of programming jobs
These people have always been doing this. Starting in the 90s it was outsourcing programming jobs, they were right then, they got more work for less money and you could have less expertise on staff farming out work somewhere else that was cheaper. You also got worse results sometimes. So it goes.
LLMs are making people more powerful and sucking a lot of income off to the people who provide them. Yup. It makes idiot shysters more powerful just the same as it makes experts more powerful.
People are acting like the software engineering industry is full of fine artistry building the finest bespoke tools instead of duct taping rocks to sticks. I'm sorry but there is a tremendous amount of crap out there by people who barely know what they're doing.
Yes new technology empowers idiots, but it also empowers smart people and if you use it well it'll lead to more quality. Yes you're going to have the same problems you had before of someone doing something cheaply competing with someone trying to be careful to build something well. There also will continue to be idiots spouting off about it.
Nothing changed, but the tools got more powerful, and people are whining and complaining that this change, this time, is ruining everything. Like they always have, forever.
Also, trying to speak dispassionately: if your enemy presents as the most vulnerable of a population, shouldn't that be an indication that you're colonizing? That you're squeezing so hard, oppressing so vehemently, that an entire people becomes your enemy? Or were the entire people your enemy the whole time?
How could Israel be "colonizing" Gaza when they've repeatedly tried to hand it off to other governments? They offered it back to Egypt after the Six-Day War (Egypt refused), included it in several offers that would have created a new Palestinian state, and finally, failing that, unilaterally withdrew in 2005. They removed all Jewish settlements, which is literally the opposite of colonizing.
"There can be no voluntary agreement between ourselves and the Palestine Arabs. Not now, nor in the prospective future. I say this with such conviction, not because I want to hurt the moderate Zionists. I do not believe that they will be hurt. Except for those who were born blind, they realised long ago that it is utterly impossible to obtain the voluntary consent of the Palestine Arabs for converting "Palestine" from an Arab country into a country with a Jewish majority."
"My readers have a general idea of the history of colonisation in other countries. I suggest that they consider all the precedents with which they are acquainted, and see whether there is one solitary instance of any colonisation being carried on with the consent of the native population. There is no such precedent. The native populations, civilised or uncivilised, have always stubbornly resisted the colonists, irrespective of whether they were civilised or savage."
"Every native population, civilised or not, regards its lands as its national home, of which it is the sole master, and it wants to retain that mastery always; it will refuse to admit not only new masters but, even new partners or collaborators."
"This is equally true of the Arabs. Our Peace-mongers are trying to persuade us that the Arabs are either fools, whom we can deceive by masking our real aims, or that they are corrupt and can be bribed to abandon to us their claim to priority in Palestine. ... We may tell them whatever we like about the innocence of our aims, watering them down and sweetening them with honeyed words to make them palatable, but they know what we want, as well as we know what they do not want. They feel at least the same instinctive jealous love of Palestine, as the old Aztecs felt for ancient Mexico, and the Sioux for their rolling Prairies. To imagine, as our Arabophiles do, that they will voluntarily consent to the realisation of Zionism, in return for the moral and material conveniences which the Jewish colonist brings with him, is a childish notion, which has at bottom a kind of contempt for the Arab people; it means that they despise the Arab race, which they regard as a corrupt mob that can be bought and sold, and are willing to give up their fatherland for a good railway system."
"All Natives Resist Colonists. There is no justification for such a belief. It may be that some individual Arabs take bribes. But that does not mean that the Arab people of Palestine as a whole will sell that fervent patriotism that they guard so jealously, and which even the Papuans will never sell. Every native population in the world resists colonists as long as it has the slightest hope of being able to rid itself of the danger of being colonised."
"This Arab editor was actually willing to agree that Palestine has a very large potential absorptive capacity, meaning that there is room for a great many Jews in the country without displacing a single Arab. There is only one thing the Zionists want, and it is that one thing that the Arabs do not want, for that is the way by which the Jews would gradually become the majority, and then a Jewish Government would follow automatically, and the future of the Arab minority would depend on the goodwill of the Jews; and a minority status is not a good thing, as the Jews themselves are never tired of pointing out. So there is no "misunderstanding"."
"This statement of the position by the Arab editor is so logical, so obvious, so indisputable, that everyone ought to know it by heart, and it should be made the basis of all our future discussions on the Arab question. It does not matter at all which phraseology we employ in explaining our colonising aims, Herzl's or Sir Herbert Samuel's."
"Colonisation carries its own explanation, the only possible explanation, unalterable and as clear as daylight to every ordinary Jew and every ordinary Arab. Colonisation can have only one aim, and Palestine Arabs cannot accept this aim. It lies in the very nature of things, and in this particular regard nature cannot be changed. "
"We cannot offer any adequate compensation to the Palestinian Arabs in return for Palestine. And therefore, there is no likelihood of any voluntary agreement being reached. So that all those who regard such an agreement as a condition sine qua non for Zionism may as well say "non" and withdraw from Zionism. Zionist colonisation must either stop, or else proceed regardless of the native population. Which means that it can proceed and develop only under the protection of a power that is independent of the native population – behind an iron wall, which the native population cannot breach."
"In the first place, if anyone objects that this point of view is immoral, I answer: It is not true: either Zionism is moral and just, or it is immoral and unjust. But that is a question that we should have settled before we became Zionists. Actually we have settled that question, and in the affirmative. We hold that Zionism is moral and just. And since it is moral and just, justice must be done, no matter whether Joseph or Simon or Ivan or Achmet agree with it or not. There is no other morality."
I think these are reasons that Mastodon and Nostr aren't ever going to have a critical mass of users, remaining a niche thing for people who care about the hypotheticals (which is fine). Imho, Bluesky is the only distributed social media project that has a chance of meeting users where they are, with usable search, realtime discoverability, and the other consequences of centralizing event buses.
People whine about Bluesky being too centralized, but the fact is that this type of infrastructure isn't self-hostable. You can do social media over email a la Mastodon (which admittedly is pretty great), but most people will trade that for a walled garden.
The big problem is that all this AT infra is pretty much charity, which doesn't feel sustainable. I wish it could be funded more like public libraries than ad tech.
25 GB < PLC Postgres < 100 GB, depending on whether you want to keep all the spam operations (> 50%) and/or add extra indexes for a handle-autocomplete service (like me; with everything, it's over 100 GB).
Repo data (records) is in the double digit TB range (low end, without any indexing, just raw)
Blobs are in the Petabyte range.
I aim to find out current and accurate details soon.
Bluesky works because people are told "Go to Bluesky" and they hide the federation. When you're told go to Mastodon and pick mastodon.social or any of the hundreds of other servers, you've lost. For some reason, the federation fans never understood this. I remember an interview with Diaspora's developers and they couldn't stop talking about how people can run their own servers.
Dude.
I have two friends who left Twitter for Bluesky. One's an HR rep and the other is a business analyst for warehouses. Does anyone think a selling point for them was that they can run their own Bluesky infrastructure?
Lovely visualization. I like the very concrete depiction of middle layers "recognizing features", which makes the whole machine feel more plausible. I'm also a fan of visualizing things, but I think it's important to appreciate that some things (like a 10,000-dimensional vector as the input, or even a 100-dimensional vector as an output) can't be concretely visualized, and you have to develop intuitions in more roundabout ways.
I hope they make more of these; I'd love to see a transformer presented more clearly.
True, but Bluesky really does solve pains that closed platforms can’t/won’t. Having a choice over your algorithm is like getting lead out of your pipes, or getting a bidet or something.
I've found it somewhat valuable in two ways and unhelpful/misleading in another:
1. Making small notes is so intuitive and low-pressure. I was already essentially doing this before, but in the form of various lists of "ideas" or "thoughts on _blank_". You can't reliably decide where you would've put something, so it becomes a mess. The fact that it's a single directory of .md files with phrasal titles is a great organizing constraint.
2. Being able to find old thoughts/ideas easily and link them together led to the clarification of a lot of my more unique ideas, because of the ad hoc link-language that emerged.
The big problems are the rabbit hole of manic articles promising too much, and the fact that after a while you simply have so many half-baked two-year-old notes that the whole thing becomes limiting and you declare note bankruptcy.
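One nice consequence of the single-directory-of-.md-files constraint is that the "link-language" is trivially machine-readable. A minimal sketch (assuming `[[wiki-style]]` links, which your particular note app may or may not use):

```python
import re

# Matches [[wiki-style]] links and captures the note title inside.
LINK_RE = re.compile(r"\[\[([^\]]+)\]\]")

def extract_links(note_text):
    """Return the note titles this note links to, in order of appearance."""
    return LINK_RE.findall(note_text)

note = "Relates to [[learning how to learn]] and [[note bankruptcy]]."
print(extract_links(note))  # -> ['learning how to learn', 'note bankruptcy']
```

Run over the whole directory, this gives you the link graph, which is one way to spot which half-baked notes are orphans before declaring bankruptcy on them.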
Just because LLMs are a technological innovation for "going to the gym" does not make cable machines a good metaphor. Maybe cable machines with cables made of highly variable grade hemp are comparable to LLMs-- they'll break randomly, and cause unexpected friction here and there. A cable machine still involves a human doing a thing. A forklift at the gym does the work instead.
All this fluff about targeting specific muscles etc. is simply not analogous to LLMs. Maybe old-school barbells are paper files and fax machines, and cable machines are Slack, Asana, and Excel?
The cable machines are efficient because they _do the work_ of performing a movement for you, much like how LLMs do the work of writing an e-mail from a prompt describing what you want to write.
They are designed to only activate the target muscle/muscle group during the movement, which is good for working that muscle but bad for working all of the other muscles that _should be_ activated in the kinetic chain for that movement.
Bad analogy. More like, "Professional painter says he doesn't employ low-wage contractors to paint for him"
If your rebuttal is "Michelangelo would've only painted the broad strokes and the faces" you're still missing the point that he still /did some painting/.
Where in AI use did you find low-wage contractors?
Both photography and AI are literally "click a (shutter) button" -- so the photo analogy is perfect.
And Michelangelo is a bad example because it's "ye olde paintings" (you could've at least tried Picasso or something) -- while my argument would be "painters got replaced by photographers".