Depending on a corporation to do your programming (and burning half the planet in the process, pardon the hyperbole) is the very opposite end of the "hacker" ethos where Lisp stands. Very surprising to see this sort of comment on HN, of all places.
I've always understood hackers to be a subset of users at HN. Maybe there were more in the early days, but with the growth of the startup business model, a lot of different users were attracted to the site.
The core value seems to be interest in technology and the cultures around it. Emphasis on the plurality of cultures, because I think there are multiple, competing ones. Though, as per the guidelines, any story interesting to users is acceptable for submission.
Hackernews isn't really for that kind of hacker. Ever since Paul Graham became a startup wonk and VC, it's really more for "growth hackers". It was originally called "Startup News". For growth hackers, productivity, profitability, and scalability, especially in metrizable form, are far more important than romanticism about the lone hacker or small team of geniuses building something with just a laptop and their wits, or even moral concerns about the environment. (And LLMs burn less energy, and deliver more value, than crypto did. The energy consumption of AI has been way overblown.) And Lisp was created specifically to bring about this world. It was an early experiment in intelligence by symbolic computation—one which ultimately failed as we found that we can get a lot closer to intelligence by matmuling probability weights with good old-fashioned numeric code written in C++, Fortran, or maybe even Rust. So the long-term AI initiative which gave rise to Lisp ultimately spelt its end as well.
But the force-multiplier effects of LLMs are not to be denied, even if you are that kind of hacker. Eric S. Raymond doesn't even write code by hand anymore—he has ChatGPT do everything. And he's produced more correct code faster with LLMs than he ever did by hand, so now he's one of those saying "you're not a real software engineer if you don't use these tools". With the latest frontier models, he's probably right. With your puny human brain, you're not going to be able to keep pace with other developers using LLMs, which is going to make contributing to open source projects more difficult unless you too are using LLMs. And open source projects which forbid LLM use are going to get lapped by those which allow it. This will probably be the next major Linux development after Rust. The remaining C code base may well be lifted into Rust by ChatGPT, after which contributing kernel code in C will be forbidden throughout the entire project. Won't that be a better world!
First thing I did here was a grep for "Skills": no hits. Simon's posts are well upvoted here and Anthropic/Claude is a bit of an HN darling, but I think they are playing the hype game a bit too well here.
3 months ago, Anthropic and Simon claimed that Skills were the next big thing and going to completely change the game. So far, from my exploration, I don't see any good examples out there, nor is there a big growing/active community of users.
Today, we are talking about Cowork. My prediction is that 3 months from now, there will be yet another new Anthropic positioning, followed up with a detailed blog from Simon, followed by HN discussing possibilities. Rinse and Repeat.
This is something I have experienced first hand participating in the Vim/Emacs/Ricing communities. The newbie spends hours installing and tuning workflows with the mental justification of long-term savings, only to throw it all away in a few weeks when they see a new, shinier thing. I have been there and done that. For many, many years.
The mature user configures and installs 1 or 2 shiny new things, possibly spending several hours even. Then he goes back to work. 6 months later, he reviews his workflow, decides what has worked well and what hasn't, and looks for the new shiny things on the market. Because you need to use your tools in anger, through the ups and downs, to truly evaluate them in various real scenarios. Scenarios that won't show up until serious use.
My point is that Anthropic is incentivized to keep moving the goalposts. Simon is incentivized to write new blogs every other day. But none of that is healthy for you and me.
They were only announced in October and they've already been ported to Codex and Gemini CLI and VS Code agents and ChatGPT itself (albeit still not publicly acknowledged there by OpenAI). They're also used in Cowork and are part of the internals in Fly's new Sprites. They're doing extremely well for an idea that's only three months old!
This particular post on Cowork isn't some of my best work - it was a first impression I posted within a couple of hours of release (I didn't have preview access to Cowork) just to try and explain what the thing was to people who don't have a $100+/month Claude Max subscription.
I don't think it's "unhealthy" for me to post things like this though! Did you see better coverage of Cowork than mine on day one?
I read that as it's not healthy to constantly follow the day one posts about every iteration of brand new technology in order to try and see how to incorporate it into your workflow in a rapidly evolving manner.
It's not an attack on your article or your habits; it's an accurate indictment of chronically consuming probably short-lived hype instead of practicing craft with hardened tools. Much like watching certain programmers on YouTube to keep up with the latest frontend library instead of just working on something with versatile, generalizable, industry-relevant tools.
You made the right call. Skills were added to Antigravity and I immediately started creating and using them. I never used custom MCP servers, but skills were immediately obvious to me.
An example: I made a report_polisher skill that cleans up some markdown formatting, checks image links, and then uses pandoc to convert the document to HTML. I asked the tool itself to create the skill, then I just tweaked it.
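For the curious, the scripted part of such a skill might look something like this. This is a hypothetical sketch, not the actual skill; the function name and the specific cleanup rules are made up for illustration:

```python
import re
from pathlib import Path

def polish_report(md_path: str) -> list[str]:
    """Normalize a markdown report in place; return any broken local image links."""
    text = Path(md_path).read_text()

    # Collapse runs of three or more newlines down to a single blank line.
    text = re.sub(r"\n{3,}", "\n\n", text)

    # Ensure exactly one space after heading markers ("##Title" -> "## Title").
    text = re.sub(r"^(#{1,6})[ \t]*", r"\1 ", text, flags=re.M)

    # Flag local image references that don't point at an existing file.
    broken = [
        src
        for _alt, src in re.findall(r"!\[([^\]]*)\]\(([^)]+)\)", text)
        if not src.startswith(("http://", "https://")) and not Path(src).exists()
    ]

    Path(md_path).write_text(text)
    return broken
```

The skill's markdown instructions would then tell the model when to run this and to finish the job with something like `pandoc report.md -o report.html`.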
How is the fidelity of something like this? It seems like it would randomly fuck it up once in a blue moon. Is that not the case? For your use case I don't understand why you would want an AI involved at all.
Skills may have code attached to them, so in this case the formatting and converting are all done by code.
The value of skills is that they sit in an LLM's context for only a few tokens each, and the LLM activates one when it decides it's relevant (bringing the full skill into context). It's a cheaper alternative to having a huge CLAUDE.md (or equivalent) file.
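For context, as I understand the format, a skill is just a folder with a SKILL.md file whose short YAML frontmatter (name and description) is the only part that stays resident in context; the body and any attached scripts are read only on activation. An illustrative sketch, with the contents made up:

```markdown
---
name: report-polisher
description: Clean up markdown reports and convert them to HTML.
  Use when the user asks to polish or publish a report.
---

# Report Polisher

1. Run the attached cleanup script on the report to normalize
   formatting and check image links.
2. Convert the result with pandoc: `pandoc report.md -o report.html`
```

Only the few frontmatter lines cost tokens up front, which is what makes keeping dozens of skills around cheap.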
Please do open-source your skill and blog about it. Also, would like to hear from your experience after a few months of use. Like - how many times did you use the skill, did you run into some problems later (due to some unexpected thing in the markdown), did the skill generalize - or do you have to make tweaks for particular inputs.
@brailsafe has accurately captured where I am coming from.
I want more blogs/discussion from the community about the existing tools.
In 3/6 months, how many skills have you written? How many times have you used each skill? Did you have to edit skills later due to unforeseen corner cases, or did they generalize? Are skills being used predominantly at the individual level, or are entire teams/orgs able to use a skill as-is? What are the use cases that skills are not good at? What are the shortcomings?
(You being the metaphorical HN reader here of course.)
HN has always been a place of greater technical depth than other internet sites and I would like to see more of this sort of thing on the front page along with day one calls.
Anything that lets us compose smaller tasks into larger ones effectively is helpful. That’s because self-attention (ie context) is still a huge limiting factor.
As someone who uses these tools a lot, and who sits on the bleeding edge everyday, I agree with you.
MCP got a ton of use out of the gate. People were fawning over it for the first few months, and we can see how well that hype survived contact with hardcore engineers.
I really disagree: skills are really quite useful and there is a lot of usage + community. E.g. take a look at https://github.com/obra/superpowers, which I know is used by a lot of people to smooth out their workflow with Claude, with great results (not forced spec-driven development, just better context use + better results). Just this week I used skills to help encapsulate a way to document legacy services ahead of a rewrite (given that my experience now is that rewriting is a valid path vs refactoring in many instances): https://github.com/cliftonc/unwind.
I looked at superpowers, but it felt way too generic. Thanks for sharing unwind. More discussion/blogs about these kind of skills is what I am looking for. I would encourage you to write a blog on unwind, explaining in detail how it has helped you. Even better if you do it after 3 months of use, explaining the journey/evolution of the skill.
I'm happy to bet that skills -- i.e. "a set of instructions in markdown that get sucked into your context under certain conditions" -- will stick around. Similarly, I think the Claude Code/Cowork pattern -- an "interactive prompt using shell commands on a local filesystem" -- will also stick around.
I fully anticipate a fair amount of thrashing on what exactly the right wrapper is around both of those concepts. I think the hard thing is to discriminate the learned constants (vim/emacs) from the attempts to re-jiggle or extend them (plugins, etc); it's actually useful to get reviews of these experiments precisely so you don't have to install all of them to find out whether they add anything.
(On skills, I think that the reason why there "aren't good examples out there" is because most people just have a stack of impromptu local setups. It takes a bit of work to extract those to throw them out into the public, and right now it's difficult to see that kind of activity over lots of very-excitable hyping, as you rightly describe.)
The deal with skills and other piles of markdown is that they don't look, even from a short distance, like you can construct a business model for them, so I think they may well end up in the world of genuine open source sharing, which is a much smaller, but saner, place.
> (On skills, I think that the reason why there "aren't good examples out there" is because most people just have a stack of impromptu local setups. It takes a bit of work to extract those to throw them out into the public, and right now it's difficult to see that kind of activity over lots of very-excitable hyping, as you rightly describe.
Very much this. All of my skills/subagents are highly tailored to my codebases and workflows, usually by asking Claude Code to write them and resuming the conversation any time I see some behavior I don't like. All the skills I've seen on Github are way too generic to be of any use.
I thought skills were supposed to be sharable, but (a) ones that are being shared openly are too generic and not useful, (b) people are writing super specific skills and not sharing them.
Would strongly encourage you to open-source/write blog posts on some concrete examples from your experience to bridge this gap.
To be fair, Cowork and similar things are just trying to take the agentic workflows and tools that developers are already accessing (eg most of us have already been working with files in Cursor/CC/Codex for a long time now, it's nothing new) and making them friendly for others.
> 3 months ago, Anthropic and Simon claimed that Skills were the next big thing and going to completely change the game. So far, from my exploration, I don't see any good examples out there, nor is a there a big growing/active community of users.
Skills have become widely adopted since Anthropic's announcement. They've been implemented across major coding agents[0][1][2] and standardized as a spec[3]. I'm not sure what you mean by "next big thing", but they're certainly superior to MCP in some ways, being much easier to implement and reducing context usage by being discoverable, hence their rapid adoption.
I don't know if skills will necessarily stay relevant amid the evolution of the rest of the tooling and patterns. But that's more because of huge capital investment around everything touching AI, very active research, and actual improvements in the state of the art, rather than simply "new, shinier things" for the sake of it.
2 days ago I built a skill to automate a manual workflow I was using: after Claude writes and commits some code, have Codex review that code, then have Claude go back and address what Codex finds. I used this process to implement a fairly complete Docusign-like service, and it did a startlingly good job right out of the gate; the bugs were fairly shallow. In my manual review of the Codex findings, it seems to be producing good results.
Claude Code largely built that skill for me.
Implemented as a skill and I've been using it for the last 2 days to implement a "retrospective meeting runner" web app. Having it as a skill completely automates the code->review->rework step.
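For anyone wanting to replicate it, the loop such a skill encodes is roughly this (pseudocode; the CLI invocation is illustrative, assuming something like Codex's non-interactive `codex exec` mode):

```
repeat:
    Claude writes code and commits it
    Codex reviews the latest commit     # e.g. codex exec "review the diff in HEAD"
    if the review found real issues:
        Claude addresses the findings and commits again
    else:
        stop
```

The value of making it a skill rather than a habit is that the orchestrating agent runs the whole loop without being re-prompted at each step.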
I looked at the official repo of skills, but I found them very generic and artificial.
I would encourage you to write up a blog post of your experience and share a version of the skill you have built. And then follow up with a blog post after 3 months with analysis like how well the skill generalized for your daily use, whether you had to make some changes, what didn't work etc. This is the sort of content we need more of here.
I partially agree with you that things get abandoned by users when they are too complex, but I think skills are a big improvement compared to what we had before.
Skills + tool search tool (dynamic MCP loading) announced recently are way better than just using MCP tools. I see more adoption by the people around me compared to a few months ago.
Anthropic has great marketing. They get shit (and I do mean shit) to stick in a way that I don't think anyone else in the AI space could. MCP and skills were both obvious duds to people who understand the tech.
Simon is more influencer than engineer at this point; he's incentivized to ride waves to drive views, and I think the handwaving "this will be amazing" posts have been good for him, even if they turn out to be completely wrong.
I'm not really sure I understand this critique. Skills and cowork are not mutually exclusive. It sits in a gap between Chat and Claude Code.
In regular Chat, I struggle to get the agent to consistently traverse certain workflows that I have. This is something that I can trivially do in Claude Code - but Claude Code wants to code (so I'm often fighting its tendencies).
Cowork seems like it's going to allow me to use the best parts of Claude Code, without being forced to output everything to code.
It’s not quite at the same level but it reminds me of YouTubers who get products from companies for free for a “review” and then they say “no money exchanged hands”. The incentives are implicit wink-wink and everyone knows it except the audience.
In the case of Cowork I didn't even get preview access, I learned about it at the same moment as everyone else did. There was no incentive from Anthropic to write about it at all (and I expect they may have preferred me not to bang on about prompt injection risks or point out the bugs in their artifacts implementation.)
Honestly, constantly having to fend off accusations of being a shill is pretty tiring.
Is there an Anthropic document that says this? I mean about the "training" portion. The docs I am seeing talk about Claude Code or Desktop being able to use skills - that's a totally different matter.
> Claude models understand the Skill format and structure natively. You don't need special system prompts or a "writing skills" skill to get Claude to help create Skills. Simply ask Claude to create a Skill and it will generate properly structured SKILL.md content with appropriate frontmatter and body content.
Shameless plug: I built [1] and use a small magit-like interface on top of org-mode.
I love org for all its bells and whistles and use them in various ways. But most of the time I need a small subset of org in a form-factor that allows ease of use.
I created my own X11 window manager [1] at the start of this year in around 800 lines of C.
I had been using dwm (4000 lines of C) for many years and wished to write my own for a long time, but what made me take the leap was really steveWM [2] and TinyWM [3] which are both super small.
I think all this discussion around Open-source AI is a total distraction from the elephants in the room. Let's list what you need to run/play around with something like Llama:
1. Software: this is all Pytorch/HF, so completely open-source. This is total parity between what corporates have and what the public has.
2. Model weights: Meta and a few other orgs release open models - as opposed to OpenAI's closed models. So, ok, we have something to work with.
3. Data: to actually do anything useful you need tons of data. This is beyond the reach of the ordinary man, setting aside the legality issues.
4. Hardware: GPUs, which are extremely expensive. Not just that, even if you have the top dollars, you have to go stand in a queue and wait for O(months), since mega-corporates have gotten there before you.
For Inference, you need 1,2 and 4. For training (or fine-tuning), you need all of these. With newer and larger models like the latest Llama, 4 is truly beyond the reach of ordinary entities.
This is NOTHING like open source, where a random guy can edit/recompile/deploy software on a commodity computer. With LLMs, once data and hardware enter the equation, the playing field is completely stacked. This thread has a bunch of people discussing nuances of 1 and 2, but this bike-shedding only hides the basic point: control of LLMs is for mega-corps, not for individuals.
But there is an insidiousness to Meta calling their software 'open source'. They are riding on the coattails of the term as if they are being altruistic, when in fact they are no more altruistic than any large corporation that wants to capture market share via its financial muscle - which I suppose touches on your last point.