Why Htmx Does Not Have a Build Step (htmx.org)
178 points by crbelaus on Aug 25, 2023 | 85 comments


I liked Julia Evans' take on this: https://jvns.ca/blog/2023/02/16/writing-javascript-without-a...

""" My goal is that if I have a site that I made 3 or 5 years ago, I’d like to be able to, in 20 minutes:

- get the source from github on a new computer

- make some changes

- put it on the internet """

Front-end build scripts make sense for projects that are constantly developed by a team that is actively ready to incrementally fix any problems that come up.

For sites that only get maintained every year or so (and I have plenty of those myself) my experience is that they often cause more problems than they solve.

I also found that I started really enjoying JavaScript development again once I gave myself permission to ignore the npm/webpack/etc script world entirely and just write regular code that runs in browsers!


This. I very grudgingly learnt Next.js to do "modern" JS development. Don't get me wrong, it isn't a bad framework, but I think I spent most of my time figuring out how third-party components worked, setting up parts of the build system, writing new plugins, or upgrading other dependencies - and that was even if I came back after, say, a 3-month hiatus. My layouts rarely change. Server-side rendering is a whole other pain. If you are like me and want to do your server side NOT in Node, you are SOL. I was lamenting the other day that I just want to do FE in TypeScript (in the browser, only for FE logic) but have layouts etc. rendered server side.

I was very delighted to find htmx solves that problem for me!
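
To give a flavor of it, the whole setup is one script tag plus attributes on plain HTML (the /fragments/profile route here is just a stand-in for whatever your server renders):

    <script src="https://unpkg.com/htmx.org"></script>

    <!-- the server returns a rendered HTML fragment -->
    <button hx-get="/fragments/profile" hx-target="#panel">
      Load profile
    </button>
    <div id="panel"></div>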


I can't believe I forgot about this! I love Julia's writing, I cited a different blog post of hers in this piece.


> Front-end build scripts make sense for projects that are constantly developed by a team that is actively ready to incrementally fix any problems that come up

today i had a conversation with another programmer who does mainly backend work, and shudders in despair at anything more complex than basic frontend javascript (and that's not even considering build tools or frameworks). i thought, perfect, that's exactly what i can help with, and then i realized that, sure, i can use frameworks and build the most complex interface, but if it takes anything more than a script tag to load up the code then they will never be able to maintain it.

"i need to do WHAT to deploy this?"

i love building SPAs, i am not a stickler for minimizing the need for javascript. if it helps to solve a problem then i'll use it. but deployment has to be so easy that someone who has never touched the project can do it. "here is a website. it works. go make a copy of it. add your changes and send it back."


exactly this. i am just in the process of upgrading the framework of a site that was built this way. the thing is that the default tools all push towards using build tools, bundlers and transpilers, so i had to dig a little until i figured out how to do it without them.

in the process i asked myself why go through the trouble and not just use the build tools like everyone else, and i came to pretty much the same answer: when i need to make some changes to the site in a few years, i do not want to be stuck figuring out how to get 5 year old build tools working before i can even start working on the site itself.

the current site, btw, was built in 2016, and it still works, and i was able to dive right in and add new features without delay. the upgrade of the framework is optional, because i want to take advantage of new features, not because the site would otherwise have stopped working.


I don't understand this issue.

With a proper `package-lock` and `nvm`, I picked up 3 or 4 year-old projects, installed everything with one `npm install`, made my changes, and never faced a problem.

Edit: I remember facing one issue with `node-gyp` but that was just a poor choice of packages that were trying to be too native.
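
Concretely, the pinning is just a lockfile checked in plus a record of the Node version (names and versions below are placeholders):

    {
      "name": "my-old-site",
      "engines": { "node": ">=18 <19" },
      "devDependencies": { "webpack": "5.88.2" }
    }

With that and `package-lock.json` committed, one `npm install` (or the stricter `npm ci`) brings back exactly what was recorded, and `nvm use` puts you on the matching Node.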


To counterbalance that anecdote: I have never picked up a JavaScript project more than a year old without having to mess with the build system or package versions in order to deploy some small update.


I just picked up a project after 2 years. When I left it, everything worked fine: `npm run` and it spun right up.

Today, nothing works, module resolution problems everywhere. It's going to take me days of fixing the build system now just to get the initial compile working again.

Node is a disaster.


Could you have prevented future module resolution problems?


Your host might eventually drop the Node version you are running on, and then the house of cards comes crashing down. You might have a dependency that doesn't support the new Node version you need to run on.


This is why I build my personal projects in PHP, even though I'm not really a fan. I use PHP and jQuery. It'll work basically forever, and I can come back to it in 15 years and it'll still work.


jQuery was great when the browsers weren't so standardised; I find it unnecessary now.


It's still great for a lot of things. It's not necessary now, but you still write way less code with it, and it's got a ton of functions that have no browser-native implementations.

Plus you basically get it for free, because it's already browser-cached: 75% of the sites out there are using it.


> Edit: I remember facing one issue with `node-gyp` but that was just a poor choice of packages that were trying to be too native.

Bingo.


> `nvm`

That's mostly why! You need to have something installed. If you don't use a build step you don't even need Node.js; your browser is your only dependency.
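
A complete no-build page really is just this (app.js being whatever module you wrote):

    <!doctype html>
    <title>no-build page</title>
    <script type="module">
      // the browser resolves the import itself;
      // nothing to install, nothing to compile
      import { main } from './app.js';
      main();
    </script>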


Experiences vary wildly.

> With a proper `package.lock`

I've never heard of package.lock. Are you as familiar with the subject as the general tone of your comment implies?


My bad, I meant `package-lock`; updated for clarity.

> Especially when the old project in question is not yours

Then it falls out of the stated scope of "updating my small website once every 5 years or so"


How have you never heard of a package lock, yet are criticizing other people's knowledge and tone?


They were being pedantic, in that `package-lock` is a thing and `package.lock` (which is what the comment originally said) isn't. Because apparently a typo invalidates whatever experience you have.


You seem to have missed that my comment was a response to pedantry. It was not, in itself, an attempt at pedantry, let alone pedantry unprovoked.

If someone writes a comment intended to come across as knowledgeable about something (NB: in service of trying to downplay the experience of others, including the person they are responding to), but their self-report gets something as wrong as referring to package-lock.json as "package.lock", which is in the ballpark, but far enough off to be weird, then it raises questions about just how much experience they actually have with that thing (including and especially relative to the person they were trying to contradict).


This is interesting. I think what Julia might actually want is a build system that isn’t as fragile as the standard frontend build systems seem to find acceptable.

I'm not a front-end dev, but I've found the same to be true on the few occasions I've needed to spin something up. For whatever reason, front-end dev culture doesn't seem to value robustness in tooling, defined, as Julia noted, as having tools that just work. I'm not sure why that is, but it doesn't necessarily mean build tools as a concept are wrong. It just means that the standard implementations don't seem to value developer experience.


I think Julia knows what she wants, and articulated it perfectly in her article. It wasn't this.


I was recommending htmx to some of my friends and they asked me "What does it add?"

I told them that its value proposition was not that it "added" anything but that it "eliminated" things: build systems, configs, and the need to use JS frameworks in cases where they're overkill (which in my experience is the majority of cases).

HTMX is a good example/manifestation of Nassim Taleb's "via negativa" heuristic, i.e. some systems can be improved simply by the removal of components that are dangerous, ineffective or cause friction, as opposed to trying to improve systems by adding components that we think will make them better. [0]

A lot of technology is focused on improvement by adding, but I think we should start thinking more in this paradigm of improvement by subtracting: subtracting complexity that makes systems more difficult to maintain and build upon.

[0]: https://coffeeandjunk.com/via-negativa/


Well... there still has to be a reason to use htmx, or else why not cut it too?


Needlessly snarky… it adds a lot of value if you want to enhance HTML with dynamic behaviors that would be cumbersome to write in plain JavaScript.


Go ahead. Cut out htmx as well if you don't need incremental updates without having to write JavaScript.


Reminds me of this: https://programmingisterrible.com/post/139222674273/write-co...

“Write code that is easy to delete, not easy to extend.”


“Perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away.”

― Antoine de Saint-Exupéry


"do more and more with less and less until eventually you can do everything with nothing" - R Buckminster Fuller


Crossing my fingers that the proposal for allowing (browser-ignored) type annotations in javascript progresses: https://tc39.es/proposal-type-annotations/

Between that, HTTP/2 and /3, and ES modules, many of the downsides of building apps with no compile step are almost completely mitigated.
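
For anyone who hasn't read the proposal: the idea is that annotations like the ones below become syntax the engine parses and skips, so the same file could run in the browser unmodified. A sketch:

    // today you'd strip these types first (tsc, esbuild, etc.);
    // under the proposal the browser would simply ignore them
    function add(a: number, b: number): number {
      return a + b;
    }

    const total: number = add(1, 2);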


Does http2 solve waterfall imports? Also wouldn't tree shaking still be unsolved? What about minification?

IMO build-step woes are really overstated. My build step has linting, type checking, optimization, bundle-analysis size-diff checks, enables JSX, and is basically necessary for development anyway, as you get hot reloading, compiler-added features for better debugging, etc.


>Does http2 solve waterfall imports?

It used to, but browsers ruined it because they didn't want to help improve caching support


HTTP2 allows the server to push files to the browser.

If you don't have tons of code, tree shaking and minification aren't that important. If you depend on a couple of libraries (jQuery, for example), those can be minified and just referred to directly in the source.


Chrome removed support for HTTP/2 Server Push last year.

As I remember, it never saw broad adoption because it was technically challenging to integrate with, and it tended to push a lot of resources clients already had, so it wasn't a big win for efficiency anyway.


HTTP early hints still exists and can achieve similar things as server push without wasting bandwidth.


Care to share your build setup? I'd very much appreciate it.

I haven't done JS since the jQuery days, but I'm keen to get into modern JS development.


I maintain Tamagui, so the tamagui starter kit basically is it, try `npm create tamagui`


Waterfall imports are solved by rel=modulepreload.
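
That is, you declare the module graph up front so the browser can fetch nested imports in parallel instead of discovering them one level at a time (paths made up):

    <link rel="modulepreload" href="/js/app.js">
    <link rel="modulepreload" href="/js/lib/dep.js">
    <script type="module" src="/js/app.js"></script>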


Wouldn't that conflict with possible future optional static typing? It's only Stage 1. I'm hoping it doesn't make it in its current form. Some good discussion in the issues, though: https://github.com/tc39/proposal-type-annotations/issues


Can you explain why the two proposals would "conflict?" Seems like they're proposing the same thing to me.


It would be hard to separate the ignored annotations from ones that a possible future optional static typing system would actually use. They would probably have to use a different, uglier syntax.


I'm not sure I follow. Both proposals suggest that browser engines ignore the syntax completely to allow some other, external process to check it.

They use slightly different wording but describe the same basic idea: this syntax will be ignored entirely, and an optional/other process can use this area for whatever.


There are other languages that have optional static typing that don't treat types as comments. JavaScript could evolve to add support for this.


I love this, particularly its clear-eyed summary of some of the tradeoffs. Gonna be honest: build systems have completely robbed the joy from writing simple frontend hacks for me. I no longer use them, which means I just can't use a whole lot of JavaScript frameworks and libraries. Oh well.


I just threw away an admin site written with trpc+react and rewrote it with htmx. Deleting so much code felt glorious. Everything feels simpler, and as a result adding features is so much easier.


Compilers should be optional, in two senses of the word:

1. Even if you use a language like TypeScript in your library, your users shouldn't have to. Provide types to them, even provide features like decorators, but have first-class and ergonomic ways for plain JS users to accomplish the same task.

2. Frameworks don't need compilers. All the reasons the ones that have them claim to need them, especially performance and ergonomics, are disproven as requirements by libraries that meet or exceed them on those axes. You especially don't need to fork JS and HTML; doing so fragments the web-dev ecosystem and causes tool friction, for very little gain. You can offer a compiler, but it should be an optional optimizer.

However, even as compilers should be optional, I also think they should be possible. If users want TypeScript to check their work, and especially if they're in large teams and organizations where extra checks help collaboration, your approach shouldn't prevent it.

And if you're doing things in a language without traditionally good static analysis tools, like HTML, you should try to provide them; otherwise your approach really doesn't scale, due to ever-increasing fragility as your code base and team size grow.
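
For what it's worth, JSDoc is one way to hit point 1 without making anyone compile: ship plain .js, and TypeScript's checkJs (or just the editor) reads the annotations and type-checks callers anyway. A sketch with a made-up function:

    /**
     * @param {string} name
     * @param {number} [retries]
     * @returns {Promise<string>}
     */
    export async function fetchGreeting(name, retries = 0) {
      // plain JS body; the types live entirely in the comment
      return `hello ${name} (attempt ${retries + 1})`;
    }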


I think build systems are a very big reason for the mess javascript is in nowadays.

- Typescript is very light from the code transformation point of view. But if anyone remembers the output of babel: the handling of async/await plus the way module imports are implemented generates really hard-to-read code, and the "best practices" about hiding state make things a lot harder. Just think about the fact that back in the day (tm) it was common to inspect the state of the app and change it, full lisp style, and now the js console can mostly be used for quick code tests and for looking around when execution is stopped at a breakpoint - and even then you can't see that much because of how the code is generated.

- The whole babel thing still feels like madness to me. Yeah, you're getting some new language features, but at the price of non-obvious code transformations that make you need source maps (which you then need to make work and maintain) and all the rest.

- Build step + npm results in uncontrolled bloat of the js bundle. `npm install <random-component>` may easily inflate your bundle by 500kb.

- And with all that, there is still quite a lot of trouble with requiring the necessary modules for use on the frontend. Just think that there is still no simple way to require all the css provided by a library from npm.

Another big problem in my opinion is the propagation of the "write once / use everywhere" approach. It works sometimes, yeah, but you need to sell your soul for it: you'll have to use something like next.js, which implies you're bound to node.js, and then you either do all your development in node or have another service doing stuff in the language of your choice plus a node service as a templating layer, and then you need containers and orchestration and so on, which makes it a technically challenging project for something that could almost certainly be done with a trivial set of technologies if this requirement had been dropped.

One could argue that we need a different approach, with libraries that acknowledge that the frontend is a distinct world and that aim to provide ready-made components specifically for the web, with easy setup and no/minimal dependencies. Maybe even with a vendor-folder-like approach, since there won't be too much code anyway. That should be enough for 99.999% of projects, and the rest can use the full-scale approach and embrace complexity if they really need it.


Dependency hell IS hell.

I broke my dev machine because of a bad application of repos + brew, and trying to reinstall everything was a messy mess.

Now I'm doing things with nix, and it's almost good, yet I can't escape dependency hell completely...


> It’s certainly not true for Python, or Java, or C

As far as the C code I've seen from the past (early 2000s) goes, it compiles just fine even today. I doubt "C" belongs there.


I've had lots of old C code refuse to compile on new compilers out-of-the-box, not to mention that you'll likely need to track down old versions of dependencies as well. Sure, you can always get it all to compile with no changes to the code, but the process can be arduous and the result can basically have its own OS's worth in libraries attached to it (not that modern languages are any better here...).


This was a bizarre comment in the OP article, with even more bizarre reasoning: they "all have versioning mechanisms where opting for new language features will force you off of deprecated APIs"

C code from 1999 will, unless written by a literal psychic, not have "opt[ed] for new language features" that arose in the interim.


I think gets() was the only function removed in C11? That complaint seemed really odd.


I remember going from just having to hit F5 to waiting for a "hot" reload, AngularJS -> Angular 2. Sad times.


I like hot module reloads more than F5 since it preserves the state of the page. I don't use Angular though, but with Vite + React it's extremely quick.


> Would code written in those TypeScript versions compile unmodified in today’s TypeScript compiler (version 5.1 at the time of writing)? Maybe, maybe not.

Definitely not. TypeScript upgrades are nearly always breaking in some way. But the fun really starts when upgrading typings (type definitions), or when a library that used to have no typings starts shipping its own that are, more often than not, incompatible with the 3rd-party typings, causing a whole headache by itself.

I love the concept of Typescript but I’m glad I left it behind in favor of Blazor (laugh if you want). It allows me to focus on the customer problems in the limited time I have rather than fixing my build for the Nth time.

Also, lately the TypeScript type system has become more complex, and the typings themselves have also grown more complex, with things like:

     export type SomeType = IPickOne<SomeOtherType,SomeProperty> | IPickExcept<SomeOtherType2,SomeProperty2>;
Utterly insane; you need to pre-compile the types in your mind or rely on auto-complete to understand them.

I get the idea that some parts of the front-end ecosystem are designed to be busy work, or at least usually turn out to be. But not all of us work in a SaaS development team that can afford to spend 30% or 40% of its time just playing with front-end build systems. Some of us work for clients instead and need to make the best of the time we have. And more often than not, that means excluding risk factors like large parts of the NPM ecosystem, limiting things to some simple gulp, dart-sass and some terser to "build" the front-end.


I often use vanilla esbuild installed via NPM.

I feel like this build step would still work in 5 to 10 years:

- esbuild is compiled into a single binary for each target system, so it has minimal dependencies.

- NPM makes the version immutable, and will download the same binary version on ‘npm install’.

- I use .tsx as my extension but write only pure JS; this gets me IDE suggestions without writing TS types, which works well for small projects.

Compared to webpack, esbuild seems unbreakable.
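
For reference, the whole build script can be one file (entry and output names are whatever your project uses):

    // build.mjs
    import * as esbuild from 'esbuild';

    await esbuild.build({
      entryPoints: ['src/app.tsx'],
      bundle: true,
      minify: true,
      outfile: 'dist/app.js',
    });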


I tried to use htmx but the client side templates extension (allowing you to transform json into html client side using a template) doesn't support json arrays.

Maybe I don't understand the right way of doing things with htmx, but returning large amounts of HTML seemed like a waste of bandwidth to me. That said, I appreciate htmx not having a build step. Even Tailwind gets annoying to me.


Returning partial HTML from endpoints is the whole point of htmx. If your endpoint returns JSON and you're trying to convert that to HTML to use with htmx, don't bother - just use another "traditional" JS framework.

I highly recommend the “building hypermedia systems” book (free at https://hypermedia.systems/) so you can understand the HTML/ hypermedia architecture of which htmx is a key component and how it’s meant to be used.
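
To make it concrete, the htmx version of, say, an active search is just this (endpoint invented for illustration), with the server responding with rendered markup rather than JSON:

    <input type="search" name="q"
           hx-get="/contacts/search"
           hx-trigger="keyup changed delay:300ms"
           hx-target="#results">

    <div id="results">
      <!-- server responds with a fragment, e.g.
           <ul><li>Jane</li><li>Joe</li></ul> -->
    </div>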


Thanks, will take a look at the book. Your points make sense, but having an extension dedicated to this made me think returning JSON may be ok.


If you have an endpoint that returns JSON and that you can't change for legacy, historical, or compliance reasons, it might make sense to integrate it into an application that otherwise uses htmx the way it was intended, with just that one weird legacy endpoint's JSON converted to HTML client side. That sounds like the use case for that extension. The other explanation is someone wanting to jump on the htmx bandwagon without understanding the philosophy, trying to keep returning JSON, converting it to HTML on the client, and then shoehorning that into a hypermedia architecture :)


> just use another “traditional” Js framework

Or don't use a framework at all. It's easy enough to convert JSON to HTML — you only need a few lines of JS.
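
Something like this (endpoint and field names made up; escape anything user-supplied before doing this for real):

    // inside a <script type="module">
    const res = await fetch('/api/items');
    const items = await res.json();
    document.querySelector('#list').innerHTML =
      items.map(item => `<li>${item.name}</li>`).join('');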


It's not surprising that HTMX didn't allow you to do that because that's pretty explicitly not what it wants to be. HTMX is all about sending HTML over the wire, not JSON. If your HTML/JSON is gzipped, it shouldn't be significantly different in terms of bandwidth. Plus, you're not bundling nearly as much javascript with your site compared to a heavy client side framework, not to mention savings from avoiding processing more than just the DOM update on the client.


You may be surprised how well HTML compresses.


> large amounts of html

It only sends large amounts of HTML if there is a lot of data, and if there is a lot of data it would send large amounts of JSON instead. HTML in itself doesn't add much; it's the data that makes up the bulk of the content.


Almost all http clients send Accept-Encoding headers that will significantly shrink the response. Try just gzipping an example html response to get an idea of how good it will be.


Makes sense, thanks.


The answer is simple: it offers simple things as a simple library. I really liked the effort that went into explaining this at such length. You shouldn't even be satisfied with that, my friend. I'm looking forward to the htmx nutshell book; you could write it.


Maybe slightly off-topic: I recently started developing with htmx and Go, and it's been really productive for me, as I am not a front-end developer. But there are still some use cases where I need to use JS, like a simple "hide and show password field"; I had to use a bit of hx-on JavaScript to make it work. I am for no-build, as it keeps development and debugging simpler.
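
For that particular case, something like this sketch was roughly all it took (not my exact code):

    <input type="password" id="pw">
    <button type="button"
      onclick="const p = document.getElementById('pw');
               p.type = p.type === 'password' ? 'text' : 'password'">
      show / hide
    </button>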


I like that the trend cycle is now moving towards simpler systems with good DX instead of yet another React competitor.


I've seen Typescript programs break between versions of Typescript more than I've seen happen in any other programming language.


Upgrading JS dependencies is a nightmare. That includes build/dev dependencies.

Part of my job is to maintain more than 50 web projects. That includes streamlining, refactoring, upgrading, optimization etc. Some of them are a decade old.

The most painful and time consuming part is dealing with JS issues.

To anyone reading this: cut the fat and do it early. A little bit of perceived “DX” is not worth writing “npm install”.

Just write the code that needs to be written, in a straightforward manner. Lean on standards and don't get distracted by fads. Prioritize stability.

Use libraries that do so as well.


Break meaning it no longer type-checks, or behaviorally doesn't work?


Me too, but it's usually because of the weird way TypeScript was built as a thin veneer over JavaScript. Unlike a proper strongly-typed, statically-compiled language with a consistent underlying framework, where unsound behavior is adversarially sought out and eliminated in the very design of the language from the start, TypeScript started off by just bolting types onto JavaScript and progressed from there to actually sanity-checking the code that a JavaScript runtime happily accepts.

I just ran into a case yesterday where some code ported from JavaScript to TypeScript five or more years ago once more wouldn't compile after upgrading from TypeScript 5.1.x to 5.2.x. In this case it was a DOM event handler that for some reason (probably copy-and-paste from another location in the codebase) had a `return foo;` in a couple of its conditional branches. Now, obviously event handlers don't return anything, and anything they do return is ignored (perfectly normal for JavaScript). TypeScript didn't error out on that, but it did error out because not all the code paths returned a value, so it was no longer able to infer the return type.

A function (that returns a value) that can execute along some logic path without returning a value is a soundness issue. Granted it didn't actually cause any problems (as the return value was always ignored in all cases, the solution was to just remove the `return` statements that never should have been there) but tsc incidentally caught something funky that never should have been in the code in the first place.
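
Reconstructed from memory (names changed; whether it errors also depends on compiler options like noImplicitReturns), the shape of it was:

    declare function doWork(): void;

    document.querySelector('button')?.addEventListener('click', (event) => {
      if (event.shiftKey) {
        return true;   // one branch returns a value...
      }
      doWork();        // ...another falls off the end
      // error: Not all code paths return a value.
    });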

So if you view incremental tsc releases as "tightening down all the bolts" then I think its behavior makes sense. I wish tsc caught everything from the start, but I'm only ever happy to see that my code no longer compiles because it means that something tsc *wrongly* accepted before is now being caught - which usually means that potentially errant runtime behavior has been identified and this is a chance for me to correct it. Which is, after all, the whole point of turning JavaScript into a compiled language - so I can catch errors (or at least as many as the TypeScript language is empowered to and the TypeScript compiler is smart enough to) at compile-time rather than at run-time.

(But I'm not primarily a web developer, I hate loosely-typed languages, I absolutely abhor runtime errors, and I think any soundness holes in a language should set off alarm bells left and right, so perhaps my opinion is not actually reflective of the average webdev.)


This is usually the TypeScript team tightening up some "loose" behavior that should have had saner defaults. For instance, the default typing of the catch-block variable changed from `any` to `unknown`.

The bigger issue I've found is when you're using 3rd party type definitions for some library you can't always trust them to be correct or updated. It's a major risk for the soundness of your build.


I personally haven’t seen this. What kind of issues have you seen?


The GP could be referring to situations where libraries have strict bounds on which versions of TypeScript they work with.

It's most visible in the Angular ecosystem, where both the core library and additional stuff have such bounds.

For this reason Angular updates are often a terrible experience for everyone involved.


The only time I used Angular was when I had to because I was working at Google. I'd never use it outside of Google after that experience.


I've had trouble where the build fails with various TS#### error messages; most of the time there was some change in TypeScript syntax. I guess if you use those bounds you have fewer of those problems.


I’ve found Angular updates to be mostly painless since they’ve automated most of the migrations using `ng update`.

That said, I do wait 2-3 months to give time for dependencies to update.


I have not either, and I work with TypeScript codebases of over 5 million lines.


I agree with the main points in the post and certainly understand how they could push one in the direction htmx has taken. But for me (and I posted a long comment along these lines elsewhere in this thread) the benefit of TypeScript isn't just the type safety; it's also that it forces you to explicitly provide more strongly-structured, parseable context about your intent in the codebase, which can be used to further empower static analysis tools (whether the TypeScript compiler or various linters à la eslint and co.) to pick up on discrepancies between what the code is supposed to do and what it actually does.

I view JS as a particularly unreliable language to build solid (as in dependable, consistent, coherent, and sound) projects in, and the various build tools are just ways to at least partially overcome those inherent shortcomings of the language. Transpiling from ESNext to ES5, merging disparate source components, etc are all just icing on the cake and not the real reason to use TypeScript (though I am surprised that the article talked about its ECMAScript version restrictions caused by IE11 support but didn't talk about how w/ TypeScript you can use language-level (not library-level) features such as the spread operator, for ... of, async/await, etc. and have them automatically converted/transpiled to functional equivalents that work on ES5 browsers).

Just look at how the various @typescript-eslint/* packages can supplant basic eslint rules, replacing them with more precise variants¹ by using the extra information provided by the TypeScript AST instead of just the info surfaced by the JavaScript AST.
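
The canonical pattern being: switch the base rule off so the type-aware twin can do a better job, e.g. in an .eslintrc:

    {
      "rules": {
        "no-unused-vars": "off",
        "@typescript-eslint/no-unused-vars": "error"
      }
    }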

¹ Let alone the existence of the hundreds of additional lints that are only possible for TypeScript codebases.


If you are developing a large JS codebase then Typescript is probably a good idea. But the Hypermedia movement, including htmx, is saying don't build large JS codebases! If you send JSON down the wire then you need a large JS codebase to handle it. So don't send JSON down the wire! Send HTML instead, full pages or fragments as appropriate.


I think the goal is to never have to use or think about Javascript again.


It's not


I think I meant "the goal ought to be to never have to think about, let alone use, Javascript again."



