Hacker News | David-Guillot's comments

> These "we cut 70% of our codebase" claims always make me laugh.

There's also a slide in my talk that presents how many JS dependencies we dropped, while not adding any new Python. Retrospectively, that is a much more impressive achievement.


Thanks to Chris for continuing to challenge his comfort zone (and mine!) and for sharing his impressions and learnings with us!

I may be a little biased because I've been writing webapps with htmx for 4 years now, but here are my first thoughts:

- The examples given in this blog post show what seems to be the main architectural difference between htmx and Datastar: htmx is HTML-driven, Datastar is server-driven. So yes, the client-side API is simpler, but that's because the other side has to be more complex: in the first example, if the HTML element doesn't hold the information about where to inject the HTML fragment returned by the server, then the server has to know it, so you have to write it somewhere on that side (see the sketch after this list). I guess it's a matter of personal preference then, but from an architecture point of view both approaches hold up

- The argument of "fewer attributes" seems unfair when the htmx examples use optional attributes with their default values (yes, you can remove the hx-trigger="click" in the first example; that's 20% fewer attributes, and the argument is now 20% less strong)

- Minor, but still: the blog post would gain credibility and its arguments would be stronger if HTML were used more properly: who wants to click on <span> elements? <button> exists just for that; please use it, it's accessible ;-)

- In the end I feel that the main Datastar selling point is its integration of client-side features, as if Alpine or Stimulus features were natively included in htmx. And that's a great point!
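To illustrate the first two points, here's a minimal htmx sketch (endpoint and ids are made up, not taken from the blog post): the element itself carries the target information, and hx-trigger="click" can be dropped because it's the default for a button (as is hx-swap="innerHTML").

    <button hx-get="/contacts" hx-target="#contact-list">
      Load contacts
    </button>
    <div id="contact-list"></div>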


The article stated that he no longer needs eventing to update other parts of the page, he can send down everything at once. So, I guess that is much less complex. Granted, eventing and pulling something down later could be a better approach depending on the circumstance.


You can send everything down at once with htmx too, with OOB (out-of-band) swaps.
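For instance, a single response can carry the primary fragment plus extra elements marked hx-swap-oob, which htmx swaps into place by id (a sketch with made-up element names):

    <!-- primary content, swapped into the hx-target as usual -->
    <div id="results">...</div>

    <!-- also in the same response: swapped out-of-band into the existing #notification-count -->
    <span id="notification-count" hx-swap-oob="true">3</span>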


Yes you can, but the complexity is now moved to server-side template wrangling. With SSE, it's just separate events with targets. It feels much cleaner.


Server-side template wrangling is not really a big deal if you use an HTML generation library, something like Python's htpy/FastHTML or JavaScript's JSX. You can easily split the markup into 'components' and combine them together trivially with composition.
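A rough sketch of that kind of composition, here using plain Django template partials rather than an HTML generation library (file names are made up; the same idea applies to htpy/FastHTML or JSX components):

    {# search_results.html: the fragment htmx asked for, plus an OOB fragment, each kept in its own partial #}
    {% include "partials/result_list.html" %}

    <div id="cart-badge" hx-swap-oob="true">
      {% include "partials/cart_badge.html" %}
    </div>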


I mean, in practice you rarely target individual elements in Datastar. You can, sure. But targeting the main body with the entirety of the new content is way simpler. Morph sorts out the rest.


A good example is when a page has expensive metrics specific to say a filter on the page. Let's say an action on the page shows a notification count change in the top right corner.

While morph will figure it out, it's unnecessary work on the server to evaluate the entire body.


Expensive queries on the server should be shared where they can be (eg: global leaderboard) or cached on the server (in the game of life demo each frame is rendered/calculated once, regardless of the number of users). Rendering the whole view gives you batching for free and you don't have to have all that overhead tracking what should be updated or changed. Fine grained updates are often a trap when it comes to building systems that can handle a lot of concurrent users. It's way simpler to update all connected users every Xms whenever something changes.


I agree on caching. But in general my point stands. The updates in question may not even be shared across users, but specific to one user.

Philosophically, I agree with you though.


Yeah, that was how I used to think about these things. Now I'm less into the fine-grained user updates too.

Partly because the minute you have a widget shared across users, 50%+ of your connected users are going to get an update when anything changes. So the overhead of tracking who should update while you are under high load is just that: overhead.

Being able to make those updates coarse-grained and homogeneous makes them easy to throttle, so changes are effectively batched and you can easily set a max rate at which you push changes.

Same with diffing, the minute you need to update most of the page the work of diffing is pure overhead.

So in my mind, a simpler coarse-grained system will actually perform better under heavy load in that worst-case scenario, somewhat counter-intuitively. At least that's my current reasoning.


"Alpine or Stimulus features were natively included in htmx"

I'm contemplating using HTMX in a personal project - do you know if there are any resources out there explaining why you might also need other libraries like Alpine or Stimulus?


They're for client-side-only features. Think toggling CSS classes or updating the index on a slider; you ideally don't want to have to hit the server for that.
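With Alpine, for example, that kind of purely client-side toggle stays in the markup (a minimal sketch, not from the article):

    <div x-data="{ open: false }">
      <button @click="open = !open">Filters</button>
      <!-- shown/hidden entirely in the browser, no server round-trip -->
      <div x-show="open">...</div>
    </div>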


Thanks - I was having a quick read of the documentation for those projects and that makes perfect sense.


If you use Alpine, make sure to get the morph extensions for both htmx and Alpine.


Reminds me a bit of the Seaside framework in Pharo. A lot of what I programmed in Pharo at my previous employer involved constant back and forth between front-end and back-end, because the back-end was managing the front-end state. For B2B apps that don't have a lot of latency requirements, etc., I'd say it's better. For highly scalable B2C apps though? No.


Could you expand on why you think it (back-end managing the front-end's state) is better in the scenarios that you do?

Edit - rather than spam with multiple thank you comments, I'll say here to current and potential future repliers: thanks!


Not GP, but I would say it's the same reason someone would use React. If you keep your state in a single place, the rest of the app can become very functional and pure. You receive data and transform it (or render it). The actual business logic that manipulates the state can be contained in a single place.

This reduces a lot of accidental complexity. If done well, you only need to care about the programming language and some core libraries. Everything else becomes orthogonal to everything else, so the cost of changes is greatly reduced.


I would imagine the same arguments as for Smalltalk: live coding and an IDE within your production application. So you get some overlap with things like Phoenix LiveView, but more Smalltalk-y.

I assume it had back-end scaling issues, but back-end scaling is usually overstated and over-engineered; meanwhile, news sites load 10+ MB of JavaScript.


> htmx is HTML-driven, Datastar is server-driven

As far as I understand, the main difference between htmx and Datastar is that htmx uses the innerHTML swap by default and Datastar uses the morph swap by default, which is available as an extension for htmx [1].

Another difference is that Datastar comes with SSE, which indeed makes it server-driven, but you don't have to use SSE. Also, Datastar comes with client-side scripting by default. So you could say that Datastar = integrated htmx + idiomorph + SSE + Alpine.

[1] https://htmx.org/extensions/idiomorph/
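With that extension loaded, switching htmx to morphing looks roughly like this (a sketch; see the extension docs linked above for the exact setup):

    <!-- after loading htmx and the idiomorph extension scripts -->
    <body hx-ext="morph">
      <button hx-get="/page" hx-target="#content" hx-swap="morph:innerHTML">Refresh</button>
      <div id="content">...</div>
    </body>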


> if the HTML element doesn't hold the information about where to inject the HTML fragment returned by the server, the server has to know it, so you have to write it somewhere on that side

I'm not too strong in frontend, but wouldn't this make for a lighter, faster front end? Especially added up over very many elements?


100%. Datastar just makes the HTML spec support reactive expressions in data-* attributes, that's it. You will become stronger at web development because it just gets out of your way.


I don't think the difference would be significant. How many of your HTML elements would become interactive with htmx? There's a limit to how much interaction you can reasonably add on a page. This will also limit the number of new attributes you will introduce in the markup.

Also, by this argument should we leave out the 'href' attribute from the '<a>' tag and let the server decide what page to serve? Of course not, the 'href' attribute is a critical part of the functionality of HTML.

Htmx makes the same argument for the other attributes.


Of course, but when you get to build your web UI, server-side generated HTML will not be the right choice.


To be accurate, in the past year we went from 3 to 7 people on that team, and I think everyone has used htmx at least once, and 4 of us are using it on a regular basis.


Cost.

React will require you to hire a React dev to handle all the complexity.

htmx will be mastered by your "back-end" devs (who actually are web devs) in less than a week.


What kind of web dev can't handle React? Meanwhile, htmx uses clunky, non-standard attributes that rely on logic and templates that are split up in a million different places. Plus it requires a context switch to do anything client side.


I know so-called "senior frontend engineers" whose speciality is React, and who can't handle React. So all others...

This thing (and others like Vue, let me be clear) adds many layers of complexity over the web platform (client-side routing, data fetching, state management, rendering, etc.) and many third-party JS libs that you have to constantly update. You just can't say that it's standard/base web development.


Don't underestimate the learning commitment it takes to learn React. Learning the basics of React and its component architecture is one thing; learning all the tricks and gotchas related to hooks, accidental re-renders, accidental no-renders, etc. takes time.

Throw in the usual pile of libraries used with any larger React app and it can easily take months to really get moving.


Everything has tricks and gotchas, even htmx.


React is so simple to use that you definitely don't need to hire a "react dev" to "handle all the complexity". Like >95% of the effort is just understanding the basics of standard front end technologies (HTML, JS, and CSS), something that "back-end" devs (especially ones that label themselves as such) are by no means guaranteed to understand, which is an issue, since you'll have to understand these things even when working with htmx.


My feeling as an old-time "web developer" who has been bullied into becoming a "back-end dev" is: thank you, I think I know quite well the basics of standard front-end technologies, but suddenly some people started yelling at me "you're a grandpa, now you can't send HTML from the server anymore, it's lame, you have to send the browser a JS app that will manipulate the DOM live instead". This new (in 2015) approach has flooded our brains with a deluge of libs, frameworks, tools, concepts, problems, etc., which are a lot more than 5% of the effort of bootstrapping a React app, let alone optimizing and maintaining it. I know people whose main job is creating React apps and who are overwhelmed by the complexity of the stack. If you don't hire someone dedicated, the rest of your product (domain rules, database optimization, infrastructure, devops, etc.) will suffer.


Hi! Sorry for the delay. I'm the one who gave this talk.

Yes, rendering HTML on server-side increases server load, for the simple reason that the server-side templates contain some display logic. But:

- Most SPAs I've seen generate useless server load: either by fetching too much information, or by fetching information too often. And that's not because SPAs are a bad idea per se, it's because many small companies have very small teams, and very small teams just don't have time to build a custom-tailored API ("backend-for-frontend"). We chose JSON:API with Django REST Framework, which was crazy easy to implement from the back-end perspective (as we had many other challenges), but which made the front-end developer implement twisted stuff on the client side, like prefetch-on-hover, or plugging in react-query with crazy refresh settings generating hundreds of API calls when only 2 or 3 would have been enough. At the end of the day our server load is not higher now. Each request costs a little bit more, but there are a lot fewer of them.

- Another thing is: the idea of delegating template processing to the client may seem good from a wallet perspective. But if you also think of the environmental impact of what we do as developers, you might notice that many people get a new laptop on a regular basis just because some applications are more and more CPU- and memory-hungry. And when you consider that about 80% of the environmental impact of the digital industry comes from building and shipping new devices, you might realize that being part of the solution implies reducing the amount of client load you ask of your users. And yes, this implies that your company accepts reducing its gross margin to take a very small action in the battle for a cleaner industry.


As I said in my other answer: our facet filters are nothing more than hidden inputs in a form. So nobody consults "the state of multiple facet dropdowns", except htmx when it generates the URL of its XHR call. Everything else (filtering items according to querystring parameters, fetching user favorites, etc.) is done on server-side.


I'm the one who gave this talk, and I can assure you there is no such thing in our code. htmx just enables us to fire some JS events and react to them by triggering AJAX calls then replacing some <div> with some HTML fragment. No state management, just a hook system.
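Concretely, that hook system looks something like this (names are illustrative; the server can fire the event via an HX-Trigger response header):

    <!-- re-fetches and replaces its content whenever a "favorites-changed" event reaches <body> -->
    <div id="favorites-panel"
         hx-get="/favorites/panel"
         hx-trigger="favorites-changed from:body"
         hx-target="this"
         hx-swap="innerHTML">
      ...
    </div>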


OK, then I've explained myself poorly. I see that there are both facet filters and favorites on the page, both of which affect what the rest of the page shows. In my mind, that's client-side state. It doesn't have to mean that it's managed with JavaScript, but the state does exist; it's changed any time the user changes any inputs in the browser. Furthermore, those changes together seem to affect the rest of the page, if I'm not mistaken?

My question was where the favorites (and facet) state is stored. Is it in HTML inputs, in which case I suppose they are included in the requests somehow later (perhaps via `hx-include`)? The answer could also be that e.g. favorites are permanently stored on the back-end...

Additionally, I was wondering what htmx can do in more complex cases, like a "sort direction" button, where you need to set the sort column(s) and direction(s). It feels like it's really easy to exit the htmx comfort zone, after which you have to resort to things like jQuery (which is a nightmare). Or perhaps web components, which would actually be a nice combination...


I don't see facet filters and favorites as "client-side state": to me it's "application state", changed by a user interaction. And you're right, it's related to how the state is stored.

As you anticipated, favorites are stored in a database on server-side, so that makes "show me my favorite items" or "show me items related to my favorite articles" the exact same feature as selecting an option in a facet filter.

The state of "I have selected options 1 and 2 in this facet filter, and option B in that other filter" is simply stored in... the URL. This is why I think it's "application state" rather than "client-side state", and this is why hypermedia is great IMO: this whole search+facets+favorites+sorting feature becomes nothing more than a <form> with hidden inputs, generating GET requests whose URLs are put in the browser history (keyword search, selected facet options and sorting are put into querystring parameters). And that's great, because it happens that one of our features is to send our users custom e-mails with deep links to the UI, with facet filters pre-selected. All we have to do is generate links with querystring parameters pre-configured, and the user lands directly on a screen with pre-selected facet options, sorting, etc. To me, such behavior cannot be called "client-side state management".
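Schematically, the whole thing is just this kind of form (names are illustrative, not our actual markup):

    <form hx-get="/items" hx-target="#results" hx-push-url="true" hx-trigger="change, submit">
      <input type="search" name="q">
      <!-- facet selections, favorites toggle and sorting end up as plain querystring parameters -->
      <input type="hidden" name="facet_color" value="blue">
      <input type="hidden" name="only_favorites" value="1">
      <input type="hidden" name="sort" value="-created">
    </form>
    <div id="results">...</div>

Any link carrying those same querystring parameters drops the user into the same state, which is what the deep links in the e-mails do.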


Hi there ; I'm the author of the talk linked in the article.

Technically you're not missing anything: htmx is nothing more than another Turbolinks, maybe more flexible and easier to learn.

What you might be missing is the non-technical implications of these new tools. The idea behind the talk (and behind htmx, and behind turbolinks, or unpoly) is to prove that the usual arguments for Javascript application frameworks are just not valid for 90% of use cases. And *that's* a complete game-changer, even an industry-changer.

Because since 2016, every small-and-not-super-rich company that wants to create a rich UX on the web has been told to hire at least 2 developers: one "front-end" (i.e. "JS"), and one "back-end" (i.e. everything else, from API to hosting through domain stuff and user data). Or one superman with both back-end and React skills, which is, IMO, almost impossible.

From what I've seen, what businesses need is, indeed, 2 devs: one "back-end" (i.e. workers, databases, user data, domain stuff, and even hosting), and one "front-end" (i.e. "the website", from DB queries to CSS). One person should be enough to cover this second scope, even with complex UIs and rich UX. And while this is almost impossible with JavaScript application frameworks (because they require a lot of work), it becomes possible again, like in 2008, with htmx/Hotwire/Unpoly (and without the spaghetti code we had in 2008).

One more thing: of course the idea was *never* to do JS-bashing; only people who are too tied to JavaScript and don't care about tech cost-effectiveness would see htmx as a thing for JS-haters. In my talk I actually show some JavaScript code, because it's useful for handling client-side-only stuff like a custom dropdown, a modal, etc. The whole idea is to put JavaScript back in its place: pure client-side advanced interactions.
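For example, a custom dropdown stays a few lines of plain client-side code (a sketch, nothing more):

    <button id="menu-btn" aria-expanded="false">Menu</button>
    <ul id="menu" hidden>...</ul>
    <script>
      const btn = document.getElementById("menu-btn");
      const menu = document.getElementById("menu");
      btn.addEventListener("click", () => {
        // toggleAttribute returns true when "hidden" was just added
        const nowHidden = menu.toggleAttribute("hidden");
        btn.setAttribute("aria-expanded", String(!nowHidden));
      });
    </script>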

