A few things to note:

- This isn't Chrome doing this unilaterally. https://github.com/whatwg/html/issues/11523 shows that representatives from every browser are supportive and there have been discussions about this in standards meetings: https://github.com/whatwg/html/issues/11146#issuecomment-275...

- You can see from the WHATNOT meeting agenda that it was a Mozilla engineer who brought it up last time.

- Opening a PR doesn't necessarily mean that it'll be merged. Notice the unchecked tasks - there's still a lot to do on this one. Even so, given the cross-vendor support for this, it seems likely to proceed at some point.



Also, https://github.com/whatwg/html/issues/11523 (Should we remove XSLT from the web platform?) is not a request for community feedback.

It's an issue open on the HTML spec for the HTML spec maintainers to consider. It was opened by a Chrome engineer after at least two meetings where a Mozilla engineer raised the topic, and where there was apparently vendor support for it.

This is happening after some serious exploits were found: https://www.offensivecon.org/speakers/2025/ivan-fratric.html

And the maintainer of libxslt has stepped down: https://gitlab.gnome.org/GNOME/libxml2/-/issues/913


There is a better alternative to libxslt - xee[1][2]. It was discussed[3] on HN before.

[1] https://blog.startifact.com/posts/xee/

[2] https://github.com/Paligo/xee

[3] https://news.ycombinator.com/item?id=43502291


Disclaimer: I work on Chrome/Blink and I've also contributed a (very small) number of patches to libxml/libxslt.

It's not just a matter of replacing libxslt; libxslt integrates quite closely with libxml2. There's a fair amount of glue to bolt libxml2/libxslt on to Blink (and WebKit); I can't speak for Gecko.

Even when there's no work on new XML/XSLT features, there's a passive cost to just having that glue code around since it adds quirks and special cases that otherwise wouldn't exist.


> Xee implements modern versions of these specifications, rather than the versions released in 1999.

My understanding is that browsers specifically use the 1999 version and changing this would break compat


As if removing XSLT entirely won’t break back-compat?


XSLT versions are backwards compatible.


I think this discussion is quite reasonable, but it also highlights the power imbalance: If this stuff is decided in closed meetings and the bug trackers are not supposed to be places for community feedback, where can the community influence such decisions?


I think it depends on the spec. Some of the working groups still have mailing lists, some of them have GitHub issues.

To be completely honest, though, I'm not sure what people expect to get out of it. I dug into this a while ago for a rather silly reason and I found that it's very inside baseball, and unless you really wanted to get invested in it it seems like it'd be hard to meaningfully contribute.

To be honest, if people are very upset about a feature that might be added or a feature that might be removed, the right thing to do is probably to literally just raise it publicly, organize supporters, and generally act in protest.

Google may have a lot of control over the web, but note that WEI still didn't ship.


If people are upset about xslt being removed, step 1 would have been to actually use it in a significant way on the web. Step 2 would have been to volunteer to maintain libxslt.

Everyone likes to complain as a user of open source. Nobody likes to do the difficult work.


What use would count as significant? Only if a big corp like Google uses it?

XSLT is used on the web. That's why people are upset about Google & friends removing it while ignoring user feedback.


Yep, there's a massive bias in companies like Google, Amazon, Microsoft to only see companies their own size.

Outside of this is a whole universe.


Didn't someone step up to volunteer to maintain libxslt a few weeks ago? https://gitlab.gnome.org/GNOME/libxslt/-/issues/150


Knowing our luck it’s probably Jia Tan.


I'm not that familiar with XSLT but isn't it already quite hobbled? Can it be used in a significant way? Or is this a chicken-egg problem where proving it's useful requires the implementation to be filled out first.


On the link in the post you can scroll down to someone’s comment with a few links to XSLT in action.

It’s been years since I’ve touched it, but clicking the congressional bill XML link and seeing a perfectly formatted and readable page reminded me of exactly why XSLT has a place. To do the same thing without it, you’d need some other engine to parse the XML, convert it to HTML, and then ensure the proper styles get applied - this could of course be backend or frontend, either way it’s a lot of engineering overhead for a task that, with XSLT, requires just a stylesheet.


> To do the same thing without it, you’d need some other engine to parse the XML, convert it to HTML, and then ensure the proper styles get applied

No, you can use <?xml-stylesheet ?> directives with CSS to attach a CSS stylesheet directly to an XML file.

CSS is not as flexible as XSLT, but this seems to be very simple formatting which is well within what CSS is capable of.
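For illustration, it looks like this (file names made up):

    <?xml version="1.0" encoding="UTF-8"?>
    <?xml-stylesheet type="text/css" href="bill.css"?>
    <bill>
      <title>An Example Act</title>
      <section>Be it enacted ...</section>
    </bill>

bill.css then styles the XML elements directly with ordinary rules. The main catch is that XML elements default to display: inline, so you mostly end up writing display: block plus spacing and font rules.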



Do Library of Congress and Congress count as significant usage?

https://news.ycombinator.com/item?id=44958929


Not to WHATWG, apparently.


WHATWG has a fairly well documented process for feature requests. Issues are not usually decided in closed meetings. But there's a difference between constructive discussion and the stubborn, shameless entitlement that some members of the community are displaying in their comments.

https://blog.whatwg.org/staged-proposals-at-the-whatwg


No. WHATWG only has a process for adding and approving features.

It has no process for discussing removal of features or for speaking out against a feature.


Fwiw the meetings aren't closed; unlike the W3C, the WHATWG doesn't require paid membership to attend.

The bug trackers are also a fine place to provide community feedback. For example, there are plenty of comments providing use cases that weren't hidden. But if you read the hidden ones (especially on the issue rather than the PR), there's some truly unhinged commentary that rightly resulted in its being hidden and, unfortunately, in the locking of the thread.

Ultimately the way the community can influence decisions is to not be completely unhinged.

Like someone else said the other way would be to just use XSLT in the first place.


Honestly, your chance to impact this decision was when you decided what technologies to use on your website, and then statistically speaking [1], chose not to use XSLT in the browser. If the web used it like crazy we would not be having this conversation.

Your other opportunity is to put together a credible plan to resource the XSLT implementations in the various browsers. I underline, highlight, bold, and italicize the word "credible" here. You are facing an extremely uphill battle from the visible lack of support for the development; any truly credible offer should have come many years ago. Big projects are well aware of the utility of last-minute, emotionally-driven offers of support in the midst of a burst of publicity, viz, effectively zero.

I don't know that the power is as imbalanced as people think here, so much as that a very long and drawn out conversation has been had by the web as a whole; the web has agreed, by the vast bulk of its implementation work, that this is not a terribly useful technology, and this is the final closing chapter where the browsers are basically implementing the will of the web. The standard for removal isn't "literally 0 usage in the entire world", and whatever the standard is, if XSLT isn't on the "remove" side of it, that would just be a sign the standard needs to be tuned up, because XSLT is a complete non-entity on the web. If you are not feeling like your voice is being respected, it's because it's one of literally millions upon millions; what do you expect?

[1]: I know exceptions are reading this post, but you are exceptions. And not terribly common ones.


Statistically, how many websites are using webusb? I'm guessing fewer than xslt, which is used by e.g. the US Congress website.

I have a hard time buying the idea that document templating is some niche use-case compared to pretty much every modern javascript api. More realistically, lots of younger people don't know it's there. People constantly bemoan html's "lack" of client side includes or extensible component systems.


You seem to be assuming that I would argue against removing webusb. If it went through the same process and the system as a whole reached the same conclusion, I wouldn't fight it too hard personally.

There's probably half-a-dozen other things that could stand serious thought about removal.

There is one major difference though, which is that if you remove webusb, the functionality is just gone, whereas XSLT can be done through Javascript/WebASM just fine.

Document templating is obviously not a niche case. That's why we've got so many hundreds of them. We're not lacking in solutions for document templating, we're drowning in them. If XSLT stands out in its niche, it is as being a particularly bad choice, which is why nobody (to that first approximation we've all heard so much about) uses it.


Where is the US Congress's website identified as a potentially impacted site? https://chromestatus.com/metrics/feature/timeline/popularity...

edit: I see Simon mentioned it - https://simonwillison.net/2025/Aug/19/xslt/ - e.g., https://www.congress.gov/119/bills/hr3617/BILLS-119hr3617ih.... - the site seems to be even less popular than Longhorn Steakhouse in Germany.

My guess is that they'll shuffle people to PDF or move rendering to the server side, which is a common (and, with today's computing power, extremely cheap) way to generate HTML from XML.


Is it cheaper than sending XML and a stylesheet though?

Further, PDF and server-side are fine for achieving the same display, but it removes the XML of it all - that is to say, someone might be using the raw XML to power tools, feeds, etc. If XSLT goes away and Congress drops the XML links in favor of PDFs etc., that breaks more than just the pretty formatting.


1. No, not cheaper, but the incremental cost of server-side rendering is minimal (especially at the low request rates these pages receive)

2. One should still be able to retrieve the raw XML document. It's just that it won't be automatically transformed client-side.


i just built a website in XSLT and implementing some form of client side include in XSLT is not easier than doing the same in javascript. while i agree with you that client side include is sorely missing in HTML, XSLT is not the answer to that problem. anyone who doesn't want to use javascript to implement client-side include, won't want to use XSLT either.


> If the web used it like crazy we would not be having this conversation.

It's been a standard part of the Web platform for years. The only question should be, "Is _anyone_ using it?", not whether it's being "used like crazy" or not.

Don't break the Web.


Counterpoint: most websites are not useful. If we only count useful websites, a much higher percentage of them are using XSLT.

But useful websites are much less likely to be infested by the all-consuming Goo admalware.


[Citation needed]

Seriously, i doubt this.


A lot of very old SPA-like heavy applications use XSLT. Basically, enterprise web applications (not websites) that predate fetch, REST, and targeted or still target Internet Explorer 5/6.

There was a time when the standard way to build a highly interactive SPA was using SOAP services on the backend combined with iframes on the front end that executed XSLT in the background to update the DOM.

Obviously such an approach is extremely out of date and you won't find it on any websites you use. But, a lot of critical enterprise software was built this way and is kind of stuck like this.


> Internet Explorer 5/6

Afaik IE 5 did not support XSLT. It supported a similar but incompatible proprietary language. I think IE6 was the first version to support XSLT.

I feel like when I see enterprise xslt, a lot of it is server-side.


I ran xslt in the foreground; it was fast enough for that even on a Celeron with 128MB of RAM. Imagine running the modern web 2.0 on 128MB of RAM.


I also doubt this. Would love a succinct list of "important" websites.


Do Library of Congress and Congress count? https://news.ycombinator.com/item?id=44958929

It's not for the public to identify these sites. It's for the arrogant Googlers to do a modicum of research.


At first glance the library of congress link appears to be using server side XSLT, which would not be affected by this proposal.

The congress one appears to be the first legit example i have seen.

At first glance the congress use case does seem like it would be fully covered by CSS [you can attach CSS stylesheets to generic xml documents in a similar fashion to xslt]. Of course someone would have to make that change.


> Of course someone would have to make that change.

Of course. And yet none of the people from Google even seem to be aware of

> The congress one appears to be the first legit example i have seen.

There are more. E.g. podcast RSS feeds are often presented on the web with XSLT: https://feeds.buzzsprout.com/231452.rss

Again, none of the people from Google even seem to be aware of these use cases, and just power through regardless of any concerns.


> Of course. And yet none of the people from Google even seem to be aware of

I don't see any reason to assume that. I don't think anyone from google is claiming the literal number of sites is 0, just that it is insignificant.

I am very sure the people at google are aware of the rss feed usage.

Don't confuse people disagreeing with you with people not understanding you.


> I am very sure the people at google are aware of the rss feed usage.

No. No they aren't. As you can see in the discussion: https://github.com/whatwg/html/issues/11523 where the engineer who proposed this literally updates his "analysis" as people point out use cases he missed.

Quote:

--- start quote ---

albertobeta: there is a real-world and modern use case from the podcasting industry, where I work. Collectively, we host over 4.5 million RSS feeds. Like many other podcast hosting companies, we use XSLT to beautify our raw feeds and make them easier to understand when viewed in a browser.

mfreed7, the Googler https://github.com/whatwg/html/issues/11523#issuecomment-315... : Thanks for the additional context on this use case! I'm trying to learn more about it.

--- end quote ---

And then just last week: https://github.com/whatwg/html/issues/11523#issuecomment-318...

--- start quote ---

Thanks for all of the comments, details, and information on this issue. It's clear that XSLT (and talk of removing it) strikes a nerve with some folks. I've learned a lot from the posts here.

--- end quote ---

> Don't confuse people disagreeing with you with people not understanding you.

Oh, they don't even attempt to understand people.

Here's him last week adding a PR to remove XSLT from the spec: https://github.com/whatwg/html/pull/11563

Did he address any of the issues? Does he link to any actual research pointing out how much will be broken, where it's used etc.?

Nope.

But then another Googler pulls up, says "good work, don't forget to remove it everywhere else". End of discussion.


I stand by my previous comment.

You're angry you didn't get your way, but the googler's decision seems logical; I think most software developers maintaining a large software platform would have made a similar decision given the evidence presented (as evidenced by other web browsers making the same one).

The only difference here between most software is that google operates somewhat in the open. In the corporate world there would be some customer service rep to shield devs from the special interest group's tantrum.


It's worse than that, of course. XSLT removal breaks quite a few government and regulatory sites: https://github.com/whatwg/html/issues/11582


They are easy to understand :) Modern browsers have become such bloatware, beyond salvation, that they're starting to feel all the tech debt.


You're naming Google specifically, when it's not just Google. This seems like a you thing, separate to the actual issue at hand.


Well, it's Google who jumped at the opportunity citing their own counters and stats.

Just like the last time, when they tried to remove confirm/prompt[1] and were surprised to find that their numbers don't paint the full picture, as explicitly explained in their own docs: https://docs.google.com/document/d/1RC-pBBvsazYfCNNUSkPqAVpS...

You'd think that the devs of the world's most popular browser would have a little more care than just citing some numbers, ignoring all feedback, and moving forward with whatever they want to do?

Oh. Speaking, of "not just Google".

The question was raised in this meeting: https://github.com/whatwg/html/issues/11146#issuecomment-275... Guess what.

--- start quote ---

dan: even if the data were accurate, not enough zeros for the usage to be low enough.

brian: I'm guessing people will have objections... people do use it and some like it

--- end quote ---

[1] See, e.g. https://gomakethings.com/google-vs.-the-web/


That's not completely wrong, but also misses some nuance. E.g. the thread mentions the fact that web support is still stuck at XSLT 1.0 as a reason for removal.

But as far as I know, there were absolutely zero prior efforts by browser vendors to support newer versions of the language, while enormous energy went into improving JavaScript.

I don't want to imply that if they had just added support for XSLT 3.0 then everyone would be using XSLT instead of JavaScript today and the latest SIMD optimizations of Chrome's XPath pipeline would make the HN front-page. The language is just too bad for that.

But I think it's true that there exists a feedback loop: Browsers can and do influence how much a technology is adopted, by making the tech less or more painful to use. Then turning around and saying no one is using the tech, so we'll remove it, is a bit dishonest.


Javascript was instantly a hit from the day it was released, and it grew from there.

XSLT never took off. Ever. It has never been a major force on the web, not even for five minutes. Even during the "XML all the things!" phase of the software engineering world, with every tailwind it could ever have had, it was never a serious player.

There was, at no point, any reason to invest in it any farther.

Moreover, even if you push a button and rewrite history so that it was heavily invested in anyhow, I see no reason to believe it would have ever been a major force in that alternate history either. I would personally contend that it has always been a bad idea, and if anything, it has been unduly propped up by the browsers and overinvested in as it is. But perhaps less inflammatorily and more objectively, it has always been a foreign paradigm that most programmers have no experience in; this was even more true in the "XML all the things!" era, which predates the initial Haskell burst that pushed FP forward by a good solid decade, and the prospects of it ever being popular were never all that great.


i also don't see XSLT solving any problem that javascript could not solve. heck, if you really need XSLT in the browser, using javascript you could even call some library like saxonjs, or you could run it in webassembly.


How do you format a raw XML file in the browser without XSLT?


instead of including a reference to the XSLT stylesheet apparently you can also include javascript: https://stackoverflow.com/a/16426395
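the usual trick (an untested sketch, file name made up) is to embed an html-namespaced script element in the xml, which the browser will execute:

    <?xml version="1.0"?>
    <data>
      <!-- ... the actual xml content ... -->
      <script xmlns="http://www.w3.org/1999/xhtml" src="render.js"/>
    </data>

render.js can then read the xml out of the DOM and rewrite the document into html.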


That's only if the original document is an XHTML document that will have scripts loaded. Other XML documents, such as RSS feeds, will not have any support for JS, short of something silly like putting it in an iframe.


i didn't test it, but the stackoverflow answers suggested otherwise. are they wrong?


Perhaps you should have tried testing it before commenting?


if you know that the solution does not work, then just say so and maybe explain why, instead of being snarky.

all i did was to share a link to a resource. if you don't trust that resource you need to do your own testing. what ever i say, whether i tested it or not, doesn't add much more value. you can't trust my words any more than the resource i linked.

you asked half a dozen times in the last few days how a plain xml file can be transformed without xslt. and you claimed that xslt can be used to transform an rss feed.

well, guess what, i just tested this: an rss feed with the standard mimetype application/rss+xml doesn't load either an xsl stylesheet or javascript. to make that work you have to change the mimetype, and if you do that, both the xsl stylesheet or the javascript load. (just not both at the same time)


At least one of the suggested answers in SO doesn’t work and the other is somewhat painful

Why answer if you don’t know the answer

Here’s one that uses application/xml, and it works: https://www.ellyloel.com/feed.rss

People are using xslt in the wild today and JS isn’t really a replacement


the specific answer that i linked to does work. i have verified that too.

application/xml is not the same as application/rss+xml. application/xml also loads javascript just fine. again, i tested that. so far i have not found a single mimetype that can load xslt, but could not load javascript. i am coming to believe that there isn't one. if xslt works, then javascript works too.

whether javascript itself is a suitable replacement for xslt is not the question. your argument was that it is not possible to replace the builtin xslt support with anything written in javascript, because xml files can't load javascript.

since i have now verified that an xml file that can load xslt in the browser can also load javascript, this is proven wrong. all we need now is a good xslt implementation written in javascript or maybe a good binding to a wasm one and then we are ready to remove the builtin xslt support in the browser.


I too spent a chunk of time seeing what worked and what it looks like…

JS referenced by the XML can manipulate the XML but it frequently executes before the XML DOM is ready (even when waiting for onload) and so misses elements

So while possible it’s a pretty horrible experience to translate XML to HTML using JS - the declarative approach is more reliable and easier IMV

The XSLT polyfill doesn’t seem to work when loaded as a script in an XML doc but not quite sure why ATM

application/xml is commonly used for RSS feeds on static hosts because it’s the correct mimetype for say a feeds.xml response


https://github.com/mfreed7/xslt_polyfill/pull/5 - it will be able to do this soon.


nice. thanks for the link.

someone else mentioned xjslt here: https://news.ycombinator.com/item?id=44994310 which is an xslt 2.0 implementation. i have been trying to get that to work by loading the script directly into the xml data but so far could not figure out how to do it.


But can it transform / format the XML?


why should it not? once loaded it should find the XML in the DOM and transform that any way you like.


for the record, i just tested that the loaded javascript can access the DOM.


True, but that raises the question, why don't the browsers do that? I think no one would object if they removed XSLT from the browser's core and instead loaded up some WASM/JavaScript implementation when some XSLT is actually encountered. Sort of like a "built-in extension".

Then browser devs could treat it like an extension (plus some small shims in the core) while the public API wouldn't have to change.


because there is no demand for it.


You can have template includes that are auto-interpreted by the browser - no need to write code, AFAIK, when using XSLT.


XSLT is code. code written with XML syntax. let me give you an example:

in order to create a menu where the current active page is highlighted and not a link, i need to do this:

    <a>
      <xsl:choose>
        <xsl:when test="@name='home'">
          <xsl:attribute name="class">selected</xsl:attribute>
        </xsl:when>
        <xsl:otherwise>
          <xsl:attribute name="href">/</xsl:attribute>
        </xsl:otherwise>
      </xsl:choose>
      home
    </a> |
    <a>
      <xsl:choose>
        <xsl:when test="@name='about'">
          <xsl:attribute name="class">selected</xsl:attribute>
        </xsl:when>
        <xsl:otherwise>
          <xsl:attribute name="href">/about.xhtml</xsl:attribute>
        </xsl:otherwise>
      </xsl:choose>
      about
    </a> |
XSLT is interesting because it has a very different approach to processing XML, and for some transformations the resulting code can be quite compact. in particular, you don't have an issue with quoting/escaping special characters most of the time while still being able to write XML/HTML syntax. but then JSX from react solves that too. so the longer you look at it, the less the advantages of XSLT stand out.


You're sort of exaggerating the boilerplate there; a more idiomatic, complete template might be:

  <xsl:variable name="nav-menu-items">
    <item href="foo.xhtml"><strong>Foo</strong> Page</item>
    <item href="bar.xhtml"><em>Bar</em> Page</item>
    <item href="baz.xhtml">Baz <span>Page</span></item>
  </xsl:variable>

  <xsl:template match="nav-menu">
    <nav>
      <ul>
        <xsl:apply-templates select="$nav-menu-items/item">
          <xsl:with-param name="current" select="@current-page"/>
        </xsl:apply-templates>
      </ul>
    </nav>
  </xsl:template>

  <xsl:template match="item">
    <xsl:param name="current"/>
    <li>
      <xsl:choose>
        <xsl:when test="@href=$current">
          <a class="selected"><xsl:apply-templates/></a>
        </xsl:when>
        <xsl:otherwise>
          <a href="{@href}"><xsl:apply-templates/></a>
        </xsl:otherwise>
      </xsl:choose>
    </li>
 </xsl:template>

One nice thing about XSLT is that if you start with a passthrough template:

  <xsl:template match="@*|node()">
    <xsl:copy>
      <xsl:apply-templates select="@*|node()"/>
    </xsl:copy>
  </xsl:template>
You have basically your entire "framework" with no need to figure out how to set up a build environment because there is no build environment; it's just baked into the browser. Apparently in XSLT 3.0, the passthrough template is shortened to just `<xsl:mode on-no-match="shallow-copy"/>`. In XSLT 2.0+ you could also check against `base-uri(/)` instead of needing to pass in the current page with `<nav-menu current-page="foo.xhtml"/>`, and there's no `param` and `with-param` stuff needed. In modern XSLT 3.0, it should be able to be something more straightforward like:

  <xsl:mode on-no-match="shallow-copy"/>

  <xsl:variable name="menu-items">
    <item href="foo.xhtml"><strong>Foo</strong> Page</item>
    <item href="bar.xhtml"><em>Bar</em> Page</item>
    <item href="baz.xhtml">Baz <span>Page</span></item>
  </xsl:variable>

  <xsl:template match="nav-menu">
    <nav>
      <ul>
        <xsl:apply-templates select="$menu-items/item"/>
      </ul>
    </nav>
  </xsl:template>

  <xsl:template match="item">
    <li>
      <xsl:variable name="current-page" select="tokenize(base-uri(/),'/')[last()]"/>
      <a href="{if (@href = $current-page) then '' else @href}"
         class="{if (@href = $current-page) then 'selected' else ''}">
        <xsl:apply-templates/>
      </a>
    </li>
  </xsl:template>

The other nice thing is that it's something that's easy to grow into. If you don't want to get fancy with your menu, you can just do:

  <xsl:template match="nav-menu">
    <nav>
      <ul>
        <li><a href="foo.xhtml">Foo</a></li>
        <li><a href="bar.xhtml">Bar</a></li>
        <li><a href="baz.xhtml">Baz</a></li>
      </ul>
    </nav>
   </xsl:template>
And now you have a `<nav-menu/>` component that you can add to any page. So to the extent that you're using it to create simple website templates but you're not a "web dev", it works really well for people that don't want to go through all of the hoops that professional programmers deal with. Asking people to figure out react to make a static website is absurd.


wow, thank you. your first example is actually what i have been trying to do but i could not get it to work. i did search for examples or explanations for hours (spread over a week or so). i found the documentation of each of the parts and directives used, but i just could not figure out how to pull it together.

your last example is what i started out with, including the pass through template. you may remember this message from almost two months ago: https://news.ycombinator.com/item?id=44398626

one comment for the xslt 3 example: href="" doesn't disable the link. it just turns into a link to self (which it would be anyway if the value was present). the href attribute needs to be gone completely to disable the link.


unfortunately i hit another snag: https://stackoverflow.com/questions/3884927/how-to-use-xsl-v...

> nodes you output don't have type "node-set" - instead, they're what is called a "result tree fragment". You can store that to a variable, and you can use that variable to insert the fragment into output (or another variable) later on, but you cannot use XPath to query over it.

the xsl documentation https://www.w3.org/TR/xslt-10/#variables says:

> Variables introduce an additional data-type into the expression language. This additional data type is called result tree fragment. A variable may be bound to a result tree fragment instead of one of the four basic XPath data-types (string, number, boolean, node-set). A result tree fragment represents a fragment of the result tree. A result tree fragment is treated equivalently to a node-set that contains just a single root node. However, the operations permitted on a result tree fragment are a subset of those permitted on a node-set. An operation is permitted on a result tree fragment only if that operation would be permitted on a string (the operation on the string may involve first converting the string to a number or boolean). In particular, it is not permitted to use the /, //, and [] operators on result tree fragments.

so using apply-templates on a variable doesn't work. this is actually where i got stuck before. i just was not sure because i could not verify that everything else was correct.

i wonder if it is possible to load the menu from a second document: https://www.w3.org/TR/xslt-10/#document

edit: it is!

    <xsl:apply-templates select="document('nav-menu.xml')/menu">
now i just need to finetune this because somehow the $current param fails now.


Ah, I could've sworn that it worked in some version of the page that I tried as I iterated on things, but it could be that the browser just froze on my previously working page and I fooled myself.

Adding xmlns:exsl="http://exslt.org/common" to your xsl:stylesheet and doing select="exsl:node-set($nav-menu-items)/item" seems to work on both Chrome and Librewolf.


tried that, getting an empty match.

here is the actual stylesheet i am using:

    <?xml version="1.0" encoding="UTF-8"?>
    <xsl:stylesheet version="1.0" xmlns:exsl="http://exslt.org/common" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns="http://www.w3.org/1999/xhtml">
      <xsl:output method="html"/>

      <xsl:variable name="nav-menu">
        <item href="/">Home</item>
        <item href="/about.xhtml">About</item>
      </xsl:variable>

      <xsl:template match="document">
        <html>
          <head>
            <meta charset="utf-8" />
            <title><xsl:value-of select="title" /></title>
            <link rel="stylesheet" type="text/css" href="site.css" />
          </head>

          <body>
            <!-- <xsl:apply-templates select="document('nav-menu.xml')/menu"> -->
            <xsl:apply-templates select="exsl:node-set($nav-menu)/item">
              <xsl:with-param name="current" select="@name"/>
            </xsl:apply-templates>
            <xsl:apply-templates select="content" />
          </body>
        </html>
      </xsl:template>

      <xsl:template match="item">
        <xsl:param name="current"/>
        <xsl:choose>
          <xsl:when test="@href=$current">
            <a class="selected"><xsl:apply-templates/></a>
          </xsl:when>
          <xsl:otherwise>
            <a href="{@href}"><xsl:apply-templates/></a>
          </xsl:otherwise>
        </xsl:choose>
      </xsl:template>

      <xsl:template match="content">
        <xsl:apply-templates select="@*|node()" />
      </xsl:template>

      <xsl:template match="@*|node()">
        <xsl:copy>
          <xsl:apply-templates select="@*|node()"/>
        </xsl:copy>
      </xsl:template>
    </xsl:stylesheet>

documents look like this:

    <?xml version="1.0" encoding="UTF-8" ?>
    <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
    <?xml-stylesheet type="text/xsl" href="site.xsl"?>
    <document name="about">
      <title>About Us</title>
      <content>
        html content here, to be inserted without change
      </content>
    </document>
if i use the document() function, with nav-menu.xml looking like this:

    <menu>
      <item href="/">Home</item>
      <item href="/about.xhtml">About</item>
    </menu>
then i get the menu items, but the test <xsl:when test="@href=$current"> fails


It looks like it's related to your setting the default namespace xmlns="http://www.w3.org/1999/xhtml". You could either add a xmlns:example="http://example.org/templates" and then replace `item` with `example:item` everywhere, or you can override the default namespace within your variable's scope:

    <xsl:variable name="nav-menu-items" xmlns="">
        <item href="/">Home</item>
        <item href="/about.xhtml">About</item>
    </xsl:variable>
I think you also don't really need to set the default namespace to xhtml, so I believe you could remove that and not worry about namespaces at all (except for xsl and exsl).

The test is failing because it's `/about.xhtml` in the template but `about` outside. You'd either need to add a name attribute to item to compare on or make it match the href.

That should make your thing work if I haven't fooled myself again. :)


> I think you also don't really need to set the default namespace to xhtml

you are right. i removed it, and it works. typical "copy from stackoverflow" error. these namespaces are a mystery and not intuitive at all. i suppose most people don't notice that because it only applies to xml data within the stylesheet. most people won't have that so they won't notice an issue. the less the better.

for the other error, my mistake, duh! in my original example in https://news.ycombinator.com/item?id=44961352 i am comparing $current/@name to a hardcoded value, so if i want to keep that comparison i have to add that value to the nav-menu data. or use a value that's already in there.

i went with adding a name="about" attribute to the nav-menu because it keeps the documents cleaner: <document name="about"> just looks better, and it also allows me to treat it like an ID that doesn't have to match the URL which allows renaming/moving documents around without having to change the content. (they might go from about.xhtml to about/index.xhtml for example)

i am also probably going to use the document() function instead of exsl:node-set() because having the menu data in a separate file in this case is also easier to manage. it's good to know about that option though. being able to iterate over some local data is a really useful feature. i'll keep that around as an example.

the final piece of the puzzle was:

    <xsl:if test="position() != last()"> | </xsl:if>
to put a separator between the items, but not after.

that sorted, now it all works. thank you again.

btw, it's funny that we are turning hackernews into an xsl support forum. i guess i should write all that up into a post some day.


Nice. Fwiw I believe you can also use css for the separators if you've put them in a list:

  li + li::before {
    content: " | ";
  }
If xslt survives maybe I should make a forum and/or wiki. Using xslt of course.


Yeah, unfortunately the one criticism of XSLT that you can't really deny is that there's no information out there about how to use it, so beyond the tiny amount of documentation on MDN, you kind of have to just figure out your own patterns. It feels a little unfair though that it basically comes down to "this doesn't have a mega-corporation marketing it". That and the devtools for it are utterly broken/left in the early 00s for similar reasons. You could imagine something could exist like the Godbolt compiler explorer for template expansion showing the input document on the left and output on the right with color highlighting for how things expanded, but instead we get devtools that barely work at all.

You're right on the href; maybe there's not a slick/more "HTML beginner friendly" way to get rid of the <xsl:choose> stuff even in 3.0. I have no experience with 3.0 though, since it doesn't work in browsers.

I get a little fired up about the XSLT stuff because I remember being introduced to HTML in an intersession school class when I was like... 6? XSLT wasn't around at that time, but I think I maybe learned about it when I was ~12-13, and it made sense to me then. The design of all of the old stuff was all very normal-human approachable and made it very easy to bite a little bit more off at a time to make your own personal web pages. "Use React and JSON APIs" or "use SSR" seems to just be giving up on the idea that non-programmers should be able to participate in the web too. Should we do away with top level HTML/CSS while we're at it and just use DOM APIs?

There were lots of things in the XML ecosystem I didn't understand at the time (what in the world was the point of XSDs and what was a schema and how do you use them to make web pages? I later came to appreciate those as well after having to work as a programmer with APIs that didn't have schema files), but the template expansion thing to make new tags was easy to latch onto.


> devtools for it are utterly broken

right, that's a big issue too. when the xsl breaks (in this case when i use <xsl:apply-templates select="$nav-menu-items/item">) i get an empty page and nothing telling me what could be wrong. if i remove the $ the page works, and the apply-templates directive is just left out.


It solves the problem without requiring a full Turing machine with a giant API that has a history of actual exploits (not just FUD) behind it.


i believe XSLT is turing complete, and regarding exploits, you really want to read this: https://news.ycombinator.com/item?id=44910050

it turns out that because XSLT was largely ignored, it is full of security issues, some of which have been in there for decades.

so the reason XSLT doesn't have a history of exploits is because nobody used it.


>while there was enormous energy to improve JavaScript

What was the point of it though? People transpile from other languages anyway and pull megabytes of npm dependencies.


This question is analogous to asking what the point of better CPUs is when people use compilers/assemblers instead of writing binaries in a hex editor.


Community feedback is usually very ad hoc. Platform PMs will work with major sites, framework maintainers, and sometimes do discussions and polls on social sites. IOW, they try to go where the community that uses the features is, rather than stay on GitHub in the spec issues.


Although in this case, it seems more like they are trying to go where the community that uses the feature isn't.


There isn't one. It's Google's web now. You should be thankful that you are still allowed to use it.


I think this post is useful; in it, the thread author proposed some solutions to the people affected: https://github.com/whatwg/html/issues/11523#issuecomment-318...

The main thing that seems unaddressed is the UX if a user opens a direct link to an XML file and now just sees tag soup instead of the intended rendering.

I think this could be addressed by introducing a <?human-readable ...some url...?> processing instruction that browsers would interpret like a meta tag redirect. Then sites that are interested could put that line at the top of their XML files and redirect to an alternative representation in HTML or even to a server-side or WASM-powered XSLT processor for the file.
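To make that concrete, it could look something like this (entirely hypothetical - no browser implements such a PI today):

    <?xml version="1.0"?>
    <?human-readable https://example.com/feed-viewer?src=/feed.xml?>
    <rss version="2.0">
      ...
    </rss>

A browser would treat it like a meta refresh and navigate to the viewer URL; every other XML consumer would ignore the unknown processing instruction.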

Sort of like an inverse of the <link rel="alternate" ...> solution that the post mentioned.

The only thing this doesn't fix is sites that are abandoned and won't update, or that are part of embedded devices and can't update.


> I think this could be addressed by introducing a <?human-readable ...some url...?> processing instruction that browsers would interpret like a meta tag redirect. Then sites that are interested could put that line at the top of their XML files and redirect to an alternative representation in HTML or even to a server-side or WASM-powered XSLT processor for the file.

HTTP has already had this since the 90s. Clients send the Accept HTTP header indicating which format they want and servers can respond with alternative representations. You can already respond with HTML for browsers and XML for other clients today. You don’t need the browser to know how to do the transformation.
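For illustration, with a hypothetical /bill.xml, a browser asks for HTML:

    GET /bill.xml HTTP/1.1
    Accept: text/html,application/xhtml+xml;q=0.9,*/*;q=0.8

    HTTP/1.1 200 OK
    Content-Type: text/html

while a feed reader or script that sends Accept: application/xml gets the raw XML back from the same URL.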


This is breaking the web though.

If they are so worried, then have the xslt support compiled to wasm and sandboxed.


This is not breaking the web, stop being so needlessly hyperbolic. XSLT use is absolutely tiny. If you removed it, >99.9% of the web wouldn’t even notice.


If we removed everyone named Jim Dabell from the world, the other 99% wouldn't even notice. They're absolutely tiny. Perhaps we should try doing that.


It certainly wouldn’t break the world. You are being needlessly hyperbolic.


Apart from that, it doesn’t really work for people who are statically hosting their RSS feeds etc.


You can use content negotiation with static websites too. Apache has mod_negotiation, for example.
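For example, a type map, sketched here with hypothetical file names and untested (it needs mod_negotiation enabled plus a handler such as AddHandler type-map .var):

    URI: bill.html
    Content-type: text/html

    URI: bill.xml
    Content-type: application/xml

Requests for the type-map resource are then answered with whichever variant best matches the Accept header.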


Assuming you have access to server configuration. XML/XSLT works anywhere you can host a static page.


it still depends on the mimetype those servers use to host the files.


Most people are hosting static sites on GH pages, Vercel, Netlify, Cloudflare pages etc


I actually found that particular response to be quite disappointing. It should give pause to those advocating removal of XSLT that these three totally disparate use cases could already be gracefully handled by a single technology which is:

* side effect free (a pure data to data transformation)

* stable, from a spec perspective, for decades

* completely client-side

Isn't this basically an A+ report card for any attempt at making a powerful general tool? The fact that the suggested solution in the absence of XSLT is to toil away at implementing application-specific solutions forever really feels like working toward the wrong direction.


Purely out of curiosity, what are some websites that actually make use of XSLT?


Skechers used to :)

https://thedailywtf.com/articles/Sketchy-Skecherscom

Also World of Warcraft used to.

Can’t think of recent examples though.


Many sitemaps and RSS feeds use XSL to seamlessly present human readable content.
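The mechanism is a single processing instruction at the top of the feed (path hypothetical):

    <?xml version="1.0" encoding="UTF-8"?>
    <?xml-stylesheet type="text/xsl" href="/feed.xsl"?>
    <rss version="2.0">
      ...
    </rss>

Feed readers ignore the PI; browsers apply the transform and render a human-readable page instead of raw tags.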


Isn't this theoretically already supported by the standards? The client supplies an Accept content type, and if that is html not xml the server should render it appropriately.


You can include a "link" HTTP header similar to a link tag. See https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/...

This would work without special syntax in the XML file.
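For illustration, the header would have this shape (path hypothetical):

    Link: </styles/feed.css>; rel="stylesheet"; type="text/css"

Browser support for applying stylesheets delivered via the Link header has historically been spotty, though, so this would need testing.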


Any solution that requires any change to the websites affected, no matter how small, is not a solution at all. DO. NOT. BREAK. THE. WEB.


Ah, how easy it is to bloviate when you're not actually the one having to maintain the web, huh?


Google doesn't have to maintain the web, they chose to. They also chose to make the web infinitely more complicated so that others are less likely to "compete" for that responsibility. You don't get to insert yourself into that position and then only reap the benefits without putting in the required effort.


> [T]he maintainer of libxslt has stepped down: https://gitlab.gnome.org/GNOME/libxml2/-/issues/913

... Largely because of lack of help from major users such as browsers.


Disclaimer: I work on Chrome and I have contributed a (very) small number of fixes to libxml2/libxslt for some of the recent security bugs.

Speaking from personal experience, working on libxslt is... not easy, for many reasons beyond the complexity of XSLT itself. For instance:

- libxslt is linked against by all sorts of random apps and changes to libxslt (and libxml2) must not break ABI compatibility. This often constrains the shape of possible patches, and makes it that much harder to write systemic fixes.

- libxslt reaches into libxml and reuses fields in creative ways, e.g. libxml2's `xmlDoc` has a `compression` field that is ostensibly for storing the zlib compression level [1], but libxslt has co-opted it for a completely different purpose [2].

- There's a lot of missing institutional knowledge and no clear place to go for answers, e.g. what does a compile-time flag that guards "refactored parts of libxslt" [3] do exactly?

[1] https://gitlab.gnome.org/GNOME/libxml2/-/blob/ca10c7d7b513f3...

[2] https://gitlab.gnome.org/GNOME/libxslt/-/blob/841a1805a9a9aa...

[3] https://gitlab.gnome.org/GNOME/libxslt/-/blob/841a1805a9a9aa...


Sounds like libxslt needs more than just a small number of fixes, and it sounds like Google could be paying someone, like you, to help provide the necessary guidance and feedback to increase the usability and capabilities of the library and evolve it for the better.

Instead Google and others just use it, and expect any issues that come up to be immediately fixed by the one or two open source maintainers who happen to work on it in their spare time. The power imbalance must not be lost on you here...

If you wanted to dive into what [3] does, you could do so, you could then document it, or refactor it so that it is more obvious, or remove the compile time flag entirely. There is institutional knowledge everywhere...


or, the downstream users who use it and benefit directly from it could step up, but websites and their users are extremely good at expecting things to just magically keep working especially if they don't pay for it. it was free, so it should be free forever, and someone set it up many moons ago, so it should keep working for many more magically!

// of course we know that, as end-users became the product, Big Tech [sic?] started making sure that users remain dumb.


Website operators are fine with how libxslt works now. It's browser vendors that want change.


You mean they are fine with expecting it to be maintained by browser vendors indefinitely for free.


Browser vendors aren't maintaining the web for free; they are for-profit corporations that have chosen to take on that role for the benefits it provides to them. It's only fair that we demand that they also respect the responsibilities that come with it. And we can also point out the hollowness of complaints about hardship due to having to maintain the web's legacy when they keep making it harder for independent browser developers by adding tons of new complexity.


Sure, of course, but unless funding is coming from users the economics won't change, because:

The vendors cite an aspect of said responsibility (security!) to get rid of an other aspect (costly maintenance of a low-revenue feature).

The web is evolving; there's a ton of things that developers (and website product people, and end-users) want. Of course it comes with a lot of "frivolous" innovation, but that's part of finding the right abstractions/APIs.

(And just to make it clear, I think it's terrible for the web and vendors that ~100% of the funding comes from a shady oligopoly that makes money by selling users - but IMHO this doesn't invalidate the aforementioned resource allocation trade off.)


> libxslt is linked against by all sorts of random apps and changes to libxslt (and libxml2) must not break ABI compatibility. This often constrains the shape of possible patches, and makes it that much harder to write systemic fixes.

I’m having trouble expressing this in a way that won’t likely sound harsher than I really want, but, uh, yes? That’s the fundamental difference between maintaining a part of the commons that anybody can benefit from and a subdirectory in a monorepo. The bazaar incurs coordination costs, and not being able to go and fix all the callers is one of them.

(As best as I can see, Chrome’s approach is largely to make everything a part of the monorepo, so maintaining a part of the commons may not be high on the list of priorities.)

This not to defend any particular ABI choice. Too often ABI is left to luck and essentially just happens instead of being deliberately designed, and too often in those cases we get unlucky. (I’m tempted to recite an old quote[1] about file formats, which are only a bit more sticky than public ABI, because of how well it communicates the amount of seriousness the subject ought to evoke: “Do you, Programmer, take this Object to be part of the persistent state of your application, to have and to hold, through maintenance and iterations, for past and future versions, as long as the application shall live?”)

I’m not even deliberately singling out what seems to me like the weakest of the examples in your list. It’s just that ABI, to me, is such a fundamental part of lib-anything that raising it as an objection against fixing libxslt or libxml2 specifically feels utterly bizarre.

[1] http://erights.org/data/serial/jhu-paper/upgrade.html


It's one thing if the library was proactively written with ABI compatibility in mind. It's another thing entirely if the library happens to expose all its implementation details in the headers, making it that much harder to change things.


When i first encountered the early GNOME 1 software back in the very late 1990s, and DV (the libxml author) was active, i was very surprised when i asked for the public API for a library and was told: look at the header files and the source.

They simply didn’t seem to have a concept of data hiding and encapsulation, or worse, felt it led to evil nasty proprietary hidden code and were better than that.

They were all really nice people, mind you—i met quite a few of them, still know some—and the GNOME project has grown up a lot, but i think that’s where libxml was coming from. Daniel didn’t really expect it to be quite so widely used, though, i’m sure.

I’ve actually considered stepping up to maintain libxslt, but i don’t know enough about building on Windows and don’t have access to non-Linux systems really. Remote access will only go so far on Windows i think, although it’d be OK on Mac.

It might be better to move to one of the Rust XML stacks that are under active development (one more active than the other).


No, it's the same in both cases. ABI stability is what every library should provide no matter how ugly the ABI is.


Former Mozilla and Google (Chrome team specifically) dev here. The way I see what you're saying is: Representatives from Chrome/Blink, Safari/Webkit, and Firefox/Gecko are all supportive of removing XSLT from the web platform, regardless of whether it's still being used. It's okay because someone from Mozilla brought it up.

Out of those three projects, two are notoriously under-resourced, and one is notorious for constantly ramming through new features at a pace the other two projects can't or won't keep up with.

Why wouldn't the overworked/underresourced Safari and Firefox people want an excuse to have less work to do?

This appeal to authority doesn't hold water for me because the important question is not 'do people with specific priorities think this is a good idea' but instead 'will this idea negatively impact the web platform and its billions of users'. Out of those billions of users it's quite possible a sizable number of them rely on XSLT, and in my reading around this issue I haven't seen concrete data supporting that nobody uses XSLT. If nobody really used it there wouldn't be a need for that polyfill.

Fundamentally the question that should be asked here is: Billions of people use the web every day, which means they're relying on technologies like HTML, CSS, XML, XSLT, etc. Are we okay with breaking something that 0.1% of users rely on? If we are, okay, but who's going to tell that 0.1% of a billion people that they don't matter?

The argument I've seen made is that Google doesn't have the resources (somehow) to maintain XSLT support. One of the googlers argued that new emerging web APIs are more popular, and thus more deserving of resources. So what we've created is a zero-sum game where any new feature added to the platform requires the removal of an existing feature. Where does that game end? Will we eventually remove ARIA and/or screen reader support because it's not used by enough people?

I think all three browser vendors have a duty to their users to support them to the best of their ability, and Google has the financial and human resources to support users of XSLT and is choosing not to.


Another way to look at this is:

Billions of people use the web every day. Should the 99.99% of them be vulnerable to XSLT security bugs for the other 0.01%?


That same argument applies to numerous web technologies, though.

Applied to each individually it seems to make sense. However, the aggregate effect is to kill off a substantial portion of the web.

In fact, it's an argument to never add a new web technology: Should 100% of web users be made vulnerable to bugs in a new technology that 0% of the people are currently using?

Plus it's a false dichotomy. They could instead address XSLT security... e.g., as various people have suggested, by building in the XSLT polyfill they are suggesting all the XSLT pages start using as an alternative.


depends entirely on which technologies are actively addressing current and future vulnerabilities.


The vulnerabilities associated with native client-side XSLT are not in the language itself (XSLT 1.0) but instead are caused by bugs in the browser implementations.

P.S. The XSLT language is actively maintained and is used in many applications and contexts outside of the browser.


If this is the reason to remove and/or not add something to the web, then we should take a good hard look at things like WebSerial/WebBluetooth/WebGPU/Canvas/WebMIDI and other stuff that has been added that is used by a very small percentage of people yet could all contain various security bugs...

If the goal is to reduce security bugs, then we should stop introducing niche features that only make sense when you are trying to have the browser replace the whole OS.


whatever you do with xslt you can do in a saner way, but for whatever we need serial/bluetooth/webgpu/midi for, there is no other way, and canvas is massively used.


I'd love to see more powerful HTML templating that'd be able to handle arbitrary XML or JSON inputs, but until we get that, we'll have to make do with XSLT.

For now, there's no alternative that allows serving an XML file with the raw data from e.g. an embedded microcontroller in a way that renders a full website in the browser if desired.

Even more so if you want to support people downloading the data and viewing it from a local file.


If you're OK with the startup cost of 2-3 more files for the viewer bootstrap, you could just fetch the XML data from the microcontroller using JS. I assume the xsl stylesheet is already a separate file.


I don't think anyone is attached to the technology of xslt itself, but to the UX it provides.

Your microcontroller only serves the actual xml data; the xslt is served from a different server somewhere else (e.g., the manufacturer's website). You can download the .xml, double-click it, and it'll get the xslt treatment just the same.

In your example, either the microcontroller would have to serve the entire UI to parse and present the data, or you'd have to navigate to the manufacturers website, input the URL of your microcontroller, and it'd have to do a cors fetch to process the data.

One option I'd suggest is instead of

    <?xml-stylesheet href="http://example.org/example2.xsl" type="text/xsl" ?>
we'd instead use a service worker script to process the data

    <?xml-stylesheet href="http://example.org/example2.js" type="application/javascript" ?>
Service workers are already well suited to this kind of resource processing and interception, and it'd provide the same UX.

The service worker would not be associated with any specific origin, but it would still receive the regular lifecycle of events, including a fetch event for every load of an XML document pointing at this specific service worker script.

Using https://developer.mozilla.org/en-US/docs/Web/API/FetchEvent/... it could respond to the XML being loaded with a transformed response, allowing it to process the XML much like an XSLT transform would.

You could even have a polyfill service worker that loads an XSLT and applies it to the XML.
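
To make the idea concrete, here's a minimal sketch of what such a polyfill worker might look like. This is speculative on two counts: the proposed xml-stylesheet-pointing-at-a-script dispatch doesn't exist, and DOMParser/XSLTProcessor/XMLSerializer aren't currently exposed in worker contexts (they're page-only APIs today). The /example2.xsl URL is a placeholder.

    // Hypothetical: a worker registered via the proposed xml-stylesheet PI.
    // Neither the dispatch mechanism nor DOM/XSLT APIs in workers exist today.
    self.addEventListener('fetch', (event) => {
      event.respondWith((async () => {
        // Load the raw XML the device served.
        const xmlText = await (await fetch(event.request)).text();
        const xmlDoc = new DOMParser().parseFromString(xmlText, 'application/xml');

        // Load and compile the stylesheet (placeholder URL).
        const xslText = await (await fetch('/example2.xsl')).text();
        const xslDoc = new DOMParser().parseFromString(xslText, 'application/xml');
        const proc = new XSLTProcessor();
        proc.importStylesheet(xslDoc);

        // Respond with transformed HTML instead of the raw XML.
        const out = proc.transformToDocument(xmlDoc);
        const html = new XMLSerializer().serializeToString(out);
        return new Response(html, { headers: { 'Content-Type': 'text/html' } });
      })());
    });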


Of course there is a better way than webserial/bluetooth/webgpu/webmidi: Write actual applications instead of eroding the meaning and user expectations of a web browser. The expectation should not be that the browser can access your hardware directly. That is a much more significant risk for browsers than XSLT could ever be.


Solutions have been proposed in that thread, including adding the XSLT polyfill to the browser (which would run it in the JavaScript VM/sandbox).


If the usage/risk of XSLT is enough to remove it, you'd have to remove webusb, webbluetooth, webmidi, webxr, and countless more


Yes, please.


Tbh, I'm still hoping we can get rid of these ridiculous webusb/bluetooth/etc specs and redirect the funding to libxslt instead.


Don't threaten me with a good time!


Isn't this something that could be implemented using javascript?

I don't think anyone is arguing that XSLT has to be fast.

You could probably compile libxslt to wasm, run it when loading xml with xslt, and be done.

Does XSLT affect the DOM after processing, or is it just a dumb preprocessing step, where the rendered XHTML is what becomes the DOM?
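
For what it's worth, the one-shot flavor is already scriptable today via the browser's JS-exposed engine; here's a minimal sketch (URLs and the function name are placeholders). It also suggests an answer to the DOM question: the transform runs up front, and its output is what you insert into the DOM.

    // Sketch only: drive the browser's existing XSLT engine from JS.
    // A wasm-compiled libxslt could slot in where XSLTProcessor is used.
    async function renderXmlWithXslt(xmlUrl, xslUrl) {
      const load = async (url) => {
        const text = await (await fetch(url)).text();
        return new DOMParser().parseFromString(text, 'application/xml');
      };
      const [xmlDoc, xslDoc] = await Promise.all([load(xmlUrl), load(xslUrl)]);

      const proc = new XSLTProcessor();
      proc.importStylesheet(xslDoc);

      // One-shot preprocessing: the transformed fragment becomes the DOM.
      const fragment = proc.transformToFragment(xmlDoc, document);
      document.body.replaceChildren(fragment);
    }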


It could be. The meaningful argument is over whether the javascript polyfill should be built into the browser (in which case, browser support remains the same as it ever was, they just swap out a fast but insecure implementation for a slow but secure one), or whether site operators, principally podcast hosts, should be required to integrate it into their sites and serve it.

The first strategy is obviously correct, but Google wants strategy 2.


As discussed in the GitHub thread, strategy two is fundamentally flawed because there's no other way to make an XML document human-readable in today's browsers. (CSS is close but lacks some capabilities.)

So site operators who rely on this feature today are not merely asked to load a polyfill but to fundamentally change the structure of their website - without necessarily getting to the same result in the end.


It sounds like Mozilla has problems despite the quite lucrative, "notoriously under-resourced" $400M-$500M Google spends on Firefox every year.

Is there an issue with spending on junk projects at Mozilla?

https://galaxy.ai/youtube-summarizer/is-mozilla-wasting-mone...


So the Safari developers are overworked/under-resourced, but Google somehow should have infinite resources to maintain things forever? Apple is a much bigger company than Google these days, so why shouldn't they also have these infinite resources? Oh, right, it's because fundamentally they don't value their web browser as much as they should. But you give them a pass.


One is a browser. The other is an ad delivery platform which requires a more strategic active development posture.


The funny thing is that Apple has a huge ad business too, so I don't know which browser you mean.


> but Google somehow should have infinite resources to maintain things forever?

Google adds 1000+ new APIs to the web platform a year. They are expected to be supported nearly forever. They have no qualms adding those.


And Google even has a doc literally saying that you shouldn't break the web even if a small number of sites use a feature: https://news.ycombinator.com/item?id=44956267


Bring back VRML!

Seriously though, if I were forced to maintain every tiny legacy feature in a 20-year-old app... I'd also become a "former" dev :)

Even in its heyday, XSLT seemed like an afterthought. Probably there are a handful of legacy corporate users hanging on to it for dear life. But if infinitely more popular techs (like Flash or FTP or non HTTPS sites) can be deprecated without much fuss... I don't think XSLT has much of a leg to stand on...


> But if infinitely more popular techs (like Flash or FTP or non HTTPS sites) can be deprecated without much fuss... I don't think XSLT has much of a leg to stand on...

Flash was not part of the web platform. It was a plugin, a plugin that was, over time, abandoned by its maker.

FTP was not part of the web platform. It was a separate protocol that some browsers just happened to include a handler for. If you have an FTP client, you can still open FTP links just fine.

Non-HTTPS sites are being discouraged, but still work fine, and can reasonably be expected to continue to work indefinitely, though they are likely to be discouraged a bit harder over time.

XSLT is part of the web platform. And removing it breaks various things.


I don't think that distinction makes much of a difference for the users and devs affected...


Flash was the best part of the web, though.


Not if you were on a non-mainstream platform. Like some Linux, or oh my gawd NetBSD!1!!

I couldn't be more happy about its demise.


XSLT was awesome back in the day. You could get a block of XML data from the server, and with a bit of very simple scripting, slice it, filter it, sort it, present summary or detail views, generate tables or forms, all without a server round trip. This was back in IE6 days, or even IE5 with an add-on.

We built stuff with it that amazed users, because they were so used to the "full page reload" for every change.


> Probably there are a handful of legacy corporate users hanging on to it for dear life.

Like more or less everyone that hosts podcasts. But the current trend is for podcast feeds to go away, and be subsumed into Spotify and YouTube.


Do people consume RSS feeds directly via XSLT? Not through apps and such that subscribe to the feed?


This came up in some of the comments: https://github.com/whatwg/html/issues/11523#issuecomment-315... if you click the links instead of copy/pasting into your reader you get a page full of raw XML. It's not harmful or anything but it's not a great look. You can't really expect your users to just never click on your links, that's usually what links are for.


> Seriously though, if I were forced to maintain every tiny legacy feature in a 20 year old app... I'd also become a "former" dev :)

And those that would replace you might care more for the web rather than the next performance review.


+1. I worked on an internal corporate eCommerce site in 2005 built entirely on DOM + XSLT to create the final HTML. It was an atrocious pain in the neck to maintain (despite being server-side, so the browser never had to deal with the XSLT). Unless you still manipulate XML and need to transform it into various other formats through XSLT/XSL-FO, I don't see why anyone would bother with it. It always cracks me up when people "demand" support for features hardly ever used, for which they won't spend a dime or a minute to help.


When I see "reps from every browser agree" my bullshit alarm immediately goes off. Does it include unanimous support from browser projects that are either:

1. not trillion dollar tech companies

or

2. not 99% funded by a trillion dollar tech company.

I have long suspected that Google gives so much money to Mozilla both for the default search option, but also for massive indirect control to deliberately cripple Mozilla in insidious ways to massively reduce Firefox's marketshare. And I have long predicted that Google is going to make the rate of change needed in web standards so high that orgs like Mozilla can't keep up and then implode/become unusable.


Well, every browser engine that is part of WHATWG. That's how working groups... work. The current crop of "not Chrome/Firefox/Webkit" aren't typically building their own browser engines though. They're re-skinning Chromium/Gecko/Webkit.


The reckless, infinite scope of web browsers https://drewdevault.com/2020/03/18/Reckless-limitless-scope....


It’s worth noting that since that article was written, the Ladybird browser has made a lot of progress with their new browser engine.

https://ladybird.org


> Does it include unanimous support from browser projects

They could continue supporting XSLT if they wanted.


This makes the job of smaller engines like Servo and Ladybird a lot easier.


It's not a huge conspiracy, but it is worthwhile to consider what the incentives are for people from each browser vendor. In practice all the vendors probably have big backlogs of work they are struggling to keep up with. The backlogs are accumulating in part because of the breakneck pace at which new APIs and features are added to the web platform, and in part because of the unending torrent of new security vulnerabilities being discovered in existing parts of the platform. Anything that reduces the backlog is thus really appealing, and money doesn't have to change hands.

Arguably, we could lighten the load on all three teams (especially the under-resourced Firefox and Safari teams) by slowing the pace of new APIs and platform features. This would also ease development of browsers by new teams, like Servo or Ladybird. But this seems to be an unpopular stance because people really (for good reason) want the web platform to have every pet feature they're an advocate for. Most people don't have the perspective necessary to see why a slower pace may be necessary.


>I have long suspected that Google gives so much money to Mozilla both for the default search option, but also for massive indirect control to deliberately cripple Mozilla in insidious ways to massively reduce Firefox's marketshare.

This has never ever made sense because Mozilla is not at all afraid to piss in Google's cheerios at the standards meetings. How many different variations of FLoC and similar adtech-oriented features did they shoot down? It's gotta be at least 3. Not to mention the anti-fingerprinting tech that's available in Firefox (not by default, because it breaks several websites) and opposition to several Google-proposed APIs on grounds of fingerprinting. And keeping Manifest V2 around indefinitely for the adblockers.

People just want a conspiracy, even when no observed evidence actually supports it.

>And I have long predicted that Google is going to make the rate of change needed in web standards so high that orgs like Mozilla can't keep up and then implode/become unusable.

That's basically true whether incidentally or on purpose.


Controlled opposition is absolutely a thing, and to think that people at trillion dollar companies wouldn't do this is naive. I'm not claiming for a fact that Mozilla is controlled opposition; I'm just saying it's very feasible that it could be, and I look for signs of it.

You give examples of things they disagree on, and I wouldn't refute that. However, I would say that Google is going to pick and choose their battles, because ultimately the things they appear to "lose on" sort of don't matter. Fingerprinting is a great example: yes, Firefox provides anti-fingerprinting, but it's still largely pretty useless, and its impact is even more meaningless because so few people use it. If you have JavaScript on and aren't using a VPN, chances are your anti-fingerprinting isn't actually doing much other than annoying you and breaking sites.

The only real tool for near-complete anonymity is Tor, but only when it's used in the right way, and when JavaScript is also turned off. And even then there are ways it could fail, and probably has.


Many such cases. Remember when the Chrome team seriously thought they could just disable JavaScript alert() overnight [1][2] without breaking decades of internet compatibility? It still makes me smile how quietly this was swept under the rug once it crashed and burned, just like the countless "off-topic" and "too emotional" comments on GitHub said it would.

Glad to see the disdain for the actual users of their software remains.

[1] https://github.com/whatwg/html/issues/2894 [2] https://www.theregister.com/2021/08/05/google_chrome_iframe/

(FWIW I agree alert and XSLT are terrible, but that ship sailed a long time ago.)


> Representatives from Chrome/Blink, Safari/Webkit, and Firefox/Gecko are all supportive of removing XSLT

Did anybody bother checking with Microsoft? XML/XSLT is very enterprisey and this will likely break a lot of intranet (or $$$ commercial) applications.

Secondly, why is Firefox/Gecko given full weight for their vote when their marketshare is dwindling into irrelevancy? It's the equivalent of the crazy cat hoarder who wormed her way onto the HOA board speaking for everyone else. No.


There are countries like Germany where Firefox still has around 10% market share [0], or closer to 20% on the desktop, second only to Chrome [1]. Not exactly irrelevant.

[0] https://gs.statcounter.com/browser-market-share/all/germany

[1] https://gs.statcounter.com/browser-market-share/desktop/germ...


It has long seemed like Firefox is likely doing Google's bidding? That could be a reason why they're given a full vote?

/abject-speculation


> Did anybody bother checking with Microsoft?

> Secondly, why is Firefox/Gecko given full weight for their vote when their marketshare is dwindling into irrelevancy?

The juxtaposition of these two statements is very funny.

Firefox actually develops a browser, Microsoft doesn't. That's why Firefox gets a say and Microsoft doesn't. Microsoft jumped off the browser game years ago.

No, changing the search engine from Google to Bing in chromium doesn't count.

Ultimately, Microsoft isn't implementing jack shit around XSLT because they aren't implementing ANY web standards.


You make it sound like those two thoughts are incompatible in juxtaposition, but they are in fact perfectly consistent, even if you were correct that Microsoft isn't building anything: the premise is that users matter more than elbow grease. The reason you'd want to ask Microsoft is the same reason you might not bother consulting Firefox: because Microsoft has actual users they represent, and Firefox does not.


Right, sure, but this is a matter of implementation and maintenance burden.

Obviously the people doing nothing aren't a reliable source. They probably want the browser to cook your food and walk your dog, too.

That's why we ask the people actually writing the code that is being used.


This is not true. Microsoft is participating in standards and implementing them in Blink.


I didn't know Microsoft contributed to chromium, although that makes some sense.

But my thoughts remain. Chromium IS NOT Microsoft's browser.

Chromium's opinion might matter, which might include contributors from the open source community, which might then include some Microsoft engineers.

But Microsoft, as a whole, does not develop a browser so they don't have a seat. The seats are Firefox, Safari, and Chromium/Chrome/Blink.


"Secondly, why is Firefox/Gecko given full weight for their vote when their marketshare is dwindling into irrelevancy?"

There was not really a vote in the first place, and FF is still dependent on Google. Otherwise, FF (users) represent a vocal and somewhat influential minority, capable of creating shitstorms if the pain level is high enough.

Personally, I always thought XSLT is somewhat weird, so I never used it. Good choice in hindsight.


Maybe because Edge is just a wrapper around Blink?


So Microsoft has been cucked by Google, and Mozilla is a puppet regime of Google at this point.

Seems like a rigged game to me.

Yes it's a wrapper but Microsoft represents a completely different market with individual needs/wants.

If it wasn't for Apple (who doesn't care about enterprise) butting in, the browser consortium would be reminiscent of the old Soviet Union in terms of voting.


> Secondly, why is Firefox/Gecko given full weight for their vote when their marketshare is dwindling into irrelevancy?

Ironic, considering the market share of XSLT.


>who's going to tell that 0.1% of a billion people that they don't matter?

This is also not a fair framing. There are lots of good reasons to deprecate a technology, and it doesn't mean the users don't matter. As always, technology requires tradeoffs (as does the "common good", usually.)


> Why wouldn't the overworked/underresourced Safari and Firefox people want an excuse to have less work to do?

Because otherwise everybody has to repeat the same work again and again, programming the how instead of focusing on the what, the declarative way.

Then data is not free, but caged by processing, so it can't exist without it.

I just want data or information: not processing, no strings attached.

I don't see any need to run extra code over information, except to keep control and to attach other code, trackers, etc. But I'm not Google; I have no need to push anything. Did a faster JS engine, instead of empowering users, somehow make a browser better? No matter how fast, it can't, at least not for what I needed. Or was it built instead of something they'd rather "forget" and wish they could erase?


> 0.1% of a billion people

Probably more like 0.0001% these days. I doubt 0.1% of websites ever used it.


0.02% of public Web pages, apparently, have the XSLT processing instruction in them, and a few more invoke XSLT through JavaScript (no one really knows how many right now).

It’s likely more heavily used inside corporate and governmental firewalls, but that’s much harder to measure.


By your argument, once anything makes it in, then it can't be removed. Billions of people are going to use the web every day and it won't stop. Even the most obscure feature will end up being used by 0.1% of users. Can you name a feature that's supported by all browsers that's not being used by anyone?


Yes. That is exactly how web standards have worked historically. If something will break 0.1% of the web, it isn't done unless there are really, really strong reasons to do it anyway. I personally watched lots of things get bounced due to their impact on a very small percentage of all websites.

This is part of why web standards processes need to be very conservative about what's added to the web, and part of why a small vocal contingent of web people are angry that Google keeps adding all sorts of weird stuff to the platform. Useful weird stuff, but regardless.


“That is exactly how web standards work…”

Says who? You keep mentioning this 0.1% threshold yet…

1. I can’t find any reference to that. Do you have examples/citations?

2. On the contrary here’s a paper that proposes a 3x higher heuristic: https://arianamirian.com/docs/icse2019_deprecation.pdf

3. It seems there are plenty of examples of features being removed above that threshold NPAPI/SPDY/WebSQL/etc.

4. Resources are finite. It’s not a simple matter of who would be impacted. It’s also opportunity cost and people who could be helped as resources are applied to other efforts.


E.g. Google said in their document https://docs.google.com/document/d/1RC-pBBvsazYfCNNUSkPqAVpS...

--- start quote ---

As a general rule of thumb, 0.1% of PageVisits (1 in 1000) is large, while 0.001% is considered small but non-trivial. Anything below about 0.00001% (1 in 10 million) is generally considered trivial. There are around 771 billion web pages viewed in Chrome every month (not counting other Chromium-based browsers). So seriously breaking even 0.0001% still results in someone being frustrated every 3 seconds, and so not to be taken lightly!

--- end quote ---

Read the full doc. They even give examples of when they couldn't remove a feature impacting just 0.0000008% of web views.
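
The quoted numbers also check out, as a quick back-of-the-envelope calculation shows (a sketch, assuming a ~30-day month):

    // Sanity-checking the "every 3 seconds" claim from the doc.
    const viewsPerMonth = 771e9;          // Chrome page views per month
    const brokenFraction = 0.0001 / 100;  // 0.0001% expressed as a fraction
    const brokenPerMonth = viewsPerMonth * brokenFraction;  // 771,000
    const secondsPerMonth = 30 * 24 * 60 * 60;              // 2,592,000
    console.log(secondsPerMonth / brokenPerMonth);          // ≈ 3.4 s between hits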


Thank you for the citation. Up voted.


> Even so, give the cross-vendor support for this is seems likely to proceed at some point.

Yup. Just like the removal of confirm/prompt that had vendor support and was rushed through immediately. Thankfully it was indefinitely postponed.

Here's Google's own doc on how a feature should be removed: https://docs.google.com/document/d/1RC-pBBvsazYfCNNUSkPqAVpS...

Notice how "unilateral support by browser vendors" didn't even involve looking at actual usage of XSLT, where it's used, and whether significant parts of the web would be affected.

Good times.


Also, according to Chrome's telemetry, very, very few websites are using it in practice. It's not like the proposal is threatening to make some significant portion of the web inaccessible. At least we can see the data underlying the proposal here.


Sadly, I just built a web site with HTMX and am using the client-side-templates extension for client-side XSLT.

>very, very few websites

Doesn't include all the corporate web sites that they are probably blocked from getting such telemetry for. These are the users that are pushing back.


Does that library use the browser's xslt?

I'm curious as to the scope of the problem if the HTML spec drops XSLT, and what the solutions would be; I've never really used XSLT (once, maybe, 20 years ago). In addition to just pre-rendering your webpage server-side, I assume another possible solution is some JavaScript library that does the transformations, if it needed to be client-side?

Found a js-only library, so someone has done this before: https://www.npmjs.com/package/xslt-processor
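
To your question: as far as I can tell, that package reimplements XSLT 1.0 in pure JS rather than calling the browser's native engine, which is exactly what would make it usable after a removal. A rough sketch against its older 1.x-style exports (an assumption on my part; newer releases reportedly use a class-based API, so check the README):

    // Hedged sketch of xslt-processor's 1.x-style exports; the current
    // API may differ. The XSLT is embedded here as a string for brevity.
    import { xmlParse, xsltProcess } from 'xslt-processor';

    const xmlDoc = xmlParse('<items><item>a</item><item>b</item></items>');
    const xslDoc = xmlParse(`
      <xsl:stylesheet version="1.0"
          xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
        <xsl:template match="/">
          <ul>
            <xsl:for-each select="items/item">
              <li><xsl:value-of select="."/></li>
            </xsl:for-each>
          </ul>
        </xsl:template>
      </xsl:stylesheet>`);

    const html = xsltProcess(xmlDoc, xslDoc);  // output markup as a string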


1. Chrome telemetry underreports a lot of use cases

2. They have a semi-internal document https://docs.google.com/document/d/1RC-pBBvsazYfCNNUSkPqAVpS... that explicitly states that a small usage percentage doesn't mean you can safely remove a feature:

--- start quote ---

As a general rule of thumb, 0.1% of PageVisits (1 in 1000) is large, while 0.001% is considered small but non-trivial. Anything below about 0.00001% (1 in 10 million) is generally considered trivial.

There are around 771 billion web pages viewed in Chrome every month (not counting other Chromium-based browsers). So seriously breaking even 0.0001% still results in someone being frustrated every 3 seconds, and so not to be taken lightly!

--- end quote ---

3. Any feature removal on the web has to be given thorough thought and investigation, which we haven't seen. The Library of Congress apparently uses XSLT, and Chrome devs couldn't care less.


Hmm, I don't see the LOC listed here among the top sites: https://chromestatus.com/metrics/feature/timeline/popularity... - where are you seeing the Library of Congress as impacted?


This was mentioned in the discussions and is an easy search away. Which means that the Googlers, in their arrogance, didn't do any research at all, and that their counter underrepresents data, as explicitly stated in their own document.

https://www.loc.gov/standards/mods/mods-conversions.html

https://www.loc.gov/preservation/digital/formats/fdd/fdd_xml...

And then there's Congress: https://simonwillison.net/2025/Aug/19/xslt/


The Library of Congress examples appear to be using server-side XSLT, not client-side. Thus they are not affected by this deprecation.

Before calling people arrogant you should read your own links.

[The congress example is legit]


Here is an example of a Library of Congress URI using client-side XSLT. They are definitely using this feature.

https://www.loc.gov/standards/mets/profiles/00000016.xml

    <?xml version="1.0" encoding="UTF-8"?>
    <?xml-stylesheet type="text/xsl" href="echoProfile2html.xsl" ?>

Before calling people arrogant you should validate your own arrogance.


> [The congress example is legit]

So let me get this straight. The Congress example is legit. Multiple other cases discussed here: https://github.com/whatwg/html/issues/11523 are legit

And yet it's not the Googlers and other browser implementers, who didn't do even a modicum of research, who are arrogant, but me, because I made a potential mistake quickly searching for something on my phone at night?


Do you honestly believe none of these will be addressed before the deadline passes?


> Chrome telemetry underreports a lot of use cases

Sure; in that case, I would suggest to the people with those use cases that they stop switching off telemetry. Everyone on HN seems to forget that telemetry isn't there for shits and giggles; it's there to help improve a product. If you refuse to help improve the product, don't expect a company to improve the product for you, for free.


Looking at the problem differently: say some change would make Hacker News unusable; the data would support it, showing that it practically affects no one.


First, we are an insignificant portion of the web, and it's okay to admit that.

Second, if HN were built upon outdated Web standards practically nobody else uses, I'm sure YCombinator could address the issue before the deadline (which would probably be at least a year or two out) to meet the needs of its community. Every plant needs nourishment to survive.


It's not OK for Google & co. to chip away at "insignificant" portions of the web until all that's left are big corporate-run platforms.


First, you're assuming that those portions of the Web won't evolve in order to survive. Second, you're ascribing a motive to Google that you assume (probably falsely) that they possess.


The people writing, and visiting websites that rely on XSLT are the same users that disable or patch out telemetry.


A LOT of internal corpo websites use XSLT.


Ok thanks, we've dechromed the title above. (Submitted title was "Chrome intends to remove XSLT from the HTML spec".)


The implementations are owned by the implementers. Who owns the actual standard, the implementers or the users?


I think trying to own a web standard is like trying to own a prayer. You can believe all you want, but it's up to the gods to listen or not...


As for any standard, the implementers ultimately own it. Users don't spend resources on implementing standards, so they only get a marginal say. Do you expect to contribute to the 6G standards, or USB-C, too?


Own is not really the right word for an open source project. In practice it is controlled by Apple, Google, Microsoft and Mozilla.


The responses of some folks on this thread reminds me of this:

https://xkcd.com/1172/


That's more a joke about people coming to rely on any observable behavior of something, no matter how buggy or unintentional.

Here we're talking about killing off XSLT used in the intended, documented, standard way.



