
Also, https://github.com/whatwg/html/issues/11523 (Should we remove XSLT from the web platform?) is not a request for community feedback.

It's an issue open on the HTML spec for the HTML spec maintainers to consider. It was opened by a Chrome engineer after at least two meetings where a Mozilla engineer raised the topic, and where there was apparently vendor support for it.

This is happening after some serious exploits were found: https://www.offensivecon.org/speakers/2025/ivan-fratric.html

And the maintainer of libxslt has stepped down: https://gitlab.gnome.org/GNOME/libxml2/-/issues/913



There is a better alternative to libxslt - xee[1][2]. It was discussed[3] on HN before.

[1] https://blog.startifact.com/posts/xee/

[2] https://github.com/Paligo/xee

[3] https://news.ycombinator.com/item?id=43502291


Disclaimer: I work on Chrome/Blink and I've also contributed a (very small) number of patches to libxml/libxslt.

It's not just a matter of replacing libxslt; libxslt integrates quite closely with libxml2. There's a fair amount of glue to bolt libxml2/libxslt onto Blink (and WebKit); I can't speak for Gecko.

Even when there's no work on new XML/XSLT features, there's a passive cost to just having that glue code around since it adds quirks and special cases that otherwise wouldn't exist.


> Xee implements modern versions of these specifications, rather than the versions released in 1999.

My understanding is that browsers specifically use the 1999 version, and changing this would break compat.


As if removing XSLT entirely won’t break back-compat?


XSLT versions are backwards compatible.


I think this discussion is quite reasonable, but it also highlights the power imbalance: If this stuff is decided in closed meetings and the bug trackers are not supposed to be places for community feedback, where can the community influence such decisions?


I think it depends on the spec. Some of the working groups still have mailing lists; some of them have GitHub issues.

To be completely honest, though, I'm not sure what people expect to get out of it. I dug into this a while ago for a rather silly reason and I found that it's very inside baseball, and unless you really want to get invested in it, it seems like it'd be hard to meaningfully contribute.

To be honest, if people are very upset about a feature that might be added or a feature that might be removed, the right thing to do is probably to literally just raise it publicly, organize supporters, and generally act in protest.

Google may have a lot of control over the web, but note that WEI still didn't ship.


If people are upset about xslt being removed, step 1 would have been to actually use it in a significant way on the web. Step 2 would have been to volunteer to maintain libxslt.

Everyone likes to complain as a user of open source. Nobody likes to do the difficult work.


What use would count as significant? Only if a big corp like Google uses it?

XSLT is used on the web. That's why people are upset about Google & friends removing it while ignoring user feedback.


Yep, there's a massive bias in companies like Google, Amazon, Microsoft to only see companies their own size.

Outside of this is a whole universe.


Didn't someone step up to volunteer to maintain libxslt a few weeks ago? https://gitlab.gnome.org/GNOME/libxslt/-/issues/150


Knowing our luck it’s probably Jia Tan.


I'm not that familiar with XSLT but isn't it already quite hobbled? Can it be used in a significant way? Or is this a chicken-and-egg problem where proving it's useful requires the implementation to be filled out first?


On the link in the post you can scroll down to someone’s comment with a few links to XSLT in action.

It’s been years since I’ve touched it, but clicking the congressional bill XML link and seeing a perfectly formatted and readable page reminded me of exactly why XSLT has a place. To do the same thing without it, you’d need some other engine to parse the XML, convert it to HTML, and then ensure the proper styles get applied - this could of course be backend or frontend, either way it’s a lot of engineering overhead for a task that, with XSLT, requires just a stylesheet.
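For anyone who hasn't looked at it in years, the whole client-side mechanism is a single processing instruction at the top of the XML file. A minimal sketch with made-up filenames and element names (not the actual congress.gov markup):

    <?xml version="1.0" encoding="UTF-8"?>
    <?xml-stylesheet type="text/xsl" href="bill.xsl"?>
    <bill>
      <title>A Bill To Do Something</title>
      <section>Section text goes here.</section>
    </bill>

The browser fetches bill.xsl, runs the transform, and renders the resulting HTML in place of the raw XML.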


> To do the same thing without it, you’d need some other engine to parse the XML, convert it to HTML, and then ensure the proper styles get applied

No, you can use <?xml-stylesheet ?> directives with CSS to attach a css stylesheet directly to an xml file.

CSS is not as flexible as xslt, but this seems to be very simple formatting which is well within what css is capable of.
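A rough sketch of what that looks like, again with made-up filenames and element names (untested against the congress.gov documents):

    <?xml version="1.0" encoding="UTF-8"?>
    <?xml-stylesheet type="text/css" href="bill.css"?>
    <bill>
      <title>A Bill To Do Something</title>
      <section>Section text goes here.</section>
    </bill>

with bill.css styling the XML element names directly:

    bill    { display: block; max-width: 40em; margin: 2em auto; font-family: serif; }
    title   { display: block; font-size: 1.5em; font-weight: bold; }
    section { display: block; margin-top: 1em; }

The obvious limitation is that CSS can only restyle what's already in the document; it can't reorder content, generate links, or build navigation the way a transform can.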



Do Library of Congress and Congress count as significant usage?

https://news.ycombinator.com/item?id=44958929


Not to WhatWG apparently


WhatWG has a fairly well documented process for feature requests. Issues are not usually decided in closed meetings. But there’s a difference between constructive discussion and the stubborn shameless entitlement that some members of the community are displaying in their comments.

https://blog.whatwg.org/staged-proposals-at-the-whatwg


No. WhatWG only has a process for adding and approving features.

It has no process for discussing removal of features or for speaking out against a feature


Fwiw the meetings aren't closed; unlike the w3c, the whatwg doesn't require paid membership to attend.

The bug trackers are also a fine place to provide community feedback. For example, there are plenty of comments providing use cases that weren't hidden. But if you read the hidden ones (especially on the issue rather than the PR) there's some truly unhinged commentary that rightly resulted in being hidden and, unfortunately, locking of the thread.

Ultimately the way the community can influence decisions is to not be completely unhinged.

Like someone else said, the other way would be to just use XSLT in the first place.


Honestly, your chance to impact this decision was when you decided what technologies to use on your website, and then statistically speaking [1], chose not to use XSLT in the browser. If the web used it like crazy we would not be having this conversation.

Your other opportunity is to put together a credible plan to resource the XSLT implementations in the various browsers. I underline, highlight, bold, and italicize the word "credible" here. You are facing an extremely uphill battle from the visible lack of support for the development; any truly credible offer should have come many years ago. Big projects are well aware of the utility of last-minute, emotionally-driven offers of support in the midst of a burst of publicity, viz, effectively zero.

I don't know that the power is as imbalanced as people think here. A very long and drawn-out conversation has been had by the web as a whole; on the whole, the web has agreed, by the vast bulk of its implementation work, that this is not a terribly useful technology, and this is the final closing chapter where the browsers are basically implementing the will of the web. The standard for removal isn't "literally 0 usage in the entire world", and whatever the standard is, if XSLT isn't on the "remove" side of it, that would just be a sign it needs to be tuned up, because XSLT is a complete non-entity on the web. If you don't feel like your voice is being respected, it's because it's one of literally millions upon millions; what do you expect?

[1]: I know exceptions are reading this post, but you are exceptions. And not terribly common ones.


Statistically, how many websites are using webusb? I'm guessing fewer than xslt, which is used by e.g. the US Congress website.

I have a hard time buying the idea that document templating is some niche use-case compared to pretty much every modern javascript api. More realistically, lots of younger people don't know it's there. People constantly bemoan html's "lack" of client side includes or extensible component systems.


You seem to be assuming that I would argue against removing webusb. If it went through the same process and the system as a whole reached the same conclusion, I wouldn't fight it too hard personally.

There's probably half-a-dozen other things that could stand serious thought about removal.

There is one major difference though, which is that if you remove webusb, the functionality is just gone, whereas XSLT can be done through Javascript/WebASM just fine.

Document templating is obviously not a niche case. That's why we've got so many hundreds of them. We're not lacking in solutions for document templating, we're drowning in them. If XSLT stands out in its niche, it is as being a particularly bad choice, which is why nobody (to that first approximation we've all heard so much about) uses it.


Where is the US Congress's website identified as a potentially impacted site? https://chromestatus.com/metrics/feature/timeline/popularity...

edit: I see Simon mentioned it - https://simonwillison.net/2025/Aug/19/xslt/ - e.g., https://www.congress.gov/119/bills/hr3617/BILLS-119hr3617ih.... - the site seems to be even less popular than Longhorn Steakhouse in Germany.

My guess is that they'll shuffle people to PDF or move rendering to the server side, which is a common (and, with today's computing power, extremely cheap) way to generate HTML from XML.


Is it cheaper than sending XML and a stylesheet though?

Further, PDF and server-side are fine for achieving the same display, but it removes the XML of it all - that is to say, someone might be using the raw XML to power tools, feeds, etc. If XSLT goes away and Congress drops the XML links in favor of PDFs etc., that breaks more than just the pretty formatting.


1. No, not cheaper, but the incremental cost of server-side rendering is minimal (especially at the low request rates these pages receive)

2. One should still be able to retrieve the raw XML document. It's just that it won't be automatically transformed client-side.


i just built a website in XSLT and implementing some form of client side include in XSLT is not easier than doing the same in javascript. while i agree with you that client side include is sorely missing in HTML, XSLT is not the answer to that problem. anyone who doesn't want to use javascript to implement client-side include, won't want to use XSLT either.


> If the web used it like crazy we would not be having this conversation.

It's been a standard part of the Web platform for years. The only question should be, "Is _anyone_ using it?", not whether it's being "used like crazy" or not.

Don't break the Web.


Counterpoint: most websites are not useful. If we only count useful websites a much higher percentage of them are using XSLT.

But useful websites are much less likely to be infested by the all consuming Goo admalware.


[Citation needed]

Seriously, i doubt this.


A lot of very old SPA-like heavy applications use XSLT. Basically, enterprise web applications (not websites) that predate fetch and REST, and that targeted (or still target) Internet Explorer 5/6.

There was a time where the standard way to build a highly interactive SPA was using SOAP services on the backend combined with iframes on the front end that executed XSLT in the background to update the DOM.

Obviously such an approach is extremely out of date and you won't find it on any websites you use. But, a lot of critical enterprise software was built this way and is kind of stuck like this.


> Internet Explorer 5/6

Afaik IE 5 did not support XSLT. It supported a similar but proprietary language. I think IE6 was the first version to support XSLT.

I feel like when i see enterprise xslt a lot of it is serverside.


I ran xslt in the foreground; it was fast enough for that even on a Celeron with 128MB of RAM. Imagine running modern web 2.0 on 128MB of RAM.


I second that doubt. Would love a succinct list of "important" websites.


Do Library of Congress and Congress count? https://news.ycombinator.com/item?id=44958929

It's not for the public to identify these sites. It's for the arrogant Googlers to do a modicum of research


At first glance the library of congress link appears to be using server side XSLT, which would not be affected by this proposal.

The congress one appears to be the first legit example i have seen.

At first glance the congress use case does seem like it would be fully covered by CSS [you can attach CSS stylesheets to generic xml documents in a similar fashion to xslt]. Of course someone would have to make that change.


> Of course someone would have to make that change.

Of course. And yet none of the people from Google even seem to be aware of

> The congress one appears to be the first legit example i have seen.

There are more. E.g. podcast RSS feeds are often presented on the web with XSLT: https://feeds.buzzsprout.com/231452.rss

Again, none of the people from Google even seem to be aware of these use cases, and just power through regardless of any concerns.


> Of course. And yet none of the people from Google even seem to be aware of

I don't see any reason to assume that. I don't think anyone from google is claiming the literal number of sites is 0, just that it is insignificant.

I am very sure the people at google are aware of the rss feed usage.

Don't confuse people disagreeing with you with people not understanding you.


> I am very sure the people at google are aware of the rss feed usage.

No. No they aren't. As you can see in the discussion: https://github.com/whatwg/html/issues/11523 where the engineer who proposed this literally updates his "analysis" as people point out use cases he missed.

Quote:

--- start quote ---

albertobeta: there is a real-world and modern use case from the podcasting industry, where I work. Collectively, we host over 4.5 million RSS feeds. Like many other podcast hosting companies, we use XSLT to beautify our raw feeds and make them easier to understand when viewed in a browser.

mfreed7, the Googler https://github.com/whatwg/html/issues/11523#issuecomment-315... : Thanks for the additional context on this use case! I'm trying to learn more about it.

--- end quote ---

And then just last week: https://github.com/whatwg/html/issues/11523#issuecomment-318...

--- start quote ---

Thanks for all of the comments, details, and information on this issue. It's clear that XSLT (and talk of removing it) strikes a nerve with some folks. I've learned a lot from the posts here.

--- end quote ---

> Don't confuse people disagreeing with you with people not understanding you.

Oh, they don't even attempt to understand people.

Here's him last week adding a PR to remove XSLT from the spec: https://github.com/whatwg/html/pull/11563

Did he address any of the issues? Does he link to any actual research pointing out how much will be broken, where it's used etc.?

Nope.

But then another Googler pulls up, says "good work, don't forget to remove it everywhere else". End of discussion.


I stand by my previous comment.

You're angry you didn't get your way, but the googler's decision seems logical; i think most software developers maintaining a large software platform would have made a similar decision given the evidence presented (as evidenced by other web browsers making the same one).

The only difference here between most software is that google operates somewhat in the open. In the corporate world there would be some customer service rep to shield devs from the special interest group's tantrum.


It's worse than that, of course. XSLT removal breaks quite a few government and regulatory sites: https://github.com/whatwg/html/issues/11582


They are easy to understand :) Modern browsers have become bloatware beyond salvation; they're starting to feel all the tech debt.


You're naming Google specifically, when it's not just Google. This seems like a you thing, separate from the actual issue at hand.


Well, it's Google who jumped at the opportunity citing their own counters and stats.

Just like they did the last time when they tried to remove confirm/prompt[1] and were surprised to see that their numbers don't paint the full picture, as literally explicitly explained in their own docs: https://docs.google.com/document/d/1RC-pBBvsazYfCNNUSkPqAVpS...

You'd think that the devs of the world's most popular browser would take a little more care than just citing some numbers, ignoring all feedback, and moving forward with whatever they want to do?

Oh. Speaking of "not just Google".

The question was raised in this meeting: https://github.com/whatwg/html/issues/11146#issuecomment-275... Guess what.

--- start quote ---

dan: even if the data were accurate, not enough zeros for the usage to be low enough.

brian: I'm guessing people will have objections... people do use it and some like it

--- end quote ---

[1] See, e.g. https://gomakethings.com/google-vs.-the-web/


That's not completely wrong, but also misses some nuance. E.g. the thread mentions the fact that web support is still stuck at XSLT 1.0 as a reason for removal.

But as far as I know, there were absolutely zero efforts by browser vendors before to support newer versions of the language, while there was enormous energy to improve JavaScript.

I don't want to imply that if they had just added support for XSLT 3.0 then everyone would be using XSLT instead of JavaScript today and the latest SIMD optimizations of Chrome's XPath pipeline would make the HN front-page. The language is just too bad for that.

But I think it's true that there exists a feedback loop: Browsers can and do influence how much a technology is adopted, by making the tech less or more painful to use. Then turning around and saying no one is using the tech, so we'll remove it, is a bit dishonest.


Javascript was instantly a hit from the day it was released, and it grew from there.

XSLT never took off. Ever. It has never been a major force on the web, not even for five minutes. Even during the "XML all the things!" phase of the software engineering world, with every tailwind it would ever have, it was never a serious player.

There was, at no point, any reason to invest in it any further.

Moreover, even if you could push a button and rewrite history so that it was heavily invested in anyhow, I see no reason to believe it would have ever been a major force in that alternate history either. I would personally contend that it has always been a bad idea, and if anything, it has been unduly propped up by the browsers and overinvested in as it is. But perhaps less inflammatorily and more objectively, it has always been a foreign paradigm that most programmers have no experience in, and this was even more true in the "XML all the things!" era, which predates the initial Haskell burst that pushed FP forward by a good solid decade, so the prospects of it ever being popular were never all that great.


i also don't see XSLT solving any problem that javascript could not solve. heck, if you really need XSLT in the browser, using javascript you could even call some library like saxonjs, or you could run it in webassembly.


How do you format a raw XML file in the browser without XSLT?


instead of including a reference to the XSLT stylesheet apparently you can also include javascript: https://stackoverflow.com/a/16426395
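the trick there, as i understand it (untested here, and render.js is a made-up name), is to embed a script element in the XHTML namespace directly in the XML document:

    <?xml version="1.0" encoding="UTF-8"?>
    <feed xmlns:h="http://www.w3.org/1999/xhtml">
      <h:script src="render.js"/>
      <item>
        <title>hello</title>
      </item>
    </feed>

the browser executes render.js while displaying the XML, and the script can then rewrite the document into something readable.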


That's only if the original document is an XHTML document that will have scripts loaded. Other XML documents, such as RSS feeds, will not have any support for JS, short of something silly like putting it in an iframe.


i didn't test it, but the stackoverflow answers suggested otherwise. are they wrong?


Perhaps you should have tried testing it before commenting?


if you know that the solution does not work, then just say so and maybe explain why, instead of being snarky.

all i did was to share a link to a resource. if you don't trust that resource you need to do your own testing. whatever i say, whether i tested it or not, doesn't add much more value. you can't trust my words any more than the resource i linked.

you asked half a dozen times in the last few days how a plain xml file can be transformed without xslt. and you claimed that xslt can be used to transform an rss feed.

well, guess what, i just tested this: an rss feed with the standard mimetype application/rss+xml doesn't load either an xsl stylesheet or javascript. to make that work you have to change the mimetype, and if you do that, both the xsl stylesheet and the javascript load. (just not both at the same time)


At least one of the suggested answers in SO doesn’t work and the other is somewhat painful

Why answer if you don’t know the answer

Here’s one that uses application/xml and it works: https://www.ellyloel.com/feed.rss

People are using xslt in the wild today and JS isn’t really a replacement


the specific answer that i linked to does work. i have verified that too.

application/xml is not the same as application/rss+xml. application/xml also loads javascript just fine. again, i tested that. so far i have not found a single mimetype that can load xslt, but could not load javascript. i am coming to believe that there isn't one. if xslt works, then javascript works too.

whether javascript itself is a suitable replacement for xslt is not the question. your argument was that it is not possible to replace the builtin xslt support with anything written in javascript, because xml files can't load javascript.

since i have now verified that an xml file that can load xslt in the browser can also load javascript, this is proven wrong. all we need now is a good xslt implementation written in javascript or maybe a good binding to a wasm one and then we are ready to remove the builtin xslt support in the browser.
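as a rough sketch of what that glue could look like (using the current built-in XSLTProcessor as a stand-in for whatever javascript or wasm implementation would take its place; render.js and site.xsl are made-up names):

    // render.js: loaded from the xml document itself
    document.addEventListener('DOMContentLoaded', async () => {
      // fetch and parse the stylesheet the document would otherwise reference
      const text = await (await fetch('site.xsl')).text();
      const xsl = new DOMParser().parseFromString(text, 'application/xml');

      // built-in API today; a JS/WASM implementation would expose something equivalent
      const proc = new XSLTProcessor();
      proc.importStylesheet(xsl);
      const result = proc.transformToDocument(document);

      // swap the transformed tree in for the raw xml
      document.replaceChild(
        document.importNode(result.documentElement, true),
        document.documentElement
      );
    });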


I too spent a chunk of time seeing what worked and what it looks like…

JS referenced by the XML can manipulate the XML but it frequently executes before the XML DOM is ready (even when waiting for onload) and so misses elements

So while possible it’s a pretty horrible experience to translate XML to HTML using JS - the declarative approach is more reliable and easier IMV

The XSLT polyfill doesn’t seem to work when loaded as a script in an XML doc but not quite sure why ATM

application/xml is commonly used for RSS feeds on static hosts because it’s the correct mimetype for say a feeds.xml response


https://github.com/mfreed7/xslt_polyfill/pull/5 - it will be able to do this soon.


nice. thanks for the link.

someone else mentioned xjslt here: https://news.ycombinator.com/item?id=44994310 which is an xslt 2.0 implementation. i have been trying to get that to work by loading the script directly into the xml data but so far could not figure out how to do it.


But can it transform / format the XML?


why should it not? once loaded it should find the XML in the DOM and transform that any way you like.


for the record, i just tested that the loaded javascript can access the DOM.


True, but that raises the question, why don't the browsers do that? I think no one would object if they removed XSLT from the browser's core and instead loaded up some WASM/JavaScript implementation when some XSLT is actually encountered. Sort of like a "built-in extension".

Then browser devs could treat it like an extension (plus some small shims in the core) while the public API wouldn't have to change.


because there is no demand for it.


You can have template includes that are automatically interpreted by the browser - no need to write code AFAIK using XSLT.


XSLT is code. code written with XML syntax. let me give you an example:

in order to create a menu where the current active page is highlighted and not a link, i need to do this:

    <a>
      <xsl:choose>
        <xsl:when test="@name='home'">
          <xsl:attribute name="class">selected</xsl:attribute>
        </xsl:when>
        <xsl:otherwise>
          <xsl:attribute name="href">/</xsl:attribute>
        </xsl:otherwise>
      </xsl:choose>
      home
    </a> |
    <a>
      <xsl:choose>
        <xsl:when test="@name='about'">
          <xsl:attribute name="class">selected</xsl:attribute>
        </xsl:when>
        <xsl:otherwise>
          <xsl:attribute name="href">/about.xhtml</xsl:attribute>
        </xsl:otherwise>
      </xsl:choose>
      about
    </a> |
XSLT is interesting because it has a very different approach to parsing XML, and for some transformations the resulting code can be quite compact. in particular, you don't have an issue with quoting/escaping special characters most of the time while still being able to write XML/HTML syntax. but then JSX from react solves that too. so the longer you look at it the less the advantages of XSLT stand out.


You're sort of exaggerating the boilerplate there; a more idiomatic, complete template might be:

  <xsl:variable name="nav-menu-items">
    <item href="foo.xhtml"><strong>Foo</strong> Page</item>
    <item href="bar.xhtml"><em>Bar</em> Page</item>
    <item href="baz.xhtml">Baz <span>Page</span></item>
  </xsl:variable>

  <xsl:template match="nav-menu">
    <nav>
      <ul>
        <xsl:apply-templates select="$nav-menu-items/item">
          <xsl:with-param name="current" select="@current-page"/>
        </xsl:apply-templates>
      </ul>
    </nav>
  </xsl:template>

  <xsl:template match="item">
    <xsl:param name="current"/>
    <li>
      <xsl:choose>
        <xsl:when test="@href=$current">
          <a class="selected"><xsl:apply-templates/></a>
        </xsl:when>
        <xsl:otherwise>
          <a href="{@href}"><xsl:apply-templates/></a>
        </xsl:otherwise>
      </xsl:choose>
    </li>
 </xsl:template>

One nice thing about XSLT is that if you start with a passthrough template:

  <xsl:template match="@*|node()">
    <xsl:copy>
      <xsl:apply-templates select="@*|node()"/>
    </xsl:copy>
  </xsl:template>
You have basically your entire "framework" with no need to figure out how to set up a build environment because there is no build environment; it's just baked into the browser. Apparently in XSLT 3.0, the passthrough template is shortened to just `<xsl:mode on-no-match="shallow-copy"/>`. In XSLT 2.0+ you could also check against `base-uri(/)` instead of needing to pass in the current page with `<nav-menu current-page="foo.xhtml"/>` and there's no `param` and `with-param` stuff needed. In modern XSLT 3.0, it should be able to be something more straightforward like:

  <xsl:mode on-no-match="shallow-copy"/>

  <xsl:variable name="menu-items">
    <item href="foo.xhtml"><strong>Foo</strong> Page</item>
    <item href="bar.xhtml"><em>Bar</em> Page</item>
    <item href="baz.xhtml">Baz <span>Page</span></item>
  </xsl:variable>

  <xsl:template match="nav-menu">
    <nav>
      <ul>
        <xsl:apply-templates select="$menu-items/item"/>
      </ul>
    </nav>
  </xsl:template>

  <xsl:template match="item">
    <li>
      <xsl:variable name="current-page" select="tokenize(base-uri(/),'/')[last()]"/>
      <a href="{if (@href = $current-page) then '' else @href}"
         class="{if (@href = $current-page) then 'selected' else ''}">
        <xsl:apply-templates/>
      </a>
    </li>
  </xsl:template>

The other nice thing is that it's something that's easy to grow into. If you don't want to get fancy with your menu, you can just do:

  <xsl:template match="nav-menu">
    <nav>
      <ul>
        <li><a href="foo.xhtml">Foo</a></li>
        <li><a href="bar.xhtml">Bar</a></li>
        <li><a href="baz.xhtml">Baz</a></li>
      </ul>
    </nav>
   </xsl:template>
And now you have a `<nav-menu/>` component that you can add to any page. So to the extent that you're using it to create simple website templates but you're not a "web dev", it works really well for people that don't want to go through all of the hoops that professional programmers deal with. Asking people to figure out react to make a static website is absurd.


wow, thank you. your first example is actually what i have been trying to do but i could not get it to work. i did search for examples or explanations for hours (spread over a week or so). i found the documentation of each of the parts and directives used, but i just could not figure out how to pull it together.

your last example is what i started out with, including the pass through template. you may remember this message from almost two months ago: https://news.ycombinator.com/item?id=44398626

one comment for the xslt 3 example: href="" doesn't disable the link. it just turns into a link to self (which it would be anyways if the value was present). the href attribute needs to be gone completely to disable the link.


unfortunately i hit another snag: https://stackoverflow.com/questions/3884927/how-to-use-xsl-v...

nodes you output don't have type "node-set" - instead, they're what is called a "result tree fragment". You can store that to a variable, and you can use that variable to insert the fragment into output (or another variable) later on, but you cannot use XPath to query over it.

the xsl documentation https://www.w3.org/TR/xslt-10/#variables says:

Variables introduce an additional data-type into the expression language. This additional data type is called result tree fragment. A variable may be bound to a result tree fragment instead of one of the four basic XPath data-types (string, number, boolean, node-set). A result tree fragment represents a fragment of the result tree. A result tree fragment is treated equivalently to a node-set that contains just a single root node. However, the operations permitted on a result tree fragment are a subset of those permitted on a node-set. An operation is permitted on a result tree fragment only if that operation would be permitted on a string (the operation on the string may involve first converting the string to a number or boolean). In particular, it is not permitted to use the /, //, and [] operators on result tree fragments.

so using apply-templates on a variable doesn't work. this is actually where i got stuck before. i just was not sure because i could not verify that everything else was correct.

i wonder if it is possible to load the menu from a second document: https://www.w3.org/TR/xslt-10/#document

edit: it is!

    <xsl:apply-templates select="document('nav-menu.xml')/menu">
now i just need to fine-tune this because somehow the $current param fails.


Ah, I could've sworn that it worked in some version of the page that I tried as I iterated on things, but it could be that the browser just froze on my previously working page and I fooled myself.

Adding xmlns:exsl="http://exslt.org/common" to your xsl:stylesheet and doing select="exsl:node-set($nav-menu-items)/item" seems to work on both Chrome and Librewolf.


tried that, getting an empty match.

here is the actual stylesheet i am using:

    <?xml version="1.0" encoding="UTF-8"?>
    <xsl:stylesheet version="1.0" xmlns:exsl="http://exslt.org/common" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns="http://www.w3.org/1999/xhtml">
      <xsl:output method="html"/>

      <xsl:variable name="nav-menu">
        <item href="/">Home</item>
        <item href="/about.xhtml">About</item>
      </xsl:variable>

      <xsl:template match="document">
        <html>
          <head>
            <meta charset="utf-8" />
            <title><xsl:value-of select="title" /></title>
            <link rel="stylesheet" type="text/css" href="site.css" />
          </head>

          <body>
            <!-- <xsl:apply-templates select="document('nav-menu.xml')/menu"> -->
            <xsl:apply-templates select="exsl:node-set($nav-menu)/item">
              <xsl:with-param name="current" select="@name"/>
            </xsl:apply-templates>
            <xsl:apply-templates select="content" />
          </body>
        </html>
      </xsl:template>

      <xsl:template match="item">
        <xsl:param name="current"/>
        <xsl:choose>
          <xsl:when test="@href=$current">
            <a class="selected"><xsl:apply-templates/></a>
          </xsl:when>
          <xsl:otherwise>
            <a href="{@href}"><xsl:apply-templates/></a>
          </xsl:otherwise>
        </xsl:choose>
      </xsl:template>

      <xsl:template match="content">
        <xsl:apply-templates select="@*|node()" />
      </xsl:template>

      <xsl:template match="@*|node()">
        <xsl:copy>
          <xsl:apply-templates select="@*|node()"/>
        </xsl:copy>
      </xsl:template>
    </xsl:stylesheet>

documents look like this:

    <?xml version="1.0" encoding="UTF-8" ?>
    <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
    <?xml-stylesheet type="text/xsl" href="site.xsl"?>
    <document name="about">
      <title>About Us</title>
      <content>
        html content here, to be inserted without change
      </content>
    </document>
if i use the document() function, with nav-menu.xml looking like this:

    <menu>
      <item href="/">Home</item>
      <item href="/about.xhtml">About</item>
    </menu>
then i get the menu items, but the test <xsl:when test="@href=$current"> fails


It looks like it's related to your setting the default namespace xmlns="http://www.w3.org/1999/xhtml". You could either add a xmlns:example="http://example.org/templates" and then replace `item` with `example:item` everywhere, or you can override the default namespace within your variable's scope:

    <xsl:variable name="nav-menu-items" xmlns="">
        <item href="/">Home</item>
        <item href="/about.xhtml">About</item>
    </xsl:variable>
I think you also don't really need to set the default namespace to xhtml, so I believe you could remove that and not worry about namespaces at all (except for xsl and exsl).

The test is failing because it's `/about.xhtml` in the template but `about` outside. You'd either need to add a name attribute to item to compare on or make it match the href.

That should make your thing work if I haven't fooled myself again. :)


> I think you also don't really need to set the default namespace to xhtml

you are right. i removed it, and it works. typical "copy from stackoverflow" error. these namespaces are a mystery and not intuitive at all. i suppose most people don't notice that because it only applies to xml data within the stylesheet. most people won't have that so they won't notice an issue. the less the better.

for the other error, my mistake, duh! in my original example in https://news.ycombinator.com/item?id=44961352 i am comparing $current/@name to a hardcoded value, so if i want to keep that comparison i have to add that value to the nav-menu data. or use a value that's already in there.

i went with adding a name="about" attribute to the nav-menu because it keeps the documents cleaner: <document name="about"> just looks better, and it also allows me to treat it like an ID that doesn't have to match the URL which allows renaming/moving documents around without having to change the content. (they might go from about.xhtml to about/index.xhtml for example)

i am also probably going to use the document() function instead of exsl:node-set() because having the menu data in a separate file in this case is also easier to manage. it's good to know about that option though. being able to iterate over some local data is a really useful feature. i'll keep that around as an example.

the final piece of the puzzle was:

    <xsl:if test="position() != last()"> | </xsl:if>
to put a separator between the items, but not after.

that sorted, now it all works. thank you again.

btw, it's funny that we are turning hackernews into an xsl support forum. i guess i should write all that up into a post some day.


Nice. Fwiw I believe you can also use css for the separators if you've put them in a list:

  li + li::before {
    content: " | ";
  }
If xslt survives maybe I should make a forum and/or wiki. Using xslt of course.


Yeah, unfortunately the one criticism of XSLT that you can't really deny is that there's no information out there about how to use it, so beyond the tiny amount of documentation on MDN, you kind of have to just figure out your own patterns. It feels a little unfair though that it basically comes down to "this doesn't have a mega-corporation marketing it". That and the devtools for it are utterly broken/left in the early 00s for similar reasons. You could imagine something could exist like the Godbolt compiler explorer for template expansion showing the input document on the left and output on the right with color highlighting for how things expanded, but instead we get devtools that barely work at all.

You're right on the href; maybe there's not a slick/more "HTML beginner friendly" way to get rid of the <xsl:choose> stuff even in 3.0. I have no experience with 3.0 though since it doesn't work.

I get a little fired up about the XSLT stuff because I remember being introduced to HTML in an intersession school class when I was like... 6? XSLT wasn't around at that time, but I think I maybe learned about it when I was ~12-13, and it made sense to me then. The design of all of the old stuff was all very normal-human approachable and made it very easy to bite a little bit more off at a time to make your own personal web pages. "Use React and JSON APIs" or "use SSR" seems to just be giving up on the idea that non-programmers should be able to participate in the web too. Should we do away with top level HTML/CSS while we're at it and just use DOM APIs?

There were lots of things in the XML ecosystem I didn't understand at the time (what in the world was the point of XSDs and what was a schema and how do you use them to make web pages? I later came to appreciate those as well after having to work as a programmer with APIs that didn't have schema files), but the template expansion thing to make new tags was easy to latch onto.


> devtools for it are utterly broken

right, that's a big issue too. when the xsl breaks (in this case when i use <xsl:apply-templates select="$nav-menu-items/item">) i get an empty page and nothing telling me what could be wrong. if i remove the $ the page works, and the apply-templates directive is just left out.


It solves the problem without requiring a full Turing machine with a giant API that has a history of actual exploits (not just FUD) behind it.


i believe XSLT is Turing complete, and regarding exploits, you rather want to read this: https://news.ycombinator.com/item?id=44910050

it turns out that because XSLT was largely ignored, it is full of security issues, some of which have been in there for decades.

so the reason XSLT doesn't have a history of exploits is because nobody used it.


>while there was enormous energy to improve JavaScript

What was the point of it though? People transpile from other languages anyway and pull megabytes of npm dependencies.


This question is analogous to asking what the point of better CPUs is when people use compilers/assemblers instead of writing binaries in a hex editor.


Community feedback is usually very ad hoc. Platform PMs will work with major sites, framework maintainers, and sometimes do discussions and polls on social sites. IOW, they try to go where the community that uses the features is, rather than stay on GitHub in the spec issues.


Although in this case, it seems more like they are trying to go where the community that uses the feature isn't.


There isn't one. It's Google's web now. You should be thankful that you are still allowed to use it.


I think this post is useful where the thread author proposed some solutions to the people affected: https://github.com/whatwg/html/issues/11523#issuecomment-318...

The main thing that seems unaddressed is the UX when a user opens a direct link to an XML file and now just sees tag soup instead of the intended rendering.

I think this could be addressed by introducing a <?human-readable ...some url...?> processing instruction that browsers would interpret like a meta tag redirect. Then sites that are interested could put that line at the top of their XML files and redirect to an alternative representation in HTML or even to a server-side or WASM-powered XSLT processor for the file.
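Something like this at the top of the feed (syntax entirely made up, to be clear):

    <?xml version="1.0" encoding="UTF-8"?>
    <?human-readable href="https://example.com/feed.html"?>
    <rss version="2.0">
      <channel>
        <title>Example feed</title>
      </channel>
    </rss>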

Sort of like an inverse of the <link rel="alternate" ...> solution that the post mentioned.

The only thing this doesn't fix is sites that are abandoned and won't update or are part of embedded devices and can't update.


> I think this could be addressed by introducing a <?human-readable ...some url...?> processing instruction that browsers would interpret like a meta tag redirect. Then sites that are interested could put that line at the top of their XML files and redirect to an alternative representation in HTML or even to a server-side or WASM-powered XSLT processor for the file.

HTTP has already had this since the 90s. Clients send the Accept HTTP header indicating which format they want and servers can respond with alternative representations. You can already respond with HTML for browsers and XML for other clients today. You don’t need the browser to know how to do the transformation.
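Roughly (hostname and path are placeholders):

    GET /bills/hr3617 HTTP/1.1
    Host: example.gov
    Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8

    HTTP/1.1 200 OK
    Content-Type: text/html; charset=utf-8
    Vary: Accept

A browser advertising text/html gets the HTML rendering; a feed reader or script asking for application/xml gets the raw XML back from the same URL.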


This is breaking the web though.

If they are so worried, then have the xslt support compiled to wasm and sandboxed.


This is not breaking the web, stop being so needlessly hyperbolic. XSLT use is absolutely tiny. If you removed it, >99.9% of the web wouldn’t even notice.


If we removed everyone named Jim Dabell from the world, the other 99% wouldn't even notice. They're absolutely tiny. Perhaps we should try doing that.


It certainly wouldn’t break the world. You are being needlessly hyperbolic.


Apart from anything else, that doesn't really work for people who are statically hosting their RSS feeds etc.


You can use content negotiation with static websites too. Apache has mod_negotiation, for example.


Assuming you have access to server configuration. XML/XSLT works anywhere you can host a static page.


it still depends on the mimetype those servers use to host the files.


Most people are hosting static sites on GH pages, Vercel, Netlify, Cloudflare pages etc


I actually found that particular response to be quite disappointing. It should give pause to those advocating removal of XSLT that these three totally disparate use cases could already be gracefully handled by a single technology which is:

* side effect free (a pure data to data transformation)

* stable, from a spec perspective, for decades

* completely client-side

Isn't this basically an A+ report card for any attempt at making a powerful general tool? The fact that the suggested solution in the absence of XSLT is to toil away at implementing application-specific solutions forever really feels like working toward the wrong direction.


Purely out of curiosity, what are some websites that actually make use of XSLT?


Skechers used to :)

https://thedailywtf.com/articles/Sketchy-Skecherscom

Also world of warcraft used to.

Can’t think of recent examples though.


Many sitemaps and RSS feeds use XSL to seamlessly present human readable content.


Isn't this theoretically already supported by the standards? The client supplies an Accept content type, and if that is html not xml the server should render it appropriately.


You can include a "link" HTTP header similar to a link tag. See https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/...

This would work without special syntax in the XML file.
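For example (the URL is a placeholder):

    Link: <https://example.com/feed.html>; rel="alternate"; type="text/html"

The header syntax itself is standard (RFC 8288); browsers would still have to be taught to offer or follow the alternate when serving a raw XML response.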


Any solution that requires any change to the websites affected, no matter how small, is not a solution at all. DO. NOT. BREAK. THE. WEB.


Ah, how easy it is to bloviate when you're not actually the one having to maintain the web, huh?


Google doesn't have to maintain the web; they chose to. They also chose to make the web infinitely more complicated so that others are less likely to "compete" for that responsibility. You don't get to insert yourself into that position and then only reap the benefits without putting in the required effort.


> [T]he maintainer of libxslt has stepped down: https://gitlab.gnome.org/GNOME/libxml2/-/issues/913

... Largely because of a lack of help from major users such as browsers.


Disclaimer: I work on Chrome and I have contributed a (very) small number of fixes to libxml2/libxslt for some of the recent security bugs.

Speaking from personal experience, working on libxslt... not easy for many reasons beyond the complexity of XSLT itself. For instance:

- libxslt is linked against by all sorts of random apps and changes to libxslt (and libxml2) must not break ABI compatibility. This often constrains the shape of possible patches, and makes it that much harder to write systemic fixes.

- libxslt reaches into libxml and reuses fields in creative ways, e.g. libxml2's `xmlDoc` has a `compression` field that is ostensibly for storing the zlib compression level [1], but libxslt has co-opted it for a completely different purpose [2].

- There's a lot of missing institutional knowledge and no clear place to go for answers, e.g. what does a compile-time flag that guards "refactored parts of libxslt" [3] do exactly?

[1] https://gitlab.gnome.org/GNOME/libxml2/-/blob/ca10c7d7b513f3...

[2] https://gitlab.gnome.org/GNOME/libxslt/-/blob/841a1805a9a9aa...

[3] https://gitlab.gnome.org/GNOME/libxslt/-/blob/841a1805a9a9aa...


Sounds like libxslt needs more than just a small number of fixes, and it sounds like Google could be paying someone, like you, to help provide the necessary guidance and feedback to increase the usability and capabilities of the library and evolve it for the better.

Instead, Google and others just use it and expect any issues that come up to be immediately fixed by the one or two open source maintainers who happen to work on it in their spare time. The power imbalance must not be lost on you here...

If you wanted to dive into what [3] does, you could do so; you could then document it, refactor it so that it is more obvious, or remove the compile-time flag entirely. There is institutional knowledge everywhere...


or, the downstream users who use it and benefit directly from it could step up, but websites and their users are extremely good at expecting things to just magically keep working especially if they don't pay for it. it was free, so it should be free forever, and someone set it up many moons ago, so it should keep working for many more magically!

// of course we know that, as end-users became the product, Big Tech [sic?] started making sure that users remain dumb.


Website operators are fine with how libxslt works now. It's browser vendors that want change.


You mean they are fine with expecting it to be maintained by browser vendors indefinitely for free.


Browser vendors aren't maintaining the web for free; they are for-profit corporations that have chosen to take on that role for the benefits it provides to them. It's only fair that we demand that they also respect the responsibilities that come with it. And we can also point out the hollowness of complaints about hardship from having to maintain the web's legacy when they keep making it harder for independent browser developers by adding tons of new complexity.


Sure, of course, but unless funding is coming from users the economics won't change, because:

The vendors cite an aspect of said responsibility (security!) to get rid of another aspect (costly maintenance of a low-revenue feature).

The web is evolving, there's a ton of things that developers (and website product people, and end-users) want. Of course it comes with a lot of "frivolous" innovation, but that's part of finding the right abstractions/APIs.

(And just to make it clear, I think it's terrible for the web and vendors that ~100% of the funding comes from a shady oligopoly that makes money by selling users - but IMHO this doesn't invalidate the aforementioned resource allocation trade off.)


> libxslt is linked against by all sorts of random apps and changes to libxslt (and libxml2) must not break ABI compatibility. This often constrains the shape of possible patches, and makes it that much harder to write systemic fixes.

I’m having trouble expressing this in a way that won’t likely sound harsher than I really want, but, uh, yes? That’s the fundamental difference between maintaining a part of the commons that anybody can benefit from and a subdirectory in a monorepo. The bazaar incurs coordination costs, and not being able to go and fix all the callers is one of them.

(As best as I can see, Chrome’s approach is largely to make everything a part of the monorepo, so maintaining a part of the commons may not be high on the list of priorities.)

This not to defend any particular ABI choice. Too often ABI is left to luck and essentially just happens instead of being deliberately designed, and too often in those cases we get unlucky. (I’m tempted to recite an old quote[1] about file formats, which are only a bit more sticky than public ABI, because of how well it communicates the amount of seriousness the subject ought to evoke: “Do you, Programmer, take this Object to be part of the persistent state of your application, to have and to hold, through maintenance and iterations, for past and future versions, as long as the application shall live?”)

I’m not even deliberately singling out what seems to me like the weakest of the examples in your list. It’s just that ABI, to me, is such a fundamental part of lib-anything that raising it as an objection against fixing libxslt or libxml2 specifically feels utterly bizarre.

[1] http://erights.org/data/serial/jhu-paper/upgrade.html


It's one thing if the library was proactively written with ABI compatibility in mind. It's another thing entirely if the library happens to expose all its implementation details in the headers, making it that much harder to change things.


When i first encountered the early GNOME 1 software back in the very late 1990s, and DV (libxml author) was active, i was very surprised when i asked for the public API for a library and was told, look at the header files and the source.

They simply didn’t seem to have a concept of data hiding and encapsulation, or worse, felt it led to evil nasty proprietary hidden code and were better than that.

They were all really nice people, mind you—i met quite a few of them, still know some—and the GNOME project has grown up a lot, but i think that’s where libxml was coming from. Daniel didn’t really expect it to be quite so widely used, though, i’m sure.

I’ve actually considered stepping up to maintain libxslt, but i don’t know enough about building on Windows and don’t have access to non-Linux systems really. Remote access will only go so far on Windows i think, although it’d be OK on Mac.

It might be better to move to one of the Rust XML stacks that are under active development (one more active than the other).


No, it's the same in both cases. ABI stability is what every library should provide no matter how ugly the ABI is.



