Handling errors this way is possible in only very brittle and simplistic software.
I mean, you're contradicting your very own argument. If this was the primary/idiomatic way of handling errors... then Go should just go the way of most languages with Try/Catch blocks. If there's no valuable information or control flow to managing errors... then what's the point of forcing that paradigm to be so verbose and explicit in control flow?
There are plenty of "proper" markup languages and full programming languages to actually write code in.
Why do we need a hybrid program like this, which is not as simple as pure markup, and is not as powerful as a proper templating language?
I personally just run markdown -> HTML/CSS -> python templating (Jinja or something) -> PDF/HTML
As a dev, I find this works the best for me. But I also cannot imagine that learning Quarkdown would improve my workflow meaningfully, and I also cannot imagine recommending someone learn such a niche product instead of having them learn HTML/CSS and Python (Jinja if they need fancy). Seems like a comparable amount of effort.
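The pipeline above can be sketched in a few lines of stdlib-only Python. This is a hedged illustration, not the commenter's actual setup: `string.Template` stands in for Jinja (with `jinja2` installed you'd call `jinja2.Template(...).render(...)` instead), and the Markdown→HTML step is assumed to be handled by a converter such as python-markdown, which is omitted here.

```python
from string import Template

# Stand-in for a Jinja template; with jinja2 you would write
# jinja2.Template(PAGE).render(title=..., body=...) instead.
PAGE = Template("""<!doctype html>
<html>
<head><title>$title</title>
<style>body { max-width: 40em; margin: auto; font-family: serif; }</style>
</head>
<body>$body</body>
</html>""")

def render_page(title: str, body_html: str) -> str:
    # body_html is assumed to already be HTML, i.e. the output of a
    # Markdown converter (not included in this sketch).
    return PAGE.substitute(title=title, body=body_html)

html = render_page("Report", "<h1>Report</h1><p>Hello.</p>")
```

From there, printing `html` to a file and opening it in a browser (or feeding it to a headless browser's print-to-PDF) covers the last step of the pipeline.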
Every conference has their own required LaTeX style file that must be used. Unless there is an automated way to convert these exactly, I don't see how LaTeX alternatives can be used.
CS strongly prefers LaTeX [0,1], while broader journals and conferences prefer MS Word over it [2,3]. As long as there is no solid infrastructure for these other typesetting systems, I never saw the appeal. I think they do have their uses for internal company reports, but other than that, why not use LaTeX or Word? Realistically, any person wanting to submit a work will know how to work with one or the other.
I also don't see the need for journals and conferences to make a typst template for exactly these reasons. The templates will have to be community-made and then you still run the risk of having a paper rejected a year from now because the template is outdated.
I'm just saying that these systems don't work for me. I write ML/AI conference papers in LaTeX, and I think that use case will be tough to dislodge. I can see this being very attractive to people making other types of documents without a fixed format, especially if you don't already know LaTeX.
Depends on the user. Basic LaTeX2e/LuaTeX can be learned in 5 days; guru level, like any programming language, needs its 10K hours. Some people have an aversion to backslashes, but "\" was likely chosen because it is perhaps the only character not commonly found in running text, whereas others like ":" are very common. When parsing LaTeX (with Knuth's original TeX engine behind it), the commands are swimming in a sea of text, as the Dragon book says.
One thing that has helped with ease of use is Overleaf. It is a hosted LaTeX editor with lots of collaboration features (leaving comments, history of edits) that let people collaborate in real time on a paper. It comes with many templates to get you started on a new document. If you're working with collaborators, it has a lock on the market.
LaTeX itself can be easy for simple things (pick a template, and put text in each section). And it can grow into almost anything if you put in enough effort. It is far and away the standard way to write math equations, so if your document has lots of formulas, that's a plus.
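To illustrate that last point, here is a complete, minimal LaTeX document with a display equation (a generic example, not tied to any particular template):

```latex
\documentclass{article}
\begin{document}
The roots of $ax^2 + bx + c = 0$ are given by
\[
  x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}.
\]
\end{document}
```

Inline math, displayed equations, and numbering all come out of the box; that baseline is what most alternatives are measured against.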
I settled on LaTeX with Tectonic. You could also leverage Playwright or similar for easy HTML -> print-to-PDF without any weird libs (startup time isn't great, but you can batch many operations in one session).
# justfile ── put in repo root
set shell := ["bash", "-cu"]  # one shell → predictable env, pipe-fail, etc.

# Build a PDF once
pdf:
    tectonic -X compile src-v0.1/main.tex --outdir target/pdf  # or swap for typst
If one just chooses a reasonable documentclass and if need be a few packages suited to the requirements of one's document, then it all "just works" with (mostly) sensible defaults and minimal configuration.
Memoir hugely simplified my own work in LaTeX back when I was doing book composition.
Well, you'll have to install and keep those packages somewhere on your system. And maybe a few months from now, after your LaTeX distribution gets updated by the system, your document suddenly no longer compiles.
What I want is something like npm-like package management for this, where the packages are just kept there next to the document. I don't care if I'll have a package 20 times on my system either, storage hasn't been a concern in many years.
> And maybe a few months from now after your latex distribution got updated by the system your document suddenly no longer compiles.
I've been using LaTeX 2e for 25+ years. This has literally never happened to me. If that's not stability, I don't know what is. LaTeX documents I wrote in my grad days still compile for me; I just checked, and they do. I do keep the dependency packages myself in my folder.
Has this issue ever happened to anyone? Why would a LaTeX distribution update break my documents? It's still the same LaTeX compiler and the same base styles and packages!
It happened to me because I had to use the templates and document classes provided by my university, which themselves rely on a bunch of packages I wouldn't have installed myself.
My next step was to just try doing the build in containers but I even ran into it there once because I accidentally pulled a newer image...
But that's just anecdotal. Maybe I really was holding it all wrong.
The only instance of a document not working right anymore for me was a really hacked book using an early/beta version of memoir. There were (documented) breaking changes for the final release; I updated to match the new macro calls and it was back to working in short order.
Is there any specific issue you face which stops you from compiling old files?
As I mentioned in my other comment, my grad school days documents are still compiling fine.
If you still use LaTeX 2e and you've got all the dependency packages with you, pdflatex should Just Work, right? I can't remember any major change that would outright break compilation, and I haven't seen such an issue myself. So I genuinely want to know what specific issues you or others face that wouldn't even let you compile your document.
> If one just chooses a reasonable documentclass and if need be a few packages suited to the requirements of one's document, then it all "just works" with (mostly) sensible defaults and minimal configuration.
Ironically, very similar to the story with modern C++. If you use a limited subset it can "just work" but only if you are disciplined and don't have to mix in legacy code that's pre-C++11.
Which is ironic, considering at one time the appeal of LaTeX was "sane defaults". Don't get me wrong: the defaults really were the best choices at some point in the past. They just no longer are.
And LaTeX is one of very few programs where changing defaults would fundamentally undermine its very purpose (being able to recreate documents no matter how old).
This was true maybe 20 years ago, before TeX engines that output directly to PDF were created. Today, the recommended engine is LuaLaTeX, and it defaults to OpenType fonts.
That's why these things don't go anywhere. If I need to write formatting details, it is better to use LaTeX which is a well-tested and stable language that will last for another 30 years.
"Outdated" means there is something better that now substitutes for the old technology, which is not the case for LaTeX. Unfortunately, programmers tend to think "outdated" just means "was created more than 5 years ago"...
Could definitely see using this for docs. We end up with HTML scattered through our markdown files whenever we need something beyond basic formatting, which is ugly. The ecosystem support is the real question though - Markdown works everywhere because it's been around forever.
SEEKING WORK | 100% REMOTE (on-site travel possible) | Central Europe, timezone independent but living in UTC+1
Email: tinmarkoviccs (at) gmail.com
I'm a software engineer who helps media and research organizations turn their internal knowledge into external-facing APIs. Whether your data lives in spreadsheets, PDFs, Airtable, or Jira, I'll help you extract it, build an API around it, and integrate it with platforms your customers already use.
- Fixed pricing (no surprise bills, no scope creep)
- Results guaranteed - or you don't pay
- Autonomous delivery - I take care of the details so you can focus on what matters
- Used by orgs like the NBA, Kiwi.com, and Redfolder Research
I'm a software engineer who specializes in turning internal knowledge into well-documented, external-facing APIs. I’ve worked with teams at the NBA, Kiwi.com, and Redfolder Research to help them expose structured data from sources like Airtable, spreadsheets, PDFs, and Jira — with minimal lift on their side.
I’m open to freelance, contract, or full-time roles where I can lead API design and delivery, particularly in media, publishing, or research-focused teams.
- Production-ready APIs from messy data
- Fixed-scope freelance projects or long-term roles
- Async-native, autonomous, results-focused (can run self-managed)
- Worked in stacks based on [Python, JavaScript/TypeScript, Golang] but I'm flexible on technology choices
Why limit your price to LTV for the offline-only version? Think of it as a full-blooded product instead of trying to squeeze it into the SaaS thinking model you've got already.
Plenty of enterprise (and such) clients wouldn't balk at all at a $500 fee. Brainstorm your target market and price accordingly. In other comments, you're mentioning the support burden - I don't think you should sell the offline version if you're not ready to lift that burden, and thus should price it in a way where this is attractive to you.
Offline versions are usually used by more demanding customers in the current day and age - the web is where you go for the user-friendly version.
I’m certainly open to charging more, but I operate in a price sensitive hobby market. If we do go for it, I’ll definitely try a higher price first. No harm in trying.
You can always try to categorize your target audience, and figure out their preferences.
The main thing to focus on here, however, is that this offering would not be for your usual audience. If that's all you expect from it, I would rather not bother. It's a separate market, and while there's some bleedover, I think you'll be surprised how different they are.
You can always try to play it safe and put in a contact form for discounted quotes (nonprofits, individuals, etc). This depends a lot on your capacities, but it could quickly tell you if you're pricing out desirable customers.
Practical AI vs. hype AI is where I see the biggest distinction.
I haven't seen people negatively comment on simple AI tooling, or cases where AI creates real output.
I do see a lot of hate on hype-trains and, for what it's worth, I wouldn't say it's undeserved. LLMs are currently oversold as this be-all end-all AI, while there's still a lot of "all" to conquer.
Hah, I was about to criticise the text for far too lightly conflating markup and punctuation, only to see the afterword.
I actually do think the author has a point, in that most solutions today are inelegant. But I also don't think this is a problem with a truly elegant solution. Where do you draw the line? Why not encode fonts into the standard too, if we're doing bold? Etc.
I'm still mostly in favour of keeping everything markdown (in my own writing), however much it pollutes the "purity" of text.
Yes, it's not markup but typesetting [1]. Well before 2013, people used *stars*, _underscores_ or /slashes/ on Usenet and mailing lists to mimic typesetting, which led to Markdown.
The name still maintains the confusion, as it tries to be an alternative to markup systems such as HTML, whose purpose was to introduce semantic clues for computers.
We all know how it went: the semantic part was thrown away entirely, and markup was thoroughly abused for layout (HTML tables before CSS; and CSS itself has little to do with "style" and more to do with typesetting and layout). To this day, no browser will just show a table of contents based on the HTML heading tags.
> Handling errors this way is possible in only very brittle and simplistic software.
> I mean, you're contradicting your very own argument. If this was the primary/idiomatic way of handling errors... then Go should just go the way of most languages with Try/Catch blocks. If there's no valuable information or control flow to managing errors... then what's the point of forcing that paradigm to be so verbose and explicit in control flow?
None.