Hacker News | huydotnet's comments

Exactly what I thought when reading the top of the article; maybe the author turned off verbose mode.

Verbose mode is, well, verbose. They removed info without any need and hid it in a wall of text.

Train-line jokes aside, I think Railway fits right into the spot that Heroku left.

They have a nice UI and support deploying any kind of backend app, as long as it can be built into a Docker container, while many PaaS offerings out there seem to prioritize frontend-only apps.

And they have a free plan, so people can quickly deploy a POC before deciding whether it's worth moving forward.

Does anyone know of any other PaaS that comes with a low-cost starter plan like this (aside from paying for a VPS)?


I've been building an open-source version of Railway at https://canine.sh. It offers all the same features without the potential for vendor lock-in / price gouging.

The docs seem to be nonexistent. Is the Canine YAML documented?

You want docs like this:

https://coolify.io/docs/applications/ci-cd/github/setup-app

https://coolify.io/docs/applications/build-packs/dockerfile

https://coolify.io/docs/applications/build-packs/overview

Plenty of screenshots and exact step-by-step instructions. Throwing out an "example git repo" with no documentation won't get you any users.

Put yourself in the shoes of a Heroku/Vercel user. DevOps is usually Somebody Else's Problem. They are not going to spend hours debugging Kubernetes, so if you want to sell them a PaaS built on Kubernetes, it has to be foolproof. Coolify is an excellent example: the underlying engineering is average at best (from a pure engineering point of view it's a very heavy app that suffers from frequent memory leaks, and their v5 rewrite has been stuck for 2 years), but the UI/UX has been polished very well.


Yeah, still working through the documentation. The goal isn't so much to replace Coolify. It was mostly born out of my last startup, which ran a $20M business with 15 engineers and about 300-1000 qps at peak, with fairly complex query patterns.

I think the single VPS model is just too hard to get working right at that scale.

I think Northflank / enterprise applications would be a better comparison for what Canine is trying to do, rather than Coolify / indie hackers. The goal is not to take away Kubernetes, but to simplify it massively for 90% of use cases while still exposing the full k8s API for more advanced features.


> Computing is getting cheaper

Heh.

Looks like a great product, although maybe mention some honest reasons not to use it, instead of the passive-aggressive marketing ones.


Render.com has a similar value proposition. I've used them and am pretty happy. Railway seems to have more observability bundled in, which I'd like to see in Render.

Yes. Have you seen miget.com, by any chance? You can start with the free tier and have a backend with a database for free (the 256Mi plan). If you need more, just upgrade. They redefined cloud billing. Worth checking out.

VPS + Dokploy gives you just as much functionality with an additional performance boost. Hostinger has great prices and a one-click setup. Good for dozens of small projects.

+1 for Dokploy; it's very flexible and lets me set up my sites how I need. Especially the way I set up a static landing page at the root, while /app goes to the React app, /auth goes to a separate auth service, etc.
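That path-based split is ultimately just reverse-proxy routing. A rough sketch of the idea in plain nginx terms (hypothetical ports and paths; Dokploy wires up the equivalent through its own proxy layer for you):

```nginx
server {
    listen 80;
    server_name example.com;

    # Static landing page served at the site root
    location / {
        root /var/www/landing;
        index index.html;
    }

    # React app mounted under /app
    location /app/ {
        proxy_pass http://127.0.0.1:3000/;
    }

    # Separate auth service under /auth
    location /auth/ {
        proxy_pass http://127.0.0.1:4000/;
    }
}
```

The trailing slash on each `proxy_pass` strips the location prefix before forwarding, so each upstream service can assume it is serving from its own root.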


Context: this is Railway the PaaS company, not your daily commute vehicle (which is good in general, but still bad for many users, like me).

A global train outage would be quite a spectacle, is that even possible?

Yeah, I probably should have made that clear.

> Due to a miscommunication with the factory, the injection pins were moved inside the heatsink fins, causing the cylindrical extrusions below.

What happened after this? Did the factory have to replace the casting mold at their own expense, or did you have to pay for it?


We had to remake half the mold, and I split it 50/50 with the factory.

I was hoping for the /v1/messages endpoint to use with Claude Code without any extra proxies :(


This is a breeze to do with llama.cpp, which has supported the Anthropic /v1/messages API for over a month now.

On your inference machine:

  you@yourbox:~/Downloads/llama.cpp/bin$ ./llama-server -m <path/to/your/model.gguf> --alias <your-alias> --jinja --ctx-size 32768 --host 0.0.0.0 --port 8080 -fa on
Obviously, feel free to change your port, context size, flash attention, other params, etc.

Then, on the system you're running Claude Code on:

  export ANTHROPIC_BASE_URL=http://<ip-of-your-inference-system>:<port>
  export ANTHROPIC_AUTH_TOKEN="whatever"
  export CLAUDE_CODE_DISABLE_NONESSENTIAL_TRAFFIC=1
  claude --model <your-alias> [optionally: --system "your system prompt here"]
Note that the auth token can be whatever value you want, but it does need to be set; otherwise a fresh CC install will still prompt you to log in / auth with Anthropic or Vertex/Azure/whatever.


Yup, I've been using llama.cpp for that on my PC, but on my Mac I found some cases where MLX models work best. I haven't tried MLX with llama.cpp, so I'm not sure how that will work out (or if it's even supported yet).


Well, to whoever downvoted my comment: it's supported now! https://lmstudio.ai/blog/claudecode

Unrelated to the conversation, but the post title was something like "Starlink roam 50GB is now 100GB and unlimited slow speed after that"; a minute later it was changed to "Roam 50GB is now Roam 100GB".

Was this change made by a mod or the OP, and why would someone make that change? I think the original title was more descriptive; the new title is completely out of context, or it implies that everyone uses Starlink and knows what Roam 50GB is.


The guidelines[0] state:

> ...

> Otherwise please use the original title, unless it is misleading or linkbait; don't editorialize.

0: https://news.ycombinator.com/newsguidelines.html


I would say that omitting the crucial detail about unlimited slow-speed access is pretty misleading. It's the difference between needing to set up a fallback channel and not, which halves the complexity.


I think this is one of the cases where strictly applying the guideline fails the reader, but yeah, I can see that this guideline makes sense in most of the (other) cases.


They changed two things and the title only has one of the things, so personally I think that's 'misleading' enough to append the rest.


I believe there is a rule that the HN title should mirror the article's title.


I guess the idea is that the Starlink URL is displayed after the title, so it's redundant, but it definitely makes the title impossible to understand at first glance if you're unfamiliar with Starlink service names.


Yes, this is a weird title change.


The HN website shows the host part of the URL right next to the title, so it says "Roam 50GB is now Roam 100GB (starlink.com)", but it looks strange in my RSS reader


If I were to guess, probably because Musk achieved a self-fulfilling prophecy of hate and discriminatory treatment against him, and now any content obviously related to him gets massive, organic, figurative score penalties.

Tesla and SpaceX posts used to routinely hit the top spots and accumulate thousands of comments here, now they hardly stay an hour on the first page. Someone on the Internet's first headphone amp is now considered more important to people here than the world's largest rocket flying, if that comes with Musk attached.

Obviously, as anybody knows, that's how hate actually works: silent exclusion, not posturing. But that's what they advocated for years, so here's my slow clap...


Very nice! It would be nicer if it were playable on mobile; I know where I'm going to spend my time waiting for my wife at the mall now.


Thanks! I've been trying it with conversations of 20-30 comment threads, with about 5 replies per thread; so far so good.

I heard Chrome has a built-in Gemini Nano model as well; maybe this would be a good example to integrate it with.


I started my project in 2023 and posted it here, and made 20k that year. Traffic slowly decreased during 2024, and last October I officially entered losing territory, where the cost of running it exceeded the total earnings (mostly due to free trials).

It's been a good journey. Thank you so much to whoever keeps running this thread!


Just a small comment, as I don't know if you're planning to wrap up or keep maintaining the product...

I can't find the pricing of the product on the site; I only see that I get '10 free credits', but I don't know how much a credit is worth or what I can do with it.

The home page says it's one credit per diagram, but the docs say it's a certain number of credits per modification (which may or may not be correct, I guess...).

I usually skip a product if I can't find the price. It could also happen that people create a trial account, quickly spend the credits, then find the price and it doesn't suit them. Of course, there are always people coming just for the free credits.

I don't know if this is helpful to you or not, but I hope so :)


Thank you so much, that is a fair point! It's part of a series of mistakes I made: the product started out as free to try and only showed the pricing after the user used up all their credits (I didn't even have a landing page back then). I'll update the landing page to make this clear!


How do you deal with the continuous risk of Google degrading your ranking?

I recently shut down a site I had run for 10 years. Google changed the ranking so often over the years that traffic finally dwindled almost completely, to around 1k visitors per month. It was so frustrating that I just stopped the web server after all those years.


I think the sustainable way is to put more and more backlinks out there, more blog posts, etc. I've actually suffered from it too.


What do you think led to the fall off?


Many reasons: 1) lack of marketing, 2) I stopped working on it for a while, 3) because of #2, the app lacks new features to attract users.

Another reason, though it turned out to never really be a big deal: chatbots from frontier AI labs started to support those niche features (people still come to my app for the flexibility of using multiple AI models).

I think the biggest problem was #2, life kept pulling me the other way.


While everyone is training AI, this man trains a rat. Are you gonna release the open weights (or the rat)?

Great project btw!

