Cards.fun | Software Engineers (Fullstack) | Remote only
We're digitizing Pokémon cards and creating unprecedented financialization of card assets. We're a small bootstrapped team. We're interested in engineers who can wield Claude Code effectively and take a test-driven approach. Bonus points if you have any interest in tokenization of RWAs. Our stack is written in Rust, with a vanilla JS and templated HTML frontend.
Hey there! I tried finding a way to apply, but was unsuccessful. So, I'll put my best foot forward here!
My name is Logan. I'm a software engineer who works mostly in the PERN stack, but a lot of my interests align with what you're looking for (Pokémon, real-world assets). I believe that going forward, RWAs will become a much more attainable way of investing in assets that would normally be out of reach for a lot of people. I'm quite competent with JS and have recently taken a big interest in Rust.
If possible, I'd love to chat more. Is there a way I could contact you?
This isn't true; tariffs are assessed within the US and paid by the receiving firm, not on the sender's side. We run a business with a foreign supply chain and our suppliers have changed nothing; we just get an extra bill to pay to the government when our inventory arrives.
This is related to the removal of the 'de minimis' rule that exempted parcels under $800 from duties. This has caused some European postal services to stop or delay shipping some packages to the US [0]. The Dutch postal service, for instance, has stopped shipping to the US [1].
It's more the public postal services with very cheap international shipping, they typically can't or won't handle import customs/tariffs and operate under the assumption that the packages aren't valuable enough. Many of them don't even have tracking.
I can fill in the blanks in my head, but I doubt they are what you're thinking. Would you mind elaborating on the cause/effect you have in mind? It is difficult for me to imagine this in and of itself being successful. We would also need to solve the allocation of those collected funds, as in many countries it would likely go to welfare, defense, corruption, etc.
I can't speak to Trilogy specifically, but for many bootcamps, the deception is in the marketing about job and salary prospects, as well as padding the hiring statistics by giving jobs to graduates and counting those jobs towards those marketing numbers.
> padding the hiring statistics by giving jobs to graduates and counting those jobs towards those marketing numbers.
I see this brought up quite a bit, and I don't really see the issue. They are offering their graduates jobs, so why should these jobs not count just like any other jobs?
People pay for a bootcamp to get a developer job (with a developer salary), not a teacher's aide job (which undoubtedly doesn't pay the salaries the bootcamp uses in its marketing to attract students). Thus they assume that number reflects the career the bootcamp is promising.
Because the only way that would continue to work is if they got more new students (at the bottom) to pay for the old students' (at the top) jobs, and eventually it (the pyramid) would fall apart.
In other words it looks like a pyramid scheme if the grads only get jobs as teachers.
FWIW, many teachers are only at bootcamps for a year or two before moving on to work for normal companies. I didn’t go through a boot camp, but I do work with people who did and were teachers at those boot camps before working here.
From what I understand, teaching is generally seen as a positive signal (if the boot camp is a good one), because it means they know the material well enough to teach it.
I can totally imagine the opposite becoming true (teaching is a negative signal), creating a positive feedback loop in the opposite direction, and turning the boot camp into a pyramid scheme.
I guess it all depends on the credibility of the boot camp, kind of analogous to universities, as a sibling commenter points out.
This all may be true, but it hand-waves away the point: students don't know this going in; they assume the high cost will get them a developer salary much sooner than it actually does. The marketing should be more transparent about this, and if it were, how many students would rethink the costs given the longer ROI?
Your comment touches on a few misconceptions I see a lot.
Firstly, `reqwest` exposes both an async and a synchronous API, allowing the developer to choose which one to use. They are largely interchangeable code-wise. [1]
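For illustration, a quick sketch of the two flavors (assuming the `blocking` feature is enabled in Cargo.toml):

    // Async flavor, driven by an async runtime such as tokio:
    async fn fetch_async(url: &str) -> Result<String, reqwest::Error> {
        reqwest::get(url).await?.text().await
    }

    // Synchronous flavor, no .await in sight:
    fn fetch_blocking(url: &str) -> Result<String, reqwest::Error> {
        reqwest::blocking::get(url)?.text()
    }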
Secondly, and more broadly, it is possible to opt out of async. You must understand that most web and network related libraries will be async by default for performance, because people who write in Rust and people who write web servers typically care greatly about performance; this is the intersection of those two groups. That being said, there are options outside of that ecosystem. [2]
If you truly want to use an asynchronous library without migrating your application to run entirely on an async runtime like tokio, you can run it inside of a synchronous function without much trouble. I've put together a playground link for you. [3]
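The general shape is something like this (a sketch assuming tokio as the executor):

    fn fetch_sync(url: &str) -> Result<String, Box<dyn std::error::Error>> {
        // Build a small single-threaded runtime just for this call.
        let rt = tokio::runtime::Builder::new_current_thread()
            .enable_all()
            .build()?;
        // block_on drives the async code to completion on the calling thread.
        let body = rt.block_on(async { reqwest::get(url).await?.text().await })?;
        Ok(body)
    }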
"reqwest" exposes a synchronous API, but it's doing async stuff underneath. If you turn on logging, you can see 30 or so async events associated with a single HTTP client side request. It's starting up and shutting down a polling thread just to make one HTTP client request.
You must understand that most web and network related libraries will be async by default for "performance". That's what scares me - async contamination of the low level Rust ecosystem. The async enthusiasts have to be kept in check to prevent breaking Rust as a systems language.
I had understood your concern as wanting to avoid the complexity of asynchronous code execution in your codebase, I did not realize your concern is about writing very low level systems code. In that case, you are doing the right thing: libraries like ureq, minreq, Isahc, curl, and more all offer what you want.
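For example, with ureq (the 2.x API, if I'm remembering it right) the whole thing stays synchronous and no async runtime is ever involved:

    fn fetch(url: &str) -> Result<String, Box<dyn std::error::Error>> {
        let body = ureq::get(url)
            .call()?            // blocking request on the calling thread
            .into_string()?;    // read the body into a String
        Ok(body)
    }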
It is unclear to me what you mean by keeping the community "in check". There are a lot of people who rely on and enjoy the async story, and they will continue to produce code that improves that story. Simultaneously, there are people who do not need that, and they are not hindered by this. People will build what they want and need. You've just picked some libraries from some of the biggest async contributors in the community and requested that they be kept in check so that you don't have to switch to a synchronous alternative, of which there are plenty.
It's not that "low level". It's that it doesn't fit the model of "mostly waiting for the network". Here are some of the things I have going on:
- Incoming event UDP packets from multiple servers. These arrive at the client and go into a queue for processing.
- Refreshing the 3D window. This ties up one thread almost full time. At the beginning of each frame, it reads queued events that tell it what needs to change in the GPU. The rest of the time it feeds the GPU.
- Some incoming events require querying external HTTP servers to retrieve assets. When the results come back, they include compressed items which have to be decompressed, processed, and turned into GPU-ready textures or meshes. This is prioritized by how important it is to display that object right now, based on the viewpoint. So there are priority queues along with multithreading.
There's more, but you get the idea.
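(For concreteness, a toy sketch of the "priority queue plus worker threads" shape, with hypothetical names, not the actual code:)

    use std::collections::BinaryHeap;
    use std::sync::{Arc, Mutex};
    use std::thread;

    // Hypothetical job type: higher `priority` means the viewer needs it sooner.
    #[derive(PartialEq, Eq, PartialOrd, Ord)]
    struct AssetJob {
        priority: u32,
        url: String,
    }

    fn main() {
        // Shared max-heap, fed by the event side and drained by decode workers.
        let queue = Arc::new(Mutex::new(BinaryHeap::new()));

        // Event side: incoming events request assets at various priorities.
        {
            let mut q = queue.lock().unwrap();
            q.push(AssetJob { priority: 3, url: "tex/distant-wall".into() });
            q.push(AssetJob { priority: 9, url: "mesh/right-in-front".into() });
        }

        // Decode workers: always take the most urgent job, do the CPU-heavy
        // work off the render thread, then hand results to the GPU side.
        let workers: Vec<_> = (0..2)
            .map(|_| {
                let queue = Arc::clone(&queue);
                thread::spawn(move || loop {
                    let job = queue.lock().unwrap().pop(); // lock released here
                    match job {
                        Some(job) => println!("decoding {} (priority {})", job.url, job.priority),
                        None => break,
                    }
                })
            })
            .collect();

        for w in workers {
            w.join().unwrap();
        }
    }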
What's so great about Rust is that you can do stuff like this without crashing all the time, spending your life in the debugger, or recopying big objects to safely pass them around.
The previous implementation, in C++, looked a lot more like an "async" model. It had lots of "coroutines", mostly running off a single thread. It also had a few things running as independent threads because they were so CPU-intensive. It was very prone to short stalls that annoyed players. This happened because something had to do more work than expected and briefly stalled out the coroutine system. The killer in async systems is the subroutine that is usually fast but sometimes slow. So I've seen this problem done in "async" style, and it didn't work well.
I've previously done robotics work which had many asynchronous tasks. I've used QNX for that, and I've used ROS. There, you have a lot of intercommunicating processes, which works but has more overhead.
None of this maps well to an "async" model.
Part of the problem here may be that I've done a lot of multi-thread programming and am used to it. It's an alien approach to programmers who came up from Javascript. That's a big fraction of the web backend crowd.
(Personally, when I have to do a web service, I write it in Go. That's the use case for which Go is designed. It has the libraries for that, and the goroutine concept is well matched to that task.)
> I've previously done robotics work which had many asynchronous tasks. I've used QNX for that, and I've used ROS. There, you have a lot of intercommunicating processes, which works but has more overhead.
I wonder, why is this kind of system a bad fit for the async model? Is it because all threads must be reliably preemptable?
Doesn’t matter who he is, if he’s mad that libraries made for and by web and network developers are using a concurrency model that works well for their applications, he should use different libraries.
It does matter -- he was one of the first "network developers". He published RFC 896 over 35 years ago; he has more experience on this topic than almost anyone.
> Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith.
This is not the strongest plausible interpretation of what he said --
He's not asking for people to not develop async code. He's asking for them to not hide it in synchronous code.
If you're expecting a blocking system call, and actually get a brand new background thread that's polling, it's quite reasonable to be frustrated.
>If you're expecting a blocking system call, and actually get a brand new background thread that's polling, it's quite reasonable to be frustrated.
It really isn't, unless the documentation outright says that it's single-threaded and not thread-safe. For a lot of simpler use cases where you just want to ship a thread-safe API (e.g. the application does not have its own thread pool), it just makes sense to use some kind of automatic thread pooling. The caller does not have to know or care how the internal state machine is implemented.
If you have implemented your own thread pool, it seems you should know enough to dig down to the lower layers where you can get to that blocking syscall, or at least to the point where you can strip off the O_NONBLOCK flag yourself.
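(If it helps, std exposes that knob directly; a minimal sketch:)

    use std::net::TcpStream;

    fn main() -> std::io::Result<()> {
        // std sockets block by default; the flag is under your control either way.
        let stream = TcpStream::connect("example.com:80")?;
        stream.set_nonblocking(true)?;  // opt in to non-blocking reads/writes
        stream.set_nonblocking(false)?; // or force plain blocking syscalls
        Ok(())
    }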
Let me try to rephrase this in a way that doesn't pin the blame on "async enthusiasts" as people, and see if you agree:
Many years ago, well before Rust 1.0, Rust used its own M:N threading system, used segmented stacks, had its own libuv-based event loop, etc. Also, it had garbage collection built into the language.
These were removed before 1.0, which made Rust a lot better as a systems language: you could reliably embed it into non-Rust programs, you could reliably interoperate with non-Rust libraries that didn't expect to be moved around threads, you didn't need to care about starting up the GC (or handling GC pauses), etc.
This was a good decision for Rust, and it turns out most of the things people wanted to do with these features could be done outside it - e.g., the borrow checker avoided the need for pervasive GC. (Though almost certainly not intentional, one side effect is that it distinguished Rust from Go: Go is great for standalone programs that need lightweight concurrency, but it's very bad at being embedded into other code and not the best choice if you're mostly calling FFI libraries.)
First, it would be good for Rust to stick to that decision. Rust should not regain a pausing GC in the standard library - similarly, it should not regain a thread manager in the standard library.
Second, it would be good for Rust libraries to work within the spirit of that decision. That's a lot harder, because part of the expectation when those features were removed was that some needs - notably around event-based processing - would be met by third-party libraries. As I understand it (and I might be totally wrong!), it was honestly a bit of luck that the borrow checker worked as well as it did and was ready at the right time, and the expectation was that someone would add a GC library and it would be widely used. However, it's very good that no widely-used GC library sprung up. In the same vein, it would be good for there to be no widely-used third-party thread manager library.
This might be hard, possibly requiring a borrow-checker-level miracle, but it's worth aiming for. And if there has to be a thread manager (or a garbage collector), it should not be part of core Rust.
Would you agree with that phrasing?
--
Incidentally, why does reqwest start up a polling thread when called in sync mode? Can't it do the polling on the main thread? (Or in other words, async programming doesn't imply multithreaded programming. I actually sort of expect that async programming is better suited to single-threaded programming with an event loop, because if you're okay with threads, you may as well just write synchronous code on threads! So either there is something subtle and very interesting here, or there's an easy fix, or I'm misunderstanding something badly.)
I think this rephrasing misses the mark a bit on the original concern. GP explicitly states that he wants Rust-the-language to remain as it is - close to the metal, no GC, with minimal runtime and 1:1 threading. The concern is indeed with the libraries/ecosystem. We are not quite there yet, but it is not hard to imagine that in a few years somebody who asks how to do some simple task in a blocking fashion will be met with replies in the vein of "well, that's not idiomatic", "why don't you use async", or "there was a library for that but it is now kind of unmaintained". All because the main effort of the community went into supporting and maintaining the async mode.
If the reqwest library indeed spawns a thread for every request, it is a good example of that dynamic. Sync mode kind of works but is clearly a second-class citizen and works in a suboptimal fashion. And if you want to peek under the hood to debug it you still have to deal with the async machinery in all its gory detail.
So that gets at my second question. I would expect that if async mode is working well, it specifically avoids needing libraries to spawn a thread.
Or put another way: when Rust removed M:N threading and shipped out of the box with no event-handling support after removing librustuv, the recommendation was that libraries should use threads to handle concurrency and make blocking calls on each thread, the reasoning being that modern OSes make threads perform well, so why not. Isn't the whole point of revisiting async to avoid that answer?
I have the same use cases of wanting Rust to be a close-to-the-metal language with a minimal runtime that you can safely plop in place of any C code, and it seems to me that the way to do that is to get the async story to be so good that people start saying "Well, that's not idiomatic" and "That approach was common but the libraries are all unmaintained" to libraries that spawn threads. What am I missing? Why are we associating "more async" with "more threads" instead of "remain on the calling thread and use an event loop"?
That doesn't seem to be related to async? I don't know the details of rust's async implementation but that sounds like a problem with your application's setup -- you should be able to have a single threaded async executor that uses an event loop, or in simple cases, just calls poll/select directly?
To put it another way, it's unfortunate that particular synchronous API is implemented using threads, but there is nothing about async that implies one way or another that a synchronous method will be implemented using threads -- I've seen plenty of (questionable) C functions that do similar things like using pthread_create and then pthread_join immediately after to fake a blocking task.
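Concretely, the futures crate already ships a minimal executor that polls a future on the calling thread, with no extra threads involved (sketch):

    use futures::executor::block_on;

    async fn answer() -> u32 {
        40 + 2
    }

    fn main() {
        // No runtime threads are spawned; the future is polled right here.
        println!("{}", block_on(answer()));
    }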
Er? No, the point is that threads are what you want for cpu-bound tasks. Async does not deal well with long running cpu intensive jobs that hog the cpu without yield points.
But regardless, the GP post was not talking about matrix math; it seems it was talking about sending an HTTP request and waiting for a response, which is something that actually is I/O bound on the TCP socket.
The systems that use it as a native threading model are obsolete, but there's also this sentence there:
>Cooperative multitasking is used with await in languages with a single-threaded event-loop in their runtime, like JavaScript or Python.
There's no reason rust can't have an executor that does the same, and you only use that within the event loop on your one or two HTTP worker threads. If you're waiting in a thread for an HTTP request to return, that's never going to be CPU-bound. I still am failing to see what the problem here is besides a complaint about some rust crate only supporting a multi-threaded executor, which again is a different problem than whether it's done with async futures or not. One could just as easily write some C code that forces the use of threads.
Those languages are also well known for not handling multiple cpu bound threads well. (And for that matter, it's simply wrong about Python, which uses native threads, but locks very heavily: you need to write native code to use more than one core effectively from one process.)
The goal is to NOT do what you're suggesting. It's a holdover from when native threads were much more expensive than they are today, and multiple cores on a single cpu were rare.
Complexity kills code. Being able to reason about what your code is doing is FAR more valuable to me than async. Having tokio act as my runtime and switch tasks as it sees fit will be debug hell.
The problem I see is the current async story is opt-out. It's use async or go find something else. Async should be opt in. As in, the code works regardless of an async runtime, async is added magic if you want it, but it will run like normal single threaded code if not.
Yes, in fact, you cannot even use async Rust without writing your own executor or bringing one in via a library. It is very, very much opt in. That was a hard constraint on the design.
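A tiny illustration (a sketch using the pollster crate as the executor, but any executor works):

    async fn greet() -> &'static str {
        "hello"
    }

    fn main() {
        let fut = greet();                 // nothing has run yet: futures are inert
        let msg = pollster::block_on(fut); // opting in: an executor polls it to completion
        println!("{msg}");
    }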
However, I think what the parent is getting at is the feeling of the total package, not the technical details. If every library you want to use is async, you can't really "opt out" exactly, even if technically the feature is opt-in.
An executor is required, in name or in spirit. Every async system has software that does this. Most language runtimes that include one simply give you no choice in the matter.
Rust is a language that does things differently from other languages because it is a better way. I challenge you to do the same with async. There is a different, better way.
Anecdata: as a Texan from a rural town currently working in software in Seattle, I've had multiple experiences with this. People assume my political views or agenda far before they know me. Luckily I don't believe it has impacted my actual job search thus far.
This is my first "Show HN" submission here. This is an ANSI X12 EDI parser and generator for Rust. It has already been used commercially for multiple EDI pipelines and can handle any specification-compliant X12 document. It can both parse and output valid EDI documents while maintaining the versatility to cover the entire spec. There is also a `loose_parse` mode which is less strict about the spec, in case the incoming data is slightly malformed.
I hope this crate helps some companies stuck with antiquated EDI pipelines eliminate some old tech cruft.