
It’s certainly been my experience that page sizes should be bigger than you initially expect. Paginated endpoints are typically iterated all the way through, meaning you’re going to return that data anyway, so you may as well save the overhead of the extra requests.

Not implementing pagination at the outset can be problematic, however: if your data later grows and you need to paginate it, adding that is a breaking change. Large page sizes combined with pagination can be a reasonable balance.
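
To make that concrete, here's a rough sketch of the shape I have in mind: cursor-based pagination with a generous default page size, so the usual "iterate all the way through" caller makes few requests while the cursor keeps the contract stable if the data grows. All names and numbers here are made up for illustration.

    // Sketch only: cursor-based pagination with a large default page size.
    type Page<T> = {
      items: T[];
      nextCursor: string | null; // null when there are no more pages
    };

    const DEFAULT_PAGE_SIZE = 500; // bigger than you might first reach for
    const MAX_PAGE_SIZE = 1000;

    function paginate<T>(
      all: T[],
      cursor?: string,
      pageSize: number = DEFAULT_PAGE_SIZE
    ): Page<T> {
      const size = Math.min(pageSize, MAX_PAGE_SIZE);
      const start = cursor ? Number(cursor) : 0;
      const items = all.slice(start, start + size);
      const nextCursor = start + size < all.length ? String(start + size) : null;
      return { items, nextCursor };
    }

    // Typical consumer: loop until the cursor runs out, so a bigger page size
    // mostly just trims request overhead.
    async function fetchAll<T>(
      fetchPage: (cursor?: string) => Promise<Page<T>>
    ): Promise<T[]> {
      const out: T[] = [];
      let cursor: string | undefined;
      do {
        const page = await fetchPage(cursor);
        out.push(...page.items);
        cursor = page.nextCursor ?? undefined;
      } while (cursor !== undefined);
      return out;
    }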


I had a beloved F91W that I used for years. One day, while surfing in Morocco, it gave out: the screen became foggy, showed an error code, and never recovered. I guess I should have gone deeper.


This model doesn't seem to display error codes for anything. I think their failure mode is just to "not work properly anymore".


Although I'm only on job three and have not had that much involvement with open source, I think my current employer (Attio) has one of the best codebases I've seen.

Qualitatively, I experience this in a few ways:

* Codebase quality improves over time, even as codebase and team size rapidly increase
* Everything is easy to find. Sub-packages are well-organised. Files are easy to search for
* Scaling is now essentially solved and engineers can put 90% of their time into feature-focused work instead of load concerns

I think there are a few reasons for this:

* We have standard patterns for our common use cases
* Our hiring bar is high and everyone is expected to improve code quality over time
* Critical engineering decisions have been consistently well-made. For example, we are very happy to have chosen our current DB architecture, avoided GraphQL and used Rust for some performance-critical areas
* A TypeScript monorepo means code quality spreads across web/mobile/backend
* Doing good migrations has become a core competency. Old systems get migrated out and replaced by better, newer ones
* GCP makes infra easy
* All the standard best practices: code review, appropriate unit testing, feature flagging, ...

Of course, there are still some holes. We have one or two dark forest features that will eventually need refactoring/rebuilding; testing needs a little more work. But overall, I'm confident these things will get fixed and the trajectory is very good.


I've gotten a lot of use from a version of Zettelkasten in Roam recently.

The general idea is this.

1. Take "Literature Notes" in your own words as you consume content. These are little summaries of ideas in the text that are usually 1-3 lines. For paper books, I do this in a little Muji notebook. For digital resources, these go straight in Roam. 2. When you have finished the resource, copy your notes to one page and summarise them in "Permanent Notes". 3. Keep your Permanent Notes in one Permanent Note page and link everything together.

The best guide I found for getting this kind of system set up was https://www.nateliason.com/blog/smart-notes. The original "How to Take Smart Notes" book is also good, but much less concrete.

Really, I found step 1 to be the most valuable. I was previously processing highlights from articles and books, which weirdly took more time. Writing little notes as you go along captures only what is relevant and is much more directly tied to your use cases.

As a word of warning though, I really haven't gotten this stuff to work very well for programming (or other disciplines where you need to exercise knowledge rather than archive it in a system). I find I either need to memorise the material more directly with Anki, or to practise the skills in a real-world context.


As a non-systems engineer, I’ve found Julia’s blog immensely helpful. She does a great job of explaining a wide range of topics and also tackles soft skills very well.


Going to test this out on Hinge voice prompts and see what happens


Damn, this looks great. I'm very happy with my ErgoDoxes but the domed keys on this would make me seriously consider it if I were buying today. IMO, the split keyboard is super important for not messing up your shoulders with hunched-over typing.


Just wait until you hear about git blame


I think create-react-app is much more beginner-friendly: fewer concepts to learn. Rolling your own or starting with script tags is also an option (see this course for an example: https://egghead.io/courses/the-beginner-s-guide-to-react)


And create-react-app would be all I would ever need, except that SEO on single-page apps that don't load their data until after the initial render really sucks compared to server-side rendered pages. Is there a way to build a site using create-react-app and enable server-side rendering without adopting a whole other framework like Next.js?


We used React Router's SSR process described here[1] on my last React project. It worked OK, but there's a bit of hoop-jumping to go through if your pages are populated by async data.

[1] https://reactrouter.com/web/guides/server-rendering
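
For anyone curious, the hoop-jumping looks roughly like this: resolve the async data first, render synchronously, then embed the data for the client to hydrate with. This is only a sketch assuming React Router v5's StaticRouter and an Express server; App and loadPageData are hypothetical stand-ins for your own root component and data loader.

    // server.tsx - sketch, not production code
    import express from "express";
    import React from "react";
    import { renderToString } from "react-dom/server";
    import { StaticRouter } from "react-router-dom";
    import { App } from "./App";           // hypothetical root component
    import { loadPageData } from "./data"; // hypothetical async loader

    const server = express();

    server.get("*", async (req, res) => {
      // 1. Resolve async data up front, because renderToString is synchronous.
      const data = await loadPageData(req.url);

      // 2. Render the app with the data already in hand.
      const markup = renderToString(
        <StaticRouter location={req.url} context={{}}>
          <App initialData={data} />
        </StaticRouter>
      );

      // 3. Ship the same data to the client so hydration matches the markup.
      res.send(`<!DOCTYPE html>
    <html>
      <body>
        <div id="root">${markup}</div>
        <script>window.__INITIAL_DATA__ = ${JSON.stringify(data)}</script>
        <script src="/bundle.js"></script>
      </body>
    </html>`);
    });

    server.listen(3000);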


Next.js really isn't that bad. If you need SSR, it is definitely the best way to go.


In my experience, Next.js has been very good about not introducing breaking changes (especially by JavaScript-ecosystem standards). The changes are more along the lines of lots of new and useful things (e.g. image optimisation) rather than problematic maintenance churn.

I agree cookies can be a bit awkward with Next but I've always found a way around.
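
As a sketch of the kind of workaround I mean (the "theme" cookie and the page props are made up for illustration): read the cookie in getServerSideProps so the first server render already reflects it.

    // pages/example.tsx - illustrative only
    import type { GetServerSideProps, NextPage } from "next";

    type Props = { theme: string };

    export const getServerSideProps: GetServerSideProps<Props> = async ({ req }) => {
      // In the pages router, Next.js exposes parsed cookies on req.cookies;
      // you can also parse req.headers.cookie yourself if you prefer.
      const theme = req.cookies["theme"] ?? "light";
      return { props: { theme } };
    };

    const ExamplePage: NextPage<Props> = ({ theme }) => (
      <div data-theme={theme}>Server-rendered with the cookie applied</div>
    );

    export default ExamplePage;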

