There's no way I am adopting something as important as an IDE from Google, just because I know they are going to kill it within 3 years, if it even survives that long. It will probably die with the Windsurf guy jumping ship again.
Thanks! I write about this briefly in the blog post, but the more detailed answer is there's no need: Zig's grammar is simple/explicit/powerful enough that they pick it up themselves in a weekend. Learning Zig is just not something we need to talk about with new hires, and we hire systems programmers from all backgrounds.
To be clear, we do invest in months of onboarding in terms of understanding the TigerBeetle code base. For example, matklad has recorded nearly a hundred hours' worth of IronBeetle episodes [0].
But I just noticed at one point that people were joining our team and never having any trouble with Zig. The question just never came up. Essential simplicity is a great benefit!
I personally learned Zig by reading https://ziglang.org/documentation/master/ and stdlib source code once I joined TigerBeetle. enums.zig and meta.zig are good places to learn, in addition to usual suspects like array_list.zig.
(though, to be fair, my Rust experience was a great help for learning Zig, just as my C++ knowledge was instrumental in grokking Rust)
I still use Pinboard, which seems more than enough for me. But I would happily pay for a Pinboard that has better search and more responsive customer service.
I still haven't got access to GPT-5 (Plus user in the US), and I am not really looking forward to it, given that I would lose access to o3. o3 is a great reasoning and planning model (better than Claude Opus at planning IMO, and cheaper) that I use in the UI as well as through the API. I don't think OpenAI should force users onto an advanced model if there is no noticeable difference in capability. But I guess it saves them money? Someone posted on X about how giving access to only GPT-5 and GPT-5 Thinking reduces a Plus user's overall weekly request allowance.
I wish they would make a bigger Kindle Scribe. I read PDFs all day on my Scribe, and often I wish the screen were bigger so the font size could be larger.
Then don't. You can find flaws with any and every company, but it gets bizarre that this nit-pick gets posted every time the brand is mentioned. Most people don't care about the GPL.
If you publish code openly for the world to use, it's not realistic that they'd respect any wishes or demands you make regarding that code. I can make an instructional video available to the public on how to chop wood more efficiently. If I demand that anybody who uses my technique only does it with birchwood, nobody is going to care.
The size is good for PDF reading, but PDFs with huge margins or small font sizes don’t work well.
One way to fix the margins issue is to use the “Send to Kindle” feature, which converts PDFs to the Print Replica format and trims their margins in the process. Sideloaded PDFs actually appear with more margins (thus reduced font sizes) than books sent through Amazon’s servers.
It works fine, I would say. In the absence of a bigger screen with comparable DPI (the Kindle Scribe is 300 PPI, which is among the highest for a screen of its size), the Scribe is still one of the better options IMO.
- Python, R, C++
- AI / deep learning: PyTorch, scikit-learn, XGBoost, OpenAI API
- Bioinformatics tools for genomics analysis (e.g., GWAS) and next-gen sequencing data (RNA-seq, DNase-seq, ATAC-seq)
- Building out AWS infrastructure from scratch for large-scale ML/bioinformatics analysis
Resume/CV: On request (just email me)
Email: nh at nafiz.bio
PhD in Computational Biology (AI + Genomics) | Experience building technology from scratch for biotech/ML needs in a startup setting | Experience building teams as the first hire
If you have data and don’t know what to do with it, I can help!
A lot of this can be explained by simple supply and demand. In central New Jersey, the number of people who need a home is always increasing, but the number of houses being built is extremely low, and even those start at $900k-1M.
My use of ChatGPT has organically gone down 90%. It's unable to do any task of non-trivial complexity, e.g. complex coding tasks, or writing complex prose that conforms precisely to what's been asked. I also hate that it has to answer everything in bullet points, even when that's not needed; it has clearly been RLHF-ed into this. At this point, my question types have become what you would ask a tool like Perplexity.
Sure, but consider not using it for complex tasks. My productivity has skyrocketed with ChatGPT precisely because I don't use it for complex tasks, I use it to automate all of the trivial boilerplate stuff.
ChatGPT writes excellent API documentation and can document snippets of code to explain what they do; it does 80% of the work for unit tests; it can fill in simple methods like getters/setters and constructor initialization; I've even had it write a script to perform some substantial code refactoring.
Use ChatGPT for grunt work and focus on the more advanced stuff yourself.
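To make "grunt work" concrete, here is the shape of test boilerplate it fills in reliably. A minimal sketch in Python; `slugify` is a made-up helper standing in for whatever function you paste into the chat, not anything from this thread:

    # Hypothetical example: the kind of boilerplate test ChatGPT drafts well.
    # `slugify` is a made-up helper; you'd paste your real function instead.
    import unittest

    def slugify(title: str) -> str:
        return "-".join(title.lower().split())

    class TestSlugify(unittest.TestCase):
        def test_lowercases_and_joins_words(self):
            self.assertEqual(slugify("Hello World"), "hello-world")

        def test_collapses_repeated_whitespace(self):
            self.assertEqual(slugify("a   b"), "a-b")

        def test_empty_string(self):
            self.assertEqual(slugify(""), "")

    if __name__ == "__main__":
        unittest.main()

Writing a dozen cases like these by hand is exactly the tedium being described; reviewing them after the fact takes a fraction of the time.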
I torture ChatGPT with endless amounts of random questions from my scattered brain.
For example, I was looking up EpiPens (epinephrine), and I happened to notice the side effects were similar to how a stimulant overdose would manifest.
So, I asked it, "if someone was having a severe allergic reaction and no Epipen was available, then could Crystal Methamphetamine be used instead?"
GPT answered the question well, but the answer is no. Apparently, stimulants lack the targeted action on alpha and beta-adrenergic receptors that makes epinephrine effective for treating anaphylaxis.
I do not know why I ask these questions, because I am not severely allergic to anything, nor is anyone I know, and I do not have nor wish to have access to crystal meth.
I've been using GPT to help prepare for dev technical interviews, and it's been pretty damn great. I also do not have access to a true senior dev at work, so I tend to use GPT to kind of pair program. Honestly, it's been life changing. I have also not encountered any hallucinations that weren't easy to catch, but I mainly ask it architectural and design questions and use it as a documentation search engine rather than having it write code for me.
Like you, I think not using GPT for overly complex tasks is best for now. I use it to make life easier, but not easy.
If there is an IDE plugin, then I use it first and foremost, but some refactoring can't be done with IDE plugins. Today I had to write some pybind11 bindings, basically exporting some C++ functionality to Python. The bindings involve templates and enums, and I have a very particular naming convention I like when exporting to Python. Since I've done this before, I copied and pasted examples of how I like to export templates into ChatGPT and asked it to use that same coding style to export some more classes. It managed to do it without fail.
This is the kind of grunt work that years ago would have taken me hours, and it's demoralizing work. Nowadays when I get stuff like this, it's just a breeze.
As for Copilot, I have not used it, but I think it's powered by GPT-4.
I haven't really tried to use it for coding, other than once, indirectly (recently, so not before any decline), and I was pretty impressed: I asked about analyst expectations for the Bank of England base rate, then asked it to compare a fixed mortgage with a 'tracker' (always x points over the base rate). It spat out the repayment figures and totals over the two years, with a bit of waffle, and gave me a graph of cumulative payments for each. Then I asked it to tweak the function used for the base rate, not recalling myself how to describe it mathematically, and it updated the model each time, answering me in terms of the mortgage.
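For what it's worth, the calculation it kept redoing is simple enough to sketch. A rough Python version of the comparison, with made-up principal, rates, and base-rate path (not the analyst figures, and not whatever ChatGPT actually ran), using the standard amortization formula:

    # Rough sketch: fixed vs. tracker mortgage over the first two years.
    # All figures are made up; the base-rate path is a hypothetical forecast.

    def monthly_payment(balance, annual_rate, months_left):
        """Standard amortization formula: M = B*r / (1 - (1+r)^-n)."""
        r = annual_rate / 12
        if r == 0:
            return balance / months_left
        return balance * r / (1 - (1 + r) ** -months_left)

    def base_rate(month):
        """Hypothetical forecast: base rate drifts down 0.1% every 6 months."""
        return 0.0525 - 0.001 * (month // 6)

    principal = 200_000.0
    term_months = 25 * 12
    fixed_rate = 0.050          # 5.0% fixed for two years (made up)
    tracker_margin = 0.0075     # base rate + 0.75 points (made up)

    fixed_total = tracker_total = 0.0
    fixed_bal = tracker_bal = principal
    for m in range(24):
        # Fixed: constant rate, so the recomputed payment stays constant.
        fp = monthly_payment(fixed_bal, fixed_rate, term_months - m)
        fixed_bal += fixed_bal * fixed_rate / 12 - fp
        fixed_total += fp
        # Tracker: payment recomputed each month at the current base rate + margin.
        tr = base_rate(m) + tracker_margin
        tp = monthly_payment(tracker_bal, tr, term_months - m)
        tracker_bal += tracker_bal * tr / 12 - tp
        tracker_total += tp

    print(f"fixed:   paid {fixed_total:,.0f}, balance {fixed_bal:,.0f}")
    print(f"tracker: paid {tracker_total:,.0f}, balance {tracker_bal:,.0f}")

Since the tracker side just calls the base-rate function each month, swapping that one function out perturbs only that branch of the model, which fits the "small change, small recomputation" behaviour described below.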
Similar, I think, to what you're calling 'RLHF-ed', though I think it's useful for code: it definitely seems to keep a kind of scratchpad, stubbing out how it intends to solve a problem before filling in the implementation. Where this becomes really useful is that when you ask for a small change, it doesn't (it seems) recompute the whole thing, but just 'knows' to change one function in what it already has.
They also seem to have it set up to 'test' itself somehow, and occasionally it just says 'error' and tries again. I don't really understand how that works.
Perplexity is great for finding information with citations, but (I've only used the free version) IME it's 'just' a better search engine (for hard-to-find information; obviously it's slower), and it suffers a lot more from the 'the information needs to already be written somewhere; it's not new knowledge' dismissal.
To be honest, when I say it has significantly worsened, I am comparing to the time when GPT-4 had just come out. It really felt like we were on the verge of 'AGI'. In 3 hours, I coded up a complex web app with ChatGPT, which completely remembered what we had been doing the whole time. So it's sad that they have decided against the public having access to such strong models (and I do think it's intentional, not some side effect of safety alignment, though that might have contributed to the decision).
I'm guessing it's not about safety, but about money. They're losing money hand over fist, and their popularity has forced them to scale back the compute dedicated to each response. Ten billion in Azure credits just doesn't go very far these days.
I mean, I feel like it's fairly plausible that the smarter model costs more, and access to GPT-4 is honestly quite cheap, all things considered. Maybe in the future they'll have more price tiers.
People talk about prompt engineering, but then the model fails on really simple details, like 'in lowercase' or 'composed of at most two words', and when you point out the failure, it apologizes and composes something else that forgets the other 95% of the original prompt.
Or worse, it apologizes and then makes the very same mistake again.
This sucks, but it's unlikely to be fixable, given that LLMs don't actually have any comprehension or reasoning capability. Get too far into fine-tuning responses and you're back to "classic" AI problems.
This is exactly my problem. For some things it's great, but it quickly forgets things that are critical for extended work. When trying to put together any sort of complex work, it does not remember things until I remind it, which forces prompts that must contain all of the conversation up to that point, and creates non-repeatable responses that also tend to bring in opinions from its own programming or rules that corrupt my messaging. It's very frustrating, to the point where anything beyond a simple outline is more work than it's worth.
This has been exactly my experience for at least the last 3 months. At this point, I am wondering whether paying that 20 bucks is even worth it anymore, which is a shame, because when GPT-4 first came out, it remembered everything in a long conversation and self-corrected based on modifications.
Since I do not use it every day, I only pay for API access directly, and it costs me a fraction of that. You can trivially make your own ChatGPT frontend (and from what people write, you could have GPT write most of the code, although that's never been my experience).
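For the curious, a frontend really can be this small. A minimal command-line sketch using the openai Python package's chat completions API; the model name and system prompt are placeholders, not a recommendation:

    # Bare-bones command-line "ChatGPT" using the openai package.
    # Assumes OPENAI_API_KEY is set in the environment; model name illustrative.
    from openai import OpenAI

    client = OpenAI()
    history = [{"role": "system", "content": "You are a helpful assistant."}]

    while True:
        user = input("> ")
        history.append({"role": "user", "content": user})
        reply = client.chat.completions.create(model="gpt-4", messages=history)
        answer = reply.choices[0].message.content
        # Keep the assistant's reply in the history so the conversation has memory.
        history.append({"role": "assistant", "content": answer})
        print(answer)

You pay per token instead of a flat $20/month, which is why light usage comes out to a fraction of the subscription price.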