I have an M2 for work. macOS still gives me random lag spikes sometimes, but it's probably the OS being bad rather than the hardware; this is never an issue on Windows/Linux. I'm not a fan. I'm just saying: if the hardware is so superior, why does it feel so inferior? My $500 Linux laptop feels smoother and faster (GUI-wise).
I’ve had a few corporate MacBooks sent to me by clients for security testing and they always seem much slower than my own. It’s normally down to security tooling they install on them.
Careful: your home directory has a CloudStorage folder, and if you're using, say, Dropbox or Google Drive, that find will be incredibly slow (in addition to security software possibly slowing it down).
On the one hand, a lot of security software is poorly written, eats resources like it's Chrome, and introduces all kinds of microstutters through (exclusive) locks all over the place.
On the other hand, many operating systems don't provide a (reliable) API to design decent security software against. Log collection is all over the place, even on Windows, which traditionally had Event Log as a well concentrated logging destination. There's no way to write good security software for an operating system that's written without security software in mind.
If I were handling important secrets, I wouldn't want a fleet of machines out there with just the basic antivirus that came preinstalled with the OS (if it came with one at all). On the other hand, so many pieces of "enterprise" security management are absolutely terrible, and require one (or more) full-time employee(s) to constantly configure them, communicate with users, and solve problems, just to keep software in check.
I think both operating systems and security management software need to listen to each other, and change. Operating systems need to be written with security stuff in mind, and security software needs to focus on a good user experience rather than collecting shiny buzzwords to sell to management.
> Operating systems need to be written with security stuff in mind, and security software needs to focus on a good user experience rather than collecting shiny buzzwords to sell to management.
They won't, because by the time the users get fed the "food", the contract has long since been signed and is valid for a few years, and the competition isn't any better, so even if management could be arsed to vote with their wallet, they couldn't.
Is there an actual term for "the entire industry is bullshit, but can get away with it because the cost of entry is so high that new, less bullshit competitors can't even enter the industry"?
Yeah. At my previous job we had all kinds of JAMF management software and CrowdStrike (??), and it was a massive performance killer.
In particular, it seemed to be configured to scan every file on disk access, which was a performance nightmare for anything that involved accessing lots of files, like Git on our large repo. Spotlight indexing seemed to cause some pretty big random lag spikes as well.
Do you work for a big corporation? They tend to install a ton of spyware on work machines they issue. Lots of flakey security and monitoring software with overlapping functionality.
Is this a work laptop at a large corporation? I’ve found things my company installs can bring any system to its knees. The M1 was the first time my fans didn’t kick on randomly due to some background process, which has been nice. Windows or macOS, they found ways to screw up both. I’m sure they’d do it to Linux too, if it was supported.
I get this on my Mac Studio when I do anything CPU-intensive, the UI becomes jittery. macOS on M CPUs seems to be particularly sensitive to this because I can push a Ryzen much harder and not notice anything in Windows.
Windows XP used to have that issue until they released some patch that prioritized UI when CPU was under stress. Linux had the same issue some time ago as well. I guess Apple will need to do something similar...
When you say lag spikes, what kind of thing are you talking about? I have an M1 iMac I run pretty hard and I'm not sure what kind of behaviour you're seeing.
Anything weird showing in Activity Monitor or Console?
If it refers to random beach balls for 30 seconds when accessing the disk, either directly or through apps... yeah, I get that. I found it disappointing, a flaw in an otherwise great machine (M1 Mac mini).
I have 16GB too. I have Time Machine backing up to a disk plugged into an AirPort Extreme, could that be the issue? (The machine connects to the network over Wi-Fi, if that matters.)
I used to hit this on an M1 with low RAM while absolutely assaulting it with a project that used docker compose to run far too many services.
These days I can't seem to hit my M2 Max hard enough; nothing slows it down. Just about anything will have lag spikes if you crank too much stuff through it, so I assume that (like me) you're experiencing this on machines running macOS with around 8-16GB RAM and an M1 at best?
>Like.. my $500 linux laptop feels smoother and faster (GUI wise)
Which Linux do you use? We've had machines fast enough for fast, smooth GUI rendering for decades now. It's just that most software isn't optimized for anything beyond hardware from the last few years.
Linux tends to be quite good on resource consumption. I can still run smooth and fast Linux on my 15 year old laptop. I can't do the same with Windows.
IME, Intel's Clear Linux is unbelievably good. Most people never give it a chance, but it runs great on a Core i3 from 2017 with just 8GB of RAM… and this is an OS that uses containers for almost everything. Just wonderful work by Intel's team and contributors.
One thing to check: are you using iTerm? By default (or at least this used to be the default; not sure if they fixed it), iTerm keeps indefinite scrollback history in memory. It's perfectly possible for this to consume all your memory. It's worth checking memory usage in Activity Monitor anyway.
That said, "terrible corporate spyware" would be my first guess here.
Same - I've used multiple ARM Macs and they just freeze up for seconds sometimes. At this point I'm pretty sure it's related to their swapping behavior.
The OS really has zero hesitation to swap out sometimes insane amounts, and by the time you feel it in the UI responsiveness you might already be swapping 10-20GB.
8GB is just too little for the base model. From what I've seen the OS alone will use 20-50% of that, and if I do anything productive with a couple applications and browser tabs I'm soon at 100% memory usage. Which can cause lags or short freezes when switching between applications. But on 16GB it's a very different experience, it gives decent breathing room.
I think Apple really shot themselves in the foot with this, anyone using even half the CPU's potential is guaranteed to have significantly degraded performance and a worse experience.
1. You can share templates between front- & back-end using any language. (I'm not talking about WASM.)
2. CSS has NOTHING to do with js/ts.
3. Most single-page js applications require SSR anyway, otherwise you have a blank screen or a spinner until the browser has downloaded & initialized everything.
Personally, I don't care if it's SSR or SPA. But the js/ts community tends to use things like webpack in combination with ~20 packages which themselves rely on ~20 more, resulting in index.js files of 2MB+... That's bad programming.
Man, if you think 2MB is big, you should check out the size of the build product from a compiled language. And if you have an issue with nested dependencies, you should check out building a Python or C application, which similarly requires you to grab all the dependencies, and then the dependencies of the dependencies, before building. That's just what dependency management is about.
If you have a frontend app that requires loading a 2MB bundle in the browser, then whoever configured that application did not know what they are doing. There are lots of ways of optimizing JS bundle sizes, and SSR is actually one of the best. With SSR, only the code that executes on the frontend gets included in the client bundle. Webpack is one of many build orchestration frameworks you could use, though honestly at this point you rarely have to actually write custom configuration for frontend applications. A great deal of standardization has happened over the last 5 years, and generally you just use a template for your use case.
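One of those optimizations can be sketched concretely. This is a hedged illustration, not from the thread: with any bundler that understands dynamic `import()` (webpack, Vite, etc.), the imported module is emitted as a separate chunk fetched on demand, so it never inflates the initial index.js. In a real app the argument would be a local module like "./chart"; here a Node built-in stands in so the sketch runs as-is.

```typescript
// Sketch of code splitting via dynamic import(). A bundler turns the
// import() call below into a lazily loaded chunk; the module's code is
// only fetched the first time this function runs, not on initial page load.
async function describeCount(n: number): Promise<string> {
  const { format } = await import("node:util"); // stands in for a heavy library
  return format("%d file(s) scanned", n);
}

describeCount(3).then((s) => console.log(s)); // prints "3 file(s) scanned"
```

A route-level version of this (one chunk per page) is what most SSR frameworks set up automatically.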
As someone who has worked all over the stack, from API development to data pipelines to infrastructure to client-facing application, I find the dismissive attitude of other parts of the stack incredibly bizarre. It's a tool, it exists for a reason, and if you don't see the reason it's probably because you don't understand the problem.
Horrible article; the title is super clickbait and totally not what the article represents. There is no backfire, and all it says is that some people don't know what can or cannot be recycled...
At the moment, if you want a non-GC language with automatic memory management, Rust is your only choice. ki is an alternative, but my goal is to make it much simpler than Rust. I need to spend some more time writing Rust & ki in order to answer your question with full certainty.
! ignores the function error (only possible with void return types)
!? provides an alternative value when the function errors
!! exits the current scope on an error, e.g. return, continue, exit, ...
It doesn't seem that complex. Of course, there are also '??' and '?!', which might make it more difficult. It's not vague, actually: if it starts with '!', it's a function error handler; if it starts with '?', it's a null check.
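Since ki isn't released yet, here is a rough Rust analogy (my sketch, not ki syntax) of what the three '!' forms described above would map to when the callee returns a `Result`:

```rust
// Hypothetical mapping of ki's '!', '!?' and '!!' onto Rust idioms.
use std::num::ParseIntError;

fn parse(s: &str) -> Result<i32, ParseIntError> {
    s.parse::<i32>()
}

fn demo() -> Option<i32> {
    // '!'  — ignore the error entirely (only sensible when no value is needed):
    let _ = parse("oops");

    // '!?' — substitute an alternative value when the call errors:
    let fallback = parse("oops").unwrap_or(0); // fallback == 0

    // '!!' — exit the current scope on error (roughly Rust's `?` operator):
    let ok = parse("42").ok()?; // would return None from demo() on failure

    Some(fallback + ok)
}

fn main() {
    assert_eq!(demo(), Some(42));
}
```

The '!!' case is the interesting one: it bundles the early-return pattern that Rust spells explicitly with `?`.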
"It does not have any garbage collection and instead uses ownership combined with minimal ref counting to manage memory"
Because we only allow you to store values with ownership inside other objects, you cannot have a circular reference. It uses reference counting to know whether something needs to be freed. But because we keep track of ownership and moved values, we are able to run an algorithm that removes most of these count operations.
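As an aside, Rust's `Rc` shows the plain refcount mechanics such a scheme starts from; the claimed win for ki is that static ownership tracking elides most of the count updates. A minimal sketch (Rust, not ki):

```rust
use std::rc::Rc;

// Observe the strong count before a second handle exists, while it
// exists, and after it is dropped.
fn rc_counts() -> (usize, usize, usize) {
    let a = Rc::new(String::from("shared"));
    let before = Rc::strong_count(&a); // 1: only `a` co-owns the allocation
    let b = Rc::clone(&a);             // a second co-owning handle
    let during = Rc::strong_count(&a); // 2
    drop(b);                           // handle gone, count decremented
    let after = Rc::strong_count(&a);  // 1 again; the string frees when `a` drops
    (before, during, after)
}

fn main() {
    assert_eq!(rc_counts(), (1, 2, 1));
    // With only "downward" ownership and no back-pointers, no cycle can
    // form, so plain counting is enough to free everything.
}
```

Every clone/drop pair here is a count update; an ownership analysis like the one described above would aim to prove most of them redundant and remove them at compile time.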
I think V is faster than ki, Rust, or Go, but their memory management isn't watertight. Also, they have been in development for a really long time and there isn't much progress. They should have reached 1.0 by now, but they haven't, and I think it's because the language might have problems.
> ...they have been in development for a really long time and there isnt much progress.
> They should have reached 1.0 by now...
These are strange statements. Anyone with some familiarity with newer programming languages is likely to be confused about where this is coming from.
V is a relatively new language that came out in 2019 and is in beta. So comparatively speaking, V has progressed well at the least, arguably quite fast. V also has more GitHub stars than all the languages listed below (in beta) combined (per GitHub's OSS language rankings). Let's look at the starting dates for these popular newer languages:
1) Zig came out in 2016 and is still in beta.
2) Odin came out in 2016 and is still in beta.
3) Jai started in 2014 (full time around 2017) and is still in beta.
4) Red came out in 2011 and is still in beta.
The languages that we see today at the top of the rankings (like TIOBE) are all fairly old to quite old. The youngest languages in the top 20 (your post mentions Rust and Go) came out in 2015 (Rust), 2014 (Swift), and 2009 (Go). All of these have huge corporate support.
I thought V was older, hmm, ok. But still, all these languages should have reached 1.0 by now. Except for Jai, because jblow has other things to do. Wasn't C created in 2 weeks?
No, because C was derived from B. B was created in 1969 and formed much of the basis of what would become C, which is given the release date of 1972. So we can argue for at least around 3 years of development of what would be named C before it reached a stable or usable-enough state.
Languages and goals were much simpler in the past. Stages of development like alpha, beta, or what counted as 1.0 got kind of mixed up. It wasn't as clear a process as we have today.
> Except for Jai, because jblow has other things to do
Surely the other language creators had/have other things going on too. One of the main differences, which I was attempting to clarify in the previous post, is that certain programming languages have huge corporate backing, which affects their development time. C (AT&T), Go (Google), Rust (Mozilla), and Swift (Apple) reached 1.0 or stability faster because of who was supporting and pushing them.
Independent and more grassroots projects can sometimes achieve 1.0 in comparable times. But this seems to depend on how exceptionally talented the lead developers are, their experience (having created other languages before), the goals of the project (simpler is often easier), popularity, and how many contributors and sponsors get involved.
Crystal looks to have taken around 7 years to reach 1.0 (though with Windows support issues). Julia comes in at about 9 years. Nim, another notable project, appears to have taken around 11 years. We might expect things to go a bit faster now than back then, but within reason. And that's referring to programming languages that are reasonably well known and used.
Thanks! I didn't see the link; the thread links to GitHub. I think it doesn't hurt to have it in the README, even if it's just a small amount of code. I now see the language looks somewhat like Go. Do you cover the differences between ki and other languages anywhere?