I wrote this crate [1], a compressor in Rust, which is the opposite of a noise gate: gain reduction is applied once the signal passes a threshold, rather than while it is under one.
If you want a really great approach to noise gating, a fixed threshold is fine, but it works better when you apply it to the difference of two envelope followers: one with a short attack and long release (tracks the input) and one with a long attack and short release (tracks the noise floor). It takes a bit to set up, but it's a stupid simple way to get extremely effective gating and is easy to fine-tune for your application. A lot of Voice Activity Detection (VAD) works this way; it's just a matter of tuning the coefficients and thresholds for your input.
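A minimal sketch of that scheme in Rust (the coefficient values are illustrative assumptions, not tuned for any real application; coefficients near 1 respond slowly, near 0 respond fast):

```rust
/// One-pole envelope follower with separate attack and release coefficients.
/// Coefficients are in (0, 1); values near 1 respond slowly.
struct EnvFollower {
    attack: f32,
    release: f32,
    env: f32,
}

impl EnvFollower {
    fn new(attack: f32, release: f32) -> Self {
        Self { attack, release, env: 0.0 }
    }

    fn process(&mut self, x: f32) -> f32 {
        let level = x.abs();
        // Rising signal uses the attack coefficient, falling uses release.
        let coeff = if level > self.env { self.attack } else { self.release };
        self.env = coeff * self.env + (1.0 - coeff) * level;
        self.env
    }
}

/// Gate is open when the fast follower exceeds the slow (noise-floor)
/// follower by more than `threshold`.
fn gate_open(fast: &mut EnvFollower, slow: &mut EnvFollower, x: f32, threshold: f32) -> bool {
    fast.process(x) - slow.process(x) > threshold
}
```

The fast follower would get a small attack coefficient and a release near 1 (short attack, long release); the slow one the reverse, so it settles on the noise floor.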
Also, useful references for envelope following are the DAFX text [2], Will Pirkle's textbook on audio in C++ [3], and Zölzer's text [4].
The examples used in the OP are helped by having an RF squelch to zero out the noise floor. If there was no squelch, the difficulties finding a good static (har har) threshold would have been much more apparent.
I wouldn't say std::mem::drop acts like free at all; it's the equivalent of a destructor in C++. It's mostly useful when you're dealing with manually allocated memory, FFI, implementing an RAII pattern, etc.
One cool thing about Drop (and some other cool stuff, like MaybeUninit) is that it makes doing things like allocating/freeing in place just like any other Rust code. There may be some unsafe involved, but the syntax is consistent. Whereas in C++ seeing placement new and manually called destructors can raise eyebrows.
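As a tiny sketch of Drop acting as a destructor (the `Guard` type here is invented for illustration; the flag only exists so the destructor's side effect is observable):

```rust
use std::cell::Cell;
use std::rc::Rc;

/// Minimal RAII guard: its Drop impl is the Rust analogue of a C++ destructor.
struct Guard {
    released: Rc<Cell<bool>>,
}

impl Drop for Guard {
    fn drop(&mut self) {
        // "Release the resource" — here, just flip a flag.
        self.released.set(true);
    }
}
```

`std::mem::drop` itself is just `fn drop<T>(_x: T) {}`: it takes ownership, so the value's Drop impl runs at the end of its body, like explicitly invoking a destructor.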
This is similar to the work done in winapi [1], com-imp [2], and my own tangential work on porting VST3 [3] to Rust [4].
I'm really glad MS is doing this. What needs to be a bit clearer to me is how they maintain ABI compatibility under the hood of MSVC for COM interfaces (which use the vtable layout of an inherited class), and how that's compatible with MinGW/GCC stacks on Windows, mostly what can break it. I got stuck porting VST3 over multiple inheritance, and it was a headache trying to reverse engineer the appropriate struct layouts for COM implementations.
COM is an ABI standard. The structs are defined in terms of C with Winapi (stdcall) calling convention. Very little needs reverse engineering - it's all pIntf->vTable->func(pIntf, ...). You can explain it on a whiteboard in a couple of minutes.
The way MSVC does it may need reverse engineering (it may be patented btw). I could explain how Delphi implements COM interfaces, but any specific implementation is actually more complicated than the ABI, because they're trying to add implementation ergonomics on top of the basic calling convention.
Is the ABI documented anywhere? Every time I google around for it, I just get statements like "COM is ABI stable and language agnostic" but not what the ABI actually is. I've successfully implemented single COM interfaces and get the basics; my trouble was in implementing many interfaces on the same implementation and running into copious segfaults when testing the Rust implementation through a reference app written in C++.
Then, if you have access to a public library, it may carry some of the many issues of Microsoft Systems Journal (later MSDN Magazine), which ran plenty of low-level COM articles.
COM is from the days where good documentation was to be found in books, not on the Interwebs.
+1 for Essential COM by Don Box, it still survives my bookshelf purges..."just-in-case". IUnknown and IDispatch are burned permanently in my memory from a period of my life building a COM/CORBA bridge.
I'm gonna piggyback on your comment to give another shout out to Essential COM. I haven't touched COM in ages but that book is so good I still pick it up every once in a while -- and it was a lifesaver back when I did touch COM on a daily basis.
Wow, that's a throwback to the late 90s/early 2000s. I'm still slightly scarred from working with ATL and COM. A lot of people around here would probably be surprised to hear that there were Python -> COM bindings back in the day, and that they were even used to ship server software once upon a time. Anyway, I remember that Don Box book well.
True... I've looked at Delphi, but at this point I doubt the license fee is worth it compared to say C#. I've never known anyone who actually used Eiffel though.
What's to document? The stdcall / WINAPI calling convention, which is of course OS- and architecture-dependent, but can be summarized as: arguments on the stack, pushed right-to-left, callee pops.
The rest is just that interfaces are doubly indirected to get to a vtable (an array or struct of function pointers), the method is chosen by ordinal in the vtable (first three are always the IUnknown methods), and the interface is passed as the first argument.
How you construct those vtables and how you select one to return in QueryInterface, and how you implement interfaces (i.e. traits) so you can convert them into a vtable is where all the work is. You can do anything you like that works as long as it's called according to the COM conventions.
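The double indirection and the IUnknown-first layout can be sketched in Rust with `repr(C)` structs. This is a rough illustration, not any particular crate's API: all names are invented, the toy object implements only one interface (so the interface pointer and the instance pointer coincide), and `extern "system"` selects stdcall on 32-bit Windows:

```rust
use std::ffi::c_void;

type Hresult = i32;
const E_NOINTERFACE: Hresult = 0x8000_4002u32 as i32;

#[allow(dead_code)]
#[repr(C)]
struct Guid {
    data1: u32,
    data2: u16,
    data3: u16,
    data4: [u8; 8],
}

// An interface pointer is doubly indirected: it points at a struct whose
// first field is a pointer to a vtable (a struct of function pointers),
// and the first three slots are always the IUnknown methods.
#[repr(C)]
struct IUnknownVtbl {
    query_interface:
        unsafe extern "system" fn(*mut IUnknown, *const Guid, *mut *mut c_void) -> Hresult,
    add_ref: unsafe extern "system" fn(*mut IUnknown) -> u32,
    release: unsafe extern "system" fn(*mut IUnknown) -> u32,
}

#[repr(C)]
struct IUnknown {
    vtbl: *const IUnknownVtbl,
}

// Toy single-interface object: vtable pointer first, so the interface
// pointer and the instance pointer are the same address.
#[repr(C)]
struct MyObject {
    vtbl: *const IUnknownVtbl,
    refcount: u32,
}

unsafe extern "system" fn qi(_this: *mut IUnknown, _riid: *const Guid, out: *mut *mut c_void) -> Hresult {
    unsafe { *out = std::ptr::null_mut() };
    E_NOINTERFACE // a real object would compare the IID and hand back a vtable
}

unsafe extern "system" fn add_ref(this: *mut IUnknown) -> u32 {
    let obj = this as *mut MyObject;
    unsafe {
        (*obj).refcount += 1;
        (*obj).refcount
    }
}

unsafe extern "system" fn release(this: *mut IUnknown) -> u32 {
    let obj = this as *mut MyObject;
    unsafe {
        (*obj).refcount -= 1;
        (*obj).refcount
    }
}

static VTBL: IUnknownVtbl = IUnknownVtbl {
    query_interface: qi,
    add_ref: add_ref,
    release: release,
};
```

A call then reads `((*(*p).vtbl).add_ref)(p)`, which is exactly the `pIntf->vTable->func(pIntf, ...)` pattern described above. Multiple interfaces mean multiple vtable pointers at different offsets, which is where the pointer-adjustment machinery comes in.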
Delphi works by implementing each method in the vtable with a stub which subtracts the offset of the vtable pointer in the instance data from the passed in interface, and then jumps to the implementation method on the instance after the instance has now been adjusted. Instances look like this:
[class pointer, the native vtable] <- normal instance pointers
[vtable interface 1] <- COM interface pointers
...
[vtable interface n] <- COM interface pointers
instance field 1
...
instance field n
So you can see that in order to convert a COM interface pointer into an instance pointer that the methods expect, the COM interface pointer needs to be adjusted depending on the offset of the vtable in the instance.
In a language with multiple inheritance like C++, a compiler vendor targeting Windows may choose a layout suitable for COM's calling convention (there's more than one way to do MI; e.g. fat pointers are another approach, and one that wouldn't be COM-compatible). If the vendor does that, they make implementation of COM interfaces much easier. And if they don't, well, life isn't going to be easy. Technically you could do a bunch of stuff with reflection and code generation, but that's harder and harder these days with security restrictions around code writing code. Or you could write macros which create statically initialized structures of the right shape, with fields of the right type, to emulate the same effect as Delphi's scheme as I sketched above, or some other method which would work with COM's calling convention.
Realize that COM is supposed to be used with code generators. You can even write it in raw C, but it quickly becomes intractable. Perhaps I did not read completely, but this is something I see missing from the post: the annotations fit what the pre-pass C++ code looks like, but don't mention that layer?
COM layout is followed by most mainstream compiled languages on Windows (namely the major C++ compilers, .NET, Delphi, Eiffel, Ada), so it is not MSVC++ keeping ABI compatibility under the hood on its own.
My issue isn't ABI stability but the ABI itself w.r.t. vtable layout. Best I can tell it should be similar to Itanium's spec? [1]. It's been months since I did this, but IIRC my problems stemmed from having multiple interfaces on top of the same implementation and the ordering/layout of those interfaces, even though the IUnknown interface is supposed to handle that.
COM is agnostic as to how you do multiple inheritance - it doesn't have the concept. It specifies the QueryInterface protocol, but you don't need to return the same instance for the result of the QI call, just one that uses the same lifetime refcount.
Tear-off interfaces and delegated implementations are things in this world.
MSVC uses a completely different ABI from Itanium, and you shouldn't rely on the Itanium ABI to inform you what it might look like.
vtable layout in the most basic situations is going to be accidentally portable, because those situations boil down to "it's a struct of function pointers," and there are only so many ways you can order the fields of a struct. But even here, MSVC uses a quite different ABI: the order of the vtable entries can change if you overload a virtual method with a non-virtual method.
AFAIK, non-virtual methods never affect the vtable layout. But when you overload a virtual method with another virtual method, the ordering in the vtable is unspecified!
Also, a public COM interface mustn't have a virtual destructor, because some compilers (e.g. recent GCC) put more than one entry for it in the vtable. Implementation classes may define a virtual destructor, though.
About multiple interfaces: all of them need their first 3 vtable entries pointing to the 3 IUnknown methods. Also, don't forget that when a client calls QueryInterface on any interface of the same object with an IID_IUnknown argument, you must return the same IUnknown pointer. Some parts of COM use that pointer as the object's identity.
A bigger issue I've seen is that "gut feeling" comes from a misplaced sense of confidence. Like say, sample size. I can't tell you how many times I've heard engineers say "the data isn't significant because the sample size is too small." If you have the data, calculate the confidence interval!
Most of the time you don't need hundreds to thousands of data points to be reasonably confident, just a few dozen. I remember the example distinctly from my sophomore engineering stats course, I don't know why everyone else has forgotten it.
I think most people overlook/don't know that the needed sample size depends not just on the confidence you want, but also on how big the effect you want to measure is. E.g. if landing page A has a conversion rate of 50% and B has one of 55%, that's going to take a lot of sampling to prove. But if A has 40% and B has 80%, then that's going to show up in the samples very quickly. But exactly which question you ask affects the needed sample size greatly - e.g. showing that 'B performs better than A' will take fewer samples than 'B performs at least 20% better than A'. This makes it much harder to have a correct intuition about needed samples sizes.
In my experience, when the data set is very small it is almost always also biased towards how easy it was to gather, which also makes it non-representative. Think about it: if it were as easy to let n=5000 as to let n=25, you would always pick 5000. You only pick n=25 because of the low effort involved, which often means proximity.
A very common example is when some software feature is A/B tested only internally, or even only tested on the team that developed it. It introduces a lot of bias in users’ technical competence, willingness to understand/understanding of the new behavior, how the environment is set up, etc.
Tencent has about a 5% stake iirc. They also have small interests in Activision-Blizzard and a big chunk of Epic. I have no idea what 5% of a company buys them.
I mean, most software that's currently maintained has been 64-bit ready for a decade. The issue is that there's a lot of 3rd-party ghostware that isn't, and 32-bit bridging to support that code has been popular for a while.
That software is a lot like vintage gear, there may be alternatives but they aren't the same, and that's the problem.
Sounds like the software was effectively already dead and it was best to move off it as soon as it became clear there was only one developer and they were hard to get hold of. The problem isn't with macOS.
I would note Chris, given your insistence on all this, that your own project Graal is still shipping on Java 8 despite that being many years old. You're now working on moving to Java 11, which is itself already obsolete.
Imagine if tomorrow nobody could download GraalVM anymore because OpenJDK 8 stopped working for some reason (yes I know it's bundled, this is just a metaphor). It could easily be said you had years to upgrade, so why so sluggish? Well, of course, there were actual features you wanted to ship during this time too, not just doing upgrade work, especially given that Java 9 and 10 maybe didn't deliver many compelling upgrades.
Sure, but so is Mojave. It'll be some years before Apple stops shipping security updates to older releases. Until then app vendors saying "don't upgrade macOS" is no different to Java developers saying "don't run this on Java 11 because it doesn't work yet" and we've seen plenty of that.
In fact, I'm guessing the pain of losing Java 8 will be too much for many organisations after so many years of stability and 9/10/11 breaking so much (current Gradle doesn't even work on Java 13!). Maintaining 8 will be a good business for a long time.
Your software doesn't run in isolation. It needs services of the operating system. Apple has to spend resources maintaining 32-bit software and consumers would rather they didn't do that.
Going to 64-bit can be a lot of work, so there is quite a lot of software, still only lightly maintained, for which there isn't a 64-bit update available: software which has worked perfectly fine to date.
Only in misleading microbenchmarks. In the real world, the memory bandwidth saved by using 32-bit pointers in some programs that can be guaranteed to not need more than 4GB of memory (or ASLR or other features enabled by x86-64) is completely outweighed by the costs of keeping both 64-bit and 32-bit libraries on disk and in memory and in cache. That's why even on Linux the x32 ABI was never able to gain traction even among Gentoo users, and why retaining traditional 32-bit support is viewed as only a compatibility measure for closed-source code that literally can't be updated.
If a program works fine, with no issues, why should companies be forced to update it simply because Apple decrees they no longer support 32 bit applications? "Sweeping the dust" is an absurd declaration.
Waves sent one out last month as well. Par for the course, really. Audio pros are some of the most distrustful users w.r.t. updates because they've been burned so many times.
As a former developer of pro audio apps at Native Instruments, sometimes it's just that the first version of a macOS update has bugs that suddenly cause crashes in your lower-level OS calls, or unexplained latency. If something like this happens, users mostly blame the audio devs and not Apple. So, if you don't want to ruin your reputation, you had better warn, test, wait, hope and fix (maybe not in that order).
I think that's unfair to audio developers. You're asking for a real-time scenario from an obviously non-real-time OS. Dropping audio is a lot more noticeable than dropping video frames. Audio also deals with /a lot/ of plugins, both hardware and software, many paid for years ago and now abandoned. Most people I know with an audio setup are very sensitive to physical changes because of subtle problems like these.
I also think it's unrealistic to expect large applications to be 100% ready on day one. It's not like Apple had a GM ready weeks in advance. Betas are known to change, and nobody really knew when Catalina would ship (many people are surprised they're shipping what they have).
Audio software is very hard to get right and it's a relatively low margin business. Asking them to track new OS releases on day one is just not realistic, especially when Apple has a pretty bad track record on stability and maturity of x.0 releases of macOS.
Could be, but when the OS updates every 12 months, they can either dedicate a whole chunk of their time to updating for the new version, or make things work well on a version that's still going to be supported for 3 or 4 years so they can focus on bug fixes and updates.
Just because Apple can push out a new OS version once a year doesn't mean these app developers have the bandwidth to keep up the chase. They have plenty of other priorities.
Or you could see it as your OS breaking lots of software you depend on and then telling you tough luck.
As a user I only see "OS upgraded -> stuff broken". I'll blame the OS for that. All the finger pointing and shoulda/coulda/woulda is not magically going to unbreak things; rolling back the upgrade will.
And why would they care to improve when they know their user base will let them get away with it and even side with them against Apple? Reposting a comment I left elsewhere in the thread:
For the most part, it's not a technical issue at all but a cultural one: pro audio users are notoriously, almost pathologically conservative when it comes to software upgrades.
You'll find plenty of threads on forums like Gearslutz asking for tips on how to downgrade brand-new Macs to an older version of macOS that doesn't even support their hardware, or 2019 threads asking if it's now safe to upgrade to High Sierra. They're typically 2-3 versions behind. Why? Older is just safer, better in their worldview.
In that context, audio developers know they have customers on their side against "evil Apple that's always breaking everything for no benefit", and they get away with emails that read like Apple just unexpectedly dropped a bomb on them without notice, and it'll take them 6-12 months to get ready, like WWDC and 3-4 months of developer betas never happened.
As a macOS audio developer who supports legacy machines to this day: there's some truth to what you say, but you're glossing over important realities. Apple has a long-standing pattern of revising developer tools to throw away support for working machines, and then requiring you to use their newest developer tools for current development.
It's a technical issue. It's a bear to support older systems from newer machines. (It's a lot easier to support current stuff from dawn-of-time old systems! I keep an antique laptop to code on, which allows me to support EVERYTHING all the way back to PPC Macs. Which I do support.)
My choices of what I choose to buy (in Apple hardware) or even CAN buy are conditioned very much by this reality. I'll get stuff if there's a fighting chance I can build a working ecosystem on it. I'll be willing to do things like ditch Logic and switch to Reaper, and I'll be well within my rights to tell users 'this is what I can offer, and this is what I cannot'.
Because Apple is not automatically my ally. It can be my adversary, even when I'm doing its bidding (I was fairly early in porting my entire product line to 64-bit when few others bothered. Apple literally called me and offered to help me do this, so I told 'em I'd already done it three months before. I did NOT tell them that I continued to support PPC machines or maintained a time capsule dev machine as the only way to develop for a large range of cheaply, easily available hardware)
Users have every right to side with me against Apple when I'm an open source developer letting them do professional-quality audio work on computers costing only a few hundred dollars, and/or letting them continue to use known-good and predictable equipment, and Apple is locked in to a course of action requiring it to churn its userbase at whatever cost to the userbase.
I totally get Apple's motivation here, but it doesn't serve my customers.
OS X's CoreAudio is still an incredibly well-designed technical architecture, designed by a team who really understood digital clocking from a hardware perspective and the importance of low-latency kernel support. It's still kind of amazing to me that it was essentially fully baked by 2002-2003. On Windows there's now WASAPI Event, which has a similar architecture, but for the longest time third parties had to step in with their own solution (ASIO) because the OS support wasn't there (and ASIO really only solves a subset of the problems CoreAudio solves). I'm frustrated by how little attention the driver and documentation side of things has gotten from Apple since then, but for some specialized requirements the underlying architecture is still just fantastic.
Audio people have been moving away from Apple over the past few years specifically because of show-stopping bugs introduced by new versions of the OS. This really never happens in Windows now and it's become more and more appealing to switch.
macOS has had a reliable low-latency audio API for years, when on Windows you had to resort to third party hacks like ASIO.
For years it has also had things like Audio Midi Setup, which lets you set up aggregate virtual audio interfaces from physical ones, dealing with latency compensation etc.
Or MIDI over Bluetooth. All of that out the box. It just feels it's been designed with pro audio in mind, compared to Windows.
Aside from the other replies you’ve already received that are spot on, macOS has also been consistently good at isolating the audio ports from coil noise / interference from the other electrical signals on both laptops and desktops. While “pro” audio folks are likely to be using an external audio interface anyway, having an audio jack that doesn’t garble the sound is one of many ways that Apple hardware engineers have been attentive to audio.
That's fair. Pro Tools crashing during a session has been a meme for a while, despite it being pretty stable for the last 5-ish years.
But keep in mind these companies are usually pretty small (a lot of sole proprietors out there), and their revenue comes from new products and sales, not from maintaining existing ones. So there are logistical and incentive problems in deploying fixes quickly.
> Why would you hold an investment for 55 years that isn't beating inflation?
The point is that the inflation is getting taxed also. Say inflation is at 2 percent per year over the period, and it's appreciating at 2 percent more than that annually. Then when you pay taxes, the "capital gain" amount includes both the real appreciation and the inflation. So in this example, half of the putative "gain" is actually inflation.
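That arithmetic can be sketched quickly (the 2%/2% rates are the illustrative figures from above, not a claim about real markets; tax treatment is simplified away and only the gain decomposition is shown):

```rust
/// What fraction of the nominal "capital gain" after `years` is really just
/// inflation, given annual `inflation` and `real_growth` rates?
fn inflation_share_of_gain(years: i32, inflation: f64, real_growth: f64) -> f64 {
    // Nominal growth compounds both effects multiplicatively.
    let nominal_gain = ((1.0 + inflation) * (1.0 + real_growth)).powi(years) - 1.0;
    // The gain attributable purely to inflation.
    let inflation_gain = (1.0 + inflation).powi(years) - 1.0;
    inflation_gain / nominal_gain
}
```

Over a single year at 2%/2%, roughly half the nominal gain is inflation, matching the example; over very long horizons compounding shrinks that share somewhat, but it stays substantial, and all of it gets taxed as "gain".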
>So in this example, half of the putative "gain" is actually inflation.
Is this an issue in practice? I get that if I have an asset that merely keeps pace with inflation for a decade, then I'll lose value through the taxation. But that would be a bad investment, wouldn't it?
Yes, plenty of people have assets that don't appreciate a ton over a long period of time. Even with stock — you don't know in January if it's going to be worth more in December. So you make some calculation and sometimes end up holding it. Also, people hold onto things for sentimental reasons (inherited from relatives, etc.). But the bottom line is that anyone who holds any asset for 20 years and then sells it is going to pay tax on the inflation that occurred during that period.
>the chances of me being attacked are practically zero
That's because the OS developers have placed value on security over performance. Whooping cough is rare too; we still vaccinate against it.
If you want a classic example, bounds checking an array is important for preventing RCE and sandbox escapes. It can also have a hefty performance penalty; under some scenarios it trashes the branch predictor/instruction pipeline. But I'm glad that my browser isn't as fast as machinely possible when streaming video, because I'd prefer there not be a risk of my emails from various banks, passwords stored in the browser, etc. being collected and sent to a bad actor.
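The trade-off is easy to see in Rust, where both options are spelled out explicitly (a toy sketch; the function names are invented):

```rust
/// Safe indexing: every `data[i]` carries a bounds check, and an
/// out-of-range index panics instead of reading arbitrary memory.
fn sum_checked(data: &[u32], indices: &[usize]) -> u32 {
    indices.iter().map(|&i| data[i]).sum()
}

/// Unchecked indexing: no bounds check is emitted, so it can be faster in
/// hot loops, but an out-of-range index is undefined behavior — exactly
/// the kind of bug that turns into an RCE primitive.
fn sum_unchecked(data: &[u32], indices: &[usize]) -> u32 {
    // Caller must guarantee every index is in range.
    indices.iter().map(|&i| unsafe { *data.get_unchecked(i) }).sum()
}
```

Both return the same result on valid input; the difference is only what happens on the invalid input an attacker controls.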
> That's because the OS developers have placed value on security over performance.
No, that is mainly because nobody knows nor cares about me personally.
As for your example, i already addressed it with that last part in my message:
> Note BTW that there is a difference between "i find performance more important" and "i do not care about security at all". I do care about security, but i am not willing to sacrifice my computer's performance for it. I simply consider performance more important.
The browser is a case where i'd accept less performance for better security because it is the primary way where things can get into my computer outside of my control. However that doesn't mean i'd accept less performance in, e.g., my image editor, 3d renderer, video encoder or whatever else.
In other words, i want my computer to be reasonably secure, just not at all costs.
> No, that is mainly because nobody knows nor cares about me personally.
I mean, they do care about you. I assume you have a bank account, or personal information that can be used to open a credit card under your name?
> However that doesn't mean i'd accept less performance in, e.g., my image editor, 3d renderer, video encoder or whatever else.
Most of that is specifically designed with security in mind. For instance, the GPU has its own MMU, so you can't use it to break the boundary between user mode and kernel mode.
> I mean, they do care about you. I assume you have a bank account, or personal information that can be used to open a credit card under your name?
That is not caring about me though. Honestly at that point you are spreading the same sort of hand-wavy FUD that is used to take away user control "because security".
> Most of that is specifically designed with security in mind. For instance, the GPU has its own MMU, so you can't use it to break the boundary between user mode and kernel mode.
Again, i'm not talking about not having security at all.
> That is not caring about me though. Honestly at that point you are spreading the same sort of hand-wavy FUD that is used to take away user control "because security".
I legitimately don't understand your argument here. Do you not lock your car? An opportunistic car thief doesn't have to "care about you", and going through the process of unlocking your car slows you down.
Those comparisons miss important details, so they aren't helpful; also, i do not have a car. Though if you want a comparison that does apply to me: i lock my apartment's door, but i do not bother with installing a metal door and window bars, despite knowing how easy the door would be to break for someone who insists on entering my place, because the chances of this happening are simply not worth the cost.
I already repeated that several times, i'm not sure how else to convey it: i care about security (lock my door), but it isn't at the top of my priorities (do not have a metal door and window bars).
That said, mixing DSP + UI is hard, and if you want to make the scripting user-accessible it makes sense to keep them totally divorced.