This is Swift, where Type? is syntax sugar for Optional<Type>. Swift's Optional is a standard sum type, with a lot of syntax sugar and compiler niceties to make common cases easier and nicer to work with.
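For illustration, here's a rough C sketch of what a sum type like `Optional<Int>` boils down to once the sugar is gone. The names (`OptionalInt`, `some`, `none`) are invented, and Swift's actual in-memory representation is more optimized than this; the point is just the tag-plus-payload shape.

```c
#include <stdbool.h>

/* Hypothetical hand-written tagged union, roughly what Optional<Int> means.
   Swift generates the tag and the pattern-matching sugar (if let, ?, ??)
   for you; its real layout is cleverer than this. */
typedef struct {
    bool has_value;   /* the "tag": .some vs .none                    */
    int  value;       /* the payload, only meaningful when has_value  */
} OptionalInt;

static OptionalInt some(int v) { return (OptionalInt){ .has_value = true,  .value = v }; }
static OptionalInt none(void)  { return (OptionalInt){ .has_value = false, .value = 0 }; }
```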
For completeness, this description of alignment is misleading:
> Well, dear reader, this padding is added because the CPU needs memory to be aligned in sets of 4 bytes because it’s optimized in that fashion.
> ...
> Remember: since structs are aligned to 4 bytes, any padding is therefore unnecessary if the size of the struct is a multiple of 4 without the padding.
Individual data types have their own alignment (e.g., `bool`/`char` may be 1, `short` may be 2, `int` may be 4, `long` may be 8, etc.), and the alignment of a compound type (like a struct) defaults to the maximum alignment of its constituent types.
In this article, `struct Monster` has an alignment of 4 because `int` and `float` have an alignment of 4 for the author's configuration. Expanding one of the `int`s to a `long` could increase the alignment to 8 on some CPUs, and removing the `int` and `float` fields would decrease the alignment to 1 for most CPUs.
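A quick way to see this on your own machine is a sketch like the one below. The exact numbers depend on your ABI, and the `Monster` fields here are guessed from the article's description, so treat it as illustrative.

```c
#include <stdio.h>
#include <stdalign.h>   /* C11: the alignof macro */

/* Rough stand-in for the article's struct; the fields are guessed. */
struct Monster {
    int   health;
    float speed;
    char  flag;   /* 3 bytes of tail padding follow on a typical 4-byte-int ABI */
};

int main(void) {
    printf("alignof(char)           = %zu\n", alignof(char));           /* usually 1 */
    printf("alignof(short)          = %zu\n", alignof(short));          /* usually 2 */
    printf("alignof(int)            = %zu\n", alignof(int));            /* usually 4 */
    printf("alignof(long)           = %zu\n", alignof(long));           /* 4 or 8, ABI-dependent */
    printf("alignof(struct Monster) = %zu\n", alignof(struct Monster)); /* max of its members: 4 here */
    printf("sizeof(struct Monster)  = %zu\n", sizeof(struct Monster));  /* 12, not 9, on such an ABI */
    return 0;
}
```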
Also keep in mind that this is all very CPU- and compiler-specific. I had one compiler that packed everything at 4/8, usually 8, not the 1/2/4/8 you would expect. That was because the CPU would just segfault if you didn't play nice with the data access. The compiler could hide a lot of it if you set the packing, using offsets, memory moves, and shifting. It was clever but slow. So by default they picked a packing wide enough to remove the extra instructions, at the cost of using more memory. x86 was by far the most forgiving at the time I was doing this; ARM was the least forgiving (at least on the platform I was using), with MIPS being OK in some cases but not others.
Some of the Cray hardware was basically pure 64-bit. The systems largely didn’t recognize smaller granularity. I learned a lot of lessons about writing portable C by writing code for Cray systems.
On one of these less forgiving architectures, how does one write programs that read some bytes off the network, bitcast them into a struct, and do something based on that?
On x86 you would use a packed struct that matches the wire protocol.
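Something along these lines, with an invented header layout; note that `__attribute__((packed))` is a GCC/Clang extension, not standard C.

```c
#include <stdint.h>

/* Invented wire-format header, mapped directly with a packed struct. */
struct __attribute__((packed)) wire_header {
    uint8_t  version;
    uint16_t flags;        /* would normally be preceded by 1 byte of padding */
    uint32_t payload_len;
};

/* sizeof(struct wire_header) == 7 instead of 8. On x86 you can point this at
   a receive buffer and read the fields directly; on strict-alignment CPUs
   those reads become byte-wise loads, or fault outright, which is what the
   rest of this thread is about. */
```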
Wouldn’t this require extra copying if member reads were forced to be aligned?
Yep, exactly that. I had that exact issue: junk coming in from a TCP/PPP connection that then had to be unpacked. Tons of garbage moves and byte offsetting, plus making sure you keep the endianness correct. Luckily, on the platform I was using, memcpy could do most of what I needed. Not the best way to do it, but the wildly out-of-date branch of gcc could handle it. I got pretty good at picking junk out of random streams with shifts and and/or, whatever was needed. A totally useless skill for what I work on these days.
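For reference, the memcpy approach looks roughly like this. The layout is invented (a 7-byte header: 1-byte version, 2-byte flags, 4-byte length), the function name is made up, and the wire format is assumed to be big-endian.

```c
#include <string.h>      /* memcpy */
#include <stdint.h>
#include <arpa/inet.h>   /* ntohs, ntohl */

/* Copy each field out of the raw buffer with memcpy (no unaligned loads),
   then fix the byte order. */
static void parse_header(const uint8_t *buf,
                         uint8_t *version, uint16_t *flags, uint32_t *payload_len)
{
    uint16_t f;
    uint32_t len;

    *version = buf[0];
    memcpy(&f,   buf + 1, sizeof f);    /* bytes 1-2 */
    memcpy(&len, buf + 3, sizeof len);  /* bytes 3-6 */

    *flags       = ntohs(f);            /* wire format assumed big-endian */
    *payload_len = ntohl(len);
}
```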
The widely used platforms with multiple compilers generally have one or more written-down ABIs that the compilers all follow. More niche platforms frequently have exactly one compiler (often a very out-of-date fork of gcc) that just does whatever its authors felt like implementing, and may not even support linking together things built by different versions of that one compiler.
We had that exact thing. Our target at the time was about 6 different platforms, 2 of which had very picky compilers/ABIs. We were trying to keep it to one codebase with minimal ifdef callouts. We learned very quickly that not all compilers are the same, even though they may have the same name and version number. And the standard libraries are subtly different enough from each other that you really have to pay attention to what you are doing.
AFAIK alignment doesn't even matter anymore (for CPU data at least), since the 'word size' of a modern CPU is effectively the size of a cache line (32 or 64 bytes?); i.e., unaligned accesses within a 32- or 64-byte block are no different from aligned accesses.
(technically there is still an advantage to aligning items to their size, in that such an item can never straddle adjacent cache lines)
And there are also still tons of different alignment requirements when working with GPU data. Interestingly, those alignment requirements may differ from C's alignment rules, so you may need to explicitly use packed structs (which are still not a standard C feature!) with manual padding.
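As a sketch of the "manual padding" idea, here is an invented example assuming a std140-like uniform block in which a vec3 occupies a full 16-byte slot; the struct and field names are made up.

```c
#include <assert.h>   /* static_assert (C11) */

/* A C struct laid out to match a GPU uniform block with std140-like rules.
   The padding fields are spelled out so the CPU-side and GPU-side layouts
   agree regardless of what the C compiler would have done on its own. */
typedef struct {
    float position[3];
    float _pad0;          /* vec3 occupies 16 bytes on the GPU side     */
    float color[4];       /* vec4: already 16 bytes                     */
    float intensity;
    float _pad1[3];       /* round the struct up to a multiple of 16    */
} LightParams;

static_assert(sizeof(LightParams) == 48, "CPU layout must match the GPU layout");
```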
My understanding is that C++ compilers still add padding by default for performance reasons: the CPU would have to spend a few extra cycles to reorganize data that is not aligned on 4-byte boundaries.
TL;DR: 10% difference on what in 2012 was a low-end CPU, no difference on "new in 2012" CPUs. So my guess is that by now it really doesn't matter anymore :)
> Individual data types have their own alignment (e.g., `bool`/`char` may be 1, `short` may be 2, `int` may be 4, `long` may be 8, etc.), and the alignment of a compound type (like a struct) defaults to the maximum alignment of its constituent types.
I will add that this is implementation-defined. IIRC the only restriction the standard imposes on the alignment of a struct is that a pointer to it, suitably converted, is also a pointer to its first member, meaning its alignment must in practice be a multiple of that of its first field.
implementation-defined means your specialized platform can be supported without needing to conform - it does not mean that common knowledge is false for common users
"Implementation-defined" means that there is nothing to conform to as far as the standard is concerned. I have not claimed that "common knowledge is false for common users" or anything to that effect. My comment is additive, which should have been clear to anyone reading the first three words of it.
> Which means if you actually edited those files, you might fill up your HD much more quickly than you expected.
I'm not sure if this is what you intended, but just to be sure: writing changes to a cloned file doesn't immediately duplicate the entire file again in order to write those changes — they're actually written out-of-line, and the identical blocks are only stored once. From [the docs](^1) posted in a sibling comment:
> Modifications to the data are written elsewhere, and both files continue to share the unmodified blocks. You can use this behavior, for example, to reduce storage space required for document revisions and copies. The figure below shows a file named “My file” and its copy “My file copy” that have two blocks in common and one block that varies between them. On file systems like HFS Plus, they’d each need three on-disk blocks, but on an Apple File System volume, the two common blocks are shared.
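For reference, the clone itself can be created from C via macOS's `clonefile(2)` (the paths below are placeholders); the clone shares every block with the original until one of the two files is written to, at which point only the changed blocks get new storage.

```c
#include <stdio.h>
#include <sys/clonefile.h>   /* macOS only */

int main(void) {
    /* Create an APFS clone of a file; paths are placeholders. */
    if (clonefile("/tmp/original.bin", "/tmp/clone.bin", 0) != 0) {
        perror("clonefile");
        return 1;
    }
    return 0;
}
```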
The key is “unmodified” and how APFS knows, or doesn’t know, whether the blocks are modified. How many apps write on block boundaries, or mutate just the on-disk data that has changed, versus overwriting the file or replacing it atomically? For most applications there is no benefit, and a significant risk of corruption.
So APFS supports it, but there is no way to control what an app is going to do, and after it’s done it, no way to know what APFS has done.
For apps which write a new file and replace atomically, the CoW mechanism doesn't come into play at all. The new file is a new file.
I don't understand what makes you think there's a significant risk of corruption. Are you talking about the risk of something modifying a file while the dedupe is happening? Or do you think there's risk associated with just having deduplicated files on disk?
The vast majority of apps use structured data, not block-oriented data formats. A major exception is databases, but the common file formats most people work with (images, text, etc.) often aren't best mutated directly on disk; they're better rewritten, either to the same file or to a new one. Without some transactional capability, mutating a file directly on disk can corrupt it if the writer fails in the middle of the write. More than a few text editors use atomic replacement as their method of saving, to ensure that there is never an inconsistent state of that file on disk.
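The "write a new file, then replace it atomically" pattern looks roughly like this in portable C. It's a sketch with an invented function name; a careful editor would also fflush and fsync before the rename.

```c
#include <stdio.h>

/* rename() over an existing file is atomic on POSIX filesystems, so a crash
   mid-save leaves either the old contents or the new ones, never a mix. */
int save_document(const char *path, const char *contents)
{
    char tmp[4096];
    snprintf(tmp, sizeof tmp, "%s.tmp", path);

    FILE *f = fopen(tmp, "w");
    if (!f) return -1;

    int ok = (fputs(contents, f) != EOF);
    ok = (fclose(f) == 0) && ok;
    if (!ok) {
        remove(tmp);
        return -1;
    }
    return rename(tmp, path);   /* atomically replaces the old file */
}
```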
Not at all: unless a license is provided, the code is fully protected under copyright and you have _no_ rights to copy it or use it in _any_ way you want (unless it falls under the "fair use" provisions of the jurisdiction you're in or the author is in).
Zellij is pretty great, and I recommend others check it out. The UI is extremely slick, and getting a comfortable setup is nicer (to me) than tmux or screen.
Unfortunately, it's missing one key feature that keeps me from using it as a daily-driver: it doesn't appear to be possible to attach to an existing session by automatically creating a new tab or pane. iTerm2 has fantastic integration with tmux that allows it to directly create a new tmux tab for every native iTerm2 split or tab, and I was hoping to recreate that with Zellij, outside of iTerm2.
It _is_ possible to open a new tab with the `new-tab` action (or whatever it's called), but unfortunately, there's no way to do that "in the background": one of your open sessions always switches to that new tab when it opens. I don't know if this is a limitation of the session/tab system, but when I dug through the source, I couldn't for the life of me figure out why this was happening.
I did spend some time trying to contribute a flag to allow attaching to existing sessions with a new tab/pane, but the architecture in place back then made this very difficult to support without non-trivial refactoring (and at least at the time, Zellij wasn't accepting major contributions that weren't directly aligned with the roadmap, which I respect: there's only so much time in the day for reviewing random PRs).
I check back periodically; if this is made possible at some point, I'd love to switch to it.
I'm a Linux user so of course I think like this, but: why use the native terminal tabs? I use Kitty with window decorations turned off, and Zellij provides all the UI I need.
You're right, good question. At least partially, habit and muscle memory. I'm used to the keybindings, and the behavior for navigating tabs/splits/panes (across macOS, Windows, and Linux).
But also, native splits/panes and tabs cover 90% of what I really want from a multiplexer, so it's easier for me personally to stick with familiar behavior than to integrate another tool into my workflow just to recreate it.
I feel similarly: there are great strengths, but some things are missing. I love that mouse highlight-to-copy will wrap within the current pane rather than span panes. But mouse pane resizing like in tmux is currently not available.
I'd be curious to see the code, some example strings, and whether or not you compiled with optimizations enabled, if you're willing to share. There's no reason for this to have been the case.
As a former NYC resident, and as someone whose family still lives and works in the city, I'm curious to see how this'll shift traffic patterns throughout Manhattan. If you live in the outer boroughs like my family does, getting into certain areas of Manhattan via public transit can be difficult and time-consuming, significantly more so than getting in by car.
My dad is an on-call doctor; getting to his hospital by car takes ~15 minutes, but ~60–90 via public transit. His patients don't have the luxury of waiting for him to take the bus. His hospital is outside of this zone, but I imagine that paying $15 every time he got called in would be extraordinarily frustrating.
My mom does work within this zone, also in places not easily reachable by public transit. I suspect that she, like many others, will still commute into Manhattan, park in areas outside the zone, then take public transit into it, which will increase congestion in those areas. It'll be interesting to watch for the knock-on effects.
I sympathize entirely with the desire to reduce traffic in the city, but man, for people who live far from work and can't easily commute any other way, what a pain.
I think hospitals should charge a $600-an-hour fee for stepping inside their multimillion-dollar buildings staffed with people on multimillion-dollar salaries, and charge market rates for products. That would make more sense to people. We somehow expect access to their facilities for free, 24/7, yet get annoyed when they pass the billing through on items whose prices we know. They should charge an entrance fee per hour, like Disneyland or a hotel room, and then be more reasonable about the Tylenol.
Not all doctors make enough money to not notice it, especially ones earlier in their career.
Regardless, assuming you have two working adults each paying the $15 toll once a day, conservatively, working 250 days a year, that's $7,500/year that has to come from somewhere. I can't imagine an income level (even if it doesn't affect your quality of life) where that's not an insanely frustrating amount to pay… to the MTA of all places. That money getting reinvested into useful infrastructure would be a dream come true!
You make it sound like they have no alternative. A literal majority of the city uses the trains regularly. The person you're describing can just live within walking distance of a station. The $20k they aren't paying for the car and tolls should be just about enough for the rent on a one-bedroom.
I mean... I get it. But an extra $15 per entry to the city for a doctor based in Manhattan is exactly the type of person this is aimed at.
Can likely afford the expense: check
Prefers/needs to get into the city faster than public transport allows for: check
The article says they expect traffic entering the city to fall by 17%. Your dad is part of the 83% of people who will say the pain of paying another toll is lower than the pain of taking public transit.
Exactly this. I don’t know many lower-income folks affording on-call doctors, and those who can afford one can also afford an extra $15 charge on their bill; god knows how much they’re paying already anyway.
Even if you did work in this zone, you could still drive to above 60th and then take public transit down from there. That's unlikely to change the timing very much.
Traffic will drop as a result of the toll. One benefit that will clearly have a positive impact on people like your father is that his trips will be faster.
> A series of function calls that each do something and each returns a value.
This is not what functional programming is. Functional programming is a completely different paradigm for writing code, and it _is_ declarative by definition. The Wikipedia definition is pretty decent:
> Functional programming is a programming paradigm where programs are constructed by applying and composing functions. It is a declarative programming paradigm in which function definitions are trees of expressions that map values to other values, rather than a sequence of imperative statements which update the running state of the program.
The "Comparison to imperative programming" section[1] offers a decent example of the paradigm, but DOM manipulation in JavaScript is pretty much as imperative as it gets. Having functions is necessary, but not sufficient, for functional programming.