Siyo's comments

I wish that were true. macOS, for example, only assigns 127.0.0.1/32. You have to assign extra IPs manually, the same way you would on IPv6.


That is a Mac problem, not an IPv4 problem. The Mac does a lot of pretty stupid things.


As someone who used Windows and Linux my whole career and is now forced to use a Mac for a new job: I completely agree, I was right, the Mac is a very weird thing overall.


Same thing on FreeBSD. It's just not something you can rely on, and the spec doesn't say it needs to be assigned.


Yeah, and I always do.

Because parent is right, having multiple loopback addresses is just so nice for a lot of things.
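To make the point concrete, here's a small Python sketch (`bind_loopback` is just an illustrative helper, not any real API). Binding to 127.0.0.1 works everywhere; binding to another 127.x.y.z address works out of the box on Linux, but fails on macOS/FreeBSD until that address has been aliased onto the loopback interface.

```python
import socket

def bind_loopback(addr: str, port: int = 0) -> int:
    """Bind a TCP socket to a loopback address; return the assigned port."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind((addr, port))  # raises OSError if addr is not assigned to any interface
    assigned = s.getsockname()[1]
    s.close()
    return assigned

# 127.0.0.1 is assigned everywhere. Higher 127.x.y.z addresses route to lo
# on Linux by default (the whole 127.0.0.0/8 block), but on macOS/FreeBSD
# each extra address must first be aliased, e.g.:
#   sudo ifconfig lo0 alias 127.0.0.2 up     (macOS)
#   sudo ip addr add 127.0.0.2/8 dev lo      (Linux, if ever needed)
print(bind_loopback("127.0.0.1"))  # prints some free port number
```

Having, say, one service per loopback address (127.0.0.2:80, 127.0.0.3:80, ...) is the kind of thing this makes convenient.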


I'm not sure why, but this trick doesn't work on Firefox / Mac. YouTube HDR video works fine though. Maybe it actually has to be playing?


Also avoid using ProtonMail Bridge if you value your emails. It's been silently corrupting/deleting your emails for years now. And Proton hasn't done a single thing to warn users about it. Instead they've been working on a complete rewrite which is far from completion. Meanwhile people are still discovering that their emails are disappearing.

https://github.com/ProtonMail/proton-bridge/issues/29

https://github.com/ProtonMail/proton-bridge/issues/220


> It makes no sense for them to add the touchbar to the base model 13" MBPs only to remove them from the higher end versions.

Well keep in mind that the first generation Arm MacBooks are basically the same old Intel laptops with their guts replaced. The next generation is probably going to be a completely new design.


Exactly, that's how they did it before. The first revision is the same as the old Macs with the new architecture; then, once they're happy with everything, they create a whole new design.


Looking at the design of the iPads and iPhones starting with the 12 it seems clear that there's been a change in design decisions at Apple.

There'd been a drive to remove ports and to make devices as thin as possible, even if it meant losing functionality (no SD slots, USB-A ports, shorter battery life) or increasing complexity (dongles, battery packs).

That seems to be reversing to some degree. Whether this was all Jony Ive or not, it's nice to see more of a balance. It should be possible to have a functional device but not include every I/O port under the sun.


But there were old-gen Intel 13" MBPs without the touch bar, and Apple added touch bars to the M1 Macs that replaced those old models. Clearly, they could simply have... not, if they were indeed thinking of removing the touch bar entirely.


The last Intel MacBook Pro generation before the M1 all had the Touch Bar.


But why didn't they revive the 2018 version of the Intel MBP that didn't have the TouchBar? It would have fit perfectly as it was also a two-port model. Maybe it has to do with the keyboard being broken in the 2018 model, but (to this layman) it sounds easier to fix that than to add support for the TouchBar on Apple Silicon for a single (!) temporary hardware model.

Now they have to keep that TouchBar code around in AppKit even after they inevitably drop Intel support. Extremely curious decision.


I'd wager it's either a hardware requirement (reusing existing components), to avoid building a different design prior to a full refresh with the new arch, or kind of a trial run for Apple to see how many would prefer a slightly cheaper design without the touch bar. The differences between the Pro and the Air are quite minimal. While I do like the fanless design, I was concerned about thermals on the Air, but there was no way I'd ever buy a touch-bar product from Apple again, so I took a shot with the Air. Pleasantly surprised with heat/energy/battery, and I couldn't be happier to have the physical F-key row back. The M1 Air is hands down the best MacBook I've used since the 2015 MacBook Pro.


Most likely the issue was resolved by the reboot, not by uninstalling Chrome. From my experience, macOS tends to slow down after a few days. Animations and scrolling get choppy, and there's nothing I can do to fix it short of rebooting.


I can assure you that it wasn't related just to rebooting as this has been something that's been bugging me for months. I had previously rebooted earlier in the day because of this exact same issue (a runaway WindowServer process), before I saw this article. I also did a fresh reboot before following these instructions too as a sort of "control".

The change only happened for me after removing Chrome as well as the launch agent for keystone.


> From my experience Mac OS tends to slow down after a few days

Just for the sake of argument: do you have Chrome installed?


I use macOS daily for months at a time without rebooting and have not seen this problem on any of the 5 Macs I've been using for the past several years, for what that's worth. It does seem like it's something specific to your machine.


Does your system have Chrome on it? If it doesn't, then your observations don't really apply, since the conversation is centered around a bug possibly associated with it. Personally, I don't allow Chrome on my machines. I install Brave for those cases where I have to use "Chrome" because some company simply doesn't design their web app for anything else; otherwise it's Firefox.


The question is: Chrome, or no Chrome? xD


Yes. Just like Linux - in 1996.

Very sad.

(I have recently started using Apple tools for developing with Swift/Xcode on iPad. I am very unimpressed by the quality of the tools, and quite impressed by the range and depth of features...)


"Goto is all you need, no ifs or loops or dynamic dispatch, just jumps. I have no problem with this, I enjoy it like this, we don't need an improvement. There I said it, any downvotes will be worth it, because I get to voice my opinion. In the end I will still be happily using my assembler and enjoy my life."

I'm sorry, I couldn't help myself. Your comment reminds me of an anecdote I heard from the early days of structured programming. When structured programming was just finding its feet, there was a certain class of programmers who just could not understand why people would want to write structured code. You can do everything in assembly, they said; you have much more control over performance, etc. They looked down on structured programming as not "real programming".

There's a lot of benefits to adding some structure to text. I don't think that Nushell's approach is the best one, but to say that there are no problems and we shouldn't look to improve things is just backwards. We should always look to improve our tools and our craft, otherwise we would still be stuck writing assembly.


> There's a lot of benefits to adding some structure to text.

There are benefits, true, but there are also potentially serious drawbacks. Chief among them, I would hazard, is the risk that we get locked into a format that didn't anticipate a (completely unknown) future need and we have to go back and rewrite everything again.

The beauty of text's lack of structure is that people are free to interpret it any way they please.


I keep seeing comments like this one, but just as you can parse textual data into something more structured, like the examples in the post, you can also print the structured data as text. So the tools that use and generate structured data are no less compatible because it's easy to go back and forth between the formats at will. You're not losing anything with this approach.
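A minimal Python sketch of that round trip (the tab-separated format and the helper names are made up for illustration): structured records can be rendered to plain text and re-parsed without losing anything.

```python
# Hypothetical tool output: a header row plus tab-separated records.
SAMPLE = """\
name\tsize
notes.txt\t1204
photo.jpg\t88210
"""

def parse(text: str) -> list[dict]:
    """Parse tab-separated text into a list of records (the structured form)."""
    lines = text.splitlines()
    header = lines[0].split("\t")
    return [dict(zip(header, row.split("\t"))) for row in lines[1:]]

def render(records: list[dict]) -> str:
    """Render structured records back out as tab-separated text."""
    header = list(records[0])
    rows = ["\t".join(str(rec[k]) for k in header) for rec in records]
    return "\n".join(["\t".join(header), *rows]) + "\n"

records = parse(SAMPLE)
assert render(records) == SAMPLE  # lossless round trip
```

The point being that a structured-io shell can always fall back to plain text at the pipeline boundary; the structure is additive, not a replacement.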


> You're not losing anything with this approach.

True, but you are adding something. And if you add things you don't really need, it becomes plain old cruft.

Like the others above, I understand the urge to make some things simpler by adding complexity, but for the long haul I'm convinced it's better to keep tools simple and add any needed complexity to the code you're writing (whether it's shell scripts or anything else). And then document the complexity so future you or some poor stranger can understand why the hell you did that in the first place.

But I don't look forward to reading comments like "X was deprecated so we're converting these tables back to text streams so they can be converted back to Y properly."


But isn't that precisely the problem? A lot of people are unsatisfied with the current solution which doesn't lend itself to a particular goal?

Aren't we now locked into a format (plain text) which is becoming more and more of a pain (what type is this? is this text related to this other text?), and aren't we now having to go back to old utilities and add --json?
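As an illustration of why those --json flags keep getting added (the tool output shown here is entirely hypothetical, invented for the example): in plain text every field is a string and units have to be re-guessed by every consumer, while JSON preserves the type.

```python
import json

# Hypothetical output of a disk-usage tool run with a --json flag.
JSON_OUT = '{"filesystems": [{"mount": "/", "used_bytes": 105226698752}]}'

# Plain-text output of the same hypothetical tool: "98G" must be re-parsed
# (with unit guesswork) by every script that consumes it.
TEXT_OUT = "/    98G"

# With JSON the type survives the pipeline: used_bytes is already an integer.
used = json.loads(JSON_OUT)["filesystems"][0]["used_bytes"]
print(used // 2**30)  # prints 98 (whole gibibytes, no string munging)
```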


I get it, but to clarify (and maybe it's to do with the work I do): I have no problem with current shells; I think they are great. As I learn more about them and use different features, I like them more. This is my experience.


Maybe you're just not very imaginative. I can see my workflow improve considerably by using a structured-io shell, and I'm already quite proficient in bash. So, I'm very happy some people are trying to make it happen.

Also, I'm not sure what your argument is, except "I don't get it"? It's fine, lots of people don't get stuff. Just move on, and maybe in a few years when it's mature, you'll see it again and go "Ah!"


Also, goto being considered bad is already a very old idea, yet it is still used, sparingly, in the right places. Shell underpins a lot of stuff; it is tried and tested. Text is already structured, or can be: IFS, newlines, etc.


While comptime looks extremely powerful, I'm really not a fan of how it's used for unconstrained generics. This is the same problem I have with C++ templates, where an incorrect use of a generic function results in an explosion of bizarre, undescriptive template errors. Sure, you can write these type asserts yourself, but it's time-consuming, and how many developers will actually do it and get it right? I don't know, maybe it's really not that big of a deal, but I much prefer how Rust does this, using traits as type constraints (although at some cost of complexity, e.g. Eq, PartialEq, Ord, PartialOrd). Not to mention that by using constraints at the type-system level, you actually get useful type-signature documentation on what you can or cannot pass to a function.


For one, Zig's error reporting is more friendly than C++ and will continue to get better. For another, it's a tradeoff, but the fact that the language is so much simpler than C++/Rust and compilation faster gives you a lot of headroom elsewhere (when the development cycle is faster, it's easier to focus your resources where they matter most to you).

Of course, language preference is personal and aesthetic, and different people have different preferences.


Zig is a simpler language, but that simplicity comes at a cost: the memory bugs that exist in C also exist in Zig. Users bear the cost of the programmer using a simple language.

Rust is a more complex language, but it eliminates a number of classes of memory error. In this case, the programmer bears the cost of users having safer programs.

Although I find Zig very interesting (and I wish Andy the best of success with it), the trade-off that Rust offers is more in line with my values and my ethics.


> the memory bugs that exist in C also exist in Zig.

No, they don't. See Zig's release-safe mode.

Also: https://news.ycombinator.com/item?id=17184929

> the trade-off that Rust offers is more in line with my values and my ethics.

By the way, it's awesome that you like Rust, but there's no need to imply there's something ethically wrong with Zig. Depending on the platform you're targeting, you may yet find a use for either language.


release-safe doesn't prevent all memory bugs.


Sure, but then not even "safe" languages like JavaScript will prevent all memory bugs.

And to be fair, Rust does introduce memory issues of its own in terms of allocation strategy. For certain projects, Rust would not be considered "safe" with respect to memory.

Zig's release-safe mode is worlds away from "the memory bugs that exist in C".

See also: https://github.com/ziglang/zig/issues/2301


This is very inaccurate, and as a formal methods practitioner, I'd like to try and explain why. It's part of what I call "the soundness problem," and it is this. Suppose you have some class of bugs that you can eliminate in the type system (in the case of Rust, this class of bugs -- various kinds of memory errors -- is, indeed, an important one, known to be the cause of many costly bugs). A type system eliminates 100% of those errors, but because a type system works based on deductive proofs, it, too, must make a tradeoff: either be relatively unrestrictive with regards to how the programs are written, but then those proofs can be very complicated[1], or work with simple proofs, but restrict the programs. Rust has chosen the latter (which is the correct choice, IMO). However, either way, this kind of proof has a significant cost, and the cost exists because of the 100% guarantee. In general, in formal methods, the cost of verifying a property can rise by 10x or more if you want to go from 99.5% certainty to 100% certainty. So Rust gives you 100%, but at a non-negligible cost. So now the question is, is it worth it? After all, while extremely important, these memory bugs aren't the only dangerous ones, and you still need to verify your program by other means.

Zig does something different, and no, it doesn't work like C. Zig allows (and encourages) you to turn on runtime checks that also guarantee no memory errors, but at a cost to performance. Then, after testing, you can turn those off for your entire program or just the performance-sensitive parts. So this, combined with language simplicity, also gets rid of those bugs, except not with 100% guarantee. The fact that Zig is so simple also helps reduce other kinds of bugs perhaps better than Rust. Soundness comes at a cost that, perhaps unintuitively, can harm correctness.

So to make this simplistic, you could say that Rust sacrifices other kinds of bugs (due to complexity) in order to guarantee no bugs of a certain class, while Zig gives you no 100% guarantee regarding any kind of bug (after turning off runtime checks), but it does give you the tools and the focus to reduce bugs across the board.

So even if you're uncompromising on correctness to the point you employ formal methods (as I do, and I assume you do, too, as you consider it a matter of ethics), there's still no clear winner even on correctness alone between the two approaches. You could just as well argue that your ethics directs you to preferring Zig, because it may well be that its correctness story is stronger than Rust's; we just don't know. I like to choose my own "correctness focus" and I dislike complex languages so I prefer Zig, but I completely understand that Rust is a better fit for other people's tastes.

[1]: This, e.g., is what you can do with ACSL and C.


Does a type system eliminate 100% of those errors? It eliminates those errors if you can be guaranteed that the underlying operating system calls and the hardware are always correct. That's pretty damn good but it's not 100%.


Right. Absolute guarantees can only apply to an algorithm, i.e. the abstract program, under certain assumptions about hardware and OS behavior. The correctness of software, a program running in the real world, is always probabilistic. There was an interesting case of a bug in the Java and Python sorting routines (TimSort) whose probability of manifestation in practice was about the same as a failure due to some hardware malfunction (bit-flip etc.). So you could think about it as if the algorithm was completely incorrect, but the program was about as correct as when running a totally correct algorithm.


I guess you could amend it to 'the type system prevents 100% of reasonably preventable errors of the class'!!


You’re absolutely correct that this depends on having the concrete semantics of the compiled program line up with the abstract semantics of the source language. What you may not notice is that the source language can indeed have a platform-independent abstract semantics, and it is that which 100% correctness is proved of.

Once you leave the abstract semantics, you have the problem of proving your compiler to be a full and faithful transformer of program semantics. But this is a different problem space with different tools of analysis. It’s beneficial to start off already knowing that your source space is correct.


Sure, but there's also an assumption that the computer faithfully executes (w.r.t hardware spec) the machine instructions the compiler emits, and that is only true with probability. So system correctness is, therefore, always probabilistic. Whether, when and how it pays to absolutely guarantee certain aspects of it is a complex question (or, rather, a large set of questions) that can only be answered empirically.


Oh, definitely. My point was only that verifying semantics at a layer, and verifying faithful translation between layers, are legitimately distinct activities that benefit from being considered separately. I would consider "faithful execution of machine instructions" to be a translation between layers -- specifically, between the abstract semantics of the machine instructions and the concrete semantics of, well, physics!

Siloing things this way helps us tighten down where faults can happen. It's true that faithful execution of a program can only be answered empirically -- but the question can be broken down into many other sub-questions which can be answered formally, with a smaller core of sub-questions that are necessarily empirical.


> but the question can be broken down into many other sub-questions which can be answered formally, with a smaller core of sub-questions that are necessarily empirical.

I don't think this path is particularly fruitful, though. For one, the question of cost is still empirical. For another, researchers who study technique X are interested in answering the question of what it can do, but developers are interested in an entirely different question: of the many techniques of achieving their particular correctness requirements, which is the cheapest? This question is clearly empirical.

After all, engineers ultimately care about one thing only: the probability of system failure and its severity vs the cost of achieving this correctness goal. Since none of these factors is ever zero, I don't think the right path is to separate the layers and pick which ones we want to dial to 100%, especially when considering that at each layer, the difference between 99.99% and 100% can be 10x in cost. I think that a more holistic view is suitable in the vast majority of cases, which means that we may prefer less than 100% certainty at all layers. In particular, dialing some layers to 100% is almost certainly not the cheapest way to achieve some realistic correctness goal. Put differently, the mere fact that you could achieve your correctness goal by dialing the correctness of some of the layers to 100%, doesn't mean that's how you should achieve it and that there aren't cheaper ways.


I wonder if there is a framework where we can systematically reason about all those trade-offs we usually navigate empirically.


> So to make this simplistic, you could say that Rust sacrifices other kinds of bugs (due to complexity) in order to guarantee no bugs of a certain class,

I'd say more that Rust sacrifices productivity and simplicity in order to guarantee no bugs of a certain class. I think there's plenty of room for a language like Zig (just like there is room for C as well as C++), but Rust is going to win every time on correctness.


> but Rust is going to win every time on correctness.

This is just not true (or, more precisely, your reasoning is not true and so your conclusion is not necessarily true), because guaranteeing no bugs of a certain class comes at the cost of complexity, which can harm the ability to reduce bugs of other kinds (e.g. if writing a program takes you 90% of the time it would in another language, the remaining 10% can be spent on reducing bugs). I mean, it could be that Rust wins on correctness vs. Zig, it could be that it's the same, and it could be that it loses, but it's impossible to determine without an empirical study. So even if correctness is the only thing you care about, you cannot conclude (without a study) that Rust (or soundness in general) is always a win [1]. Correctness is very complicated [2], and there are many ways of achieving it; many of them come at the expense of others.

If your claim were so obviously true, we'd see a big difference in correctness between, say, Haskell and Python, but we just don't see it.

[1]: As usual when correctness is concerned, this needs to be qualified. If correctness is truly your only concern, then, at the extreme, you could have a language that always guarantees 100% correctness of safety properties by rejecting all programs. Your software would always be correct, but you'll never be able to produce software that does anything.

[2]: See my post on computational complexity results on software correctness: https://pron.github.io/posts/correctness-and-complexity


> at the extreme, you could have a language that always guarantees 100% correctness of safety properties by rejecting all programs

To get at this from another angle: consider a buggy C program. It will always be possible to write a Rust program that behaves identically, even faithfully simulating the bugs. They're both 'just' Turing-complete languages, after all, and correctness depends on what program behaviour you want.


> Users bear the cost of the programmer using a simple language.

this is basically the point of having a simple language


The presence of general type-level programming does, however, mean that you can build decent preconditions on types, which will get you further than you could in C++.

I’m more concerned by the typeid switching, but maybe it’s got a proper structured mechanism as well.


Unlike in C++, Zig makes it easy to do compile time type introspection to see what the types passed as parameters are capable of, and then emit useful error messages if it doesn't satisfy the necessary contract. It's less streamlined than Rust traits, but you also get HKT for free (in theory) by making parameterized types be functions.


There is a proposal for adding type constraints to generic functions. That doesn't mean it will necessarily be added, but it's something the community is thinking about.

https://github.com/ziglang/zig/issues/1669


That C++ problem is partially fixed via enable_if, static_assert and constexpr if.

And fully fixed in future C++20 codebases with concepts.


There are plenty of competitors in this space, they just don't use the CGI model of spawning a new _OS_ process for every request. You have Erlang/Elixir, which have their own lightweight processes that don't share memory and have independent GC. And you can build similar systems in Java/Go. The difference is that in those languages you have a choice. There are certain types of applications that you just can't reasonably make in PHP.


"they just don't use the CGI model of spawning a new _OS_ process for every request"

Which isn't what PHP does, fwiw.


What do you mean exactly? I mean yeah, it's a bit different with FastCGI when you have a pool of persistent processes. But you still basically boot up your whole application every time a request comes in. Opcode caching helps with loading code, but not execution. And you're still working with OS processes for concurrency, which is not ideal.
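A rough Python sketch of the contrast between the two models (the handler and the workload are invented for illustration, and a fresh interpreter stands in for "booting the whole application"): in the CGI-style model every request pays the startup cost, while a persistent worker keeps the app resident and each request is just a function call.

```python
import subprocess
import sys

def handle(n: int) -> int:
    """The 'application': trivial work standing in for request handling."""
    return n * n

def handle_via_new_process(n: int) -> int:
    """CGI-style: spawn a fresh interpreter (re-boot everything) per request."""
    out = subprocess.run(
        [sys.executable, "-c", f"print({n} * {n})"],
        capture_output=True, text=True, check=True,
    )
    return int(out.stdout)

# Persistent-worker style: the process (and all app code) stays resident.
results_persistent = [handle(n) for n in range(5)]
results_cgi = [handle_via_new_process(n) for n in range(5)]
assert results_persistent == results_cgi  # same answers, very different cost
```

FastCGI-style opcode caching sits between the two: the interpreter process persists, but the application is still set up anew per request.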


I feel like this is especially problematic when using NPM for your frontend. Now you have to run all your code through deduplication and slow down your build times or end up with huge assets. I wonder if it's really worth the trouble.


Something to consider for people interested in Sequel. Sequel is great and all, but it still follows the Active Record pattern (Sequel::Model), not Data Mapper. Also expect to run into some problems with gems that interact with ActiveModel (Devise, CarrierWave, etc.). It's all solvable, of course, but it might require some hacking. Most popular gems have Sequel versions or ship with Sequel support, but it's not as well tested and maintained; we had to contribute several patches. Also, if you plan on using Sequel with Rails, don't use any of its plugins that make it behave closer to ActiveRecord, stuff like nested_attributes, delay_add_association, association_proxies, instance_hooks. They seem really nice at first, but I guarantee they will cause all sorts of unpredictable problems down the road. I would recommend looking into something like Reform, which decouples form logic from your models, because working with complex forms is going to be harder without all of the AR magic.


Sequel::Model is quite optional - it doesn't force you to use the AR pattern. It's built on top of Sequel::Dataset, which is entirely usable on its own.

ROM also uses it as its SQL backend: http://rom-rb.org/


It doesn't force you, but you lose all of its ORM features. Sequel::Dataset is basically just a query builder, so ROM would probably be a better choice then, yeah.


Is there an authentication gem which works well with Sequel?


