Hacker News | mh7's comments

Bread takes hours to make and lasts for one day. So unless you wanna wake up at 3 am (or have a dedicated stay-at-home person making bread every day) it's not really a realistic option for many people.


Real bread lasts a week if stored appropriately. Though baking your own sourdough is definitely not an option for everyone.


You can mix this recipe and "bake" it in a frying pan in minutes.

A tiny bit of practice and perspective change to get something you like and you can have fresh bread daily for the rest of your life.


Signed and unsigned should both go, there should only be wordN types (word8, word16, etc) with two's complement semantics.

Then you can have explicit functions for the operation you want: signed_mul(), signed_less_than(), unsigned_greater_than(), add_assume_no_overflow(), etc. (add your favorite syntactic sugar/operator symbols for these).

Assembly is more explicit and clearer to understand in this regard.


This would be terrible; the point of the type system is to stop you from making mistakes by accidentally treating one type as another.


How would you make a mistake?

The only time signed vs unsigned matters is for comparisons and mul/div, and those have explicit names so you're never surprised that 0xFFFFFFFF is less than 0 when doing signed_less_than().


That's how Java works. See >>>, compareUnsigned, divideUnsigned, parseUnsignedInt, etc.


Any evidence that telemetry actually works? (i.e makes the program better)


Yes, the simplest example is crashes being reported.

Developers can see that a specific crash is being hit by 1% of their userbase and then check the logs to see what went wrong and where the crash happened. The fix is made, and the program is indeed better.


You can let users report crashes. You can even prepare the data for them. You can even provide a wizard that automatically opens on crashes to help upload that data. But you need to obtain informed consent. Sending data behind the user's back without ASKING FIRST is not ok. Stop doing it.


If it collects actionable data, yes, of course it works.

Crashes, common failures, UI/UX friction points, average usage patterns - all can be used to prioritize work on the things that have the biggest impact.


I asked for actual concrete evidence, not "can be used".

Is there an example of a program that was crap, implemented telemetry and then got better afterwards? (and of course controlled for factors that might have improved the program anyway)

I mean since telemetry advocates are so into how useful data is, surely they must have data on whether telemetry itself works?


It's probably a case of me not knowing what I don't know, but I've never understood what the big deal with 'observation' is in QM. I dabble a lot in electronics and I'm painfully aware of how measurements affect a circuit - try to measure current and you introduce a voltage drop, stick a probe somewhere and you add an antenna, add capacitance, etc.

So to me when trying to measure anything it seems so blatantly obvious that it has to change the outcome - you will interact in some way with the system to get any information out, and this will change the trajectory, energy, momentum or whatever of the particles.

I mean is that it? That's the mystery?


If you read up on the Delayed-choice quantum eraser experiment [1] you'll see how your "simplistic" explanation leads to paradoxes such as the present altering events that occurred in the past. But if you describe unmeasured states as being in a superposition then there is no paradox.

[1] https://en.wikipedia.org/wiki/Delayed-choice_quantum_eraser


Does anyone know what the Many-Worlds interpretation of the delayed-choice quantum eraser experiment is?


As with most things, under many worlds you barely need any interpretation. You either read the measurement and thereby entangle your own state with what was measured (putting yourself in a superposition of having seen one waveform or the other), or you don't and you see the interference pattern. There's not really anything to interpret or explain.


That there is no delayed choice or quantum eraser. The naming of the experiment is based on wave-function-collapse interpretations.


If you allow hidden variables it’s also easily resolved ;)


Good thing we ruled out local hidden variables. https://en.wikipedia.org/wiki/Bell_test


The local part is important. We haven't ruled out hidden variables. Bell himself preferred hidden variable interpretations.


Yes, I am aware of that, that's why I made sure to mention it. There is a world of difference between local and non-local theories. And in no way do non-local theories fit in any of the simple explanations the original poster had in mind.

> We haven't ruled out hidden variables

No, but we ruled out every local, realist one. Which is a far more interesting achievement.


Superdeterminism hasn't been ruled out yet.


In your examples of interacting with a circuit, you affect what you're observing because you have physically changed it. You have poked it with a metal stick and physically changed the system you are observing.

With QM, it's NOT that you are poking it with a stick. It's NOT that you are physically interacting with the system and physically changing it because you've poked it with a stick (be it literally or metaphorically).

With QM, having knowledge of the system is what changes it. Observing it is what changes it; not some physical change you make to the system because you're poking it with a tool. The fact that it is observed is what changes it.

Having knowledge about the system changes it, no matter how you got that knowledge, even if you got that knowledge in a way that cannot possibly have physically affected the system.


It’s not “knowledge” that changes it—this seems to give some mystical power to human minds. It’s that it interacts with the “outside world”, ie decoherence. Which isn’t very magical in itself.


By "knowledge", I include any kind of record. Anything. Something different in the universe. Even if nobody actually looked at it, even if it wasn't actually recorded. Not magical human minds indeed; just that the measurement happened. Information created. However we like to word it. English isn't a great language for this. I suspect no language really suits.

> Which isn’t very magical in itself.

Well, it's also supremely magical. That's the whole weirdness of it all.


I don’t think decoherence is magical, indeed it’s what happens every day all the time such that we never perceive quantum effects in our everyday lives!


You do realise the definition is "beautiful or delightful in a way that seems removed from everyday life"...seems like you're set on defining "magical" as "related to druids" lol, when nobody is using the term in that way.


> even if you got that knowledge in a way that cannot possibly have physically affected the system.

How can that be possible? Any examples?



No, it’s not like sticking a voltmeter on a circuit. The double-slit experiment definitely does not show something obvious. If you look a bit more into wave function collapse, you will see very surprising results if you are not familiar with it. I never heard anyone say it was obvious when they learned it. :)


Imagine you set up Schrodinger's cat experiment where you have a photon pass through a double slit, and if it goes through the left slit it electrocutes the cat in the box, and if it goes through the right slit the cat is spared. You set up the experiment, leave the box for two days, and then come open it to "measure" if the cat is alive or dead. The mystery is that it's hard to understand how performing the measurement of opening the box can change the outcome of seeing a healthy cat or one that's been dead for two days.


The cat thing isn't really used anymore. The cat isn't going to be in superposition. All the things that could have been in superposition will have already collapsed, including the cascade of things that leads to a dead cat or a live cat. A tree falls, it makes a sound; it doesn't need a conscious observer. The universe has plenty of 'observers' that are just plain matter.


This example isn’t very compelling, because we might as well say the cat died two days ago and you just found out later.


Except that that's precisely what some Copenhagen Interpretation guys actually wanted to say: that the collapse of the superposition didn't happen two days ago. Hence S's large-living-object example. You want to be sensible; they (according to S) didn't. Their "lack of sense" tends to force them to many-worlds views.

But being sensible pushed Einstein and S towards thinking entanglement couldn't be a real thing. Although their best default position is to toss up their hands and say that there's gotta be a non-local hidden variable we just haven't found yet. But there are no candidates, as yet. (Unless you like Bohm, I suppose.)


Schrodinger's cat thought experiment was meant to argue against the collapse-based interpretations of QM.


I don't get it. At what point in this hypothetical experiment did the photon pass one of the slits? Two days before you open the box?


Yes


It very obviously does not change the outcome here, right? This is an odd example.


The fact that checking on the cat is obviously unrelated to the path of the photon is what makes it an interesting example of how quantum mechanics is different from ordinary intuition.


I see. Makes sense.

I still refuse to believe the unintuitive interpretations of QM. I am a show-me stater, after all. (Only kind of joking)


Einstein would have agreed with you


If an electron can get to the same outcome with equal probability it does so equally. Or rather we get the sample distribution weighted outcome of each.

If you put something in the middle that would need to physically change to experience either end state (screw measurements, imagine little closed doors), you can't have gotten to the end by taking either path. It must be clear which path you've taken. So the creation of the physical paper trail means we get only the outcomes corresponding to the possible pasts.


What you're describing is called the "observer effect", which is different from the "measurement problem" in quantum mechanics. The misunderstanding is understandable though, because it's difficult to properly explain why 'observation' in quantum mechanics is so weird. What constitutes observation is a bit controversial, but you can more or less interpret it as taking a measurement - measuring voltage with a voltmeter, looking at something with your eyes, touching something, etc.

I feel like Schrodinger's cat is used as an example a lot for this, but imo it's a bad example because it doesn't properly distinguish between our classical intuition (the cat is either alive or dead) and the quantum interpretation (the cat is in a superposition between being alive and dead until observed). If I recall correctly, when Schrodinger originally proposed the thought-experiment, it was more of a jab against quantum theory, since the concept of a cat being in a superposition of being alive and dead sounds nonsensical (and probably is, since most would agree that a cat, or any conscious entity, measures things constantly).

Also, in case it's not clear, saying an object is in a superposition between X or Y does not mean that the object is either in a state of X or Y. I don't think there's an intuitive way to describe it without referencing some math. If you've taken some linear algebra, imagine that X and Y are linearly independent vectors in a vector space. Then classical mechanics says that an object can either be in state X or state Y. Quantum mechanics says that the object can be in X, Y, or a linear combination of the two vectors.

To work with something concrete, let's say that our object is an electron and X is spin-up and Y is spin-down (disclaimer: spin is a bad name since it doesn't correspond physically to something spinning). I'm hoping this might be familiar to you since you like electronics, but let's just say that we've created a context where these are the only two states the electron is ever observed in.

In the classical interpretation, the electron is only ever in a spin-up or spin-down position, regardless of whether we're observing it or not. In the quantum interpretation, it's possible for the electron to be in a superposition of spin-up and spin-down when we're not observing it, and when we observe it, it "collapses" into either spin-up or spin-down. Put this way, it sounds like cheating; quantum mechanics is saying we can only observe spin-up or spin-down anyway, so what's the difference! Well, fortunately, there ARE experiments that can distinguish between the classical and quantum based on what they're doing 'behind the scenes' when we're not observing them.

Imagine now that we have photons of light. Instead of spin-down and spin-up, these photons are either horizontally polarized or vertically polarized. The experiment I'm about to explain would also work for the electron example above, but I'm only switching to photons since I know experiments for this have been performed (https://en.wikipedia.org/w/index.php?title=GHZ_experiment&ol...).

Suppose that we've entangled three photons of light together. If you're unfamiliar with entanglement, it just means that we've produced the photons in such a way that they're either all horizontally polarized or all vertically polarized. We can confirm this by using a horizontal polarizer (or vertical polarizer if you prefer). Whenever we shoot the horizontal polarizer with the three photons, they either all go through or none of them go through. Maybe we switch the horizontal polarizer with a vertical polarizer just to be sure, and indeed, we observe the exact same thing happen. Right now, the classical and quantum interpretations agree that this is what we should observe.

Now let's do something that sounds a bit silly. Horizontal and vertical polarization aren't absolute things, they're relative. What this means is that we're testing for polarization at angles, say 0 degrees and 90 degrees. This also means we can rotate our polarizer to a 45 degree angle.

Just for fun, let's say we shoot our three polarized photons through the polarizer which is now at a 45 degree angle. If you're thinking classically, you might think that maybe all will go through or all won't go through. Maybe some will go through sometimes and others will go through other times (probabilistic).

The standard classical interpretation says that you'll observe either: 1. All three photons go through. 2. None of the photons go through.

This is where the classical and quantum disagree. The quantum interpretation also says you'll observe one of two scenarios as well, but those scenarios are: 1. Two photons will pass through, one photon will not 2. One photon will pass through, two photons will not

And lo and behold, experiments show (within experimental error) that the quantum interpretation is correct!

There's still plenty of room for disagreement. Maybe you or someone might argue that the photons are interacting with each other or something funny is going on with the polarizer in question. However, we still observe results aligned with the quantum interpretation regardless if we use different polarizers for each photon, have them sent on a delay, or so on (although, I don't know how many variations have been tested by others for this specific experiment).

Hopefully, I haven't been much of a bore, or wasn't overly confusing. :)

There are ways to "save" classical mechanics using non-local hidden variables and other fancy things, but (if you can take my word for it) at that point, classical mechanics starts losing its intuition anyway. I'm not very knowledgeable about these alternate theories of classical mechanics, but my impression is that they don't make strong predictions, which I'm guessing is why quantum mechanics is more heavily favored.

If you're interested in reading more on the topic, an experiment related to Bell's Inequality was a major piece of evidence in favor of the quantum model. It's similar to the GHZ experiment I described, but simpler. The tradeoff is that its predicted result is inherently probabilistic.


I don't see a way around that - Organs are inherently rare because of their size, cost, locations and maintenance requirements.

Unless someone finds a way of scaling up production and/or reducing the cost, you're not gonna see organs popping up everywhere, hence limited access.


No, it is not reserved; it's just POSIX's own style guide. There's just as much of a name-clash possibility when not using _t, because lots of other libraries and platforms use some other convention.

Only ISO C can reserve things, no one else, and ISO C does not reserve the _t suffix.


I think it is pedantic to say that POSIX's guidance is not the de-facto C standard, or at least that the union of ISO and POSIX rules isn't.


The sad part is that this setup is probably slower and laggier running on top end machines of today than turbo c running on the actual legacy hardware back in the day.


It's actually pretty close in speed, and quite a bit faster to compile larger code bases. The key benefit is being able to debug while the program is running in VESA modes, something you couldn't do in Turbo C or similar environments.


You could do this with the Turbo Debugger when you had a Hercules board attached to a second monitor.


This seems like such a needlessly kneejerk reaction just to speculate and rag unfairly on anything modern. Couldn't you have actually done some testing and research first?


Maybe - if you're just looking at keypress to character on screen. But you're missing a ton of other advancements - higher resolution screens, better colors for syntax highlighting, intellisense, robust debugging, and, of course, access to the internet for help that you need. Back in the day I had a printed copy of Ralf Brown's Interrupt List that I just had to muck with and pray I got it to work without hanging my computer. With this if it hangs, no big deal.

So, yes, the raw response rate from key press to phosphors being illuminated on the screen might be less, but the overall productivity of a developer is probably an order of magnitude more.

Then again, I did learn to program from the online help in Turbo Pascal and Turbo C++, so maybe there's something to those systems.


The great part is that it's free, one-click to setup and you have full code intelligence and debugging capability.


Does zig plan on not supporting microcontrollers or not using "/" for integer division?

Because on your typical arm mcu, x/y is a function call to a definitively non-constant time function.

And lets not forget soft-fp. Every single floating point op is a function call...


A better way to think about it is: for a given line of code, how much context do you need in order to understand what function is actually going to be called? Yes, some compiler-rt or soft-fp function might get called but you know that's happening and what it does.

With most languages you need significantly more context than you do in Zig - in C you need to know what preprocessor shenanigans might be going on; in C++ pretty much anything could be happening (operator overloads, virtual functions, constructors, destructors, who knows what else). With Rust you need to know what traits are imported, and if proc macros are involved then anything goes.


>There is no language called "C/C++".

Yes there is.

Take the union of C and C++ which is valid syntax (and same semantics) in both languages, voila; C/C++.


But it is not, in fact, written in such a "language". Nothing is.


strlen() returns a size_t so you're already constrained to a maximum length of SIZE_MAX.


This is hilarious. SIZE_MAX is at least as large as the largest string that you can put in your address space / memory anyway. Which is what the strlen() API already assumes.

That, plus you'd be a fool to store a huge string in this way anywhere (in or out of memory) in any case.


> SIZE_MAX is at least as large as the largest string that you can put in your address space / memory anyway.

Not necessarily. A 64-bit system could give processes an address space that’s significantly larger than half the full 64-bit address space and have an allocator that allows you to allocate a block of more than SIZE_MAX bytes (malloc takes a size_t, but you can use calloc)


This doesn't make sense to me. You can't "allocate" more than SIZE_MAX bytes by definition. If you take "allocate" to mean "make it available in the process's address space", that is.


Are you sure?

The calloc() [1] function mentioned above takes two values of type size_t, and allocates their product bytes.

I'm on mobile without (!) the C99 draft spec but at least the man page gives no such restriction.

[1] https://linux.die.net/man/3/calloc


How would it be possible to allocate more address space than is addressable?

calloc returns NULL when it can't satisfy the request. The point of taking two arguments is not to let the user request a larger size, but to protect against overflows, which can happen with e.g. malloc() where the user has to compute the size of an array by multiplying NUM_ELEMS * SIZE_PER_ELEM. And the user will normally do so less carefully than a library function.


I read something about this recently, somewhere, maybe HN. Specifically, in calloc(), what is done and what should really be done if the multiplication overflows. As will happen, for example, if you try to calloc() two elements of size SIZE_MAX, when SIZE_MAX is the maximum representable unsigned integer value on the machine. So, I don't think calloc() is available or intended as a way to circumvent malloc()'s size restriction.


I stand corrected. Initially, I thought that, even if calloc can't, an OS could provide a different way to obtain a pointer to a memory region that's larger than SIZE_MAX.

However, the standard says (https://en.cppreference.com/w/c/types/size_t):

“size_t can store the maximum size of a theoretically possible object of any type (including array).”

and (https://en.cppreference.com/w/c/language/pointer):

“Pointer is a type of an object that refers to a function or an object of another type, possibly adding qualifiers. Pointer may also refer to nothing, which is indicated by the special null pointer value.”

⇒ pointers must either be null or point to an object, and objects aren’t larger than SIZE_MAX, so I think having a pointer pointing to a block larger than SIZE_MAX violates the standard.


> pointer pointing to a block larger than SIZE_MAX violates the standard.

it's simply not possible, by definition.


size_t is unsigned, right? ssize_t is the signed version?

On a quick test on my 64-bit system, a C program doing `printf("%zu\n", SIZE_MAX);` outputs 18446744073709551615, which looks like (2^64)-1 to me.

Or is there a thing in the standard that says this isn't always the case?


No, ssize_t is not the signed version. As best as I can tell, the only things POSIX says about ssize_t is that[1] it is an integer type that can hold integer values in [-1, SSIZE_MAX], where[2] SSIZE_MAX ≥ _POSIX_SSIZE_MAX = 32767, not that it should have any particular relation to size_t. In the standard, it is used for byte counts in I/O, like the return value of read() (traditionally int), for the return value of strfmon() and strfmon_l() (OK I guess, though the C standard stuck with int for *printf()), and for the argument to swab() (wat).

Note that neither is ptrdiff_t guaranteed to be that signed version, or to hold any possible value in the domain of size_t or (strictly speaking) any possible object size. Both GCC and Clang assume the latter, though, and can miscompile[3] code that relies on (e.g.) malloc() succeeding for sizes > 2^31 on a 32-bit system.

[1] https://pubs.opengroup.org/onlinepubs/9699919799/basedefs/sy...

[2] https://pubs.opengroup.org/onlinepubs/9699919799/basedefs/li...

[3] https://trust-in-soft.com/blog/2016/05/20/objects-larger-tha...


ssize_t is a weird one, the only negative value it is guaranteed to store is -1.

> The type ssize_t shall be capable of storing values at least in the range [-1, {SSIZE_MAX}].

[1] https://pubs.opengroup.org/onlinepubs/9699919799/basedefs/sy...


size_t need only be large enough to cover the (virtual) address space. It's up to hardware and OS to decide how much addressable space you get. I believe current systems can use only the low 48 bits of 64-bit pointers. However that number is likely to be increased in the future and OSes would be unwise to define size_t as something smaller than 64 bits.


Isn't size_t defined as being able to fit the largest possible data allocation?


Indeed, you just need to forget to put a terminator to get a nice memory dump.


If you use a different data structure you would maybe use a different API for accessing it too

