bsdetector's comments

Seems reasonable that an alien craft travelling between stars might want to illuminate the whole star system to detect dark objects and plot a safe or more perfect course.

Apparently Wow! came from the same area and seemingly was blue-shifted by an amount that could make sense from an approaching craft, so that doesn't sound that silly to me.

Unlikely to be the real cause, not silly.


> and seemingly was blue-shifted by an amount that could make sense from an approaching craft

What do you think the natural spectrum of the Wow signal was, for determining amount of blue shift? What resolution of spectral data do you think we have on it?


The Wikipedia Wow! article says it is equivalent to the hydrogen line plus a 10 km/s blue shift.

Even if this was a scanning beam I think we can assume it would take a lot of energy and so may be based on a simple scalable physical process. Using hydrogen to create it makes sense as it is low mass and can be replenished.
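Back of the envelope, assuming the standard hydrogen-line rest frequency and a non-relativistic Doppler shift (my numbers, not from the article):

    # rough Doppler check: shift = f * v / c
    awk 'BEGIN { f = 1420.406e6; v = 10e3; c = 299792458;
                 printf "shift ~ %.1f kHz above the rest frequency\n", f*v/c/1000 }'
That works out to roughly 47 kHz, a shift of a few parts in a hundred thousand.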


How is it "illuminating the whole star system"?

It seems more likely that it'll act like a non-intelligent hunk of rock following some random trajectory.

It's less silly to declare you'll win the lottery. That has happened many times over - but we have yet to discover that life can or has existed outside of Earth. While it's nearly impossible that it hasn't happened several times over, so far we've failed to encounter even the crummiest excuse for life.

I assert that it is silly. We're not indigenous Americans happening upon European settlers. We're indigenous Americans wandering the continent harassing mammoths, inventing stories of how it'll go when it happens.


A ship approaching a sun will see the objects on the far side illuminated fully, but objects on the near side will be illuminated only on a thin edge, like a crescent moon, because they're looking at the 'back' side of the objects.

By sending out a pulse of light they could not just light up the ship-facing side of objects but also determine their precise location and velocity. Seems like something you'd want to do to not waste your thousand-year mission by accidentally colliding with a dark object.

The Wow! signal could be just such an event.

Aliens might use some type of scanning beam rather than a big flash, but I doubt we have the 1977 data to differentiate between a beam scanning our area and a solar-system-wide flash.


Would this be far enough out to use the sun's gravitational lensing to image distant planets?

It seems like the idea was to send a bunch of instruments way out and then take pictures in the brief time they were at a useful distance, but if there's a planet out there we can orbit, and so stop the instruments at that distance, it seems like we could make a permanent super telescope.


Orbiting a planet in that case is no different than orbiting the sun in the same orbit as the planet. Probably even more cumbersome, with all that jiggling around. Or are you talking about using a gravity assist to make the probe's orbit less eccentric?


Probably easier to just put it in solar orbit at that distance. Orbital velocity is only about 1km/s at 700 AU.
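Quick sanity check on that figure, assuming a circular orbit and the standard solar gravitational parameter:

    # v = sqrt(GM_sun / r), with GM_sun ~ 1.327e20 m^3/s^2 and 1 AU ~ 1.496e11 m
    awk 'BEGIN { GM = 1.327e20; r = 700 * 1.496e11;
                 printf "v ~ %.2f km/s\n", sqrt(GM / r) / 1000 }'
Which comes out to about 1.1 km/s.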


In October, when Windows 10 support ends, it'll finally be the year of desktop Linux.


Well, those that are on the Windows 10 IoT LTSC builds will enjoy updates until 2032.

https://learn.microsoft.com/en-us/lifecycle/products/windows...


Do you know where to buy it?



It's hard to find information about it, but this post has quite a bit (some may be out of date): https://www.reddit.com/r/sysadmin/comments/bbof9s/windows_10...


As far as I know you need to sail across the high seas.


mas


sgrave


dot dev


I'd like to remind you that there are still millions of people around the world using Windows 7 daily. The fact that some software is no longer supported by its developer doesn't mean it stops working somehow, or becomes radioactive.


It becomes easier to exploit, as it no longer gets security updates; and vulnerabilities are publicly disclosed.


You can't really exploit something when its attack surface is nearly nonexistent, which is the case for most people who use an outdated OS on their personal device, for example.


What is it about unmaintained software on a personal device that somehow makes the attack surface non-existent?


Even if there's an exploitable vulnerability, the exploit has to be delivered to the target system somehow. You don't have much of an opportunity to do that with a device that doesn't have a public IP address. Most likely the user themselves will have to do something that would compromise their system, like visiting a website that would serve them an exploit for their particular combination of browser and OS.


"I'd like to remind you that there are still millions of people around the world using Windows 7 daily"

Correct, and I am one of them!


You're thinking about energy and not cost.

For example, when solar plus direct air capture can remove a ton of CO2 for less than it costs a container ship not to emit that CO2, then it's a reduced cost for the same CO2 outcome even though it uses more total energy.

Regardless of whether it actually makes sense to capture carbon, you'll see a lot of sky-is-falling fanatics and vested interests dismissing it because it caps the price of carbon credits and limits economic damage estimates. You can't price CO2 at $500/ton to necessitate change when it only costs $200/ton to capture it - without quickly going bankrupt that is.

This is why the IPCC not even attempting to evaluate mechanical capture shows they aren't serious about solving the problem. They seemingly exist to push a fear narrative, and having an upper bound on the impact of CO2 limits their ability to do so.


The 1..125 loop stores 8000 bytes of string and they need to clear 8000 bytes.

There may be a fast path for adding one character, but in any case program bytes are a valuable resource with only 64K of RAM, so having a second loop from the nearest power of two to 8000 would be a waste of bytes.


JSON is not immediately usable in a shell and is cumbersome to parse correctly there.

A simple line-based shell variable name=value format works unreasonably well. For example:

    # ls --shell-var ./thefile
    dir="/home/user" file="thefile" size=1234 ...
    # eval $(ls --shell-var ./thefile); echo $size
    1234
If this had been in shells and cmdline tools since the beginning it would have saved so much work, and the security problems could have been dealt with by an eval that only set variables, adding a prefix/scope to variables, and so on.
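For example, a sketch of the prefix/scope idea (the --shell-var flag is hypothetical, and this naive sed rewrite would mangle values that happen to contain name= patterns themselves):

    # prefix every assignment with LS_ before eval, so the tool's output
    # can only set LS_* variables instead of clobbering arbitrary ones
    eval "$(ls --shell-var ./thefile | sed 's/\([A-Za-z_][A-Za-z0-9_]*\)=/LS_\1=/g')"
    echo "$LS_size"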

Unfortunately it's too late for this, and today you'll be using a pipeline to make the JSON output shell friendly, or some substring hacks that probably work most of the time.
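Something like this is the practical route now (the --json flag is hypothetical; jq is real):

    # pull one field out of JSON output into a shell variable
    size=$(ls --json ./thefile | jq -r '.size')
    echo "$size"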


That's great for key=value data, but more complex data structures don't work so well in that format; JSON handles them. "Why would you need to represent data as a complex data structure?" Sometimes attributes are owned by a specific entity, and that entity might own multiple attributes. It might even own other sub-entities. JSON represents that. Key=value does not.


JSON is literally key=value, just nested. Which you can do with shell variables.

The question was "What's not to like [about JSON output from cmdline tools]?" and the answer is that it's cumbersome to read in a shell and all but requires another pipeline stage.

I didn't even recommend shell variable output, and I made it clear this isn't a reasonable solution today, so I'm not sure where the hostility in the replies comes from - but I assume it's from recognizing that it's a more practical solution for reading data within a shell while not wanting that to be so.


> JSON is literally key=value, just nested.

The nature of being nested, and also containing structures like lists, maps, etc. All of which makes it more complicated than key=value.

> The question was "What's not to like [about JSON output from cmdline tools]?" and the answer is that it's cumbersome to read in a shell and all but requires another pipeline stage.

It depends on the intended use for your shell program. If you intend the CLI tool to be used in CI pipelines (eg. your CLI tool's output is being read by an automated process on a computer) and the data it outputs is more complicated than a simple key=value, JSON is great for that. Your CI program can pipe to jq. You as a human can pipe to jq, though I agree it's somewhat less desirable. Though just piping to jq without any arguments pretty prints it for you which also makes it fairly readable for humans.
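For instance (the tool and field names here are just illustrative):

    # machine use: extract exactly the fields a CI step needs
    mytool --json | jq -r '.items[] | "\(.name)\t\(.status)"'
    # human use: the identity filter just pretty-prints the output
    mytool --json | jq .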

> so I'm not sure where this hostility in the replies comes from

You're reading into hostility where there isn't any.


> The nature of being nested, and also containing structures like lists, maps, etc. All of which makes it more complicated than key=value.

These are JavaScript objects, which are key-value. A list/array is just keyed by a number instead of a string. They're functionally exactly the same as name=value, except JSON is parsed depth-first whereas shell variables are parsed breadth-first (which is much better for shells).

Do you have an example of a CLI tool - intended for human use - that has output so complicated it can't be easily mapped to name=value? I don't think there is one, and it's certainly not common.

> You're reading into hostility where there isn't any.

I think "it seems you're determined not to use jq" is pretty hostile since I made no intimation of that at all.


> I think "it seems you're determined not to use jq" is pretty hostile since I made no intimation of that at all.

Well, I didn't say that, so I don't know what that other person's feelings or intentions are, to be fair. I personally have no feeling of hostility towards you just because we (apparently) disagree on the usefulness of JSON to represent complex data types, or at least disagree on how often human-usable CLI tools output complex data. But to answer:

> Do you have an example of a CLI tool - intended for human use - that has output so complicated it can't be easily mapped to name=value? I don't think there is one, and it's certainly not common.

kubectl. Which to be fair defaults to output to a table-like format. Though it gets all that data in the table from JSON for you. smartctl is another one, which also defaults to table format. To be honest, I could go on and on if the only qualifier is a CLI tool that emits complex data, not suited for just key=value.

> These are javascript objects, which are key-value. A list array is just keyed by a number instead of a string. They're functionally exactly the same as name=value except JSON is parsed depth-first whereas shell variables are breadth-first parsing (which is way better from shells).

As mentioned before, just because you can compare JSON to key=value, does not mean it's as simple as key=value. It's a data serialization language that builds well on top of simple key=value formats. You're welcome to enjoy other data serialization languages, like yaml, HCL, or PKL. But none of those are simple key=value formats either. They built the ability to represent more complex structures on top of that.

A data serialization language allows the end-user to specify how they would like to use that data, while allowing them to use standard parsing tools like jq. Cramming complex data into a value string in a key=value format gives end users the same allowance to use that data however they want, while also giving them a chore to handle parsing it in custom ways tailored to just your CLI application, likely in ways that would seem far more brittle than parsing a defined language with well defined constraints. That doesn't sound like great UX to me. But to be fair to you, you're not saying that you wish to use key=value to represent complex data. Rather, you're saying there's a general lack of complex data to be found, which I also disagree with.


> But none of those are simple key=value formats either.

What is the difference between:

    { object: { name: value }}
    { object: "{ name: value }"}
    object="name=value"
There's zero difference between any of them except how you parse and process the data.

> kubectl. Which to be fair defaults to output to a table-like format.

With line-based shell-variable output you have a line of variables and you have blocks of lines separated by an empty line (like an HTTP 1 header).

This can easily map to any table, two dimensions, or two levels of data structure without even quoting subvariables like in the example above. So, no, kubectl is not an example at least not how you've described it.
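A minimal sketch of reading such blocks in a shell (the file and field names are made up; the usual eval caveats apply):

    # each block is one or more name=value lines, blocks separated by a blank line
    print_row() { [ -n "${name-}" ] && printf '%s\t%s\n' "$name" "$status"; }
    while IFS= read -r line; do
        if [ -z "$line" ]; then print_row; unset name status; else eval "$line"; fi
    done < pods.txt
    print_row   # flush the final block if the file doesn't end with a blank line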


> What is the difference between .. There's zero difference between any of them except how you parse and process the data.

Answered in the previous message... "A data serialization language allows the end-user to specify how they would like to use that data, while allowing them to use standard parsing tools like jq. Cramming complex data into a value string in a key=value format gives end users the same allowance to use that data however they want, while also giving them a chore to handle parsing it in custom ways tailored to just your CLI application, likely in ways that would seem far more brittle than parsing a defined language with well defined constraints."

> With line-based shell-variable output you have a line of variables and you have blocks of lines separated by an empty line (like an HTTP 1 header)...

I would not choose to write application logic that foregoes defined data serialization languages for parsing barely structured strings the way you seem to prefer. But you go about it the way you prefer, I guess. This whole discussion leaves a lot of room for personal opinions. I think we both agree that the other person's opinion here is subjectively the more annoying route to deal with. But that's the way life is sometimes.


That's not your original request though, to use line-based data. It seems you're determined not to use jq but if anything, json output | jq is more the unix way than piping everything through shell vars.


> That's not your original request though, to use line-based data.

It wasn't my request and OP (not me) said "line-based data" is best. The comment I replied to said "Newline-delimited JSON ... a line-based format".

If the only objection you have is "but that's line-based!" then you're in a completely different conversation.

> if anything, json output | jq is more the unix way than piping everything through shell vars.

The unix way is line-based. The comment I replied to is talking about line-based output. Line-based output is the only structure for data universal to unix cmdline tools - even tab/space isn't universal; sending structured non-line-delimited data to a program to unpack it is the least unix-like way to do it.

Also there's no pipe in the shell-variable output scheme I described, whereas "json | jq" is a shell pipeline.


And, the author isn’t suggesting only having JSON output, but adding it as an option for those of us that would make use of it. The plain text should remain as well (and has to, or many, many things would break).

On a separate point, I find the JSON much easier to reason about. The wall of text output doesn’t work for my brain - I just can’t see it all. Structuring/nesting with clear delineations makes it far easier for me to grok.


I use jq - which ChatGPT knows inside out, so I can generally get exactly what I want from it with a single prompt.


SPDY's header compression allowed cookies to be easily leaked. This vulnerability was well known at the time so had they even asked an intern at Google Zero to look at it they would have been immediately schooled.

https://bugzilla.mozilla.org/show_bug.cgi?id=779413

In their performance tests vs HTTP 1.1 the team simulated loading many top websites, but presumably by accident used a single TCP connection for SPDY across the entire test suite (this was visible in their screenshots of Chrome's network panel, no connection time for SPDY).

They also never tested SPDY against pipelining - but Microsoft did and found pipelining performed the same. SPDY's benefit was merely a cleaner, less messy equivalent of pipelining.

So I think it's fair to say these developers were not the best Google had to offer.


Another explanation: they did test it in other scenarios, but the results went against their hopes, so they 'accidentally' omitted such tests from the 'official' test suite. Very common tactic - you massage your data until you get what you want.


> there’s no way to measure time directly. It clearly exists, yet all you can measure is change of things besides time.

If it can't be measured then it can't be said to clearly exist.

Imagine a cellular automaton where particles have lots of "slots" that could be used for moving or interacting. As a particle speeds up and more slots are used for moving, there are fewer slots for the kind of interaction change that we use to measure time. At the highest speed, with all possible slots used for motion, the particle would experience no change, which is indistinguishable from no time passing.

Does that sound familiar to anything? It's certainly possible that light being a speed limit, time dilation, relativity, and so on are in some way actually describing change rather than time.


> if running without swap and there exists any ram which is accessed less commonly than the next-most-commonly-accessed area of disk currently not in cache, the memory utilization is suboptimal.

Swapping memory out requires a small write operation, which is generally much more resource and wear intensive than a read; a program memory page and a disk cache page are not equivalent.

Additionally, the swapped out program memory may be required again and cause an unpredictable delay in program operation; when a user has to wait for a menu to open while it is swapped back in that is suboptimal use of memory.

A modern operating system should have compressed memory rather than swap. Take the pages that would be swapped out for being rarely accessed; if they compress well, free the page and store its contents in an area for compressed pages. This will get most of the expanded cache benefit of swap without the delays, wear, or possibility of the system grinding to a halt.
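Linux can get roughly this today with zram, e.g. (device, algorithm and size below are illustrative; needs root and a zram-enabled kernel):

    # compressed swap that lives in RAM instead of on disk
    modprobe zram
    zramctl --algorithm zstd --size 4G /dev/zram0
    mkswap /dev/zram0
    swapon --priority 100 /dev/zram0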


The tweet's 17 Mb/s is UDP down from satellite, so no ack replies from the phone, and 15% packet loss.

So it really doesn't say anything about upload speed.

