
For anyone else confused by this, the control logic described in the comment happens in https://github.com/ifduyue/musl/blob/master/src/math/cos.c


Git has all sorts of options for using proxies, but running socat would be the quick and dirty way to do this, something like:

  socat TCP-LISTEN:8080,fork,reuseaddr OPENSSL:github.com:443
would let you use a plain HTTP connection to your less archaic machine. Could even configure git to do the rewrite automatically, similar to https://stackoverflow.com/questions/1722807/how-to-convert-g.... There are security considerations here, so don't just do this blindly, but if losing the unencrypted git protocol is a real pain point there are pain-free ways around it.
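A minimal sketch of the automatic rewrite, assuming the socat proxy above is listening on localhost:8080 (hostnames and port are just illustrative):

  # Hypothetical: point GitHub git:// URLs at the local proxy instead.
  git config --global url."http://localhost:8080/".insteadOf "git://github.com/"

The exact mapping depends on how the proxy is set up; the relevant mechanism is git's url.<base>.insteadOf config.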


Potentially useful, since I've used proxies before in a different manner.

However, what I'm more inclined to do is simply compile a version of git from source, making sure it includes the HTTPS transport and links against an appropriate TLS/SSL library. I used to build it from source years ago (on a different platform).

The git protocol approach was simply to avoid that hassle, since, used correctly, it is safe enough.


Unfortunately the RHEL 7 world, for example, doesn't have mitigations=off yet (along with many other kernel command-line niceties).


0/0 = NaN


> Code Blocks probably aren't necessary

When you're reading raw markdown they're extremely useful for short snippets since they save you two wasted lines per block.


But these short snippets are useless because you can't easily copy-paste them, especially with whitespace-significant languages (looking at you, Python). And I don't understand how triple ticks are wasted lines, because you need to surround a code block with empty lines anyway, while triple ticks work straight after a line break.


If you're using your clipboard, it's invariably going to be at the wrong level of indentation regardless. What's the correct level of indentation? 0? 1? more? There's no generally correct answer, so you're going to have to use the block indent feature on your editor anyway.


The other requests are still taking the latency hit.


Sure, but they were parallel requests in the first place, so in terms of wall time you're still only taking the latency hit once. And if they're larger in aggregate than the connection's BDP (bandwidth-delay product), some of them would only have finished by the time it had recovered from the loss anyway.

So... it's complicated. I'm not saying that HoL blocking is a non-issue or that TCP is perfect. It's just that you're not eating the latency penalty N times, and for anything dominated by throughput rather than latency it's even less relevant.
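For a rough sense of scale, with assumed numbers (not from the thread):

  100 Mbit/s link, 50 ms RTT:
    BDP = 100 Mbit/s x 0.05 s = 5 Mbit ~ 625 KB in flight per round trip

Parallel responses totalling much more than that take multiple round trips regardless, so a roughly one-RTT recovery stall is a proportionally smaller cost for them.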


The parser thinks that's a block, not a hash.


Right, so why not implement the sig as a block and keep the syntax concise - the Ruby Way.


The sigs are implemented as blocks.

We had them as hashes for a while, but it meant the code in every sig was evaluated as soon as the file was loaded, even if runtime typechecking was disabled. We were forced to load every constant referenced in any signature in a file, an effect which cascades quickly. It had a big impact on the dev-edit-test loop.

For example, if we're testing `method1` on `Foo`, but `method2` has a sig that references `Bar`, we'd have to load `Bar` to run a test against `method1`.

Now sigs are blocks and lazy, and we pay that load penalty the first time the method is called and a typecheck is performed.
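A rough sketch of what that looks like, assuming Sorbet's params/returns block syntax (Foo, Bar, method1 and method2 are just the names from the example above):

  require 'sorbet-runtime'

  # Assume Bar is some heavyweight class defined in another file.
  class Foo
    extend T::Sig

    sig { params(name: String).returns(Integer) }
    def method1(name)
      name.length
    end

    # Bar is only resolved when method2 is first called and its sig block
    # is evaluated for the runtime typecheck, not when this file is loaded.
    sig { params(bar: Bar).returns(String) }
    def method2(bar)
      bar.to_s
    end
  end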


Then you'd need an extra set of delimiters, e.g.:

  sig {{name: String, returns: Integer}}


But that would make the hash braces redundant, so you could just use parentheses.


That's a block returning a Hash; see bhuga's sibling comment where he notes that they're using blocks to lazy-load the constants in the type definition, which may seem silly for something like Integer, but consider a high-dependency Rails model which requires auto-loading 10,000 other classes.


> mouse(!) selection of arbitrary text that can be piped to commands and so on

E.g.:

  xclip -out | ...
or do you mean something different?


Essentially selecting and operating on text in the same buffer. Saw it in Russ Cox’s demonstration of Acme.


The idea was taken from Oberon, which was in turn inspired by Mesa/Cedar at Xerox PARC.

Basically, any function/procedure that gets exported from a module can be used either from the REPL or from the mouse, depending on its signature.

Quite a powerful concept for those who like to take advantage of GUI-based OSes.

PowerShell is the only one that comes close to it. Maybe fish as well, but I've never used it.


> 3rd parties should version their JavaScript resources with a version number in the URL

I mean, yes, this is true, but it seems almost totally orthogonal to SRI (which is aimed at security AIUI), particularly given that you can supply more than one hash for a particular resource. If for some reason a third party can't use fingerprinted URLs, they can still update the resource provided they give "sufficient" advance notice of the new fingerprint to clients.

> And this should be a manual process because automating it would defeat the whole point of SRI.

Obviously it should be offline/write-once, but unless you're reviewing the actual assets as well I'm not sure I see the need to avoid automation.


Without versioned assets, both the 3rd and 1st parties would need to make the update at the exact same time or else the enforced SRI would cause failures. This kind of tight coordination is impractical or even impossible. Versioned assets allow you to deploy updates when using SRI without breaking anything.

The need to avoid automation stems from the fact that an automated process which simply detects that a change has been made and blindly updates the SRI attribute with a new hash would let through a malicious code update that SRI would normally have blocked. That situation is no different from not using SRI at all.


You can specify multiple digests for an asset, so a non-versioned one can be updated sans downtime:

1. Provider informs clients that a new version with digest X will be deployed.

2. Clients add the new digest in addition to the current digest.

3. Provider switches to new asset version.

4. Clients remove old digest (optional).

Versioned assets are obviously better unless you're in a really weird situation, but SRI doesn't particularly require them.
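A rough sketch of steps 1 and 2 with a hypothetical widget.js served from example.com (the openssl pipeline is the usual way to compute an SRI value):

  # Provider: compute the sha384 digest of the new asset version to announce.
  openssl dgst -sha384 -binary widget.js | openssl base64 -A

  # Client, during the transition: list both digests; a match on either is accepted.
  #   <script src="https://example.com/widget.js"
  #           integrity="sha384-OLDDIGEST sha384-NEWDIGEST"
  #           crossorigin="anonymous"></script>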


Thank you for letting me know that you can use multiple hashes in SRI. I didn't know you could do that.


