
> Certificates signed by TrustCor that were issued before December 1st will still be trusted (for now); certificates issued on December 1st or later will not be.

How does this work? If TrustCor is no longer trusted, what keeps them from creating certificates which claim to be issued before December 1st, even after that date?


> what keeps them from creating certificates which claim to be issued before December 1st, even after that date?

See https://groups.google.com/a/mozilla.org/g/dev-security-polic... for the actions proposed depending on how the TrustCor situation plays out:

> If there is reason to believe that the CA has mis-used certificates or the CA backdates certificates to bypass the distrust-after settings, then remove the root certificates from Mozilla’s root store in an expedited timeline, without waiting for the end-entity certificates to expire.

Right now, they're being slowly removed for poor behaviour in general, but there's no direct evidence of abuse of CA powers. If any clear evidence of that appears in future, including backdating certificates, then they'll be completely removed from the trust store immediately.


Certificate Transparency [1]: Public logs of issued certificates.

[1] https://en.wikipedia.org/wiki/Certificate_Transparency


Assuming the untrusted and unreliable CA actually follows the rules and publishes things to the CT log.


If you don't, then the certificates usually aren't usable at all. All modern browsers should reject certs from any root CA if the cert isn't correctly included in a CT log.


Firefox doesn't: https://bugzilla.mozilla.org/show_bug.cgi?id=1281469

(But it's the only browser that doesn't support CT)


Do modern browsers check this though? It would introduce a large latency to each request, I bet none of the browsers do that.

Moreover, Chrome removed the browser extension API for TLS certificate details, so it is not even possible to do CT log verification via extensions.

The only way to do CT log verification would be to customise existing TLS MITM software. As far as I'm aware, no such solution exists at the moment.

The certificate system is actually rather insecure, and although solutions could be developed, nobody has taken the time to do it.


> Do modern browsers check this though? It would introduce a large latency to each request, I bet none of the browsers do that.

Yes, they do. Only Firefox doesn't.


Chrome and Safari require that TLS certificates include cryptographic promises of future log inclusion ('SCTs') from N trusted CT logs. As far as I know, neither of them actually contact the log's API endpoints to make sure that this has gone through, but in practice IMHO it's not much of a security gap for various reasons.


The SCT is a promise by the log to include the certificate (or pre-certificate, which is used for embedded SCTs) within a time window. The only way a cert can have a valid embedded SCT is for the pre-certificate to have actually been sent to the log in question.

The SCT contains a signature over some log-related information (which is also included in the SCT itself) and everything in the pre-certificate except the signature and the poison extension (which means everything in the real cert, except the SCTs and signature). This means the browser can reconstruct the signed data and verify the signature.

Thus the only way to have a valid SCT and not have at least the pre-certificate show up in the log (after the merge delay) is if the log operator/software messed up. For transparency purposes, a pre-certificate is basically as good as a full certificate, although, if I recall correctly, the CAs are supposed to submit the full certificates too.
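
To make that concrete, here's a sketch (in Rust) of the byte string a browser reconstructs for an embedded SCT. This is my own reading of RFC 6962, not code from any browser; the inputs are placeholders, and the actual signature check against the log's public key is omitted:

  // RFC 6962 section 3.2: the "digitally-signed" struct for a precert entry.
  // The log's signature inside the SCT is computed over exactly these bytes.
  fn sct_signed_data(
      timestamp_ms: u64,          // SCT timestamp, milliseconds since the epoch
      issuer_key_hash: &[u8; 32], // SHA-256 of the issuer's SubjectPublicKeyInfo
      tbs_cert: &[u8],            // TBSCertificate, poison extension and SCTs stripped
      extensions: &[u8],          // CT extensions, usually empty
  ) -> Vec<u8> {
      let mut v = Vec::new();
      v.push(0u8); // sct_version: v1(0)
      v.push(0u8); // signature_type: certificate_timestamp(0)
      v.extend_from_slice(&timestamp_ms.to_be_bytes()); // uint64 timestamp
      v.extend_from_slice(&1u16.to_be_bytes()); // entry_type: precert_entry(1)
      v.extend_from_slice(issuer_key_hash);
      let len = tbs_cert.len(); // opaque<1..2^24-1>: 3-byte big-endian length prefix
      v.extend_from_slice(&[(len >> 16) as u8, (len >> 8) as u8, len as u8]);
      v.extend_from_slice(tbs_cert);
      v.extend_from_slice(&(extensions.len() as u16).to_be_bytes()); // opaque<0..2^16-1>
      v.extend_from_slice(extensions);
      v
  }

  fn main() {
      // Dummy inputs, just to show the shape of the data:
      let signed = sct_signed_data(1_669_852_800_000, &[0u8; 32], &[0x30], &[]);
      println!("{} bytes to verify against the log's public key", signed.len());
  }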


Just a guess: In principle, yes, but it wouldn't be very practical. Because of noise sources on Earth, one would need very big antennas pointed at the pulsars to get good signal-to-noise ratios. Additionally, as X-rays are blocked by the atmosphere, one would be limited to longer-wavelength pulsars, again increasing the size of the antennas. Given that, with GPS, we already have a much more accurate positioning system, I don't see why one would use pulsars for positioning on Earth.


It does, perhaps, have the advantage of not needing a satellite fleet to work, and therefore isn't at the whim of said fleet's owner, or of someone with anti-satellite missiles.

(Granted that the antenna size problem might be a killer)


Star trackers do exactly this, providing relatively accurate positioning without GPS:

https://en.wikipedia.org/wiki/Star_tracker


I was about to suggest the use of a sextant for this. As far as I know, navigators are still trained to operate them to this day. Of course, the same operational principle can be integrated into a dedicated device.


Was going to say this; star trackers have been in use by things like submarines and ships since the '60s.


I thought this was more of a thought experiment for galactic travel, where it would make the most sense. However, navigation by star triangulation is much less complicated, and has been proven historically ever since ships first sailed. Star trackers are on many satellites and rovers for positioning.


Even some military aircraft like the SR-71: http://www.thedrive.com/the-war-zone/17207/sr-71s-r2-d2-coul...

Does star navigation work once you're far beyond locations where we've previously been able to map stars from? Or, put another way, is our "3D" map of the stars sufficiently accurate, or is it more of a "2D" map?

Perhaps you could update the map as you move (SLAM?)


rustc will omit the frame pointer if you add "-C debuginfo=0".

  example::count:
        popcnt  eax, edi
        ret
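
For reference, source along these lines produces that output; this is my reconstruction, compiled with something like `rustc -O -C debuginfo=0 -C target-feature=+popcnt --emit asm`:

  // count_ones() lowers to a single popcnt instruction when the popcnt
  // target feature is enabled; otherwise you get a bit-twiddling fallback.
  pub fn count(x: u32) -> u32 {
      x.count_ones()
  }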


My reaction when I wanted to search for a file in a larger GitHub repo and the search bar was missing: I just cloned the repo and used `git grep` locally.
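
Something along these lines, with a placeholder repo URL and search term:

  git clone --depth 1 https://github.com/example/repo.git
  cd repo
  git grep -n "needle"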

What's the next step? Cloning only allowed for logged-in users?


And thanks for writing acme-tiny!

It was really easy to set up automatic renewals running as an ordinary user. sudo access for reloading Apache is the only privileged operation necessary. Great job!
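
For anyone curious, the whole setup boils down to a cron job running as an unprivileged user; the paths below are placeholders and the exact flags are in the acme-tiny README:

  # Renew on the 1st of each month, then reload Apache (the only sudo step).
  0 0 1 * * python /home/acme/acme_tiny.py --account-key /home/acme/account.key --csr /home/acme/domain.csr --acme-dir /var/www/challenges/ > /home/acme/signed.crt && sudo /usr/sbin/service apache2 reload

The matching sudoers entry can then be restricted to exactly that one reload command.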


Please consider contributing to something like Freifunk instead: http://freifunk.net/en/

Similar technology (OpenWRT-based firmware), easier to use, and fewer privacy issues, as it's not necessary to track your network usage. And completely free.


+1 on Freifunk. Setup is super easy:

1) Order an Ubiquiti Nanostation LOCO M2 (amazing reach and a super-sensitive antenna).

2) Sign up with them for a VPN key. This will keep your visitors' traffic tunneled through their VPN. If there is any problem with content infringement, it ends up on their network.

3) Flash the box with their OpenWRT port and install the VPN key.

I set up my box last week and it's been running well (http://monitor.berlin.freifunk.net/host.php?h=imaginator). It's nice to see users dropping on and off. The firmware also includes support for the Freifunk mesh network. I'm looking forward to adding more nodes to the neighbourhood and growing the WiFi coverage.

Action shots: http://imgur.com/a/q7nOk (Decided that a martini bottle is a better solution than the tripod.)

Get started at http://config.berlin.freifunk.net/wizard/routers (disclaimer: not clear if you should/must/can be in Berlin for this to work)


Freifunk actually is a pretty cool project. Any such initiative would typically require quite a bit of community involvement if it is to grow organically. How big is the Freifunk community, btw?


According to http://freifunk.net/en/how-to-join/find-your-nearest-communi... there are >15,000 access points, mainly in Germany.

Is there some international equivalent?



Fair point.


Okay, but this relies on CSS trickery. If you had navigated to a text URL, this would not be a vector.


What's a text URL? The only way I can see this not being a vector is if you browse with CSS (and JavaScript for good measure) turned off. Or use Lynx.


A page of text? With Content-type: text? An example being a shell script?


Do you think the average user copying and pasting administrative commands into their shell will stop to check the content encoding of the document they are copying from? Do you trust your browser not to try rendering an ill-defined document with an ambiguous extension?


Do you check the Content-Type header of the response for text/plain before copying? If you do, you're in the minority.


"Connect the attacker machine (host) and the victim (target) with a FireWire cable"

What keeps the target from starting an inception attack on the host?


Any of the things in the "Attack mitigation" section.


Not loading the FireWire driver is not an option, as the attack needs the driver.

I guess loading the driver with DMA disabled would be a good option, as I don't think the attack needs DMA on the host side. Not sure, though; I only skimmed the documentation.


I guess writing a new hard drive firmware from scratch, without inside knowledge, would be close to impossible.

But why start from scratch if you can just modify the existing firmware? And that seems to be perfectly possible: http://spritesmods.com/?art=hddhack

I'd say the most difficult and resource-consuming part is to make versions that work on as many brands, models, and revisions as possible, and to make them all robust enough that they won't be detected because of random malfunctions.

But I don't know anything about hard disk firmware: perhaps it's not too diverse, and once you know how to modify one drive, the others will follow easily?


The system may store only a hash of the correct password, and then try to brute-force it using the entered (slightly wrong) password as a starting point.

This has several advantages:

- Nearly as secure as only accepting the correct password (as an attacker could do the brute forcing as well, without help from the system).

- An incentive for the user to remember and type the correct password, as logging in with an incorrect password takes longer.

- After successfully brute forcing the password, the system can remind the user of the correct one, without having to store it!

Of course, this is not without disadvantages:

- Much more load on the server, which probably can't be offloaded to the client without leaking the hash. (But perhaps it can, by offloading parts of the calculation to the client, and only doing the final comparison on the server.)

- It depends on efficiently generating a list of likely passwords given an imperfect starting point; one needs to develop a model of likely user errors.

Assuming 56-bit passwords and 2^20 hashes per second, one could try all 4-bit errors in 9s and all 5-bit errors in 8min. But 'all possible n-bit errors' is not a realistic measure, as errors wouldn't be random.

56 bits would be about 10 random letters. Assuming, for example, that the only possible errors are omissions of letters, one could forget 3 letters and still be able to log in within about 1 minute. On the other hand, an attacker without any knowledge of the password would need ~2000 CPU years to brute-force it. (Of course, the values should be tuned according to the intended security level.)
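
To make the idea concrete, here's a toy sketch in Rust. DefaultHasher stands in for a real (slow) password hash like bcrypt or scrypt, and the error model only covers a single omitted lowercase letter; a real system would use a richer model of typing errors:

  use std::collections::hash_map::DefaultHasher;
  use std::hash::{Hash, Hasher};

  // Stand-in for a real, deliberately slow password hash.
  fn toy_hash(pw: &str) -> u64 {
      let mut h = DefaultHasher::new();
      pw.hash(&mut h);
      h.finish()
  }

  // Try the entered password first, then candidates from a simple error
  // model: the user may have omitted one lowercase letter anywhere.
  fn recover(entered: &str, stored_hash: u64) -> Option<String> {
      if toy_hash(entered) == stored_hash {
          return Some(entered.to_string());
      }
      for pos in 0..=entered.len() {
          for c in 'a'..='z' {
              let mut candidate = String::with_capacity(entered.len() + 1);
              candidate.push_str(&entered[..pos]);
              candidate.push(c);
              candidate.push_str(&entered[pos..]);
              if toy_hash(&candidate) == stored_hash {
                  // The system can now remind the user of the correct password.
                  return Some(candidate);
              }
          }
      }
      None
  }

  fn main() {
      let stored = toy_hash("hunters"); // the system stores only this hash
      match recover("huntes", stored) { // user forgot the 'r'
          Some(pw) => println!("accepted; the correct password is {pw}"),
          None => println!("rejected"),
      }
  }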

