blockoperation's comments

What bothers me is the lack of confirmation on the microcode updates (other than 'soon') – apparently they're out there[1], but Intel's own package has yet to be updated[2]. The 20171215 update carried by some distros only seems to cover HSX/BDX/SKX, so does that mean regular HSW/BDW/SKL users are screwed, are they covered by the 20171117 update, or are the new microcodes simply not ready for all models yet?

Of course, the microcode updates are meaningless without the corresponding kernel patches, but apparently only RHEL and SLES were deemed worthy of receiving those ahead of time, having already rolled them out while Linus and co are left scrambling to integrate the IBRS and retpoline code dumps after the fact.

[1] https://tracker.debian.org/news/899110 [2] https://downloadcenter.intel.com/download/27337/Linux-Proces...


Just to add to that, it looks like some manufacturers have already pushed out the new microcode alongside the recent ME fixes.

I flashed my Skylake laptop a few days ago, and now I'm on microcode rev 0xc2 (versus 0xba in the latest Intel tarball). CPUID output suggests IBRS support (according to [1]).

[1] https://patchwork.kernel.org/patch/10147547/
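For anyone wanting to check what they're actually running, the kernel reports the active microcode revision in /proc/cpuinfo. A minimal sketch of pulling it out (the sample text below is fabricated; on a real box you'd read the file itself, as noted in the comment):

```python
# Parse the microcode revision out of /proc/cpuinfo-style text.
# SAMPLE is made up for illustration; on a real Linux machine you'd use:
#   text = open("/proc/cpuinfo").read()
import re

SAMPLE = """\
processor\t: 0
model name\t: Intel(R) Core(TM) i7-6600U CPU @ 2.60GHz
microcode\t: 0xc2
"""

def microcode_revision(cpuinfo_text):
    """Return the first reported microcode revision as an int, or None."""
    m = re.search(r"^microcode\s*:\s*(0x[0-9a-fA-F]+)", cpuinfo_text, re.M)
    return int(m.group(1), 16) if m else None

print(hex(microcode_revision(SAMPLE)))  # → 0xc2
```

On a multi-socket box each logical CPU gets its own `microcode` line; this just grabs the first one, which is normally enough since all cores run the same revision after a successful update.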


It appears to be available on a few other models as well (searching 'site:dell.com "ME inoperable"' brings up service tags for a few non-rugged Latitudes and a couple of Optiplexes), despite it not being an option on their configurators.

I wonder if this means that it's also possible to disable ME on those machines after purchase (without triggering Boot Guard)?

Dell only offers a single download per firmware version per model (or group of models) to cover all configurations, so presumably it contains multiple images (i.e. AMT-enabled, AMT-disabled, ME-disabled), and the updater just uses ME status or DMI data to determine the correct one.

If that's the case, then surely it should be as simple as manually extracting and flashing the ME-disabled image, right? That's assuming it's actually included in the publicly available updaters, and that all of the images are signed with the same key, of course.



Actually, Chromium (and uBO) has supported blocking WebSockets via webRequest for a while now[1], despite the desperate protests of a MindGeek employee[2].

[1] https://chromium.googlesource.com/chromium/src/+/0f198df6bc8... [2] https://bugs.chromium.org/p/chromium/issues/detail?id=129353...


Still, installing uBO-Extra is a damn good idea, if for nothing else than to block insidious techniques such as Instart Logic and WebRTC abuse: https://github.com/gorhill/uBO-Extra/wiki/Sites-on-which-uBO...


If you're able to build Chromium yourself, here's a trivial patch which does just that (only tested on Linux):

https://gist.github.com/blockoperation/5ec91d666e670e39584d2...


Since we're talking about backdoors, how about compiler ones?

With C, there are several routes to bootstrapping your compiler of choice – there are countless implementations that can be used as intermediates (both closed and open source, for all sorts of architectures, with decades worth of binaries and sources available), and diverse double compilation is a thing.
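For the unfamiliar, diverse double compilation boils down to one byte-for-byte comparison at the end. A hedged sketch (the compiler names in the comments are placeholders, and it assumes deterministic builds):

```python
# Sketch of the final step of diverse double compilation (DDC).
# The bootstrap chain itself (shown as pseudocode in comments) is
# hypothetical; the point is the byte-for-byte comparison at the end,
# which assumes reproducible/deterministic compilation.
import hashlib

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def ddc_matches(stage2_from_suspect: bytes, stage2_via_trusted: bytes) -> bool:
    """DDC's verdict: the suspect compiler's self-build and the build
    bootstrapped through an independent trusted compiler must be identical."""
    return sha256(stage2_from_suspect) == sha256(stage2_via_trusted)

# The steps leading up to that comparison, as pseudocode:
#   stage1 = trusted_compiler.compile(suspect_source)   # e.g. a second C compiler
#   stage2_via_trusted  = stage1.compile(suspect_source)
#   stage2_from_suspect = suspect_binary.compile(suspect_source)
#   assert ddc_matches(stage2_from_suspect, stage2_via_trusted)
```

If the hashes differ, either the build isn't reproducible or one of the compilers is misbehaving; if they match, a trusting-trust backdoor would have to be present in both independent compilers to survive.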

Rust? Unless you want to go back to the original OCaml version and build hundreds of snapshots (and provided you actually trust your OCaml environment), you've got no choice but to put your faith in a blob.

I'm not against Rust as a language, but using a language that has only one proper implementation and requires a blob to bootstrap seems counterproductive as a defense against backdoors.


You're referring to trusting-trust backdoors, but I suspect those should be low on the threat model: they seem like they'd be hard to weaponise in a way that survives years of very large changes (in the case of Rust). A normal backdoor, a malicious piece of code simply snuck in, seems more likely, and a full bootstrap isn't necessary to stop that, nor does it actually help at all. (But it's still true that a single implementation is riskier in that respect.)


This is something I've been thinking about quite a bit. It feels like there need to be two kinds of compilers (and VMs, where applicable), with different strengths.

One kind should be like current compilers, with a focus on speed, resource consumption, and optimization. Most commercial applications would use this kind, because it produces the fastest and most efficient software.

But beyond that, it might be beneficial to implement compilers with a focus on simplicity and a minimum of dependencies. For example, a compiler written in assembler for an ARM CPU: the translation step needed to run that code on an actual CPU is too small and simple to backdoor, and the CPU itself should be simple or even open.

Such a simplicity-oriented compiler could serve as a source of truth, provided all of its components are too simple to be backdoored.
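To make the "too simple to backdoor" idea concrete, here's a toy compiler you can audit end to end: a recursive-descent compiler from "+"/"*" expressions to a tiny stack VM. It's entirely hypothetical; a real auditable bootstrap compiler (in assembler, as suggested above) has the same shape at a much larger scale.

```python
# Toy illustration of a compiler small enough to audit in one sitting:
# '2+3*4' is compiled to stack-machine code, then a tiny VM runs it.

def compile_expr(src):
    """Compile an expression over '+' and '*' (with correct precedence)
    into a list of stack-machine instructions."""
    tokens = list(src.replace(" ", ""))
    pos = 0
    code = []

    def peek():
        return tokens[pos] if pos < len(tokens) else None

    def number():
        nonlocal pos
        start = pos
        while peek() and peek().isdigit():
            pos += 1
        code.append(("PUSH", int("".join(tokens[start:pos]))))

    def term():            # '*' binds tighter than '+'
        nonlocal pos
        number()
        while peek() == "*":
            pos += 1
            number()
            code.append(("MUL",))

    def expr():
        nonlocal pos
        term()
        while peek() == "+":
            pos += 1
            term()
            code.append(("ADD",))

    expr()
    return code

def run(code):
    """Interpret the stack-machine code and return the result."""
    stack = []
    for op in code:
        if op[0] == "PUSH":
            stack.append(op[1])
        elif op[0] == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op[0] == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack[0]

print(run(compile_expr("2+3*4")))  # → 14
```

The whole thing fits in a page, which is the property that matters here: no hidden stages, nothing a reviewer can't hold in their head at once.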


It would be easy for them to fingerprint it and block it at a server level, given that it uses some hardcoded headers (which are probably sent in the wrong order versus the browser it's spoofing), doesn't fetch any of the images/stylesheets/etc on the page, and probably fetches scripts/manifests/etc in a predictable order that differs from YouTube's own scripts. Maybe they already do this (fingerprinting and logging, that is), but I haven't heard of anyone being banned for it, so it's probably not something to worry about.

It would also be easy for them to just break the extraction code. The old code used to break every time the signature function changed, and while the current code solves that problem, there are still so many things that they could do to break it, and yet they don't (the current code has only broken a few times that I can think of, and I don't think any of those were intentional on Google's part).

Technically it could violate these parts of the ToS, but they're all grey areas:

- access through anything other than the site or 'approved' clients (but youtube-dl does use the site, so it could just be classed as another user agent)

- running automated services against them (running youtube-dl manually is probably fine, even for whole playlists or channels, but running a 'youtube-dl as a service' site like the one in this case is almost certainly not)

- downloading videos (but youtube-dl can also be used for streaming, despite the name)

I'm guessing that Google simply doesn't give a shit, as long as you're not using it abusively (e.g. offering it as a service or using it to do mass-scraping).


Whether it's a good addition to the language or not is debatable, but as a browser feature, it sounds terrifying. Browsers have a huge attack surface as it is (I mean, WebGL is a thing – exposing GPU drivers to random untrusted code on the internet, what a brilliant idea...), and exposing threading will only make it worse. Every single API would have to be carefully audited and made thread-safe, and I'm sure that many 'fun' bugs would crop up as a result.


Which is why the proposal specifically says that only a few DOM APIs would be exposed to background threads (like console.log()).


Writing performant code by using the correct idioms, data structures, algorithms, etc, from the start is just common sense rather than 'premature optimisation'.

Writing unreadable, micro-optimised code in the name of performance without even attempting to profile it first is another matter.

My personal rule (as a not-very-good hobbyist) is that if I have to refactor everything to accommodate an optimisation in a path that's already 'fast enough', or introduce arcane hackery (that only makes sense with comments) into otherwise clean code, then it must be backed up with realistic benchmarks (and show a significant improvement).
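That rule can even be mechanised: only keep the 'clever' version if it still agrees with the readable one and a benchmark shows a real win. A toy sketch (the function names, the 1.2x threshold, and the workload are all arbitrary choices for illustration):

```python
# Gate an "optimisation" on correctness plus a measured speedup.
# The 1.2x threshold and the sample workload are arbitrary placeholders.
import timeit

def clear_version(xs):
    """Readable baseline: sum of squares."""
    return sum(x * x for x in xs)

def clever_version(xs):
    # Hypothetical "optimised" rewrite; must stay behaviourally identical.
    total = 0
    for x in xs:
        total += x * x
    return total

def worth_keeping(baseline, candidate, data, min_speedup=1.2, runs=200):
    """Keep the optimisation only if it's correct AND significantly faster."""
    if candidate(data) != baseline(data):
        return False  # never trade correctness for speed
    t_base = timeit.timeit(lambda: baseline(data), number=runs)
    t_cand = timeit.timeit(lambda: candidate(data), number=runs)
    return t_base / t_cand >= min_speedup

data = list(range(10_000))
print(worth_keeping(clear_version, clever_version, data))
```

The point isn't the specific numbers; it's that the decision to accept uglier code is driven by a measurement on a realistic input, not by a hunch.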


> Adolescents are horrible at consequence extrapolation: it's why they're famously risk hungry and blasé about doing stuff grownups would be terrified of. Why they never think they'll be the ones who die or get pregnant or fail.

With that in mind, I think the worst effects of smartphones/social media/etc are yet to come.

Kids these days broadcast every last embarrassing detail of their lives all over the internet, and smartphones make it possible to do so conveniently from any location, so there's no time to reflect or consider whether it'll come back to bite you. When these kids grow up and realise the extent of the embarrassing (or possibly incriminating) information they've shared about themselves (and how little control they have over that data), it's going to hurt.

Whether smartphones themselves have 'destroyed' a generation is debatable, but the combination of smartphones and social media is certainly going to cause some mental health issues in the future (if it hasn't already).


Resources/frames/XHRs/etc from 'file://' might be blocked, but what about top-level redirects?

At the very least, user-initiated top-level navigations should bypass any policies. If you're out to cause mischief, you could just link to the dodgy path on forums/comments/etc – there'll always be people out there who are careless and/or clueless enough to click on it.

