
It's good that MLT did cancel them, but there's still a ton up that way - Mill Creek, Lynnwood, and Marysville, to name a few.


For me, s/FreeBSD/Debian/. Same reason.


Who is the "we" here? I've poked around a few pages on this fellow's site, and apparently haven't found the right one to answer that.


Where he works... or, perhaps more accurately, his co-workers. That's what the domain is: the University of Toronto.


It's much safer to export a key one time and import it into a new machine, or store it in a secure backup, than to keep it just hanging out on disk for eternity, where it could get scooped up by whatever malware happens to run on your machine.


Any malware capable of exfiltrating a file from your home folder is also capable of calling the export command and tricking you into providing biometrics.


Not necessarily; "read file" is very different from "execute command." The biometrics part is a substantial lift as well.


You're not adding new features or anything like that, just patching security vulnerabilities in a forked branch.

Sure, you won't get the niceties of modern development, but at least you have access to all of the source code and a working development environment.


As someone who actively maintains old RHEL, I can say the development environment is something you can drag forward.

The biggest problem is fixing security flaws that don't have 'simple' fixes. I imagine that they are going to have problems with accurately determining vulnerability in older code bases where code is similar, but not the same.


> I imagine that they are going to have problems with accurately determining vulnerability in older code bases where code is similar, but not the same.

That sounds like a fun job, actually.


If you can find the patches, it's fun to tweak them in the most conservative way possible to apply to the old code base.
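
To make that concrete, here's a hypothetical sketch (invented code, not from any real CVE) of what the most conservative tweak tends to look like: say upstream fixed a path-traversal bug as part of a larger refactor, but the backport adds only the missing guard to the old code, keeping the diff as small and reviewable as possible.

    import os

    BASE_DIR = "/srv/files"

    def read_user_file(name):
        path = os.path.join(BASE_DIR, name)
        # Backported guard: reject paths that escape BASE_DIR. This one
        # check is the entire backport; the surrounding code is left
        # exactly as it was, so the diff stays trivial to review.
        if not os.path.realpath(path).startswith(BASE_DIR + os.sep):
            raise ValueError("path escapes base directory")
        with open(path, "rb") as f:
            return f.read()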

However, things get annoying once something ends up on some priority list (like the Known Exploited Vulnerabilities list from CISA), you ship the software in a much older version, and there is no reproducer and no isolated patch. What do you do then? Rebase to get the alleged fix? You can't even tell if the vulnerability was present in the previous version.


> However, things get annoying once something ends up on some priority list (like the Known Exploited Vulnerabilities list from CISA), you ship the software in a much older version, and there is no reproducer

There are known exploited vulnerabilities without a PoC? TIL, and that doesn't sound fun at all.


Distribution maintainers who do the backports do not necessarily have access to this kind of information. My impression is that open sharing of in-the-wild exploits isn't something that happens regularly anymore (if it ever did), but I'm very much out of the loop these days.

And access to a reproducer is merely a substitute for the lack of a public vulnerability-to-commit mapping for software that has a public version control repository.


It used to happen; I'd say fewer than 5% of all vuln reproducers are 'shared' at this point.

At last count I'd written close to 2000 reproducers, and approximately 400 of those were local privescs for product security.

Security teams are usually highly discouraged from sharing exploits/reproducers, as they have leaked in the past. My Spectre/Meltdown reproducer ended up on the web and someone else took credit. Sad.


This guy backports.


The unfortunate problem is that the more popular software is, the more its code gets looked at and worked on. But forked branches, as they age, become less and less likely to get looked at.

Imagine a piece of software that is on some LTS, but it's not that popular. Bash is going to be used extensively, but what about a library used by one package? And the package is used by 10k people worldwide?

Well, many of those people have moved on to a newer version of the distro. So now you're left with 18 people in the world using a 10-year-old LTS. So who finds the security vulnerabilities? The distro sure doesn't; distros typically just wait for CVEs.

And after a decade, the codebases have often diverged enough that vulnerability researchers looking at newer code won't be helpful for older code. They're basically unique codebases at that point. Who's going through that unique codebase?

I'd say that a forked, LTS apache2 (just an example) on a 15-year-old LTS is likely used by 17 people and someone's dog. So one might ask: would you use software that is a security concern, say an HTTP server or whatnot, if only 18 people in the world looked at the codebase? Used it?

And are around to find CVEs?

This is a problem with any rarely used software. Fewer hands on means less chance of finding vulnerabilities. A 15-year-old LTS means all of its software is rare.

And even though the software is rare, if an adversary finds that out, they can play to their heart's content looking for a vulnerability.


> I'd say that a forked, LTS apache2 (just an example) on a 15-year-old LTS is likely used by 17 people and someone's dog.

Likewise, the number of black hats searching for vulnerabilities in these versions is probably zero, since there isn't a deployment base worth farming.

Unless you're facing a targeted adversary willing to go to huge expense to find fresh vulnerabilities specifically in the stack you're using, you're probably fine.

I agree with your sentiment that "no known vulnerabilities" doesn't mean "no vulnerabilities", but my point is that the risk scales down with the deployment numbers as well.

And always keeping up with the newest thing can be more dangerous in this regard: new vulnerabilities are being introduced all the time, so your total exposure window could well be larger.


If no one posts a CVE that affects these old Ubuntu versions, then Canonical doesn't have to fix them. I realize that's not your point, but it almost certainly is a part of Canonical's business plan for setting the cost of this feature.

The Pro subscription isn't free, and clearly Canonical thinks they will have enough uptake on old versions to justify the engineering spend. The market will soon tell them if they're right. It will be interesting to watch. So far it seems clear they have enough Pro customers to think expanding it is profitable.


You typically need to maintain much newer C++ compilers because things from the browser world can only be maintained through periodic rebases. Chances are that you end up building a contemporary Rust toolchain as well, and possibly more.

(Lucky for you if you excluded anything close to browsers and GUIs from your LTS offering.)


The referenced Bluetooth bug in the GitHub README seems like a pretty good reason. "We don't want to work around or deal with the bugs on other platforms" seems like a reasonable position.


This bug prevents injecting the magic handshake that enables all the features. It wouldn't be relevant if Apple didn't block those features in the first place.

Btw, it's not some magic feature set they spent years researching. Sub-$60 Soundcores have most of them, if not all.


These are anti-competitive measures, and it's not like this is the only thing they do like this. The charging cables were the same.


If this is about Lightning, what connector do you think Apple should have used? USB-C came out long after Lightning.


> USB-C [Aug 2014] came out long after Lightning [Sep 2012].

1 year 11 months :)

The old 30-pin connector before Lightning came from 2003.

Meanwhile it took until 2023 for iPhones to use USB-C.

    30-pin     2003 - 2012 (2014)
    Lightning  2012 - 2023 (2025)
    USB-C AAPL utter mess -
               iPhone 2023 -
    USB-C RoW  2014 -


How was USB-C on Apple a “mess” when it came out on the iPhone 15? It supported all of the standard USB protocols: video, networking, mass storage, audio, etc.


The transition to USB-C was spread out across product lines over many years, hence "mess". The iPhone is on a separate line in the table.


Yeah, I don't think the connector criticism of Apple really stands up to any scrutiny. 30-pin was strictly better than USB-based solutions when it came out, as was Lightning. They supported both of those for a very long time and kept tons of iDevice accessories around the world functioning.


Yes because Apple has a monopoly on - Bluetooth headphones you can use with Android devices??

Do console makers have to make sure that their accessories work with other consoles? Do TV manufacturers have to ensure their remotes work with other TVs?

And no, you never had to buy Apple-branded or licensed charging cables.


The camera on the back of the phone actually helps quite a bit with said data entry.


I normally ingest CSV files exported from my bank, then have to manually tag and relate them (like internal transfers).

I have a bunch of scripts to help, and I wrote a custom web scraper to pull the data, automating a lot of this, but plenty is still quite manual.
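
For what it's worth, the tagging step can be sketched in a few lines. This is a minimal sketch assuming hypothetical column names ("date", "description", "amount") and simple substring rules; real bank exports vary:

    import csv

    # Hypothetical substring -> tag rules; internal transfers get their
    # own label so they can be related to each other later.
    RULES = [
        ("TRANSFER TO SAVINGS", "internal-transfer"),
        ("GROCER", "groceries"),
        ("PAYROLL", "income"),
    ]

    def tag(description):
        upper = description.upper()
        for needle, label in RULES:
            if needle in upper:
                return label
        return "untagged"  # left for manual review

    with open("export.csv", newline="") as f:  # hypothetical filename
        for row in csv.DictReader(f):
            print(row["date"], row["amount"], tag(row["description"]))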


finance apps will soon automate that kind of thing, and most probably the phone app will have it before the web version

there's resource rot, it seems; desktop/web get less than the phone, and it shows

the hp printer/scanner app is way leaner than anything they've ever released on windows (not saying much, but still); same for my bank's app, it's a bit faster and better designed (features and ui)


Amazon Web Services, Inc. is a subsidiary of Amazon.com, Inc.


Neat idea, even if it doesn't end up going anywhere.

Out of curiosity, are you able to share what your source of data is? Isn't GeoIP data typically licensed?


I work for IPinfo, focusing on data adoption and supporting the wider use of our free services.

The dataset the user is accessing (IPinfo Lite) is licensed under CC-BY-SA 4.0 and does not come with an End User License Agreement (EULA).

Most free IP geolocation databases in the market do include restrictive EULAs. These usually require each individual user to register, obtain their own copy of the database, and limit usage to themselves only. In many cases, sharing the data or keys outside of the originally intended scope is considered a violation of the license agreement. Because of these restrictions, some providers also offer paid redistribution licenses, which can be expensive and are typically targeted at enterprises.

Our approach is different. By licensing IPinfo Lite under CC-BY-SA 4.0 WITHOUT an EULA, we allow anyone to share or redistribute the data freely, as long as proper attribution is given. The goal is twofold:

  - Make it easier for projects to use the data without legal barriers.
  - Ensure that any questions or issues about the data are directed to us rather than to project maintainers.

Some large open-source projects, major global enterprises, and government institutions are already using IPinfo Lite right now.


The data source is the IPinfo Lite MMDB file, which seems to be offered freely without restrictions. I'd love to offer comprehensive GeoIP attributes, but I'm afraid to even ask how much that DB download costs... I'm working on supporting new data sets now, like security CVEs, Shodan integration, etc.
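
For anyone curious, querying an MMDB file takes only a few lines with the maxminddb Python package; the filename and the "country" field below are assumptions, since the exact Lite schema may differ:

    import maxminddb

    # Open the downloaded database (hypothetical filename).
    reader = maxminddb.open_database("ipinfo_lite.mmdb")
    record = reader.get("8.8.8.8")
    if record is not None:
        print(record.get("country"))  # field names depend on the schema
    else:
        print("IP not found in database")
    reader.close()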


IP2Location LITE has free data on ZIP code and timezone for enrichment. You can consider it too.


There's a crowdsourced collection of ALPRs in OpenStreetMap. deflock.me/map has a display of that data.
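
If you'd rather pull the raw data yourself, here's a minimal sketch against the Overpass API. The generic man_made=surveillance tag and the Seattle-area bounding box are assumptions; the exact ALPR tagging the map filters on may be more specific:

    import json
    import urllib.parse
    import urllib.request

    # Overpass QL: all surveillance-tagged nodes in a (south, west,
    # north, east) bounding box around the Seattle area.
    query = """
    [out:json][timeout:25];
    node["man_made"="surveillance"](47.5,-122.5,47.8,-122.0);
    out body;
    """

    url = "https://overpass-api.de/api/interpreter"
    body = urllib.parse.urlencode({"data": query}).encode("utf-8")
    with urllib.request.urlopen(urllib.request.Request(url, data=body)) as resp:
        data = json.load(resp)

    print(len(data.get("elements", [])), "surveillance nodes in the box")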

