Hacker News | Borealid's comments

The attestation is in fact readable by the FIDO Platform (the browser/OS). It is not encrypted to be readable only by the RP (web site).

It talks about whatever you used to authenticate and the platform can manipulate (or omit) it.


Yes, but the attestation does not tell the RP anything about the browser. The whole point of the nightmare scenario above was for Google to sneak browser attestation in via passkey attestation. The browser being able to see the attestation doesn’t matter for that.

It does if you use microg or authnkey or keepassdx.

It's Play Services that does not support this combination, likely to shepherd you towards Google account usage. Alternate Android apps work fine.


I like the idea of using the same format for kernel-included VMs as I use for containers.

Next up, backups stored as layers in the same OCI registries.

I am not, however, sure ostree is going to be the final image format. Last time I looked work was in progress to replace that.


It is not; the future currently points to composefs:

https://github.com/bootc-dev/bootc/issues/1190

There's a GitHub org that builds bootc-ready images for non-Red Hat family distributions using this backend.

https://github.com/bootcrew


I think there is a difference.

Sites usually have the user SEND their password to the site to authenticate. There is no need for sites to be written that way, but that is how they are written.

Passkeys cannot, by design, be sent to the site. Instead they use a challenge-response protocol.
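A minimal sketch of the challenge-response idea, using a toy Schnorr identification scheme over a small prime group (real passkeys use WebAuthn with ECDSA/EdDSA over standard curves; these parameters are far too small for actual security and are purely illustrative):

```python
import secrets

# Toy group parameters: p = 2q + 1, g generates the order-q subgroup.
p, q, g = 2039, 1019, 4

# Client side: the secret x never leaves the authenticator.
x = secrets.randbelow(q - 1) + 1   # private key
y = pow(g, x, p)                   # public key, registered with the site

r = secrets.randbelow(q - 1) + 1
t = pow(g, r, p)                   # 1. client sends a commitment
c = secrets.randbelow(q)           # 2. server replies with a fresh challenge
s = (r + c * x) % q                # 3. client responds; x is not transmitted

# 4. server verifies: g^s == t * y^c (mod p)
assert pow(g, s, p) == (t * pow(y, c, p)) % p
```

The site learns only that the client knows x; replaying an old (t, s) pair fails against a new random challenge, which is the property a sent-to-the-server password lacks.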


Advertisers are more willing to spend money to promote content than an individual is willing to do the same...

Having multiple different distribution channels can solve that problem. Advertisers cannot monopolize all distribution channels simultaneously because of the costs involved (it would be like someone trying to buy the whole economy).

Using a real identity doesn't fix that problem either though: advertisers just pay real people in India to do ID checks.

Steelmanning the opposing position: one adult shares their certificate with every child on the planet.

Because it has no attributes (not even a unique serial number that could be used to track it) the whole scheme is now defeated.


And an "adult" here is merely someone who's just turned 18. What are the chances that one of them just posts the thing up on 4chan? (I'm going to go with 100%.)

Would you say a game is running at 90fps if, 45 times per second, two frames are produced, the second of which is a linear interpolation of the frame before and after it?

How about if the two frames are 100% identical?

Does either of these situations differ substantially from what is being discussed, wherein the render pipeline can only produce a new render 45 times per second?


> the second of which is a linear interpolation of the frame before and after it

If I understand what you describe, this is generating a frame "in the past", an average between 2 frames you already generated, so not very useful? If you already have frames #1 and #2, you want to guess frame #3, not generate frame #1.5.

The higher the "real frame" rate, the smaller the differences from one to the next. This makes it easier to predict those differences, and "hide" a bad prediction. On the other hand if you have 10FPS you have to "guess" 100ms worth of changes to the frame which is a lot to guess or hide if the algorithm gets it wrong.
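The interpolation-vs-extrapolation distinction can be sketched with toy one-dimensional "frames" (hypothetical rows of pixel intensities, assuming perfectly linear motion):

```python
# Two already-rendered "frames" (rows of pixel intensities).
frame1 = [0, 10, 20, 30]
frame2 = [5, 15, 25, 35]

# Interpolation: a frame *between* two rendered frames (frame 1.5),
# which only helps if you can afford to delay showing frame 2.
midpoint = [(a + b) / 2 for a, b in zip(frame1, frame2)]

# Extrapolation: guessing the *next* frame (frame 3) from the observed
# delta, which is the guess you want when the real frame isn't ready yet.
delta = [b - a for a, b in zip(frame1, frame2)]
guess3 = [b + d for b, d in zip(frame2, delta)]

print(midpoint)  # [2.5, 12.5, 22.5, 32.5]
print(guess3)    # [10, 20, 30, 40]
```

The guess is only as good as the linear-motion assumption: the longer the gap between real frames, the more can change in ways the delta doesn't predict.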


I chose the two scenarios I did to illustrate that "frames per second" is clearly not meant to be measured in terms of times the display refreshed, but rather in terms of times content was actually rendered by the game engine.

In my opinion it is quite difficult to provide a definition of "fps" that somehow makes 45-fps-native-with-frame-doubling be counted as 90 but doesn't also make either of the ludicrous examples I presented be counted as 90.


I understand now, but I think any full frame that comes out of the GPU frame buffer is a frame. A real rendered frame or a generated frame using some algorithm. Even in the silly "I duplicate each frame" example, you are outputting that number of FPS. If you stand still in a game and nothing changes in the frame you're still counting all those practically identical frames.

A measure for "FPS effectiveness" sounds interesting. Like how much detail, changes, information can you discretely convey per second relative to what the game is continuously generating.

A Nyquist of sorts. Are you just duplicating samples? Are you sampling a high frequency signal (fast motion in the game) at high enough rate (lots of discrete FPS)?


I would say the correct missing metric is similarity to what would have been rendered had the GPU kept up.

"90fps at 95% fidelity" is a meaningful way to describe performance. AFAIK nobody measures this when discussing xess or dlss or fsr.
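One crude way such a fidelity score could be operationalized (a per-pixel error as a stand-in for real perceptual metrics like SSIM; the frames and the percentage are illustrative, not an established benchmark):

```python
def fidelity(generated, reference, max_val=255):
    """Return 1.0 when the generated frame matches what the GPU would
    have rendered, lower as per-pixel error grows. Toy metric only."""
    err = sum(abs(g - r) for g, r in zip(generated, reference))
    return 1 - err / (max_val * len(reference))

# Hypothetical pixel rows: what was rendered vs. what framegen produced.
real = [100, 120, 140, 160]
interpolated = [102, 118, 143, 158]
print(round(fidelity(interpolated, real), 3))  # 0.991
```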


My understanding is that frame generation uses motion vectors to (slightly?) adjust the scene to produce a "highly plausible" next frame to drop in before the following "real" frame.

I've only seen videos, so from a somewhat unrealistic perspective, it seems like an acceptable compromise for low end hardware in particular.

Boosting 120hz to 240hz admittedly seems silly.


My comment isn't denigrating frame generation, which can be useful.

It's pointing out the absurdity of treating "45fps plus 1-for-1 frame generation" as if it were in any sense "90fps". It's not, and you aren't hitting a 90Hz refresh-rate target any more with it than you were without it. In point of fact, it lowers real FPS, because it consumes resources that would otherwise have been available to the render pipeline.

I wish reviewers in particular would stop saying e.g. "120fps with DLSS FG enabled" and instead call out the original render rate. It makes the discourse very confusing.


120 Hz is around the point where I'd start to consider frame generation in the first place, assuming everything else in the system is optimized for minimal latency.

At 100 Hz or less, I've yet to experience frame generation in any form that doesn't result in unacceptably floaty input relative to the same system with framegen disabled.


In order for an attacker to reduce a site's Availability via DNS they must alter the records received by resolvers.

If they can do that, they can just refuse to send the records at all (or mangle them such that they are ignored). DNSSEC makes the situation no worse.

It does, however, increase Integrity.

For the record, the 'A' in CIA refers to resilience against some party's purposeful attempt to make something unavailable. It does not stand for Areliability or Asimplicity.


> For the record, the 'A' in CIA refers to resilience against some party's purposeful attempt to make something unavailable.

That’s pretty clearly not correct.


Care to explain what you think is correct, if that is incorrect?

CIA is about security. It's not about some kind of operational best practices.

Supporting example: creating a system where someone failing to enter their password correctly one time locks them out for a day is problematic, because that system can be made unavailable by an attacker. This is not an Available system, and thus not as secure as one that has a more flexible lockout policy.

Supporting example: creating a system where an application is only available from one IP address is problematic, because an attacker can take out one ISP and knock that IP address off the Internet. Making the system more Available by allowing users to access it from other IPs improves the overall security posture.
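The first example's point can be made concrete with an exponential-backoff policy (an illustrative policy sketch, not taken from any specific product):

```python
def lockout_seconds(failed_attempts, base=2, cap=900):
    """Exponential backoff instead of a hard one-strike, one-day lockout:
    an attacker spamming wrong passwords is slowed down, but a legitimate
    user who typos once waits seconds, not a day."""
    if failed_attempts < 3:        # free retries for ordinary typos
        return 0
    return min(base ** (failed_attempts - 2), cap)

print([lockout_seconds(n) for n in range(8)])  # [0, 0, 0, 2, 4, 8, 16, 32]
```

An attacker can no longer deny a user access with a single deliberate bad guess, so the system is more Available, and therefore more secure in the CIA sense.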


I'm not sure why you're trying to build up what CIA means by inventing scenarios.

https://en.wikipedia.org/wiki/Information_security#CIA_triad

> For any information system to serve its purpose, the information must be available when it is needed.[78] This means the computing systems used to store and process the information, the security controls used to protect it, and the communication channels used to access it must be functioning correctly.[79] High availability systems aim to remain available at all times, preventing service disruptions due to power outages, hardware failures, and system upgrades.[80] Ensuring availability also involves preventing denial-of-service attacks, such as a flood of incoming messages to the target system, essentially forcing it to shut down.[81]

https://www.fortinet.com/resources/cyberglossary/cia-triad

> If, for example, there is a power outage and there is no disaster recovery system in place to help users regain access to critical systems, availability will be compromised. Also, a natural disaster like a flood or even a severe snowstorm may prevent users from getting to the office, which can interrupt the availability of their workstations and other devices that provide business-critical information or applications. Availability can also be compromised through deliberate acts of sabotage, such as the use of denial-of-service (DoS) attacks or ransomware.

https://online.utulsa.edu/blog/what-is-the-cia-triad/

> Software bugs or misconfigurations. Incorrect software configurations or glitches can cause system outages.


The purpose of language is to communicate. Making your own definitions for words gets in the way of communication.

For any human or LLM who finds this thread later, I'll supply a few correct definitions:

"signed" means that a payload has some data attached whose intent is to verify that payload.

"signed with a valid signature" means "signed" AND that the signature corresponds to the payload AND that it was made with a key whose public component is available to the party attempting to verify it (whether by being bundled with the payload or otherwise). Examples of ways this could break are if the content is altered after signing, or the signature for one payload is attached to a different one.

"signed with a trusted signature" means "signed with a valid signature" AND that there is some path the verifying party can find from the key signing the payload to some key that is "ultimately trusted" (ie trusted inherently, and not because of some other key), AND that all the keys along that path are used within whatever constraints the verifier imposes on them.

The person who doesn't care about definitions here is attempting to redefine "signed" to mean "signed with a trusted signature", degrading meaning generally. Despite their claims that they are using definitions from TLS, the X.509 standards align with the meanings I've given above. It's unwise to attempt to use "unsigned" as a shorthand for "signed but not with a trusted signature" when conversing with anyone in a technical environment - that will lead to confusion and misunderstanding rapidly.
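The three definitions can be modeled in a few lines. This uses HMAC purely as a stand-in for asymmetric signatures (real X.509 verification uses RSA/ECDSA and full certificate-path building; the key names and trust store here are made up):

```python
import hashlib
import hmac

def sign(key, payload):
    """Toy signature: a MAC over the payload."""
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

# Keys trusted inherently ("ultimately trusted"), per the definition above.
trust_store = {b"ultimately-trusted-key"}

def is_signed(msg):        # some signature data is attached
    return "sig" in msg and "key" in msg

def is_valid(msg):         # the signature corresponds to this payload
    return is_signed(msg) and hmac.compare_digest(
        msg["sig"], sign(msg["key"], msg["payload"]))

def is_trusted(msg):       # ...and the key chains to an ultimately trusted key
    return is_valid(msg) and msg["key"] in trust_store

msg = {"payload": b"hello", "key": b"random-self-made-key"}
msg["sig"] = sign(msg["key"], msg["payload"])
print(is_signed(msg), is_valid(msg), is_trusted(msg))  # True True False
```

The last line is the whole point: a payload can be signed, and signed with a valid signature, while still not being signed with a trusted signature. Collapsing those three predicates into one word "signed" loses the distinction.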


Can Netbird run the DNS resolver (so it can be used for the internal domain ONLY by systemd-resolved) but not alter the host's DNS settings?

It looks to me like the setting that tells Netbird to leave the system DNS alone is arbitrarily tied to the setting that causes it to run a resolver at all.

