> Does privacy of Netflix ratings matter? The issue is not “Does the average Netflix subscriber care about the privacy of his movie viewing history?,” but “Are there any Netflix subscribers whose privacy can be compromised by analyzing the Netflix Prize dataset?”
For this type of repetitive application I think it's common to "fine-tune" a model trained on your business problem to reach higher quality/reliability metrics. That might not be possible with this chip.
Just looked - Microsoft Authenticator doesn't appear to work. I might be able to get off of it but it will take some prep. My banks are supported so that's good.
Because many admins are horrible and disable TOTP for "security".
My uni does it, and I've had to use the only alternative option, a cell call. I rigged Tasker to automatically answer and play the needed tone so I don't need to carry the phone with me.
Microsoft Authenticator should work on GOS. I can only find a single person saying it doesn't, and there are plenty of reasons it might not work for them (VPN, too-strict exploit protection settings). Meanwhile there are multiple people mentioning it working fine.
> if you could exclude all of the R&D and training costs
LLMs have a short shelf life. They don't know anything past the day they're trained. It's possible to feed or fine-tune them with a bit of updated data, but their world knowledge and views are firmly stuck in the past. It's not just news - they'll also trip up on new syntax introduced in the latest version of a programming language.
They could save on R&D but I expect training costs will be recurring regardless of advancements in capability.
Recently llama.cpp made a few common parameters default (-ngl 999, -fa on), so it got simpler: --model, --context-size, and --jinja generally do it to start.
We end up fiddling with other parameters because tweaking them can give better performance for a particular setup, so it's well worth it. One example is the recent --n-cpu-moe switch, which offloads experts to the CPU while filling all available VRAM and can give a 50% boost on models like gpt-oss-120b.
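A minimal invocation under those new defaults might look like the sketch below. The model filename, context size, and --n-cpu-moe value are placeholders for illustration; the right expert count depends on your VRAM.

```shell
# Simple start: -ngl 999 and -fa on are now defaults, so just point
# llama-server at a GGUF file (path here is a placeholder).
llama-server --model ./gpt-oss-120b.gguf --context-size 8192 --jinja

# Hybrid setup: keep dense/attention layers on the GPU but offload
# MoE expert weights to the CPU. The count (20) is setup-dependent;
# lower it until VRAM is as full as possible.
llama-server --model ./gpt-oss-120b.gguf --context-size 8192 --jinja \
  --n-cpu-moe 20
```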
Do you really need an H200 for this? It seems like something a consumer GPU could do. Smaller models might be ideal [0], as they don't require extensive world knowledge and are much more cost-efficient and faster.
I use Readdle Documents to sync PDF folders with my server PC via FTP. The free version supports PDF highlighting and simple annotations, basic file management, and automatically syncs everything back.
Basically, yes: it may write per-file snapshots (never per-commit) locally, but there is a separate routine that transparently repacks the whole thing with deltas.
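You can see both behaviors with plain git commands in a throwaway repo, a rough sketch of the snapshot-then-repack flow:

```shell
# Scratch repo: each commit stores full per-file snapshots as loose
# objects under .git/objects/ (no deltas yet).
git init demo && cd demo
git config user.email "you@example.com"
git config user.name "You"
echo "v1" > file.txt && git add file.txt && git commit -m one
echo "v2" > file.txt && git commit -am two
find .git/objects -type f | grep -v pack   # lists loose snapshot objects

# The separate routine: repack everything into one delta-compressed
# packfile and drop the now-redundant loose objects.
git repack -ad
ls .git/objects/pack/                      # a single *.pack plus its *.idx
```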
There's a heading control to include rotation in link: https://earth.google.com/web/@3.63731074,-23.1618975,-2690.7...