No, we should be fighting tooth and nail against these companies. They're not here to save us from ourselves. They're using public streets to alpha test (beta, if you want to be generous) autonomous lethal weapons, and then profiting off of it when it works.
I can't find anything saying Waymo has a thermal camera. They aren't expensive, certainly not compared to the LIDAR, and they provide highly discriminative input on "am I about to kill something?" They're not perfect, since foul weather and fog are likely to blind thermal, but these cars shouldn't be driving in suboptimal conditions until they have a track record of safety in optimal ones.
What criteria would you consider sufficient for deployment on public streets? My experience is that people opposed to AV technology usually aren't familiar with the level of validation that's been done and tend to have expectations that are either impossible or already met.
Waymo has experimented with thermal imaging in the past. I've never seen experiments indicating it's a particularly valuable modality for AVs, and high-resolution thermal cameras exceed the price of decent LIDAR these days. You can easily spend $10k+ on a FLIR sensor with a pixel count higher than four digits.
Waymo was started by Sebastian Thrun partly to save lives; he lost a friend to a car accident when he was 18. They have about 1/3 the accident rate of human drivers. Calling this stuff evil is kind of sad.
Correct. It's rather routine for wafers manufactured for "last call" orders on ASICs exiting production to be stored as wafers, since it isn't yet known how they'll need to be packaged.
Let's not forget that if it's not illegal now, it could be illegal in a matter of days. Add 12 if a president decides to sit on their thumbs; it's happened before.
It doesn't really matter, because the first question is: can the government suspend the contract (injunction?) while this is sorted out.
There's also the question of whether OpenAI operated in good faith (from a search: "Another sign of bad faith is withholding crucial information..."), and, of course, the South Korean government can step in as well. In fact, as a worldwide issue, any sufficiently large state (or group of states) can take issue with it.
OpenAI will have issues if they find themselves unable to buy power equipment (Schneider, Eaton). Or perhaps anybody associated with OpenAI's management or funding gets arrested the second they set foot in Europe. This is already a nightmare of an international incident.
Don't mind them. I've had a similar thing happen, but with powerline Ethernet. In your case, however, I'd be at least a little concerned about the building wiring.
In many analog pro audio applications, it's actually recommended that the shield be connected at one end only, for this reason. By convention (though not strictly out of necessity), the bond is typically kept at the receiving end, as that's almost always a device with a grounded power cord (such as a mixer). Many DI boxes feature a ground lift switch as a convenient way to achieve this. But you wouldn't want to disconnect it at both ends, as then the shield would have no effect at all.
Anyway, if your unshielded cables had problems that a shield would solve, but your shielded cables caused a different problem due to the bond at both ends, then this technique of using shielded cables but severing the shield at one end would get you the best of both worlds.
Huh, I had no idea that cables would have their shield grounded at both ends... Single-point grounding is such a standard in electrical design that the guidance is generally "do otherwise only if you have the ability to make many prototypes to nail down RFI issues".
If you're building an audio cable, your signal tops out at a few tens of kHz, so the cable acting as an antenna and picking up a signal in the MHz range isn't an issue. Similarly, you're not transmitting anything significant either. But a ground loop can easily ruin your day.
If you're building a cable for multi-Gbps data transmission, that ground-loop noise might as well not exist: it's basically DC. But ground your shield at only one end, and suddenly you're ruining everyone's Wi-Fi!
Building a device which needs high-speed data on one side, and analog audio on the other side? Good luck...
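To put rough numbers on the two regimes, here's a back-of-the-envelope sketch in Python (the 2 m cable length, 0.7 velocity factor, and 5 Gb/s link speed are assumptions for illustration, not measurements of any real cable):

    # Why the shield-grounding advice differs by frequency (illustrative numbers only).
    C = 299_792_458.0          # speed of light in vacuum, m/s
    AUDIO_BAND_HZ = 20_000.0   # rough upper edge of the audio band
    MAINS_HZ = 60.0            # ground-loop hum fundamental (50 Hz in much of the world)

    def quarter_wave_resonance_hz(length_m: float, velocity_factor: float = 0.7) -> float:
        """Frequency at which a conductor of this length is a quarter wavelength,
        i.e. where a floating shield starts radiating/receiving efficiently."""
        return velocity_factor * C / (4.0 * length_m)

    cable_length_m = 2.0  # assumed patch-cable length
    f_res = quarter_wave_resonance_hz(cable_length_m)

    # Audio case: the hum lands inside the band you care about; antenna effects don't.
    print(f"Mains hum at {MAINS_HZ:.0f} Hz vs. audio band up to {AUDIO_BAND_HZ/1e3:.0f} kHz")
    print(f"Shield-as-antenna resonance for a {cable_length_m:.0f} m cable: {f_res/1e6:.1f} MHz")

    # Data case: a multi-Gbps link's fundamentals sit at hundreds of MHz to GHz,
    # so 60 Hz hum is effectively DC, but a floating shield is now an antenna in RF territory.
    bit_rate_hz = 5e9  # assumed 5 Gb/s link
    print(f"60 Hz hum is {MAINS_HZ / bit_rate_hz:.1e} of a {bit_rate_hz/1e9:.0f} Gb/s signal (negligible)")

For a 2 m shield that first resonance comes out around 26 MHz, i.e. squarely in RF territory: irrelevant to audio, very relevant once your signals (and everyone else's radios) live up there.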
Ruled out the monitor(s)? There have been cases where they've backfed power, and they certainly backfeed EMI as well. And it could also be tied to FPS, assuming G-Sync/FreeSync.
If you have a multimeter, it's probably worth double-checking that the case has a low-resistance connection to the ground pin at the end of the cord. I'm assuming you've checked already, but as a shock hazard it bears repeating.
They're also claiming regulatory requirements as features. At least consumers, in addition to several governments, might be able to sue when it turns out to be a bunch of crap.
I'm not sure how you can call ChatGPT "ostensibly my own computer" when it's primarily a website.
And honestly, E2EE's strict definition (messages between user 1 and user 2 cannot be decrypted by the message platform)... is unambiguously possible for ChatGPT. It's just utterly pointless when user2 happens to also be the message platform.
If you message support for $chat_platform (if there is such a thing) do you expect them to be unable to read the messages?
It's still a disingenuous use of the term. And, if TFA is anything like multiple other providers, it's going to be "oh, the video is E2EE, but the 5 fps 'non-sensitive' 512×512 px preview isn't."
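To make the "strict definition" above concrete, here's a minimal sketch of textbook E2EE in Python using PyNaCl (pip install pynacl); the key names are hypothetical and this is a toy, not a real protocol:

    from nacl.public import PrivateKey, Box

    # user 1 and user 2 each hold a keypair; the platform only relays opaque bytes.
    user1_key = PrivateKey.generate()
    user2_key = PrivateKey.generate()

    # Sender encrypts directly to the recipient's public key.
    ciphertext = Box(user1_key, user2_key.public_key).encrypt(b"hello user 2")

    # A relaying platform that holds neither private key sees only ciphertext.
    # But if user 2 *is* the platform, then user2_key is the platform's own key,
    # and the E2EE property is technically satisfied while protecting you from nobody.
    plaintext = Box(user2_key, user1_key.public_key).decrypt(ciphertext)
    print(plaintext)  # b'hello user 2'

Which is exactly the "possible but pointless" situation: the math holds, the threat model doesn't.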
> it's primarily a website … unambiguously possible[sic] for chatGPT … happens to also be the message platform
I assume you mean impossible, and in either case that's not quite accurate. The "end" is the specific AI model you wish to communicate with, not the platform. You're suggesting they are one and the same, but they are not, and Google proves that with their own secure LLM offering.
But I’m 100% with you on it being a disingenuous use.
No, no typo: the problem with ChatGPT is that the third party that would be attesting that's how it works is just the second party.
I'm not familiar with the referenced Google secure LLM, but offhand, if it's TEE-based, Google would be publishing auditable/signed images and Intel/AMD would be the third party attesting that that's what's actually running. TEEs are way out of my expertise though, and there are a ton of places and ways for it to break down.
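For anyone curious what "Intel/AMD as the third party" amounts to, here's a deliberately over-simplified sketch of the verification step in Python using the cryptography library. Real attestation schemes (SGX DCAP, SEV-SNP, etc.) involve certificate chains and much richer report formats; every name and value below is hypothetical:

    import hashlib
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    # Stand-ins: the hardware vendor's signing key and the operator's published image.
    vendor_key = Ed25519PrivateKey.generate()
    published_image = b"auditable, signed model-server image v1.2"
    published_measurement = hashlib.sha256(published_image).digest()

    # The TEE measures whatever actually got loaded; the vendor-rooted key signs it.
    actually_running = published_image  # change this to simulate tampering
    quote = hashlib.sha256(actually_running).digest()
    quote_signature = vendor_key.sign(quote)

    def verify_attestation(quote, signature, vendor_public, expected):
        """Accept only if (1) the quote is vouched for by the hardware vendor's key
        and (2) the measured image matches the published, auditable one."""
        try:
            vendor_public.verify(signature, quote)
        except InvalidSignature:
            return False
        return quote == expected

    print(verify_attestation(quote, quote_signature, vendor_key.public_key(), published_measurement))
    # True -- and it flips to False if actually_running differs from the published image.

The point being that the party signing the quote (the silicon vendor) is distinct from the party running the service, which is exactly what a "trust us" setup lacks.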
> And honestly, E2EE's strict definition (messages between user 1 and user 2 cannot be decrypted by the message platform)... is unambiguously possible for ChatGPT. It's just utterly pointless when user2 happens to also be the message platform.
This is basically the whole thrust of Apple's Private Cloud Compute architecture. It is possible to build a system that prevents user2 from reading the chats, but it's not clear that most companies want to work within those restrictions.
> If you message support for $chat_platform (if there is such a thing) do you expect them to be unable to read the messages?
If they marketed it as end-to-end encrypted? 100%, unambiguously, yes. And they certainly shouldn't be able to read them without me, as the user, granting them permission to do so.