I've co-authored a book that many of the models seem to know about. The models consistently get the names of the authors wrong and quote the material with errors. If the canonical representation of our work is now embedded within AI models, don't we deserve to have it quoted and represented correctly and fairly? If you asked a human who had read the book, there is a fair chance they would give you a reference to the source material.
I do concede that the book does contain a distillation of material that is also available from other sources, but it also contained a lot of personal experience. That aspect does seem to be lost in this new representation.
I am not saying that letting AI models read the material is wrong, but the hubris in the way models answer questions is annoying.
I've come to the realization that getting multiple external monitors (and then more than one set of them) to work correctly is not a "trivial" problem. Any decision is going to be wrong for someone, or at best result in a compromise.
The CalDigit support response sort of suggests that some monitors (or interfaces that fake it) aren't providing unique EDID data.
EDID is supposed to contain a serial number. Connecting two monitors that report identical EDID data, and so claim to be the "same" device, is going to be a problem: which one is on the left and which on the right? How can you know or remember? Basing it on which port they are plugged into will also lead to frustration if they get swapped or a hub is used.
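For reference, the identity fields the OS has to go on live near the start of the EDID base block. A minimal sketch of reading them (offsets per the VESA EDID 1.x layout; the sample bytes in the usage note below are made up, not from a real monitor):

```python
import struct

def edid_identity(edid: bytes):
    """Return (manufacturer, product_code, serial) from a 128-byte EDID base block."""
    assert edid[:8] == b"\x00\xff\xff\xff\xff\xff\xff\x00", "bad EDID header"
    # Manufacturer ID: three 5-bit letters packed big-endian into bytes 8-9.
    mid = struct.unpack(">H", edid[8:10])[0]
    mfg = "".join(chr(((mid >> s) & 0x1F) + ord("A") - 1) for s in (10, 5, 0))
    # Product code (16-bit) and serial number (32-bit) follow, little-endian.
    product, serial = struct.unpack("<HI", edid[10:16])
    return mfg, product, serial
```

If two monitors return the same (manufacturer, product, serial) triple, because the vendor left the serial field zeroed, the OS has nothing stable to key "left" vs "right" on, which is exactly the problem described above.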
What if you have two sets of the same monitors at home (L and R), and two more of the same monitors at work (L and R). What do you want the experience to be? (mapping Apps to the correct displays when moving environments is also an issue that, ahem, hasn't always worked well!)
I'd want (and I've been TOLD) the plug-and-play experience to be the same at home and at work after the one-time manual setup of the monitor placement. At home the built-in display is on the right; at work it is on the left. NB: I really cannot see how to do any of this unless the monitors can uniquely identify themselves.
Recording these setups/placements could also be a technical challenge. If external monitors are plugged in in a different order, or with different timing, does that create a new monitor "environment"? Given what I've observed, I think it does.
My wife has a setup like the above with 2x LG 4K displays (two USB-C Thunderbolt cables/connections) in two locations. It has been "mostly working, but slow" on her old Intel MacBook Pro and "working well" on a newly acquired Apple Silicon MacBook Pro running the current macOS Monterey.
Doing the numbers, home has 7 configs. IR = Internal-right: (IR), (IR, LH), (IR, RH), (IR, LH, RH), (LH), (LH, RH), (RH). Work also has 7 configs. IL = Internal-left: (IL), (IL, LW), (IL, RW), (IL, LW, RW), (LW), (LW, RW), (RW).
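The seven-per-location count is just the non-empty subsets of three displays (2^3 - 1 = 7). A quick sketch, using the labels from the paragraph above:

```python
from itertools import combinations

def configs(displays):
    """All non-empty subsets of the given displays, i.e. every possible setup."""
    return [c for r in range(1, len(displays) + 1)
              for c in combinations(displays, r)]

home = configs(["IR", "LH", "RH"])  # internal-right, left-home, right-home
work = configs(["IL", "LW", "RW"])  # internal-left, left-work, right-work
print(len(home), len(work))  # 7 7
```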
Ramble: with the (exaggerated) 400 open windows, any display plug/unplug event is going to cause a re-render storm: a new DPI, colour depth, and location for every window on each plug/monitor transition. Ouch.
I am a dinosaur and this is an X11 flashback. How does any of this even work for Linux? An X11 app opens a connection to a "Display", and the DPI (size), colour depth and other parameters used to be fixed at that point in time; the client does all of the rendering for the window, providing a bitmap to the X11 server. In the past it was not possible to move windows between incompatible display parameters, making dynamic changes impossible. How does moving windows between monitors of different DPI/colour depth work in X11 now? I need to look into this. Prediction: I will likely be a casualty in the war of display rendering.
Final ramble. With some of these new "standards" (looking at you, USB-C), it seems the goal is to get the license payment but there is no requirement that your product passes a conformance test in order to ship it.
I think this is a poorly sourced article that has not been reliably fact-checked.
This is at least the 2nd time on HN that a report has suggested Ashley Gjovik was complaining about "toxic chemicals at work". The previous article referred to something published at "The Verge" - "her office is in an Apple building located on a superfund site"
<https://www.theverge.com/2021/9/9/22666049/apple-fires-senio...>
The NYTimes ought to be very embarrassed about this stupid error.
I note that the personal toxic superfund site issues were previously discussed on HN.
I am also somewhat confused about the persona of "Cher Scarlett". I do not think it is a real identity, and I have serious doubts that they, or their alter ego, is actually employed at Apple. Reputable journalists could verify this with employment and tax records, but that would require doing the necessary due diligence.
Twitter suggests that "Cher Scarlett" is located in Seattle. (This would make them a remote worker for Apple). Also seemingly making quite a lot of tweets. When does this person do any work for Apple? Is this person the reason why Apple isn't responding to Security reports and the bug bounty program?
After reading many tweets, I am failing to detect any comprehension or demonstration of a computer security "mindset", something that, in my experience, tends to manifest itself in the personality of security folks over extended periods of time.
I am unable to determine what sort of security role this person has.
I am not suggesting any malice or ill will towards "Cher Scarlett". I am trying to present this as a technical analysis.
In summary, I really question whether "Cher Scarlett" is actually a real person in the way they are presenting themselves to be.
As the other commenters have mentioned, your "technical analysis" is very wrong here. I don't want to rehash the details that others have covered about why you're wrong, but I do want to take a moment to say that what you've posted pretty much perfectly embodies why many people hate "the orange site". It's these kinds of faux-objective breakdowns that really hurt the site as a whole.
I'm willing to believe that you meant well with your comment, but I think you need to realize that even when you attempt to be objective, bias creeps in readily. It starts with which stories you even decide to call out. You might feel that this person is fake or lacks the position that she says she has, but fact checking this inherently involves a selection process. Remember when Hacker News decided to "check" whether Katie Bouman had "actually worked on the black hole image"? This is where problems arise, because it's obvious this doesn't happen for everyone–just people who are thought to be "fakes", which is something selected by decidedly subjective criteria.
The second problem is that as you go through your analysis you bake in assumptions–in this case many that are wrong–and use it to arrive at an "objective" answer. Trying to reason from your armchair and present it under a guise of factualness is the biggest problem with any kind of "rationalist" analysis on the internet, including the kind that Hacker News is unfortunately known for. Here you literally have no idea of how Apple works internally, and at one point you openly claim that her personal Twitter doesn't demonstrate "a computer 'mindset'" (how can you possibly evaluate this objectively, even putting aside questions of why her Twitter is the right way to judge this?). Trying to submit it as "technical analysis" is just wrong, period.
It's good to be skeptical, and apply your own reasoning to things you read online. But try to be mindful of which things you're choosing to apply it to, as well as any flaws of your own you may be injecting when doing your own evaluation. Hacker News should be a place of healthy curiosity and discussion, but to do that we can't possibly accept this kind of content.
I appreciate your response and agree with what you have said.
I cannot undo what I've said - it is clearly very incorrect. I would like to retract it.
Do you have any recommendations for getting better at critical thinking? How can it be practiced in a way that doesn't get you banned when making mistakes?
I really would like to avoid making these sorts of mistakes in the future.
The first, and possibly the most important part, of critical thinking is to recognize that you can't be right all the time, and embrace the instances where you're wrong as learning opportunities. I am glad to see that you seem to be pretty good at that already :)
Aside from that, I don't actually have anything concrete for you, unfortunately. What's worked for me is reflecting on my own biases and confidence in the information I am bringing to the conversation. In your case here it's clear that you started your comment with "I think this person is fake" and constructed a (tenuous) chain forwards to arrive there using assumptions rather than concrete information. We all do this to some extent, but specifically taking time to look for this kind of thing can help reduce the chances of it happening. Another skill you can learn (generally, by interacting with people you disagree with) is the ability to run your own devil's advocate on your comments. It sounds a bit strange to say it, but a lot of what I write gets much stronger pushback from myself before I even send it than it does once it's out for others to respond to.
As for practice, you can do this anytime you interact with anyone. As long as you are interacting in good faith, with an open mind, and with genuine curiosity, people are unlikely to ban you. What you might want to keep in mind, however, is the context surrounding the conversation: getting something wrong about Java is regrettable, sure, but ultimately not a big deal. But outright calling someone a fraud is a pretty serious accusation, especially considering that certain groups of people are often more affected by this problem. When talking about real-life people, you should be very careful about the conclusions you draw and what their consequences may be.
Cher is a real employee, but anything more I'll leave unsaid out of respect for her privacy. Apple employees can easily look up other employees, and there are easy ways to tell whether someone is an Apple employee should they choose to reveal some of those things.
Also I don't know whether she's remote or onsite, but there are multiple orgs with offices in Seattle - one of my friends just got hired as an engineering manager onsite for Apple in Seattle.
Some of these comments could use some fact checking of their own :) .
I'm not sure I understand the rest of your argument. You think this person a) doesn't exist b) doesn't work for Apple c) does exist and works for Apple remotely, because there is no Seattle office d) works for Apple in the critical path for security reports and bug bounties, and is therefore why Apple is (allegedly) not responding to them because she's spending all her day tweeting e) works for Apple in an unknown capacity, which you cannot figure out f) tweets too much g) doesn't tweet enough about a specific security "mindset," which all people in a security role have?
This "technical analysis" doesn't seem to hold up, and I don't see any particular reason to suspect this person doesn't exist. I suspect the journalist's due diligence was more sound.
I started skeptical, but by the end tended to believe Ashley Gjovik. So I don't know about Cher, and maybe Ashley kind-of got radicalized by her experience, but she does have a lot of evidence about the apartment issue.
Are you suggesting there was not a real complaint about Ashley’s office being on a superfund site, or that you don’t find the claim credible and that therefore the nyt shouldn’t have reported it?
FISA orders are written by a judge. Only judges can write these; that is the literal definition of a warrant. Warrants require specifics: person X, person Y. These are enumerable. There is paperwork.
PRISM, based on the data available, is all about consuming data WITHOUT a warrant -- vacuuming up data associated with identities that are not subject to ANY court order. This violates laws and possibly (USA) constitutional rights in quite a few ways. PRISM likely exists.
I ask of "sneak" to confirm their assertion that "PRISM == FISA orders" is true. Please present this "evidence" and the evidence of connection. If you cannot, you are, by default, distributing misinformation or bad logic, or at worst trying to mislead.
(my naive searching suggests that "sneak" is definitely not in a position to make these claims)
Judges can write lots of orders but that doesn't make them search warrants which are defined by the US constitution as requiring probable cause. FISA court orders are not search warrants.
FISA Amendments Act (FAA) section 702 is the legal basis claimed by the NSA in a secret interpretation by the FISA court as the basis for PRISM targeted collection without search warrants, including US persons/citizens.
I am neither. A similar exchange with sneak has happened previously.
It is a frustrating exchange.
The words that have been used attempt to tie two controversial topics together: PRISM and FISA. The logic then seems to be that because companies can now report on FISA orders, they also willingly participated in PRISM.
What has been said seems to ignore that the FISA reporting by companies shows the number of identities that data has been provided for. PRISM on the other hand looks like a program to collect as much data as possible, regardless of identity.
At this point it is going to just be agree to disagree.
Sorry, but I do not believe that is what the leak revealed.
There was a slide that indicated that data from Apple and other companies was now part of the PRISM program.
I am not trying to deny or refute Snowden's whistleblowing. I think it is highly likely that PRISM exists. What I dispute are the speculations that the companies listed are complicit.
The 2012 date is quite suspicious: it is precisely the year a new Apple datacenter in Prineville came online. Facebook also has a datacenter there, literally next door, and Facebook also appears on those slides. I am not sure who else is now in the area.
I wonder where all of the network cables go?
I personally think that PRISM works by externally intercepting data communication lines running to these facilities. Similar to the rumors that international comms links have been tapped. The companies themselves have not participated, but the data path has been compromised.
The NSA has previously tapped lines (AT&T), but they made the mistake of doing it inside the AT&T building. Google "Room 641A at 611 Folsom Street, SF". That is where "beam splitting" was done. This eventually leaked out. The NSA isn't stupid, I doubt they wanted to repeat that sort of discovery. The best way to keep something from being discovered is to not let people know. This is why I think it is believable and likely that the companies listed on the slides have no idea what has been done.
I will also note that PRISM and "beam splitting" are a rather cosy coincidence.
I think it is most likely that PRISM is implemented without the knowledge of anyone except the NSA and in Prineville there is some "diversion" of network cabling to a private facility that is tapping the lines.
> I personally think that PRISM works by externally intercepting data communication lines running to these facilities. Similar to the rumors that international comms links have been tapped. The companies themselves have not participated, but the data path has been compromised.
That wouldn't work without the company being at least passively complicit. Links between datacenters are encrypted. If you want even basic PCI-DSS compliance then links between racks must be encrypted (and a rack that uses unencrypted links must be physically secured). And properly implemented TLS or equivalent (which is table stakes for a company that takes this stuff at all seriously) can't be broken by the NSA directly (and if it could be then everything would be hopeless). Thus the MUSCULAR programme where the NSA put their own equipment in Google's datacenters - that's really the only way you can do it.
Remember how the legal regime in the US works with National Security Letters. Companies can be, and are, required to install these backdoors and required to keep their existence, and the existence of the letter itself, secret. Of course Google, Apple, Facebook, and every other company with a significant US presence is in receipt of one of those letters and has installed backdoors; the NSA aren't stupid, what else would those laws and their funding be for?
PCI-DSS does not mandate encryption between racks or datacenters; maybe your own PCI-compliant policy does. I've worked in PCI-DSS environments (one of which was Tier 1 with on-site cardholder data) and we didn't need to have encryption between racks.
Site-to-site VPNs are common for smaller companies too; those are encrypted, but the thing with encryption is that there are physical limits to throughput.
For a standard CPU I think it was 3.5 Gbps or so in 2018; if you want to go much higher (like 9 Gbps) you need special hardware offloading, which is expensive.
What is cheap (comparatively) is laying your own fibre cables.
Then it's "basically" secure, and you can have a single cable carrying 100 Gbps over a mile.
This is what Google used to do, and I suspect what Apple used to do; it is what many people do.
Google's solution does not involve site-to-site VPNs. Google's solution was to make all internal network traffic encrypted; the lines do not get implicitly encrypted just because traffic goes over that path, the way a VPN would.
This thinking is based on trusting "encrypted" links. Did you build the hardware that drives these links? Did you audit the Verilog or the code that operates this hardware?
I know of at least one way to implement a "secure" TLS product that you could purchase and deploy in your datacenter that would leak to the NSA all of the keying material needed to compromise every data connection. You would be 100% in compliance with all technical requirements, but your data would be utterly transparent. You would not be able to detect this using an internal or external audit.
Did you purchase your rack-to-rack equipment from the equivalent of a Trojaned "SolarWinds" vendor? The SolarWinds event was a "commercially" botched exploit.
Sorry, NSLs do not scale. It is an ever-expanding "circle of trust".
Containing secrets is only effective if they are shared only within "your shared culture" and your culture is very stable: nobody leaves because of a difference of opinion.
>That wouldn't work without the company being at least passively complicit. Links between datacenters are encrypted.
They aren't always. In fact the Snowden leaks were the actual event that got many of these companies to do just that.
You mentioned MUSCULAR, but the revelation there was that the DC-to-DC connections were not in fact encrypted. I believe that program consisted of taps on the DC connections, since SSL was added and then removed at the front end, leaving the replication traffic in the clear. Google seemed to be relying on the physical security of those links and on them not being on shared infra. [1]
WARNING: the link below has classified info from the Snowden leaks. If you have a security clearance, don't click it.
This can be entirely explained if the NSA had already performed a "SolarWinds"-style supply chain attack on the vendor that supplied the TLS encrypt/decrypt endpoints. Is the vendor of that hardware known or discoverable?
Google would have no idea the traffic could be intercepted. The NSA could use the Smiley face, perhaps with a nudge, nudge, wink, they are now a "supplier of data" on slides.
A 13-gallon "garbage" bag, commonly used to dispose of waste in the USA, weighs 22 grams. Each AirPod, according to Apple, weighs 4 grams. (Yeah, I'm just talking about plastic, but plastic really is part of this.) The actual material (resource) cost of an AirPod, over the 3 years you cite as its lifetime, is utterly insignificant compared to the daily and weekly waste produced by each person in a western culture. The average waste per human in western cultures seems to be about 500+ kg per year. Over those 3 years, that is the equivalent of about 375,000 AirPods. This is for materials disposed of each week, without thought; the numbers I'm using do not include any materials that are recycled. What is the actual useful "lifetime" of this garbage or trash? Perhaps a week? Or, if it was a Starbucks coffee cup lid, perhaps 30 minutes. Doubtful it was 3 years.
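A back-of-the-envelope check of those figures (all numbers are the ones quoted above, not independently verified):

```python
# Figures as quoted in the comment: ~500 kg of waste per person per
# year, 4 g per AirPod, over the cited 3-year lifetime.
waste_per_year_g = 500 * 1000   # ~500 kg/year, in grams
airpod_g = 4                    # Apple's quoted weight per AirPod
years = 3

equivalent_airpods = waste_per_year_g * years / airpod_g
print(equivalent_airpods)  # 375000.0
```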
The outrage at not being able to repair a 4 g AirPod, when compared to the complete disregard for wasting other resources, rests in my opinion on a very dubious moral stance.
I get that the price paid reflects the engineering that went into the product, and you might feel it is outrageous to pay that amount for 2x 4 g of plastic.
Any argument about repairability needs to be considered in the context of the actual resources used to make the product: what the cost to humanity was, compared to what it will cost to repair the product, including the human cost.
When the GPLv3 was written, it was deliberately crafted to put commercial companies into a conflict situation:
(A) Give up selling audio with DRM (DRM did go away)
(A) Give up selling video with DRM
(A) Give up using HDMI as an output port
(A) Give up using digital signatures to secure the integrity of software
-OR-
(B) Give up using GPL v3 software, GCC, Bash
Pretty sure most customers actually want many of the things in (A) and a minority actually care about (B). The items in (B) can be post-installed by those that really want it. Sure those in (B) have a loud voice, but, HN, are they really representative (democratically a majority)?
How would you make the choice between (A) or (B)?
I would bet that if a subpoena for FSF/GNU email were issued, there would be a lot of messages related to manipulation and coercion around licensing and re-licensing. The sort of stuff that, when associated with companies, turns into scandal, lawsuits and monopoly investigations. (GCC came close to being what could be considered a monopoly compiler.)
I do believe that open software has a place and is a really good choice. My opinion is that the GPLv3 is completely toxic, spawned from negativity and is ultimately anti-open source. I will never release any software under the GPLv3.
> (A) Give up selling audio with DRM (DRM did go away)
> (A) Give up selling video with DRM
> (A) Give up using HDMI as an output port
> (A) Give up using digital signatures to secure the integrity of software
Does GPLv3 really prevent an OS from doing those things? I think it would just prevent it in the same executable as GPLv3 software?
I don’t actually understand Apple’s “allergy”. On the iPhone I’m sure they object to the Tivoization clause (which coincidentally is why I like to use GPLv3), but I’m pretty darn sure macOS doesn’t fall afoul of anything...
How exactly is the above supposed to work when /usr/bin/python doesn't exist? If Apple chose to break the #! contract by doing something else, how would you feel about that?
This answer is both a great answer and a terrible one.
This answer implies that all "python" binaries, across all operating systems and distributions for all time, are backwards and forwards compatible, with no work needed. Guaranteed 100% equivalent.
What about Python 2 vs. Python 3?
This isn't true and cannot ever be. The same problem exists for other scripting and interpreted languages.
This approach transfers the burden of choice and setup 100% to the person running the script. If you have two scripts that require different dependencies, you will encounter this problem. I think this is what encourages folks to hard-code paths that enumerate explicit dependencies.
Lots of the "package" managers for these scripting languages also don't deal with this very well. They advocate a "do it my way" or "do it yourself". Different languages do it different ways.
Ultimately, the person wanting to run the script just wants to make it run; they will follow the instructions to make it work and along the way will make "global" changes, which will affect what happens for any other script in the future. Their setup will likely have diverged from anyone else's "base install".
The above is mostly from my observations using Ruby, not Python. However, my few attempts at using pre-packaged complex Python "recipes" have always resulted in similar conflicts.
> This answer implies all "python" binaries across all operating systems and distributions for all time, are backwards and forwards compatible, no work needed. Guaranteed 100% equivalent.
Practically speaking, how does using env versus the absolute path make this any better or worse? You as the script author don't know what version of Python/Ruby/Bash/etc lives in /usr/bin. Maybe you could do some kind of automagic detection based on the user's OS, but any such assumptions are likely to go stale over time.
All env does is give users a choice about where to put their binaries. The versioning situation is a real problem, but I don't think using env makes it better or worse.
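For what it's worth, `#!/usr/bin/env python` just defers the choice to a PATH search at run time: `env` runs the first executable with that name found on $PATH. `shutil.which` performs the same lookup, so you can see on a given machine what `env` would pick (or that it would pick nothing at all):

```python
import shutil

# `env NAME` searches the directories in $PATH and runs the first
# executable called NAME; shutil.which does the identical lookup.
for name in ("python", "python3"):
    path = shutil.which(name)  # None if nothing on PATH answers to `name`
    print(f"#!/usr/bin/env {name} would run: {path}")
```

On a modern macOS box the first line typically prints None, which is exactly the failure mode being discussed: the script author chose a name, and the user's environment gets to decide whether anything answers to it.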