That's overly simplistic. AML laws serve more purposes than simply catching money laundering: they discourage the underlying crime in the first place. That deterrence can't be counted, yet you're saying "AML doesn't catch anything!" You can't prove the negative.
AML also serves to buttress public trust in the financial system, because the public interest is being served; and, at least recently, without AML the housing crisis would be significantly worse.
"Let the criminals put their cash wherever they want, because it's too expensive to stop them" Isn't exactly the rallying cry you think it is
The public option was killed by Joe Lieberman, who caucused with the Democrats but refused to go along with them. Republicans were united in opposition, which made his defection critical. Kind of like how Joe Manchin throws a wrench in the works every few years in return for oil and coal extraction benefits for his company in WV.
The state-run exchanges solution was itself a compromise for Republicans and on-the-fence Democrats who feared a federal healthcare system would inevitably lead toward universal healthcare. They got their compromise, then refused to implement the exchanges in the hope that the law would die.
As I understand it, it's common for one Dem senator to "take the PR hit" when there are actually multiple members who don't want a bill to pass. A bit of kayfabe.
As someone who has studied financial crashes extensively, I agree with you, but I worry that we lack the regulations. All these bank-ish companies offering credit cards are having an impact on the money supply (every loan they issue becomes an asset somewhere), and at some point their interconnections with the financial system are going to become a risk. I assume most, if not all, fund their loans with money-market borrowing, for example.
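As a toy illustration of "every loan becomes an asset somewhere" (numbers entirely made up): each loan gets spent, lands as a deposit on someone's balance sheet, and a fraction of it is lent out again.

```python
# Toy sketch only -- parameters are invented for illustration.
initial_loan = 1_000_000.0
relend_fraction = 0.9  # assumed share of each deposit that is re-lent

total_credit = 0.0
loan = initial_loan
while loan > 1.0:
    total_credit += loan
    loan *= relend_fraction

# Geometric series: initial / (1 - fraction) -> ~$10M of credit from $1M of base funding
print(f"${total_credit:,.0f}")
```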
Then there's the broader question of whether this is good for productivity. If every company is a financial company, who actually makes tangible stuff?
What do you mean when you say these companies are offering credit cards? Aren't those cards still managed by Visa, Mastercard, AMEX, or Discover? My understanding is that they're just running the rewards system and putting their name on the card.
Visa and Mastercard just operate the network; they do not control the funds or take on any credit risk. Amex and Discover do operate as lenders, and they also operate their own networks.
Right, but what's the risk to the financial system paulusthe was talking about above in an airline partnering with Chase and Mastercard to offer a credit card? The lender in all cases isn't going to be the airline, right?
When you triple the money supply every couple years what's a few extra trillion here and there?
/s
Hyperinflation is coming, the kind that will be THE central issue of everyone's life for a while. When it happens, it won't be these guys' fault. I wouldn't blame airline and Home Depot credit cards for the coming hyperinflation; they're just a symptom of its approach.
How does hyperinflation even work in a country like the United States, in an age like ours?
If you needed a wheelbarrow of cash to buy bread at the bakery, there was still a tiny downward pressure on prices from the baker, in that the bread would eventually rot, so he might as well sell it now if you were just shy of the asking price.
If everyone's buying household goods off of Amazon, their pricing algorithm will never be even that forgiving.
When it last happened here, many workers were still being paid in cash as soon as the timeclock whistle went off on Friday. Now everything's direct deposit, but not necessarily instantaneous. At my last job, the funds were released at midnight on payday, but at my current job, for some reason, they're not released until the morning (when the business opens, I imagine).
Are people going to starve because they have the wrong bank, the money isn't there until several hours after everyone else's, and by then it has lost too much value?
What makes you think hyperinflation is coming? If anything, inflation seems to have peaked and is now starting to fall. The only way I can see hyperinflation happening is if there’s another major conflict, climate change causes some major simultaneous disasters, or some kind of black swan event like another pandemic. Of course, individual countries might see hyperinflation if they’re mismanaged (e.g. Argentina right now), but I can’t see it happening globally except in the cases listed above.
> If anything, inflation seems to have peaked and is now starting to fall.
Before folks make comments about currency still inflating (the gerund), let us stipulate that the noun "inflation" names a positive rate, and that this rate has recently decreased. Let us all be thankful that some amount of inflation exists, which in a broad sense reflects a growing and dynamic world (how closely remains to be seen), as opposed to deflation.
It's more complex than just that. Sure, there are people trying to make a dollar who are willing to do bad science in order to get the result they want. But there's also the general publication bias against replication studies: who wants to read them, and who wants to do them? (They're not usually seen as prestigious academically: most academics want to test their own ideas, not those of others.)
And then there are cultural differences, in which people sometimes see a negative result as a "failure", don't publish it as a result, and instead skew the data and lie their asses off to gain prestige in their career. As long as nobody double-checks you, you're good.
> But there's also the general publication bias against replication studies: who wants to read them, and who wants to do them? (They're not usually seen as prestigious academically: most academics want to test their own ideas, not those of others.)
Academia seems like the ideal place for this. Why not require a certain number of replicated studies in order to get a degree? Universities could then be constantly churning out replication studies.
More importantly, why do we bother taking anything that hasn't been replicated seriously? Anyone who publishes a paper that hasn't been verified shouldn't get any kind of meaningful recognition or "credit" for their discovery until it's been independently confirmed.
Since anyone can publish trash, having your work validated should be the only means of gaining prestige in your career.
The magazine I worked for at the time was about to publish an article claiming that DeepMind had failed to comply with data protection regulations when accessing records from some 1.6 million patients to set up those collaborations—a claim later backed up by a government investigation. Suleyman couldn’t see why we would publish a story that was hostile to his company’s efforts to improve health care. As long as he could remember, he told me at the time, he’d only wanted to do good in the world.
In the seven years since that call, Suleyman’s wide-eyed mission hasn’t shifted an inch. “The goal has never been anything but how to do good in the world,” he says via Zoom from his office in Palo Alto, where the British entrepreneur now spends most of his time.
Thanks, I hate him already.
A messianic SV hand waver who doesn't care about anything but his special mission, doesn't care about breaking rules, and reflexively gaslights people who complain. As if "Why don't you support the mission bro?" is a reasonable response to "you should protect people's information."
There is a real argument on the other side, though. We're dealing with technologies that thrive when given access to loads of data. Health data is heavily regulated, and rightly so, but that regulation greatly hinders innovation.
Hell, medical data access problems are bad enough even when we aren't talking about innovation: simple problems in sharing data between different systems/providers lead to bad outcomes all the time.
So it's a case where fragmentation and regulation are already leading to bad outcomes for patients, and where innovation is suppressed because of lack of access, especially to population-level data.
Even without AI, imagine being able to identify various kinds of outbreaks by correlating nearby diagnoses in real time, and flashing local nurses a notice that there's a serious food poisoning outbreak happening, for their consideration when people call in with early symptoms. We should be able to do this easily.
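Roughly the sort of correlation I mean (field names, window, and threshold all made up; A05.9 is the ICD-10 code for unspecified bacterial foodborne intoxication):

```python
from collections import Counter
from datetime import datetime, timedelta, timezone

def flag_outbreaks(diagnoses, window=timedelta(hours=48), threshold=5):
    """diagnoses: iterable of (timestamp, zip_code, icd10_code) tuples.
    Returns (zip, code, count) for any pair seen >= threshold times
    within the window -- a crude real-time cluster alarm."""
    now = datetime.now(timezone.utc)
    recent = Counter((z, dx) for t, z, dx in diagnoses if now - t <= window)
    return [(z, dx, n) for (z, dx), n in recent.items() if n >= threshold]

# Five A05.9 diagnoses in one ZIP inside 48 hours would trip the alarm,
# and that alert could be pushed to nurse lines in the area.
```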
We should protect people's information, but we also need to build a road to a better tomorrow. The current rules are, in fact, broken, and we need new rules which lead to better outcomes.
"regulations slow innovation" is not a valid reason to ignore any regulation one finds annoying.
That said, my problem isn't that he broke the rules. My problem is that, when confronted about having broken the rules, he lied about it then retreated into "why don't you believe the mission bro?" As if his solution is the only possible solution to the problem.
He's full of himself, doesn't care about rules, and gaslights those who criticize him. His messianic do-gooderism is bullshit marketing cover for him doing what he wants.
"regulations slow innovation" is not a valid reason to ignore any regulation one finds annoying.
Eh. We'd still be stuck with taxis, Prohibition, and 55 MPH speed limits if we followed this dictum. Not to mention paying taxes to the King of England.
I struggle with why this access is an issue. If the health data were used in making insurance decisions, marketing, employment, or any sort of way that has a personal impact on the people in the data set then absolutely not. But presumably personally identifying information is in no way relevant to the task of training models, so what specifically is the concern? Medical data is a constellation of observations over time. Why is this particularly sensitive, especially if it’s not associated with any specific identifiable person?
HIPAA seems to have created a view that anything related to medical care is the most secret information in all the world, when in fact it’s pretty useless to anyone but yourself and people who want your money in some way that exploits your health situation. HIPAA itself only really erects barriers against disclosure of your health data to insurers and providers without your consent.
Suitably anonymized, I'd say "Yes." You can't make progress in any field without data. Usually, the more data the better, as long as it's good data. If archaic, misguided laws have to be broken to save lives, well... so be it.
The real trick is keeping personally-identifiable data out of the hands of insurance companies.
Well if they are only going to use the data for good purposes and not for nefarious purposes or for sale, then there is no downside to just writing it into their contracts and privacy policy.
Just add an irrevocable guarantee that they will never sell, or transfer to someone who will sell, any data; and if they do, the company will immediately dissolve and become encumbered with a debt of the highest seniority, equal to all lifetime company revenues, owed to the people whose medical data they hold. The C-suite and Board of Directors must also provide a personal financial guarantee equal to their entire compensation package, and must provide sworn testimony yearly that they are engaging in no business deals that include the sale of private medical data.
Since they do not intend to ever use the data for bad purposes, they have nothing to lose by keeping their word. Literally no downside to them since they were not going to do it anyways and it provides peace of mind to the public, a win-win.
I mean, do people no longer have any concept of ethics? And I don't mean this in the abstract sense, I mean literal practical everyday ethics. Understanding the concept of tradeoffs and consequences of actions and the rest.
I feel like we've built a church (or possibly a cult) whose mantras include "Innovation at all costs. Liquidity at all costs..." among a few others, with no view whatsoever as to what the implications are.
And I'm seriously starting to think HN and general SV culture are part of the problem. Specifically here on HN, the number of times I've seen a justification end in one of those thought-terminating clichés is legitimately concerning. So much reasoning boils down to "this is good because it improves innovation, and because it improves innovation it is good." And there's not only zero thought on the implications of taking the suggested action, but what seems like an unawareness that one should even consider the consequences at all. It's as if, having reached the "innovation is good" state of the thought state machine, the machine should terminate and return success.
It's absolutely mind-boggling to me that anyone could post a comment saying yes, we should give up medical privacy, without a single sentence on the negative consequences of doing so. "Why would one need to think about the negative consequences? It has a positive consequence, so clearly we should do it."
Is it a gap in CS education? General education? Is it the personality type of us engineers? Is it nature? Nurture? Both? Is it social? Others don't step in to provide that feedback when it happens? How do we even approach it?
I didn't say that we should give up medical privacy at all. Simply that there is, in fact, a trade off, and that the current regime looks to me both overly restrictive and poorly implemented.
Wholly getting rid of medical privacy is obviously a bad idea. But perhaps we could agree that there are research purposes where greater access to data would be helpful, and that creating exemptions under certain circumstances could help on the research side. (E.g., the data is securely siloed and access-restricted, and stays inside the research org only.)
(It's mind boggling to me that people have such poor ability to think in anything but stark binaries. It's a total failure of critical thought which degrades the quality of policy discussion. How do we even approach it?)
There is no real trade off between medical privacy and research. That's a total red herring. Researchers can already ask patients for consent to use their data. Many patients will agree, especially if researchers explain the potential benefits and take responsible steps to safeguard the data.
HIPAA regulations also allow researchers to use de-identified data.
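For concreteness, HIPAA's Safe Harbor method lists 18 categories of identifiers that have to be stripped; here's a toy sketch covering just three of them (field names hypothetical):

```python
def deidentify(record):
    """Toy Safe Harbor-style scrub covering only names, ZIPs, and dates."""
    out = dict(record)
    out.pop("name", None)                      # names: removed entirely
    if "zip" in out:
        out["zip"] = out["zip"][:3] + "00"     # ZIP: 3-digit prefix only
        # (Safe Harbor zeroes even the prefix for areas <= 20,000 people.)
    if "birth_date" in out:
        out["birth_date"] = out["birth_date"][:4]  # dates: year only
    return out

print(deidentify({"name": "Jane Doe", "zip": "94305",
                  "birth_date": "1987-03-14", "dx": "E11.9"}))
# {'zip': '94300', 'birth_date': '1987', 'dx': 'E11.9'}
```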
I think this question boils down to individualism vs collectivism. If you think it is ok to override individual rights in order to "benefit the many" then you will be in favor of your proposal. If you view individual rights as _unalienable_ then you won’t value collective benefits over the individual rights to privacy & self-determination.
I don’t see how your particular viewpoint is any less "binary" than the other one. Collectivism or individualism is the binary option, where you land on the collectivist side of the coin.
It's mind boggling to me that people still resort to ad-hominem attacks when discussing viewpoints here on HN, when it's clearly against the rules. How do we even approach it?
These people genuinely terrify me because they obviously don't have the faintest idea of what consent means or what FRIES looks like. They don't see other people as equals, but as mere tools to use and manipulate in any way they desire.
It’s weird you mention fragmentation and regulation as the culprits when it’s pretty obviously consolidation (to Epic and Cerner) that led to this and it’s regulation (like 21st Century Cures Act) that’s actually undoing it… by requiring that consolidated players can’t disrupt fragmentation efforts (via FHIR).
At least in the US, basically Epic and Cerner took over the EHR market and took effective ownership of all medical records and actively prevented care providers, patients, and researchers from easy access to those records (including for migrations to a competitive EHR — basically impossible).
Two pieces of legislation and guidance, in 2016 and 2020, basically required that any EHR allow providers and patients to pull their records, to which Epic's first response was, "okay, go for it, but the data model of those records is proprietary." The government had to issue additional guidance that records must be exportable via a standard interface (e.g. HL7 FHIR), which extricates the records from any EHR's internal data model.
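The practical upshot is that a standard FHIR read looks the same no matter whose EHR sits behind it. Roughly (endpoint and patient id hypothetical):

```python
import requests

BASE = "https://ehr.example.com/fhir/R4"  # hypothetical FHIR R4 endpoint

# Standard FHIR "read" interaction: GET [base]/[resourceType]/[id]
resp = requests.get(f"{BASE}/Patient/12345",  # hypothetical patient id
                    headers={"Accept": "application/fhir+json"})
resp.raise_for_status()
patient = resp.json()
print(patient["resourceType"], patient.get("birthDate"))
```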
The pre-FHIR, pre-21st Century Cures Act situation was pretty horrible for America’s biomedical research posture: the US simply wasn’t capable of doing the sort of national-scale research that, e.g., the NHS system can do, which is especially valuable for understanding things like COVID and for doing research on any treatments/vaccines being used in the wild.
During COVID it became clear that a lot of Americans have this implicit idea that there’s a way for researchers to just “look at what’s going on” in the wild, and there literally isn’t. It just recently went from effectively impossible (due to consolidation of EHR records and affiliated commercial interests locking them down) to merely very hard (due to privacy, data quality, data harmonization, and still commercial interests). That change happened via regulation, and it will open the door to fragmentation (while maintaining interop).
A key point here is that the centralization in this case is from a for-profit company.
If the records were centralized by the government, the issues that we see now would likely not be as prevalent.
There would be other talking points and issues, to be sure, but the point is that there are many ways to 'centralize' information, even including, ironically, the technologies famous on HN for "de-centralizing" information.
> There is a real argument on the other side, though
No there isn’t. The rest of your comment can be safely disregarded thanks to you opening with this.
“We need to build a better tomorrow!” We will: the people actually trying to, within accepted norms. Not SV grifters who’ve destabilized our entire society and ended privacy, all for ad revenue.
Perhaps you should bother to read the rest of the comment. It's not just about a better tomorrow, it's about a better today that we're missing out on because data handling is such a mess in the health sector.
I have a higher risk of death because health data handling sucks. That's a trade-off. You might like that the status quo is what it is, but it doesn't mean the trade-off isn't real.
Is there not a win-win situation through post-quantum homomorphic encryption?
I'm not an expert in the area, but I imagine it's possible to set up a centralized system that contains pretty much all patient data for a nation in a completely encrypted state. Each institution/hospital could then apply for access to run computations whose only output is an aggregated metric of interest, such as the map example given above, which could provide real-time notifications to medical personnel in the area.
I realize that many will come to say that homomorphic encryption right now requires a long time and many CPU cycles to compute what would be trivial on plaintext, but it would still be a huge improvement over the time it takes to be notified in the world today.
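To sketch the shape of the idea with something simple -- toy Paillier, which is only additively homomorphic and, unlike the lattice-based schemes I have in mind above, not post-quantum at all -- an aggregator can sum encrypted counts without ever seeing an individual value:

```python
import math, random

# Toy Paillier keypair -- illustration only; real keys use ~2048-bit primes.
p, q = 293, 433
n, n2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)
mu = pow(lam, -1, n)  # modular inverse of lambda mod n

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return pow(n + 1, m, n2) * pow(r, n, n2) % n2  # g = n + 1

def decrypt(c):
    return (pow(c, lam, n2) - 1) // n * mu % n

# Each clinic encrypts its local case count; multiplying ciphertexts
# adds the plaintexts, so the aggregator only ever learns the total.
counts = [3, 0, 7, 2]
agg = 1
for c in (encrypt(m) for m in counts):
    agg = agg * c % n2
assert decrypt(agg) == sum(counts)  # 12
```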
Additionally, performing studies in academia would likely be a far better experience if the data was available in a single place, and aggregate information could be gathered with a single API.
Sadly, I doubt that any of the luddites in the room (or French regulators) will be willing to trust the technical solutions to the problems...
Differential privacy has the same problem: users have to trust that it's applied appropriately, which doesn't help with the vocal group who say we can never trust anyone who makes their paycheck by touching computers, or anyone who lives in California, or...
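To be fair, the mechanism itself is tiny and auditable; a minimal Laplace-mechanism sketch for a count query:

```python
import numpy as np

def dp_count(true_count, epsilon):
    # A count changes by at most 1 when one person is added or removed,
    # so sensitivity = 1; noise scale = sensitivity / epsilon.
    return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

print(dp_count(412, epsilon=0.5))  # e.g. ~409.7; smaller epsilon = noisier
```

The trust problem remains, though: you still have to believe this is what actually ran on the real data.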
Finance is highly regulated, and SBF also claimed to only want to do good. He even recently had 250 pages of thoughts and memoirs released that underscore his own self-confidence and belief in his own innocence. Should SBF have had more room to innovate?
The argument isn't so much about how to go about technical progress but about whom to trust, and how: a Suleyman, an SBF, etc. Some will do the hard work, meticulously build both pre- and post-regulation products, diligently deal with stakeholders, and succeed or fail to move the market. Being comfortable saying divisive things on the record is a pretty key lapse in rigor.
I feel like part of the problem is that there’s a lot of difficulty in giving access to this kind of data for a specific purpose, and a specific purpose only (please correct me if I’m wrong). This is a problem that can be (and should be!) solved with time.
Advanced cryptographic techniques allow you (as the data owner) to restrict the function(s) you can compute on the data. In addition to that, they ensure that the only thing the parties on the other end would learn is the result of the function computed. But of course, we’re still a ways away from these techniques being practical, as the field of ML moves at a much higher pace.
I've never understood the privacy boner. Sure, people can abuse information - can exploit or punish based on it.
But there are also so many positive uses of information. Research, understanding, a fuller picture of the world, helping people.
The need for privacy feels antisocial and backwards to me. We're not living in a totalitarian state where people get killed for tweeting the wrong thing, so let's not act like it. Part of maturing is accepting others for the good + bad, and you can't do that with a wall up.
Oppression is inherent to capitalism: to keep efficiency, it requires creating and exploiting a mass of suffering people with no good options, who do the most undesirable work for very bad pay, even though that work should be worth more given how undesirable it is.
Work involving sewage, or construction (a lot of the time safety requirements are not met because employers want to save money, and people get hurt in accidents because of that), or even being an Amazon warehouse worker with no option to go to the bathroom, etc.