
Would you prefer that militaries have less-capable software to make targeting decisions?


Perhaps it would be preferable at least not to mix civilian health data or regular business data with mass surveillance data, and with military-industrial-complex and kill-chain data. It would make sense to want different kinds of personal data kept in separate places, not thrown around between companies with quite different interests, or collected together within a single company involved in quite different industries. So why does it not make sense to this company's apologists?


Are you claiming palantir will put a back door in their software and steal NHS data?

If so, is there any example of them ever doing this to a customer, or is it baseless speculation?

Alternatively, are you claiming the NHS is giving Palantir data and usage rights?


It doesn't matter whether they do or not; the desire to keep separate things separate can exist on its own. It might not be about any of that at all, but simply about the kinds of industries some companies are involved in.

Again, kind of amusing how that immediately devolves into "are you making an accusation".


How capable do you think it is at this moment? I guess we need 30 more years for the software to get better, so that fewer than 20 thousand children die in the Gaza genocide.


I would prefer that militaries do not deliberately genocide civilians and antagonize non-combatants.


That's a "motherhood statement"[1] - you haven't answered the question.

Militaries make targeting decisions with data. That's entirely separate from whether they have been ordered by a civilian government to target something, and Palantir do not control that part of the decision making (you as a voter do! You did vote, right?)

1. https://en.wiktionary.org/wiki/motherhood_statement


No it’s not. It’s totally conceivable that the (perceived) quality of targeting data would contribute to the decision of whether to run a mission at all, and if so how extensively.


isn’t that essentially true of any technology that reduces the civilian casualties of a conflict?


The companies involved definitely want you to think that part of their noble goal is reducing civilian casualties. As far as I can see, though, that is pure propaganda.


You can reduce civilian casualties by reducing the number of people considered civilians.


i am not saying that is the case here, all i am saying is that your argument would apply to any technology that lets you better differentiate/target enemies vs. civilians, which suggests to me it is overbroad.


You are reading perhaps more generality than I intended. To be clear, I am talking about the present greater-Anglo-American military-industrial complex, driven by present ideologies, in which the distinction between “enemy” and “civilian” itself is extremely debatable.


Absolutely.

And that the people who stand to benefit the most from another war might want to filter/target that data in a way that makes war more probable?

I mean, I know it's a stretch. Especially with how benevolent our current class of billionaires are. But just imagine a guy who thinks money is more important than anything else. I know... another stretch. lol.


> Palantir do not control that part of the decision making (you as a voter do! You did vote, right?)

You're not actually suggesting that the company providing the data isn't at all part of that process, are you?

Can you, for a second, imagine a company collecting/forwarding only data that's beneficial to its core objective? Especially one that's led by a guy who quite literally benefits off of a war?


I am 100% sure you have absolutely no idea what Palantir actually do, and I suggest you go actually read about it. There are plenty of resources out there which will explain in considerable detail what the product is, what the services are, how the services get provided, and the benefits. For example, one of Palantir's premier operations is "forward deployed engineers" - which is super handy for organizations that work in government, because it means the engineers come to you and do the installs and setup, but don't keep or even have access to any of the data; the relationship is between you and the specific employees - who in turn need to have Federal clearances - and not Palantir corporate.

This arrangement is extremely conventional, but most companies hate doing it and so don't, unless they're operating with the expertise to manage those types of orgs (which is usually only profitable if you have a unique advantage, or specialize in seeking a lot of contracts and then navigating the data handling rules to realize - hopefully - some synergies).

I don't like Thiel, but his detractors are also very obviously ignorant as to how any of the Federal government normally works.


lol, found Karp's burner account

You're not actually contending that people at Palantir don't need clearances, are you?


Can't even get STS tokens. RDS Proxy is down, along with SQS and Managed Kafka.
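(For anyone poking at it from their side: a minimal boto3 sketch, assuming configured AWS credentials, that checks whether STS will even identify the caller - if this call fails, nothing that needs fresh credentials will work.)

    import boto3
    from botocore.exceptions import BotoCoreError, ClientError

    def sts_healthy(region="us-east-1"):
        """Return True if STS can identify the caller, i.e. tokens are issuable."""
        sts = boto3.client("sts", region_name=region)
        try:
            identity = sts.get_caller_identity()
            print("STS OK, account:", identity["Account"])
            return True
        except (BotoCoreError, ClientError) as exc:
            print("STS unavailable:", exc)
            return False

    sts_healthy()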


Not just you


What's unethical about selling to DoD?


Anthropic specifically are the people who talk about "model alignment" and "harmful outputs" the most, and whose models are by far the most heavily censored. This is all done on the basis that AI has a great potential to do harm.

One would think that this kind of outlook should logically lead to keeping this tech away from applications in which it would be literally making life or death decisions (see also: Israel's use of AI to compile target lists and to justify targeting civilian objects).


Why do you think humans would make better life or death decisions? Have we never had innocent civilians killed overseas by the US military as a result of human error?


The problem with these things is that they allow humans to pretend that they are not responsible for those decisions, because "computer told me to do so". At the same time, the humans who are training those systems can also pretend to not be responsible because they are just making a thing that provides "suggestions" to humans making the ultimate decision.

Again, look at what's happening in Gaza right now for a good example of how this all is different from before: https://en.wikipedia.org/wiki/AI-assisted_targeting_in_the_G...


With self-driving cars, some human will be held responsible in case of an accident, I hope. Why would it be different here? It seems like a responsibility problem, not a technology one.


I'm not talking about matter of formal responsibility here, especially since the enforcing mechanisms for stuff like war crimes are very poor due to the lack of a single global authority capable of enforcing them (see the ongoing ICC saga). It's about whether people feel personally responsible. AI provides a way to diffuse and redirect this moral responsibility that might otherwise deter them.


I hear where you are coming from, but if an AI company is going to be in this field, wouldn't you want it to be the company with as many protections in place as possible to avoid misuse?

We aren't going to stop this march forward; no matter how unpopular it is, it will happen. So, which AI company would you prefer be involved with the DOD?


"Avoid misuse"? This is the United States Military we're talking about here. They're directly involved in the ongoing genocide in Gaza at this very moment. There is no way to be ethically involved. Their entire existence is "misuse".


I see from your username that your opinion on this matter was likely extremely set-in-stone before reading my comment, or the article (if you did).


Do you really not know? It's a difficult question to answer in an HN thread, because on one hand, it requires a review of the history of empire and war profiteering. But on the other hand, it's just obvious to the point of being difficult to even articulate.


What you’re describing is the result of the issue being complicated, not obvious.


Not invalidating your concerns, but I don't see a strong reason not to do it, considering that every other nation is going to leverage this tech.


Is it unethical for a drywall installer to accept a contract for a building on a military base?


Depends. Is that military base Gitmo?


It's not unreasonable to take such a position, yes.

Look, if you believe that:

a) humanity is headed toward sustained peace

b) a transition from the current world order to a peaceful one is better done in an orderly and adult fashion

...then yes, at some point we all need to back away from participation in the legacy systems, right down to the drywall.

My observation, especially of the younger generations, is that belief in such a future is more common than it has ever been, and it's certainly one I hold.


Actions within that system may be unethical: certainly nobody is defending what America did to Cambodia, or countless other war crimes. But you're painting participation in the system as unethical. Therefore, Ukrainians defending their homeland are unethical.

Let me reframe what you said in terms of christianity:

----

If you believe that:

a) Jesus is our savior

b) The salvation of humanity depends on accepting (a)

...then yes, at some point everyone needs to back away from other religious systems, right down to atheism.

----

I'm not trying to make light of what you believe, but framing others' participation in a system you don't believe in as unethical is exactly what leads to oppression of religious minorities and other outsider groups. It's a tactic of religion, not reason.


If you live in the US, the taxes you pay directly fund the DoD. So if you sponsor their activities, why can't Anthropic do business with them? Which other company would you rather have get their (your) money?


Yes of course on some level, people who pay taxes to violent imperial actors are doing a disservice to humanity, and are in some sort of moral quandary.

We all wish that everyone who has ever lived in such a situation had the bravery to resist. Right?

But I don't think that makes forbearance of such resistance equivalent to taking money from that same actor in exchange for expanding its capability. Those are related but distinct types of transaction.


This might make sense if you believe the US is an evil empire, the DoD is doing bad things, and AI will help the DoD do even worse things. But it's not so black and white, is it?


Paying taxes is not voluntary, unlike business deals.


Living in the US is voluntary.


For large swaths of the population it is not. Moving is expensive, for one. Obtaining citizenship elsewhere is non-trivial (and often also expensive). There are non-monetary costs as well, like having to leave your friends and extended family behind.


Taxes don't directly pay for military spending. If tax revenue, for whatever reason, dropped off a cliff, they'd continue giving money to the DoD, and just increase debt / money printing to cover the difference.


If there's not enough money from taxes, they will borrow/print more to cover total deficit (not specific to DoD). Otherwise, tax money will go directly to DoD.


Genuine question, and with due regard to some of the valid concerns you have: what would your opinion on this have been in 1940-1945? What about the Cold War?


Yeah, I don't get what could be bad about selling to one of the largest exporters of death and misery in the world either.




Not everyone believes defense contracts are inherently unethical, or at least that they are any more unethical than all of the other consumers GenAI firms are already serving. Given that a (if not the) main business proposition for GenAI is massive reductions in employment costs (which means unemployment and massive economic disruption), this is not a business sector built on any ethical high ground.


Google is claiming the root cause is in some of their central IAM services, which would have a cascading effect on the rest of their services.
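(To illustrate the cascading pattern - an illustrative sketch, not Google's actual architecture: when every service hard-depends on a central auth check and fails closed, an IAM outage looks like an everything outage.)

    # Illustrative sketch only - not Google's actual architecture. Shows why
    # an outage in a central IAM service cascades: every other service fails
    # closed when it cannot authorize the request.
    class IamDown(Exception):
        pass

    class CentralIam:
        def __init__(self, healthy=True):
            self.healthy = healthy

        def authorize(self, principal, action):
            if not self.healthy:
                raise IamDown("IAM unreachable")
            return True  # real checks elided

    class StorageService:
        def __init__(self, iam):
            self.iam = iam

        def read(self, principal, key):
            # Fail closed: no auth decision means no data, even though
            # the storage backend itself is perfectly healthy.
            self.iam.authorize(principal, "read:" + key)
            return "contents of " + key

    iam = CentralIam(healthy=False)
    try:
        StorageService(iam).read("alice", "bucket/object")
    except IamDown as exc:
        print("storage request failed, but storage itself is fine:", exc)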


Where did you see this information? Was it on a social media channel? I do see the IAM services in the list of affected services in the incident report.


check https://status.cloud.google.com/incidents/ow5i3PPK96RduMcb1S...

> Multiple GCP products are experiencing impact due to Identity and Access Management Service Issue


Scroll up. It's literally in this HN comment section, highly upvoted.


It was not there when I posted the reply that you are replying to; you replied two hours after I posted it.


How do these detect tsunamis? They must be observing elevation changes, right? Is that GPS based?


Trilateration can tell you where a point is relative to three sources, but you need four to determine elevation, because each measurement is the radius of a sphere, not a circle.

But at sea there's not much to obscure satellite signals, so I believe resolving buoy position was a solved problem back when GPS car navigation still sucked balls, because tall buildings make everything harder: you need a lot more satellites overhead to have three or four in view at the same moment.
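(The textbook version of that argument, not from the thread: each satellite i at known position (x_i, y_i, z_i) contributes one pseudorange measurement rho_i, and the receiver clock offset Delta t is a fourth unknown alongside position - hence four satellites for four unknowns:)

    \sqrt{(x - x_i)^2 + (y - y_i)^2 + (z - z_i)^2} + c\,\Delta t = \rho_i, \qquad i = 1, \dots, 4

Four equations, four unknowns (x, y, z, Delta t).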


i think they monitor pressure waves somehow

here is an interactive map, looks like some of them are picking up something https://www.ndbc.noaa.gov/obs.shtml?lat=13&lon=-173&zoom=2&p...
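(Pressure is indeed how the deep-ocean ones work: NOAA's DART stations pair the surface buoy with a seafloor bottom-pressure recorder, and a passing tsunami shows up as a small change in the weight of the water column overhead. A minimal sketch of the hydrostatic conversion, illustrative numbers only:)

    # Hydrostatic conversion: a bottom-pressure anomaly maps to sea-surface
    # height as eta = delta_P / (rho * g). Illustrative numbers only.
    RHO_SEAWATER = 1025.0  # kg/m^3, typical seawater density
    G = 9.81               # m/s^2

    def surface_height_anomaly(pressure_anomaly_pa):
        """Convert a seafloor pressure anomaly (Pa) to surface height (m)."""
        return pressure_anomaly_pa / (RHO_SEAWATER * G)

    # A few-centimeter open-ocean tsunami reads as a few hundred pascals:
    print(surface_height_anomaly(500.0))  # ~0.05 m of sea-surface rise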


Can they recover this OTA considering the systems can’t even boot?


I’m sure the patch itself can be fixed, and there will be a workaround to boot up the machine to fix it. My only concern is the BitLocker keys. If the hard drive is encrypted by Windows and no backup of the key has been made, the system admins will have to activate their disaster recovery plans for these devices. I hope they have those too, but hope isn’t a strategy!


Why would the BitLocker keys not be recoverable?


If the keys aren’t backed up, you will be locked out of the system: as soon as you try to boot into Safe Mode to perform that workaround, you will be asked to enter the key manually (or restore it from a backup on a USB drive). If you don’t have, or don’t know, the key, you will have an encrypted drive with all of your data locked inside.
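(Before it comes to that, admins can check what protectors exist with the built-in manage-bde tool - a minimal sketch, assuming Windows and an elevated prompt; worth running per volume before you ever need the key:)

    # Sketch: list BitLocker key protectors (including the numerical recovery
    # password) for a volume, via the built-in manage-bde tool.
    # Requires Windows and admin rights.
    import subprocess

    def show_recovery_protectors(drive="C:"):
        subprocess.run(["manage-bde", "-protectors", "-get", drive], check=True)

    show_recovery_protectors()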


So far the fix requires going into recovery mode and removing/renaming the CrowdStrike driver file; then you can boot into Windows from there. It will probably be a sysadmin task, depending on the organisation's setup (roughly like the sketch below).
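(The widely reported manual fix amounts to this - normally typed at a Safe Mode / recovery-environment command prompt rather than run as Python; the path and filename pattern are as publicly reported, so treat it as a sketch:)

    # Sketch of the widely reported workaround: quarantine the bad CrowdStrike
    # channel file so the sensor driver stops crashing at boot.
    from pathlib import Path

    DRIVER_DIR = Path(r"C:\Windows\System32\drivers\CrowdStrike")

    for f in DRIVER_DIR.glob("C-00000291*.sys"):
        # Rename rather than delete, so the file can be inspected later.
        f.rename(f.with_name(f.name + ".bad"))
        print("quarantined:", f.name)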


I don’t know Windows systems. I’ve read it’s causing Blue Screen of Death.

I take that to mean that systems can’t even boot. Right?

Can this be fixed over the air?


Right now the workaround is doing brain surgery on the system in safe mode, so probably no OTA fix.


BSOD == kernel panic, if that clears it up.


Not an airline pilot / medical examiner, but I do read r/flying quite regularly.

Your story is not uncommon. The general wisdom on DWIs is that yes, it does hurt your chances of getting into the airlines, but if it was (1) minor/borderline, (2) a long time ago, and (3) you have shown a pattern of recovery, it is not a career ender.


I am still going to go through the motions and earn my hours, criminal history be damned. I was under .15 at the time, which, as I understand it, helps me significantly.

Also looking at helicopter/flight medic but for now I am starting back to school in January to finish my AAS in Cybersecurity followed by BS in CS.


Ousted from his board chairmanship, but still works at OpenAI.

