"Give me ten sentences spoken by the most innocent of men, and I will find something in them with which to hang him." -- Voltaire.
All sorts of companies (not just Google) manage to store rather more than ten sentences these days.
This is why data privacy should be considered so important. Sooner or later all sorts of people (including the government) discover this treasure trove, and some of them will use it to hang you.
If you think it's not possible for ordinary citizens to care about data privacy, maybe visit Germany. The number of times even pretty regular people asked me to do or not do something in the name of "Datenschutz" was pretty large!
I double-checked and initially thought I had indeed made a mistake in the attribution here.
However, while Richelieu is the most commonly cited source of the quote, the attribution is actually somewhat disputed, and it has been credited to various people (who may or may not have been (mis)quoting Richelieu themselves), including, apparently, Voltaire.
So I get off on a technicality :-/ . I do promise to be more careful with quote attribution in future.
I've let Google track my location history for over 10 years now. On top of being incredibly useful (and fun) for remembering where I was and when, I always imagined it would come in handy as an alibi if I was ever accused of a crime.
This isn't true. Contemporaneous evidence may often be admissible in court, subject to other legal tests and challenges regarding its relevance, reliability and integrity.
Any evidence that can corroborate your story will be useful to you, so digital trails are defensive tools for innocent people, even though appearing on CCTV or near a crime could be enough to trigger further investigation, including questioning, detainment, or other forms of inquiry.
That’s not how any of this works.
An alibi shows the investigators that you aren’t a suspect. It happens before anything goes to trial. I’m not in favor of location data collection, but once it is collected it can be an easy way to show you weren’t involved.
The evidence would have to be weighed against the totality of the evidence. They couldn't just claim you handed the phone to somebody else without showing evidence that this was the case. Assertions without evidence are generally not persuasive, and in criminal cases the prosecution must prove that you committed the crime beyond "reasonable doubt". The burden of proof is on the prosecution.
And if the digital trail of the mobile device were corroborated, say by you appearing on CCTV at a location matching the GPS at some point, or by chatting to somebody on the phone, it would further undermine the insinuation that you didn't have the device during this time period.
In practice reasonable doubt is generally ignored by juries. Interviews of post conviction juries show they often convict people who they have serious doubts actually committed the crime. They say stuff like, “If he’s innocent it will be reversed on appeal.”
Risking a bench trial is poor odds, as judges are significantly more likely to convict when things aren't clear. Roughly 3/4 of the time they agree, but when they disagree, judges are ~5x more likely to convict. The difference seems to be largely bias on the side of judges.
While the system is not perfect and errors do occur, the claim that juries "generally" ignore the standard of reasonable doubt is a sweeping statement. The principle of "innocent until proven guilty beyond a reasonable doubt" is a cornerstone of many legal systems and taught and emphasized to jurors in instructions before they deliberate.
It’s a sweeping statement backed up by actual evidence.
There's a common opinion that someone only gets to trial when there's significant evidence they are guilty. Which on the surface is a reasonable assumption, but that bias has real impact on people's behavior.
Taking a plea deal as an innocent person is often a very wise decision.
My statements didn't include citations, but my argument did reference multiple studies and post-verdict interviews, a subtle difference.
Here's a quote for my 3/4 agree, 5x more likely to convict when they disagree: "The judge and jury in the Kalven-Zeisel survey of 3,500 criminal cases agreed in 78% of the cases on whether or not to convict. When they disagreed, the judge would have convicted when the jury acquitted in 19% of the cases, and the jury convicted when the judge would have acquitted in 3% of the cases" https://criminal-justice.iresearchnet.com/forensic-psycholog...
Feel free to look up some more, there’s some variation here.
The term is quite literally meaningless to some people even now: "In one case, a jury asked for a 'layman's' explanation of 'reasonable doubt,' and an appellate court said the trial judge acted properly by simply rereading the original charge."
I could go on, but instead I am going to simply ask you for some demonstration as to why you think it is commonly applied.
> Yet you made no reference to it in your poorly executed attack.
I am not attacking you and I did make direct reference to this.
You wrote:
> In practice reasonable doubt is generally ignored by juries.
> Interviews of post conviction juries show they often convict
> people who they have serious doubts actually committed the crime.
> They say stuff like, “If he’s innocent it will be reversed on appeal.”
I wrote:
> While the system is not perfect and errors do occur, the claim
> that juries "generally" ignore the standard of reasonable doubt
> is a sweeping statement.
You then wrote:
> It’s a sweeping statement backed up by actual evidence.
I then wrote:
> Actually your post containing the sweeping statement doesn't
> contain any evidence.
It's correct that you didn't support the claim.
Your own evidence didn't substantiate your claim, it directly refuted it.
You posted a Kalven-Zeisel study comparing the decision-making processes and outcomes of judges and juries in trials, which concluded that judges and juries generally agree on verdicts in a high percentage of cases (78%). In fact, it is stated that "when they disagreed, the judge would have convicted when the jury acquitted in 19% of the cases, and the jury convicted when the judge would have acquitted in 3% of the cases—a net leniency rate of 16%". The study that you've given me shows that when the jury and judge disagree, the jury is more lenient than the judge. This directly refutes your subsequent claim that "Interviews of post conviction juries show they often convict people who they have serious doubts actually committed the crime. They say stuff like, 'If he's innocent it will be reversed on appeal'" and indicates that juries err on the side of reasonable doubt rather than ignoring it.
The article you have subsequently posted is about the interpretations and articulation of what "reasonable doubt" means to juries. It does not substantiate any claims about "reasonable doubt" being generally ignored by juries or make any claims about them being more likely to convict. Its conclusions are about how "reasonable doubt" might be better communicated or understood.
I specifically said they agree most of the time. The fact that judges are more likely to convict means it’s a risky option, but says nothing about accuracy of either option.
> It does not substantiate any claims about "reasonable doubt"
There's zero possibility for someone to use a standard they don't understand. Every single case where a jury is confused as to the standard is a case where they aren't using it.
Any suggestion that they follow a standard requires them to both understand the standard and for them to apply it. If 60% understand and 60% of those that understand follow it then 36% are following the standard. (No those numbers shouldn’t be taken as an argument.)
The Kalven-Zeisel study indicates that juries and judges agree on verdicts 78% of the time, suggesting they have similar interpretations of the evidence. When they do disagree, juries are more lenient 19% of the time, indicating they don't 'ignore' the standard of reasonable doubt but may actually err on the side of it. Your supplementary article doesn't prove that juries generally misunderstand 'reasonable doubt' either. So, neither of your sources substantiate your claim that juries 'generally ignore' the standard of reasonable doubt and instead choose to convict when they have "serious doubts".
78% agreement isn’t a sign of one side consistently following the reasonable doubt standard. Juries could beat 50% by picking randomly.
3% of the time a judge disagrees with all 12 members of the jury that someone is innocent. Considering what percentage of defendants are likely guilty that’s a surprisingly high probability.
The 78% agreement rate between judges and juries suggests a shared interpretation of evidence, which indirectly indicates both are following the same 'reasonable doubt' standard.
The 3% disagreement where judges would have convicted doesn't prove juries 'ignore' reasonable doubt; it reflects normal variance in human judgment.
Legal decisions aren't coin tosses—such a high agreement is unlikely if both parties weren't generally applying the same legal standard.
Again, juries that don't know what is meant by this term will use some standard, but reasonable doubt is supposed to have a specific meaning.
I am not suggesting legal results are actually a coin toss, just that this level of agreement doesn't require shared logic, let alone use of the same standard. If you have two people each roll a die that comes up yes 90% of the time and no 10% of the time, they will agree 0.9 * 0.9 + 0.1 * 0.1 = 82% of the time with absolutely zero logic involved. A much lower standard, say 'the preponderance of the evidence', is easily enough to hit 78% agreement, which is really quite low.
Nobody other than yourself is arguing that reasonable doubt is being generally ignored by juries. You either misunderstood the Kalven-Zeisel study that you brought to the argument or you failed to read it.
The study showed that juries had a higher threshold for conviction than judges, which is a direct repudiation of your point that "reasonable doubt is generally ignored by juries. Interviews of post conviction juries show they often convict people who they have serious doubts actually committed the crime." It also explained that similar results to this had been reproduced many times over.
It's not even conclusive that it is purely due to juries having a different interpretation of reasonable doubt as it is explained that "Much more research is needed to map experimentally the differences and similarities between the judgments of judges and juries before concluding that judges are better than juries at specific tasks (e.g., assessing risk) or that deliberations enable juries to outperform judges on other tasks (e.g., assessing conflicting testimony)."
To be clear, your claim that juries are more likely to convict was disproven by your own data and your claim that they ignore "reasonable doubt" is unlikely given that they err on the side of reasonable doubt. The later article you posted doesn't really have anything to say about either of these things, but instead presents better ways to articulate the idea of reasonable doubt to a jury.
I don't find your latest argument about dice rolling persuasive either as (1) it's not an empirical argument, and (2) this oversimplifies the complex dynamics of legal decision-making and doesn't effectively challenge the implications drawn from the Kalven-Zeisel study -- unlike the randomness of dice, legal decisions are derived from a deliberative process grounded in law, evidence, and rational argumentation.
Summarizing the two statements (without commenting on their accuracy or veracity):
(1) A statistical smell test: A random dice roll (under reasonable[?] assumptions) achieves an agreement rate of 82%.
(2) unlike the randomness of dice, legal decisions are derived from a deliberative process grounded in law, evidence, and rational argumentation.
Retric's argument appears to be: for (2) to empirically pass (1), the agreement rate between judges and juries must thus be >>82%. The Kalven-Zeisel study found a rate of 78%.
I'm out of steam on this argument, but making a probability-derived argument based on a cherry-picked agreement rate of 90%, instead of an empirical argument that considers the legal deliberation process and culture that produced this agreement rate, isn't persuasive to me.
Kalven-Zeisel specifically argues "Disagreement rates were no higher when the judge characterized the evidence as difficult than when the judge characterized it as easy, suggesting that the disagreements were not produced by the jury’s inability to understand the evidence." It seems impossible to mathematically tease out how much of the agreement comes from a shared understanding of the reasonable doubt standard vs other facts from the case -- either way, the idea that juries generally ignore reasonable doubt and err towards convicting people is not evidenced.
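For what it's worth, here is a minimal sketch of the agreement-by-chance arithmetic from the dice analogy above. It assumes, purely for illustration (not as a model of real deliberation), that judge and jury decide independently with the same conviction rate:

```python
# Probability that two independent deciders agree by chance alone, each
# convicting with probability p. This reproduces the 0.9 * 0.9 + 0.1 * 0.1
# example above; it is an illustrative baseline, not a model of real courts.

def chance_agreement(p: float) -> float:
    """Chance that two independent deciders agree, each convicting with probability p."""
    return p * p + (1 - p) * (1 - p)

for p in (0.5, 0.7, 0.9):
    print(f"conviction rate {p:.0%}: agreement by chance {chance_agreement(p):.0%}")

# Compare with the 78% judge/jury agreement reported in the Kalven-Zeisel survey.
# The baseline only shows that 78% is compatible with weak coupling; it cannot
# say which standard, if any, either side was actually applying.
```

At a 90% conviction rate the chance baseline is 82%, which is the figure being weighed against the observed 78% in the exchange above.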
>And if the digital trail of the mobile device were corroborated, say by you appearing on CCTV at a location matching the GPS at some point, or by chatting to somebody on the phone, it would further undermine the insinuation that you didn't have the device during this time period.
I see no advantage to sharing the data in the cloud, vs just storing it locally in your phone.
You can still use it for your own benefit, while preventing it from being used against you.
> I see no advantage to sharing the data in the cloud, vs just storing
> it locally in your phone.
You have to look at this from the perspective of the reliability and integrity of digital evidence.
If the data is stored locally it could be argued that it's possible that you have tampered with it, particularly since you possess the skill set necessary to do so. On the other hand, if it is stored on the cloud, it's likely that you won't have the level of access required to tamper with the data, and that there will be an auditable transaction history with information about the identity of anybody that made changes. Google will be able to provide information to the court on the level of security that applies to the data -- in fact, data from Google services is already known to be legally admissible assuming it passes relevance, reliability and integrity tests.
Of course, you might be able to create something similar locally, but it's more complicated and untested: you'd need to be able to prove that you don't have the ability to retroactively amend the data (write access, encryption keys, etc).
I agree that it would be nicer if the digital evidence weren't accessible by others while still being reliable and integrity-protected. Maybe an option would be to encrypt it with your public key and write it into a public blockchain?
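To sketch what that last suggestion might look like, below is a minimal, stdlib-only illustration of a tamper-evident local location log built as a hash chain. The record fields and the idea of periodically publishing the head hash somewhere timestamped (a public blockchain, for instance) are assumptions for illustration, not a description of any existing product.

```python
# Minimal sketch (illustrative assumptions, not a real product): keep location
# records in a hash-chained, append-only log and periodically publish the
# latest chain head somewhere timestamped that you cannot quietly rewrite
# (e.g. a public blockchain). Retroactively editing any record changes every
# subsequent hash and no longer matches the published head.

import hashlib
import json
from datetime import datetime, timezone

GENESIS = "0" * 64  # "previous hash" used by the very first record


def record_hash(body: dict) -> str:
    """Hash a record body deterministically (sorted keys, stable separators)."""
    payload = json.dumps(body, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()


def append_location(log: list, lat: float, lon: float) -> dict:
    """Append a location record that commits to the previous record's hash."""
    prev = log[-1]["hash"] if log else GENESIS
    body = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "lat": lat,
        "lon": lon,
        "prev": prev,
    }
    record = dict(body, hash=record_hash(body))
    log.append(record)
    return record


def verify_chain(log: list) -> bool:
    """Check that no record has been altered or reordered since it was written."""
    prev = GENESIS
    for record in log:
        body = {k: v for k, v in record.items() if k != "hash"}
        if record["prev"] != prev or record_hash(body) != record["hash"]:
            return False
        prev = record["hash"]
    return True


if __name__ == "__main__":
    log = []
    append_location(log, 52.5200, 13.4050)
    append_location(log, 52.5206, 13.4094)
    print("chain intact:", verify_chain(log))
    print("publish this head hash periodically:", log[-1]["hash"])
```

Publishing only the head hash keeps the locations themselves private; the public-key encryption step suggested above could sit on top of this without changing the integrity argument.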
>On the other hand, if it is stored on the cloud, it's likely that you won't have the level of access required to tamper with the data...
Location data can easily be fabricated. This has been discussed extensively in other subthreads. Thus insufficient by itself.
However, it can be corroborated with e.g. cameras along the path if need be, providing an alibi. By having control of the data, rather than giving it up to the cloud, it can be used in this way, without allowing it to be used against you in the "location data says you were near the scene of the crime" scenario.
True. We are getting into territory that might also consider: what type of mobile device do you use, has it been jailbroken, and are there signs of GPS spoofing in the location history?
But, in general, if you have a digital trail which is corroborated by other digital or real-life trails, you have something which does carry weight as evidence.
That would be true if it were an option. Google hasn't implemented such a feature, and third-party apps offer different features and are probably generally worse, especially considering how Android blocks background activity in low-battery situations.
So? Much of the evidence in most criminal trials is circumstantial. For example fingerprints and DNA are circumstantial evidence.
Direct evidence is evidence that directly links a person to a crime, which for the most part means testimony of a person who claims to have personal knowledge of the crime. Evidence that requires making inferences to get from the evidence to the crime is circumstantial.
A good non-criminal example of direct vs. circumstantial evidence would be if you were trying to prove that it snowed last night and you found someone who said that they woke up at 3 AM and looked out a window and saw it snowing. That would be direct evidence because you have someone who claims to have personal knowledge that what you are trying to prove happened actually happened.
If instead of a witness who actually saw the snowfall you had a person who testified that there was no snow on the ground when they went to bed but that when they woke up in the morning there was snow, that would be circumstantial evidence.
From that testimony you do have direct evidence that there was snow on the ground in the morning, but you have to make inferences if you are trying to prove that snow fell during the night and so that evidence is circumstantial for proving that.
It's not unheard of for a confused old lady to insist it's you in the lineup, even though you volunteered to be part of the lineup to help out the police.
Then the police start questioning you and it turns out you don't happen to have a supportable alibi.
The reason you were chosen for the lineup is that you looked a bit like the suspect they were originally investigating.
Same goes for geolocation sweeps - you already fit part of the profile - you were there - by definition your data doesn't help you. Add that to some other circumstantial thing and the police start wondering if they got lucky.
Except when... imagine you were helping a friend do something perfectly normal but temporarily considered as a crime... the downsides are just too great to ignore.
Unless you happened to be physically located near a crime, in which case your location data could be falsely used to accuse you. And you'd better hope there aren't other circumstantial bits of evidence which makes it look like you're actually guilty.
Yes. But it can also prove you were near a crime you did not commit and thus used as evidence you did commit it.
The problem is you don’t have control over the data.
Example: somebody was killed in a hit and run. The PD says, "Give me all the people who were at this location at this time in a car." Your name is the only one that shows up. You just became the top suspect. And after the investigation, a homeless guy who was given a hot meal in return for looking at some photos indicates he saw you do the hit and run.
Good job. Your data locked you up in jail for 10 years. Hope Google is logging how many times you dropped the soap.
Well argued! However, consider that any evidence, Google data or not, has the problem that it might be held against an innocent person.
Any evidence has to be considered with caution. Basing a conviction on, let's say location only, is not passing the reasonable doubt test.
The problem I see is when the justice system becomes 'lazy' and doesn't look beyond automatically collected data. The non-auto-collected evidence is harder to get, hence the result (from the total set of evidence) could get heavily biased.
Google phone location doesn't work as an alibi in court, you'd need people to verify your location. In court, it only proves your phone was there not that you were so it's thrown out as an alibi. For police, they use it to generate leads for investigation not as evidence in court to prove someone they charged was at a location.
Even if the phone was completely stationary and you made no recorded interactions/transactions at all with it, it wouldn't get "thrown out as an alibi". This evidence would get considered with regards to other corroborating evidence that you provide.
Obviously, if you picked up the phone and chatted to your wife, or used it to buy something at the local store, it validates your location data even further.
Perhaps you might have to prove it was you who was carrying the phone at the date(s) and time(s) in question.
Obviously if Google or its subsidiaries are your adversaries, then you might not get the data you need from the source. For example if the company or its subsidiaries are negatively affected by the alleged crime, or they just don't like you for any reason, they might even be the ones who report it.
Why not let your government track your location history for a decade. That way, it will know that you did not commit any crimes. That is how silly this sounds to me.
By law and by agreement, Google is actually less accountable to you than your government (assuming you are American).
Or, you know, the criminal committing the crime without a phone, and OP walking past the wrong place and the wrong time. A lazy police officer making a geofenced information request to Google and getting only one result...
As I plotted the perfect murder, I noticed that extr often walked past my victim's house at 9 at night on a relaxing evening stroll, listening to music on their iPhone. I decided to give the police a lead too good to pass up!
It's a shame Agatha Christie isn't still alive today. No doubt she'd have thought up some marvelous plots involving smartphones and, I dare say, the internet and cyber crime. :-)
The apathy that social media demonstrated (faked by censorship) after the Snowden revelations did more damage to personal freedom than any other single incident in history.
Now everyone that respects privacy feels like they're alone. And all the broken people have a target to bash, to feel better about themselves. I'm talking about the folks using "I don't need privacy, because I have nothing to hide (this means I'm better than you)" to avoid facing their personal problems.
The main problem of the situation we're finding ourselves in is this: How can you dispute information obtained this way?
Reporting this: https://www.militarytimes.com/off-duty/military-culture/2021...
...is what gets you arrested for child porn possession. This man was snatched from his home without a warrant. The last remaining journalists still working in this environment took note. Make no mistake. The original story from Rolling Stone magazine has been removed. Of course, the people who followed the story closely looked the other way in fear of being accused of supporting pedophilia. There was no outrage.
> I'm talking about the folks using "I don't need privacy, because I have nothing to hide (this means I'm better than you)" to avoid facing their personal problems.
As between someone who says I really don’t care if they know I use Viagra, wear size 13 shoes and like guacamole, and someone worried about a knock on the door for child porn, I can tell you who probably needs to spend some time with a professional on their personal problems.
The same right wing waste of sperm who go around crying for freedom of speech because they can't vomit their hate on blacks and others did nothing in response to the Snowden revelations
Please don't post like this here, regardless of how wrong people are or you feel they are. It's against the site guidelines because we're trying for a different sort of discussion: https://news.ycombinator.com/newsguidelines.html.
You may not owe wastes of sperm better, but you owe this community better if you're participating in it.
(Edit: I appreciate that you downregulated this in your replies to other users - I'm sure that prevented the thread from turning hellish.)
Hey dang, as I said, I made a mistake by pointing a finger at a single faction, but I was not flaming. It's a very upper-class point of view to treat everything that looks unpleasant as aggressive/bad/flamebait, because it takes a certain classy detachment not to empathise with human rights abuses, racism, etc., and wealth redistribution. Some people can empathise and feel the anger, and not keep calm when talking about clear violations, without necessarily being trolls or flamers. I was not flaming; I was genuinely hoping that that cross-faction segment of people would realise how crap they are.
But yeah, if you can keep your distance and calmly express contempt for some people, go ahead; there were a lot of 'reasonable' people in the ignavi circle of Dante's Inferno, as I see it as an Italian.
I don't know, but to me it looks weird to talk nicely in an upper-class way: "hello my dear, have you by any chance read about today's violation of human rights? Have you got a calm opinion on it?"
I believe you that it wasn't your intent, but the definition of flamebait isn't about intent, it's about effects [1]: what effect is a post likely to have? what subthread is it likely to generate? That's what determines the expected value of a post [2]. If you post things like "right wing waste of sperm", "vomit their hate", and so on, the likely outcome is a flamewar whether you intended it or not—and you're responsible for it whether you intended it or not. (And this case was not a borderline call!)
But you're also making a good point in your reply and I don't want you to feel like it hasn't been heard—especially because I think we mostly agree.
Emotions are important and need to be given a place. HN's rules are not intended to constrain everybody into a starched, repressed state, nor to exclude anyone for bringing up something unpleasant or something they feel strongly about. That would end in a kind of heat death. We're not interested in that. (And I agree that there's a class component to this.)
The question is how to express emotions in a way that leads to more curious, respectful conversation and not to a flamewar in which people just get mad and rush to defend their side and bash the other side. This is an open question—it's not something that we've figured out yet as a community or a culture or in terms of moderation. But FWIW, here's my current take.
It's much better if you express your emotion explicitly as something you are feeling. If you directly share what you're actually feeling, that will make your comment more interesting to other people and make it less likely to come across as an attack.
Many people (including me) have a hard time doing this, so instead of saying openly how we feel, we encode our feeling into a statement about a thing, an idea, a person or group—and then we bring our emotion in through the 'back door', by packing as much intensity as we can into our language. Sound familiar?
This feels safer because it's less vulnerable up front, but it's a recipe for internet disaster because people will get triggered by your intensity rather than hearing your actual feeling—and to be fair, it's not their fault, because you didn't really give them a chance.
To take a blunt and vulgar example, if I tell someone "you're a piece of shit", or even if I say about a third person, "they're such a piece of shit", I'm using intense language that is clearly coming out of my emotions, but I'm not actually sharing what I feel or why I feel it—I'm withholding the parts that have to do with me. If I could actually say what I'm feeling and maybe share some relevant experiences, then I would not be attacking anybody, and I'd be giving people information that maybe they could respond to on a more human level.
I actually hate what I'm saying here because personally all of my conditioning runs in the opposite direction—I don't want to relate to anybody, I don't want to share myself, and I certainly don't want to talk about my emotions. That's probably why I come to the internet in the first place, to get away from all of that; and to put it mildly, I'm far from alone in this. But as a community, I don't think we have much choice, because the only other options are "exclude all emotion" or "just vent", and neither of those works—at least not for HN.
This is one of those cases where it's super helpful to have a single variable that you're optimizing for and can follow wherever it leads. On HN, we're lucky to be in just such a situation: we're optimizing for curiosity and nothing else [3]. How does that relate to the above? Very directly: conversation gets more interesting when people share themselves, rather than just their opinions. This is better for curiosity than talking about who is a "waste of sperm" (to use your phrase) or a "piece of shit" (to use mine). So this is the direction we need to move, even though some of us might prefer not to.
Anger is a valid emotion, but it does not advance the conversation or gain you adherents to your cause. Here, it will almost certainly do the opposite.
Just to give you an idea, you could have easily undermined the argument that the reaction to Snowden's revelations was a societal shrug.
No, mine was not anger; it was a display of contempt (English is not my first language, I translated "disprezzo") towards that part of society that advocates for principles only when something shrinks what they think are their rights, but turns the other way when it negatively affects people they don't consider people.
Right now, who do you think looks more ridiculous?
a) Your strawman, who is apparently comfortable without privacy from the government, but does support free speech so they can be racist?
b) The guy who replies to a comment advocating for privacy by derailing the discussion with an inflammatory post, seemingly forgetting that the Patriot Act had bipartisan support, and, in an ironic twist, also presents an anti-free-speech position which demonstrates that they themselves lack the integrity to defend civil liberties.
Am I missing something here? Wouldn't most reasonably smart criminals leave their smartphones switched on at home before they go on a job? 'Switched on' at home would be an alibi if later questioned. Alternatively, at least switching to airplane mode or switching the phone off (but that could indicate intent).
It seems to me then that only the stupid would get caught this way. That raises the question of how much longer this method will be effective when even the stupid get wind of the fact that their phones will betray them.
From the unsolved crimes mentioned it seems the smart have caught on already. I wonder if there are any stats on whether crooks are actively aware of the dangers of carrying a phone. If so, then is there any trend downward in solving crimes as the knowledge of the fact spreads?
"Only the stupid".. what about " the desperate" instead?
I believe many may engage in illegal activities because their situation (be it financial or otherwise) is so dire that they themselves see no other option (even if there are options).
Most petty crime is not orchestrated by some super intelligent super criminal like depicted in a heist movie.
Note that desperation does not preclude a degree of planning, nor does intelligence rule out desperation.
Considering how long phone tracking has been part of movies about crime, I imagine "turn off your phone" would be in the same category as "don't leave blood stains", "don't fire a traceable fire-arm", etc.
... But maybe we should try to avoid accidentally making this thread read like a how-to for committing crime.
I agree, funnily enough though a period where the phone is turned off may be used to prosecute if there is otherwise a consistent location history.
Planning aside, it's actually a hard challenge so I still personally think most criminals probably don't intelligently overcome it.
Agreed, but it's curious it's still so effective. Perhaps much of the population thinks differently to many posting here and law enforcement relies on this.
"Most petty crime is not orchestrated by some super intelligent super criminal like depicted in a heist movie."
Right, that's the problem, it's why strict safeguards have to apply. Given half a chance law enforcement would welcome a real-time connection to Google's location data. If it ever reaches that stage then we'd be in Orwell's world.
Stealing bread to feed your starving family is one thing, planning a heist doesn’t feel like the same thing. Isn’t shoplifting not really the kind of case for which Google data would be requested?
I've seen a few stories of murders where the "smart" criminal turned their phone off during the exact window the crime took place. That and being seen at the scene of the crime was used as evidence.
So the intent part is definitely a thing if they simply turn off their phone.
Well, you can see in the current Ukraine vs. Russia war how both sides have lost soldiers for using mobile phones. People tend to forget they carry a device that might hurt them in the end.
I'm surprised soldiers are actually allowed to carry mobiles whilst on active duty.
It's a long time since I did any military stuff and it was before smartphones but I distinctly remember the incessant lectures about carrying unnecessary items that would identify one or one's mission. By those standards carrying a smartphone would definitely be outlawed and if caught one would be up on a charge.
Edit: it seems to me that back then that had smartphones been around they'd have been classified as radio equipment and there were very strict rules and procedures about 'radio silence'.
Wasn't ever part of any military, but as far as I understand, having something to kill time on while there's almost nothing happening for hours and days on end is one of the main uses of having mobile phones in the trench-lines of Southern Ukraine. Also, to keep in almost constant touch with loved ones back home; I can imagine that has to be good for morale when you don't know if you're going to be still standing the next day.
And as far as front-line battles go, the amount of cheap drones being employed by both sides makes "identification by mobile-phone signal" pretty moot, if you're close to the front-line the other side knows exactly where you're located long before checking with your mobile-phone tower or whatever. I agree though, use of mobile phones is still a pretty good location giveaway when there's troop concentrations just at the back of the front, where it's more difficult to employ drones.
Also, this is the first protracted war fought between near-peer adversaries since the advent of the "IT revolution" (or whatever we want to call the last 30 or so years), there's lots of things that militaries are now learning when it comes to adapting said IT/tech to the battle-fronts and to the soldiers themselves.
Drones can spot camouflaged positions only from some closer distance, certainly less than 1000m.
Mobile phone signals can be pinpointed from further away, 10km is easily possible, and the cellular network means that you have a network of antennas in a tighter grid than that. And beyond switching off your phone, there is no camouflaging cell phone signals.
So even if you then employ the drones e.g. to aim and correct the artillery (300m precision from cellphone tower cross location isn't precise enough), cell phones will quickly tell you where to start looking with your drones.
> Drones can spot camouflaged positions only from some closer distance, certainly less than 1000m.
You should check some of the war videos from Ukraine, there's lots and lots of drones up in the sky, it's the classic "quantity that has a quality of its own". Some events/attacks/actions get filmed with 3 or more drones, I remember several such events where there were three different POVs coming from those drones.
I'm tempted to say that camouflage has lost most of its value on a battlefield like that of Southern Ukraine. Even though that would probably be wrong, seeing individual soldiers chased by artillery (mortars, mostly) under heavy foliage, while those soldiers are still wearing adequate camouflage, makes me have second thoughts about the relevance of said camouflage.
More generally, I've yet to see a serious discussion coming from the tech circles about how cheap drones have transformed warfare. Eric Schmidt had tried something like that earlier this year in a Foreign Affairs piece [1], but unfortunately the article wasn't that smart, too many platitudes. In his defence, he was writing at the start of the year, for example this "prediction" of his:
> Marines in urban warfare, for example, could be accompanied by microdrones that serve as their eyes and ears.
has already been implemented by both sides (UA and RU) during this past summer (replace "urban" with the fields around Rabotyne or Bakhmut).
Those videos are only published in cases where there is something interesting and propaganda-wise useful to show. Mostly a success of the publishing party. What you do not see are the hours and hours of flying along tree lines hoping to spot something. For every soldier you see being chased by a drone, you don't see the dozens overlooked by that same drone.
Camouflage is never panacea, but useful camouflage exists, even today. I've (almost) stepped on comrades in an exercise that were hidden well enough to not see them from 3m in broad daylight. But for things to work that well, you need to know what makes humans detect their prey. Movement is a dead giveaway, so as soon as you are running you are toast anyways, and only sheer luck might save you. Form and contour is another problem, something human-shaped and green will stand out, even among other green things. See something rectangular in the forest? Bomb it, rectangular trees are rare. Texture is another giveaway, things looking smooth, regular or out-of-place compared to nearby textures. That's why your camouflaged tank will look like a bush or pile of leaves when properly camouflaged.
All this means that of course you have to go beyond camouflage paint and camouflage clothes. Both are a start, and will help to a point, as a base layer, and for great viewing distances. But even before drones, proper camouflage also involves covering yourself and objects in leaves, branches, nets, parking below trees, etc.
Another interesting point about Ukraine war videos: What you also usually do not see are videos of camouflage being broken by thermal imaging. Both sides have thermal equipment, but I guess are reluctant to point their enemy at how much or how little of it they have, and how vulnerable or not vulnerable the others' camouflage techniques are to it.
Also, smaller drones have a limited reach. To get at the juicy targets (logistics, headquarters) way in the back, you need more expensive and rare long-range drones. So I guess the drone density will sharply drop off beyond the first lines of trenches.
"What you do not see are the hours and hours of flying along tree lines hoping to spot something. For every soldier you see being chased by a drone, you don't see the dozens overlooked by that same drone."
Agreed, and I also agree with your other comments. But those limits (luckily for troops on the ground) exist because making detection better and more sensitive is expensive, and this has to be traded off against the loss of drones and the risk of downed or crashed ones being reverse-engineered to determine their capability. Also, providing countermeasures such as thermiting the electronics when in trouble or downed not only adds precious weight but also increases costs significantly. Moreover, not only is detection sensitivity a tradeoff, but the tech one gets often depends on what's actually available at that instant. Next month, next week or even the next encounter the tech might have improved.
As I said in my reply to the other comment, I'm not up to full speed on the tech being used in drones in Ukraine, but over 20 years ago I was working on tech that could easily identify many targets at once. What's happened since is that this tech has become much, much more miniaturized and a great deal more sensitive, not to mention much cheaper.
Right, seriously advanced detection tech is already here, as its development was already underway before the conflict for both industrial and military purposes. It's now just a matter of packaging it to be suitable for drones, and you can bet London to a brick that there are dozens working to better integrate the different technologies.
Combine it with new miniature camera tech, new LIDAR and back-end (AI) processing and it's phenomenal what can be achieved nowadays. I'd venture that if this conflict continues to drag on then you'll likely see a huge and very worrying change in drone capability.
"Camouflage is never panacea, but useful camouflage exists, even today. I've (almost) stepped on comrades in an exercise that were hidden well enough to not see them from 3m in broad daylight."
Agreed. Whilst it was long ago, I've not forgotten going A-over-T after tripping over another soldier's legs during war games and being ruled 'dead' by the umpire, thus no longer a participant, because camouflage worked. That said, back then one of camouflage's giveaways was the face. We were trained to look for eyes, which was effective (if close enough) because unless facial camouflage was applied properly the eyes stood out; since most soldiers applied it to themselves and weren't much good at incorporating the eyes in ways that created a deceptive image, it was a known weakness. Moreover, the facial camouflage took a lot of removing, so there was resistance to applying enough of it.
Again, I'm not up to date on how facial camouflage has improved or is applied today but I'd be almost certain it wouldn't stand scrutiny from the best electronic systems. It'd be a real worry if one side had sophisticated tech in their helmets and the other didn't.
"...makes me have second thoughts about the relevance of said camouflage."
I can't say that I'm fully up on the latest cameras used in drones but I've been employed on the engineering side of video tech and surveillance and it's clear there's no end of the trickery that modern cameras can get up to. Modern sensors can easily respond to both IR and UV and with the right filtering can be made very selective in what they see.
Combine this with front-end motion detection and back-end processing and one has a really powerful detection system. I'd reckon with this tech that traditional camouflage would be nigh on useless. Moreover, if the drone were coupled by link to 'home' and very powerful back-end processing with AI, then the game would be up for troops on the ground (it seems to me these guys are either extremely patriotic or unfamiliar with what the tech will do). That's not the end of it either: there's also LIDAR that will see through much camouflage; combine it with those camera sensors and anyone on a battlefield would stick out like the proverbial.
Incidentally, this type of detection goes back a long way so it's now well developed tech. Film was made IR-sensitive before WWII and during the War Kodak perfected a false-color IR reversal film for detection of camouflage, etc. As a keen photographer I used to use a later 35mm version of this film and it was surprisingly good at emphasizing objects that weren't highly visible with normal color film.
"...having something to kill time on while there's almost nothing happening for hours and days on end is one of the main uses of having mobile phones in the trench-lines of Southern Ukraine."
That's understandable, but for as long as war has been conducted soldiers' lives have always been long stretches of utter boredom followed by short episodes of horror, death and destruction. It's always been this way, there are many accounts of where waiting in anticipation have seemed worse than the conflict. Managing armies in quiet periods has always occupied commanders' attention, it's why, say, brothels were allowed near the Western Front by high command in WWI despite the risk of VD (which was often a chargeable offense).
"Also, to keep in almost constant touch with loved ones back home; I can imagine that has to be good for morale."
Reckon it definitely would be, but from what is being said about consequential deaths one wonders if it's too much of a luxury. There's no doubt that decades ago that a luxury of this type on the battlefield (even if it had been available) would have been unthinkable as commanders would have deemed it a strategic danger—giving away a platoon or battalion's position, etc. I can't see why something as fundamental as weakening strategy would have changed nowadays but then I'm not a commander. (Incidentally, there's any number of references to such strategies but there's two that stand out: Sun Tzu and von Clausewitz, I doubt there's a commander in the world that hasn't studied them—BTW, I'm not suggesting for a moment you read them.)
Perhaps, these days, with the abandonment of conscription in many countries, militaries have made a pragmatic decision to allow smartphones as an enticement for people to join the military. I've not followed this recently (as I once would have), if so, then I'd reckon it's a nasty callous tradeoff.
Re your last point, it's my understanding that troops in Iraq and Afghanistan also carried phones but I'm unfamiliar with the rules that would have applied.
Most often, when people engage in petty crime, they are not engaging in smart behavior overall. Having a phone with them is just one of multiple stupid decisions made that week. They are used to having their smartphones with them all the time, so they do not think about them at all.
Another pretty common way of catching you is that you brag to friends or family about the crime and one of them tells the cops.
Accepted, that's the stupid class. And no doubt such detection would be very useful in solving serious but unintended crime such as a hit-and-run accident.
Answered this in another comment here, but can repeat. Phone location is not a valid alibi in court. As you pointed out with your example here, it only proves your phone was at a location not that you were so it isn't accepted as an alibi. Police use phone location to generate leads not as evidence in court to prove someone they charged was at a location. They need better evidence than that in court to prove someone committed a crime.
Not every burglar is a "cat burglar". Many of them respond to crimes of opportunity, where they could not have foreseen the need to leave their phone on but at home.
Of course, this investigation method is going to decrease in quality. Only going to take a few popular songs mentioning leaving Netflix on at home before this method only catches the innocent and amateur criminals.
I was on a jury back in 2002 where the prosecutor was presenting cell tower data to show the suspect was in the area. We are now 20 years on from then, and people are still committing crimes with their phones on, and the phone is even more a part of daily life now.
Google also tracks your app interactions, so leaving your phone at home, switched on but not showing any interactions, is possibly telling the authorities something.
Just like opening letters or executing search warrants, as long as the data is accessed via a court order, which has audit and transparency attached, it should be OK.
What's not allowed is indiscriminate vacuuming of data for analysis to see if "crimes" have been committed.
How is this different from the fact that they can toss your house with a court order, if necessary?
Yes, search warrants generally limit themselves more than that, but a warrant is a warrant and the authorities can search anything they need to that's reasonably connected to what they have probable cause to suspect. How is your digital life any different from your physical life in that regard?
Police and other LE already have to go to a judge and be "very specific" to be able to search anywhere you have a reasonable expectation of privacy.
The number of houses that can be searched is limited.
Actual police officers or detectives need to visit said house, spend time on-prem, collect, and analyze evidence. A single site can be accessed at a time.
Digital media are searched automatically, by hardware and processes, with many potential targets searched simultaneously and at vanishingly small cost.
Physical evidence and print media are far slower to access and assess than digital media are.
Physical evidence and print media have vastly lower data density than digital media do. Both temporal and spatial data resolution is far lower for the former. At the same time, provenance, integrity, and chain-of-possession over physical evidence media are also generally easier to ensure and assess, and both are generally less susceptible to either fabrication or modification.
(The point here isn't that physical media cannot be tampered with. It's that digital media can be tampered with absolutely no evidence of this on the media itself. Typically some external validation or integrity check is necessary. Digital media can be entirely fabricated, all the more so now with AI and generator technologies, or can be entirely destroyed without a trace.)
Digital media can be stored at volumes and for durations which would be utterly impossible with physical evidence.
A 1 TB storage device, which can fit on your pinkie nail, can hold the equivalent of ~200,000 books, roughly the size of a fairly substantial public library. The entire Library of Congress book collection, 140 million volumes, assuming 5 MB each, would fit in 700 TB of storage. With currently-available 20 TB drives, that's about 35 disks, which would fit within a few standard rack units (mostly depending on the size of disk you'd opted for). The total cost would be a few thousand dollars.
Digital media, particularly in the form of SaaS providers and Cloud storage, is an absolute trove of information on hundreds of millions or billions of people.
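As a quick sanity check of the storage arithmetic above, using the same assumed 5 MB per book and 20 TB per drive:

```python
# Back-of-the-envelope check of the figures above. 5 MB per book and 20 TB per
# drive are the same illustrative assumptions used in the comment, not measurements.

MB = 10**6
TB = 10**12

book_size = 5 * MB
loc_volumes = 140_000_000  # Library of Congress book collection, per the comment

print("books per 1 TB device:", TB // book_size)                    # ~200,000
print("Library of Congress, TB:", loc_volumes * book_size / TB)     # ~700
print("20 TB drives needed:", loc_volumes * book_size / (20 * TB))  # ~35
```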
When they "toss your house" you are generally on notice that a warrant has been obtained. When they get data from Google, does Google notify you?
(NB. Correct me if I am wrong but in the case of Google, unless it's a search of Google's premises, it's not a warrant, it's a subpoena. Without being notified of a subpoena there's no opportunity for you to challenge it.)
The effects of a no-notice warrant remain profoundly different for a physical location, in which it's obvious that a search has occurred, and a digital search, in which it is not.
My personal biggest fear is that I feel like my search history is full of curious things than can be misconstrued in court.
I’ve always been fascinated by red-teaming things, and a very curious person in general. For example, I took some advanced chemistry classes and wanted to look up how hard it would be to synthesize RDX one day.
I feel like that could be used against me in the wrong situation. My computer is full of things that only exist in my mind, and I feel like that’s too personal for something like a jury to handle.
This is different because they are physically limited in the number of houses they can toss per day, plus said activity is far more observable to its targets.
The number of warrants that can be served has nothing to do with the price of beer. Either a judge has approved the warrant based on probable cause presented to him/her by the police, or they haven't.
If the police could hypothetically toss every house in the country twice a day and once on Sundays, that has precisely nothing to do with anything. So long as the only houses they DO toss are ones where a judge has reviewed the evidence, and determined that there is probable cause that tossing the house will turn up fruits or instrumentalities of a crime. Capability to do a thing has nothing to do with approval to do a thing under the law or ethics. I'm fully capable of being a (very bad) male prostitute, but I refrain from doing that for many obvious moral and legal reasons.
> So long as the only houses they DO toss are ones where a judge has reviewed the evidence, and determined that there is probable cause that tossing the house will turn up fruits
And to take a measure of the creepiness, you just have to remind yourself what the non-digital equivalent of what Google is doing would be. You would be followed all day long by a guy who logs in his notebook every move you make, everything you say, who you meet and what they tell you; he would follow you into your doctor's office or to the coffee machine where you are flirting with a colleague.
Anyone would go crazy with being spied on like that, it's worse than what the Stasi could ever do. And of course that notebook would be a treasure trove way more sensitive than a letter you would have written.
Are you concerned that it advantages Google to have a cozy relationship with authorities everywhere? Do you think being friends with police big and small everywhere is powerful? Can politicians even offer "Do your job" to the same extent?
Reasonable take. Where I'd perhaps disagree: 90% of the Western world uses Google for search, a mobile OS or whatnot - is it reasonable to presume they all read the ToS(s) and understand the consequences, or do they really have a choice in the matter?
e.g. search engine choice, with Google being scrutinised over a possible monopoly position.
This is very inefficient. There was never a stronger case for defunding the police. Google can arrest individuals with at least 79% better efficiency. Just needs a catchy name, something like GJail
I know we can't know, but is having location history off enough? I still share my location with a few friends and family.
I guess I have to trust that it's enough, since I already can't trust that Google isn't using some other channel to continuously monitor and record my location.
I believe the only way is to not have the phone on you.
There are many ways to track you other than GPS.
Your phone might be collecting data even while it's in airplane mode and broadcast it once airplane mode is off.
I use Bing for porn. For reverse image search, I use Yandex. And for researching psychedelics and conspiracy (censored) topics, I use Brave Search.
I use Google only for vanilla searches that show I am a good citizen (navigation, online shopping, general research on movies and sports, etc.). I don't want my search identity to be fully profiled by Google. Police (and other bad actors like a hostile government) are more likely to access my data on Google than on another, lesser-known search engine.
semi-related: I do get a jolly deep belly laugh when "a threat to national security" is loosely tossed out as doublespeak for "an augment to individual privacy".
Without a VPN or tor on at least some of those browsers, though, I imagine the metadata strewn about could get stitched back together on the data-broker (ad-tech) end of all those transactions.
This is why RMS ssh's into a Pine server to read email, I reckon.
> ... what assurances do you have they would not comply ...
While compliance is certainly not optional, the value of your compliance is highly dependent on your technology. (The authors mention this in the article by writing that Apple does not collect the same kind of data as Google.)
I think it is like choosing VPN providers. You never know if they were involved in compromising their customers until they are caught compromising their customers.
So if you have a choice between two similar VPN providers, and one of them is known to be involved in compromising their customers, you use the other one.
I do not see how this would be any different with search.
I quit using DDG because when it ran out of answers based on what I searched for, it would spam results of 'random word in search query + city/area based on IP address'. Which was not only completely useless, but creepy as fuck.
If you use google (the search engine) without being signed in and without sharing your location data, I doubt they could tie down any relevant information to you.