When I was a kid growing up in Texas, our ocean visits were to the Gulf of Mexico off the Texas coast, and you'd grab little alcohol wipes when you got out of the water to wipe the oil off.
Years later, swimming in Hawaii, I found myself looking for wipes. I mentioned it to a snorkel-outfit operator, and she looked at me like I was insane. People there didn't even put reef-damaging sunscreen in the water, and there was no expectation of little one-to-two-inch sticky spots of oil.
The good old days of the '80s, when we swam in oceans filled with slow-motion natural disasters. I wonder how much of it was place (Hawaiians seem to have a stronger relationship with the land and nature surrounding them) and how much of it was time (20 years later).
Crude oil floating in the ocean used to be a big nuisance in parts of California. It is a natural phenomenon, created by oil deposits on the ocean floor leaking into the environment. Santa Barbara was particularly famous for it.
Extraction of that oil via commercial wells greatly reduced the natural seepage, which is why there is so little crude oil floating in that ocean water today. Oil drilling actually made the water cleaner.
Per NOAA and USGS, ~20 million liters of crude oil naturally seeps into that part of the California ocean each year. That is more crude oil each year than the worst oil spill in California history[0].
You are projecting your biases. There was no "drilling is good for the environment" narrative. I was recounting an interesting fact about the environment there.
Many of these seeps are under considerable pressure, as there is substantial natural gas mixed in. The seepage rate of each has been mapped and studied for many decades, and it has long been observed that the introduction of drilling appears to have substantially reduced the seepage rate at many of these underwater sites. Drilling wells significantly reduces natural pressure in these reservoirs, likely explaining the observed reductions in seepage.
For it to be a "narrative", there would need to be an additional claim that this specific case and context, which is factual, generalizes to most unrelated cases. That is not in evidence. Thinking that this was an attempt to create a narrative is a failure of reading comprehension.
This insistence that acknowledgement of facts has an ideological narrative is a pernicious strain of anti-science thinking.
"This insistence that acknowledgement of facts has an ideological narrative is a pernicious strain of anti-science thinking."
That is very well put. It should be added to the general list of argumentative fallacies, and, like the other ones (the slippery slope, hasty generalization, post hoc ergo propter hoc, etc.), it deserves broader general awareness.
The current wave of anti-science, anti-logic, rejection of objective data, etc. is like nothing I've experienced in my lifetime. This is a subjective observation, maybe it has always been this way and I never paid attention because I was caught up in whatever I used to be caught up in.
To this day if you walk on the beach your soles or the soles of your shoes will get sticky tar spots. You need baby oil wipes to clean them up before entering your home.
And some of it, if not most of it, is not natural seepage but residue from early environmental catastrophes in the '50s and '60s, particularly around Summerland.
For what it's worth you still need the alcohol wipes (mineral oil works well too) when swimming off the coast of Santa Barbara. It's naturally occurring oil that gets all over your feet in little annoying sticky spots.
Yeah, same for the Gulf Coast: oil just seeps right out of the ground at some beaches or at some times. There's plenty of man-made pollution to go around, though.
Growing up on the Atlantic coast of Florida, we kept a can of Renuzit solvent in the garage to wipe tar spots off our feet after coming home from the beach. I'm sure that stuff was toxic. The tar was everywhere for a few weeks, then gone for a while.
Hawaii has other problems. When I lived there, I went through a lot of Neosporin because every scrape you get from a reef pushes in bacteria that got into the ocean from the leaking sewer pipes.
Half-ish (don't get hung up on exactness; the figures are at least of similar orders of magnitude) of the oil that makes its way into the ocean is natural. That is, it leaks out of the ground into the water with no human activity involved at all. Obviously, enormous anthropogenic oil spills make this a very spiky statistic one way or the other.
Oil production and natural oil seepage happen in the Gulf of Mexico because there's oil there; there's not much oil around Hawaii.
So there's likely both a human and non-human reason for this in Texas.
Why stop there? Just call the LLM with the data and function description and get it to return the result!
(I'll admit that I've built a few "applications" exploring interaction descriptions with our Design team that do exactly this - but they were design explorations that, in effect, used the LLM to simulate a back-end. Glorious, but not shippable.)
Because you often need the result not as a standalone artifact but as a piece in a rigid process with well-defined business logic and control flow, which you can't yet trust AI to follow.
What was the gap you discovered that made it not shippable? This is an experimental project, so I'm curious to know what sorts of problems you ran into when you tried a similar approach.
1. Confirmable, predictable behavior (can we test it, can we make assurances to customers?).
2. Comparative performance (an LLM call takes hundreds of milliseconds to extract from a list that plain code handles in under 10 ms).
3. Operating costs. LLM calls are spendy. Think of them as hyper-unoptimized lossy function executors (along with being lossy encyclopedias), and the work starts to approach bogosort levels of execution cost for some small problems.
Buuuuuut.... I had working functional prototype explorations with almost no work on my end, in an hour.
We've now extended this thinking to some experience exploration builders, so it definitely has a place in the toolbox.
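The performance point above is easy to make concrete. As a sketch (the email-extraction task here is hypothetical, just one example of the "extract from a list" shape of job the comment mentions), the deterministic version is a few lines of code that runs in microseconds, versus hundreds of milliseconds plus token costs for an LLM round-trip:

```python
import re

# Hypothetical example of a small extraction job: a plain regex does in
# microseconds what an LLM call would do in hundreds of milliseconds.
def extract_emails(lines):
    """Pull email addresses out of a list of strings."""
    pattern = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
    return [m.group(0) for line in lines for m in pattern.finditer(line)]

print(extract_emails([
    "contact: alice@example.com",
    "no email here",
    "bob@test.org, see notes",
]))
# → ['alice@example.com', 'bob@test.org']
```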
The author seems to think they've hit upon something revolutionary...
They've actually hit upon something that several of us have evolved to naturally.
LLM's are like unreliable interns with boundless energy. They make silly mistakes, wander into annoying structural traps, and have to be unwound if left to their own devices. It's like the genie that almost pathologically misinterprets your wishes.
So, how do you solve that? Exactly how an experienced lead or software manager does: you have systems write it down before executing, explain things back to you, and ground all of their thinking in the code and documentation, avoiding making assumptions about code after superficial review.
When it was early ChatGPT, this meant function-level thinking and clearly described jobs. When it was Cline it meant cline rules files that forced writing architecture.md files and vibe-code.log histories, demanding grounding in research and code reading.
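For readers unfamiliar with such rules files, a minimal sketch of what one might contain (the file name, section headings, and log file here are hypothetical illustrations, not the actual setup described above):

```markdown
# .clinerules (hypothetical example)

## Before writing any code
- Read architecture.md and restate the relevant constraints in your own words.
- Ground every claim about existing behavior in a file reference; do not
  assume behavior from a superficial skim.

## Before executing a plan
- Write the plan to a markdown file and wait for approval.
- Append a dated entry to vibe-code.log describing the intended change.

## After finishing
- Update architecture.md if any structural decision changed.
```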
Maybe nine months ago, another engineer said two things to me, less than a day apart:
- "I don't understand why your clinerules file is so large. You have the LLM jumping through so many hoops and doing so much extra work. It's crazy."
- The next morning: "It's basically like a lottery. I can't get the LLM to generate what I want reliably. I just have to settle for whatever it comes up with and then try again."
These systems have to deal with minimal context, ambiguous guidance, and extreme isolation. Operate with a little empathy for the energetic interns, and they'll uncork levels of output worth fighting for. We're Software Managers now. For some of us, that's working out great.
Revolutionary or not, it was very nice of the author to take the time and effort to share their workflow.
For those starting out using Claude Code it gives a structured way to get things done bypassing the time/energy needed to “hit upon something that several of us have evolved to naturally”.
It's this line that I'm bristling at: "...the workflow I’ve settled into is radically different from what most people do with AI coding tools..."
Anyone who spends some time with these tools (and doesn't black out from smashing their head against their desk) is going to find substantial benefit in planning with clarity.
So, yes, I'm glad that people write things out and share. But I'd prefer that they not lead with "hey folks, I have news: we should *slice* our bread!"
> The author's post is much more than just "planning with clarity".
Not much more, though.
It introduces "research", which has been a central topic of LLMs since they first arrived. I mean, LLMs are what pushed the term "hallucination" into common use and turned grounding into a key concept.
In the past, building up context was thought to be the right way to approach LLM-assisted coding, but that concept is dead and proven to be a mistake: it's like debating the best way to force a round peg through a square hole while piling up expensive prompts to bridge the gap. Nowadays it's widely understood that it's far more effective, and far cheaper, to refactor and rearchitect apps so that their structure is unsurprising and grounding issues are no longer a problem.
And planning mode. Every LLM-assisted coding tool has built planning as a central flow, one that explicitly features iterating on and manually updating the plan. What's novel about the blog post?
> A detailed workflow that's quite different from the other posts I've seen.
Seriously? Provide context with a prompt file, prepare a plan in plan mode, and then execute the plan? You get more detailed descriptions of this if you read the introductory how-to guides of tools such as Copilot.
Making the model write a research file, then the plan (iterating on it by editing the plan file), then the todo list, and then doing the implementation, all in a single conversation (instead of clearing contexts).
There's nothing revolutionary, but yes, it's a workflow that's quite different from other posts I've seen, and especially from Boris' thread that was mentioned which is more like a collection of tips.
Having LLMs write their prompt files was something that became a thing the moment prompt files became a thing.
> then the plan and iterate on it by editing the plan file, then adding the todo list, then doing the implementation, and doing all that in a single conversation (instead of clearing contexts).
That's literally what planning mode is.
Do yourself a favor and read the announcement of planning-mode support in Visual Studio. Visual Studio Code supported it months before that.
For some time now, Claude Code's plan mode has also written the plan to a file that you can edit. It's located in ~/.claude/plans/ for me; in fact, there's a whole history of plans there.
I sometimes reference some of them to build context, e.g. after a few unsuccessful tries at implementing something, so that Claude doesn't try the same thing again.
> Anyone who spends some time with these tools (and doesn't black out from smashing their head against their desk) is going to find substantial benefit in planning with clarity.
That's obvious by now, and the reason why all mainstream code assistants now offer planning mode as a central feature of their products.
It was baffling to read the blogger making claims about what "most people" do when anyone using code assistants already does it. I mean, the so-called frontier models are very expensive and time-consuming to run, which is a very natural pressure to make each run count. Why on earth would anyone presume people don't put some thought into those runs?
These kinds of flows have been documented in the wild for some time now. They started to pop up in the Cursor forums 2+ years ago, e.g.: https://github.com/johnpeterman72/CursorRIPER
Personally, I have been using a similar flow for almost 3 years now, tailored to my needs. Everybody who uses AI for coding eventually gravitates toward a similar pattern because it works quite well (across IDEs, CLIs, and TUIs).
I don’t think it’s that big a red flag anymore. Most people use ai to rewrite or clean up content, so I’d think we should actually evaluate content for what it is rather than stop at “nah it’s ai written.”
>Most people use ai to rewrite or clean up content
I think your sentence should have been "people who use AI mostly do so to rewrite or clean up content", but even then I'd question the statistical truth behind that claim.
Personally, seeing something written by AI means that the person who wrote it did so just for looks and not for substance. Claiming to be a great author requires both penmanship and communication skills, and delegating one or either of them to a large language model inherently makes you less than that.
However, when the point is just the contents of the paragraph(s) and nothing more, I don't care who or what wrote it. An example is the results of research, because I certainly won't care about the prose or the effort that went into writing the thesis, only the results (is this about curing cancer now and forever? If yes, no one cares whether it's written with AI).
That being said, there's still no way I get anywhere close to understanding the author behind the thoughts and opinions. I believe the way someone writes hints at the way they think and act. In that sense, using LLMs to rewrite something to sound more professional than how you would actually talk in the same context makes it hard for me to judge someone's character, professionalism, and mannerisms. It almost feels like they're trying to mask part of themselves. Perhaps they lack confidence in their ability to sound professional and convincing?
I don't judge content for being AI written, I judge it for the content itself (just like with code).
However I do find the standard out-of-the-box style very grating. Call it faux-chummy linkedin corporate workslop style.
Why don't people give the LLM a steer on style? Either based on your personal style or at least on a writer whose style you admire. That should be easier.
Because they think this is good writing. You can’t correct what you don’t have taste for. Most software engineers think that reading books means reading NYT non-fiction bestsellers.
> Because they think this is good writing. You can’t correct what you don’t have taste for.
I have to disagree about:
> Most software engineers think that reading books means reading NYT non-fiction bestsellers.
There's a lot of scifi and fantasy in nerd circles, too. Douglas Adams, Terry Pratchett, Vernor Vinge, Charlie Stross, Iain M Banks, Arthur C Clarke, and so on.
But simply enjoying good writing is not enough to fully get what makes writing good. Even writing yourself is not enough to develop such a taste: speaking of Arthur C Clarke, I've just finished 3001, and at the end Clarke thanks his editors, noting that his own experience as an editor meant he held editors in higher regard than many writers seemed to. Stross has, likewise, blogged about how writing a manuscript is only the first half of writing a book, because then you need to edit the thing.
My flow is to craft the content of the article in LLM speak, and then add to context a few of my human-written blog posts, and ask it to match my writing style. Made it to #1 on HN without a single callout for “LLM speak”!
> I don’t think it’s that big a red flag anymore. Most people use ai to rewrite or clean up content, so I’d think we should actually evaluate content for what it is rather than stop at “nah it’s ai written.”
Unfortunately, there's a lot of people trying to content-farm with LLMs; this means that whatever style they default to, is automatically suspect of being a slice of "dead internet" rather than some new human discovery.
I won't rule out the possibility that even LLMs, let alone other AI, can help with new discoveries, but they are definitely better at writing persuasively than they are at being inventive, which means I am forced to use "looks like LLM" as proxy for both "content farm" and "propaganda which may work on me", even though some percentage of this output won't even be LLM and some percentage of what is may even be both useful and novel.
If you want to write something with AI, send me your prompt. I'd rather read what you intend for it to produce rather than what it produces. If I start to believe you regularly send me AI written text, I will stop reading it. Even at work. You'll have to call me to explain what you intended to write.
And if my prompt is a 10 page wall of text that I would otherwise take the time to have the AI organize, deduplicate, summarize, and sharpen with an index, executive summary, descriptive headers, and logical sections, are you going to actually read all of that, or just whine "TL;DR"?
It's much more efficient and intentional for the writer to put the time into doing the condensing and organizing once, and review and proofread it to make sure it's what they mean, than to just lazily spam every human they want to read it with the raw prompt, so every recipient has to pay for their own AI to perform that task like a slot machine, producing random results not reviewed and approved by the author as their intended message.
Is that really how you want Hacker News discussions and your work email to be, walls of unorganized unfiltered text prompts nobody including yourself wants to take the time to read? Then step aside, hold my beer!
Or do you prefer I should call you on the phone and ramble on for hours in an unedited meandering stream of thought about what I intended to write?
Yeah, but it's not. This is a complete contrivance and you're just making shit up. The prompt is much shorter than the output, and you are concealing that fact. Why?
Very high chance someone who's using Claude to write code is also using Claude to write a post from some notes. That goes beyond rewriting and cleaning up.
I use Claude Code quite a bit (one of my former interns noted that I crossed 1.8 million lines of code submitted last year, which is... um... concerning), but I still steadfastly refuse to use AI to generate written content. There are multiple purposes for writing documents, but the most critical is the forming of coherent, comprehensible thinking. The act of putting it on paper is what crystallizes the thinking.
However, I use Claude for a few things:
1. Research buddy, having conversations about technical approaches, surveying the research landscape.
2. Document clarity and consistency evaluator. I don't take edits, but I do take notes.
3. Spelling/grammar checker. It's better at this than regular spellcheck, due to its handling of words introduced in a document (e.g., proper names) and its understanding of various writing styles (e.g., comma inside or outside of quotes, one space or two after a period?)
Every time I get into a one hour meeting to see a messy, unclear, almost certainly heavily AI generated document being presented to 12 people, I spend at least thirty seconds reminding the team that 2-3 hours saved using AI to write has cost 11+ person-hours of time having others read and discuss unclear thoughts.
I will note that some folks actually put in the time to guide AI sufficiently to write meaningfully instructive documents. The part that people miss is that the clarity of thinking, not the word count, is what is required.
Well, real humans may read it, though. Personally, I much prefer real humans writing real articles to all this AI-generated spam-slop. On YouTube this is especially annoying: they mix real videos with fake ones. I see it when I watch animal videos: some animal behaviour is taken from older videos, then an AI fake is added. My own policy is to never again watch anything from people who lie to the audience that way, so I've had to start filtering out such lying channels. I'd apply the same rationale to blog authors (though I'm not 100% certain this one is actually AI generated; I just mention it as a safeguard).
If your "content" smells like AI, I'm going to use _my_ AI to condense the content for me. I'm not wasting my time on overly verbose AI "cleaned" content.
Write like a human, have a blog with an RSS feed and I'll most likely subscribe to it.
It is to me, because it indicates the author didn't care about the topic. The only thing they cared about was writing an "insightful" article about using LLMs. Hence this whole thing is basically LinkedIn resume-improvement slop.
Not worth interacting with, imo
Also, it's not insightful whatsoever. It's basically a retelling of other articles from around the time Claude Code was released to the public (March-August 2025).
The main issue with evaluating content for what it is is how extremely asymmetric that process has become.
Slop looks reasonable on the surface, and requires orders of magnitude more effort to evaluate than to produce. It’s produced once, but the process has to be repeated for every single reader.
Disregarding content that smells like AI becomes an extremely tempting early filtering mechanism to separate signal from noise - the reader’s time is valuable.
I think as humans it's very hard to abstract content from its form. So when the form is always the same boring, generic AI slop, it's really not helping the content.
And maybe writing an article or keynote slides is one of the few places we can still exercise some human creativity, especially when the core skill (programming) is almost completely in the hands of LLMs already.
Then ask your own AI to rewrite it so it doesn't trigger you into posting uninteresting, thought-stopping comments proclaiming why you didn't read the article, which don't contribute to the discussion.
Agreed. The process described is much more elaborate than what I do, but quite similar. I start by discussing in great detail what I want to do, sometimes asking the same question of different LLMs. Then a todo list, then a manual review of the code, especially each function signature, checking that the instructions have been followed and that there are no obvious refactoring opportunities (there almost always are).
The LLM does most of the coding, yet I wouldn't call it "vibe coding" at all.
I use AWS Kiro, and its spec-driven development is exactly this. I find it really works well, as it makes me slow down and think about what I want it to do.
I’ve also found that a bigger focus on expanding my agents.md as the project rolls on has led to fewer headaches overall and more consistency (unsurprisingly). It’s the same as asking juniors to reflect on the work they’ve completed and to document important things that can help them in the future. Software Manager is a good way to put this.
AGENTS.md should mostly point to real documentation and design files that humans will also read and keep up to date. It's rare that something about a project is only of interest to AI agents.
I really like your analogy of LLMs as 'unreliable interns'. The shift from being a 'coder' to a 'software manager' who enforces documentation and grounding is the only way to scale these tools. Without an architecture.md or similar grounding, the context drift eventually makes the AI-generated code a liability rather than an asset. It's about moving the complexity from the syntax to the specification.
It feels like retracing the history of software project management. The post is quite waterfall-like: writing a lot of docs and specs upfront, then implementing. Another approach is to just YOLO (on a new branch), make it write up the lessons afterwards, then start a new, more informed try and throw away the first. Or any other combo.
For me, what works well is asking it to write some code upfront to verify its assumptions against actual reality, not just telling it to review the sources "in detail". It gains much more from real output from the code, which clears up wrong assumptions. Do some smaller jobs, write up md files, then plan the big thing, then execute.
It makes an endless stream of assumptions. Some of them brilliant and even instructive to a degree, but most of them are unfounded and inappropriate in my experience.
'The post is quite waterfall-like. Writing a lot of docs and specs upfront then implementing' - It's only waterfall if the specs cover the entire system or app. If it's broken up into sub-systems or vertical slices, then it's much more Agile or Lean.
Oh no, maybe the V-Model was right all along? And right-sizing increments with control stops after them. No wonder these matrix multiplications start to behave like humans; that is what we wanted them to do.
> The author seems to think they've hit upon something revolutionary...
> They've actually hit upon something that several of us have evolved to naturally.
I agree, it looks like the author is talking about spec-driven development with extra time-consuming steps.
Copilot's plan mode also supports iterations out of the box, and only executes a drafted plan after you've manually reviewed and edited it. I don't know what the blogger was proposing that ventured outside of plan mode's happy path.
If you have a big rules file, you're headed in the right direction but still not there. Just as with humans, the key is an architecture that makes it very difficult to break the rules by accident and still compile/run with a correct exit status.
My architecture is so beautifully strong that even LLMs and human juniors can’t box their way out of it.
I've been doing the exact same thing for 2 months now. I wish I had gotten off my ass and written a blog post about it. I can't blame the author for gathering all the well-deserved clout they're getting for it now.
Don’t worry. This advice has been going around for much more than 2 months, including links posted here as well as official advice from the major companies (OpenAI and Anthropic) themselves. The tools literally have had plan mode as a first class feature.
So you probably wouldn’t have gotten any clout anyway, like all of the other blog posts.
I went through the blog. I started using Claude Code about 2 weeks ago and my approach is practically the same. It just felt logical. I think there are a bunch of us who have landed on this approach and most are just quietly seeing the benefits.
> LLM's are like unreliable interns with boundless energy
This isn’t directed specifically at you but at the general community of SWEs: we need to stop anthropomorphizing a tool. Code agents are not human-capable, and scaling pattern matching will never hit that goal. That’s all hype, and this is coming from someone with heavy daily CC usage. I’m using CC to its fullest capability while also being a good shepherd for my prod codebases.
Pretending code agents are human-capable is fueling this Kool-Aid-drinking hype craze.
It’s pretty clear they effectively take on the roles of various software related personas. Designer, coder, architect, auditor, etc…
Pretending otherwise is counter-productive. This ship has already sailed, it is fairly clear the best way to make use of them is to pass input messages to them as if they are an agent of a person in the role.
Why would you test implementation details? Test what's delivered, not how it's delivered. The thinking portion, synthesized or not, is merely implementation.
The resulting artefact, that's what is worth testing.
Because that has never been sufficient, from hard-to-test cases to readability and long-term maintenance. Reading and understanding the code is more efficient, and necessary for any code worth keeping around.
It's nice to have it written down in a concise form. I shared it with my team as some engineers have been struggling with AI, and I think this (just trying to one-shot without planning) could be why.
Cool. Good for him. I've been building agentic and observational systems and have been working to make them safe and layered in defense. And, well, I probably should have just said "fuck it" and put a disclaimer sticker on the front to let it fly.
Yeah, these systems are going to get absolutely rocked by exploits. The scale of damage is going to be comical, and, well, that's where we are right now.
Go get 'em, tiger. It's a brave new world. But, as with my 10-year-old, I need to make sure the credit cards aren't readily available. He'd just buy $1k of Robux. Who knows what sort of havoc uncorked agentic systems could bring?
One of my systems accidentally observed some AWS keys last night. Yeah. I rotated them, just in case.
Having ripcorded out after realizing the author was trying to prove that water was wet, I'll assume that it's "normalized entropy", in a range of 0-1, indicative of the distribution across the space.
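If that guess is right, a minimal sketch of "normalized entropy" (assuming the standard definition: Shannon entropy of the distribution divided by its maximum, the log of the number of categories) looks like:

```python
import math
from collections import Counter

def normalized_entropy(samples):
    """Shannon entropy of the sample distribution, divided by its maximum
    (log of the number of distinct categories), so the result lands in [0, 1].
    0 = all mass on one outcome; 1 = perfectly uniform spread."""
    counts = Counter(samples)
    n = len(samples)
    if len(counts) < 2:
        return 0.0  # a single category has no spread to measure
    h = -sum((c / n) * math.log(c / n) for c in counts.values())
    return h / math.log(len(counts))

print(normalized_entropy(["a", "a", "b", "b"]))  # 1.0 (uniform over two outcomes)
print(normalized_entropy(["a"] * 99 + ["b"]))    # close to 0 (heavily skewed)
```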
I used to have a Sikh manager who wore a turban. Whenever we traveled together, he would get "randomly" stopped. While they were patting him down, he would inevitably chuckle and say something like "So what are the odds of being 'randomly' selected 27 times in a row?"
I don't know the specifics of the process for selection, but I can confidently say that the process is bigoted.
Same thing used to happen to me when I had dreadlocks. Made the same joke too. "what are the odds I'd get randomly selected 100% of the time I go through a checkpoint..."
Besides being racist, this is kind of dumb. If you're going to bring down the plane, you're defo not going to look like someone who gets randomly selected 100% of the time. Even the 9/11 terrorists knew this and shaved their beards instead of looking like the fundamentalist scumbags they were.
In proper English usage it would only be a bigoted check (obstinately or unreasonably attached to a belief, opinion, or faction; in particular, prejudiced against or antagonistic towards a person or people on the basis of their membership of a particular group) if it were unreasonable to suspect a Sikh of carrying a Kirpan.
The Rehat Maryada would suggest that is in no way whatsoever an unreasonable suspicion.
Sure, your manager likely didn't carry one on airplanes .. but that still falls short of being an unreasonable check.
As a white guy who was caught accidentally carrying a large knife once through security, at the bottom of a carry-on backpack I'd had since high school, I don't think it's in any way essential to use racial or ethnic markers to figure out whether someone is taking something dangerous onto a plane. I didn't even know I was trying to bring a knife onto a plane at a regional airport. There's no reason to think that Sikhs are explicitly going out of their way to hide something.
Interesting that none of these comments seem to be questioning why we can’t just carry a small pocketknife on the plane. We used to be able to before 9/11. The 9/11 hijackings only worked because the policy was to comply, land, and let the negotiators do their work; suicide attacks using commercial airliners just weren’t a thing. We now have armored, locking cockpit doors, and no flight crew would give up control to hijackers anymore. The passengers of United Flight 93, already taken over, heard about the World Trade Center and revolted.
Now, knives could only be used to commit an ordinary crime, i.e. assaulting another passenger or crew member. Banning liquids does more to prevent terrorism than banning knives. I can see banning them for the same reason concerts do, that it is a lot of people in a small space, but that is very different from “national security” or “preventing terrorism”.
Welcome to the club. I inadvertently traveled with not one but two large box cutters in my carry-on satchel for at least 20 flights before I discovered them while searching for some swag. I had put them in there for a booth setup in Vegas years prior. Sent a completely calm, even sympathetic report to the powers that be, got put on the DNF list for my troubles.
Still screened and detained 100 percent of the time, sometimes for hours, sometimes having to surrender personal devices, decades later.
> Sent a completely calm, even sympathetic report to the powers that be, got put on the DNF list for my troubles.
What were you hoping to achieve by sending that report?
Most people would have just thought "wow, lucky I wasn't caught with that", taken it out of the bag so it didn't happen again and carried on with their lives.
Deviating from that normal response makes it look like you're just trying to cause trouble.
Yeah, if I had a "Crap, what was that doing in there?" I'd be very quiet about it.
As I wrote in a very different thread, I avoid putting anything even marginally prohibited in baggage that I might carry on. I used to do a lot more travel, and it's inevitable that knives and the like would get left in a pocket.
Some of us genuinely believe all that "cops are there to help you, so try to be helpful to cops" stuff we were raised on. Right up until the point when you actually try to do it and find out how things really work...
You sent a report saying you went unsearched for 20 flights, and now you are searched every time? Have you been searched more than 20 times since?
Honestly, I would just give them a pass to carry a ceremonial knife, if they could prove they were Sikhs and not someone pretending to be. But I guess that's why we can't have nice things and why the same rules have to apply to everyone. I think most reasonable people understand that they can't preserve every aspect of their personal beliefs or pride in a situation involving the safety of millions of people flying daily. Carrying a weapon is certainly a bit unusual as a pillar of faith, but there are plenty of others that could also be deemed antipathetic to the well-functioning order of a modern society trying to move people safely from A to B. And the same way I would consider trained and licensed gun owners to be a relatively low threat and a rule-abiding group of citizens, that's how I would view Sikhs with their blades (or even more so). So if you're Amish, take a horse. If it's Shabbat, wait til Sunday. If you're the TSA and you want to be more efficient by discriminating, look at people who have no discernible ideology, or those whose ideology actively conflicts with your mission of preventing attacks.
Sikhs carrying a knife, a bracelet, a comb, etc. has never bothered me in the slightest in all the decades I've known about this - the Khalistan movement in a particular location during a particular time aside, they're not exactly poster children for terrorism (despite what some might think when faced with people and turbans).
They always had a pass here in Australia for many years until things tightened up.
Not that I'm a fan, but in general Rules are Rules and making exceptions while fair in some senses will be unfair in others <shrug>.
Circling back to my initial comment: there is an actual reason, rather than a made-up bit of bullshit, to reasonably suspect that a Sikh might be carrying a knife ... if they are, they're almost certain to also have a comb .. so that's handy.
okee yeah, and rules are rules, and there's a reason to think that. It would be nice if we lived in a world where rules could be bent in some cases for individuals if they actually posed no threat, but we all have to deal with the lowest common denominator wanting to cause the most damage, so here we are.
I must say, one thing that this reminds me of is what happens if you board an El Al flight. They don't racially profile you, they just ask you some fairly innocuous questions and watch your responses. I assume they have some way of monitoring your blood pressure, heart rate, and pupil dilation at a distance... but this hasn't really changed since the 1980s, when those things had to be read or guessed in real time by a trained human. They have a phenomenal safety record, for a country under constant terror attacks.
My takeaway from flying El Al is that there is a much better way to deal with security, one that analyzes and addresses the potentially bad individual motives of anyone getting on a plane, and mostly lets everyone else pass. Which is to say that security in its best form should be almost transparent to people without malicious intentions. Having good intelligence coupled with treating each person as their own potential bomb threat is far superior to superficially treating everyone as a threat and having no real security, and far better than just creating security theater around certain people because they're of one race or ethnicity. But El Al's methods probably don't scale well to the size of US or European air travel, because you need highly trained people to stand there in the airport and make those calls on the fly for every single passenger.
If I were to guess - I'd guess El Al would let a Sikh bring a blade if they looked him in the eye for 10 seconds and decided he was okay.
The issue isn't really whether a Sikh might be carrying a knife (as Sikhs generally advocate non-violence and pacifism), but if an exemption is afforded to give Sikhs the right to carry weapons on a plane, whether a terrorist might then impersonate being a Sikh in order to get a weapon onboard.
The Sikh blade is ornamental, and usually blunted. There's no reason why they shouldn't be able to carry a blunted blade that basically isn't even a knife. There is no concern of a terrorist using it anymore than any other blunted object, as Sikhs could be required to bring the blunted blade and the blade checked at security.
This is completely absurd and backwards. Violence on planes (at least physical, weapon-assisted violence) is basically exclusively the purview of organized ideological groups; it's not like crime in the streets. While I'm not aware of any Sikh group that has ever attempted to hijack a plane, the extremely well established general pattern is exactly that extremist sincere believers in a religion or cause are the most dangerous people on a plane.
Not a hijacking, but also maybe a reason not to give all Sikhs a pass on airport security.
> The bombing of Air India Flight 182 is the worst terrorist attack in Canadian history and was the world's deadliest act of aviation terrorism until the September 11 attacks in 2001. It remains the deadliest aviation incident in the history of Air India, and the deadliest no-survivor hull loss of a single Boeing 747
I think you misunderstood me. That's exactly what I'm saying. And I'm saying that Sikhs with or without ceremonial blades are no more of a threat than Mormons wearing special underwear.
[edit]
To be more specific: An individual with an extreme belief about anything is as dangerous as an extremist member of a group with extreme beliefs. So the smart thing is to look at the beliefs and extremism of each person. If you find someone trying to board an aircraft who doesn't care if they make it to the end of their flight, that is a security problem.
I think the best and easiest idea is to prevent people from carrying weapons on airplanes. Taking over an airplane with special underwear is not a realistic threat.
In contrast, trying to interview and run background checks on every person boarding a plane to figure out if they are an extremist on a mission or not is (a) much more invasive, and (b) much less likely to work out. Especially when you actually don't want to prevent fundamentalists from flying on planes (I don't think preventing some major evangelical church leader or some radical rabbi from flying would even be constitutional, and clearly not a popular move if attempted).
Note that I am not at all advocating for extra security targeting of Sikhs or any other such religious or ethnic targeting. I am just saying that no one should be allowed to carry a weapon on board a commercial airplane, for any reason.
Notably in India, there have been a few times where Sikhs have been at the head of violent revolts - and a few times where they have been targeted by violent purges/genocides.
They’re generally pretty chill, but they aren’t pacifists.
I'd say that incident falls under political extremism, not religious extremism. Which is all the more reason to check people's individual beliefs rather than their race or ethnicity. Anyone from any background can be radicalized; some backgrounds are more prone to it than others. Sikhs, as you say, are pretty chill. Not being pacifist doesn't mean you want to go out and kill anyone.
Indeed, I didn't know about this incident, thanks for sharing it.
Anyway, I wasn't trying to say that Sikhs are more or less likely than any other group to be pacifist. I was saying we shouldn't even be having this discussion, and simply scan people for weapons, and use things like actual random screening to help as needed. And that religious reasons for carrying weapons are not a valid excuse.
scuse me, is there another major religion in modern times whose popular leaders sanctify taking the lives of disbelievers to get to heaven? I'm waiting, I'd love to hear about another one.
@defrost: I apparently can't respond directly to you. It's a mistake to ascribe a singular focus to someone you don't know. There may be one out of ten thousand people in any group who might want to cause chaos or violence, and they may very well have their own reasons. It would be absurd, though, to not acknowledge that there are some "gospels", if you will take that term in the broadest sense possible, or sub-religions, which preach that violence is a path to salvation, and which tend to recruit people for the purpose of violence. There are also some political movements which fill the same vacuum for an aimless, angry human soul without religion.
It is not that I have a singular focus on one religion nor one political movement, so much as that the evidence suggests that, currently, some movements have more violent offshoots and a more violent profile. There are a handful of political and religious ideologies in the world that lead to more suicide bombings and hijackings per year than, say, the total number done by believers in Zoroastrianism, Sikhs, Confucians, Hindus, Yazidis, Jews, Buddhists, Libertarians, Democratic Socialists, Freemasons and Christians combined.
If you had, for instance, Jim Jones's cult or the Aum Shinrikyo boarding airplanes and blowing them up on a regular basis, and your response was that a person had to be a single-minded bigot to notice the fact that most airplane bombings originated with this particular ideology, then I'd say you were ignoring facts or willfully making excuses for ideologies which brainwashed people into doing those things. Possibly for reasons related to disliking your own society, which is perfectly fair, but certainly not neutral or scientific.
No, not at all. I was simply combating the idea that the kinds of reasons that lead to people being less likely to become regular criminals (a religious reason to carry a weapon, being licensed and trained with a weapon) would apply to their risk profile on airplanes.
Isn’t that what the scanners are for? To find large metallic objects? Why do you need additional “random” screenings behind that? Or are you saying the scanners don’t work to find even obvious weapons? If so, we should get rid of the scanners.
Err, not that I know of, I generally use the OED to look up the various recorded uses of words.
> To find large metallic objects?
The OED is for finding words; "scanners" that I've used or made are for mapping background geological structures via seismic waves, gravitational waves, magnetic waves, gamma waves. Medical scanners I've worked with have generally not been used for finding large metallic objects, and some should not be used if a patient has large metal objects attached or within.
> Why do you need additional “random” screenings behind that?
In 40+ years of scanning things there's not been a single time I've needed an additional "random" scan - a few times scans have been repeated due to various failures to save data.
> Or are you saying the scanners don’t work to find even obvious weapons?
In the comment you responded to I said that it is not unreasonable to think that a Sikh you meet, anywhere, might be carrying a knife, a comb, a bracelet, etc. I did not mention anything about scanners. No, seriously, go and recheck the comment.
> If so, we should get rid of the scanners.
We? All scanners? Okay, well, thanks for sharing that opinion.
I figure various groups of scanner users will want to keep using them, of course. I personally am in favour of scanners for exploration and medical work.
Possibly, but Waymos have recently been much more aggressive about blowing through situations where human drivers can (and generally do) slow down. As a motorcyclist, I've had some close calls with Waymos driving on the wrong side of the road recently, and I had a Waymo cut in front of my car at a one-way stop (t intersection) recently when it had been tangled up with a Rivian trying to turn into the narrow street it was coming out of. I had to ABS brake to avoid an accident.
Most human drivers (not all) know to nose out carefully rather than to gun it in that situation.
So, while I'm very supportive of where Waymo is trying to go for transport, we should be constructively critical and not just assume that humans would have been in the same situation if driving defensively.
Certainly, I'm not against constructive criticism of Waymo. I just think it's important to consider the counterfactual. You're right too that an especially prudent human driver may have avoided the scenario altogether, and Waymo should strive to be that defensive.
> I'm not against constructive criticism of Waymo.
I feel like you have to say this out loud because many people in these discussions don't share this view. Billion dollar corporate experiments conducted in public are sacrosanct for some reason.
> I just think it's important to consider the counterfactual
More than 50% of roadway fatalities involve drugs or alcohol. If you want to spend your efforts improving safety _anywhere_ it's right here. Self driving cars do not stand a chance of improving outcomes as much as sensible policy does. Europe leads the US here by a wide margin.
> I feel like you have to say this out loud because many people in these discussions don't share this view. Billion dollar corporate experiments conducted in public are sacrosanct for some reason.
Yes, and I find it annoying that some people do seem to think Waymo should never be criticized. That said, we already have an astounding amount of data, and that data clearly shows that the experiment is successful in reducing crashes. Waymos are absolutely, without question already making streets safer than if humans were driving those cars.
> If you want to spend your efforts improving safety _anywhere_ it's right here.
We can and should do both. And as your comment seems to imply but does not explicitly state, we should also improve road design to be safer, which Europe absolutely kicks America's ass on.
>data clearly shows that the experiment is successful in reducing crashes.
That's fine. But crashes are relatively rare and what matters is accountability. Will Waymo be accountable for hitting this kid the way a human would? Or will they fight in court to somehow blame the pedestrian? Those are my big concerns when it comes to self driving vehicles, and history with tech suggests that they love playing hot potato instead of being held accountable.
And yes, better walkable infrastructure is a win for all. The minor concern I have is the notion that self driving is perfect and we end up creating even more car centric infrastructure. I'm not sure who to blame on that one.
I hope so too. I'll be keeping a close eye on how they handle this, though. My benefit of the doubt for tech was already long drained, and is especially critical for safety critical industries.
I believe how they navigate the legal system through this will indeed affect their credibility. It's the one channel where you need to be the most honest (if you aren't the government itself), so I hold a lot more weight on that than on PR statements.
> and that data clearly shows that the experiment is successful in reducing crashes
I disagree. You need way more data, like orders of magnitude more. There are trillions of miles driven in the US every year. Those miles often include driving in inclement weather which is something Waymo hasn't even scraped the surface of yet.
> without question
There are _tons_ of questions. This is not even a simple problem. I cannot understand this prerogative. It's far too eager or hopeful.
> We can and should do both
Well Google is operating Waymo and "we" control road policy. One of these things we can act on today and the other relies on huge amounts of investments paying off in scenarios that haven't even been tested successfully yet. I see an environment forming where we ignore the hard problems and pray these corporate overlords solve the problem on their own. It's madness.
> You need way more data, like orders of magnitude more. There are trillions of miles driven in the US every year.
Absurd, reductive, and non-empirical. Waymos crash and cause injury/fatality far less frequently than human drivers, full stop. You are simply out of your mind if you believe otherwise, and you should re-evaluate the data.
> Those miles often include driving in inclement weather which is something Waymo hasn't even scraped the surface of yet.
Yes. No one is claiming that Waymos are better drivers than humans in inclement weather, because they don't operate in those conditions. That does not mean Waymos are not able to outperform human drivers in the conditions in which they do operate.
> I see an environment forming where we ignore the hard problems and pray these corporate overlords solve the problem on their own. It's madness.
What's madness is your attitude that Waymos' track record does not show they are effective at reducing crashes. And again, working on policy does not prevent us from also improving technology, as you seem to believe it does.
> More than 50% of roadway fatalities involve drugs or alcohol. If you want to spend your efforts improving safety _anywhere_ it's right here. Self driving cars do not stand a chance of improving outcomes as much as sensible policy does. Europe leads the US here by a wide margin.
Could you spell out exactly what "sensible" policy changes you were thinking of? Driving under the influence of drugs and/or alcohol is already illegal in every state. Are you advocating for drastically more severe enforcement, regardless of which race the person driving is, or what it does to the national prison population? Or perhaps for "improved transit access", which is a nice idea, but will take many decades to make a real difference?
>Driving under the influence of drugs and/or alcohol is already illegal in every state.
FWIW, your first OWI in Wisconsin, with no aggravating factors, is a civil offense, not a crime, and in most states it is rare to do any time or completely lose your license for the first offense. I'm not sure exactly what OP is getting at, but DUI/OWI limits and enforcement are pretty lax in the US compared to other countries. Our standard .08 BAC limit is a lot higher than many other countries.
That's true, but note that getting much more severe on enforcement and punishment for DUI/OWI will result in an even higher prison population, more serious life consequences for poor and minorities, etc, when the US is constantly getting trashed for how bad those things are already.
To be a bit snarkier, and not directed at you, but I wish these supposedly superior Europeans would tell us what they actually want us to do. Should we enforce OWI laws more strictly, or lower the prison population? We can't do both!
I suspect you could step up enforcement in ways that don’t involve prison time simply by taking away people’s licenses, and then having a fast feedback loop to catch people driving without a license.
Taking away licenses is a bad way to enforce driving rules because so many people have to be able to drive or their life collapses. The problems of aggressive license revocation are similar to the problems of aggressive prison time.
I get where you're coming from, but it's pretty hard to be sympathetic given the crimes we're talking about and the impact they have on others.
Like that would sound nuts if we applied it to other things - e.g. "take away the professional license of a mid-career pilot/surgeon/schoolteacher/engineer because he was drinking on the job and his life collapses".
Various people can't drive because of e.g. visual impairments, age, poverty, etc. - I find it an ugly juxtaposition to be asserting that we must allow people with DUIs to drive because otherwise their lives would "collapse" to the same point as those other people who can't drive.
> Like that would sound nuts if we applied it to other things - e.g. "take away the professional license of a mid-career pilot/surgeon/schoolteacher/engineer because he was drinking on the job and his life collapses".
The analogy is closer to "take away their ability to get any job" and then it sounds even more harsh.
> Various people can't drive because of e.g. visual impairments, age, poverty, etc. - I find it an ugly juxtaposition to be asserting that we must allow people with DUIs to drive because otherwise their lives would "collapse" to the same point as those other people who can't drive.
If you can't see well enough to drive, then life was unfair to you, and you can often get help with transportation that isn't available to someone that violated the law. For age, if you're young then your parents are supposed to care for you, if you're too old to drive you're supposed to have figured out your retirement by now. For poverty, you kinda still need a car no matter what, that's just how the US is set up in most areas. And it's not ugly to make the comparison to extreme poverty, to say that kicking someone down to that level is a very severe punishment.
> must allow
I wasn't saying what we should do, just that turning up the aggressiveness has serious unwanted consequences.
> The analogy is closer to "take away their ability to get any job" and then it sounds even more harsh.
If you take away the license of a pilot mid-career, they may be able to pivot to something else, but have a huge sunk cost of education and seniority where they ground out poor pay/schedules and then never made it to the part of the career with better pay. For a substantial segment of them, the career impact would be comparable to taking away the ability to drive from a random person.
> For poverty, you kinda still need a car no matter what, that's just how the US is set up in most areas.
You really don't. If you don't already live somewhere with public transit, you'll probably have to move. You'll have to make some sacrifices. But it's workable, I lived without a car and relied on city busses for all my transportation for several years. (And while I wouldn't necessarily recommend it, prior to that, I lived in a small town of ~4k people without transit service. I walked everywhere, and took the inter-city bus when I needed to leave the town.)
In addition to what the sibling said regarding the impracticality of not driving in most of the US, which I completely agree with, I'd also ask exactly what you want to do with your "fast feedback loop to catch people driving without a license". What do you do with the people who drive anyways because not driving is so impractical and get caught?
We already took their license, we can't double-take it to show we really mean it. Fining them seems a bit rough when they need to drive to get to the job to make the money to pay those fines. Or we're right back to jail time and an even higher prison population.
> I'd also ask exactly what you want to do with your "fast feedback loop to catch people driving without a license".
Unless the vehicle is stolen, seize and impound the vehicle. If the driver is the owner, auction it off and give them back the proceeds, minus costs.
I feel like I'm living in some different world where drunk driving is a-okay when I face these types of objections to actually enforcing the rules around it.
It's more that you don't seem to engage much with the trade-offs of all of the possible options. This debate has been going on for decades and society has swung back and forth multiple times already. "Let's enforce things much more harshly" is not at all a new take. Enforcing things harshly enough to actually cut down on the rates of DWI will most definitely cause serious damage to a bunch of lives, including many poor and minorities, and there isn't going to be some clever way around that.
It is a possible position at the end of the day, though. You may come across as more honest and experienced if you just explicitly say that you think it's worth that damage to cut down on DWI-related accidents. I would even agree that we should probably swing that pendulum a bit more towards enforcement. It seems kind of silly and naive to me, though, to pretend that you can just hand-wave the resulting damage away.
I don’t think the pendulum has ever really swung towards high-effectiveness interventions, only, as you call them, harsh ones.
As far as DUIs are concerned I’m specifically not in favour of harsh jail time and fines due to their lack of effectiveness and collateral damage.
Interventions that allow a short feedback loop to stop the crimes being committed simply haven’t been tried at scale for DUIs - think efforts like NYC’s anti-idling laws where you can collect a portion of the fine for reporting idling trucks.
Based on, among other things, my experience living for years without a car in both a medium-sized city and a small town, I find it unpersuasive to claim that anyone, including poor and minorities are better served by having community members drive drunk rather than not driving at all. We’ve quantified the costs of drunk driving (hundreds of billions of $) - I’d welcome anyone to quantify the economic benefits we get from allowing those with DUIs to continue to drive.
Absolutely, I can tell you right now that many human drivers are probably safer than the Waymo, because they would have slowed down even more and/or stayed further from the parked cars outside a school; they might have even seen the kid earlier in e.g. a reflection than the Waymo could see.
It seems it was driving pretty slow (17MPH) and they do tend to put in a pretty big gap to the right side when they can.
There are kinds of human sensing that are better when humans are maximally attentive (seeing through windows/reflections). But there's also the seeing-in-all-directions, radar, superhuman reaction time, etc, on the side of the Waymo.
17MPH may be way too fast, depending on the details. I do not think the article gives enough detail to know whether it was a reasonable speed to be going; enough detail to know it might be too fast, like proximity to a school and children present, yes.
Next time you're vibe coding something, have the system generate a mermaid diagram to show its understanding. Though visual generation can be hard for models, structure/topology in formats like mermaid is pretty gettable.
I've even found sonnet and opus to be quite capable of generating json describing nodes and edges. I had them generate directed acyclic processing graphs for a GUI LLM data flow editor that I built (also with Claude - https://nodecul.es/ if curious)
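As a sketch of what that nodes-and-edges JSON might look like, and how you'd sanity-check it before feeding it to an editor: the schema below (node `id`/`type`, edge `from`/`to`) is a hypothetical illustration, not the actual format used by nodecul.es, and the check is a standard Kahn's-algorithm topological sort to confirm the graph is actually acyclic.

```python
import json
from collections import deque

# Hypothetical nodes-and-edges JSON of the kind an LLM might emit for a
# data-flow graph. The schema here is an assumption for illustration only.
graph_json = """
{
  "nodes": [
    {"id": "load", "type": "source"},
    {"id": "clean", "type": "transform"},
    {"id": "summarize", "type": "llm"},
    {"id": "save", "type": "sink"}
  ],
  "edges": [
    {"from": "load", "to": "clean"},
    {"from": "clean", "to": "summarize"},
    {"from": "summarize", "to": "save"}
  ]
}
"""

def topo_order(graph):
    """Kahn's algorithm: return a topological order of node ids,
    or None if the graph contains a cycle (i.e. is not a DAG)."""
    ids = [n["id"] for n in graph["nodes"]]
    indegree = {i: 0 for i in ids}
    successors = {i: [] for i in ids}
    for e in graph["edges"]:
        successors[e["from"]].append(e["to"])
        indegree[e["to"]] += 1
    # Start from nodes with no incoming edges and peel the graph layer by layer.
    queue = deque(i for i in ids if indegree[i] == 0)
    order = []
    while queue:
        node = queue.popleft()
        order.append(node)
        for nxt in successors[node]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                queue.append(nxt)
    return order if len(order) == len(ids) else None

graph = json.loads(graph_json)
print(topo_order(graph))  # → ['load', 'clean', 'summarize', 'save']
```

A valid ordering doubles as a processing schedule for the graph; a `None` result is a cheap way to reject a model output that accidentally wired up a cycle.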