so the firing party has an active homing beacon, aimed at you, just before they are about to fire? hmmmmmmmmmmmmmmmm yep no problems with that approach at all
> Can you avoid a hellfire missile if you know it’s coming?
Not really, but if the aircraft launching that Hellfire announces itself like that prior to firing, just to check who you are, it may give you just enough information to fire at it preemptively, or at least know (or let others know) where to look for it.
It's not a problem if the US is fighting a technologically inferior opponent, but apparently (from what I've read on HN the other day), the US military is retooling itself towards competition with China, so they have to consider their technological trickery being used against them.
> at least know (or let others know) where to look for it
I think this only works for the targeted party, which sends a pulse, which you could potentially triangulate on. The firing aircraft sends a very narrow beam of signal, so unless you have a specific setup to detect the direction of the signal, you’ll just know you were hit by an unidentified radio beam.
I’m not a radio expert though, who knows what I’m missing :)
> "Using a proper, single-threaded executor and running it on the current thread seems like it would work, yes. (To be fair, I also feel like just having the sync version of a Python API call "trio.run(self.equivalent_async_api)" would probably also work, and I don't totally follow why that's insufficient....)"
Why do I need all this other bloat just to run a function? Why can't I just run the function?
I'm not sure exactly what you're asking, but I think it's answered one of three ways: by the "What color is your function?" article I linked above; by the fact that this is exactly why unasync exists and why I suggested that approach is worth considering; or by the answer that you can, in fact, just run the function, and the "bloat" (which is only syntactic bloat - note that performance is generally going to be better!) is taken care of behind the scenes by a wrapper that calls an executor for you.
yes sorry, those were rhetorical questions. Re: your point about failing to see why a blocking executor around the async code is insufficient - my problem is with needing the executor at all. I must have skipped a couple of your previous points in this thread. Apologies about that...
Maybe we should start trying to think about async as being something one can use if they want and ignore if they don't. Code being async compatible rather than async required.
How would this work? (I do really think this is the right model, I'm just trying to figure out what that model is, exactly. :) )
Let's say I have code like this, in Python asyncio:
    class ShardedDBClient:
        async def query(self, key):
            tasks = [self.query_shard(shard, key) for shard in self.shards]
            results = await asyncio.gather(*tasks)
            for partial_result in results:
                if key in partial_result:
                    return partial_result[key]
            return None
How do you run this without an executor?
The obvious way to make it not be "async required" is to say, we get rid of the async/await keywords - but what do you do with that "await asyncio.gather" instruction? Do you call each of those callbacks serially?
Generally, even in Rust (perhaps especially in Rust), I would expect this to use some OS facility for waiting on multiple sockets (possibly even just boring select(), but preferably epoll/kqueue) to send a bunch of database requests out in parallel and then wait on all their sockets to handle responses as they arrive. I would expect that even if my own code doesn't involve async/await at all.
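To make that concrete, here's a minimal sketch of that select-style pattern using Python's selectors module - `query_all` and the socketpair setup are illustrative stand-ins, not any real client library. No async/await anywhere; we just register every socket and let the OS tell us which ones have responses ready:

```python
import selectors
import socket

def query_all(conns):
    """Wait on all sockets with a selector and collect a response from
    each as it arrives - plain synchronous code driving the same OS
    readiness machinery (select/epoll/kqueue) an async runtime would use."""
    sel = selectors.DefaultSelector()
    for conn in conns:
        conn.setblocking(False)
        sel.register(conn, selectors.EVENT_READ)
    results = {}
    while len(results) < len(conns):
        # Blocks until at least one socket is readable.
        for key, _ in sel.select():
            sock = key.fileobj
            results[sock] = sock.recv(4096)
            sel.unregister(sock)
    sel.close()
    return results

# Demo with socketpairs standing in for shard connections:
pairs = [socket.socketpair() for _ in range(3)]
for i, (server_end, _) in enumerate(pairs):
    server_end.send(b"reply-%d" % i)  # each "shard" replies
clients = [client_end for _, client_end in pairs]
replies = query_all(clients)
```

The point being: this waits on all three "shards" concurrently, and nothing in the calling code is async.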
The obvious sync-facing API is a thin wrapper - something like def query_sync(self, key): return asyncio.run(self.query(key)) - which creates an asyncio executor just to run that one function.
This is going to be a lot faster than querying those shards one at a time! And it also can semantically change how the library behaves - imagine that there's a timeout parameter, and I set a 100ms timeout. I probably mean that to be 100ms for the entire operation, not 100ms per request, but I probably also don't expect my calls to always fail if each query takes 10ms and there are more than 10 shards.
The downside is that this library is quietly using asyncio without you knowing. But how exactly is that a downside? I already expect the library to be using select/epoll/kqueue without me knowing. And in a language like Rust, the executor should basically compile out - it should be a "zero-cost abstraction" compared to writing the event-handling code by hand.
As long as the async code doesn't depend on calling itself concurrently it should be straightforward to simply execute it on the current thread, right? (Basically using an executor that has 1 thread, the current thread.)
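Something like this, in asyncio terms (run_here and whoami are made-up names, just to show that the coroutine really executes on the calling thread):

```python
import asyncio
import threading

async def whoami():
    # Report which thread the coroutine actually ran on.
    return threading.current_thread().name

def run_here(coro):
    # A "one-thread executor": an event loop driven by the caller's
    # own thread, so the async code runs right here, nowhere else.
    loop = asyncio.new_event_loop()
    try:
        return loop.run_until_complete(coro)
    finally:
        loop.close()

result = run_here(whoami())
```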
And it'd be great to check and optimize away all this at compile time.
Complexity kills code. Being able to reason about what your code is doing is FAR more valuable to me than async. Having tokio act as my runtime and switch tasks as it sees fit will be debug hell.
The problem I see is the current async story is opt-out. It's use async or go find something else. Async should be opt in. As in, the code works regardless of an async runtime, async is added magic if you want it, but it will run like normal single threaded code if not.
Yes, in fact, you cannot even use async Rust without writing your own executor or bringing one in via a library. It is very, very much opt in. That was a hard constraint on the design.
However, I think what the parent is getting at is the feeling of the total package, not the technical details. If every library you want to use is async, you can't really "opt out" exactly, even if technically the feature is opt in.
An executor is required, in name or in spirit. Every async system has software that does this. Most language runtimes that do simply give you no choice in the matter.
Rust is a language that does things differently from other languages because there is a better way. I challenge you to do the same with async. There is a different, better way.
IIRC our system of time is aligned with midday rather than midnight, as midday is when the sun is at its highest point in the sky, which gave us a common reference point between timezones.
As someone who bought an RX580 to play with deep learning on ROCm (it was supported at the time), I had the same experience as the GP: after posting to one or two bug threads, our issues were ignored. Those issues have recently been closed, as the RX580 is no longer supported.
As for long term success, good luck, but once bitten twice shy.
I empathize. There have been plenty of mistakes. Trust is difficult to earn, and clearly we have not acted in a manner deserving of yours.
When I first encountered ROCm, my impressions were mixed. The idea thrilled me, but the execution did not. I planned to ignore ROCm until it was clear that it would meet my needs. Obviously, my plans changed, but I haven't forgotten the perspective I had as a potential user.
There are still rough edges and I know that nobody gets third chances, so it's good that you're cautious. We are steadily improving and I believe we will do better in supporting our users going forward, but to earn that trust back, we will have to prove it through our actions.
It still won't. If you look at history, we have always been shifting towards information storage that is easier, and thus less energy intensive, to copy.
Art has, by definition, done the opposite: unique pieces that are HARD to copy (IIRC art only gains value once the artist is dead) and can thus gain value.
This feels to me like a weird version of DRM which has no real world use case.