
(1) Aircraft rarely fly in anything close to formation in combat - large gaps are the norm (1-10 miles), and one would think that increased distance is something that could be exploited by an unmanned platform (able to take more risk, etc.)

(2) Remains to be seen.

(3) Individual Patriot missiles are around that price point, with S300/S400 anywhere from 500k-2M depending on capability. One would think that cost-per-kill would be favorable considering the increased capability granted.


At 10-mile intervals you're maintaining a high-bandwidth, low-latency mesh network in a contested electronic environment. If the command aircraft is 10 miles away and the enemy is jamming the link, the drone is going to be making split-second, potentially lethal decisions without the pilot.

You're right about them both costing about the same, so the real leverage only comes if these drones can stay out of the engagement envelope while sending cheaper submunitions (likely using something like these Ragnaroks (~$150k) https://www.kratosdefense.com/newsroom/kratos-unveils-revolu...) to do the actual baiting.


> high-bandwidth, low-latency mesh network in a contested electronic environment.

It's hard to win at jamming when you're further away and the opponents are frequency agile.

1. They can use directionality more effectively to their advantage

2. Inverse square law works against you (unlike e.g. jamming GPS where it works for you).

3. They can be frequency agile, strongly rejecting everything outside of the 20MHz slice they're using "right now"-- and have choices of hundreds of those slices.

Fighters already have radars that they expect to "win" with despite facing inverse fourth power losses, longer ranges, and countermeasures. They can send communications-ish signals anywhere over a couple-GHz span up near X-band. The peak EIRP they put out isn't measured in kilowatts but in tens of megawatts.
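To make the geometry argument concrete, here's a rough free-space sketch (illustrative distances only; real link budgets also include antenna gains, bandwidths, and processing gain):

```python
import math

def db(ratio):
    """Convert a linear power ratio to decibels."""
    return 10 * math.log10(ratio)

# One-way link (communications, or a jammer): power falls as 1/r^2.
def one_way_loss_db(r_km):
    return db(r_km ** 2)

# Two-way monostatic radar return: power falls as 1/r^4.
def two_way_loss_db(r_km):
    return db(r_km ** 4)

# A jammer 100 km out versus a friendly transmitter 10 km away:
# the friendly signal enjoys a 20 dB geometry advantage.
comms_advantage = one_way_loss_db(100) - one_way_loss_db(10)
print(f"comms geometry advantage: {comms_advantage:.0f} dB")

# A radar looking at a target at 100 km instead of 10 km pays
# double that penalty, thanks to the inverse fourth power.
radar_penalty = two_way_loss_db(100) - two_way_loss_db(10)
print(f"radar range penalty: {radar_penalty:.0f} dB")
```

The point being that the same 10x distance ratio that costs a stand-off jammer 20 dB costs a radar 40 dB, which is why radars compensate with the enormous peak EIRP figures mentioned above.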


Fair point, “jammed” was too binary.

My concern is less total link loss than what happens under degraded or intermittent connectivity. If the wingman still depends on the manned aircraft for tasking or weapons authority, then the interesting question is how it behaves when the link is noisy rather than gone.

That feels like the real hinge in the concept.


Stealth is less effective against long-range search radar and more effective closer in against targeting radars.

When you're high up you have a pretty long line of sight, so it's not unreasonable for these to fly way, way ahead - 100 miles or more.
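As a rough sanity check on the line-of-sight claim, the standard 4/3-Earth radio-horizon approximation (d ≈ 4.12·√h, with d in km and h in metres) gives plenty of margin for two high-flying aircraft:

```python
import math

def radio_horizon_km(alt_m):
    """Standard 4/3-Earth radio-horizon approximation: d ~= 4.12 * sqrt(h)."""
    return 4.12 * math.sqrt(alt_m)

# One aircraft at roughly 40,000 ft (~12,200 m):
d = radio_horizon_km(12200)
print(f"horizon per aircraft: {d:.0f} km")

# Two aircraft both at altitude can see each other out to the sum
# of their individual horizons - far more than 100 miles (~160 km).
print(f"mutual line of sight: {2 * d:.0f} km")
```

Even the more conservative geometric-horizon constant (3.57 instead of 4.12) still puts mutual line of sight well past 100 miles.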

You basically get 'double standoff'.

I can see this being almost as effective as manned stealth, and if they're cost effective they could very plausibly defeat F-22s in some scenarios.

Once you add in the fact that risk is completely different (no human), then payload, manoeuvrability, g-force recovery safety, all that goes out the window and you have something very crazy.

3 Typhoons with 2-3 'suicidal AI wingmen' each, way out ahead, are going to dust them up pretty good at minimum. It's really hard to say for sure, obviously - it depends on all the other context as well.


That may be true, but it seems to strengthen the case for moving the human out of the forward cockpit rather than keeping them there.

If the unmanned aircraft are the ones flying far ahead, taking the risk, and extending the standoff envelope, then why is the human still sitting in the forward fighter rather than supervising from a safer node further back?

At that point it seems like the architecture is optimizing for tactical latency and current doctrine, not necessarily for the cleanest end-state.


The human is 100 miles back, that's the point.


At 10 miles, the data link cannot be jammed, and it won't be observed either. The military is very good at this 'mesh networking' thing. Link 16 is 40 years old at this point; I expect they have something much better.


The Link16 replacement is called MADL. It is used in the F-35 and has capabilities not available using Link16.

https://en.wikipedia.org/wiki/Multifunction_Advanced_Data_Li...



Some of the stories could plausibly have SC involvement (the story of the same name is most likely) - but I don't think it's explicitly mentioned anywhere.


I feel like SAT solvers and the like are getting a lot more attention on HN recently (for example https://news.ycombinator.com/item?id=44259476) - justifiably so! I think that they're a great tool that's often criminally underused in industry for a whole subset of problems.


> a whole subset of problems

Like what?

In my experience, 95% of the time I'm considering applying SAT/SMT to a problem, I should actually think about it for another day (perhaps while throwing a SAT solver at it, if that seems fun), and I will invariably find that the problem I'm trying to solve is actually something else... In the remaining 5% of cases, there's usually a solution you can download (which maybe uses SMT under the hood).

Sure enough, SMT is really cool and extremely powerful where it's applicable.


You are not wrong. But I can wear both hats (no pun intended, I think).

On one hand, people are not going to be using SAT/SMT to solve problems on a daily basis.

On the other hand, these algorithms are a bit overlooked in CS books (not Knuth's, of course). Compare, for instance, with the FFT. In the lifetime of an average programmer, they might actually find it convenient to use SAT solvers on a few occasions - maybe just as often as an FFT.

Combinatorics is a hard subject, and SAT sheds light on many situations where better-tailored algorithms might exist but might be difficult to come up with.
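As a toy illustration of what "handing a problem to SAT" looks like, here's a tiny CNF instance with a brute-force satisfiability check (illustrative only - a real solver like MiniSat or a pysat backend would replace the exhaustive loop with clause learning):

```python
from itertools import product

# CNF as a list of clauses; a positive int is a variable,
# a negative int is its negation. This encodes:
# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
cnf = [[1, 2], [-1, 3], [-2, -3]]

def satisfiable(cnf, n_vars):
    """Brute-force SAT check: try all 2^n truth assignments."""
    for bits in product([False, True], repeat=n_vars):
        assign = {i + 1: b for i, b in enumerate(bits)}
        # Every clause must contain at least one satisfied literal.
        if all(any(assign[abs(lit)] == (lit > 0) for lit in clause)
               for clause in cnf):
            return assign
    return None

print(satisfiable(cnf, 3))
```

The appeal is exactly what the comment describes: once a combinatorial problem is encoded as clauses like these, the solver does the clever search for you, even when a bespoke algorithm would be hard to design.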

Maybe I'm biased.


It's occasionally helped me with the NP-hard problem of "finding a regular language consistent with a set of samples that also satisfies some structural constraints". But more often, the minimal DFA (when it exists) has a few dozen too many states, and the solver gets trapped in the exponential pit of despair, which hasn't really endeared me to the approach. I've yet to actually run into a class of problems where things like SAT or ILP are wildly successful while all other approaches fail.



> the requirements for advanced math classes for every computer science major seem unnecessary

Computer science is much more than programming - and I think that most of the value derived is from being able to think about problems, which largely requires the abstract type of thinking encouraged by more advanced math. Code is just a tool.


Then use any of the plethora of alternative text formatting tools (TeX, for one).


Only if the US has a change to sane leadership to go along with it


This is one of those things that MIT’s missing semester course aims to help with (https://missing.csail.mit.edu/), and although computer science is different from software engineering, the reality is that most CS grads go into software engineering, and thus should try and learn these essential skills.


Computer science in the graphics space requires a ton of gnarly coding. If we consider post-doc computer science in particular, the idea that you don't need to be at least a semi-competent programmer is... a surprising idea to have.

Which is to say, the idea that computer scientists never need to code is a stretch. How does a person work on cutting-edge graphics algorithms without building something with them? Without coding in a pretty serious way?

One example: at the end of my intro programming course, the final assignment was to build something, and I wrote a chess program (with a GUI). While some classes are quite theoretical, I strongly disagree that a person can get through an entire CS program without their coding skills being challenged.


A mildly interesting note: this paper was submitted as part of SIGBOVIK (http://www.sigbovik.org/), a collection of similarly funny papers.

