Still far from what you would expect from a truly automated production line, but it has dedicated stations that allow workers to assemble multiple satellites in parallel.
A satellite dish does not transmit any information to the satellite. Satellite TV is a pure broadcast system in the forward direction.
Moreover, comparing a parabolic receiver with a phased array is quite unfair. The amount and complexity of the electronics and processing power required is several orders of magnitude different.
I agree, this was the real reason why the project failed. Moreover, it was not clear at all that fractionation would bring any lifecycle cost savings, nor that any of the alleged extra flexibility (and maneuverability, resiliency, maintainability, and other -ilities) would result in any added value for the missions.
Fractionation is very hard: it introduces a lot of complexity in the design and interfaces, and it requires lots of coordination between multiple vendors. Project Ara by Google (a modular cellphone similar to Phoneblocks) also vouched for this idea of fractionation (and in fact was also led by Paul Ermenko) and was also cancelled.
These analyses are quite optimistic as they only consider propagation delay.
Moreover, the idea of using user-terminals/gateways as ground-relays to bounce signals up-and-down is quite impractical, since you would be greatly reducing the capacity available on those "intermediate" satellites.
Using user terminals was as much of a thought experiment as anything. The graphs in the video show you can do pretty well just using a few well placed groundstations - you can shave another millisecond or three off with higher densities, but it's not a lot. The big advantage of higher densities is really that you can get better spatial reuse - with more possible choices you can use RF uplinks in places where the spectrum would otherwise go unused, whereas you've less freedom to choose if you've fewer groundstations.
Having said that, this kind of wide-area low-latency bounced routing is never going to be used for you or me to watch Netflix. It will be reserved for high-paying customers who really really care about latency. For you and me, we'll be dumped into the terrestrial network at the nearest possible location that isn't already saturated.
The second generation of satellites should have optical inter-satellite links, and then you would only use ground relays rarely.
Thanks for your reply. I agree with you, using user-terminals is extremely challenging, especially from a link-budget perspective. You cannot pump enough data to make it worth it. For the gateways, my main concern is that the Ka-band spectrum would have to be shared between user-data and "inter-satellite" data.
Finally, I think that the use case you described for those latency-sensitive customers is going to be hard to pull off, mainly because of link-availability concerns. There are too many "passes" through the atmosphere to guarantee the availability numbers that a user of such a service requires (99.5%?). Rain on any of these links might cause an outage or a re-route (causing too much jitter). Plus, it would be extremely difficult to have signals traveling from one continent to another.
On routing: the best path in this sort of network changes something like every ten seconds or so. To make it work, you need to consider all the links that can deliver good enough SNR, so you're already factoring in rain, for example. You've got many many possible paths, and you're considering the best route constantly, but you're only factoring links with acceptable quality into the computation. If you do this (and can solve the queue avoidance issues too), it turns out jitter isn't such a big problem. As one path gets longer, it eventually gets longer than the next path, and you switch to the new path as close to that moment as you can. If you do this, you minimize step changes in latency. It only works if you've got enough satellites, but it looks like SpaceX will have.
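The "switch at the crossover" idea can be sketched in a few lines: at every tick, take whichever acceptable-quality path is currently shortest, so a route change lands exactly when the old path's latency crosses the new one's and the step change is near zero. The path names and latency timeline below are invented for illustration.

```python
# Pick the lowest-latency path among those with acceptable link quality.
# paths_ms maps path name -> current one-way latency in ms; paths with
# bad SNR are assumed to have been filtered out before this step.
def pick_path(paths_ms):
    return min(paths_ms, key=paths_ms.get)

# As satellites move, the current path lengthens while an alternative shortens.
timeline = [
    {"via_sat_7": 24.0, "via_sat_12": 26.5},
    {"via_sat_7": 25.1, "via_sat_12": 25.3},
    {"via_sat_7": 26.2, "via_sat_12": 24.1},  # crossover: switch happens here
]
route = [pick_path(t) for t in timeline]
print(route)  # ['via_sat_7', 'via_sat_7', 'via_sat_12']
```

Because the switch happens right at the crossover, the latency seen by the flow changes by only a fraction of a millisecond at the handover, rather than jumping.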
If you can't eliminate all queuing delays, you can't factor all delays into your decision of when to change routes, and so you will get some jitter and hence reordering. If so, you need a reorder queue in the final receiving groundstation, so as to avoid confusing TCP. This removes jitter at the expense of adding some latency. How much latency depends on the queuing delays you're trying to smooth out, so it all really comes down to avoiding queues in the satellites.
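A reorder queue of the kind described is simple in principle: the receiving groundstation buffers out-of-order packets and releases them in sequence, trading a little latency for zero reordering as seen by TCP. This is a toy sketch (a real one would also need a timeout to flush past lost packets).

```python
import heapq

class ReorderQueue:
    """Buffer out-of-order packets; release them strictly in sequence order."""

    def __init__(self):
        self.next_seq = 0
        self.heap = []  # min-heap of (seq, payload)

    def push(self, seq, payload):
        """Accept one packet; return whatever is now deliverable in order."""
        heapq.heappush(self.heap, (seq, payload))
        out = []
        while self.heap and self.heap[0][0] == self.next_seq:
            out.append(heapq.heappop(self.heap)[1])
            self.next_seq += 1
        return out

q = ReorderQueue()
delivered = []
for seq, pkt in [(0, "a"), (2, "c"), (1, "b"), (3, "d")]:  # "c" took a shorter path
    delivered += q.push(seq, pkt)
print(delivered)  # ['a', 'b', 'c', 'd']
```

Note how "c" is held back until "b" arrives: that holding time is exactly the added latency the comment describes, and it is bounded by the queuing jitter you failed to avoid upstream.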
Completely agree. While the video is awesome (nice job!), the logistics of pulling something like this off are extremely difficult. It's different from the "you could have said that about landing a rocket" argument, in that there are thousands of different routes taken for different customers per second, and each route has its own issues, like portillo said. The minute one stops working, debugging that will be a huge challenge.
The user-terminal-as-a-gateway idea is also not practical, due both to the link budget (EIRP and G/T) and to the much lower availability they'll be dealing with.
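The link-budget argument can be made concrete with the standard C/N0 equation, C/N0 = EIRP − FSPL + G/T + 228.6 (the last term being Boltzmann's constant in dB). Every number below (frequency, slant range, EIRP, G/T values) is an illustrative assumption, not a published Starlink parameter; the point is only the gap between a gateway-class receive chain and a small user terminal.

```python
import math

def fspl_db(freq_hz, dist_m):
    """Free-space path loss in dB: 20*log10(4*pi*d*f/c)."""
    return 20 * math.log10(4 * math.pi * dist_m * freq_hz / 299_792_458)

def cn0_dbhz(eirp_dbw, gt_dbk, freq_hz, dist_m):
    """Downlink carrier-to-noise density in dBHz."""
    return eirp_dbw - fspl_db(freq_hz, dist_m) + gt_dbk + 228.6

# Assumed Ka-band downlink at 20 GHz over a 1000 km slant range.
gateway  = cn0_dbhz(eirp_dbw=40.0, gt_dbk=30.0, freq_hz=20e9, dist_m=1_000_000)
terminal = cn0_dbhz(eirp_dbw=40.0, gt_dbk=10.0, freq_hz=20e9, dist_m=1_000_000)
print(f"gateway C/N0 ~{gateway:.1f} dBHz, user terminal ~{terminal:.1f} dBHz")
```

With these made-up but order-of-magnitude-plausible figures the terminal sits ~20 dB below the gateway, i.e. roughly a hundredth of the achievable capacity per Hz of spectrum, which is why relaying backbone traffic through user terminals doesn't close.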
I'm running Dijkstra across this mesh at 30fps in real-time on my laptop while also doing the 3d animation. My laptop fan does spin a bit, but it's not crazily optimized code, and for the video I was also recording to H.264 simultaneously. Doing routing for all customers simultaneously is certainly feasible if their groundstations do the computation, based on routing state supplied in real-time by the constellation. Other solutions are probably possible too, but this seems simplest to me, and scales linearly with customers.
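The per-snapshot shortest-path step is small enough to hand-roll. The sketch below runs plain Dijkstra over a latency-weighted mesh; the five-node graph is a made-up toy, not constellation geometry, and edge weights are one-way delays in ms.

```python
import heapq

def dijkstra(graph, src, dst):
    """Lowest-latency route: returns (total_delay_ms, path) or (inf, [])."""
    dist = {src: 0.0}
    prev = {}
    heap = [(0.0, src)]
    done = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in done:
            continue
        done.add(u)
        if u == dst:                      # reconstruct path back to src
            path = [u]
            while path[-1] != src:
                path.append(prev[path[-1]])
            return d, path[::-1]
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    return float("inf"), []

# Toy mesh: groundstation A, three satellites, groundstation B.
mesh = {
    "A":    [("sat1", 2.1), ("sat2", 2.4)],
    "sat1": [("sat3", 5.0), ("B", 9.9)],
    "sat2": [("sat3", 4.2)],
    "sat3": [("B", 2.3)],
}
print(dijkstra(mesh, "A", "B"))  # picks A -> sat2 -> sat3 -> B
```

Re-running this whole computation every frame as the edge weights drift is cheap at this scale, which is consistent with it fitting on a laptop alongside the 3D rendering.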
While that's true to a first approximation, weather, gateway outages (very common as you increase the number of them), total satellite capacity (making sure you aren't overloading a single link), and just non-working paths are commonplace. Just making sure that you aren't overloading a particular satellite is an extremely difficult problem. It would be really neat if you could work that into your simulation, so that it bypasses certain satellites if there's just no more bandwidth available.
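One simple way to fold capacity into the route choice, as suggested above, is to drop links whose residual capacity can't carry the flow before computing the shortest path, so saturated satellites are bypassed automatically. The link data here is invented for illustration.

```python
def usable_links(links, demand_gbps):
    """links: {(u, v): (latency_ms, free_gbps)}.
    Returns a latency-only graph containing just the links that can
    still carry `demand_gbps` of additional traffic."""
    return {edge: lat for edge, (lat, free) in links.items() if free >= demand_gbps}

links = {
    ("A", "sat1"): (2.1, 0.2),   # lowest latency, but nearly saturated
    ("A", "sat2"): (2.4, 5.0),
    ("sat1", "B"): (2.0, 5.0),
    ("sat2", "B"): (2.3, 5.0),
}
print(usable_links(links, demand_gbps=1.0))
# sat1's uplink is excluded, forcing the route through sat2
```

This is only admission-time filtering; it doesn't solve the harder global problem of many flows choosing routes simultaneously, which is presumably where the real difficulty lies.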
That is not the round trip time, it's the one-way delay. Also, processing time can be significant in satellite networks. Finally, given the current architecture of SpaceX without crosslinks, the total RTT will be ~30 ms + processing time + whatever is the RTT of the backbone network they deliver the traffic to.
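The propagation-only floor for the crosslink-free (bent-pipe) architecture is easy to reproduce from geometry. The 550 km altitude and the elevation angles below are assumptions for illustration, not official figures; note how much the delay grows away from zenith, before adding processing time and the terrestrial backbone RTT.

```python
import math

R_EARTH = 6371.0   # mean Earth radius, km
ALT = 550.0        # assumed orbital altitude, km
C = 299_792.458    # speed of light, km/s

def slant_range_km(elev_deg, alt=ALT):
    """Ground-terminal-to-satellite distance at a given elevation angle."""
    e = math.radians(elev_deg)
    r = R_EARTH + alt
    return math.sqrt(r**2 - (R_EARTH * math.cos(e))**2) - R_EARTH * math.sin(e)

def bent_pipe_rtt_ms(elev_deg):
    """RTT over user->sat->gateway->sat->user: 4 slant legs, propagation only."""
    return 4 * slant_range_km(elev_deg) / C * 1000

print(f"RTT ~{bent_pipe_rtt_ms(90):.1f} ms with both terminals at zenith")
print(f"RTT ~{bent_pipe_rtt_ms(25):.1f} ms near the edge of coverage")
```

With low elevation angles on both legs this already approaches half of the ~30 ms figure from propagation alone, so the quoted total is plausible once on-board processing and backbone transit are added.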
I think the processing time might be heavily driven by the waveform and how they do the networking. As complexity increases and they move towards a software defined networking schema, the latency goes up. If they're not using ready-made/proven ASICs, the latency goes up. I don't believe any of this is public from them.
I think so too. In addition, they seem to have regenerative payloads on board (otherwise I don't see how they get ~20 Gbps/sat with a single Ka-band antenna), so they need to do full decoding + encoding on the satellite. Not sure how fast that can be done, but it will probably add a few ms of latency.
This is the first I've heard someone speculate about that. I didn't think they were doing 20G with a single antenna, but rather that they were adding together all the spectrum the satellite is capable of processing (even if it doesn't get access to all of it). On-board processing would also require a significant amount of power, and it's riskier.
I think that 5M is a high number (at least with the current number of satellites and their architecture), especially if they do not apply data caps.
I would say that 1M customers in the US is a more reasonable figure. If they hit this number, I expect the performance to degrade significantly.
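The capacity intuition behind these estimates can be sketched with back-of-envelope arithmetic. Every number here is an assumption: how many satellites usefully serve the US at once, per-satellite throughput, and a typical ISP oversubscription ratio.

```python
SATS_OVER_US = 100   # assumed simultaneously usable over the continental US
GBPS_PER_SAT = 20    # assumed per-satellite throughput
OVERSUB = 20         # assumed oversubscription ratio (typical for ISPs)

def peak_rate_mbps(customers):
    """Per-customer peak rate when total capacity is shared with oversubscription."""
    total_gbps = SATS_OVER_US * GBPS_PER_SAT
    return total_gbps * 1000 * OVERSUB / customers

print(f"1M customers: ~{peak_rate_mbps(1_000_000):.0f} Mbps peak")
print(f"5M customers: ~{peak_rate_mbps(5_000_000):.0f} Mbps peak")
```

Under these assumptions 1M customers leaves a broadband-competitive peak rate (~40 Mbps), while 5M drops it to single digits, which is one way to read "the performance would degrade significantly", and caps only shift the oversubscription ratio.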