> Many home routers try to preserve the source port in external mappings. This is a property called “equal delta mapping” – it won’t work on all routers but for our algorithm we’re sacrificing coverage for simplicity.
It is precisely this point that has flummoxed me when connecting my p2p WireGuard config[1] with a friend who uses a pfSense router: no matter what we tried, pfSense always chose a random source port.
But in the simple case this blog outlines, if both ends use the same source port, this method punches through two firewalls effortlessly.
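Not from the post itself, but a toy sketch of what "same source port on both ends" looks like in code. The port and peer address are placeholders (203.0.113.7 is a documentation address); in reality each peer would learn the other's public IP out of band:

```python
import socket

# Minimal UDP hole-punch sketch: both peers bind the SAME local port, so a
# router that preserves source ports creates a predictable external mapping.
PORT = 51820
PEER = ("203.0.113.7", PORT)  # placeholder peer address

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", PORT))  # fixed source port -> predictable NAT mapping
sock.settimeout(0.2)

received = None
for _ in range(3):
    try:
        # The first outbound packet opens our own NAT mapping; once the
        # peer's matching packet arrives, the hole is punched both ways.
        sock.sendto(b"punch", PEER)
        received, _ = sock.recvfrom(1024)
        break
    except (socket.timeout, OSError):
        continue
```

Both sides run the same loop; whichever packet arrives after both mappings exist gets through, and from then on the flow looks like ordinary bidirectional UDP.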
In my experience, Cisco ASA does source-port persistence by default (and falls back to random when it can't), FortiGates can do it (in various ways depending on the version, although the fallback method in map-ports doesn't work), and Juniper SRXs can't, unless you guarantee a 1:1 mapping.
This is easily solved in your source NAT configuration on pfSense. It's a single checkbox to not randomize ports on outbound flows. This will enable full cone NAT.
You can scope it to just your IPsec service, or whatever it is you're hosting, or you can enable full cone NAT for the whole subnet.
It is not DNAT, nor is it port forwarding. If you host a SIP proxy, an SBC, or peer-to-peer gaming, it will enable those use cases as well.
Lord, we're how many years into using LLMs, and people still don't understand that their whole shtick is to produce the most plausible output - not the most correct output?
The most plausible output might be correct, or it might be utter bullshit hallucinations that only sound correct; the only way to tell is to actually try it or cross-reference primary sources. Unless you do, the AI answer is worthless.
The reason they're getting so good at code now is that they can check their output by running and testing it; if you're just typing questions into a chatbot and copying the output verbatim into a comment, you're not adding any meaningful value.
Exactly! This is what LLMs do: they bullshit you by coming across as extremely knowledgeable, but as soon as you understand 5% of the topic you realise you've been blatantly lied to.
Even if you get 70% blatant lies and 30% helpful ideas, if you can cheaply distinguish the two due to domain expertise, is that not still an extremely useful tool?
But to the point of this thread: If you can't validate their output at all, why would you choose to share it? This was even recently added to this site's guidelines, I believe.
But then why make this comment at all, even despite the disclaimer? Anyone can prompt an LLM. What's your contribution to the conversation?
To be clear, I use LLMs to gut-check ideas all the time, but the absolute minimum required to share their output, in my view, is:
- verification: can you vouch for the generated answer based on your experience or understanding?
- curation: does this output add anything interesting to the conversation that people couldn't have trivially prompted themselves and that's missing from the other comments?
- a disclaimer, if you're at all unsure about either (thanks for doing that).
But you can't skip any of these, or you're just spreading slop.
I wrote this to publish Org docs to S3 - https://github.com/EnigmaCurry/s3-publish.el - I wanted something extremely lightweight, not even wanting to commit things to git like I normally would and waiting for CI to build something. Uploading html directly to S3 means it gets published in <1s from push.
That's neat! For org, if it had an option to generate the HTML file name from slugifying the org file name instead of the salted hash, it could be fantastic for rapid lightweight blogging.
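This isn't a feature of s3-publish.el today, just a sketch of the kind of slug scheme being suggested (the function name and exact rules are made up):

```python
import re

def org_slug(filename):
    """Hypothetical slug scheme: drop the .org suffix, lowercase,
    and collapse runs of non-alphanumerics into single hyphens."""
    stem = re.sub(r"\.org$", "", filename)
    slug = re.sub(r"[^a-z0-9]+", "-", stem.lower()).strip("-")
    return slug + ".html"
```

With rules like these, `org_slug("My Great Post.org")` yields `my-great-post.html`, which makes for stable, human-readable URLs instead of hashed ones.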
Fair enough. I haven't used an Android device since 2017... Do people have these issues on iOS too?
On Linux, I have no problem running either bare wireguard or tailscale alongside Forticlient. On Windows and macOS it's a bit more janky, specifically the DNS resolution, but I don't daily drive these platforms so I may be missing some kind of knowledge to fix this.
On a Linux box, is it possible to run Tailscale/WireGuard as an exit node along with the Forti VPN?
I.e., what I want to achieve is (my machine + Tailscale/WireGuard) --> (server with Tailscale/WireGuard + Forti VPN) --> corporate network. So WireGuard or Tailscale receives the traffic and forwards it through Forti.
Or another option (my machine fortivpn over tail/wireguard) --> (server as exit node) --> corporate network
Rather than using the official FortiClient, I am using https://github.com/adrienverge/openfortivpn. It has options to configure custom pppd settings/routes/DNS etc. if necessary, which I have not touched as I don't know enough :P
DNS resolution is not important for my use case, only traffic.
I have heard not so great things about Forti VPNs, sorry to hear you have to work with those.
In theory, as long as the Forti VPN does not overlap with the Tailscale IP address range, the simplest solution is to just run Tailscale and openfortivpn on a single node. You can then advertise the Forti VPN subnets within Tailscale; that's effectively what my image does in a nutshell, except that it also parses the WireGuard config and sets up firewall rules for convenience.
Tailscale does NAT automatically by default, so it will look like all traffic is coming from the openfortivpn client itself.
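A minimal sketch of that single-node setup (the corporate subnet and the tunnel interface name are assumptions; openfortivpn typically brings up a ppp interface):

```shell
# Let the node forward packets between Tailscale and the Forti tunnel
sudo sysctl -w net.ipv4.ip_forward=1

# Advertise the corporate subnets (example range) into the tailnet
sudo tailscale up --advertise-routes=10.20.0.0/16

# NAT traffic leaving via the Forti tunnel (interface name may differ)
# so the corporate side sees it as coming from this node
sudo iptables -t nat -A POSTROUTING -o ppp0 -j MASQUERADE
```

You'd still need to approve the advertised route in the Tailscale admin console before other tailnet devices can use it.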
When I just try to run tailscale and forticlient together naively, tailscale does not like it very much heh. Looks like I'll need to study what your image is doing in depth
I don't know about FortiClient specifically; it's a sorry piece of crap that's more often broken than not.
With openfortivpn, you can usually ignore whatever routes you receive and set up your own. I haven't tried the specific setup you describe, but I don't see why it wouldn't work. However, you would most likely need to NAT on the machine running the Fortinet client.
Sounds like I'll need to learn how to set up custom routes and their syntax. I have tried to run away from that all my professional life, but maybe now I need to.
> However, you would most likely need to NAT on the machine running the Fortinet client.
Could you please elaborate a little more here? NAT from where to where?
Yeah, on Linux I can run 10 different VPNs (or 10 WireGuard peers) no problem; this limitation of Android is super annoying to me. I think OP's solution is quite a good one for Android users.
Yeah you're exactly on point here, and this limitation exists on both iOS and Android alike. I got very frustrated with switching between VPNs and connections breaking every time I did that.
I feel like my Rust code takes 3x as long to write as my Python code, but the gpt results for rust are about 10x better, because the tooling is a backstop against hallucinations.
I really like the Rust tooling and I like exhaustive pattern matching. Python post-dev debugging time is probably 10x vs Rust. That's why I choose Rust.
There's no way to combine the NVMe drives into a larger sized unit for redundancy / failover though, so not sure what kind of future uptake this could have.
Everyone who uses NVMe-over-network-transport simply does redundancy at the client layer. The networking gear is very robust, and it is easier to optimize the "data plane" path this way (map storage queues <-> network queues) so the actual storage system does less work, which improves cost and density. That also means clients can have their own redundancy solutions that more closely match their requirements: a filesystem can implement RAID10 on top of the block devices for, e.g., virtual machine storage, while userspace applications may use them directly with Reed-Solomon(14,10) and manage the multiple underlying block devices themselves. This all effectively improves density and storage utilization even further.
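A toy illustration of that client-side redundancy idea (not any real NVMe-oF API): two bytearrays stand in for two remote block devices, and the client mirrors writes RAID1-style so either "device" can serve reads on its own.

```python
# Two in-memory stand-ins for remote NVMe namespaces (names are made up).
BLOCK = 4096
dev_a = bytearray(BLOCK * 8)
dev_b = bytearray(BLOCK * 8)

def write_block(lba, data):
    """Mirror every write to both devices (RAID1-style)."""
    assert len(data) == BLOCK
    off = lba * BLOCK
    dev_a[off:off + BLOCK] = data
    dev_b[off:off + BLOCK] = data

def read_block(lba, failed=None):
    """Read from whichever replica is still healthy."""
    off = lba * BLOCK
    src = dev_b if failed is dev_a else dev_a
    return bytes(src[off:off + BLOCK])

write_block(3, b"x" * BLOCK)
# A single "device" failure loses no data:
assert read_block(3, failed=dev_a) == b"x" * BLOCK
```

The point is that the raw remote namespaces stay dumb and fast, while each client picks the redundancy scheme (mirroring, RAID10, erasure coding) that fits its workload.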
NVMe-over-network (NVMe-oF with RDMA, TCP, or RoCEv2) is very popular for disaggregated storage/compute, and things like NVIDIA BlueField push the whole thing down into networking cards on the host so you don't even see the "over network" part. You have a diskless server, plug in some BlueField cards, and they expose a bunch of NVMe drives to the host as if they were plugged in physically. That makes it much easier to scale compute and storage separately (and also effectively increases the capacity of the host machine, since it's no longer using up bandwidth and CPU on those tasks).
Yeah. It seems like directly presenting raw disks to the network means any kind of redundancy would need to be done by whatever device/host/thing is mounting the storage.
And doing that over the network (instead of over a local PCIe bus) seems like it'll have some trade-offs. :/
EndeavourOS Sway edition (community) [1] has been a pretty great start for a preconfigured Wayland+Sway+Waybar setup with various integrations. I added some more stuff to my ~/.config in my own repo [2], but it was a good place to start, and EndeavourOS is basically just rolling Arch Linux with some extra niceties. The included EnvyControl [3] switcher between NVIDIA and integrated graphics is nice (you do have to reboot to switch, though), so I have used the regular i3 config with Xorg for playing some games (NVIDIA hardware graphics), but use Sway for my day-to-day use (integrated graphics on Wayland).
They're not much different, they just run on a different time scale.
Even if your same application continues to work forever, one of the following will happen:
- Complementary software evolves. E.g.: My old photo editor doesn't support new image compression formats
- Alternatives become more attractive. E.g.: I paid for a copy of Sublime Text but now I prefer VSCode because of its additional functionality; my old copy of Photoshop CS2 works fine, but the new one will save me time during XYZ workflow compared to the old version.
- The utility of the application is exhausted. E.g.: I already played this single-player game 10 times and it's not fun anymore; my copy of Final Cut Pro 6 can't produce the 4K HDR movies my customers demand.
I have versions of Paint Shop Pro from the 90s that can open JPEG, GIF, PNG, TIFF, etc. files.
I have used versions of After Effects of similar vintage. Premiere and AE were doing 4K back then because that's what Hollywood needed for their productions. Illustrator and Photoshop are mostly functional.
Likewise, my Nikon camera from 2011 doesn’t stop working just because it’s old. The tools in my garage are no less effective because home additive manufacturing exists.
[1] https://blog.rymcg.tech/blog/linux/wireguard_p2p/