IIRC the mylobot botnet is responsible for providing the vast majority of residential (home) IP addresses used by residential proxy providers (whose IPs are then resold to the likes of expressvpn/nordvpn). The whole business is incredibly shady and nefarious, and nordvpn/expressvpn must know whom they contract their residential VPN services from.
BHProxies is the largest residential proxy provider on the internet and almost all of their proxies are acquired through the botnet above.
Seconded. I refer to them as shady because I have no way of knowing what they do with your data. I didn't even consider that they'd have a whole botnet market going on too. This definitely needs to be more public.
Agreed - I figured they had some way of getting IP addresses that don't come from an AWS/Azure/Google/whatever datacentre block, but I assumed they bought residential blocks from ISPs or something like that.
Is there a source for expressvpn actually using BHProxies? I had no clue it was that sketchy. It is owned by a public company, so that's pretty substantial news if true.
I would be very skeptical of that claim. It's quite worrying to see multiple people accepting it as fact without any kind of evidence to support it.
I'd be shocked if any of the major VPN providers were involved with illegal residential proxies. It just doesn't make sense - can you imagine how unstable and slow those connections would be? Why would they risk legal liability when there exist legal residential proxy providers that get their IPs from people who voluntarily share their connection (honeygain etc.)? I've never heard of any of the big VPN providers offering residential connections. As I understand it, the VPN providers that promise support for netflix and similar streaming services just acquire newer IPs from time to time, but the connection still goes through a regular datacenter - definitely not through some random dude's home.
The proxy market is targeted more at developers who scrape data and criminals doing credential stuffing or other criminal activity.
I'm not saying I trust the above claim (I have no idea) but this
>can you imagine just how unstable and slow those connections would be
Yes, yes I can, and they are. I tried them some time ago, before I found out how shady they are, and encrypted connections ran at something like 2 Mbit, while Mullvad gave me many times faster bandwidth with stronger encryption. Their support was completely useless.
I don't know, I assume it would tell you to put some cortisone cream on it or to see a dermatologist. 99% of internet advice boils down to that, and I don't think chatgpt/gpt-4 would be much different in that regard. It definitely can bullshit, but for really generalized stuff like that I doubt it.
It's not like you're asking a single person the question, the way you are when you ask an accountant, but an abstraction of written human thought and interactions.
Yeah, the example given in the OpenAI GPT-4 twitter video is someone asking it to write a python script to analyze their monthly finances, and it simply imports pandas, reads in "finances.csv", runs a columnar sum over all the finances, and then displays the sum and the dataframe. I'm sure it's capable of some deeper software development, but it almost always makes radical assumptions and is rarely self-sufficient (i.e. you almost always need to look it over and change the architecture of the code it produces).
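For context, the demo script described above amounts to something like this hypothetical reconstruction (the CSV layout and column names are my assumptions, not from the video):

```python
import pandas as pd

def summarize_finances(csv_path):
    """Load a CSV of monthly expenses and return per-column totals.

    Roughly what the GPT-4 demo produced: read the file, sum every
    numeric column (one column per spending category), return the sums.
    """
    df = pd.read_csv(csv_path)
    return df.sum(numeric_only=True)

# Hypothetical usage, mirroring the demo:
# totals = summarize_finances("finances.csv")
# print(totals)
```

Note the "radical assumption" baked in: it only works if your finances happen to live in a flat CSV with one numeric column per category.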
Wouldn't you just use go/python/node for a simple CRUD API? fastapi for python is pretty performant if you run it under gunicorn (with uvicorn workers), and time to iterate is much faster than it is in rust.
It depends exactly how simple that CRUD API is. If there's any business logic, I'd rather get all the cheap correctness guarantees that Rust provides. I don't find myself making many truly dumb CRUD APIs.
Time to iterate is also only much faster in certain situations, e.g. local development; if you have to e.g. build a container image, push to a registry, and redeploy to a k8s cluster somewhere, those savings become somewhere between less significant and nonexistent.
> Time to iterate is also only much faster in certain situations, e.g. local development; if you have to e.g. build a container image, push to a registry, and redeploy to a k8s cluster somewhere, those savings become somewhere between less significant and nonexistent.
Can you expand on what you mean here? I know you're not implying Rust is faster to move through a CI/CD pipeline, so can you tell me what you do mean? I can't come up with a different reading.
I think the point being made here is that all the CI/CD/SDLC machinery slows development down anyway, so the difference in iteration speed between Python and Rust becomes less pronounced. But I'd have to disagree; I just can't connect the dots here. Moving code further down the CI/CD pipeline doesn't mean we can't keep working on the code itself or think about project improvement ideas.
Not your parent, but I have some ideas on this. I'm not sure how true they are. Maybe I'll write a longer version some day and see what people think. But the summary is this:
I suspect it has to do with how familiar you are with type systems, and the way that you use them. I find that Rust's constraints help guide me towards a solution more quickly, and I spend less time chasing down strange edge cases. Not eliminate! But reduce.
That prompt is going to receive a dark response, since most stories humans write about artificial intelligences and artificial brains are dark and post-apocalyptic. The Matrix, I Have No Mouth, and I Must Scream, HAL, and thousands of amateur what-if stories from personal blogs are probably mostly negative and dark in tone, as opposed to happy and cheerful.
I wonder: if you were to just spam it with random characters until it reached its max input token limit, would it pop off the oldest conversation tokens and keep loading new tokens in (like a buffer), or would it just reload the entire memory and start from a fresh state?
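Assuming it works like a sliding window (which is how many chat frontends are believed to handle overflow, though I can't confirm what chatgpt itself does), the "buffer" behavior would look something like this toy sketch, where a fixed-size deque silently evicts the oldest tokens; the token limit and whitespace tokenization are stand-ins for a real tokenizer:

```python
from collections import deque

MAX_TOKENS = 8  # stand-in for a real model's context limit

# A deque with maxlen behaves like the buffer described above:
# appending past the limit silently pops the oldest tokens.
context = deque(maxlen=MAX_TOKENS)

def feed(text: str) -> list[str]:
    """Append whitespace-split 'tokens', evicting the oldest past the limit."""
    for token in text.split():
        context.append(token)
    return list(context)

feed("hello there how are you")
feed("spam spam spam spam spam spam")
# At this point the earliest conversation tokens have been
# pushed out of the window by the spam.
```

Under the alternative behavior the commenter describes, the whole deque would instead be cleared and refilled from scratch once the limit is hit.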
It is; it's libgen + commoncrawl + wikidump + a bunch of other datasets. OpenAI claim that commoncrawl is roughly 60% of the total training corpus, and they also claim to use the other datasets listed. They probably have some sort of proprietary Q&A/search-query corpus via Microsoft too.
Yeah, like if you removed chatgpt from the equation, what would change? A couple hundred thousand moderators would no longer have an above-average hourly wage and would instead need to find another international or domestic company to hire them, all of which exploit the lack of unions and the low wages.
That's fair, but I have a half a dozen servers sitting in a colo with networking/energy/cooling included in the lease agreement so energy is not a concern.
I usually just run game servers for my friends with the extra RAM/CPU threads. If my elasticsearch cluster is particularly idle some days and I have extra bandwidth, I might turn on wireguard and let the 450GB of torrents I have on disk seed. Anything that makes the btop graph look active and lively makes me feel nice.
I also read that iPhones are quickly growing in market share in China as Chinese people see them as more luxurious than their domestic brands. Which raises the question, how does iMessage, the App Store, data collection and western app policy stuff work on Chinese iPhones? There has to be some collusion/government pressure on Apple to regulate their Chinese App Store the same way Huawei is forced to regulate its domestic app store.
It's no longer just about luxury. Apple has won over the CCP with their China/Taiwan-first outsourcing practices -- not to mention $270+B invested to train young, unskilled laborers from rural China and prop up China's domestic chip business -- and now some Chinese even consider Apple as their own.
> BHProxies is the largest residential proxy provider on the internet and almost all of their proxies are acquired through the botnet above.
https://www.bitsight.com/blog/mylobot-investigating-proxy-bo...