But there are people who didn't play WoW back in the day who still love Classic, so it can't just be nostalgia. Vanilla WoW really did have a different design ethos than the later expansions, and some people prefer that experience.
> Vanilla WoW really did have a different design ethos than the later expansions, and some people prefer that experience.
Right, and that's my point. When you take away the nostalgia for the content, you reveal what players are actually asking for: a reversion to what is effectively a previous game, because modern WoW lost everything that made it a good game, to those players, in the first place.
So yeah, there was definitely a group of players that literally did want Classic WoW, original content and all, but I also feel like Blizzard would have seen success continuing that Classic formula with new content. Blizzard sucked the soul and charm out of WoW. For all intents and purposes, modern WoW is a completely different game.
Yes. I've watched some videos on WoW by Kevin Jordan, who was on the original team. He said the original game was built on three pillars:
1. Advancement over time.
2. Player interaction. Hard content should be hard in order to push players into working together to overcome challenges.
3. The world is a character in the game. Even when you eventually got mounts later in the game, they made it so mobs can knock you off & daze you. The world is big and full of wonder. Especially at earlier levels, just getting from point A to point B can be a journey in itself. Wanna play with your friends who started in a different zone? Fine, but just getting there will be an epic journey.
They abandoned those principles later on. E.g. by adding "sidekick" characters you can summon when you're playing on your own, to overcome hard content. The point of that content was to push you into making friends with other players. Flying mounts made the world small and safe: just point in the vague direction you wanna go and go AFK for a few minutes. They also added more and more teleports, so you don't have to schlep overland yourself. And the LFG system.
It was a really different game back then. I didn't get that much out of Classic WoW. But I'd enjoy playing a new MMO built around those same principles.
Just understand that you are one of the player groups that Blizzard targets, and they found that a significant share, if not a plurality, of their players were solo players. This is why they've actively changed the product to try to keep that player base subbed between expansions. By their account it seems to work.
I do think Blizzard is big enough that they can maintain multiple experiences. One challenge is that a vocal group of players really feels like they need to do everything in the game, that it's compulsory (and some game design choices did force that at times). This leads to them not enjoying the content that isn't designed for them. Blizzard has a difficult line to walk.
Classic was the right move, and I do agree with your idea of someone making a similar game with the original principles. It probably can't be Blizzard anymore; they have a 0-1M user problem. Anything they make has to cater to everyone or they get flak. So a smaller outfit needs to do it. Challenging in this funding environment.
> The LFG system basically killed most social interaction in WoW.
I started playing Anniversary vanilla a year ago. I played through it all and now I'm playing Anniversary TBC. I visited many dungeons. There's no LFG system, yet I didn't find any social interaction in dungeons. I'm pretty sure the whole social-interaction thing is overblown. 99% of dungeons go like this: the leader silently invites you, or you write "inv holy pala GS 1400" and he silently invites you. You silently run through the dungeon and silently leave. That's about it. There are no interactions. Zero. Some people write "hi" and "ty", some don't bother.
That's exactly what's happening on Anniversary servers. They crammed like 20 servers into one megaserver, so there are like 100 000 players on the same server and you're very unlikely to meet the same person twice.
Another example is Old School RuneScape, which reverted to an earlier save after the "Evolution of Combat" update lost them a ton of players, and has since diverged into an entirely separate game running on the older systems. While nostalgia is definitely a powerful tool, I agree with the previous commenter that the original WoW was a very different game from the modern version, and that seems to be one of the core aspects of what people desired.
True, I had the same feeling. The article does go off of 256K elements in a Bloom filter of 2M. After 1M elements, using 2 bits actually increases the false positive rate, but at that point the false positive rate is higher than 50% already.
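For reference, the standard approximation for a Bloom filter's false positive rate with n inserted elements, m bits, and k hash functions is p ≈ (1 - e^(-kn/m))^k. A quick sketch of where k = 2 stops helping, assuming "2M" means 2M bits and reading "2 bits" as k = 2 probes per element:

```python
import math

def bloom_fpr(n, m, k):
    """Approximate false positive rate: (1 - e^(-kn/m))^k."""
    return (1 - math.exp(-k * n / m)) ** k

m = 2_000_000  # assumption: "a bloom filter of 2M" = 2M bits
for n in (256_000, 1_000_000, 1_500_000):
    print(f"n={n:>9,}  k=1: {bloom_fpr(n, m, 1):.3f}  k=2: {bloom_fpr(n, m, 2):.3f}")
```

Under those assumptions the crossover where k = 2 starts hurting lands just under n = 1M, matching the observation above.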
But not everything can be "fair game" when providing a service for free. Surely it wouldn't have been OK if they suddenly included a bitcoin miner or extracted credentials. They offered a free service; people trusted it and depended on it. Now, in my view, they have some responsibility to their users.
Giving a notice in advance and releasing a final image that patched the CVE would've been reasonably responsible.
But when you plug in the numbers, that the farmer raised $126 million and hosting unlimited Docker Hub pulls costs $11/month, it doesn't quite feel the same.
> if you're not running the test-suite on pre-push your tests are slow
Of course tests are slow. If they're fast they're skipping something. You probably want to be skipping something most of the time, because most of the time your change shouldn't cause any side-effects in the system, but you definitely want a thorough test suite to run occasionally. Maybe before merge, maybe before release, maybe on some other schedule.
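One common way to get that split is pytest's marker mechanism; the sketch below is essentially the example from pytest's own documentation (the `slow` marker and `--runslow` flag are conventions, not built-ins):

```python
# conftest.py: deselect tests marked "slow" unless --runslow is passed,
# so the default run stays fast and the thorough suite runs on demand.
import pytest

def pytest_addoption(parser):
    parser.addoption("--runslow", action="store_true", default=False,
                     help="also run tests marked as slow")

def pytest_configure(config):
    config.addinivalue_line("markers", "slow: marks a test as slow/thorough")

def pytest_collection_modifyitems(config, items):
    if config.getoption("--runslow"):
        return
    skip_slow = pytest.mark.skip(reason="needs --runslow option to run")
    for item in items:
        if "slow" in item.keywords:
            item.add_marker(skip_slow)
```

Plain `pytest` then stays quick enough for pre-push, while `pytest --runslow` covers the before-merge, before-release, or scheduled runs.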
Each individual change might be fine in isolation but cause excessive memory pressure when accumulated with other changes, for example. Unit tests won't catch everything; integration and functional (hardware-in-the-loop) tests are needed. I even sometimes have to run tests in a thermal chamber repeatedly to cover the whole -40 °C to +105 °C temperature range, since the firmware I work on runs on hardware that allows that range, and performance varies with temperature.
What is a good time to run slower tests? My full test suite takes around 4 minutes to run, and it’s trending upwards. I am a solo developer, so I run it pre-push.
And what about tests that only need to run intermittently, such as broken link checkers?
Is there a reason for that? I figure that catching bugs before pushing is faster than getting an email from GitHub, and my hardware already has all the containers running. Spinning up machines in the cloud for this feels wasteful.
if you want to reuse the local setup for CI, or pushing broken code is somehow undesirable, then run all the tests pre-push
i tend to run only the fast set of tests, or a subset of the environments matrix, on pre-push; CI can take care of the rest while i move on to something else
when working with others, the CI status is what matters
Certainly not the formatting you have done manually, because the auto-formatter will likely throw it away. You likely do that because you care about the consistency of the formatting, not about the formatting already present in the file.
Personally, I don't like auto-formatters. It's like having someone else clean up your room. I have put every byte of whitespace where it belongs, and I certainly don't want someone else to mess that up.
The main issue is that a reader might mistake Redis for a 2X faster Postgres. Memory is 1000X faster than disk (SSD), and with network overhead Redis can still be 100X as fast as Postgres for caching workloads.
Otherwise, the article does well to show that we can get a lot of baseline performance either way. Sometimes a cache is premature optimisation.
That's the reader's fault then. I see the blog post as the counter to the insane resume-building over-engineered architecture you see at a lot of non-tech companies. Oh, you need a cache for your 25-user internal web application? Let's front it with a Redis cluster with Elasticsearch using an LLM to publish cache invalidation with Kafka.
There's also a sort of anti-everything attitude that gets boring and lazy. Redis is about the simplest thing possible to deploy. This wasn't about "a Redis cluster with Elasticsearch using an LLM"; it was just Redis.
I sometimes read this stuff like people explaining how they replaced their spoon and fork with a spork and measured only a 50% decrease in food eating performance. And have you heard of the people with a $20,000 Parisian cutlery set to eat McDonald's? I just can't understand insane fork enjoyers with their over-engineered dining experience.
After quite a few years of being an employee (and the whole CV-driven crap), I'm now in entrepreneur mode, stealth for now (that's relatively easy, because I'm bootstrapping and funding this with my own money, no investors), and I made an executive decision to (forcibly) think many times before adopting any "cloudy" thing.
Hardware… is cheap, and bare-metal performance outweighs anything cloudy by orders of magnitude. If I have to invest money into something, I'd rather invest it in bare-metal tooling than pay for a managed service that's just a wrapper around tooling, e.g. RDS, EC2, Fargate, or their equivalents across other CSPs.
I can run a Postgres cluster on bare metal that will obliterate anything cloudy and cost less than a third as much, if not less. Is it easy? No. But that's where the investment comes in. A few good infra resources can do magic, and yes, I hope to be large enough that these labor costs will be way less than a cloud bill.
This was my take as well, but I'm a MySQL / Redis shop. I really have no idea what tables MySQL has in RAM at any given moment, but with Redis I know what's in RAM.
> The main issue is that a reader might mistake Redis for a 2X faster Postgres. Memory is 1000X faster than disk (SSD), and with network overhead Redis can still be 100X as fast as Postgres for caching workloads.
Your comments suggest that you are definitely missing some key insights into the topic.
If you, like the whole world, consume Redis through a network connection, it should be obvious to you that network is in fact the bottleneck.
Furthermore, using an RDBMS like Postgres may indeed imply storing data in slower memory. However, you are ignoring the obvious fact that a service such as Postgres also has its own memory cache, and some query results can be and indeed are fetched from RAM. Thus it's not like each and every single query forces a disk read.
And at the end of the day, what exactly is the performance tradeoff? And does it pay off to spend more on an in-memory cache like Redis to buy you the performance delta?
That's why real world benchmarks like this one are important. They help people think through the problem and reassess their irrational beliefs. You may nitpick about setup and configuration and test patterns and choice of libraries. What you cannot refute are the real world numbers. You may argue they could be better if this and that, but the real world numbers are still there.
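The shape of such a benchmark is simple enough to sketch. This assumes a local Redis, a Postgres database with a pre-populated `kv` table, and the `redis` / `psycopg2` client libraries; it's a latency probe, not a rigorous benchmark (no warm-up, single-threaded, default configs):

```python
import time
import redis     # pip install redis
import psycopg2  # pip install psycopg2-binary

r = redis.Redis(host="localhost", port=6379)
pg = psycopg2.connect("dbname=bench user=bench")
cur = pg.cursor()

def bench(label, fn, n=10_000):
    # Time n sequential round-trips and report ops/s.
    start = time.perf_counter()
    for _ in range(n):
        fn()
    elapsed = time.perf_counter() - start
    print(f"{label}: {n / elapsed:,.0f} ops/s")

bench("redis GET ", lambda: r.get("user:42"))
bench("pg  SELECT", lambda: (cur.execute("SELECT v FROM kv WHERE k = %s", ("user:42",)),
                             cur.fetchone()))
```

Run it over TCP, over a unix socket, and across a real network hop, and the spread between the two stores changes dramatically, which is the whole point of the thread.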
> If you, like the whole world, consume Redis through a network connection, it should be obvious to you that network is in fact the bottleneck.
Not to be annoying - but... what?
I specifically _do not_ use Redis over a network. It's wildly fast. High volume data ingest use case - lots and lots of parallel queue workers. The database is over the network, Redis is local (socket). Yes, this means that each server running these workers has its own cache - that's fine, I'm using the cache for absolutely insane speed and I'm not caching huge objects of data. I don't persist it to disk, I don't care (well, it's not a big deal) if I lose the data - it'll rehydrate in such a case.
Try it some time, it's fun.
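For anyone who wants to try it, a minimal redis-py sketch; the socket path is whatever your redis.conf's `unixsocket` directive points at (the path below is an assumption):

```python
import redis  # pip install redis

# Unix domain socket instead of TCP: no network stack, no handshake,
# just a local file descriptor. Requires something like
# `unixsocket /var/run/redis/redis.sock` in redis.conf.
r = redis.Redis(unix_socket_path="/var/run/redis/redis.sock")

r.set("user:42", "cached-payload", ex=60)  # ex= gives the key a 60s TTL
print(r.get("user:42"))
```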
> And at the end of the day, what exactly is the performance tradeoff? And does it pay off to spend more on an in-memory cache like Redis to buy you the performance delta?
Yes, yes it is.
> That's why real world benchmarks like this one are important.
That's not what this is though. Just about nobody who has a clue is using default configurations for things like PG or Redis.
> They help people think through the problem and reassess their irrational beliefs.
Ok but... um... you just stated that "the whole world" consumes Redis through a network connection. (Which, IMO, is the wrong tool for the job - sure, it will work, but that's not where/how Redis shines.)
> What you cannot refute are the real world numbers.
That is an interesting use case; I hadn't thought about a setup like this with a local Redis cache before. Are the typical advantages of using a DB over a filesystem the reason to use Redis instead of just reading from memory-mapped files?
> Are the typical advantages of using a DB over a filesystem the reason to use Redis instead of just reading from memory-mapped files?
Eh - while surely not everyone has the luxury of doing so, I'm running Laravel, and using Redis is just _really_ simple and easy. To do something via memory-mapped files I'd have to implement quite a bit of stuff I don't want/need to (locking, serialization, TTL/expiration, etc).
Redis just works. Disable persistence, choose the eviction policy that fits the use, configure a unix socket connection, and you're _flying_.
My use case is generally data ingest of some sort, where the processing workers (in my largest projects, 50-80 concurrent processes chewing through tasks from a queue, also backed by Redis) are likely to end up running the same queries against the database (MySQL) to get 'parent' records (i.e. the user associated with an object by username, a post by slug, etc). There's no way to know if there will be multiples: if we're processing 100k objects there might be 1 from UserA or there might be 5000 by UserA, where each one processed needs the object/record of UserA. In this project in particular there are ~40 million of these 'user' records and hundreds of millions of related objects, so we can't store/cache _all_ users locally, but we sure would benefit from not querying for the same record 5000 times in a 10 second period.
For the most part, when caching these records over the network, the performance benefits were negligible (depending on the table) compared to just querying MySQL for them. They are just `select where id/slug =` queries. But when you lose that little bit of network latency and you can make _dozens_ of these calls to the cache in the time it would take to make a single networked call... it adds up real quick.
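The pattern being described is a plain look-aside cache with a short TTL. A sketch in Python rather than PHP, with hypothetical names (`fetch_user`, the table layout, the socket path are all mine):

```python
import json
import pymysql  # pip install pymysql
import redis    # pip install redis

r = redis.Redis(unix_socket_path="/var/run/redis/redis.sock")
db = pymysql.connect(host="db.internal", user="ingest", password="...", database="app")

def fetch_user(username, ttl=10):
    """Look-aside cache: try local Redis first, fall back to MySQL."""
    key = f"user:{username}"
    cached = r.get(key)
    if cached is not None:
        return json.loads(cached)
    with db.cursor(pymysql.cursors.DictCursor) as cur:
        cur.execute("SELECT id, username FROM users WHERE username = %s", (username,))
        row = cur.fetchone()
    if row is not None:
        # Short TTL: the record stays hot while its burst of objects is
        # being processed, then expires instead of going stale.
        r.set(key, json.dumps(row), ex=ttl)
    return row
```

A TTL on the order of seconds matches the burst pattern described: UserA's record stays hot while their 5000 objects go through, then disappears.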
PHP has direct-memory "shared memory" support, but again, it would require handling/implementing a bunch of stuff I just don't want to be responsible for - especially when it's so easy and performant to lean on Redis over a unix socket. If I needed to go faster than this, I'd find another language and likely do something direct-to-memory style.
This is modern backend development. The server scales horizontally by default; nodes can be removed and added without disrupting service. With Redis as the cache, we can do e.g. rate limiting fast without tying a connection to a node, and still scale and deploy without impacting availability.
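A sketch of the rate-limiting case, using a fixed-window counter in Redis (the key scheme and limits are made up):

```python
import time
import redis  # pip install redis

r = redis.Redis(host="redis.internal", port=6379)

def allow(client_id: str, limit: int = 100, window: int = 60) -> bool:
    """Fixed-window rate limit shared across all nodes: any node can serve
    the request because the counter lives in Redis, not in node memory."""
    key = f"rl:{client_id}:{int(time.time()) // window}"
    count = r.incr(key)
    if count == 1:
        r.expire(key, window)  # first hit in the window sets its TTL
    return count <= limit
```

Because the state is external, a node can be drained and redeployed mid-window without clients losing their quota or the limiter resetting.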