Hacker News — default-kramer's comments

You're right, but for EDM this was pretty much already the case. The scene survives in large part thanks to DJs who wade through countless mediocre tracks looking for the few hidden gems to deploy at the right moment. I think AI means that DJs will become much more important in all genres.


> But for the time being there remain a few things that humans can do very easily which computers find difficult. Along with counting traffic lights and crosswalks, one of those things is finding the exact BPM of a song. Not an estimate like most software does, but the exact value with extreme precision across the entire song.

I thought BPM detection has been extremely precise for some time now (for electronic music anyway). Does this mean when software like Mixxx reports (for example) 125 BPM the raw output of the algorithm might have been 124.99, but some higher logic replaces it with an even 125?


I formerly worked for a travel company. It was the best codebase I've ever inherited, but even so there were select N+1s everywhere and page loads of 2+ seconds were common. I gradually migrated most of the customer-facing pages to hand-written SQL and Dapper, getting most page loads below 0.5 seconds.

The resulting codebase was about 50kloc of C# and 10kloc of SQL, plus some cshtml and javascript of course. Sounds small, but it did a lot -- it contained a small CMS, a small CRM, a booking management system that paid commissions to travel agents and payments to tour operators in their local currencies, plus all sorts of other business logic that accumulates in 15+ years of operation. But because it was a monolith, it was simple and a pleasure to maintain.

That said, SQL is an objectively terrible language. It just so happens that it's typically the least of all the available evils.


Every time I've worked on a project that used AutoMapper, I've hated it. But I'll admit that when you read why it was created, it actually makes sense: https://www.jimmybogard.com/automappers-design-philosophy/

It was meant to enforce a convention. Not to avoid the tedium of writing mapping code by hand (although that is another result).


> But, unless you have some way of enforcing that access between different components happens through some kind of well defined interfaces, the codebase may end up very tightly coupled and expensive or impractical to evolve and change

You are describing the "microservice architecture" that I currently loathe at my day job. Fans of microservices would accurately say "well that's not proper microservices; that's a distributed monolith" but my point is that choosing microservices does not enforce any kind of architectural quality at all. It just means that all of your mistakes are now eternally enshrined thanks to Hyrum's Law, rather than being private/unpublished functions that are easy to refactor using "Find All References" and unit tests.


> most productive applications work at similar scale

What do you mean by "productive" here? The overwhelming majority (probably >99%) of billed/salaried software development hours are not spent working on FAANG-scale software. Does none of that count as "productive"?


Yes, but the vast majority of applications that make money doing productive things have scale and complexity high enough that a monolith with SQL won't cut it.


I think you're underestimating the huge variety of productive apps in existence. For every system that handles >1M requests per second, there are probably at least 10 systems that won't even see 1M requests per hour. For example: Twice I've worked on apps for configuring motor control centers. I think you would consider these "productive" apps, but even if we had 100% market share there just aren't that many people in the world who need to configure a motor control center on any given day. The world is full of such apps.


People using and suggesting service-oriented architecture are concerned not just with scale in terms of RPS, but also with complexity: how many lines of code there are and how often the code changes.

Apps that are productive but also low-RPS, not that complex, and not that dynamic are few in number.

I guess this website is such an example.


It's not insane. The best codebase I ever inherited was about 50kloc of C# that ran pretty much everything in the entire company. One web server and one DB server easily handled the ~1000 requests/minute. And the code was way more maintainable than any other nontrivial app I've worked on professionally.


I work(ed) on something similar in Java, and it still works quite well. But the last few years have increasingly been about getting berated by management on why things are not modern Kubernetes/microservices-based by now.


I think the reason they get hotly debated is that people's personal experiences with them differ. Imagine that every time Alice has seen an ORM used it has been used responsibly, while every time Bob has seen an ORM used it has been used recklessly/sloppily. I'm more like Bob. Every project that I've seen use an ORM performs poorly, with select N+1s being the norm and not the exception.


Hmm, maybe, but somehow Marvin Gaye's estate still pulled it off. Yes it was a copyright case, not a patent case, but Robin Thicke and Pharrell Williams had a well-funded defense. Seems like Nintendo could easily bully an indie game out of existence if they wanted to.


I've done something like that too. I also noticed that enums are even lower-friction (or were, back in 2014) if your IDs are integers, but I never put this pattern into real code because I figured it might be too confusing: https://softwareengineering.stackexchange.com/questions/3090...


FWIW, I extensively use strong enums in C++[1] for exactly this reason and they are a cheap simple way to add strongly typed ids.

[1] enum class from C++11, classic enums have too many implicit conversions to be of any use.


> classic enums have too many implicit conversions

They're still fairly useful (and since C++11 you can specify their underlying type); you can use them as namespaced macro definitions.

Kinda hard to do "bitfield enums" with enum class


It's not really hard; you just need to define the bitwise operators. It would be nice if they could be defaulted.


