Reasons 1-3 could very well be done with ClickHouse row policies (RLS) and good data-warehouse design. In fact, that’s more secure than a compiler adding a WHERE clause to a query run by an almighty user.
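For reference, a minimal sketch of what such a ClickHouse row policy could look like (the database, table, column, and role names here are all hypothetical):

```sql
-- Readers granted tenant_42_reader only ever see tenant 42's rows.
-- ClickHouse applies the USING condition to every SELECT on the table,
-- so the restriction holds even for ad-hoc queries.
CREATE ROW POLICY tenant_isolation ON analytics.events
    FOR SELECT
    USING tenant_id = 42
    TO tenant_42_reader;
```

In practice you'd create one such policy (or one parameterized condition) per tenant role, rather than trusting an application-level query rewriter running as a privileged user.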
Reason 4 is probably an improvement, but could probably be done with CH functions.
The problem with custom DSLs like this is that they trade away a massive ecosystem for very little benefit.
As long as you don't deviate too much from ANSI, I think the 'light SQL DSL' approach has a lot of pros when you control the UX. (So UIs, in particular, are fantastic for this approach - which is what they seem to be targeting with queries and dashboards.) It's more of a product experience; raw tables are a terrible product surface to manage.
Agreed with the ecosystem cons getting much heavier as you move outside the product surface area.
Personally I think that's worse. SQL - which is almost ubiquitous - already suffers from a fragmentation problem because of its complex and dated standardization setup. When I learn a new DBMS, the two questions I ask at the very start are: 1. what common but non-standard features are supported? 2. what new anchor features (often cool, but also often intended to lock me to the vendor) am I going to pick up?
First I need to learn a new (even if easy and familiar) language; second, I need to be aware of what's proprietary and locks me into the vendor's platform. I'd suspect they see the second as a benefit they get IF they can convince people to accept the first.
I actually 100% agree with you for a new DBMS and share your frustration with vendor-specific features and lock-in. At that level, it's often actively counterproductive for insurgent DBs - ecosystem tooling needs more work to interface with your shiny new DB, etc. - and that's why we always see anyone who starts with a non-standard SQL converge on offering ANSI SQL eventually.
An application that exposes a curated dataset through a SQL-like interface - the dashboard/analytics query case described here - is where I think this approach has value. You actually don't want to expose raw tables, INFORMATION_SCHEMA, etc. - you're offering a dedicated query language on top of a higher-level data product, and you might as well take the best of SQL and leave the bits you don't need. (You're not offering a database as a service; you're offering data as a service.)
You’re right, RLS can go a long way here. With complex RBAC rules it can get tricky, though.
The main advantages of a DSL are that you can expose a nicer interface to users (table names, columns, virtual columns, automatic joins, query optimization).
We very intentionally kept the syntax as close to regular ClickHouse as possible but added some functions.
Is this also not solvable with views? Also, ClickHouse heavily discourages joins, so I wonder how often this winds up being beneficial? For us, we only ever join against tenant metadata (i.e. resolving an ID to a name).
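For what it's worth, the curated-interface part (renamed columns, the one ID-to-name join) can indeed be sketched with an ordinary view - all table and column names below are made up:

```sql
-- A curated view that hides raw columns and resolves tenant IDs to names,
-- the one join pattern mentioned above.
CREATE VIEW analytics.events_readable AS
SELECT
    e.event_time,
    e.event_type,
    t.tenant_name
FROM analytics.events AS e
LEFT JOIN analytics.tenants AS t
    ON e.tenant_id = t.tenant_id;
```

In ClickHouse specifically, this kind of small-dimension lookup is often done with a dictionary (`dictGet`) instead of a join, which sidesteps the join-performance concern.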
> query optimization
This sounds potentially interesting - ClickHouse's query optimizer is not great IME, but it's definitely getting better.
That’s just not true at all. Even if 100% of the population keeps 2x guns at home, at 9.5M people that would mean 19M guns. The US has more than 20x that many owned by civilians.
That’s exactly how it has been working for me in code. I have a bunch of different components and patterns that the LLMs mix and match. Has been working wonderfully over the past few months.
No, it isn’t that simple. Who bears more of the burden depends on the elasticity of the curve on each side. This confirms that demand is more inelastic, which causes it to bear the burden, but it could have been the other way around.
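To illustrate the elasticity point with the standard textbook incidence formula (the elasticity numbers below are made up, not from the article):

```python
def consumer_share(supply_elasticity: float, demand_elasticity: float) -> float:
    """Fraction of a per-unit tariff/tax borne by buyers.

    First-order incidence result: consumer share = e_s / (e_s + |e_d|).
    The less elastic (more inelastic) side bears more of the burden.
    """
    e_s = abs(supply_elasticity)
    e_d = abs(demand_elasticity)
    return e_s / (e_s + e_d)

# Inelastic demand (|e_d| = 0.5) vs. elastic supply (e_s = 2.0):
# consumers bear 2.0 / 2.5 = 80% of the tariff.
print(consumer_share(2.0, -0.5))

# Flip the elasticities and consumers bear only 20%;
# the burden lands on producers instead.
print(consumer_share(0.5, -2.0))
```

Same formula, opposite outcomes - which is the point: the split is an empirical question about the two curves, not something settled in advance.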
The producer has minimal margins and cannot lower their price. The consumer, at least in the immediate future, has more money to spend. The curves were never going to look any different in this case. Only if a poorer country placed tariffs on a wealthier country with higher margins would the outcome differ from the blindingly obvious one here.
The only fruit of this is real economic pain for the American consumer. But that was likely the goal, so mission accomplished I guess.
An Asian factory making imperial-unit rulers and scales might have had to bear the burden, because it has only the USA to sell to. But if a factory has products it can sell to the whole world, and it manages to, why lower prices for the USA instead of selling more to other markets?
Countries geographically closer to the USA might reason differently, because close countries usually trade more and have more to lose. But even then, if a Mexican or Canadian company can find other markets, or discovers that it can keep selling at the same price, it will not bear any of the burden of the tariffs.
Russia-style sanctions were applied to Italy in the 1930s because of its colonial wars in Africa. Although the sanctions lasted only six months, Italy discovered that it ended up trading less with its usual partners and more with others. Tariffs are somewhat similar to sanctions, in that they add friction to trade.
What does elasticity matter if you no longer make a profit?
Isn't the only thing that could matter - apart from strategic considerations of financing a loss for a time - whether the margins are big enough? Who wants to pay for people to take their products below the full cost of making them, apart from some investor-financed hype startups?
> Isn't the only thing that could matter ... if the margins are big enough
No, the standard price elasticity of demand curve does not directly include profits. It primarily models the relationship between price and quantity demanded.
Supply curve??? The OP wrote "This is confirming demand is more inelastic"
Not a k8s dev, but I feel like this is the answer. K8s isn't usually just scheduling pods round-robin or at random. There's a lot of state to evaluate, and scheduling pods becomes an NP-hard problem similar to bin packing. I doubt the implementation tries to be optimal here, but it feels like a computationally heavy problem.
In what way is it NP-hard? From what I can gather, it just eliminates nodes where the pod wouldn't be allowed to run, calculates a score for each, and then randomly selects one of the nodes with the highest score - so it's trivially parallelizable.
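A rough sketch of that filter-then-score shape (this is a toy model, not actual kube-scheduler code; the node representation, predicate, and scorer below are all invented, and kube-scheduler picks among the highest-scoring feasible nodes):

```python
import random

def schedule(pod, nodes, filters, scorers):
    """Toy filter/score scheduler.

    filters: predicates (pod, node) -> bool; a node must pass all of them.
    scorers: functions (pod, node) -> number; per-node scores are summed.
    Ties at the top score are broken randomly.
    """
    feasible = [n for n in nodes if all(f(pod, n) for f in filters)]
    if not feasible:
        raise RuntimeError("no feasible node")
    scored = [(sum(s(pod, n) for s in scorers), n) for n in feasible]
    best = max(score for score, _ in scored)
    return random.choice([n for score, n in scored if score == best])

# Invented example: nodes as dicts, one capacity filter and one
# "prefer more free CPU" scorer.
nodes = [{"name": "a", "free_cpu": 2}, {"name": "b", "free_cpu": 8}]
fits = lambda p, n: n["free_cpu"] >= p["cpu"]
free_cpu = lambda p, n: n["free_cpu"]
print(schedule({"cpu": 1}, nodes, [fits], [free_cpu])["name"])  # b
```

Note that both phases are per-node and independent, which is why the per-pod decision parallelizes easily; the NP-hard bin-packing framing only appears if you try to optimize the placement of all pods jointly.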
And yet they seem to have lost all that knowledge from Win8 onwards. WinForms, WPF, UWP, WinUI, MAUI... all of these with their own metaphors and design languages, and they all feel half-baked and full of bugs.