Hacker News | mbell's comments

Client side validation, regardless of the technology used, is a performance and UX optimization.

The backend must always validate and it's this validation that matters for any sort of 'security' or data integrity issue.
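
A minimal sketch of that split, assuming an Express backend and a plain form handler (the endpoint, field names, and the `showError` helper are all illustrative):

    // Client side: fast feedback only, never trusted.
    document.querySelector("#signup").addEventListener("submit", (e) => {
      const email = e.target.elements.email.value;
      if (!email.includes("@")) {
        e.preventDefault();                      // skip the round trip
        showError("Please enter a valid email"); // purely a UX nicety
      }
    });

    // Server side: the validation that actually matters for data integrity.
    const express = require("express");
    const app = express();
    app.use(express.urlencoded({ extended: false }));

    app.post("/signup", (req, res) => {
      const email = String(req.body.email || "");
      if (!/^[^@\s]+@[^@\s]+$/.test(email)) {
        return res.status(422).send("Invalid email"); // rejected no matter what the client did
      }
      // ...persist the user...
      res.status(201).end();
    });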


> In React 17, React automatically modifies the console methods like console.log() to silence the logs in the second call to lifecycle functions. However, it may cause undesired behavior in certain cases where a workaround can be used.

I feel like when you've gotten to the point that something like this has been proposed and accepted as a good solution to a problem your framework is facing, it may be a good time to stop and reconsider the approach of the framework.


This gets at something deeper: React has a policy of “in the dev environment anything goes; we will mess with your code and its runtime performance to help detect bugs or future bugs; you want us to do this, even if you think you don’t.”

But this assumes good QA and staging environments exist and are used correctly. In reality, many users of React test on local and immediately deploy into production. The more the environments differ, the less friendly React is to these users. Timing issues and race conditions may surface in production. And thank goodness this isn’t Python where a log statement can consume an iterable if you allow it to do so!

And for those saying “Strict Mode is opt-in” - your coworker may opt your whole app in without you knowing it. Hopefully you see the PR or the Slack thread. So much in React now enables this “spooky action at a distance.”

The React team has put a lot of thought into balancing the performance needs of large codebases with the stability needs of smaller teams with less robust SDLC. I can’t think of a better workaround. But it’s still going to cause chaos in the wild.


I agree with the sentiment of not being comfortable with dev and prod being different.

That said, with React these dev mode differences are intentionally about making obscure production issues (such as timing ones) rear their ugly heads in development. The goal is that if your code works without issues in dev, it’ll work even better in production. It’s a thin line for sure, but I broadly support the way they’ve done it so far.
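
For concreteness, a minimal sketch of what that double invocation looks like in React 18 dev builds (component name is made up; assumes the createRoot API):

    import React, { StrictMode, useEffect, useState } from "react";
    import { createRoot } from "react-dom/client";

    function Ticker() {
      const [count, setCount] = useState(0);
      useEffect(() => {
        const id = setInterval(() => setCount((c) => c + 1), 1000);
        // Without this cleanup, Strict Mode's dev-only unmount/remount cycle
        // leaks a second interval and the bug is visible immediately in dev,
        // instead of only under rare remount timing in production.
        return () => clearInterval(id);
      }, []);
      // The render body is also double-invoked in dev to flush out impurity.
      return <span>{count}</span>;
    }

    createRoot(document.getElementById("root")).render(
      <StrictMode>
        <Ticker />
      </StrictMode>
    );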


can be solved this way:

- npm start when developing

- deploy on dev using the artefacts from npm run build

builds take a bit longer this way, but it guarantees QA will test using a non-dev build


> In reality, many users of React test on local and immediately deploy into production.

> deploy on dev using the artefacts from npm run build

These two statements do not fit together.

I agree that "You should have dev/test/staging/... environments" is the correct answer, but clearly that's not the reality for everyone.


> clearly that's not the reality for everyone

might not be the reality, but multiple envs is the only correct answer.


Broadly speaking I think React is in a weird spot where they have painted themselves into a corner with no obvious way of fixing it by essentially forking the DOM and doing so many things independently of the wider web platform. That approach made a lot of sense when it was first released but is just a liability at this point.

If you’re deep in React land and it’s all you know I think 2022 might be a good time to expand your horizons a bit because the landscape around you has changed a lot in the past five years.


> Broadly speaking I think React is in a weird spot where they have painted themselves into a corner with no obvious way of fixing it by essentially forking the DOM and doing so many things independently of the wider web platform. That approach made a lot of sense when it was first released but is just a liability at this point.

The problems that strict mode is addressing with this somewhat unusual approach have absolutely nothing to do with the VDOM. Rather it is that code using React is subject to restrictions (being pure functions and following the hook rules when using hooks) that are impossible to express or enforce at compile time in javascript.


Sorry, just to be clear, I wasn’t tying this to the strict mode situation; I was attaching myself to the concept that this is just one of several points where the framework is showing signs that it’s time to start thinking about alternatives, particularly because I don’t think some of them are fixable at this point - they are just too central to the entire project, and those decisions no longer make sense.


Outside of Vue, and maybe Angular, are there any other stable alternatives with a decent ecosystem? I'd love to hop off the React train, but I haven't found anything that compares to the experience of just using create-react-app with Typescript support.


Try solid.js [1]

I have been using it for a month now and love it. If you are coming from React, the API is familiar enough that you can get productive in a day or two. Reliance on observables is a big plus for me (no virtual DOM diffing), and the DOM reconciliation is very similar to Lit. Check out the author's blog posts [2] for more details.

If you are into Jamstack, Astro [3] has good support for solidjs already and offers an easy way for selective hydration of dynamic parts of the UI.

The component ecosystem is a bit lacking compared to react/vue, but it pairs well with pure css libraries like bulma/daisyui or webcomponent libraries like ionic/shoelace.

And you can also wrap your solid components as web components and use them in a larger app without converting it all to use solidjs.

[1] https://www.solidjs.com/

[2] https://ryansolid.medium.com/

[3] https://astro.build/
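
For a taste of the API, here is a minimal Solid counter (sketch; assumes a `<div id="app">` mount point). There is no VDOM diff - the signal updates the DOM node directly:

    import { createSignal } from "solid-js";
    import { render } from "solid-js/web";

    function Counter() {
      // a signal instead of useState; reading it is a function call
      const [count, setCount] = createSignal(0);
      return (
        <button onClick={() => setCount(count() + 1)}>
          Clicked {count()} times
        </button>
      );
    }

    render(() => <Counter />, document.getElementById("app"));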


RiotJS and SvelteJS get mentioned as alternatives in my circles. I like Riot - but have only used it on smaller projects.


If you haven't, check out ember-cli + ember.js's latest Octane release. Full TypeScript support, thriving community, lots of companies using it, lots of active development.


Any thoughts about Lit (https://lit.dev/)?


For whatever it is worth this is the one I am betting on.


It doesn't have a bustling ecosystem, but Surplus has been around for a long time, consistently does well in benchmarks, and is quite lightweight and unopinionated. I've used it for a lot of projects, small and large, quite successfully.


Have you checked out Mithril? https://mithril.js.org/


I remember Ember being solid for a while, but I fell off of the front-end stuff


When React first came out, it was a breath of fresh air. There were some weird kinks, but instead of getting better they doubled down on the magic, and it got worse over time.

I would really like something like React, but completely explicit - no hidden state, no hooks, no monkey patching. Everything class based, you have one render method, one method where you can launch your ajax, and you give your data dependencies explicitly instead of React trying to infer something and calling your functions at random.
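
Something like the old class API, with nothing implicit - a rough sketch of the shape I mean (endpoint and names made up):

    import React from "react";

    class UserList extends React.Component {
      state = { users: [] };

      componentDidMount() {
        // the one explicit place where the ajax happens
        fetch("/api/users")
          .then((res) => res.json())
          .then((users) => this.setState({ users }));
      }

      render() {
        // the one render method; all state lives on this.state, nothing hidden
        return (
          <ul>
            {this.state.users.map((u) => (
              <li key={u.id}>{u.name}</li>
            ))}
          </ul>
        );
      }
    }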


> Everything class based, you have one render method, one method where you can launch your ajax, and you give your data dependencies explicitly instead of React trying to infer something and calling your functions at random.

Class based components in React got pretty close. But for whatever reason, everyone jumped on the hooks bandwagon, against which I have some personal objections: https://blog.kronis.dev/everything%20is%20broken/modern-reac... (harder to debug, weird render loop issues, the dev tools are not quite there yet)

That said, using the Composition API in Vue feels easier and somehow more stable, especially with the simple Pinia stores for data vs something like Vuex or Redux.


Reginald Braithwaite had some really great responses to the move away from OOP as well: [0]http://raganwald.com/2016/07/16/why-are-mixins-considered-ha... [1]http://raganwald.com/2016/07/20/prefer-composition-to-inheri...


It would be enough for functional components to have extensible inputs and outputs:

    // hypothetical shape: hook state is passed in explicitly rather than
    // living in hidden per-renderer globals
    function Button({ props, ref, key, hooks }) {
      useEffect(hooks, () => { /* ... */ })
      return { render: "text", ...stuff }
    }


Yeah come on, the global hidden variables are completely unnecessary.


You may just want HTMX, right?


It's not a "solution". It's more akin to a lint rule imho. Javascript provides no language level guarantees. There is no static type checker or compiler to enforce these things. So we rely on linters, tests, and assertions. Yay dynamic languages.


Thank you for the feedback. I agree this behavior was confusing. (We changed it to slightly dimming the second log in response to the community discussion. This change is released in React 18.) The new behavior seems like the tradeoff we dislike the least but new ideas are always welcome.

Overall, we’ve found that calling some methods twice is a crude but effective way to surface a class of bugs early. I absolutely agree that it’s not the best possible way to do that, and we’re always thinking about ways to surface that earlier — for example, at compile time. But for now, the current solution is the best we have found in practice.


Thanks for clarifying Dan. I think it might be appropriate for React to console.warn that strict mode is active (in dev mode) with a short description of this side effect, or perhaps the “dimmed” log message could have an explanation after it. Not all users have explicitly enabled strict mode (e.g. in my case I just started a new Next.js project) so this behaviour can be quite surprising and hard to track down why it’s happening.


The problem with logging a message is that it creates permanent console noise for people who enable it, thereby incentivising people not to. It seems like a small thing but people are very sensitive to extra logs. And if you allow silencing the log, we’re back to where we were.


I guess I was imagining a single warning message at the top, “React is running in strict mode. This may result in you seeing console log messages twice. Read more here. Click here to disable this message for this site. Click here to disable this message globally”. Not too much noise really, especially if you can disable it per site or globally.

I do see where you are coming from, I’m just thinking of my experience where I spent 30 mins+ thinking I’d found a bug or fundamentally misunderstood React all these years or whatever, and it turned out it was just the strict mode - but because I was using Next.js I never explicitly opted in, so had no way of knowing this might start happening (unless I read every release note). I’m guessing a lot of other developers using Next might be similarly confused!


I agree, at first reading this sounds just plain horrible. I haven't tried doing any research, can anyone point out why it's not?


"Any idiot can build a bridge that stands, but it takes an engineer to build a bridge that barely stands."

- Someone


Your quote is better, but for people that have a hard time with that analogy, I like using the example of building a 4-story concrete/brick building - anyone can make a building stand strong by filling it with concrete / bricks. It takes precise engineering to know how thin you can make the walls/supports to make that building worth building.


If you just mindlessly fill everything with bricks and concrete, you likely need to make the lower walls thicker than the upper ones to support the weight...

The whole engineering process can be done without ever looking at the budget as long as the framework is given ("use these materials", "you have this much space").


And who came up with that framework?


With different constraints...

"The perfect race car crosses the finish line first and then completely disintegrates"

- Ferdinand Porsche

(I took some liberty with the translation - the idea is that everything breaks at the same time.)


"The perfect racing car crosses the finish line first and subsequently falls into its component parts." ?

* https://quotefancy.com/quote/1791488/Ferdinand-Porsche-The-p...


"An engineer can do for a dime, what any idiot can do for a dollar" - Someone else

[edit] Achshually "An engineer can do for a dollar what any fool can do for two" - Arthur C. Wellington


Brilliant!


Truncating the DB between every test is indeed horrifically slow. However, it's much faster to wrap the test in a transaction and roll it back at the end. Transaction-based cleaning also allows parallel testing. That mostly leaves the argument of not writing tests that rely on the state of the database being clean. I have mixed feelings on this one.

Just last week I opened a PR to fix some tests that should not have been passing but were due to an issue along these lines. The tests were making assertions about id columns from different tables and despite the code being incorrect the tests were passing because the sequence generators were clean and thus in sync. The order in which the test records were created happened to line up in the right way that an id in one table matched the id in another table.

So, I get the pain. But I'm not yet convinced it's worth a change.

Another option that I think isn't a bad approach is the default testing setup that Rails uses. Every test runs in a transaction but the test database is also initially seeded with a bunch of realistic data (fixtures in Rails lingo). This makes it impossible to write a test that assumes a clean database while also starting every test with a known state.
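
For anyone who hasn't used transaction-based cleaning, it's roughly this shape (a sketch using node-postgres; the helper and the `test` function are made up, and in practice this lives in your test framework's setup/teardown hooks):

    const { Client } = require("pg");

    async function withRollback(testFn) {
      const client = new Client(); // connection settings via PG* env vars
      await client.connect();
      try {
        await client.query("BEGIN");
        await testFn(client);            // the test does all its inserts/updates here
      } finally {
        await client.query("ROLLBACK");  // cheap, and the database is untouched afterwards
        await client.end();
      }
    }

    // usage inside an async test
    test("creates a user", () =>
      withRollback(async (db) => {
        await db.query("INSERT INTO users (email) VALUES ($1)", ["a@example.com"]);
        const { rows } = await db.query("SELECT count(*) AS n FROM users");
        // assert rows[0].n === "1", etc.
      }));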


You must have this backwards.

Truncating a table is extremely fast. Rolling back a transaction is very slow. If you're not seeing this then there's something wrong with your setup.


(Not grandparent commenter) I think you're usually right but I doubt it makes a difference at the scale of 2-5 objects created in a test case. The big game changers IME are in-memory dbs (SQLite) or parallel execution of tests.

This idea of "transaction rollback in test teardown because performance" has a life of its own. The recommended base class for django unit tests (informally recommended, via code comments, not actual docs) uses transaction rollbacks instead of table truncation [0].

On top of this, I think, db truncation gets mixed up with table truncation sometimes too. For example, from OP:

> The time taken to clean the database is usually proportional to the number of tables

... only if you're truncating the whole db and re-initializing the schema, no?

And people sometimes actually do clear the whole db between tests! One unfortunate reason being functionally necessary data migrations that are mixed up with schema-producing migrations, meaning truncating tables doesn't take you back to "zero".

[0]: https://docs.djangoproject.com/en/2.2/_modules/django/test/t...


> ... only if you're truncating the whole db and re-initializing the schema, no?

Nope. In PostgreSQL the cost of truncating tables is proportional to the number of tables while doing a rollback is constant time (and a low constant at that, less than a commit for example).

In other databases like MySQL I believe truncating data is still proportional to the number of tables while rollback is proportional to the amount of data inserted or updated. So which is cheaper depends on your application.


This is definitely not true in PostgreSQL. In PostgreSQL rolling back a transaction just requires writing a couple of bytes of data while truncating requires taking a lock on every table you want to truncate and then for every table probably write more data than the rollback required and then you need to do the commit which is also more expensive than a rollback.


Not with PG. A couple of weeks ago I was working on a project that used truncation cleaning; the test suite took 3m40s. I switched it to cleaning with transactions and the test suite ran in 5.8s.

Truncation cleaning is extremely slow, not only because the cleaning is slower but because you actually have to commit everything your test code does.


I used to effectively do this in postgres with rsync on a known fixture snapshot of the data files. It would usually take under two seconds to reset the state and restart the servers, which was easily fast enough to do effective TDD.

I had a few other ideas to speed it up, also.


There's also creating a template database that exists at a known good state, and using that database template to CREATE DATABASE from.

https://www.postgresql.org/docs/current/manage-ag-templatedb...
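
A rough sketch of how that looks from a test harness (database names are made up; CREATE DATABASE can't run inside a transaction, so it goes through a plain admin connection):

    const { Client } = require("pg");

    // Assumes `app_test_template` was created once with the schema + seed data.
    async function freshDatabase(name) {
      const admin = new Client({ database: "postgres" });
      await admin.connect();
      await admin.query(`DROP DATABASE IF EXISTS ${name}`);           // name must be trusted, not user input
      await admin.query(`CREATE DATABASE ${name} TEMPLATE app_test_template`);
      await admin.end();
      return new Client({ database: name });                          // hand a connection to the test worker
    }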


Unless the test is checking performance, another option might be to start up multiple instances of the database, and run many tests concurrently each on identically prepopulated separate databases.


> Truncating the DB between every test is indeed horrifically slow

Using a database at all in unit tests is horrifically slow - one of the (many) reasons you shouldn’t.


They’re for integration tests, not unit tests. Although purists frequently treat the distinction as something meaningful, I only use it as a way to distinguish conceptually how many complex layers are being stacked, since both usually run under “unit test frameworks” for reporting and assertion purposes. I view mocking as usually an anti-pattern. Careful DI usually gets you far enough and is easier to work with. You want the code under test to resemble what’s happening in production as much as possible. The more “extra” you have, the more time you’re wasting maintaining the test infrastructure itself, which is generally negative value (your users don’t care that the feature was late because you were refactoring the codebase so each function is easier to test in isolation).

Empty databases should generally start quickly unless there’s some distributed consensus that’s happening (and even then, it’s all on a local machine…). You also don’t even need to tear it down all the way - just drop all tables.


Ultimately what matters is the entire application, database queries and all, works. I think calling out to the DB in tests is important for ensuring the entire app works.

And tests hitting the database can be fast: https://www.brandur.org/nanoglyphs/029-path-of-madness


This is true. I think however this is not relevant to unit tests. If you choose not to do unit tests, because they're not valuable in your software compared to automating end to end tests, that's fair enough, but on the topic of unit tests, talking to a db isn't really a thing.


Transactions / savepoints and parallelism make a huge difference. I have an app using Ecto and PostgreSQL, and running its ~550 tests takes under 5 seconds. Almost all of them hit the DB many times. The DB is empty and each test starts from a blank slate, inserting any fixture it needs.

An important trick when doing this is to respect unique constraints in fixtures. For instance if you have a users table with an email column as primary key, make the user fixture/factory generate a unique email each time ("user-1@example.com", "user-2@example.com", ...) Then you don't get slowdowns or deadlocks when many tests run in parallel.
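
The factory only needs a counter for that; a generic (non-Ecto) sketch:

    // every fixture gets a unique value for unique-constrained columns,
    // so parallel tests never collide on the same row
    let userCounter = 0;

    function buildUser(overrides = {}) {
      userCounter += 1;
      return {
        email: `user-${userCounter}@example.com`,
        name: `User ${userCounter}`,
        ...overrides,
      };
    }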


One supposes horrifically slow might be a bit subjective.

I notice in a VM on my laptop establishing the initial connection to postgres seems to take 2-3ms, and running a trivial query takes 300-1000us.

I routinely involve the database in unit tests, it is certainly slower but my primary concern is the correct behavior of production code which uses real databases.


If testing using the db is slowing you down that means the test has discovered slow code, and worked, not that you should get rid of the test.


It depends on what is under test. If you're testing a model file that is highly coupled to the database, and whose entire purpose is more or less to function as an interface to the DB, tests need to include the DB almost by necessity. The alternative is to mock so much out that you're essentially testing your mocked code more than the unit under test.


What is the purpose of automated testing? Is it to ensure the code works correctly or is it to "run fast"?


To be able to say to your bosses that you have 100% code coverage.

No, I agree. Hitting the database is slower but not that much slower (at least if you use PostgreSQL and do rollback after every test). And since the goal is correctness I think that this performance hit is small enough to be worth taking.


> to ensure the code works correctly

It's to ensure the code works correctly and indicate where the problem is whenever it doesn't work correctly. Querying a live database during unit tests fails on both counts. It doesn't tell you whether or not the code works correctly - it tells you either that the code didn't work correctly or that the database wasn't available at the time the test ran.


Well both things are problems which is nice to know about so you can fix them. Certainly better than not knowing about either problem.


I would say that pulse-density modulation is more of an abstract concept and sigma-delta is a particular implementation. In other words, sigma-delta is one technique used to produce pulse-density modulation.
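
A toy first-order loop makes the relationship concrete (minimal sketch; input normalized to [0, 1]): the sigma-delta feedback is the mechanism, and the bitstream it emits is the pulse-density-modulated signal.

    // First-order sigma-delta: integrate ("sigma") the difference ("delta")
    // between the input and the 1-bit output. The resulting bitstream is
    // pulse-density modulated: the average of the bits tracks the input level.
    function sigmaDelta(samples) {
      let acc = 0;
      return samples.map((x) => {
        acc += x;                      // integrate the input
        const bit = acc >= 1 ? 1 : 0;  // 1-bit quantizer
        acc -= bit;                    // feed the output back
        return bit;
      });
    }

    // a constant input of 0.25 yields one 1 in every four bits
    console.log(sigmaDelta(new Array(16).fill(0.25)).join(""));
    // -> "0001000100010001"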


Notion seems like an interesting data storage problem. The vast majority of the data is in the `block` entity, which are organized into a sort of nested set, written/updated individually (user edits one at a time) but read in ranged chunks (a doc).

Offhand this seems like almost a worst case for PG. Since updates to blocks could contain large data (causing them to be moved often) and there is one big table, it seems likely that the blocks for a single Notion document will end up being non-contiguous on disk and thus require a lot of IO / memory thrashing to read them back out. PG doesn't have a way to tell it how to organize data on disk, so there is no good way around this (CLUSTER doesn't count; it's unusable in most use cases).

Armchair engineering of course - but my first thought would be to find another storage system for blocks that better fits the use case and leave the rest in PG. This does introduce other problems, but it just feels like storing data like this in PG is a bad fit. Maybe storing an entire doc's worth of block entities in a jsonb column would avoid a lot of this?
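
The jsonb variant would look something like this (sketch; table/column names are made up and `db` is an open node-postgres client) - a doc render becomes one contiguous row read, at the cost of heavier writes per block edit:

    // One row per document; the ordered block list lives in a single jsonb
    // value, so rendering a doc is one row fetch instead of collecting
    // thousands of individually updated block rows scattered across the heap.
    await db.query(`
      CREATE TABLE documents (
        id     uuid PRIMARY KEY,
        blocks jsonb NOT NULL DEFAULT '[]'
      )`);

    // Editing one block becomes a targeted jsonb_set (or a read-modify-write),
    // which trades cheap per-block updates for cheap whole-doc reads.
    await db.query(
      `UPDATE documents
          SET blocks = jsonb_set(blocks, $2::text[], $3::jsonb)
        WHERE id = $1`,
      [docId, ["0", "text"], JSON.stringify("Hello world")]
    );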


I would try to use a simple k:v system that is much easier to scale. Even S3 would be a good candidate. Maybe I am missing the point.


Could you explain that k:v system a little bit? Junior eng. here.


KV means key value. Essentially the suggestion is to organize your data as a big key value structure(hashmap-like). This term usually means data is not normalized/separated as in regular SQL.

for instance, in SQL:

User table, that has columns: id, email

Article table, that has columns: id, text, user_id

KV/noSQL equivalent:

Article document, that has properties: id, text, user_email


> PG doesn't have a way to tell it how to organize data on disk so there is no good way around this (CLUSTER doesn't count, it's unusable in most use cases).

Aren't tablespaces (https://www.postgresql.org/docs/10/manage-ag-tablespaces.htm...) supposed to help with that?

Haven't used them, I'm honestly curious


Tablespaces allow you to store tables in defined directories on the file system.

What I was talking about is controlling the ordering of the rows within a table on disk. If you are going to be reading some group of rows together often, ideally you want those rows to be contiguous on disk, as a sequential read of a range is much faster than bouncing around to dozens of locations to collect the needed rows. This becomes more important for very large tables. Imagine a 5TB `blocks` table where you need to read 50 blocks to render a given Notion doc, but those blocks could be scattered all over the place on disk; it's a lot more work and it thrashes the page cache.

PG doesn't normally make any guarantees about how rows are ordered on disk and it may move rows around when updates are made. It does have a CLUSTER operation, which re-orders rows based on the order of an index you give it, but this is a one-time operation and locks the table while running. This makes it functionally useless for large tables that are accessed and updated frequently.

Some other databases do give you control over disk ordering, SQL Server for example has `CLUSTERED INDEX` which you can apply to a table and it'll order data on disk based on the index order, even for new insertions / updates. It does cost a bit more on the write side to manage this, but it can be worth it in some cases.


Got it, thanks.


That quote is a bit of a cherry pick, resulting in a wide interpretation that isn't supported.

The actual ruling is something more like "Epic failed to prove that Apple is a monopoly in the market the judge decided is the relevant market: digital mobile gaming transactions".

Here are the relevant sections of the ruling:

> The Court disagrees with both parties’ definition of the relevant market.

> Ultimately, after evaluating the trial evidence, the Court finds that the relevant market here is digital mobile gaming transactions, not gaming generally and not Apple’s own internal operating systems related to the App Store. The mobile gaming market itself is a $100 billion industry. The size of this market explains Epic Games’ motive in bringing this action. Having penetrated all other video game markets, the mobile gaming market was Epic Games’ next target and it views Apple as an impediment.

> Further, the evidence demonstrates that most App Store revenue is generated by mobile gaming apps, not all apps. Thus, defining the market to focus on gaming apps is appropriate. Generally speaking, on a revenue basis, gaming apps account for approximately 70% of all App Store revenues. This 70% of revenue is generated by less than 10% of all App Store consumers. These gaming-app consumers are primarily making in-app purchases which is the focus of Epic Games’ claims. By contrast, over 80% of all consumer accounts generate virtually no revenue, as 80% of all apps on the App Store are free.

> Having defined the relevant market as digital mobile gaming transactions, the Court next evaluated Apple’s conduct in that market. Given the trial record, the Court cannot ultimately conclude that Apple is a monopolist under either federal or state antitrust laws. While the Court finds that Apple enjoys considerable market share of over 55% and extraordinarily high profit margins, these factors alone do not show antitrust conduct. Success is not illegal. The final trial record did not include evidence of other critical factors, such as barriers to entry and conduct decreasing output or decreasing innovation in the relevant market. The Court does not find that it is impossible; only that Epic Games failed in its burden to demonstrate Apple is an illegal monopolist.

> Nonetheless, the trial did show that Apple is engaging in anticompetitive conduct under California’s competition laws. The Court concludes that Apple’s anti-steering provisions hide critical information from consumers and illegally stifle consumer choice. When coupled with Apple’s incipient antitrust violations, these anti-steering provisions are anticompetitive and a nationwide remedy to eliminate those provisions is warranted.


Sounds like if Spotify sued, they'd have a better chance.


Is there a solution for using `docker-compose` on Mac with podman? I know that podman supports it natively now, and there is `podman-compose`, but I couldn’t find much on getting either working on a Mac due to the remote setup.


Regarding Devise + Rails cookie session store:

1. Logging out deletes the cookie, so you are really logged out. If your session cookie got stolen, you have other issues, but I don't think this is really a matter of being 'logged out'. It is pretty easy to implement 'revoke all sessions for this user' type of logic with Devise, and Devise does this out of the box when a user changes their password.

2. Permissions are orthogonal to Devise. Devise stores the user ID in the session and loads the user model on every request; any permissions / blocking system would chain from there.

3. I can't think of anything that devise stores in the session where staleness would matter, other than things intended to be checked for staleness, like the salt that is used for the aforementioned revoke all sessions on password change functionality.


For 2 and 3 I was mainly referring to Identity, although I'm not sure how it works internally. For 1, I think the main issue is that when someone logs out, or you log someone out, you aren't guaranteed that they are actually logged out. There are cases where this does matter.

How does Devise handle 'revoke all sessions for this user'?


> How does Devise handle 'revoke all sessions for this user'?

The cookie has both user id and a special token which IIRC is a substring of the user's password salt. Retrieving current user from cookie includes not only looking up by id, but also verifying the salt. So if you change the password, the salt is also changed and all the old sessions will stop working.
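
Not Devise's actual code, but the general shape of the idea is something like this (JS sketch; the `users.findById` repository and field names are made up):

    // Store a fragment of the password salt next to the user id in the signed
    // session cookie and re-check it on every request. Changing the password
    // changes the salt, which silently invalidates every existing session.
    function sessionPayload(user) {
      return { userId: user.id, authTag: user.passwordSalt.slice(0, 16) };
    }

    async function currentUser(session, users) {
      const user = await users.findById(session.userId);
      if (!user) return null;
      if (user.passwordSalt.slice(0, 16) !== session.authTag) return null; // revoked
      return user;
    }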


Ah okay, so Devise does lookup the user in the database to authenticate. I guess it's not applicable to my premise then.


That was true of older systems, most newer adaptive cruise control systems I've used have no issue with stop and go traffic.


A Model 3 owner confirms this...

