Yeah, I second this. Despite having a superior type system, Flow is so far behind in both tooling and third-party library definitions that it's really hard to justify continuing to use it over TypeScript. It's weird to say, but the unsoundness of TypeScript doesn't matter much next to all the other advantages.
Besides, I feel like if I really want a strong type system, https://reasonml.github.io/ might be the better choice over Flow.
Thanks! That just about answers the burning question: why another charting library?
It all boiled down to us needing some simple charts for our report data; nothing fancy, just something that fit our classic design. All we needed was some mappers to translate numbers into relatively sized shapes (or positions). The graphs over at GitHub looked like a great starting point for the styles; from there we focused on making them generic.
As the author of this—yep, I can agree with that. I'm generally baffled when I see folks complain about Electron's slowness—but that's probably 'cause I started out building Swing apps :)
Memory usage is often a bad indicator of bloat. GC'd languages like Java and JavaScript in particular will happily allocate a lot of memory from the OS without using all of it all the time. If you sum the "used" memory of all apps, it will often exceed the available RAM, even though no swapping is going on. OS memory management is really clever nowadays.
Actually, an app not using much memory may be sacrificing performance by not using a caching opportunity.
Unfortunately, apps coordinating how much memory to best use is an unsolved problem, as far as I know.
Is it really the case? I would expect the runtime to grow the heap only if, after a GC, the amount of free memory is still below a certain threshold.
My question would be what's the alternative? Cross platform native apps rarely look as good, and would require entirely new code bases and different sets of developers. Sure electron sucks for users, but what's the business incentive to ship native apps other than to satisfy the minority of users who are even aware electron saps so many resources?
I think that, for whoever is developing with Electron, in most cases the desktop versions of the apps wouldn't even exist if Electron weren't a thing. Also, just as a side note, Electron apps are so _easy_ to develop. I don't know of any other platform that targets the desktop and is as simple, but I might just be out of the loop.
I'm a dreamer, I know it, but I still would like some kind of easy-to-use, developer friendly cross-platform UI toolkit. I used to do Swing development and, eventually, I learned to appreciate it. I understand why more people aren't jumping on the Swing train at this point but I have to say, the UI provided by Swing was less restrictive than the one provided by Electron.
Something in between would gain traction, I think, as Electron developers run up against the hard edges of the kinds of UI Electron apps allow.
Warning: this video is old, lengthy, poorly paced and rather dull so feel free to click ahead. Still, it demonstrates a reasonably complicated Swing application that looks somewhat like a native Mac OS X application. The video was put together around 2003 so the look and feel is pretty dated. But the interface is fairly complicated and it does get feedback to the customer as soon as they get data out of range, etc.
Some useful ideas in that post which were buried:
- Google itself has unexamined biases.
- There's real fear among some people that they can't speak their mind on sensitive issues because they might be ostracized or fired for it (which the author was).
- Google needs to have an open and honest discussion about the costs and benefits of our diversity programs.
- Google needs to focus on psychological safety, not just race/gender diversity.
By my count, zero of those were discussed after the memo. Related to the content of the linked article we're discussing: the author did actually try to promote diversity with some ideas (albeit in a ham-fisted, and potentially offensive, way).
The memo is pushing a fairly outdated and unsupported notion of fundamental biological differences between the genders. The memo did not out and out say "women can't code" but it sure did seem like that's what he meant.
The memo author did not for even an instant think about what effect his memo would have on other people, and so, whether misinterpreted or not, he may be guilty of creating a hostile workplace.
The memo called to "De-emphasize empathy", which would understandably strike many people as a really bad idea.
On a sensitive cultural issue, engineers will tend to get upset if what they write/think is misinterpreted. But on emotional issues, you'd best pay attention to how your message arrives, because it's always more important than what you intended your message to mean.
His opponents are currently drowning him out, but even the way they're doing it isn't terribly intellectually honest. For example, claiming that all of his points are refuted by decades of science while neglecting to actually cite that science is a common but annoying form of intellectual dishonesty in a debate.
--
Google got put in a bad situation here. Diversity is a very important cultural value to them. So they couldn't do nothing. And I also think it's a terrible idea to fire a person who was explicitly writing about how the company won't permit open discussion of hard issues. This is what makes him a culture war football, because the extremes on both sides will be able to look at this case, and see all of their worst fears.
I do not see "women can't code" anywhere in that memo. I see "women are less likely to take interest in coding (possibly for biological reasons)", which is a completely different thing.
Granted. To this point, I'd say this: more than half the internet read his memo and understood "women can't code". At some point, it doesn't matter if that's not what he said. The impact of his work is how people perceive it, not what he meant. This isn't really fair, but it is so.
One thing that's obvious: the author clearly did engage in broad generalizations about half of the population; your quoted sentence is proof. He hedged these statements with others, saying that of course it always depends on the individual. But when a person makes broad generalizations about populations, they're always playing with emotional fire. And once people get bothered and emotional, all of the rational hedging in the world won't save your job.
What's happening is that you're passing an object (hashmap/hashset) into a function that returns a filtering function, and that object is used inside the closure to track the dupes. It's still a pure function because even though you're mutating the passed-in object, the filtering function is still deterministic and referentially transparent.
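The pattern described above can be sketched like this (names here are illustrative, not from the library under discussion): a factory takes a Set and returns a filtering predicate that uses the captured Set to remember which values it has already seen.

```typescript
// Factory: takes a Set and returns a predicate suitable for Array.filter.
// The Set is captured by the closure and mutated to track seen values.
function makeDedupe<T>(seen: Set<T>) {
  return (value: T): boolean => {
    if (seen.has(value)) return false; // duplicate: filter it out
    seen.add(value);                   // first occurrence: remember it
    return true;
  };
}

const result = [1, 2, 2, 3, 1].filter(makeDedupe(new Set<number>()));
console.log(result); // [1, 2, 3]
```

Because the Set is passed in from outside, the same predicate (and hence the same "seen" state) can even be shared across several filter calls if you want cross-array deduplication.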
Yeah you're reading it wrong. This is for node so it'd never be client side, and I'm sure the pw examples are in plaintext just for simplicity in the README.
What's the problem with them being on random machines? I believe the purpose of Filecoin and other projects like Swarm is to incentivize people to store the files long term and, in some cases, verify that those files are still being held.
What I don't understand is how this will guarantee the existence of these files. If my files are spread over several machines, and one or several go offline, get reinstalled, or whatever, how do I know my files are safe?
I don't know how Filecoin strengthens reliability, but Sia splits your file into 30 small pieces and distributes them randomly among hosts. You only need 10 of the pieces to recover a file, so in effect you rely on 10 out of 30 random hosts being online at the time of recovery. The chance of more than 20 of the random hosts holding your file being offline at the same time is extremely small, as long as there's enough diversity among hosts (in particular, many different hosting companies and many individual hosts in different regions, so hosting isn't mostly "centralized" in one region or company).
I think Sia's method would prove extremely reliable, but again, it depends on the hosts. If the largest single host owns less than a third of the network's capacity, then even if they suddenly go offline you should still be able to recover your file, since you only need a third of the 30 hosts holding it to be online.
Also, I think that the number of pieces you need to recover a file can be tuned, so if 33% would prove to not be reliable enough, Sia/Filecoin can tune it down to increase reliability.
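Assuming each host is independently online with some uptime probability (a big simplification; correlated outages make this worse), the 10-of-30 claim above can be sanity-checked with a small binomial calculation:

```typescript
// Binomial coefficient C(n, k), computed as a running product.
function binom(n: number, k: number): number {
  let c = 1;
  for (let i = 0; i < k; i++) c = (c * (n - i)) / (i + 1);
  return c;
}

// P(fewer than k of n hosts are online), with per-host uptime p.
// Recovery fails only if fewer than k shard holders are reachable.
function lossProbability(n: number, k: number, p: number): number {
  let total = 0;
  for (let up = 0; up < k; up++) {
    total += binom(n, up) * Math.pow(p, up) * Math.pow(1 - p, n - up);
  }
  return total;
}

// Even at a mediocre 80% per-host uptime, failing to find 10 of 30
// shards is vanishingly unlikely:
console.log(lossProbability(30, 10, 0.8)); // well below 1e-8
```

Tuning the scheme (say, needing 10 of 40 pieces instead of 10 of 30) just changes `n` and `k` here, which is the reliability knob the comment above describes.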
I haven't had a chance to dig into this paper yet, but a similar system, Swarm [0], has a system of insurance and escrow on file chunks, and a way of verifying the hash of a piece of the file to check that nodes still have its contents.
The file is encrypted locally before uploading to the network. Then the file is sharded, and the shards are replicated among the nodes. Once a shard falls below a certain replication threshold, it will get copied to a new node.
There are other schemes that don't require a full copy of each shard on every node, allowing you to reconstruct the file as long as a large enough subset of the nodes is online.
Presumably your data is duplicated to the point that all nodes containing any particular piece of data are very unlikely to be offline at a given time.
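The repair rule described a couple of comments up (re-copy a shard once its live replica count drops below a threshold) can be sketched as a toy loop; the types and names here are illustrative, not any real network's API:

```typescript
// A shard and the set of node ids currently holding a replica of it.
interface Shard {
  id: string;
  replicas: Set<string>;
}

// For each shard whose live replica count is below `threshold`,
// copy it to a live node that doesn't already hold it.
function repair(shards: Shard[], liveNodes: Set<string>, threshold: number): string[] {
  const actions: string[] = [];
  for (const shard of shards) {
    const live = [...shard.replicas].filter((n) => liveNodes.has(n));
    if (live.length < threshold) {
      const target = [...liveNodes].find((n) => !shard.replicas.has(n));
      if (target) {
        shard.replicas.add(target); // restore redundancy
        actions.push(`copy ${shard.id} -> ${target}`);
      }
    }
  }
  return actions;
}
```

A real network would run something like this continuously, so nodes going offline only matters if enough of them vanish faster than the repair loop can re-replicate.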