
I agree with your reasoning. Also, particularly

> software freedom, which in turn is a tool to protect user freedom

is AFAIK an assumption, or has this ever been proven?

I mean, isn't it entirely possible that helping software startups protect the business model behind whatever "near-enough" open-source/free software they build will help user freedom more than taking a purist OSS stance and leaving it all up for grabs for Amazon, Alibaba etc.? Because of the increased choice and competition?


I suppose anything is "possible"...


Well, I care, to some extent. Like many other things, it can be used as a filter. At the very least, lacking other demonstrable evidence (such as open-source or industry experience that can be confirmed by referrals), a completed master's or PhD shows that someone is able to finish a non-trivial project.


The thing you're missing is that in order to achieve scaling, Lightning Network sacrifices one of Bitcoin's main advantages, decentralization: https://medium.com/@jonaldfyookball/mathematical-proof-that-...


The big difference is that jpeg creation with default parameters works fine for probably > 99.9% of use cases. A jpeg encoding the picture of a car will be just as fine as that of a cat. This is not at all the case with the current state of ML, as the originator of this thread has also pointed out with their example.

But there is actually one more problematic area for jpeg: the encoding of graphs, drawings etc. with a limited color palette and straight lines. Here, jpeg artifacts become more visible, and to reduce them, you can either turn up the quality or use a better-suited format like svg or png. For this, at least a bit more technical knowledge is required. How many non-tech-savvy people even know about svg or png?
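
To make that trade-off concrete, here's a rough sketch in Python with Pillow (the library choice and filenames are just for illustration):

    # Hypothetical example using Pillow; "diagram.png" stands in for any
    # chart or drawing with flat colors and sharp edges.
    from PIL import Image

    img = Image.open("diagram.png")

    # Roughly the default quality: visible ringing around sharp edges
    img.convert("RGB").save("diagram_q75.jpg", quality=75)

    # Higher quality reduces the artifacts, at the cost of a larger file
    img.convert("RGB").save("diagram_q95.jpg", quality=95)

    # Or sidestep the problem entirely with a lossless format
    img.save("diagram_lossless.png")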

But an even more appropriate comparison with ML would be to ask how to improve jpeg to better deal with straight lines. For this, you clearly need to understand the maths.


Incidentally, I also installed Opera Mini on my iPhone, living in Nairobi. However not because of data concerns, but because it allows me to access Facebook private messages without having to install any of the Facebook apps.


> Patterns do fit stereotypes perfectly.

No they don't. "Stereotype" is a psychological concept, and therefore by definition incorporates human subjectivity. There are various conflicting definitions, but most allow for the possibility, or even likelihood, that stereotypes overstate or completely fabricate generalizations.

Or, as Wikipedia[1] states,

> By the mid-1950s, Gordon Allport wrote that, "It is possible for a stereotype to grow in defiance of all evidence."

> Research on the role of illusory correlations in the formation of stereotypes suggests that stereotypes can develop because of incorrect inferences about the relationship between two events (e.g., membership in a social group and bad or good attributes). This means that at least some stereotypes are inaccurate.

[1] https://en.wikipedia.org/wiki/Stereotype#Accuracy


> accents and other similar markings changes the pronunciation and meaning of words

Yes. In some languages those are actually not "markings" but proper letters, like ä, ö, ü and ß in German. But even where they are diacritics, as in French, they can alter the meaning of words, e.g. la != là. Therefore, for most Europeans and speakers of other languages that need more letters than ASCII provides, it is very annoying when this is not supported properly.

However, I have found in a few cases that Americans in particular have a hard time understanding this. The remark about your wife not caring seems to be in this vein, too. Recently, I decided to convert our MySQL DB tables from latin1 to UTF8. (I wasn't even aware that we didn't have some form of unicode, as our DB is only a few years old, and I thought some form of unicode was the default everywhere nowadays. But then, MySQL...)

Anyway, my CEO (also an American, incidentally) was trying to keep me from it because he thought it wasn't high priority. However, we're about to go live in a French-speaking region, one which also has other indigenous languages (and therefore names) with their own "special" characters (I put "special" in quotes because for those languages they're not "special" at all -- but I guess you get my gist by now).

Also, in previous jobs I have converted legacy systems to unicode and know what a pain it is down the road. Not to mention all the hard-to-find bugs if you don't do it, because some strings don't compare as they should, or people are just annoyed because their name is not shown correctly.
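
To illustrate the kind of comparison problems I mean, here's a minimal Python sketch (the example strings are made up, but the behavior is standard):

    import unicodedata

    word = "là"
    latin1_bytes = word.encode("latin-1")   # b'l\xe0'
    utf8_bytes = word.encode("utf-8")       # b'l\xc3\xa0'

    # Same word, different bytes, so naive byte-level comparisons fail
    print(latin1_bytes == utf8_bytes)       # False

    # Read UTF-8 bytes as latin1 and you get the classic mojibake
    print(utf8_bytes.decode("latin-1"))     # 'lÃ\xa0'

    # Even two valid unicode strings can differ until you normalize them
    nfc = unicodedata.normalize("NFC", word)   # single code point for à
    nfd = unicodedata.normalize("NFD", word)   # 'a' plus combining accent
    print(nfc == nfd)                          # False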

So I went ahead with the conversion anyway. We may never know for sure, but I'm convinced that I saved us some major customer frustrations, days of bug hunting and weeks of converting everything later, when existing data would need to be migrated.

So please everyone, just use UTF8 or some other unicode variant from the get-go. The few bits you might save otherwise are just not worth it.


This reasoning is probably behind some of the OP's cases...


> One is with macros, which you can change the definition of when necessary.

I concur on this. In my PhD thesis, I defined macros for all frequently used mathematical terms and expressions. This way, many passages of the source became almost as readable as the rendered formula.

For example, "\tfuzzy(1,2,3)", denoting a triangular fuzzy number with min. support 1, modal value 2 and max. support 3, is pretty clear after using it once or twice. We can then decide later how exactly to render it by changing the macro definition.

Similarly, when citing a lot of names (just using the names here, not actually paper references) wich characters which are not in my commonly used alphabet, I frequently define macros for them too. Then I can use e.g. "\Mares" in the text instead of "Mare{\v{s}}". Much easier, also when compared to hunting those special chars in a character map in a WYSIWYG word processor, or copy-pasting all the time.
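
Roughly, definitions along these lines make that usage possible (the rendering of \tfuzzy shown here is just a placeholder; the point is that it can be changed in one place later):

    % Delimited parameters allow the call syntax \tfuzzy(1,2,3);
    % the right-hand side is only an illustrative rendering.
    \def\tfuzzy(#1,#2,#3){(#1,\,#2,\,#3)}

    % Name with a special character, usable as \Mares in running text
    \newcommand{\Mares}{Mare{\v{s}}}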

However, I find this use of macros only really pays off when working on larger documents, or on living documents that keep changing, like CVs. For shorter and short-lived docs, I'd use LaTeX only if they have at least some math or special layout requirements. I tried using Word's formula editor, but even though one can sort of get used to it, I still find LaTeX much easier to use, especially with amstex. But for shorter documents without math, I find a word processor just much more convenient.


I don't agree with the view that a component doesn't need a prop "itself" if it "only" passes it on to a child component. That child component is part of the parent. If one decided that, e.g., the child is too simple to warrant being its own component and inlined its render() content in the parent instead, suddenly the props it uses would be needed by the parent "itself".

Another thing to consider is that connecting tightly couples components to stores, so the reusability of those components is hampered.


It was my understanding that if a component has a prop which changes, even if that prop is just passed down the tree, the component will still update.

This is one way I use redux: the connector watches for changes in state and then updates the component directly, rather than passing props down the tree.
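
A minimal sketch of that pattern (component names and state shape are made up, and it assumes a react-redux <Provider> higher up the tree):

    import React from "react";
    import { connect } from "react-redux";

    // The leaf subscribes to the store itself, so no ancestor has to
    // thread `user` down as a prop.
    const UserBadge = ({ user }) => <span>{user.name}</span>;

    // connect() re-renders UserBadge whenever state.user changes,
    // independently of whether the parent re-rendered.
    const ConnectedUserBadge = connect(state => ({ user: state.user }))(UserBadge);

    const Page = () => (
      <div>
        <ConnectedUserBadge />
      </div>
    );

    export default Page;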

