Choosing the wrong evaluation method may lead us to suboptimal models, while there may be new architectures (the author suggests looking back to neuroscience for inspiration) that can better handle what we expect from future models.
While it looks good and even possibly useful, it seems to be a great way to leak sensitive cookies (especially since "copy as cURL" is so easy from the browser's network tab).
I would 100% forbid its use in a company environment, and I would encourage people in general not to use it for any non-trivial use case.
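To see why: browsers include the full Cookie header when you copy a request as cURL, so the pasted command itself carries your session. A hypothetical paste (the URL and session value here are invented), plus a rough way to strip the cookie before sharing:

```shell
# Hypothetical "Copy as cURL" output (invented session value) —
# note the Cookie header riding along with the command.
COPIED="curl 'https://example.com/api/me' -H 'Cookie: session=s3cr3t' -H 'Accept: application/json'"

# The sensitive part travels with the paste:
echo "$COPIED" | grep -o "Cookie: session=[^']*"

# A quick (illustrative, not bulletproof) way to strip cookie headers
# before pasting the command anywhere:
SAFE=$(echo "$COPIED" | sed "s/ -H 'Cookie: [^']*'//")
echo "$SAFE"
```

The `sed` pattern is a sketch; real pastes can carry tokens in other headers (`Authorization`, custom headers) too.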
The best real application of compressed sensing (that I know of) is subsampled reconstruction of magnetic resonance imaging.
MR scanners actually sample data from the frequency spectrum to build a visible image, using several techniques involving electromagnetic fields and pulses applied to some organic body. One could think of it as taking a picture of the Fourier Transform of the actual image you want to see; you then take the inverse FT of that picture and get the image.
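A toy NumPy sketch of that relationship (random data standing in for an MR slice — real k-space acquisition is far more involved than a single FFT):

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((64, 64))            # stand-in for the image we want to see

kspace = np.fft.fft2(image)             # what the scanner effectively measures ("k-space")
recovered = np.fft.ifft2(kspace).real   # the inverse FT gives the image back
```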
This is done "pixel by pixel", where each pixel is a sample of a specific set of frequencies, and for each one a different set of physical parameters needs to change in the scanner, a process that takes some non-negligible time. Since we would like big pictures with lots of detail, we need to do this as fast as we can, but there are physical limitations on both the scanner and the scanned region (we can't have a person lying still for hours to get an image of their brain). So to get a good image we have to balance image quality and resolution against scan times.
Using compressed sensing, we can "skip" some of these pixels in a smart way (usually in some random fashion) and still reconstruct the image with virtually no loss of quality. There are some beautiful mathematical results that guarantee that if the image is sparse in some domain, you can reconstruct it with no theoretical loss of information (organic images are usually sparse in the DCT or several wavelet domains; this is actually what allows us to compress multi-megapixel images into a few megabytes without loss of perceived quality).
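Here is a toy 1-D version of the idea (NumPy/SciPy; the sampling pattern, sparsity level, and the ISTA solver are all illustrative choices, not what scanners actually run): a signal with only 3 nonzero DCT coefficients is recovered from ~31% of its samples by L1-regularized least squares.

```python
import numpy as np
from scipy.fft import idct

rng = np.random.default_rng(0)
n, m = 256, 80                                 # signal length, samples kept (~31%)

# A signal that is sparse in the DCT domain: only 3 nonzero coefficients.
s_true = np.zeros(n)
s_true[[10, 30, 60]] = [1.5, -1.0, 0.8]
Psi = idct(np.eye(n), axis=0, norm="ortho")    # orthogonal inverse-DCT basis
x_true = Psi @ s_true                          # the "image" we want

# "Skip" most samples: keep m randomly chosen entries of x_true.
idx = rng.choice(n, size=m, replace=False)
A, y = Psi[idx, :], x_true[idx]

# Recover the sparse coefficients with ISTA (iterative soft-thresholding),
# i.e. approximately minimize ||A s - y||^2 + lam * ||s||_1.
s_hat, lam = np.zeros(n), 0.01
for _ in range(2000):
    s_hat = s_hat + A.T @ (y - A @ s_hat)      # gradient step (step size 1 is safe:
                                               # A is rows of an orthogonal matrix)
    s_hat = np.sign(s_hat) * np.maximum(np.abs(s_hat) - lam, 0)  # soft threshold

x_hat = Psi @ s_hat
rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
```

With random subsampling and enough incoherence between the sampling and sparsity domains, the relative error lands near the threshold level `lam` rather than near 1, which is the whole point: far fewer samples than Nyquist, virtually the same signal.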
Think of it like having to guess an image from a subset of its pixels. If I tell you it could be any possible image, you won't be able to guess anything until I give you all of its pixels. But if I tell you the image is of a cat, you'll maybe need half, or a quarter, of the pixels, and then you can start guessing some of the missing pixels, since you know what cats look like.
In a practical sense, these techniques have allowed us to generate high-quality MR images from an incredibly small subset of the samples (10% or even less). In my opinion it is one of the most beautiful applications of a theoretical mathematical result to a real-life problem with real impact.
That is absolutely not true. While React holds a big piece of the market right now, it has not "clearly won this ballgame". Both Vue and Svelte are very much active in the industry and gaining ground fast.
Frontend development is an extremely fast-moving field, and claiming a "clear winner" makes no sense.
Having said that, I agree that Angular may not be the best option in the general sense, but given that it is a Google-backed framework, they probably have the best talent available to build tools efficiently.
Look at how many jobs are being offered in Angular, Vue, or Svelte vs React on any given freelance job portal.
It's 1 to 5 at best.
If you want to punish your project and have hiring issues because of a limited talent pool, sure, go ahead.
Yes, most current job offerings may be for React, but Angular had that spot a couple of years ago, and Ruby on Rails was the cool new thing to work with before that. My point is that this is not a permanent thing. Technologies change, preferences change. React will be replaced with better tools for the job (IMO Vue and Svelte are better designed than React); I am sure of that. Besides, if you are going to limit your hiring pool by framework, you are doing things wrong. Any decent developer who can work in an Angular codebase should have no problem with React, Svelte, or Vue.
That is not correct. The vulnerability you are talking about is barely a vulnerability. It did not unmask transactions: no sender, recipient, or amount (the properties that are hidden on the Monero blockchain) was revealed. The issue only arises in some very specific scenarios, and the only information leaked is that you are more likely to be the one making the transaction (Monero hides the sender using "decoys") in the case where you receive and spend an output in a very short span of time.
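A toy simulation of why only that scenario leaks anything (everything here is invented for illustration: the decoy-age distribution, the ring size's role, and the "guess the youngest" heuristic are stand-ins, not Monero's actual decoy-selection algorithm):

```python
import random

random.seed(0)
RING_SIZE = 11  # the real input is mixed with RING_SIZE - 1 decoys

def youngest_is_real_rate(real_age_seconds, trials=10_000):
    """Fraction of simulated rings where the real input is the youngest member,
    i.e. where a 'guess the newest output' heuristic would point at the spender.
    Decoy ages are drawn from an invented exponential distribution (~1 day mean)."""
    hits = 0
    for _ in range(trials):
        decoy_ages = [random.expovariate(1 / 86_400) for _ in range(RING_SIZE - 1)]
        if real_age_seconds < min(decoy_ages):
            hits += 1
    return hits / trials

# Spending two minutes after receiving makes the real input stand out...
fast = youngest_is_real_rate(real_age_seconds=120)
# ...while a typical-aged spend blends in with the decoys.
slow = youngest_is_real_rate(real_age_seconds=86_400)
```

In the fast-spend case the heuristic points at the real input almost every time, while for a typical-aged spend it almost never does — which matches the point above: the leak is a statistical "more likely", confined to quick respends, not an unmasking of sender, recipient, or amount.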
While bold cryptographic claims should be made responsibly, Monero is researched and implemented by well-known cryptographers and researchers in a very serious way. Almost all (if not all) aspects of the protocol come directly from proven and well-understood theory and published research.
> While bold cryptographic claims should be made responsibly, Monero is researched and implemented by well-known cryptographers and researchers in a very serious way. Almost all (if not all) aspects of the protocol come directly from proven and well-understood theory and published research.
I have a lot of friends and acquaintances who (despite my nagging) work at cryptocurrency shops, and I personally do some entirely separate work on provable computation. To call cryptocurrencies' use of zero-knowledge proofs "proven and well understood" is a tremendous overstatement: they're a brand new area within cryptography. We don't really know what their properties are yet, and we haven't even begun to comprehensively document weaknesses in construction, implementation, &c. the way we do for actually established cryptosystems. The deluge of published research on ZK/OT/&c. is evidence for this: everybody is scrambling to explore and publish on a new, immature research domain.
You are just talking without any basis. It is true that ZK is a rather new concept in applied cryptography, but the theory has been around for a while now, without any relevant breakthrough in possible attacks. Monero's cryptography comes from primitives that are not new to cryptocurrencies and have been on the cryptography scene for a couple of decades now. One could argue that theory and implementation are two very different problems, but even on the implementation side there haven't been any severe vulnerabilities (the only one that comes to mind is a double-spending attack that was possible because of a missing check in a signature). Again, the one you cited is far from a real attack on Monero.
Do you have any concrete examples of parts of the protocol that are so new and immature that we should distrust them for this reason?
At the end of the day it is a matter of trust and risk. I trust the mathematics of it because I took the time to read about it and understand the security claims being made. I also have some trust in the team writing the software because I have been following their development relatively closely. You may have done the same and come to the conclusion that they are not that serious or competent, but claiming that Monero is not to be trusted because the cryptography is too new is just an exaggerated view. These things are not being claimed without a proper basis.
Now, I am only talking about Monero here; there are several other cryptosystems using more esoteric methods than Monero's that I wouldn't place the same trust in, like ZCash and its derivatives. They use far more novel cryptography (zk-SNARKs) and some debatable design decisions (trusted setup, optional privacy, developers taking a chunk of mined coins).