I'm not in the US, but as an immigrant myself I can say it's incredibly stressful to be on a work visa during job changes and in an unstable economy. Especially if you have a family.
Rationally speaking, I work in academia and shouldn't need to worry too much about job security and/or visa applications, but I would still semi-regularly wake up at night from nightmares about my visa expiring, not being renewed, sudden deportation, or similar.
I got my permanent residence earlier this year and all of that stopped. It gives you a sense of stability and security. It also makes you feel a bit more accepted in society, since you no longer have to leave if a work contract (for whatever reason) ends.
I'll be eligible to apply for permanent residence in 4 months.
The worst thing is that if I get laid off, the 5-year clock resets, wasting almost 5 years.
Technically, it takes 45+ days to lay someone off. However, the application requires a letter from the employer confirming I'm still needed at the job, which means a lay-off even a few days before applying could invalidate the application and potentially reset the clock.
> if they trained the model on me visualizing a bear, a fish, and a bird, and then the neural net still outputs "horse" when I visualize a horse
Well, the failure-cases figure says it does not work if "training and testing datasets do not overlap". So it'd just find the closest trained class and then generate a new image from that class (i.e., in your example, the bear class might be the closest match for a visualized horse, closer than the fish or the bird, so it'd generate a random bear).
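To make that reading concrete, here's a minimal sketch of what I mean. It is not the paper's actual pipeline; encode_fmri, TRAIN_CLASS_EMBEDDINGS, and sample_class_conditional are hypothetical placeholders for whatever the model actually learns:

    import numpy as np

    # Hypothetical sketch of the reading above, NOT the paper's code:
    # the fMRI signal mostly just selects the nearest *training* class,
    # and a class-conditional diffusion model then generates a random
    # image of that class.

    def nearest_training_class(fmri_embedding, class_embeddings):
        """Return the training class whose embedding is closest to the fMRI embedding."""
        return min(class_embeddings,
                   key=lambda c: np.linalg.norm(class_embeddings[c] - fmri_embedding))

    # z = encode_fmri(recording_while_visualizing_a_horse)
    # cls = nearest_training_class(z, TRAIN_CLASS_EMBEDDINGS)   # e.g. "bear"
    # img = sample_class_conditional(diffusion_model, cls)      # a random bear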
> We assume the failure cases are related to two reasons. On one hand, the GOD training set and testing set have no overlapping classes. That is to say, the model could learn the geometric information from the training but cannot infer unseen classes in the testing set
Now, if all the GT results had been failures, it might be reasonable to conclude that it doesn't work when the sets don't overlap. However, only 6 of them were graded as failures. (A few more look iffy to me.) If I'm reading their statement correctly, there was no overlap between the two sets:
> This dataset consists of 1250 natural images from 200 distinct classes from ImageNet, where 1200 images are used for training. The remaining 50 images from classes not present in the training data are used for testing
And if I'm understanding this correctly, that makes the results look sort of impressive. I mean, at the very least, the model is getting the right class from the testing set most of the time, even though that class wasn't in the training set. That's ... not ... nothing?
On the other hand, it seems they cherry-picked the best of five subjects for the results shown in the supplementary material, which is ridiculous.
> Subject 3 has a significantly higher SNR than the others. A higher SNR leads to better performance in our experiments, which has also been shown in various literature.
Yeah, I thought the same after seeing this. It's a fun use case for diffusion models in this context, but as a scientific paper it seems oversold. It's certainly the kind of clickbaity content that will attract lots of retweets.
I only skimmed the paper, but from what I understood, it is essentially a diffusion model pre-trained on a handful of classes. The brain information is then largely used to "pick which class to generate a random image from".
The paper itself even picked the "better" examples. The supplemental materials show many more results, and many of them are just that: a randomly generated image of the same object class the person was seeing (or the closest object class available in the training data).
"Reconstruct" seems a pretty bad word choice. I think the results are presented in a way vastly overselling what they actually do. But that's a worrisome trend in most of AI research recently, unfortunately.
(I have a PhD in applied machine learning and work on computer vision at a university.)
Well, to be fair, three of the four apps are Apple-ecosystem apps (although Craft is now expanding to other platforms).
Maybe Roam, Obsidian, and Logseq would be better examples of booming apps that note-takers jump ship to? But then, I think all of these apps are rather niche compared to Evernote.
Hugo has a multilingual mode[1] which I am using on two websites. It's a bit confusing at first but works pretty well for me.
For content, you basically put blogarticle.en.md and blogarticle.fr.md next to each other, and Hugo will treat them as the same article in different languages. By default it puts the different language versions of the site into different subdirectories (with the default language at the root level), but it also supports multi-host setups where you can define a different domain for each language (e.g. blog.de and blog.fr).
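As a side note, that naming convention makes it easy to script things around the content directory. Here's a small sketch (not a Hugo feature, just a helper one might write) that lists articles which have an English version but no French one; the content path and language codes are only example assumptions:

    from pathlib import Path

    # Helper sketch (not part of Hugo): under the "article.<lang>.md"
    # naming convention described above, find articles that are missing
    # a French translation.
    CONTENT_DIR = Path("content/posts")

    def language_versions(content_dir):
        """Map each article stem to the set of language codes found for it."""
        versions = {}
        for md in content_dir.glob("*.md"):
            parts = md.name.split(".")
            if len(parts) == 3:          # e.g. "blogarticle.fr.md"
                stem, lang = parts[0], parts[1]
            else:                        # e.g. "blogarticle.md" -> default language
                stem, lang = parts[0], "en"
            versions.setdefault(stem, set()).add(lang)
        return versions

    for stem, langs in sorted(language_versions(CONTENT_DIR).items()):
        if "en" in langs and "fr" not in langs:
            print(f"{stem}: missing French translation (has {sorted(langs)})")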
There's support for generating "this article in other languages" type menus, which allows you to crosslink the website across languages.
There's also i18n support for translating stuff in templates etc., where you provide a dictionary file with translated versions of each string, and it'll replace them based on the language of the current article (useful for stuff like navigational menus etc.).
My portfolio website is in four languages, and the initial setup was pretty confusing (a bit of trial and error until I understood how it works). But now it works very conveniently.
For people who are interested in this kind of idea, there is some research on estimating word-color associations.
A popular crowd-sourced dataset for this is [1], which contains average color associations for 14,000 words. There is a demo available at [2].
There's also recent work[3] that tries to estimate such associations using image data from Google, similar to the OP's project, but a bit more sophisticated than just taking an average.
1: Colourful Language: Measuring Word-Colour Associations, Saif Mohammad, ACL 2011 Workshop on Cognitive Modeling and Computational Linguistics (CMCL). https://www.aclweb.org/anthology/W11-0611/
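If I understand the OP's project correctly, the "just taking an average" baseline is roughly the following. This is only a sketch under the assumption that images for a word have already been retrieved (e.g. the top results of an image search); fetching them is out of scope here:

    from pathlib import Path

    import numpy as np
    from PIL import Image

    def average_color(image_paths):
        """Naive word->color estimate: mean RGB over all pixels of all images.

        image_paths is assumed to hold images already retrieved for the word
        (e.g. top image-search results); how they were fetched is out of scope.
        """
        totals = np.zeros(3)
        pixel_count = 0
        for path in image_paths:
            rgb = np.asarray(Image.open(path).convert("RGB"), dtype=np.float64)
            totals += rgb.reshape(-1, 3).sum(axis=0)
            pixel_count += rgb.shape[0] * rgb.shape[1]
        return tuple(int(round(c)) for c in totals / pixel_count)

    # e.g. average_color(Path("images/banana").glob("*.jpg")) -> roughly a yellow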
> It would be a shame if ML also became a field people avoided because they didn't want to contribute to evil, in their own view.
Both ethics and privacy considerations have recently become regular topics at computer vision and multimedia processing conferences.
The main author of one very popular object detection model (YOLO) recently left the field because he became concerned about military applications of his research.
Privacy is fine. It's a valuable technology all of us pay for and desire better advances in. There's an entire field in CS/math known as cryptography, which is basically a subset of privacy.
Ethics, however, is a humanities field. People of different political affiliations have widely diverging views on it, and it will undoubtedly be used to promote the views of one political affiliation over others. Suppose you need to create a technology that can be used for war in order to better treat cancer? Who gets to choose who lives or dies?
While I think general awareness of ethical concerns is needed, it can also bias research directions in itself.
That is, dealing with ethical concerns and/or ethics committees becomes a huge additional workload, so research gets prioritized to minimize dealing with it.
For example, one might drop a research project that could help treat cancer because getting the necessary approvals for patient data makes it infeasible. Instead you switch to a general-purpose target domain, where the work could suddenly (and unintentionally) be used for war instead, but, being general purpose, it doesn't need to be approved by the ethics committee.
These are all hard questions, but what I personally want to avoid is humanities people (or worse, business people) making all the decisions without tech people having input. Finance is a field where there were strong consequences for the misuse of mathematics -- in particular, the use of "value at risk" as a complete and sufficient risk metric, or even worse as a target/KPI, was widely seen by the mathematical types as a disaster in the making, and by the business types as a great tool for doing whatever the f*(& they wanted and papering it over with math. Look where that got us.
Cryptography is actually a great example. Gauss called number theory the "queen of mathematics", and several key mathematicians (Hardy, for one) escaped into it because they figured it could never, ever be used for political purposes or anything else. And then, oops, cryptography comes along and it's built entirely on number theory. You never know.
Humanities people know what they know, and I respect that they've done stuff; but I'm sure not going to bow out of the conversation and hand all the ethics stuff off to them. While some really dig deep, some have no idea what the actual technology can do! There was a long thread recently about Proctorio and McGraw-Hill. Is it right for ML researchers who know about the shittiness of facial recognition to simply say "yeah whatever, do whatever you want to students who are more or less trapped by this system, we won't make a peep"? It's improbable that a NeurIPS paper addendum is going to make a huge difference in that particular problem, but we can 1) practice thinking about these things in preparation for disputes we can take part in, 2) provide ideas and information for journalists, politicians, and humanities folks who'll get involved along the way, and 3) develop a habit of at least talking about it.
And lastly, NeurIPS has so many people/teams submitting that I figured it was inevitable that more checkboxes would appear on the checklist for inclusion -- thinning mechanisms always appear when necessary to slow the flow. If not this, it'd be something else.
There's very good advice in there for people in leadership or mentoring positions.
I am in academia, and I've noticed many professors are a mixed bag when it comes to such communication skills. They don't really compliment students much for their work or results. Basically all feedback is targeted at existing issues and how to improve things. I feel many people can quickly get discouraged by this.
When I give students feedback, I always start by thanking them for the work and pointing out some things I think they did very well. Most students seem to appreciate it a lot.