Hello. I'd like to share a potentially valuable resource for those looking to understand how AI is transforming remote sensing, or to get into the field of Earth Observation. (See also the related Twitter thread: https://twitter.com/alkalait/status/1565710662658953222?s=20...).
Last week I published this article on Amnesty International's blog, co-authored with Amnesty's human rights researchers.
This is an example of AI helping to protect Human Rights. We discuss new results on the highest-resolution satellite imagery available commercially (Maxar), which help Amnesty track the extent of the Darfur genocide.
Much of this work is credited to Julien Cornebise, Daniel Worrall, Micah Farfour and Milena Marin, who started Decode-Darfur-1 (DD1) in 2017 (see the paper published in the NeurIPS 2018 AI for Social Good Workshop: https://aiforsocialgood.github.io/2018/pdfs/track1/80_aisg_n...), and to Laure Delisle, ZL and Alex Kuefler, who continued the project (DD2) in 2019. Buffy Price and I helped bring DD2 to the finish line.
We have no code publicly available yet, nor a dataset to reproduce this study - the main reason being that the Maxar training imagery comes with permission for publicity only. That said, the backbone approach is based on a ResNet50 (pre-trained on ImageNet), so data licensing, rather than the model, is the main blocker.
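For illustration, here is a minimal sketch of the kind of backbone described above, written in PyTorch (the framework, the two-class head and the dummy tiles are all assumptions on my part; the actual code and data are not public):

```python
import torch
import torch.nn as nn
from torchvision import models

# ResNet50 pre-trained on ImageNet, with its classification head swapped
# for a two-class output (e.g. structure destroyed vs. intact -- a
# placeholder labelling, not necessarily the project's actual scheme).
backbone = models.resnet50(pretrained=True)
backbone.fc = nn.Linear(backbone.fc.in_features, 2)

# Fine-tuning would then proceed on licensed satellite image tiles;
# a dummy batch of RGB tiles stands in for them here.
tiles = torch.randn(8, 3, 224, 224)
logits = backbone(tiles)  # shape: (8, 2)
```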
Getting the game state data means deciding a priori what features the AI should learn from. The whole point of the deep learning paradigm is to allow a machine to learn the features that enable good prediction, visualization, generation (a.k.a. hallucination), etc.
Instead, researchers have provided the raw input feed to these agents, in the hope that the learned features could be interpreted by humans as game-state data.
I would say that it is *a* point of deep learning rather than "the whole point". For an AI interacting with the real world, building models from vision (as we do) makes a lot of sense. In the virtual world, however, it makes little sense: the model data is already available, and the AI has no need for something as inefficient as vision. We humans have to use vision (and sound, etc.) in games because we do not have access to direct data feeds; computers have no such limitation. Why cripple the AI by imposing human limitations on it?
If they want their AGI to be applicable to the real world, or to software with incomplete or insufficient APIs, they have to do it the way they are doing it here.
There isn't an API for me to check that I'm still on the footpath and not the road as I walk down the street.
I can't use an API to tell me the water is boiling and that I shouldn't stick my hand in it.
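To make the "raw feed" idea from the discussion above concrete, here is a minimal sketch of the kind of network such agents typically use, loosely modelled on the classic DQN Atari setup (the architecture, layer sizes and action count are illustrative assumptions, not anyone's actual code):

```python
import torch
import torch.nn as nn

# The agent sees only raw pixels: the convolutional layers must learn
# features that stand in for the game state, rather than being handed it.
class PixelAgent(nn.Module):
    def __init__(self, n_actions: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
            nn.Flatten(),
        )
        self.head = nn.Sequential(
            nn.Linear(64 * 7 * 7, 512), nn.ReLU(),
            nn.Linear(512, n_actions),
        )

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(frames))

# Four stacked 84x84 greyscale frames, as in the Atari convention.
q_values = PixelAgent(n_actions=6)(torch.randn(1, 4, 84, 84))
```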
PyMC3 uses Theano to create a compute graph of the model, which then gets compiled to C. Moreover, it gives us the gradient for free, so that HMC and NUTS can be used, which work well on models of high complexity.
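As a minimal sketch of what that buys you in practice (the model and data below are illustrative toys, not from any particular project):

```python
import numpy as np
import pymc3 as pm

# Toy data; any continuous likelihood works the same way.
data = np.random.randn(100)

with pm.Model():
    mu = pm.Normal("mu", mu=0.0, sd=1.0)
    sigma = pm.HalfNormal("sigma", sd=1.0)
    pm.Normal("obs", mu=mu, sd=sigma, observed=data)
    # Theano supplies the log-posterior gradient automatically, so NUTS
    # (the default sampler for continuous models) needs no hand-derived
    # derivatives.
    trace = pm.sample(1000, tune=1000)
```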
I use it in production, despite it still being in beta. We're close to the first stable release, but there are still some small kinks to iron out.
That's because most were toy examples dealing with discrete distributions in a few dimensions. Granted, these are mathematically easier to deal with, but not representative of real-world scenarios.
I enjoyed MMDS, but the lectures given by Ullman were painful to sit through. The man simply could not deliver a single sentence without reading it verbatim from a screen.
Most of the images generated by satellites will never be seen by human eyes. There simply aren't enough humans on Earth to sift through the TBs of imagery acquired daily by satellites. Artificial Intelligence is revolutionising many sectors, including Earth Observation.
[Cover.](https://preview.redd.it/avrdtspt7am91.png?width=1449&format=...)
Our preprint, *State of AI for Earth Observation: a concise overview from sensors to applications*, serves as an intro to
* sensors
* the core ideas in deep learning for EO, and the current state of research
* how and where AI is applied in EO
* where AI4EO is headed
* and the role of research and technology organisations.
You can download the preprint here (no sign-up needed): https://sa.catapult.org.uk/digital-library/white-paper-state...
EO, Remote Sensing and ML are all independent fields of study, with several textbooks dedicated to each. Despite this, the combination of ML + Remote Sensing + EO (a.k.a. AI4EO) raises basic questions that are rarely motivated in any one of these fields in isolation. For example, how can we...
* tell what happens on Earth based on observations from space?
* allow the data to tell the story of a natural or anthropogenic phenomenon?
* meaningfully combine sensors of fundamentally different mechanics?
* place all data streams on the globe continuously and harmoniously?
* do all of the above, mindful of noise, errors and observation gaps?
* Finally, how do we walk away with knowledge of what we don’t yet know?
To appeal to all backgrounds, we have included a handy glossary and an acronym explainer.
[Glossary.](https://preview.redd.it/6do66r009am91.png?width=718&format=p...)
This work is now under peer review. In the meantime, instead of uploading it to arXiv, Satellite Applications Catapult (an Innovate UK centre) is hosting it as a white paper (no sign-up needed). If you find it useful, please spread the word, or retweet this thread: https://twitter.com/alkalait/status/1565710662658953222?s=20...
Enjoy.