fAIr is an open AI-assisted mapping service developed by the Humanitarian OpenStreetMap Team (HOT) that aims to improve the efficiency and accuracy of mapping efforts for humanitarian purposes. The service uses AI models, specifically computer vision techniques, to detect objects such as buildings, roads, waterways, and trees from satellite and UAV imagery.
The name fAIr is derived from the following terms:
- f: for freedom and free and open-source software
- AI: for Artificial Intelligence
- r: for resilience and our responsibility for our communities and the role we play within humanitarian mapping
In short, fAIr is:
- An intuitive and fair AI-assisted mapping tool
- Built on open-source AI models created and trained by local communities
- Using openly licensed satellite and UAV imagery from HOT's OpenAerialMap (OAM) to detect map features and suggest additions to OpenStreetMap (OSM)
- Driven by a constant feedback loop that eliminates model biases and keeps the models relevant to local communities
Unlike other AI data producers, fAIr is a free and open-source AI service that allows OSM community members to create and train their own AI models for mapping in their region of interest and/or humanitarian need. The goal of fAIr is to provide access to AI-assisted mapping across mobile and in-browser editors, using community-created AI models, and to ensure that those models are relevant to the communities where maps are being created to improve the living conditions of the people there.
To eliminate model biases, fAIr is built to work with local communities and receive constant feedback on the models, so that the computer vision models become progressively more intelligent. The AI models suggest detected features to be added to OpenStreetMap (OSM), but mass import into OSM is not planned. Whenever an OSM mapper uses the AI models for assisted mapping and completes corrections, fAIr can take those corrections as feedback to enhance the AI model's accuracy.
Status | Feature | Detailed Description | Release |
---|---|---|---|
✅ | Adopting YOLOv8 model | Improvements to the prediction algorithm | v2.0.1+ |
✅ | New UI/UX | Redesign to enhance the user experience | v2.0.10+ |
✅ | fAIr evaluation | Detailed research with Masaryk University & Missing Maps Czechia and Slovakia; you are welcome to join the efforts, and the final report is available here | |
🛠️ | Handling User Profile | Enable users to log in easily and get insights into their activity, their own models/datasets, and their submitted trainings | |
🛠️ | Notifications features | A change in training status triggers a web/email notification letting the user know whether training finished successfully or failed | |
🛠️ | Replicable Models | Enable users to run a pre-trained model on new imagery or on a different area of their choice, using different satellite imagery | |
🛠️ | Offline AI Prediction | Enable users to submit prediction requests using any pre-trained model and any imagery, process them in the background, and return the results to the user | |
🛠️ | Post Processing Enhancement | Provide enhanced geometry features (points/polygons) based on the needs of the mapping process | |
🛠️ | fAIrSwipe | Enable users to validate fAIr-generated features and push them into OSM by integrating fAIr with MapSwipe (more details) | |

You can follow the details and scope of each of the above features here, and you can see and follow the Figma design progress for the features currently in development 🛠️.
A higher-level roadmap for 2025 can be found on GitHub.
- First, there should be a fully mapped and validated task in the project area that the model will be trained on.
- fAIr uses OSM features as labels, fetched from the [Raw Data API](https://github.com/hotosm/raw-data-api), and imagery tiles from [OpenAerialMap](https://map.openaerialmap.org/) (see the sketch after this list).
- Once the data is ready, fAIr supports creating a local model for the provided input area and publishing that model, so it can be applied to the rest of the similar area.
- Feedback is an important aspect: if mappers are not satisfied with the predictions fAIr is making, they can submit feedback, and a community manager can apply it to the model so that the model learns.
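As a rough illustration of the label and imagery step above, the sketch below shows how building footprints for a training area might be requested from the Raw Data API and where the matching OpenAerialMap tiles would come from. The endpoint path, request payload, filter shape, and TMS URL template are assumptions made for illustration, not taken from fAIr's code; consult the Raw Data API and OpenAerialMap documentation for the authoritative interfaces.

```python
# Illustrative sketch only: fetching OSM building labels and locating imagery tiles
# for a training area. Endpoint, payload fields, and the TMS template are assumed.
import requests

RAW_DATA_API = "https://api-prod.raw-data.hotosm.org/v1/snapshot/"  # assumed endpoint
OAM_TMS = "https://tiles.openaerialmap.org/<oam-image-id>/{z}/{x}/{y}"  # placeholder template

# GeoJSON polygon of the fully mapped and validated training area
training_area = {
    "type": "Polygon",
    "coordinates": [[
        [39.25, -6.80], [39.30, -6.80], [39.30, -6.75], [39.25, -6.75], [39.25, -6.80],
    ]],
}

# Ask for building footprints inside the area; these become the training labels.
response = requests.post(
    RAW_DATA_API,
    json={
        "geometry": training_area,
        "filters": {"tags": {"all_geometry": {"building": []}}},  # assumed filter shape
        "outputType": "geojson",
    },
    timeout=60,
)
response.raise_for_status()
print(response.json())  # the API responds with a task/download link for the extract

# Imagery for the same area would come from the OAM TMS above by downloading the
# (z, x, y) tiles that cover `training_area` at the chosen zoom level.
```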
The backend uses a library we call fAIr utilities (sketched below) to handle:
1. Data preparation for the models
2. Model training
3. The inference process
4. Post-processing (converting the predicted features to geodata)
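A minimal sketch of how these four stages fit together is shown below. The function names and signatures are hypothetical stand-ins for the corresponding fAIr utilities calls, not the library's actual API; the stubs only print what each stage would do.

```python
# Hedged sketch of the four-stage pipeline the backend delegates to fAIr utilities.
# Every function here is an illustrative stub, not the library's real API.

def prepare_data(imagery_dir: str, labels_geojson: str) -> str:
    """Stage 1: clip imagery into training chips and rasterize OSM labels into masks."""
    print(f"preparing chips from {imagery_dir} with labels from {labels_geojson}")
    return "training_dataset/"

def train_model(dataset_dir: str, epochs: int = 20) -> str:
    """Stage 2: train the model (fAIr has adopted YOLOv8) and save a checkpoint."""
    print(f"training on {dataset_dir} for {epochs} epochs")
    return "checkpoint.pt"

def run_inference(checkpoint: str, imagery_dir: str) -> str:
    """Stage 3: predict features over new tiles with the trained checkpoint."""
    print(f"predicting over {imagery_dir} using {checkpoint}")
    return "predictions/"

def postprocess(predictions_dir: str) -> str:
    """Stage 4: convert raster predictions into GeoJSON geometries for OSM editors."""
    print(f"polygonizing rasters in {predictions_dir}")
    return "predictions.geojson"

if __name__ == "__main__":
    dataset = prepare_data("oam_tiles/", "osm_labels.geojson")
    checkpoint = train_model(dataset)
    predictions = run_inference(checkpoint, "oam_tiles/")
    print("publishable features:", postprocess(predictions))
```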
Check out the Docker Installation docs.
- Start by reading our Code of Conduct
- Get familiar with our contributor guidelines explaining the different ways in which you can support this project! We need your help!
By submitting an imagery link to fAIr for model creation, you:
- Grant fAIr permission to download tiles covering your specified area of interest.
- Authorize fAIr to use these tiles for training and inference.
- Allow fAIr to redistribute the downloaded tiles to anyone who wishes to view or reproduce the dataset used for model training.
- The original copyright remains with the imagery's source or rights holder.
- You grant fAIr the right to license the downloaded tiles under CC BY 4.0.
- If you are using a commercial TMS (Tile Map Service) with your own token, please be aware that fAIr will download, store, and derive information from the tiles for your specified area.
- These tiles may be published as part of the training process and made available to others.
You must verify that the imagery provider's license is compatible with fAIr's intended use.
- When submitting imagery to fAIr, ensure you are not violating the license of the TMS or imagery provider.
- If you are taking imagery from OpenAerialMap, review its legal page for the applicable terms.
- If you plan to use the API or imagery services beyond the scope of the listed license, reach out to [email protected] for further guidance.