Tutorials

Tutorials Offered
Artificial Intelligence Applications in Ocean Remote Sensing
Sensing the Seas: AI and Data-Driven Enhanced Marine Sensing and Exploration at the Science-Policy Interface
Cloud-Native Geospatial for Earth Observation Workshop
Hands-on Earth Surface Monitoring with Innovative TomoSAR Persistent Scatterer Processing
LBI (LiDAR Biomass Index) for individual tree forest carbon accounting
Predictive Modeling of Hyperspectral Responses of Natural Materials: Challenges and Applications
Harnessing the power of Earth Foundation Models
Remote Sensing with Reflected Global Navigation Satellite System (GNSS-R) and other Signals of Opportunity (SoOp)
Compressive Sensing in Radar and Related Areas
Raster and Vector Data Cubes Across Spatial Data Science Languages
Effective Science Communication: Bridging the Gap Between Research and Society
GRSS Earth Science Informatics Technical Committee (ESI TC) Tutorial - Lifecycle of Large-Scale AI Models in the Cloud: A Focus on Deploying and Fine-Tuning Geospatial Foundation Models
Using space borne and ground based measurements of night lights for conservation planning
Working with Aquaverse: A remote sensing based end-to-end water quality monitoring platform for Inland and Coastal Waters
Unconventional Technology for Improved Disaster Management: Integrated Application of GeoAI and Cloud-Based Remote Sensing.
Machine Learning-Based Integration of Optical and SAR Data for Monitoring Earth’s Dynamic Processes

Half-Day

Presented by: Xiaofeng Li

Description

Artificial intelligence (AI) is a significant driving force for the ongoing scientific and technological revolution and industrial transformation. Its seamless integration with big data provides notable technical advantages, particularly in handling vast datasets, accelerating model processes, and mastering nonlinear feature learning and modelling. In this lecture, I will present the latest strides in artificial intelligence technology for extracting valuable insights from diverse ocean remote sensing imagery. The lecture will encompass four key aspects:

  1. Ocean Image Segmentation and Classification: This involves extracting information from ocean remote sensing images, including mesoscale eddies, ocean internal waves, sea ice dynamics, ship movements, oil spill tracking, and flooding caused by typhoons, among other phenomena.

  2. Information Fusion: I will provide examples of how ocean remote sensing data from various sources can be combined to yield useful information.

  3. Algorithm Development Using Deep Learning Neural Networks: We will delve into creating and optimizing algorithms based on deep-learning neural networks.

  4. Ocean Phenomena Forecasting: Distinct from traditional oceanographic knowledge-driven modelling, data-driven modelling provides reliable and efficient insights. We will discuss lightweight forecasting techniques, which extend to small-scale internal waves, mesoscale sea level fluctuations, and the analysis of large-scale equatorial instability waves and processes.

These topics will showcase the cutting-edge applications of artificial intelligence in extracting information from ocean remote sensing data and forecasting various oceanic phenomena.
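
As a flavour of the data-driven forecasting discussed in item 4, the sketch below trains a minimal LSTM to predict the next value of a synthetic, sea-level-like series. It is an illustrative toy, not the presenter's models: the network size, window length, and synthetic data are all assumptions.

```python
import torch
import torch.nn as nn

class SeqForecaster(nn.Module):
    """Minimal LSTM forecaster: maps a window of past values to the next value."""
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):              # x: (batch, window, 1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])   # predict the next time step

# Toy stand-in for a sea level anomaly series; real data would come from altimetry.
t = torch.linspace(0, 60, 2000)
series = torch.sin(t) + 0.1 * torch.randn(2000)

window = 48
X = torch.stack([series[i:i + window] for i in range(1000)]).unsqueeze(-1)
y = series[window:window + 1000].unsqueeze(-1)

model = SeqForecaster()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for epoch in range(5):                 # a few epochs, for illustration only
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(X), y)
    loss.backward()
    opt.step()
```

Replacing the synthetic series with gridded observations (one model per pixel, or a convolutional variant) gives the lightweight-forecasting flavour described above.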

Tutorial Learning Objectives

This tutorial aims to offer a comprehensive insight into the recent progress made in applying deep learning techniques to the analysis of ocean remote sensing imagery. It covers four primary facets of working with such data: classification, fusion, algorithm enhancement, and forecasting oceanic phenomena. The tutorial will provide an introduction to fundamental deep learning frameworks, including Artificial Neural Networks (ANN), Convolutional Neural Networks (CNN), Recurrent Neural Networks (RNN), and cutting-edge models. Participants can access open-source software, demonstration materials, and published research papers. The tutorial will conclude with a discussion on the future of Earth observation and the pivotal role that ocean remote sensing data play in advancing AI techniques.

Prerequisites

This lecture is designed for oceanographers with a foundation in remote sensing and physical oceanography. While some basic programming knowledge can be advantageous, it is not a prerequisite for participation.

Half-Day

Presented by: Giulia De Masi, Maurizio Migliaccio, Giorgia Verri, Rosalia Maglietta, Hari Vishnu, Laura Meyer

Description

The ocean remains one of Earth’s least understood environments, yet it plays a vital role in global ecosystems, climate regulation, and resources that sustain life. Recent advancements in remote sensing have opened new frontiers in oceanographic research, allowing scientists to monitor, map, and analyze the ocean in unprecedented ways.

Advances in remote sensing, artificial intelligence, and data-driven technologies are revolutionizing how we monitor and understand marine environments. This tutorial brings together leading experts to provide an engaging and comprehensive introduction to these cutting-edge approaches for exploring the "sensed ocean."

It will introduce the fundamentals of machine learning and AI with applications to the marine environment and ocean sensing technology, and discuss the potential of data mining and AI to enhance our understanding of underwater ecosystems, biological species, chemical and physical ocean dynamics, and environmental changes.

This tutorial is a crucial step towards implementing United Nations Sustainable Development Goal 14: “Conserve and sustainably use the oceans, seas, and marine resources for sustainable development”.

It is sponsored by the “Design the Future Ocean Initiative” Committee of the IEEE Oceanic Engineering Society, in line with the recent MOU between IEEE-OES and IEEE-GRSS, and will be endorsed by the United Nations Ocean Decade. This UN initiative, proclaimed in 2017 by the United Nations General Assembly, is a global effort to support ocean science and knowledge production and to reverse the deterioration of ocean ecosystems. Spanning 2021 to 2030, the initiative seeks to stimulate scientific discoveries and form strategic partnerships that will advance understanding of ocean systems and facilitate solutions to achieve the UN’s 2030 Agenda for Sustainable Development.

Participants will start with an overview of AI applications in marine science, delving into machine learning, computer vision, and language models tailored for ocean data. Key topics include hybrid models for detecting sea oil slicks, leveraging remote sensing for marine animal tracking, and integrating physics-based approaches with machine learning for accurate oceanographic predictions.

Real-world case studies, including monitoring of marine species (sea turtles and marine mammals), seabed mapping, and pollutant and oil spill detection, will demonstrate the application of remote sensing integrated with sensor networks in solving marine challenges. A hands-on session will equip attendees with practical skills in data analysis and visualization, introducing tools and workflows for processing marine data and implementing machine learning models for remote sensing image analysis.

This tutorial is designed for researchers, engineers, and environmental scientists interested in leveraging remote sensing and multimodal data analytics to reveal the ocean’s hidden patterns and processes, using methods ranging from advanced statistics to machine learning and AI.

Tutorial Learning Objectives

This tutorial aims to equip participants with foundational knowledge and practical skills in the deployment and utilization of remote sensing for ocean monitoring. By the end of the session, participants will:

  1. Understand the Role of AI in Marine Science:
    • Gain insights into the applications of machine learning, computer vision, and visual language models for interpreting ocean data.
    • Comprehend hybrid models combining physics-based approaches with AI for enhanced oceanic monitoring and analysis.
  2. Apply Remote Sensing Techniques:
    • Explore methodologies for retrieving information about sea oil slicks using hybrid scattering and machine learning models.
    • Understand how remote sensing supports marine ecosystem modeling and species tracking, with a focus on real-world challenges like sea turtle tracking.
  3. Analyze and Visualize Ocean Data:
    • Gain hands-on experience in importing, processing, and visualizing marine sensor data.
    • Develop foundational skills in applying machine learning models to remote sensing imagery for marine analysis.
  4. Learn from Real-World Case Studies:
    • Examine case studies on the detection and tracking of biological species (sea turtles and cetaceans), sea bottom mapping, and oil spill analysis to understand practical challenges and solutions in deploying ocean technologies.
  5. Leverage Machine Learning for Marine Sensing Analysis, Interpretation, and Prediction:
    • Learn advanced methods for environmental exploration, including seabed mapping and acoustic detection of marine life, using data-driven approaches.
  6. Understand Ethical and Legal Frameworks:
    • Address legal concerns surrounding privacy, data transparency, and environmental impact in ocean monitoring.

Participants will leave the tutorial with a holistic understanding of how advanced statistics, machine learning, and hybrid models can be applied to marine remote sensing data, covering natural phenomena such as estuarine and coastal dynamics and the migration of sea animals (e.g., turtles), and addressing pressing marine and environmental challenges such as oil spills and microplastic circulation. This knowledge is applicable to both academic research and industrial applications in marine science and ocean engineering.

Prerequisites

This tutorial is designed to be accessible to participants from diverse academic and professional backgrounds with an interest in ocean science, remote sensing, and artificial intelligence (AI) applications for environmental monitoring. Basic knowledge of marine science and environmental studies will be helpful but is not required, as foundational concepts will be covered in the introduction. Familiarity with programming concepts and data analysis (especially using Python) is beneficial, but participants without a coding background will still be able to follow the conceptual aspects of the tutorial. We will provide clear instructions and simplified code snippets for those who wish to explore data analysis on their own devices.

For participants interested in the AI components, a basic understanding of machine learning or computer vision will enhance their experience, but this is not a strict requirement. We will provide high-level explanations of key AI techniques, including visual language models and object detection, ensuring that non-technical participants can grasp the main ideas and applications.

Participants are encouraged to bring laptops if they wish to engage in optional hands-on coding exercises, but this is not mandatory. All necessary data, example scripts, and resources will be shared, so that attendees can fully benefit from the material without prior extensive technical preparation.

If necessary, two parallel groups will be set up based on participants’ experience in AI and data modeling: “more advanced” and “still exploring”.

Half-Day

Presented by: Alex Leith, Caitlin Adams

Description

The advent of cloud computing has revolutionised the capabilities of researchers and professionals globally, helping them to access and analyse Earth observation (EO) data more easily than ever. Despite the maturity of tools and technologies such as cloud-optimised GeoTIFFs (COGs) and the SpatioTemporal Asset Catalog (STAC) specification, many EO professionals have not yet had the opportunity to apply these innovations in practice. This workshop aims to bridge that gap by showcasing how cloud-native geospatial technologies simplify the process of working with EO data, using Python as the primary programming language.

Participants will delve into a real-world case study focused on documenting land productivity metrics, a crucial component for monitoring UN Sustainable Development Goal (SDG) indicator 15.3.1. The workshop will utilise NASA’s Harmonized Landsat and Sentinel-2 (HLS) data, accessed through Earthdata, to explore the land productivity metric in depth.

Our workshop hosts, Caitlin Adams and Alex Leith, bring extensive experience from their work on large-scale cloud-native programs such as Digital Earth Africa, Digital Earth Australia, and the recently launched Digital Earth Pacific and Digital Earth Antarctica. These projects leverage petabytes of data to create valuable information products that inform decision-making processes across countries and continents.

Throughout the workshop, participants will gain hands-on experience and insights into how cloud-native geospatial technologies have significantly enhanced the ability to manage and analyse large volumes of EO data. By the end of the session, attendees will have acquired practical examples and knowledge to further develop their skills in this innovative field.
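
As a taste of the workshop's cloud-native workflow, the sketch below searches a public STAC API and lazily loads the results into an xarray data cube with odc-stac. The endpoint, collection, bounding box, and band names are illustrative assumptions; the workshop itself uses NASA's HLS data via Earthdata.

```python
import pystac_client
import odc.stac

# Search a public STAC API for imagery over a small bounding box.
# Endpoint and collection are illustrative placeholders.
catalog = pystac_client.Client.open("https://earth-search.aws.element84.com/v1")
items = catalog.search(
    collections=["sentinel-2-l2a"],
    bbox=[149.0, -35.4, 149.3, -35.1],
    datetime="2023-01-01/2023-03-31",
).item_collection()

# Lazily load the matching scenes into an xarray/dask data cube.
ds = odc.stac.load(
    items,
    bands=["red", "nir"],
    resolution=30,
    chunks={},              # dask-backed: nothing is read until computed
)
ndvi = (ds.nir - ds.red) / (ds.nir + ds.red)
print(ndvi)
```

Because the load is dask-backed, the same few lines scale from a laptop to a cluster, which is the reproducibility point made in the objectives below.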

Tutorial Learning Objectives

  • Understanding Cloud-Native Geospatial Technologies: Learn the fundamentals of cloud-native geospatial technologies and their significance in simplifying EO data workflows.
  • Practical Application with Real-World Data: Engage in hands-on exercises using NASA’s HLS data to calculate land productivity metrics relevant to UN SDG indicator 15.3.1.
  • Exploring Advanced Tools: Gain familiarity with key Python packages for EO data analysis, including xarray, dask, pystac-client, odc-stac, and odc-geo.
  • Developing Reproducible Workflows: Understand how to build reproducible workflows that can be executed anywhere, independent of specific computing environments.
  • Leveraging Global Data Repositories: Learn how to access and utilise global free and open EO datasets, and how these resources can be integrated into cloud-native workflows.

Prerequisites

Some Python experience is advantageous, as is familiarity with remote sensing fundamentals, but there are no specific prerequisites. Participants will be provided with an online computation environment, along with instructions and support for completing the tutorial.

Half-Day

Presented by: Dinh Ho Tong Minh

Description

Unlock the Secrets of Earth Monitoring

Imagine having the power to monitor the Earth's surface with astonishing precision, tracking even the slightest changes every day. With the European Space Agency's Copernicus Sentinel-1 SAR program, you can do just that! Say farewell to the limitations of weather-dependent imaging. Thanks to cutting-edge radar technology, it captures clear snapshots of our planet around the clock, cutting through thick clouds and darkness. This revolutionary program has brought Interferometric SAR, or InSAR, into routine use, transforming the way we monitor surface deformation and making it an essential tool for understanding our planet's dynamic nature.

Transformative Technology at Your Fingertips

Now, harnessing this powerful technology is easier than ever! Dive into the groundbreaking Persistent Scatterers and Distributed Scatterers (PSDS) InSAR and ComSAR algorithms, all part of our open-source TomoSAR package. Don't let technical jargon intimidate you! Our user-friendly tutorial is designed to make these advanced concepts accessible to everyone, empowering you to explore and utilize this technology effectively.

Hands-On Learning Experience

In this engaging tutorial, we will guide you through the extraordinary capabilities of the PSDS InSAR and ComSAR techniques, all using real-world Sentinel-1 images. No coding skills? No problem! We’ll introduce you to intuitive open-source software like ISCE, SNAP, TomoSAR, and StaMPS, enabling you to gain groundbreaking insights into Earth's surface movements without the steep learning curve.

Master Radar Interferometry in Just Half a Day

Starting with an overview of the fundamental theory, our tutorial will lead you through the process of applying Sentinel-1 SAR data and processing technologies to identify and monitor ground deformation. After just half a day of dedicated training, you'll walk away with a solid understanding of radar interferometry, empowering you to produce time series of ground motion from a series of SAR images. Join us and become part of the future of Earth monitoring today!
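
To preview the core step, the sketch below forms an interferogram from two simulated, co-registered single-look complex (SLC) images in plain NumPy. It is a toy under stated assumptions; in practice, tools such as ISCE or SNAP handle co-registration and the later corrections.

```python
import numpy as np

# Toy stand-in for two co-registered single-look complex (SLC) acquisitions.
# Real SLCs come from Sentinel-1 via ISCE or SNAP after precise co-registration.
rng = np.random.default_rng(0)
slc1 = rng.standard_normal((512, 512)) + 1j * rng.standard_normal((512, 512))
deformation_phase = np.linspace(0, 4 * np.pi, 512)[None, :]   # synthetic motion ramp
slc2 = slc1 * np.exp(1j * deformation_phase)

# An interferogram is one image times the complex conjugate of the other:
# its phase encodes path-length change between the two acquisition dates.
interferogram = slc1 * np.conj(slc2)
phase = np.angle(interferogram)        # wrapped phase, in (-pi, pi]

# Real workflows then remove flat-earth/topographic phase, filter, and unwrap
# the phase before converting it to line-of-sight displacement.
print(phase.shape, phase.min(), phase.max())
```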

Tutorial Learning Objectives

After just a half-day of training, participants will gain the following skills:

  1. Access SAR Data: You will easily access SAR data, making it readily available for your analysis.

  2. Master InSAR Theory: Our expert guidance will help you understand the intricacies of Interferometric SAR (InSAR) processing, breaking down complex concepts into easily digestible information.

  3. Interferogram Creation: You will learn how to create interferograms, a crucial step in the process that provides valuable insights into the Earth's surface.

  4. Ground Motion Interpretation: With our guidance, you will be able to interpret the ground motions revealed by these interferograms, allowing you to understand and analyze changes in the Earth's surface.

  5. Time Series Extraction: We will clarify the process of extracting ground motion time series from a stack of SAR images, empowering you to track and monitor surface movements over time.

Prerequisites

This class welcomes everyone interested in radar techniques for real-world applications. No coding skills? That's okay!

Half-Day

Presented by: Yong Pang, Liming Du

Description

This tutorial, centered on the LiDAR Biomass Index (LBI), will explore how this innovative index leverages LiDAR technology for precise carbon estimation at the individual tree level. LBI has become a valuable tool, particularly for forest biomass assessments and carbon stock measurements. By providing a detailed view of forest structure - from canopy height to crown dimensions - LiDAR data enables researchers and practitioners to gain insights into ecosystem health, carbon sequestration potential, and the impacts of environmental change on forested areas.
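
By way of orientation, the sketch below shows the general shape of such a workflow: derive per-tree structural metrics from a LiDAR point cloud, then apply a field-calibrated allometric model. It deliberately does not reproduce the LBI formula itself, which is the subject of the tutorial; all inputs and coefficients are illustrative stand-ins.

```python
import numpy as np

# Hypothetical per-tree point cloud segment: columns x, y, z (metres).
pts = np.random.rand(500, 3) * [5.0, 5.0, 20.0]

height = pts[:, 2].max()                  # tree height from the highest return
crown = pts[pts[:, 2] > 0.5 * height]     # crude crown proxy: upper half of returns
# bounding-box area of crown returns as a simple crown-size proxy
crown_area = np.ptp(crown[:, 0]) * np.ptp(crown[:, 1])

# Generic allometric stand-in (NOT the tutorial's LBI formula):
# biomass ~ a * height^b * crown_area^c, with a, b, c from field calibration.
a, b, c = 0.05, 2.0, 0.5   # illustrative coefficients only
agb_kg = a * height**b * crown_area**c
print(round(agb_kg, 1))
```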

Relevance to the IGARSS Community:

The IGARSS community, with its focus on remote sensing advancements, is increasingly addressing environmental challenges such as deforestation, carbon cycle dynamics, and climate change. The LBI offers a practical, high-accuracy method to quantify carbon and monitor forest health in diverse ecosystems. This tutorial will benefit researchers and professionals by:

  • Introducing them to a robust, field-calibrated index that minimizes the need for extensive in-situ data collection while maintaining high precision.
  • Demonstrating how LBI can support large-scale ecological monitoring efforts, particularly under the “dual carbon” targets for carbon neutrality and reduction.
  • Enhancing their ability to integrate LBI into their workflows, fostering cross-disciplinary applications in forestry, ecology, and climate studies.
Through practical sessions, participants will acquire hands-on skills in data processing, parameter extraction, and model validation, all tailored to support the goals of sustainable forest management and climate resilience.

Tutorial Learning Objectives

By the end of this tutorial, participants will be able to:

  1. Understand the Fundamentals of LBI:
    • Grasp the core principles behind the LiDAR Biomass Index (LBI) and its importance in estimating forest biomass.
    • Understand the relationship between LBI and forest structural parameters such as tree height and crown size.
  2. Utilize LiDAR Data for Forest Carbon Estimation:
    • Learn how to extract LBI from LiDAR data.
    • Understand the process of transforming LiDAR point clouds into meaningful structural information for carbon analysis.
  3. Develop and Apply LBI Models for Carbon Estimation:
    • Learn methods for developing LBI-based carbon estimation models using LiDAR data.
    • Understand how to calibrate these models using field measurements and validate their accuracy.
  4. Interpret and Analyze Results:
    • Learn how to interpret LBI results for both individual trees and large-scale forested areas.
    • Gain knowledge on the significance of LBI in ecological research, carbon stock estimation, and forest management.
  5. Explore the Application of LBI for Forest Monitoring and Carbon Sequestration:
    • Understand how LBI can be used for forest health monitoring, biomass mapping, and assessing carbon sequestration potential in forest ecosystems.
    • Learn how LBI can contribute to research on climate change mitigation, sustainable forestry, and conservation efforts.
  6. Implement LBI in Real-World Remote Sensing Workflows:
    • Gain experience in integrating LBI techniques with other remote sensing methods such as satellite imagery and UAV-based LiDAR.
    • Explore case studies and examples of LBI applications in forestry and environmental monitoring, including large-scale forest management and climate policy development.

This tutorial will equip participants with the knowledge necessary to apply LBI effectively in their research and professional projects, advancing their ability to work with advanced remote sensing tools for sustainable forest and ecosystem management.

Prerequisites

Participants should have a foundational understanding of LiDAR remote sensing principles and basic data processing, plus familiarity with geospatial technologies and regression modelling. A working knowledge of commonly used programming languages (e.g., Python or MATLAB) and remote sensing software (e.g., ArcGIS) will be beneficial for understanding the methods.

Half-Day

Presented by: Gladimir V. Guimaraes Baranoski

Description

Computer models are systematically used by remote sensing and geoscience researchers to simulate and analyse the hyperspectral responses of natural materials (e.g., plants, soils and snow), notably with respect to varying environmental stimuli (e.g., variations in light exposure and water content). The main purpose of this tutorial is to discuss theoretical and practical issues involved in the development of predictive models of light interactions with these materials, and to bring forward key aspects that need to be addressed to enhance their efficacy. Furthermore, since similar models are used in other scientific fields, it also aims to foster cross-fertilization with related efforts in those fields by identifying common needs and complementary resources. The presentation is organized into six main sections, as follows.

Section 1 provides the required background and terminology to be employed throughout the tutorial. It includes an overview of the main light and matter interaction processes along with a review of relevant optics formulations and radiometry quantities.

Section 2 examines the key concepts of fidelity and predictability. It underscores the benefits and requirements associated with their adoption in the design of models to be employed in physical and life sciences research. It also provides an overview of the main design strategies and simulation approaches employed in the development of these models.
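
As a concrete, if deliberately simplistic, illustration of one such simulation approach, the sketch below estimates the diffuse reflectance of a homogeneous slab with a Monte Carlo photon random walk. It is a toy stand-in, not one of the presenter's first-principles models; the albedo and optical depth are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def slab_reflectance(albedo=0.9, optical_depth=2.0, n_photons=20_000):
    """Monte Carlo estimate of diffuse reflectance of a 1-D homogeneous slab
    with isotropic scattering: a toy stand-in for stochastic light transport."""
    reflected = 0
    for _ in range(n_photons):
        tau, mu = 0.0, 1.0                       # photon depth, direction cosine
        while True:
            tau += -np.log(rng.random()) * mu    # free path to the next event
            if tau < 0.0:
                reflected += 1                   # exited the top: reflected
                break
            if tau > optical_depth:
                break                            # transmitted through the slab
            if rng.random() > albedo:
                break                            # absorbed
            mu = rng.uniform(-1.0, 1.0)          # isotropic re-direction

    return reflected / n_photons

print(slab_reflectance())
```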

A carefully designed model is of little use without reliable data. More specifically, the effective use of a model requires material characterization data (e.g., thickness and water content) to be used as input, supporting data (e.g., absorption spectra of material constituents) to be used during the light transport simulations, and measured radiometric data (e.g., hyperspectral reflectance and transmittance) to be used in the evaluation of modeled results. Section 3 addresses data availability and quality issues and highlights recent efforts to mitigate them.

No particular modeling design approach is superior in all cases. Researchers need to find an appropriate level of abstraction for the material at hand in order to balance data availability, correctness issues and application requirements. Regardless of the selected level of abstraction, simplifying assumptions and generalizations are usually employed in the current models due to practical constraints and the inherent complexity of natural materials. Section 4 examines these issues and their impact on the efficacy of existing simulation algorithms.

Section 5 discusses different model evaluation approaches, with a particular emphasis on quantitative and qualitative comparisons of modeled results with actual measured data and/or experimental observations. It also examines the recurrent trade-offs involving the pursuit of fidelity and its impact on the performance of simulation algorithms, along with strategies employed to maximize the fidelity/cost ratio of computer-intensive models.

Predictive models can provide a robust computational platform for the in silico investigation of phenomena that cannot be studied through traditional “wet” experimental procedures. Eventually, these investigations can also lead to model enhancements. Section 6 illustrates this iterative process through selected case studies. It also stresses the importance of reproducibility and discusses barriers that may need to be overcome in order to establish fruitful interdisciplinary collaborations.

Tutorial Learning Objectives

This tutorial builds on the experience gained during the development of first-principles light interaction models for different organic and inorganic materials. The lessons learned through this experience will be transparently shared with the attendees. The attendees will be introduced to essential biophysical concepts and simulation approaches relevant for the development of such models, as well as to the key role played by fidelity, predictability and reproducibility guidelines in this context. Moreover, in order to develop models following these guidelines, a scientifically sound framework should be employed. This brings us to the main learning objective of this tutorial: to provide attendees with a “behind the scenes” view of the different stages of this framework, namely data collection, modeling and evaluation. More specifically, theoretical and practical constraints that need to be addressed in each of these stages will be broadly discussed. These discussions will be illustrated by examples associated with openly accessible light interaction models. Besides providing attendees with a foundation for the enhancement of their own predictive light interaction models and the development of new ones, this tutorial also aims to bring to their attention the wide range of scientific contributions and technological advances that can be brought about by the use of these models.

Prerequisites

The intended audience includes graduate students, practitioners and researchers from academia, scientific organizations and industry. Participants will be exposed to practical issues that are usually not readily addressed in the related literature. The tutorial assumes a familiarity with basic optics concepts and radiometric terms. Experience with Monte Carlo methods would be helpful, but is not required.

Half-Day

Presented by: Wei Ji Leong, Lilly Thomas, Soumya Ranjan Mohanty

Description

Earth Foundation Models are trained on vast quantities of Earth observation data and can enable machine learning practitioners to bootstrap their workflows without requiring access to large quantities of labeled data. However, these Foundation Models can be hard to choose and configure properly, and people may find it challenging to fine-tune these general-purpose models for specific downstream tasks. This half-day workshop aims to guide participants through the available options, explain how different models have different capabilities based on how they were pre-trained, and provide guidance on how to apply Foundation Models in an end-to-end workflow.
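
The sketch below illustrates the general fine-tuning pattern the workshop builds on: load a pre-trained backbone, freeze it, and train only a new task head. The timm ViT used here is a generic stand-in, not a specific Earth Foundation Model; the class count and hyperparameters are assumptions.

```python
import torch
import timm

# Generic pre-trained ViT backbone as a stand-in for an Earth Foundation Model;
# num_classes replaces the head for a hypothetical 10-class downstream task.
model = timm.create_model("vit_base_patch16_224", pretrained=True, num_classes=10)

# Freeze the pre-trained encoder and train only the new classification head.
for name, p in model.named_parameters():
    p.requires_grad = name.startswith("head")

optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4)
loss_fn = torch.nn.CrossEntropyLoss()

x = torch.randn(4, 3, 224, 224)     # stand-in batch of image chips
y = torch.randint(0, 10, (4,))      # stand-in labels
optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
```

Swapping the frozen linear head for a decoder yields the semantic segmentation variant mentioned in the learning objectives below.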

Tutorial Learning Objectives

At the end of this workshop, participants should:

  1. Have some familiarity with which Earth Foundation Models are suitable for different use cases

  2. Be comfortable installing a pre-trained Foundation Model and understanding its documentation

  3. Know how to fine-tune a model on classification and semantic segmentation downstream tasks

Prerequisites

Tutorial participants are expected to have:

  • Good experience with Python programming

  • Familiarity with PyTorch or any other Deep Learning library

  • Access to a GPU device (either through cloud platforms like Google Colab/Kaggle or on-premise via a Jupyter-Hub cluster)

Half-Day

Presented by: Prof. James L. Garrison, Prof. Adriano Camps, Dr. Estel Cardellach

Description

Although originally designed for navigation, signals from the Global Navigation Satellite Systems (GNSS), i.e., GPS, GLONASS, Galileo and COMPASS, reflect strongly from the Earth's land and ocean surfaces. Effects of rough surface scattering modify the properties of reflected signals. Several methods have been developed for inverting these effects to retrieve geophysical data such as ocean surface roughness (winds) and soil moisture.

Extensive sets of airborne GNSS-R measurements have been collected over the past 20 years. Flight campaigns have included penetration of hurricanes with winds up to 60 m/s and flights over agricultural fields with calibrated soil moisture measurements. Fixed, tower-based GNSS-R experiments have been conducted to make measurements of sea state, sea level, soil moisture, ice and snow as well as inter-comparisons with microwave radiometry.

GNSS reflectometry (GNSS-R) methods enable the use of small, low-power, passive instruments. The power and mass of GNSS-R instruments can be made low enough to enable deployment on small satellites, balloons and UAVs. Early research sets of satellite-based GNSS-R data were collected by the UK-DMC satellite (2003), TechDemoSat-1 (2014) and the 8-satellite CYGNSS constellation (2016). HydroGNSS, to be launched in 2025, will use dual-frequency and dual-polarized GNSS-R observations with principal science goals addressing land surface hydrology (soil moisture, inundation and the cryosphere). Availability of spaceborne GNSS-R data, and the development of new applications from these measurements, is expected to increase significantly following the launch of these new satellite missions and other smaller ones (ESA’s PRETTY and FSSCat; China’s FY-3E; Taiwan’s FS-7R).

Recently, methods of GNSS-R have been applied to satellite transmissions in other frequencies, ranging from VHF (137 MHz) to K-band (18.5 GHz). So-called “Signals of Opportunity” (SoOp) reflectometry methods enable microwave remote sensing outside of protected bands, using frequencies allocated to satellite communications. Measurements of sea surface height, wind speed, snow water equivalent, and soil moisture at the root zone depth have been demonstrated with SoOp.

This half-day tutorial will summarize the current state of the art in physical modeling, signal processing and application of GNSS-R and SoOp measurements from fixed, airborne and satellite-based platforms.
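
To make the central observable concrete, the sketch below builds a toy delay-Doppler map (DDM) by correlating a noisy received signal against delayed, Doppler-shifted replicas of a stand-in spreading code. The code, delay, Doppler, and noise values are all illustrative, not a real GNSS signal.

```python
import numpy as np

rng = np.random.default_rng(1)

fs, n = 1.023e6, 1023                      # ~1 ms of signal, 1 sample per chip
code = rng.choice([-1.0, 1.0], size=n)     # stand-in PRN spreading code
t = np.arange(n) / fs

true_delay, true_doppler = 200, 1500.0     # samples, Hz (illustrative)
rx = np.roll(code, true_delay) * np.exp(2j * np.pi * true_doppler * t)
rx += 0.7 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))  # noise

# Correlate against a grid of candidate delays and Doppler shifts.
delays = np.arange(150, 251)
dopplers = np.arange(0.0, 3001.0, 250.0)
ddm = np.empty((dopplers.size, delays.size))
for i, fd in enumerate(dopplers):
    carrier = np.exp(2j * np.pi * fd * t)
    for j, d in enumerate(delays):
        replica = np.roll(code, d) * carrier
        ddm[i, j] = np.abs(np.vdot(replica, rx))   # coherent correlation power

i0, j0 = np.unravel_index(ddm.argmax(), ddm.shape)
print(dopplers[i0], delays[j0])    # recovers ~1500.0 Hz and 200 samples
```

The peak of the map recovers the delay and Doppler of the reflection; in real GNSS-R, the spread of power around that peak carries the geophysical information.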

Tutorial Learning Objectives

After attending this tutorial, participants should have an understanding of:

  • The structure of GNSS signals, and how the properties of these signals enable remote sensing measurements, in addition to their designed purpose in navigation.
  • Generation and interpretation of a delay-Doppler map.
  • Fundamental physics of bistatic scattering of GNSS signals from rough surfaces, and the relationship between properties of the scattered signal and geophysical variables (e.g., wind speed, sea surface height, soil moisture, ice thickness).
  • Conceptual design of reflectometry instruments.
  • Basic signal processing for inversion of GNSS-R observations.
  • Current GNSS-R satellite missions and the expected types of data to become available from them.

Prerequisites

Basic concepts of linear systems and electrical signals. Some understanding of random variables would be useful.

Half-Day

Presented by: Andriyan Bayu Suksmono, Donny Danudirdjo, Koredianto Usman

Description

Compressed Sensing/Compressive Sampling (CS) is an emerging method for reconstructing signals and images from far fewer samples than conventional (Shannon) sampling requires. This tutorial introduces the concepts and applications of CS in radar and related areas. Examples are presented that can be followed by participants with basic knowledge of signal processing. The tutorial will cover the following topics:

  1. A brief review of conventional/Shannon sampling
  2. Basic concept of Compressive Sampling/Sensing: random sampling and signal reconstruction (illustrated in the sketch after this list)
  3. Review of CS applications: CS-SFCW Radar, CS-VLBI Imaging, CS-Weather Radar Processing
  4. CS reconstruction algorithms using L1-minimization: the problem of L0-minimization, L0 relaxation to L1-minimization, Donoho’s premise, Tropp’s premise
  5. L1-minimization and convex programming: convex optimization, Karmarkar’s Interior Point Method, the weight point algorithm
  6. Graphical description of L1-minimization and the weight point algorithm
  7. Examples of CS problems using dictionary methods: direction-of-arrival estimation and face recognition
  8. L1-norm regularization methods: total variation (TV) minimization, sparse optimization, other regularization and fidelity terms
  9. Applications: radar signal denoising and target detection, remote sensing image restoration, image segmentation
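
As illustration for topics 2, 4 and 8, the sketch below reconstructs a sparse signal from far fewer random measurements than its length, using ISTA (iterative soft thresholding) to solve the L1-regularized least-squares problem. Sizes, sparsity, and the regularization weight are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sparse signal: length 256 with only 8 nonzero entries.
n, m, k = 256, 64, 8
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

# Random (compressive) measurements: m << n samples.
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x

# ISTA: iterative soft thresholding for min ||Ax - y||^2 + lam * ||x||_1.
lam = 0.01
step = 1.0 / np.linalg.norm(A, 2) ** 2       # guarantees convergence
xhat = np.zeros(n)
for _ in range(500):
    g = xhat - step * A.T @ (A @ xhat - y)                        # gradient step
    xhat = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)   # shrinkage

print(np.linalg.norm(xhat - x) / np.linalg.norm(x))  # relative error: small
```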

Tutorial Learning Objectives

After following this tutorial, participants will:

  • Understand the basic concept of CS (Compressive Sampling/Compressed Sensing)
  • Be able to perform a CS simulation on a digital signal
  • Understand the importance of CS, how it differs from conventional sampling, and its role in building efficient sensing/imaging devices
  • Know the various fields where CS can be applied

Prerequisites

This is an introductory session on Compressive Sampling/Compressed Sensing suitable for graduate (Master’s and PhD) students, research engineers, and scientists. Basic knowledge of signal processing is required.

Half-Day

Presented by: Yomna Eid, Mohammad Alasawedah, Abhishek Singh, Felix Cremer, Rafael Schouten

Description

Many geospatial datasets can be represented as either raster or vector data cubes, depending on the workflow requirements. Raster cubes store multi-dimensional arrays with coordinates like longitude and latitude, capturing the spatial aspects of a dataset. On the other hand, vector data, represented as geometry objects such as points, lines, and polygons, is traditionally stored as tables. Vector data cubes generalize this concept by having multi-dimensional arrays that include a geometry dimension to represent the spatial domain.

In this tutorial, attendees will learn how to work with raster data cubes that cover large spatial extents to derive vector data cubes that focus on specific areas of interest, allowing them to observe how geospatial features and their attributes change over time. Participants will explore workflows for sampling and aggregating raster data cubes using vector geometries to produce vector data cubes, leveraging R, Python, and Julia. Additionally, this tutorial will show how the concept of raster and vector data cubes eases data fusion, for example by employing machine learning (Random Forest) and deep learning (CNN) models to fill gaps in Leaf Area Index (LAI) products caused by cloud cover in Sentinel-2 images. Participants will learn all the key concepts through a real hands-on use case and explore how to implement them in multiple programming languages, expanding their skills and knowledge.
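
The sketch below shows one minimal way to go from a raster cube to a vector data cube in Python: mask a (time, y, x) cube with each polygon and stack the per-polygon time series along a geometry dimension. File names are hypothetical, and it assumes the cube carries georeferencing that rioxarray can read; the tutorial covers analogous workflows in R and Julia.

```python
import numpy as np
import xarray as xr
import rioxarray  # noqa: F401  (registers the .rio accessor on xarray objects)
import geopandas as gpd
from rasterio.features import geometry_mask

cube = xr.open_dataarray("lai_cube.nc")    # dims (time, y, x); hypothetical file
farms = gpd.read_file("farms.geojson")     # farm boundary polygons; hypothetical

transform = cube.rio.transform()           # requires georeferenced coordinates
out_shape = (cube.sizes["y"], cube.sizes["x"])

series = []
for geom in farms.geometry:
    # Boolean mask of pixels inside this farm polygon.
    mask = geometry_mask([geom], out_shape=out_shape,
                         transform=transform, invert=True)
    masked = cube.where(xr.DataArray(mask, dims=("y", "x")))
    series.append(masked.mean(dim=("y", "x")))   # one LAI time series per farm

# Vector data cube: dims (geometry, time), one mean series per polygon.
vcube = xr.concat(series, dim="geometry")
vcube = vcube.assign_coords(geometry=np.arange(len(farms)))
print(vcube)
```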

This session will also highlight the advantages of using Quarto as a publishing tool, facilitating cross-language notebooks in spatial data science.

Case Study: In this use case, we will use a dataset from the Trentino-South Tyrol region, which covers the Autonomous Provinces of Trento and Bolzano in North-Eastern Italy, focusing particularly on grasslands in the province of Bolzano. While Sentinel-1 and Sentinel-2 provide valuable raster data cubes that cover large spatial areas, handling such extensive datasets can be computationally challenging. In this tutorial, we demonstrate how to obtain Sentinel vector data cubes using farm polygons, limiting the analysis to the areas within farm boundaries. Furthermore, we explore methods for sampling points from the created vector cubes to monitor LAI values over time. These vector data cubes will be used in subsequent data fusion modelling to enhance LAI estimation, utilizing SAR data from Sentinel-1 and additional covariates for the year 2023.

Tutorial Learning Objectives

  1. A hands-on introduction to the topics of data cube creation (raster and vector) and application analyses
  2. Understanding the concept of multi-source data fusion using ML/DL modelling
  3. Familiarity with how the same workflows are run in R, Python and Julia
  4. Familiarity with Quarto as a publishing tool

Prerequisites

Suitable for M.Sc. students, Ph.D. students, and researchers.

  1. Knowledge of R, Python or Julia
  2. Basic knowledge of remote sensing

Full-Day

Presented by: Esther Oyedele

Description

Effective science communication is crucial for advancing public understanding of complex Earth processes and supporting informed, data-driven decisions in policy and community action. As research in Earth observation (EO) and remote sensing grows, so does the need for scientists to effectively communicate findings that address today’s urgent environmental and social challenges. This tutorial will equip participants with practical tools and strategies to transform their technical research into engaging, accessible narratives that highlight the impact of remote sensing in areas like climate resilience, sustainability, and emergency response.

Aligned with the 2025 IEEE IGARSS themes, this tutorial will focus on high-impact topics, including sustainability, climate adaptation, and the synergetic use of multiple EO missions. Participants will learn to communicate these subjects using EO and remote sensing data in ways that resonate with diverse audiences, from policy makers and interdisciplinary collaborators to the public. Through a blend of lecture, case studies, and hands-on activities, this session will cover the essentials of storytelling, audience adaptation, and visual design, offering a step-by-step guide for crafting narratives that are scientifically accurate, visually engaging, and widely appealing.

A major component of this tutorial will focus on building narratives that connect scientific data to real-world impacts. Using examples from climate resilience, water scarcity, disaster response, and sustainability, participants will gain experience in framing remote sensing insights within the larger narrative of resilience and sustainability, enhancing the reach and relevance of their work. The tutorial will guide participants in translating EO data into narratives that emphasize human and environmental impacts. Participants will be better equipped to convey the significance of their findings beyond academia.

This tutorial will also tackle the complexities of communicating scientific uncertainty, a frequent challenge in fields reliant on predictive models and data interpretation. Participants will learn best practices for addressing uncertainty in a way that builds trust with audiences while maintaining scientific integrity. Through practical exercises, participants will practice framing uncertainty within the context of resilience and adaptive action, helping audiences understand its role in advancing knowledge and guiding decisions.

Finally, this tutorial incorporates DEIAB (Diversity, Equity, Inclusion, Accessibility, and Belonging) principles to ensure that science communication is inclusive, empathetic, and accessible to diverse groups. We will explore strategies for making EO data relatable to communities affected by environmental change, fostering inclusive dialogues that respect different cultural, economic, and social contexts. Participants will learn approaches for engaging with communities and decision-makers from varied backgrounds, using empathetic storytelling to build a sense of connection and relevance.

By the end of this tutorial, participants will be equipped to communicate their research as part of a global narrative on sustainability and resilience, contributing to a more informed public that is better prepared to address environmental challenges. This skill set will empower attendees to take on the role of both researcher and communicator, amplifying the societal impact of their work in EO and remote sensing.

Tutorial Learning Objectives

Enhance Science Communication Skills: Participants will build foundational skills in storytelling, data visualization, and message adaptation with a focus on remote sensing applications. They will learn to distill complex scientific information into compelling narratives that convey the significance of their work. Special emphasis will be placed on combining visuals and data into coherent stories that effectively illustrate Earth processes and their societal impacts.

Promote Audience-Centric Messaging: This tutorial will guide participants in developing communication strategies tailored for various audiences, from scientific peers and policy makers to the general public. Participants will be introduced to frameworks for identifying the core message for each group, as well as methods to make presentations engaging, accessible, and relevant. This skill ensures that technical research resonates widely, influencing informed decisions across different sectors.

Hands-on Application of Data Storytelling: Through practical exercises, participants will practice creating storyboards and messaging plans that translate Earth observation insights into relatable messages. Activities will include developing narratives around topics such as climate resilience, and resource management, allowing participants to refine their storytelling techniques and immediately apply what they learn to their own research.

Foster Empathetic Science Outreach: To address the diverse backgrounds and needs of audiences, this tutorial emphasizes the importance of inclusive, respectful language and visuals. Participants will learn to consider factors such as cultural context, literacy levels, and differing perspectives, ensuring that messages are accessible and engaging for all. By aligning with DEIAB principles, participants will gain the skills to make their research more inclusive, fostering a stronger connection between science and society.

Prerequisites

Participants should have a foundational understanding of Earth observation (EO) and remote sensing data applications, as this tutorial will use examples from these fields to build communication skills. However, no prior experience in science communication is required. This tutorial is designed to introduce fundamental concepts in storytelling, data visualization, and audience engagement, making it accessible to those new to these skills.

An interest in public outreach, policy influence, or interdisciplinary collaboration will enhance participants’ experience, as the session emphasizes effective communication across various audiences, from scientific peers to the general public. Participants will benefit from a curiosity about making scientific data more accessible and relevant to broader audiences, including policymakers, educators, and community stakeholders.

Participants should also be open to hands-on, interactive exercises that encourage experimentation with storyboarding, audience adaptation, and message planning. While familiarity with remote sensing data visualization tools (such as GIS or data analysis software) may be helpful, it is not required. The tutorial will guide participants through best practices for creating visuals that complement storytelling and enhance public understanding.

This session will particularly benefit researchers, early-career scientists, and professionals looking to communicate their work on sustainability, climate resilience, and digital innovation within EO data. By the end of the tutorial, participants will be equipped to transform their technical work into compelling, accessible narratives that engage diverse audiences and highlight the real-world impact of remote sensing research.

Full-Day

Presented by: Gabriele Cavallaro, Rocco Sedona, Manil Maskey, Thomas Brunschwiler

Description

In today’s information era, the rapid proliferation of data has led to increasingly complex, data-driven challenges across science and engineering. This evolution has initiated a paradigm shift in Machine Learning (ML), emphasizing unsupervised and self-supervised learning to handle vast datasets and multimodal learning to integrate heterogeneous data sources. Foundation Models (FMs), pre-trained neural networks designed to capture a broad array of visual features, are central to this shift, offering a versatile base for complex tasks in computer vision (CV) such as object classification, detection, and segmentation. By leveraging these FMs, which require less labeled data and perform efficiently across domains, ML practitioners are making significant strides in fields with high data demands, such as Earth Observation (EO). However, unique challenges persist in EO, particularly in adapting traditional CV techniques to multispectral and hyperspectral data and overcoming the high computational costs of FM training. This tutorial addresses these challenges by guiding participants through the lifecycle of deploying and fine-tuning geospatial AI FMs in cloud environments.

Tutorial Learning Objectives

Participants will gain a solid understanding of the foundational principles of Foundation Models (FMs) and engage in hands-on training to develop and apply these models specifically within geosciences. This tutorial will cover key aspects of geospatial data analysis and tackle challenges unique to Earth Observation (EO), such as processing multi-source and multitemporal satellite remote sensing datasets. Participants will acquire skills to use FMs effectively across various stages of geoscience research and practical applications. They will explore cloud-based solutions for training and deploying FMs, learning to apply fine-tuning techniques to adapt models for EO applications and to build pipelines that deploy models into production environments and evaluate them on new, real-time data. AWS Cloud Computing access credentials will be provided. To maximize hands-on time, course organizers will pre-configure resources and tools, preventing setup delays during the tutorial. Participants will work directly with pre-implemented algorithms and data, including multitemporal and multimodal AI geospatial FMs and downstream fine-tuning applications. They may also bring their data and applications for customization, enabling them to navigate the complete lifecycle of an FM project—from fine-tuning a model to optimizing it for a specific EO use case. Additionally, the tutorial will highlight interdisciplinary collaboration, drawing from examples in academia, government, and industry to showcase the collective advancements driving FM development.
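
As one hedged sketch of the deployment stage: if a fine-tuned model artifact lives in S3, an AWS SageMaker endpoint can serve it with a few lines. SageMaker is an assumption here (the tutorial states AWS access, not the specific service), and the bucket, role, and entry-point script are hypothetical placeholders.

```python
from sagemaker.pytorch import PyTorchModel

# Hypothetical artifact and role; inference.py would define model_fn/predict_fn.
model = PyTorchModel(
    model_data="s3://my-bucket/finetuned-fm/model.tar.gz",
    role="arn:aws:iam::123456789012:role/SageMakerRole",
    entry_point="inference.py",
    framework_version="2.1",
    py_version="py310",
)

# Stand up a real-time inference endpoint on a single GPU instance.
predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.g4dn.xlarge",
)
# result = predictor.predict(chip_array)   # chip_array: e.g., a NumPy image chip
```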

Prerequisites

Participants should have a strong background in machine learning and deep learning, along with familiarity with Vision Transformer (ViT) architectures and their variants. Experience in Python programming, including proficiency in foundational libraries such as NumPy and scikit-learn, as well as in deep learning frameworks like PyTorch and/or TensorFlow, is required. Each participant should bring a laptop (Windows, Mac, or Linux).

Full-Day

Presented by: Noam Levin

Description

Awareness of light pollution has grown in the past two decades, especially of its negative impacts on our ability to observe the night sky, on biodiversity, and on human health. Remote sensing of artificial lights is relevant for conservation planning, both as a proxy for human activities and as a stressor and source of light pollution. Globally, light pollution is increasing, and with the current transition to LED lighting it may increase even faster. Whereas until a few years ago most available night-time sensors were panchromatic and of coarse spatial resolution, in recent years new multispectral sensors have been developed, both spaceborne and ground-based, offering higher spatial resolution as well as multidirectional measurements. The proposed workshop will be composed of two (or potentially three) parts:

  1. An overview of currently available spaceborne and ground-based sensors for quantifying night-time lights, their capabilities and limitations.
  2. (Optional) Night-time measurements of night lights, demonstrating the use of a DSLR camera with a fish-eye lens for measuring night sky brightness, TESS-4C for continuous measurements of night sky brightness, and LANcube for mobile measurements of night lights in five directions.
  3. A hands-on exercise in which participants will be invited to work with spaceborne images of night lights (VIIRS/DNB and SDGSAT-1) as well as ground-based measurements (LANcube photometer), to examine how different types of sensors can be used to quantify the exposure of ecosystems to light pollution, and how this information can be applied for planning ecological corridors.

The exercises will assume that participants have access to ArcGIS Pro; however, they can also work on their preferred platform (QGIS, R, Python).
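
For a flavour of part 3, the sketch below reads a (hypothetical) VIIRS/DNB composite with rasterio and applies a simple radiance threshold to map lit pixels, a crude proxy for ecosystem exposure to light pollution. The file name, threshold, and metric-CRS assumption are all illustrative.

```python
import rasterio

# Hypothetical monthly VIIRS/DNB composite clipped to a study area.
with rasterio.open("viirs_dnb_composite.tif") as src:
    radiance = src.read(1).astype("float32")   # nW/cm^2/sr
    # Pixel area from the affine transform (assumes a metric CRS).
    pixel_area_km2 = abs(src.transform.a * src.transform.e) / 1e6

# A simple radiance threshold separates "lit" from "dark" pixels; the
# threshold value is illustrative and should be tuned per sensor and region.
lit = radiance > 1.0
print(f"Lit fraction: {lit.mean():.1%}, "
      f"lit area: {lit.sum() * pixel_area_km2:.1f} km2")
```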

Tutorial Learning Objectives

Acquaintance with spaceborne and ground-based night-time light sensors, their capabilities and limitations

Prerequisites

A laptop with GIS/remote sensing software and/or Python/R

Full-Day

Presented by: Arun M. Saranathan, Akash Ashapure, Ryan E. O’Shea

Description

This tutorial will cover the fundamentals of working with Aquaverse, a machine-learning-centered processing workflow designed to generate atmospheric correction (AC), biogeochemical, and optical property products from remote sensing observations over inland and coastal waters. Aquaverse provides an end-to-end solution for deriving accurate remote sensing reflectance (Rrs) and downstream water quality products, along with the associated estimation uncertainties, for a diverse set of multi- and hyperspectral satellite sensors such as Sentinel-2 (Multispectral Instrument; MSI), Landsat-8/9 (Operational Land Imager; OLI), Sentinel-3 (Ocean and Land Colour Instrument; OLCI), the SeaHawk microsatellite (part of the Ocean Color Imager program), the Planet SuperDove commercial satellite, the Ocean Color Instrument (OCI) aboard Plankton, Aerosol, Cloud, ocean Ecosystem (PACE), the Earth Surface Mineral Dust Source Investigation (EMIT), and the Hyperspectral Small Satellite for Ocean Observation (HYPSO).

The tutorial will include a comprehensive introduction to the theoretical and practical aspects of Aquaverse’s components, including its core ML architecture, the Mixture Density Network (MDN), and its application to the AC model and water quality retrievals. The workshop will also introduce the inherent predictive uncertainty quantification available for MDNs. This tutorial will provide participants with the tools necessary to apply these methods to their specific research and applications.
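
For orientation, the sketch below is a minimal PyTorch Mixture Density Network, not the Aquaverse implementation (see the MDN toolbox linked below): it maps an input spectrum to a Gaussian mixture over a retrieved quantity, so every prediction carries its own uncertainty. Layer sizes and the mixture count are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MDN(nn.Module):
    """Minimal Mixture Density Network: predicts a Gaussian mixture over the
    target, so each retrieval comes with a built-in uncertainty estimate."""
    def __init__(self, in_dim, n_components=5):
        super().__init__()
        self.hidden = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU())
        self.pi = nn.Linear(64, n_components)          # mixture weights
        self.mu = nn.Linear(64, n_components)          # component means
        self.log_sigma = nn.Linear(64, n_components)   # component spreads

    def forward(self, x):
        h = self.hidden(x)
        return F.log_softmax(self.pi(h), dim=-1), self.mu(h), self.log_sigma(h)

def mdn_nll(log_pi, mu, log_sigma, y):
    # Negative log-likelihood of y under the predicted mixture.
    dist = torch.distributions.Normal(mu, log_sigma.exp())
    log_prob = dist.log_prob(y.unsqueeze(-1)) + log_pi
    return -torch.logsumexp(log_prob, dim=-1).mean()

model = MDN(in_dim=8)
x = torch.randn(16, 8)    # e.g., Rrs at 8 bands (stand-in values)
y = torch.rand(16)        # e.g., a water quality parameter (stand-in values)
loss = mdn_nll(*model(x), y)
loss.backward()
```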

Participants will gain hands-on experience in:

  1. Atmospheric correction using the MDN-based AC model.
  2. Utilizing Aquaverse to retrieve water quality parameters and their uncertainties.
  3. Interpreting results through visualizations, including maps, scatterplots, and time series.
  4. Leveraging open-source tools, such as the MDN toolbox (available at https://github.com/STREAM-RS/STREAM-RS), to process data for multiple sensors.
  5. Installing and running the MDN-based AC model for generating Rrs products.

By integrating atmospheric correction and water quality estimation into a single workflow, this tutorial provides a unique opportunity to explore synergistic uses of multiple EO missions and sensors, advancing the understanding of aquatic environments and promoting the application of remote sensing data to achieve Sustainable Development Goals (SDGs).

Tutorial Learning Objectives

Participants will:

  • Understand the fundamentals of a Mixture Density Network and the associated prediction uncertainties for this model.
  • Understand the MDN-based atmospheric correction for inland and coastal waters within Aquaverse, including its associated uncertainties.
  • Learn to estimate biogeochemical parameters and optical properties from remote sensing data using MDNs, including their associated uncertainties.
  • Gain practical experience in using Aquaverse tools and outputs for scientific research and applications.
  • Develop skills to apply Aquaverse’s processing workflow to a wide range of EO sensors, ensuring flexibility and adaptability for various use cases.
  • Explore advanced use cases, such as generating spectral aerosol optical thickness, creating pixel-wise uncertainty maps, and evaluating the impact of atmospheric correction residuals on downstream products.

Prerequisites

Participants should have a basic understanding of remote sensing principles and aquatic remote sensing applications. Familiarity with Python programming and basic data visualization is recommended but not required.

Full-Day

Presented by: Prof. Ashok K. Keshari (lecture session), Prof. Jagannath Aryal (lecture session), and Mr. Rajeev Ranjan (hands-on session)

Description

Natural catastrophes are becoming more frequent and intense due to climate change and human interference. Traditional approaches to disaster management often struggle to provide timely, accurate, localized information for robust and rapid disaster mitigation responses, especially in inaccessible and impassable terrain. Remote sensing data, however, enable users to map disaster-prone areas more precisely and repetitively. The accessibility and availability of remote sensing data, and their integration with Geospatial Artificial Intelligence (GeoAI) technology and cloud-based platforms, have accelerated non-structural approaches to disaster management. Unconventional technology such as GeoAI and cloud-based remote sensing allows easy and rapid development of tools, and offers appealing characteristics such as automation, scalability, affordability, wide applicability, minimal computational requirements, user-friendliness, high market demand, and substantial job returns. Remote sensing-derived attributes of disaster events are also required as input to conventional process-based models. Hence, it has become pressing to acquire integrated knowledge and skills in GeoAI and cloud-based remote sensing computation for developing improved and robust disaster management strategies.

The proposed tutorial will provide lectures and hands-on experience to IGARSS25 participants on applications of advanced unconventional techniques for improved disaster management. The tutorial will leverage unconventional technology such as GeoAI in conjunction with a cloud-based platform, particularly Google Earth Engine (GEE), for real-time application of remote sensing in disaster management. It will cover the end-to-end workflow, from acquisition of data such as Synthetic Aperture Radar (SAR) imagery to the integrated application of unconventional technology for real-time mapping and monitoring of disasters, particularly floods, including riverine floods, urban floods, and Glacial Lake Outburst Floods (GLOFs). The insights gained during this tutorial will enable participants to develop and customize automated, scalable applications and tools for their unique research and wider application needs in regional disaster resilience and risk mitigation, particularly for floods.
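
As a preview of the hands-on sessions, the sketch below maps flood extent by Sentinel-1 change detection in the GEE Python API: pixels whose VV backscatter drops sharply after an event are flagged as water. The area of interest, dates, and thresholds are hypothetical placeholders, and the sketch assumes an authenticated Earth Engine session.

```python
import ee

ee.Initialize()  # assumes prior `earthengine authenticate` / project setup

aoi = ee.Geometry.Rectangle([85.0, 26.0, 86.0, 27.0])   # hypothetical AOI
s1 = (ee.ImageCollection("COPERNICUS/S1_GRD")
      .filterBounds(aoi)
      .filter(ee.Filter.eq("instrumentMode", "IW"))
      .select("VV"))                                     # VV backscatter in dB

before = s1.filterDate("2024-06-01", "2024-06-15").median()
after = s1.filterDate("2024-07-01", "2024-07-15").median()

# Open water strongly lowers C-band backscatter: flag pixels whose VV dropped
# by more than 3 dB and are darker than -16 dB after the event.
flood = after.subtract(before).lt(-3).And(after.lt(-16))

flooded_m2 = flood.multiply(ee.Image.pixelArea()).reduceRegion(
    reducer=ee.Reducer.sum(), geometry=aoi, scale=30, maxPixels=1e9)
print(flooded_m2.getInfo())
```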

Relevance

The IGARSS community includes many researchers with strong interests in emerging unconventional technology such as GeoAI and cloud-based remote sensing applications in disaster management, and interest in these technologies is growing rapidly due to the many appealing characteristics noted above. The proposed tutorial is therefore framed to meet the present requirements of IGARSS participants and will draw interest from multiple core and applied disciplines. It aligns with the goal of IGARSS to foster advancement in remote sensing and geospatial fields for worldwide benefit. The tutorial addresses the combined application of unconventional technology in disaster management, which will appeal to researchers and practitioners in hydrology, climate science, urban planning, and environmental monitoring. By the end, attendees will have acquired insights and skills in the integrated application of cutting-edge unconventional technology such as GeoAI and cloud-based remote sensing, building their capabilities in scientific understanding and in developing strategies for preparedness, response, and disaster resilience, particularly for floods, within their communities and globally.

Tutorial Learning Objectives

The proposed tutorial will provide insights into the combined application of unconventional technologies, namely GeoAI and cloud-based remote sensing (i.e., GEE), particularly with SAR data, in the field of disaster management, with an emphasis on floods. The specific learning objectives of the proposed tutorial are:

  1. Understand the application of remote sensing (particularly SAR) and geospatial technology in disaster management, here floods: Gain a comprehensive understanding of cloud-based advanced remote sensing data for real-time disaster monitoring and response.

  2. Process cloud-based advanced remote sensing data (particularly SAR) and perform geospatial analysis with artificial intelligence models: Learn to navigate and use the GEE cloud-based platform for large-scale, real-time data processing, enabling efficient analysis of multisource remote sensing data, particularly SAR.

  3. Apply GeoAI for real-time mapping and monitoring of real-world disaster scenarios, in this case floods: Explore the application of GeoAI to specific disaster types, here floods, to improve preparedness and mitigation efforts by identifying vulnerable zones.

  4. Hands-on practice sessions on real-world applications of GeoAI and cloud-based SAR remote sensing data (unconventional technologies) for mapping and monitoring of disaster events, particularly floods; a brief classification sketch follows this list.
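For illustration, the sketch below shows the kind of GeoAI step the hands-on sessions target: a random forest trained server-side in GEE on Sentinel-1 features. The training asset path, dates, and the 'flooded' label property are hypothetical assumptions, not materials from the tutorial itself.

```python
# Illustrative GeoAI step: random forest trained server-side in GEE on
# SAR features. Asset path, dates, and label name are hypothetical.
import ee

ee.Initialize()

# Median VV/VH composite over an assumed event window.
image = (ee.ImageCollection('COPERNICUS/S1_GRD')
         .filterDate('2024-07-01', '2024-07-15')
         .filter(ee.Filter.eq('instrumentMode', 'IW'))
         .filter(ee.Filter.listContains(
             'transmitterReceiverPolarisation', 'VH'))
         .select(['VV', 'VH'])
         .median())

# Hypothetical labelled points with a binary 'flooded' property.
samples = ee.FeatureCollection('users/example/flood_training')

# Build a training table by sampling the composite at the points.
training = image.sampleRegions(
    collection=samples, properties=['flooded'], scale=30)

# 100-tree random forest; GEE evaluates the model in the cloud.
classifier = ee.Classifier.smileRandomForest(100).train(
    features=training,
    classProperty='flooded',
    inputProperties=image.bandNames())

flood_map = image.classify(classifier)
```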

The insights and skills gained during the tutorial will help participants enrich their scientific capability to understand disasters and to develop strategies for improved preparedness and mitigation.

Prerequisites

Participants should have basic knowledge of remote sensing and Geographic Information Systems (GIS), a foundational understanding of AI, beginner-level experience with the Google Earth Engine (GEE) cloud-based platform (preferred but not required), basic programming skills in Python (e.g., via Google Colab) and JavaScript, and an interest in disaster risk reduction and environmental monitoring. Attendees should bring a laptop in good working condition for data processing and visualization and must have a working GEE account (associated with a Gmail account).

Full-Day

Presented by: Dr. Francescopaolo Sica, Dr. Fabio Pacifici, Dr. Ksenia Bittner

Description

This tutorial explores the complementary strengths of optical and Synthetic Aperture Radar (SAR) data for remote sensing, highlighting their unique characteristics, the necessary pre-processing steps, and strategies for optimal integration into machine learning pipelines. These techniques are essential for a wide range of end-user applications that monitor the Earth's dynamic processes, such as environmental change monitoring, disaster response and infrastructure management.

The importance of sensor synergies in Earth observation has increased significantly with the availability of high-resolution satellite data. Optical and SAR data provide different but complementary information that, when integrated, yields a richer understanding of the Earth's surface. Optical sensors collect data based on visible and infrared light, providing detailed visual information, but are often constrained by weather conditions and daylight requirements. In contrast, SAR sensors actively emit microwave signals, allowing consistent data acquisition regardless of weather or lighting conditions. This complementarity allows continuous monitoring of the Earth's dynamic processes, particularly in regions prone to cloud cover or adverse conditions.

Optical remote sensing captures the spectral reflectance of the Earth's surface in the visible and infrared bands, providing detailed information on surface characteristics, vegetation health, land cover and water bodies. High-resolution optical data allow precise mapping of small-scale features, which is essential for tasks such as urban monitoring, vegetation health assessment and change detection. However, optical data can be limited by atmospheric interference, cloud cover and low light conditions. This tutorial provides an overview of optical sensor types, key applications, and the common pre-processing steps required to improve data quality, including radiometric and atmospheric corrections.
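As a pointer to what the radiometric step looks like in practice, the sketch below converts raw digital numbers (DN) to reflectance with a linear gain/offset, the form most optical Level-1 products use. The file name and coefficient values are illustrative assumptions; real values come from the product metadata.

```python
# Minimal radiometric-correction sketch: DN to reflectance via a
# linear gain/offset. GAIN, OFFSET, and the file name are assumed;
# actual coefficients are read from the product metadata.
import numpy as np
import rasterio

GAIN = 1e-4     # assumed scale factor from product metadata
OFFSET = 0.0    # assumed additive offset from product metadata

with rasterio.open('optical_band.tif') as src:  # hypothetical file
    dn = src.read(1).astype(np.float32)

reflectance = GAIN * dn + OFFSET
reflectance = np.clip(reflectance, 0.0, 1.0)  # keep physically plausible range
```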

SAR sensors operate in the microwave frequency range, transmitting and receiving backscattered signals from the Earth's surface that, depending on the frequency, can penetrate clouds, vegetation and sometimes the ground. SAR data is particularly effective in monitoring surface roughness, moisture content and structural characteristics, making it invaluable for applications such as flood mapping and soil moisture estimation. This tutorial will cover the unique imaging properties of SAR, from polarisation to wavelength considerations, and essential pre-processing tasks such as radiometric calibration and geometric correction, which are critical to aligning SAR data with optical imagery.
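A small sketch of the SAR radiometric step described above: converting calibrated linear backscatter (sigma0) to decibels, the scale in which thresholds such as water masks are usually defined. The input file here is a hypothetical, already-calibrated product.

```python
# Convert calibrated linear backscatter (sigma0) to decibels.
# 'sar_sigma0.tif' is a hypothetical, already-calibrated product.
import numpy as np
import rasterio

with rasterio.open('sar_sigma0.tif') as src:
    sigma0 = src.read(1).astype(np.float32)

# Mask non-positive (no-data) pixels before taking the logarithm.
sigma0 = np.where(sigma0 > 0, sigma0, np.nan)
sigma0_db = 10.0 * np.log10(sigma0)
```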

The integration of optical and SAR data requires several pre-processing steps and the resolution of challenges specific to each sensor type. Given the different spatial resolutions, orientations and acquisition methods, the alignment of SAR and optical data is critical to achieving seamless integration into machine learning models. This tutorial will focus on techniques for data co-registration, resolution matching and transformation processes that enable coherent analysis across modalities.
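A minimal co-registration sketch, assuming two hypothetical GeoTIFFs: the SAR band is warped onto the optical image's grid (CRS, transform, and pixel size) so the two arrays align pixel-for-pixel, which is a common first step before stacking modalities for analysis.

```python
# Warp a SAR band onto the optical image's grid so the arrays align
# pixel-for-pixel. 'optical.tif' and 'sar.tif' are hypothetical files.
import numpy as np
import rasterio
from rasterio.warp import reproject, Resampling

with rasterio.open('optical.tif') as opt, rasterio.open('sar.tif') as sar:
    optical = opt.read(1)
    aligned_sar = np.empty_like(optical, dtype=np.float32)
    reproject(
        source=rasterio.band(sar, 1),
        destination=aligned_sar,
        src_transform=sar.transform,
        src_crs=sar.crs,
        dst_transform=opt.transform,
        dst_crs=opt.crs,
        resampling=Resampling.bilinear,  # smooth resampling for continuous data
    )
# 'optical' and 'aligned_sar' now share one grid and can be stacked.
```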

Through case studies and live demonstrations, participants will see the benefits of a synergistic approach in real-world applications. Examples from environmental monitoring, disaster response and infrastructure assessment will be used to demonstrate how the combination of optical and SAR data can improve detection and monitoring capabilities. In flood monitoring, for example, SAR can detect water under cloud cover, while optical data provide visual detail after the event. In infrastructure assessment, SAR's sensitivity to structural changes complements the high-resolution mapping provided by optical imagery.

Tutorial Learning Objectives

This tutorial aims to provide participants with the basic knowledge and practical skills required to exploit the combined strengths of optical and Synthetic Aperture Radar (SAR) data for Earth observation. At the end of the tutorial, participants will be able to:

  1. Understand the characteristics of optical and SAR data: Participants will gain a solid understanding of the unique characteristics of optical and SAR sensors, including their data acquisition processes, spectral and spatial resolutions, and imaging limitations. This knowledge will help them to select the most appropriate data source for different Earth observation applications, such as environmental monitoring and disaster response.

  2. Master pre-processing techniques for optimal data quality: Effective use of remote sensing data requires careful pre-processing to correct for sensor-specific biases and environmental effects. Participants will learn essential pre-processing steps for both optical and SAR data, including radiometric and atmospheric corrections for optical imagery and calibration and geometric corrections for SAR data. These steps ensure that the data is ready for integration into machine learning models.

  3. Apply data integration methods in machine learning pipelines: This tutorial will introduce techniques for co-registration, resolution matching and transformation of optical and SAR data, allowing participants to effectively combine them for analysis. By understanding these integration methods, participants will be able to develop machine learning models that benefit from the complementary nature of these datasets, improving their accuracy and reliability for various monitoring tasks; a minimal pipeline sketch follows this list.

  4. Implement multi-sensor data in real-world applications: Through case studies and hands-on exercises, participants will learn how to use multi-sensor data for practical applications such as flood monitoring, vegetation analysis and infrastructure assessment. They will see how combining optical and SAR data can improve detection capabilities, increase resilience in adverse conditions and provide richer insights for decision making.
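An illustrative fusion-pipeline sketch for objective 3: co-registered optical and SAR arrays become per-pixel feature vectors feeding a standard random-forest classifier. The random inputs stand in for real co-registered bands and a label raster, which are assumptions for the sake of a self-contained example.

```python
# Fuse co-registered optical and SAR bands into per-pixel features and
# train a random forest. Random arrays stand in for real rasters.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Assumed inputs: red/nir (optical reflectance) and vv_db (SAR, dB),
# all on the same grid, plus an integer label raster (stand-in truth).
h, w = 512, 512
red, nir, vv_db = (np.random.rand(h, w).astype(np.float32) for _ in range(3))
labels = np.random.randint(0, 2, size=(h, w))

# Stack modalities into per-pixel feature vectors: (n_pixels, n_features).
features = np.stack([red, nir, vv_db], axis=-1).reshape(-1, 3)
X_train, X_test, y_train, y_test = train_test_split(
    features, labels.ravel(), test_size=0.3, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print('held-out accuracy:', clf.score(X_test, y_test))
```

In practice a spatially blocked train/test split is preferable to a random pixel split, since neighbouring pixels are strongly correlated; the random split here only keeps the sketch short.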

Prerequisites

Participants should have:

  • A basic understanding of remote sensing and Earth observation, including experience with optical or SAR data.

  • Familiarity with common geospatial and remote sensing software (e.g. QGIS, SNAP, or similar).

  • Basic knowledge of machine learning concepts and techniques, especially for image processing and classification.

  • Some programming experience, preferably in Python, to follow along with code demonstrations.

While previous exposure to SAR or optical data processing is beneficial, the tutorial will provide an introductory overview of key concepts. All tutorial materials will be provided in an open-source format, allowing for continued learning and collaboration beyond the conference.