Create GEO & Photo Data from UAVs

Evhenii Rvachov
13 min read · Feb 22, 2023
Photo by Ravi Patel on Unsplash

This tutorial will guide you through the process of using a drone, AWS, and the Google Maps API to create photometry and attach each photo to a location on the map. The tutorial assumes that you have experience as an embedded engineer, AWS engineer, and UAV operator and that you have experience working with the Google Maps API.

The tutorial consists of eight steps, each of which will take you through a different part of the process. In Step 1, you’ll learn how to fly the drone and capture images. In Step 2, you’ll upload the images to an AWS S3 bucket. In Step 3, you’ll set up an AWS Lambda function to process the images. In Step 4, you’ll configure an S3 bucket trigger to automatically invoke the Lambda function when new images are uploaded. In Step 5, you’ll geotag the images with the Google Maps API. In Step 6, you’ll store the processed images in another S3 bucket. In Step 7, you’ll use AWS QuickSight to visualize the data. And in Step 8, you’ll learn how to scale the solution using AWS.

By the end of this tutorial, you’ll have a complete end-to-end solution for creating photometry and attaching each photo to a location on the map, using AWS and the Google Maps API.

Our short plan!

Step 1: Set up your drone

The first step is to set up your drone for aerial photography. You’ll need a drone with a high-resolution camera that can capture images of the area of interest. You’ll also need to make sure your drone is equipped with GPS and other sensors to enable accurate geotagging of each photo.

Step 2: Upload images to AWS S3 bucket

Once you’ve captured images with your drone, you can upload them to an AWS S3 bucket for storage and processing. You can use the AWS Command Line Interface (CLI) or an S3 client tool to upload the images.

Step 3: Set up AWS Lambda function

Next, you’ll need to set up an AWS Lambda function that will process the images. You can use Python or another programming language supported by AWS Lambda to write your image-processing code.

In your Lambda function, you can use image processing libraries such as OpenCV or Pillow to analyze the images. For example, you can crop or resize the images, apply filters, or perform object detection.

Step 4: Configure S3 bucket trigger

To automatically trigger your Lambda function when new images are uploaded to the S3 bucket, you can configure an S3 bucket trigger. This will ensure that your Lambda function is automatically invoked each time a new image is uploaded to the bucket.

Step 5: Geotag images with Google Maps API

Once your Lambda function has processed the images, you can geotag each photo with its precise location on the map using the Google Maps API. You can use the latitude and longitude information captured by the drone to accurately position each photo on the map.

Step 6: Store processed images in S3 bucket

After geotagging the images, you can store the processed images in another S3 bucket. This will make it easy to access and analyze the images later.

Step 7: Visualize data using AWS QuickSight

To visualize the photometry data, you can use AWS QuickSight, a business intelligence and data visualization tool. You can create interactive dashboards that allow users to explore the photometry data, such as by date, time, location, or other parameters.

Step 8: Scale the solution using AWS

To handle growing volumes of photometry data, you can scale the solution with AWS services such as CloudFormation, Auto Scaling, Amazon EC2, and Amazon API Gateway. If you want to go further, you can also train machine learning models on the data with AWS SageMaker and visualize the results in AWS QuickSight or other tools.

Let’s get started!

Step 1: Set up your drone for aerial photography

Photo by Red Zeppelin on Unsplash
  • Choose a drone: You’ll need a drone with a high-resolution camera that can capture images of the area of interest. Some popular drones for aerial photography are DJI Mavic 3, DJI Matrice, and DJI Inspire.
  • Check regulations: Before you fly your drone, make sure to check local regulations and obtain any necessary permits or licenses. Many countries have specific regulations for flying drones, such as height restrictions, no-fly zones, and registration requirements.
  • Choose a camera: Choose a camera that meets your requirements for image quality and resolution. Many drones come with built-in cameras, but you can also attach third-party cameras to your drone.
  • Choose a lens: Choose a lens that is appropriate for your application. For example, a wide-angle lens can capture a larger area, while a telephoto lens can zoom in on specific features.
  • Configure camera settings: Configure the camera settings, such as exposure, focus, and white balance, to optimize image quality.
  • Choose a flight path: Plan a flight path that covers the area of interest and captures images from multiple angles. You can use software such as DJI FlightHub, Pix4D, or DroneDeploy to plan your flight.
  • Fly your drone: Once you’ve prepared your drone for aerial photography, you can fly it to capture images of the area of interest. Make sure to follow safety guidelines and regulations, and monitor the battery level and flight time to ensure a safe landing.
  • Transfer images to computer: After the flight, transfer the images from your drone’s memory card to your computer for further processing.

That’s it for Step 1! Once you have captured the images with your drone, you can move on to Step 2 and upload the images to an AWS S3 bucket for storage and processing.

Step 2: Upload images to AWS S3 bucket

  • Create an AWS account: If you don’t have an AWS account already, create one at https://aws.amazon.com/.
  • Create an S3 bucket: Log in to the AWS Management Console and create an S3 bucket to store your images. Make sure to choose a region that is geographically close to where the images were captured to minimize latency.
  • Install AWS CLI: Install the AWS Command Line Interface (CLI) on your computer, if you haven’t already. The AWS CLI is a tool that allows you to interact with AWS services from the command line.
  • Configure AWS CLI: Configure the AWS CLI with your AWS access key and secret access key. You can obtain these credentials from the IAM console in the AWS Management Console.
  • Upload images to S3 bucket: Use the AWS CLI or an S3 client tool to upload the images to the S3 bucket. The command to upload a file to an S3 bucket using the AWS CLI is:
aws s3 cp [source file] s3://[bucket name]/[destination folder] --acl public-read

Replace [source file] with the path to the file on your computer, [bucket name] with the name of your S3 bucket, and [destination folder] with the folder in the bucket where you want to store the file. The --acl public-read option makes the file publicly readable.

  • Verify upload: After the upload is complete, verify that the files are in the S3 bucket by checking the S3 console or using the AWS CLI. The command to list the files in an S3 bucket using the AWS CLI is:
aws s3 ls s3://[bucket name]/[folder]

Replace [bucket name] with the name of your S3 bucket and [folder] with the folder where the files are stored.
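
If you prefer to script the upload instead of running the CLI once per file, a short boto3 sketch like the one below can push an entire flight folder to the bucket. The bucket and folder names here are placeholders, so substitute your own.

import os
import boto3

# Placeholder names; replace with your own bucket and local image folder
BUCKET = "drone-raw-images"
IMAGE_DIR = "flight-2023-02-22"

s3 = boto3.client("s3")

# Upload every JPEG in the flight folder, keeping the folder name as a key prefix
for name in os.listdir(IMAGE_DIR):
    if name.lower().endswith((".jpg", ".jpeg")):
        s3.upload_file(os.path.join(IMAGE_DIR, name), BUCKET, f"{IMAGE_DIR}/{name}")
        print(f"Uploaded {name} to s3://{BUCKET}/{IMAGE_DIR}/{name}")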

That’s it for Step 2! Once you’ve uploaded the images to the S3 bucket, you can move on to Step 3 and set up an AWS Lambda function to process the images.

Step 3: Set up AWS Lambda function

  • Create a Lambda function: Log in to the AWS Management Console and create a new Lambda function. Choose the runtime environment and configuration that best fits your image processing requirements. For example, you can choose the Python runtime environment and configure the function to have access to the S3 bucket where the images are stored.
  • Write image processing code: Write the code that will process the images in your Lambda function. You can use a variety of image processing libraries such as OpenCV or Pillow to perform tasks such as cropping, resizing, filtering, or object detection. Here’s an example of how to use the Pillow library to crop an image (a complete Lambda handler sketch follows this list):
from PIL import Image

def crop_image(image_path, crop_size):
    # Open the image
    image = Image.open(image_path)

    # Get the size of the image
    width, height = image.size

    # Calculate the coordinates of a centered square crop
    left = (width - crop_size) // 2
    top = (height - crop_size) // 2
    right = (width + crop_size) // 2
    bottom = (height + crop_size) // 2

    # Crop the image
    cropped_image = image.crop((left, top, right, bottom))

    # Return the cropped image
    return cropped_image
  • Test the Lambda function: Test the Lambda function with a sample image to verify that it is processing the images correctly. You can use the AWS Management Console or the AWS CLI to invoke the Lambda function with the sample image.
  • Configure S3 bucket trigger: Configure an S3 bucket trigger to automatically invoke the Lambda function each time a new image is uploaded to the S3 bucket. You can use the AWS Management Console or the AWS CLI to set up the S3 bucket trigger.
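
Here’s the Lambda handler sketch mentioned above: a minimal example that reads the newly uploaded image from the triggering S3 event, applies a centered crop with Pillow, and writes the result to an output bucket. The OUTPUT_BUCKET name and the processed/ key prefix are placeholders for illustration, not fixed names.

import io
import os
from urllib.parse import unquote_plus

import boto3
from PIL import Image

s3 = boto3.client("s3")

# Placeholder output bucket; replace with the bucket you create in Step 6
OUTPUT_BUCKET = os.environ.get("OUTPUT_BUCKET", "drone-processed-images")

def lambda_handler(event, context):
    # The S3 trigger passes the bucket and object key of the new upload
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = unquote_plus(record["object"]["key"])

    # Download the image into memory
    obj = s3.get_object(Bucket=bucket, Key=key)
    image = Image.open(io.BytesIO(obj["Body"].read()))

    # Example processing: a centered 1024x1024 crop (assumes a JPEG input)
    crop = 1024
    width, height = image.size
    box = ((width - crop) // 2, (height - crop) // 2,
           (width + crop) // 2, (height + crop) // 2)
    processed = image.crop(box)

    # Write the processed image to the output bucket
    buffer = io.BytesIO()
    processed.save(buffer, format="JPEG")
    buffer.seek(0)
    s3.put_object(Bucket=OUTPUT_BUCKET, Key=f"processed/{key}", Body=buffer)

    return {"statusCode": 200, "body": f"Processed {key}"}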

That’s it for Step 3! Once you’ve set up the Lambda function to process the images, you can move on to Step 4 and configure the S3 bucket trigger.

Step 4: Configure S3 bucket trigger

  • Create an IAM role: Create an IAM role that allows the Lambda function to access the S3 bucket where the images are stored. You can create an IAM role using the AWS Management Console or the AWS CLI.
  • Configure S3 bucket event: In the AWS Management Console, go to the S3 bucket that contains the images and click on the “Properties” tab. Scroll to the “Event notifications” section and click “Create event notification”.
  • Configure event properties: Configure the event properties to specify that the Lambda function should be invoked when a new object is created in the S3 bucket. You can choose the specific prefix that should trigger the event, and select the “All object create events” (s3:ObjectCreated:*) event type. You can also script the same configuration with boto3, as sketched after this list.
  • Choose target: Choose the Lambda function as the target for the S3 bucket event. Select the Lambda function that you created in Step 3.
  • Add permissions: Attach the IAM role that you created at the beginning of this step to the Lambda function. This will allow the Lambda function to access the S3 bucket and process the images.
  • Test trigger: Test the S3 bucket trigger by uploading a new image to the S3 bucket. The Lambda function should be automatically triggered and process the image.
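
If you prefer to configure the trigger in code rather than in the console, here’s a sketch using boto3. The bucket name, Lambda ARN, and images/ prefix are placeholders; note that the invoke permission must be added before the notification configuration, or S3 will reject it.

import boto3

# Placeholder resource names; replace with your own bucket and function ARN
SOURCE_BUCKET = "drone-raw-images"
LAMBDA_ARN = "arn:aws:lambda:us-east-1:123456789012:function:process-drone-image"

s3 = boto3.client("s3")
lambda_client = boto3.client("lambda")

# Allow the S3 bucket to invoke the Lambda function
lambda_client.add_permission(
    FunctionName=LAMBDA_ARN,
    StatementId="s3-invoke-permission",
    Action="lambda:InvokeFunction",
    Principal="s3.amazonaws.com",
    SourceArn=f"arn:aws:s3:::{SOURCE_BUCKET}",
)

# Invoke the function for every new object created under the images/ prefix
s3.put_bucket_notification_configuration(
    Bucket=SOURCE_BUCKET,
    NotificationConfiguration={
        "LambdaFunctionConfigurations": [
            {
                "LambdaFunctionArn": LAMBDA_ARN,
                "Events": ["s3:ObjectCreated:*"],
                "Filter": {
                    "Key": {"FilterRules": [{"Name": "prefix", "Value": "images/"}]}
                },
            }
        ]
    },
)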

That’s it for Step 4! Once you’ve configured the S3 bucket trigger, the Lambda function will be automatically invoked each time a new image is uploaded to the S3 bucket, and the processed images can be stored in another S3 bucket for easy access and analysis.

Step 5: Geotag images with Google Maps API

  • Create a Google Cloud account: If you don’t have a Google Cloud account already, create one at https://cloud.google.com/.
  • Create a Google Maps API key: In the Google Cloud Console, create an API key that will allow you to access the Google Maps Platform. Make sure to enable the Geocoding API, and the Maps JavaScript API as well if you also plan to display the photos on an interactive map.
  • Install the Google Maps API client library: Install the Google Maps API client library for the programming language that you’re using to geotag the images. For example, you can use the Google Maps Python client library to access the Google Maps API.
  • Retrieve the latitude and longitude of each image: Use the GPS information embedded in each image’s EXIF metadata to determine where the photo was taken. You can use a library such as exifread or Pillow to read the GPS tags (see the sketch after this list).
  • Geocode the latitude and longitude: Use the Google Maps Geocoding API to convert the latitude and longitude to a human-readable address or location name. Here’s an example of how to use the Google Maps Geocoding API in Python:
import googlemaps

def geocode_location(lat, lng, api_key):
    # Initialize the Google Maps client
    gmaps = googlemaps.Client(key=api_key)

    # Reverse geocode the latitude and longitude
    result = gmaps.reverse_geocode((lat, lng))

    # Extract the location information
    location = result[0]['formatted_address']

    # Return the location
    return location
  • Add geolocation data to the image: Add the geolocation data, such as the location name or address, to the image metadata. You can use a library such as ExifTool to add the geolocation data to the image metadata.
  • Store the geotagged images: Store the geotagged images in an S3 bucket for easy access and analysis.
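
As mentioned in the list above, here’s a minimal sketch of reading the GPS tags from a photo with the exifread library. It assumes the drone actually wrote the standard EXIF GPS tags; if a tag is missing, the lookups below will raise a KeyError.

import exifread

def _to_degrees(values):
    # Convert EXIF degrees/minutes/seconds rationals to a decimal value
    d, m, s = [float(v.num) / float(v.den) for v in values]
    return d + m / 60.0 + s / 3600.0

def read_gps(image_path):
    # Read the GPS tags embedded by the drone's camera
    with open(image_path, "rb") as f:
        tags = exifread.process_file(f, details=False)

    lat = _to_degrees(tags["GPS GPSLatitude"].values)
    lng = _to_degrees(tags["GPS GPSLongitude"].values)

    # Southern and western hemispheres are negative
    if str(tags["GPS GPSLatitudeRef"]) == "S":
        lat = -lat
    if str(tags["GPS GPSLongitudeRef"]) == "W":
        lng = -lng

    return lat, lng

You can then pass the returned coordinates straight into geocode_location from the example above.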

Once you’ve geotagged the images with the Google Maps API, you can move on to Step 6 and store the processed images in an S3 bucket.

Step 6: Store processed images in S3 bucket

  • Create another S3 bucket: Create another S3 bucket to store the processed images. Make sure to choose a region that is geographically close to where the images will be accessed to minimize latency.
  • Upload processed images to S3 bucket: Use the AWS CLI or an S3 client tool to upload the processed images to the S3 bucket. The command to upload a file to an S3 bucket using the AWS CLI is:
aws s3 cp [source file] s3://[bucket name]/[destination folder] --acl public-read

Replace [source file] with the path to the processed file on your computer, [bucket name] with the name of your S3 bucket, and [destination folder] with the folder in the bucket where you want to store the file. The --acl public-read option makes the file publicly readable.

  • Verify upload: After the upload is complete, verify that the files are in the S3 bucket by checking the S3 console or using the AWS CLI. The command to list the files in an S3 bucket using the AWS CLI is:
aws s3 ls s3://[bucket name]/[folder]

Replace [bucket name] with the name of your S3 bucket and [folder] with the folder where the files are stored.

  • Optionally, encrypt data at rest: You can encrypt the data at rest in the S3 bucket to enhance the security of your data. You can use server-side encryption with Amazon S3-managed keys (SSE-S3), server-side encryption with customer-provided keys (SSE-C), or client-side encryption.
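
If you want to make SSE-S3 the default for the whole bucket, a one-time boto3 call like the sketch below turns it on, and every object written afterwards is encrypted automatically. The bucket name is a placeholder.

import boto3

s3 = boto3.client("s3")

# Placeholder name for the bucket holding the processed images
PROCESSED_BUCKET = "drone-processed-images"

# Enable SSE-S3 default encryption so new objects are encrypted at rest
s3.put_bucket_encryption(
    Bucket=PROCESSED_BUCKET,
    ServerSideEncryptionConfiguration={
        "Rules": [
            {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}
        ]
    },
)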

That’s it for Step 6! Once you’ve stored the processed images in an S3 bucket, you can move on to Step 7 and visualize the data using AWS QuickSight.

Step 7: Visualize data using AWS QuickSight

  • Create an AWS QuickSight account: If you don’t have an AWS QuickSight account already, create one at https://quicksight.aws/
  • Connect to your data: In the AWS QuickSight console, create a new data source that connects to the S3 bucket where your geotag metadata lives, for example a CSV file per flight listing each image key, latitude, longitude, and capture time. The built-in S3 connector reads data files described by a manifest file (a sketch for generating one follows this list).
  • Create a new analysis: Create a new analysis in AWS QuickSight to explore the data. You can create visualizations such as heat maps, scatter plots, and bar charts to visualize the distribution of the images across different locations or time periods.
  • Publish the analysis: Once you’ve created an analysis in AWS QuickSight, you can publish it to share it with other users. You can also embed the analysis in other web applications or dashboards using the QuickSight API.
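
As referenced above, QuickSight’s S3 connector expects a small JSON manifest that points at the data files. Here’s a sketch of generating one; the bucket and prefix are placeholders, and the settings assume comma-separated CSV files with a header row.

import json

# Placeholder location of the geotag metadata CSV files
MANIFEST = {
    "fileLocations": [
        {"URIPrefixes": ["s3://drone-processed-images/metadata/"]}
    ],
    "globalUploadSettings": {
        "format": "CSV",
        "delimiter": ",",
        "containsHeader": "true",
    },
}

# Write the manifest locally; upload it to S3 or supply it when creating the data source
with open("quicksight-manifest.json", "w") as f:
    json.dump(MANIFEST, f, indent=2)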

Once you’ve visualized the data using AWS QuickSight, you can analyze the data and gain insights into the distribution of the images across different locations and time periods.

Step 8: Scale the solution using AWS

  • Automate deployment with AWS CloudFormation: You can use AWS CloudFormation to automate the deployment of the solution. AWS CloudFormation allows you to define and deploy the AWS resources as a single unit and makes it easy to replicate the solution in different regions or accounts.
  • Use AWS Auto Scaling to manage resources: You can use AWS Auto Scaling to manage the resources used by the solution. AWS Auto Scaling can automatically adjust the number of instances based on the demand and can help ensure that the solution is always available and performing optimally.
  • Use Amazon S3 to store and manage large volumes of data: Amazon S3 is a highly scalable and durable object storage service that can store and manage large volumes of data. You can use Amazon S3 to store the raw images, processed images, and any other data associated with the solution.
  • Use Amazon EC2 to run processing jobs: Amazon EC2 is a scalable and highly available computing service that can run processing jobs in the cloud. You can use Amazon EC2 to run image processing jobs using frameworks such as TensorFlow or PyTorch or to run data processing jobs using Apache Spark or Amazon EMR.
  • Use Amazon API Gateway to expose APIs: Amazon API Gateway is a fully managed service that can create, publish, and manage APIs. You can use Amazon API Gateway to expose APIs that allow third-party applications to access the solution, or to build custom integrations with other AWS services.

That’s it for the final step! Now that you’ve scaled the solution using AWS, you can handle large volumes of data, accommodate increasing demand, and ensure that the solution is highly available and performing optimally.

In this tutorial, you’ve learned how to use a drone, AWS, and the Google Maps API to create photometry and attach each photo to a location on the map. You’ve gone through the entire process, from flying the drone and capturing images to processing the images with AWS Lambda, geotagging the images with the Google Maps API, and visualizing the data with AWS QuickSight. You’ve also learned how to scale the solution using AWS, to handle large volumes of data, accommodate increasing demand, and ensure high availability and performance.

Using AWS for image processing and storage offers several benefits, including scalability, reliability, and cost-effectiveness. With AWS, you can easily process and store large volumes of data, scale your infrastructure to meet changing demands, and ensure high availability and durability. AWS also provides a range of tools and services to help you manage and analyze your data, including machine learning, data analytics, and business intelligence.

Using AWS for image processing and storage can be particularly useful for industries such as agriculture, mining, and environmental monitoring, where drones and other UAVs can capture images and data for analysis. With AWS, you can process and analyze this data in real-time, enabling you to make more informed decisions and take action more quickly.

I hope this tutorial has been helpful and informative, and that you’re now ready to use AWS to create photometry and attach each drone photo to a location on the map. Good luck!

Evhenii Rvachov

I’m just a human who likes hardware and software engineering.