Airport Traffic by Domestic and International Carriers¶
As a research and analytics team at an international carrier, we need to create data pipelines to produce reports using publicly available U.S. Department of Transportation (USDOT) airport traffic data. Airline executives have been consuming this information indirectly through USDOT reports and additional assets produced by business analysts. Our mandate is to create a data pipeline that drives a reporting system on international flights. The reporting system, including changes to the data pipeline, will be maintained by business analysts in various lines of business with help from the analytics team. Analysts often work on such pipelines that take larger datasets and shrink them into smaller dimensions. The goal of our team is to help them do so faster, more efficiently and in a reproducible manner.
The data comes from the U.S. International Air Passenger and Freight Statistics Report. As part of the T-100 program, USDOT receives traffic reports of U.S. and other international airlines operating to and from United States airports. Data engineers on the team ingest this publicly available data and provide us with the following datasets:
- Departures: Data on all flights between U.S. gateways and non-U.S. gateways, irrespective of origin and destination. Each observation provides information on a specific airline for a pair of airports, one in the U.S. and the other outside. Three main columns record the number of flights: Scheduled, Charter, and Total.
- Passengers: Records the total number of passengers for each month and year between a pair of airports, as serviced by a particular airline. The total is also broken down into passengers on scheduled flights and passengers on chartered flights.
We will start with data for the year 2017.
The final pipeline in Dataiku DSS will look like this:
The Flow has the following high-level steps:
- Download the datasets to form the input data
- Clean the Passengers dataset, group by airport id and find the busiest airports by number of passengers that year
- Clean the Departures dataset and turn the monthly information on airport pairs into market share data
To complete this walkthrough, the following requirements will need to be met:
- Have access to a Dataiku DSS instance; that's it!
Create a new Dataiku DSS project and name it International Flight Reporting.
Finding the Busiest Airports by Volume of International Passengers¶
In the Flow, select + Recipe > Visual > Download. Name the output folder Passengers and create the recipe. Add a new source and specify the following URL: https://data.transportation.gov/api/views/xgub-n9bw/rows.csv?accessType=DOWNLOAD. Run the recipe to download the files.
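Outside Dataiku, the Download recipe is essentially a file fetch. A minimal Python sketch (the function name and destination path are illustrative, not part of the tutorial):

```python
# Sketch of what the Download recipe does: fetch the public USDOT
# passengers CSV to a local file. Names here are illustrative.
from urllib.request import urlretrieve

PASSENGERS_URL = (
    "https://data.transportation.gov/api/views/xgub-n9bw/"
    "rows.csv?accessType=DOWNLOAD"
)

def download_passengers(dest="passengers.csv"):
    """Download the raw CSV, as the Download recipe would."""
    path, _headers = urlretrieve(PASSENGERS_URL, dest)
    return path

if __name__ == "__main__":
    print(download_passengers())
```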
Create a New Files in Folder dataset from the passengers folder. With the folder selected, we can get to this option via Actions > Create dataset (found in the upper-right corner). Click Test to let Dataiku detect the format and parse the data accordingly. Then create the dataset.
With the new dataset as the input, create a new Sample/Filter recipe. Turn filtering On and create a filter that keeps rows where Year equals 2017. Under sampling, choose No sampling (whole data).
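For intuition, the same filter can be sketched in pandas outside Dataiku; the DataFrame below is a toy stand-in, not the real USDOT schema:

```python
import pandas as pd

# Toy stand-in for the parsed Passengers dataset.
df = pd.DataFrame({
    "Year": [2016, 2017, 2017],
    "Total": [100, 120, 130],
})

# Equivalent of the Sample/Filter recipe: keep rows where Year equals
# 2017, on the whole data (no sampling).
filtered = df[df["Year"] == 2017]
```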
With the new dataset as the input, create a new Prepare visual recipe with the following steps in its script:
- Parse the date_dte column into a proper date column. Dataiku should detect the correct date format as yyyy-MM-dd HH:mm:ss. If it does not, select the yyyy-MM-dd HH:mm:ss date format manually in the Smart date editor. Name the output column.
- Identify the months using month names. One way to do so is to use the Find & Replace processor to replace the numerical values in the Month column, writing the results to a new column called month_name. For example, replace the value "1" with "Jan".
- Concatenate the following three columns: Year, Month, and month_name, using - as the delimiter. Name the output column.
Among the drop-down options for Matching Mode in the Find & Replace step, select Complete value, since Substring will result in errors. For example, substring replacement would replace each digit of 12 (1 and 2) separately, producing JanFeb instead of Dec.
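The three Prepare steps above can be sketched in pandas as follows; the output column names date_parsed and period are hypothetical, since the tutorial leaves the actual names to the reader:

```python
import pandas as pd

# Toy rows with the columns the Prepare recipe works on.
df = pd.DataFrame({
    "date_dte": ["2017-01-01 00:00:00", "2017-12-01 00:00:00"],
    "Year": [2017, 2017],
    "Month": [1, 12],
})

# Step 1: parse date_dte with the yyyy-MM-dd HH:mm:ss format.
df["date_parsed"] = pd.to_datetime(df["date_dte"], format="%Y-%m-%d %H:%M:%S")

# Step 2: map month numbers to names. A dict lookup matches complete
# values only, so 12 becomes "Dec", never the substring result "JanFeb".
month_names = {1: "Jan", 2: "Feb", 3: "Mar", 4: "Apr", 5: "May", 6: "Jun",
               7: "Jul", 8: "Aug", 9: "Sep", 10: "Oct", 11: "Nov", 12: "Dec"}
df["month_name"] = df["Month"].map(month_names)

# Step 3: concatenate Year, Month, and month_name with "-" as delimiter.
df["period"] = (df["Year"].astype(str) + "-" + df["Month"].astype(str)
                + "-" + df["month_name"])
```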
Next, we are going to aggregate the information by airport to create a list of the 20 busiest airports for international travellers. We’ll use the Group visual recipe:
- Group by usg_apt.
- Name the output dataset
- In the Group step, deselect Compute count for each group and then select the following aggregations: fg_apt (Distinct), Scheduled (Sum), Charter (Sum), Total (Sum)
- Rename the column names in the Output step according to the table below, then Run the recipe.
| Original name | New name |
| --- | --- |
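Assuming a pandas-style workflow, the Group step corresponds to a groupby with one aggregation per column; the *_distinct/*_sum output names below stand in for the renamed columns:

```python
import pandas as pd

# Toy departures rows: one observation per airline/airport pair.
df = pd.DataFrame({
    "usg_apt": ["JFK", "JFK", "LAX"],
    "fg_apt": ["LHR", "CDG", "NRT"],
    "Scheduled": [10, 20, 30],
    "Charter": [1, 2, 3],
    "Total": [11, 22, 33],
})

# Group by usg_apt with the same aggregations as the Group recipe:
# fg_apt (Distinct), Scheduled (Sum), Charter (Sum), Total (Sum).
grouped = (
    df.groupby("usg_apt")
      .agg(
          fg_apt_distinct=("fg_apt", "nunique"),
          Scheduled_sum=("Scheduled", "sum"),
          Charter_sum=("Charter", "sum"),
          Total_sum=("Total", "sum"),
      )
      .reset_index()
)
```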
Finally, to narrow down the top 20 airports by volume of international passengers, let’s use the TopN recipe.
- Name the output dataset
- In the Top N step, Retrieve the 20 top rows sorted by the Total column in descending order
- Run the recipe.
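The TopN recipe amounts to taking the 20 largest rows by Total. A pandas sketch on toy grouped data:

```python
import pandas as pd

# Toy aggregated data standing in for the Group recipe's output.
grouped = pd.DataFrame({
    "usg_apt": ["JFK", "LAX", "MIA", "ORD"],
    "Total": [400, 300, 200, 100],
})

# Equivalent of the TopN step: top 20 rows by Total, descending.
# With fewer than 20 rows, all rows are returned, sorted.
top20 = grouped.nlargest(20, "Total")
```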
This recipe produces a list of the busiest airports by volume of international passengers. We can now export the dataset as a CSV, or share it with other projects in the instance. In a few easy steps, we’ve replicated the table on this Wikipedia page, even down to the total number of passengers.
Adding a Lagged Window to Calculate Year-to-Year Change¶
To compare the total number of departures with the year before, we can re-use the same data pipeline. First we need to widen our Filter to include data from the year 2016.
By going back to the Filter recipe, we can add another condition where Year equals 2016, and keep rows that satisfy at least one of the conditions.
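The widened filter is a simple OR of the two year conditions; in pandas terms:

```python
import pandas as pd

# Toy stand-in for the unfiltered dataset.
df = pd.DataFrame({"Year": [2015, 2016, 2017], "Total": [90, 100, 120]})

# Keep rows that satisfy at least one of the two conditions.
filtered = df[(df["Year"] == 2016) | (df["Year"] == 2017)]
```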
In the Flow, right-click on the Filter recipe and select Build flow from here to re-create all datasets in the pipeline.
Here is documentation about rebuilding datasets in Dataiku and the difference between building datasets upstream and downstream.
The departures_by_month dataset now has totals of departures for two years: 2016 and 2017. Now we can calculate how the traffic changed from year to year. With departures_by_month selected:
- Choose Actions > Window
- Click Create Recipe.
To get the total flights from a year ago to compare with the totals for a particular month, we need the same value lagged by 12 months. In the Window definitions step of the Window recipe:
- Turn on Order Columns and select Year and Month so the months are laid out in ascending order. This defines how the dataset will be ordered when the lag is calculated.
- In the Aggregations step, Retrieve all of the columns and, for the Total column, add the lagged value going back 12 rows, i.e. 12 months or a whole year.
- Run the recipe
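The lagged window can be sketched in pandas as a sort followed by a 12-row shift. Toy data for a single airport pair is used here; the real Window recipe handles ordering across the whole dataset:

```python
import pandas as pd

# 24 months of toy totals for one market: 2016 followed by 2017.
df = pd.DataFrame({
    "Year": [2016] * 12 + [2017] * 12,
    "Month": list(range(1, 13)) * 2,
    "Total": list(range(100, 112)) + list(range(200, 212)),
})

# Order by Year and Month, then lag Total by 12 rows (one year).
df = df.sort_values(["Year", "Month"]).reset_index(drop=True)
df["Total_lag12"] = df["Total"].shift(12)
```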
The output dataset should show that all the months in 2017 now have a value for the lagged total number of flights in the column Total_lag12. For any month that year, it should match the value of the same month from a year ago.
To create the final presentation dataset, add a Prepare recipe with the following steps in the Script:
- Keep only rows from the year we need: 2017
- Calculate a column named year_toyear_change with the formula (Total - Total_lag12) / Total_lag12 * 100
- Keep only the following columns: Year, Month, Total_US_mktshare, Total_IN_mktshare, Total, Total_lag12, year_toyear_change
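These final Prepare steps map to a filter, a derived column, and a column selection. A pandas sketch, with the market-share columns from the Departures branch omitted:

```python
import pandas as pd

# Toy stand-in for the Window recipe's output.
df = pd.DataFrame({
    "Year": [2016, 2017],
    "Month": [1, 1],
    "Total": [100.0, 120.0],
    "Total_lag12": [float("nan"), 100.0],
})

# Keep only rows from 2017.
out = df[df["Year"] == 2017].copy()

# Year-to-year percentage change, matching the recipe formula.
out["year_toyear_change"] = (out["Total"] - out["Total_lag12"]) / out["Total_lag12"] * 100

# Keep only the presentation columns.
out = out[["Year", "Month", "Total", "Total_lag12", "year_toyear_change"]]
```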
Great job! Building data pipelines is essential to creating data products. This is a first step in doing more with data. Data products can go beyond static insights like rankings or tables, and the process can be automated for production with scenarios.