From Lab to Flow

Congratulations!

Now that you have prepared Haiku T-Shirt’s order logs, you are ready to join them with the customer data.

In this tutorial, we will join Haiku T-Shirt’s customer data with the prepared orders data. We will then enrich the combined data using the interactive Lab for further analysis.

Before jumping into the hands-on portion of the tutorial, you can watch the following video, which walks through the outline of the steps.

Create Your Project

From the Dataiku homepage, click +New Project, select DSS Tutorials from the list, and select 102: From Lab to Flow (Tutorial).

../../_images/dss-tutorials1.png

Click on Go to Flow.

../../_images/tshirt-customers-flow-01.png

In the Flow, you can see the steps used in the previous tutorial to create and prepare the orders dataset. There is also a new dataset, customers, which we will describe in the next section.

Alternatively, you can continue in the same project you started in the Basics tutorial by downloading a copy of the customers.csv file and uploading it to the Project.

Use the Join Recipe to Enrich Customers with Orders Data

A video below goes through the content of this section.

Open the customers dataset by double-clicking on its icon in the Flow.

../../_images/tshirt-customers-dataset-01.png

Each row in this dataset represents a separate customer, and records:

  • the unique customer ID
  • the customer’s gender
  • the customer’s birthdate
  • the user agent most commonly used by the customer
  • the customer’s IP address
  • whether the customer is part of Haiku T-Shirts’ marketing campaign

We are now ready to enrich the customers dataset with aggregate information about the orders each customer has made. From the Actions menu, choose Join with… from the list of visual recipes.

../../_images/create-join-recipe.png

Select orders_by_customer as the second input dataset. Change the name of the output dataset to customers_orders_joined. Click Create Recipe.

The Join Recipe has several steps (shown in the left navigation bar). The core step is the Join step, where you choose how to match rows between the datasets. In this case, we want to match rows from customers and orders_by_customer that have the same value of customerID and customer_id. Note that Dataiku DSS has automatically discovered the join key, even though the columns have different names.

../../_images/join-recipe.png

By default, the Join Recipe performs a left join, which retains all rows of the left dataset, even if there is no matching information in the right. Since we only want to work with customers who have made at least one order, let us modify the join type.

  • Click on the Left Join indicator.
  • The join details open. Click on Join Type.
  • Click on Inner join and then Close to change the join type to an inner join. An inner join retains only the rows of the two datasets that match. This keeps only the customers who have made an order and removes the others from this analysis.
../../_images/join-type.png

Note

Types of joins

There are multiple methods for joining two datasets; the method you choose will depend upon your data and your goals in analysis.

  • Left join keeps all rows of the left dataset and adds information from the right dataset when there is a match. This is useful when you need to retain all the information in the rows of the left dataset, and the right dataset is providing extra, possibly incomplete, information.
  • Inner join keeps only the rows that match in both datasets. This is useful when only rows with complete information from both datasets will be useful downstream.
  • Outer join keeps all rows from both datasets, combining rows where there is a match. This is useful when you need to retain all the information in both datasets.
  • Right join is similar to a left join, but keeps all rows of the right dataset and adds information from the left dataset when there is a match.
  • Cross join is a Cartesian product that matches all rows of the left dataset with all rows of the right dataset. This is useful when you need to compare every row in one dataset to every row of another.
  • Advanced join provides custom options for row selection and deduplication for when none of the other options are suitable.

By default, the Join Recipe performs a Left join.
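If it helps to see these join types side by side in code, here is a minimal pandas sketch of the difference between a left and an inner join (for illustration only; the Join Recipe itself requires no code, and the sample columns below are made up):

    import pandas as pd

    left = pd.DataFrame({"id": [1, 2, 3], "name": ["a", "b", "c"]})
    right = pd.DataFrame({"id": [2, 3, 4], "total": [10, 20, 30]})

    # Left join: every row of `left` is kept; id 1 gets missing values for `total`.
    left_joined = pd.merge(left, right, on="id", how="left")

    # Inner join: only ids 2 and 3, present in both datasets, are kept.
    inner_joined = pd.merge(left, right, on="id", how="inner")

    # how can also be "right", "outer", or "cross" for the other join types.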

We want to carry over all columns from both datasets into the output dataset, with the exception of customer_id (since the customerID column from the customers dataset is sufficient).

  • Click on the Selected columns step
  • Uncheck the customer_id column in the orders_by_customer dataset

Click Run to execute the recipe. Since you removed a column, DSS warns that a schema update is required. Click Update Schema to accept the change. When the recipe finishes, click Explore dataset customers_orders_joined at the bottom of the screen to open the output dataset.
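The visual Join Recipe requires no code, but purely as a point of reference for readers who prefer code, its result is roughly what the following sketch produces in a DSS Python notebook, using the Dataiku API to read the two datasets (dataset and column names as in this tutorial):

    import dataiku

    # Read the two input datasets as pandas DataFrames.
    customers = dataiku.Dataset("customers").get_dataframe()
    orders_by_customer = dataiku.Dataset("orders_by_customer").get_dataframe()

    # Inner join on the differently named key columns,
    # then drop the redundant right-hand key.
    customers_orders_joined = customers.merge(
        orders_by_customer,
        left_on="customerID",
        right_on="customer_id",
        how="inner",
    ).drop(columns=["customer_id"])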

The following video goes through what we just covered.

Discover the Lab

So far, you have learnt how datasets are created with recipes, and how those recipes form a data pipeline in the Flow. In this tutorial, you are going to see how to perform preliminary work on data outside the Flow, in a dedicated environment called the Lab.

Let us see which tools are available in the Lab. Open the customers_orders_joined dataset and click Lab. The Lab window opens.

../../_images/tshirt-customers-analysis-new.png

Note

Key concept: Lab

The Lab is a place for drafting your work, whether it is preliminary data exploration and cleansing or the creation of machine learning models. The Lab environment contains:

  • the Visual analysis tool to let you draft data preparation, charts, and machine learning models
  • the Code Notebooks to let you explore your data interactively in the language of your choice

Note that some tasks can be performed both in the Lab environment and using recipes in the Flow. Here are the main differences, and how to use the two complementarily:

  • A Lab environment is attached to a dataset in the Flow, letting you organize your draft and preliminary work without overcrowding the Flow with unnecessary items. The Flow is mostly meant for work that is stable and will be reused in the future by you or your colleagues.
  • When working in the Lab, the original dataset is never modified and no new dataset is created. Instead, you can interactively visualize the results of the changes you make to the data (most of the time on a sample). This interactivity gives you a comfortable space to quickly assess what your data contain.
  • Once you’re satisfied with your lab work, you can deploy it to the Flow as a code or visual recipe. The newly created recipe and its output dataset are appended to the original dataset’s pipeline, making all your lab work available for future data reconstruction or automation.

In this tutorial, we are going to use the Visual analysis tool of the Lab.

Click on the New button below Visual analysis. You will be prompted to specify a name for your analysis. Let’s leave the default name Analyze customers_orders_joined for now.

../../_images/tshirt-customers-analysis-new-2.png

The Visual analysis tool has three main tabs:

  • Script for interactive data preparation
  • Charts for creating charts
  • Models for creating machine learning models
../../_images/visual-analysis-component.png

In this tutorial we are going to cover the first two. Modeling will be the topic of the next tutorial.

Interactively Prepare Your Data

A video below goes through the content of this section.

First, let’s parse the birthdate column. We’ve done this before, so it’s easy: open the column dropdown and select Parse date, then clear the output column in the Script step so that the parsed date simply replaces the original birthdate column.
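In code, this step amounts to converting the column to a proper date type. Here is a minimal pandas sketch with made-up sample values (the Parse date processor also detects the date format for you):

    import pandas as pd

    # Hypothetical sample of the data being prepared.
    df = pd.DataFrame({"birthdate": ["1990-03-14", "1985-11-02"]})

    # Replace the original column with its parsed datetime version.
    df["birthdate"] = pd.to_datetime(df["birthdate"])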

With a customer’s birthdate and the date on which they made their first order, we can compute their age on the date of their first order. From the birthdate column dropdown, choose Compute time since. This creates a new Compute time difference step in the Prepare script, and we just need to make a couple of edits.

  • Choose “until” to be Another date column.
  • Choose “Other column” to be the first_order_date column.
  • Change “Output time unit” to Years.
  • Then edit the Output column name to age_first_order.

From the new column age_first_order header dropdown, select Analyze in order to see if the distribution of ages looks okay. As it turns out, there are a number of outliers with ages well over 120. These are indicative of bad data. Within the Analyze dialog, click the Actions button and choose to clear values outside 1.5 IQR (interquartile range). This will set those values to missing. Now the distribution looks more reasonable, but there are still a few suspicious values over 100. Let’s alter the Script step to change the upper bound to 100.

../../_images/visual-analysis-analyse-modal.png
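For the curious, the combination of the Compute time difference step and the outlier clearing can be sketched in pandas roughly as follows. The sample values and the 365.25-day year are assumptions made for the sketch; in the analysis itself, the processors handle all of this for you.

    import pandas as pd

    # Hypothetical sample with parsed birth and first-order dates.
    df = pd.DataFrame({
        "birthdate": pd.to_datetime(["1990-03-14", "1890-01-01", "1985-11-02"]),
        "first_order_date": pd.to_datetime(["2015-06-01", "2015-06-01", "2016-02-15"]),
    })

    # Age in years at the time of the first order.
    df["age_first_order"] = (df["first_order_date"] - df["birthdate"]).dt.days / 365.25

    # Clear values outside 1.5 * IQR, with the upper bound capped at 100.
    q1, q3 = df["age_first_order"].quantile([0.25, 0.75])
    iqr = q3 - q1
    lower, upper = q1 - 1.5 * iqr, min(q3 + 1.5 * iqr, 100)
    outside = (df["age_first_order"] < lower) | (df["age_first_order"] > upper)
    df.loc[outside, "age_first_order"] = None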

Lastly, now that we’ve computed age_first_order, we won’t need birthdate or first_order_date anymore, so let’s remove them in the Script. For each of the two columns, open the column dropdown and select Delete. This creates a new Remove step in the Prepare script.

The following video goes through what we just covered.

Let us now enrich the data by processing the user_agent and ip_address columns.

Leveraging the User Agent

A video below goes through the content of this section.

The user_agent column contains information about the browser and operating system, and we want to pull this information out into separate columns so that it’s possible to use it in further analyses.

Dataiku recognizes that the user_agent column carries information about the User Agent, so when you open the dropdown on the column heading, you can simply select Classify User-Agent. This adds a new step to the Prepare Script and 7 new columns to the dataset. For this tutorial, we are only interested in the user_agent_brand, which specifies the browser, and the user_agent_os, which specifies the operating system, so we will remove the columns we don’t need. Click on the Columns view icon, then select the columns you want to delete. Click the Actions button and select Delete. The Columns view makes it easy to remove several columns at once.
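If you are wondering what this classification does behind the scenes: it parses the raw user-agent string into structured fields. As a rough illustration only (DSS uses its own built-in processor; the third-party user-agents package shown here is just an analogy):

    # pip install user-agents
    from user_agents import parse

    ua_string = ("Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
                 "AppleWebKit/537.36 (KHTML, like Gecko) "
                 "Chrome/58.0.3029.110 Safari/537.36")
    ua = parse(ua_string)

    print(ua.browser.family)  # e.g. "Chrome"  -- comparable to user_agent_brand
    print(ua.os.family)       # e.g. "Windows" -- comparable to user_agent_os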

The following video goes through what we just covered.

Leveraging the IP Address

A video below goes through the content of this section.

Dataiku recognizes the ip_address column as containing values that are IP addresses, so when you open the dropdown on the column heading, you can select Resolve GeoIP. This adds a new step to the Script and 7 new columns to the dataset that tell us about the general geographic location of each IP address. For this tutorial, we are only interested in the country and GeoPoint (approximate longitude and latitude of the IP address), so in the Script step, deselect Extract country code, Extract region, and Extract city. Finally, delete the ip_address column.
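GeoIP resolution maps an IP address to an approximate location by looking it up in a geolocation database. As a rough illustration only (DSS ships its own GeoIP data; the third-party geoip2 package and the GeoLite2 database file below are assumptions made for this sketch):

    # pip install geoip2, plus a downloaded MaxMind GeoLite2-City.mmdb file
    import geoip2.database

    with geoip2.database.Reader("GeoLite2-City.mmdb") as reader:
        response = reader.city("128.101.101.101")
        print(response.country.name)         # comparable to the country column
        print(response.location.latitude,    # comparable to the GeoPoint column
              response.location.longitude)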

Now we want to create a label for the customers who generate a lot of revenue. Let’s say that customers whose total order value exceeds 300 are considered “high revenue” customers. Click +Add a New Step in the Script and select Formula. Type high_revenue as the output column name. Click the Edit button to open the expression editor and type if(total_sum>300, "True", "False") as the expression. Dataiku validates the expression. Click Save.

Note

The syntax for the Formula Processor can be found in the reference documentation.

../../_images/high-revenue-formula.png
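For comparison, the same labeling logic can be sketched in pandas (hypothetical sample values; in the analysis itself, the Formula step above is all you need):

    import pandas as pd

    # Hypothetical sample of the joined data.
    df = pd.DataFrame({"total_sum": [120.0, 450.5, 87.25]})

    # Equivalent of if(total_sum > 300, "True", "False").
    df["high_revenue"] = (df["total_sum"] > 300).map({True: "True", False: "False"})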

Visualize Your Data with Charts

A video below goes through the content of this section.

Visualization is often the key to exploring your data and getting valuable insights, so we will now build some charts on top of the enriched data.

To build charts, click the Charts tab of the analysis.

Note

Key concept: Charts in analysis

We have already used charts on a dataset in the first tutorial. When you create charts in a visual analysis, the charts actually use the preparation script that is being defined as part of the visual analysis.

In other words, you can create new columns or clean data in the Script tab, and immediately start graphing this new or cleaned data in the Charts tab. This provides a very productive and efficient loop to view the results of your preparation steps.