Ascend Developer Hub

Tutorial - Building an Advanced Dataflow

Introduction

In this tutorial we'll introduce the Ascend platform with a high-level overview of the environment, including the system dashboard, and, more importantly, show how to build your first Dataflow using read connectors, transforms, write connectors, and data feeds. We'll use publicly available NYC weather and taxi cab data to do some useful analysis and explore a few insights using SQL and PySpark transforms. So, let's get started…

📘

Important!

In order to follow along with this tutorial, you should either have your own Ascend environment (paid account) or a free trial version. Go here to request a trial of the Ascend platform.

The Datasets

Weather data in New York City - 2012 - 2017
Weather data collected from the National Weather Service. It contains data from 2012 to 2017 for a weather station in Central Park, recording for each day the weather conditions (e.g. cloudy, rainy, sunny, pouring) and the level of precipitation.

Yellow Cab Taxi Data - 2016
This dataset is based on the 2016 NYC Yellow Cab trip record data published by the NYC Taxi and Limousine Commission (TLC). It contains the following columns: Vendor_id, pickup_datetime, dropoff_datetime, passenger_count, trip_distance, pickup_longitude, pickup_latitude, store_and_fwd_flag, dropoff_longitude, dropoff_latitude, payment_type, fare_amount, mta_tax, tip_amount, tolls_amount, total_amount, imp_surcharge, extra, rate_code.

Green Cab Taxi Data - 2016
This dataset is based on the 2016 NYC Green Cab trip record data published by the NYC Taxi and Limousine Commission (TLC). It contains the following columns: Vendor_id, pickup_datetime, dropoff_datetime, passenger_count, trip_distance, pickup_longitude, pickup_latitude, store_and_fwd_flag, dropoff_longitude, dropoff_latitude, payment_type, fare_amount, mta_tax, tip_amount, tolls_amount, total_amount, imp_surcharge, extra, rate_code.

System Dashboard

Upon login, the main Ascend System Dashboard shown here is a great place to get a high-level view of your entire Ascend environment. Here you can see all of your Data Services and their respective Dataflows, along with users and team members, in one place.

At the top of the dashboard, you’ll see a graph showing the current state of all components across all Dataflows and the data services they reside in. There are four main components in a Dataflow and here you can see the current state of our read connectors, transforms, write connectors and data feeds. We’ll go into more detail on these later.

In addition, you can filter by component type to see how many of each component exist within your Ascend environment.

Create a Dataflow

Dataflows are continuously running data pipelines. You design a Dataflow with a combination of declarative configurations, SQL, and Python. As you design Dataflows, you are creating a blueprint that is sent to the Ascend Dataflow Control Plane to be automated. For this tutorial, you will create a new Dataflow within your Data Service.

  1. Create a new Dataflow called NYC Taxi by clicking the + NEW DATAFLOW button next to your Data Service.
  2. Optionally provide a brief description and click CREATE. You've instantly created your Dataflow.
  3. What you see here is what we refer to as the canvas and it’s in this screen that you’ll be constructing your Dataflow. Let’s jump in.

You should have something which looks like this in your dashboard.

Dataflow Overview

Once inside the Dataflow canvas, we can explore some of the essential things (e.g. menus and options) you'll need to be familiar with before we move on.

  • The dashboard icon will take you back to the system dashboard.
  • The Data Service menu will show you a list of all data services that you can quickly jump to along with being able to manage various settings and permissions.
  • The Dataflow menu allows settings and permissions configuration for this Dataflow or others within this data service.
  • The Add menu is what you'll be using most of the time; it's from there that you can create new read and write connectors, transforms, data feeds, and groups of components for better Dataflow visibility.
  • The Import and Export options let you save and export versions of your Dataflow so it can be copied and redeployed in a different Data Service, or kept for versioning.
  • The other tabs show Dataflow profiling information and let you perform a quick on-demand query, much as you would run an ad-hoc query against your data warehouse.
  • On the far right-hand side of the screen, you'll see a link to our online docs, complete with a public discussion board, and a drop-down menu underneath your login name for other settings.

Creating Read Connectors

The first step in our Dataflow is to create Read Connectors that access the weather and taxi cab data from S3. Read Connectors allow Ascend to access any data from any source with any format.

Create "Weather" Read Connector

  1. On the upper left side of the screen click + ADD and then click Connector to bring up the Connectors panel. From the Connectors panel you'll get a list of all possible pre-built connectors to choose from, and you can also easily build your own custom connector if you so desire.

The Connectors panel displays a variety of Read Connectors supported by Ascend.

  2. For this tutorial, we will be connecting to an AWS S3 source, so select the S3 Read Connector by clicking on the READ icon for S3. This will bring up the CREATE NEW READ CONNECTOR screen (shown below).
  3. Name the connector Weather and optionally provide a description.
  4. Leave the Where is Data dialog as the default Amazon S3.
  5. Location: File/Object-based Read Connectors such as S3 and GCS have a location setting comprised of a bucket name and a pattern. The pattern is used to identify eligible files.
    • Enter ascend-io-sample-data-read as the bucket name.
    • Select Match and enter NYC-TAXI/weather/weather.csv as the pattern.

📘

S3 Pattern Specifications

  • Match: matches the pattern exactly, character for character (e.g. NYC-TAXI/weather/weather.csv matches only that object).
  • Glob: treats the pattern as a glob expression with wildcards (e.g. NYC-TAXI/weather/*.csv would match any .csv file under that prefix).
  • Regex: treats the pattern as a regular expression (e.g. NYC-TAXI/weather/.*\.csv).
  6. Uncheck the Requires Credentials checkbox.

  7. Use TEST CONNECTION to check that all S3 bucket settings and permissions are correctly configured.

  8. Parser Settings: The data formats currently available are Avro, Grok, JSON, Parquet, and XSV. You can also create your own parser functions or define a UDP (User Defined Parser) to process a custom file format. Schema information is automatically fetched for JSON, Parquet, and XSV files with a header row.
    • Select XSV as the FORMAT TO USE
    • Select Comma as the FIELD DELIMITER
    • For HEADER ROW check the Files have a header row checkbox.
      You will see a preview of your XSV file to confirm everything looks accurate (as shown below).
  9. Schema Settings
    Ascend will automatically parse the header and the first few lines of data to generate the schema and allow you to inspect it. You can preview the schema information here and make sure nothing looks wrong or needs to be changed. There don't appear to be any issues with the detected schema, so we'll move on.

  10. Connector Refresh Schedule
    Choose how often you want to check for new data in the configured source.

  • For REFRESH SCHEDULE, select the System default (Once an hour) option; you can go as granular as 1 minute if desired.
  11. Processing Priority (optional)
    Processing Priority will be used to determine which components to schedule first. Higher priority numbers are scheduled before lower ones. Increasing the priority on a component also causes all its upstream components to be prioritized higher. For this tutorial, we will leave it unconfigured.

  12. Finish and Preview
    Click the CREATE button to create the connector and have Ascend ingest and parse the data into your Dataflow.

When it's complete you should see the Read Connector on your Dataflow canvas (as shown below).

You can preview and inspect the Weather Read Connector by clicking on it: the Records tab shows its contents, and the Partitions tab shows column stats and job details such as elapsed time. If any errors are detected, you can view them on the Errors and Warnings tab.

Create "Yellow Taxi" Read Connector

Follow the same directions as we did for the Weather Read Connector, but use the following names and configuration settings.

  1. Click + ADD and then click Connector to bring up the Connectors panel and click on the READ icon for S3.

  2. Enter "Yellow Cabs" as the name of the connector.

  3. Where is Data dialog: Leave as the default Amazon S3

  4. Location:

    • Enter ascend-io-sample-data-read as the bucket name.
    • Select Regex and enter NYC-TAXI/yellow_cab/2016-01/.*.csv as the pattern.
  5. Enter My Creds for Credential Name and your S3 Access Key and Secret Key in the fields provided.

  6. Click TEST CONNECTION to validate S3 bucket settings and permissions.

  7. Parser Settings:

    • Select XSV as the FORMAT TO USE
    • Select Comma as the FIELD DELIMITER
    • For HEADER ROW check the Files have a header row checkbox.
  8. Schema Settings:
    As you scroll through, you'll notice that the Tip Amount and Tolls Amount columns are detected as Integer but need to be set to Float. By default, if the schema contains fields that look like non-string data, such as dates or numbers, Ascend tries to convert them automatically and generates warnings for values it can't convert. You can disable the warning for all non-string columns so the data is simply ingested with default values of NULL where conversion fails, allowing you to fix things up later in a downstream transform (see the sketch after these steps).

For now, though, we will edit the schema for this Read Connector as depicted below. Click the "pencil" icon (as shown below) for each of the "tip_amount" and "tolls_amount" columns to edit its type. Note: Be careful not to click the "X", as this will delete the column. If you make a mistake editing the schema, you can always click the CANCEL button at the very bottom of the panel.

Change each of them from Integer to Float (as shown below) and continue on with the steps.

  9. Connector Refresh Schedule:
  • For REFRESH SCHEDULE, select the System default (Once an hour) option.
  10. ASSIGNED PRIORITY: Leave unconfigured as 0 (default)

  11. Click the CREATE button.

When it's complete you should see the Read Connector on your Dataflow canvas (as shown below). As mentioned previously, you can preview and inspect the Yellow Cabs Read Connector by clicking on it: the Records tab shows its contents, and the Partitions tab shows column stats and job details such as elapsed time.
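If you had chosen not to edit the schema and instead ingested the two columns with their detected types (or with NULL defaults where conversion fails), you could still correct the types downstream. Below is a minimal sketch of such a clean-up transform, assuming the connector keeps the name Yellow_Cabs and its original column names; only the two affected columns are shown, and the remaining columns would be carried through just as in the full transform later in this tutorial.

-- Hypothetical clean-up transform: cast the two mis-detected columns to Float
SELECT
  yc.VendorID,
  CAST(yc.tip_amount AS FLOAT) AS tip_amount,
  CAST(yc.tolls_amount AS FLOAT) AS tolls_amount
FROM Yellow_Cabs AS yc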

Create "Green Taxi" Read Connector

Follow the same directions as we did for the Weather and Yellow Cab Read Connectors, but use the following names and configuration settings.

  1. Click + ADD and then click Connector to bring up the Connectors panel and click on the READ icon for S3.

  2. Enter "Green Cabs" as the name of the connector.

  3. Where is Data dialog: Leave as the default Amazon S3

  4. Location:

    • Enter ascend-io-sample-data-read as the bucket name.
    • Select Regex and enter NYC-TAXI/green_cab/2016-0[1-2]/.*.csv as the pattern.
  5. Enter My Creds for Credential Name and your S3 Access Key and Secret Key in the fields provided.

  6. Click TEST CONNECTION to validate S3 bucket settings and permissions.

  7. Parser Settings:

    • Select XSV as the FORMAT TO USE
    • Select Comma as the FIELD DELIMITER
    • For HEADER ROW check the Files have a header row checkbox.
  8. Schema Settings: Leave unconfigured.

  9. Connector Refresh Schedule:

  • For REFRESH SCHEDULE, select the System default (Once an hour) option.
  10. ASSIGNED PRIORITY: Leave unconfigured as 0 (default)

  11. Click the CREATE button.

When it's complete you should see the Read Connector on your Dataflow canvas (as shown below). As mentioned previously, you can preview and inspect the Green Cabs Read Connector by clicking on it: the Records tab shows its contents, and the Partitions tab shows column stats and job details such as elapsed time.

Creating Transforms

As soon as you have one or more read connectors (or data feeds), you can start creating Ascend Transforms to clean, filter, join, and aggregate your data using SQL. Additionally, you can write PySpark functions for specific compute and machine learning model training purposes.

In this part of the tutorial, we'll create a Transform for each Read Connector we just created. But first, let's explore a neat trick using an Ascend feature called Workspace. Workspace allows you to pin any component to the side of the screen so you can easily refer to it. This can be handy when you're in the middle of writing some SQL in the editor and need to refer to a column name in a source read connector, for example. To pin a component, simply click the "Star" icon on the component you'd like to pin. To later view that component's information or records, simply click the Workspace shelf on the left-hand side of the canvas (examples shown below). Go ahead and give it a try.

Create "Weather" Transform

We need to do some cleanup and prep on the Weather dataset we just ingested: rename some column headings, convert the date to a TIMESTAMP, and filter out all records that have no precipitation. Let's do that using some SQL code in a Transform.

  1. Create the new Transform by right-clicking the Weather Read Connector and selecting Create new transform from the drop-down menu. Alternatively, you can click the + ADD menu and select Transform from there. This will bring up the CREATE NEW TRANSFORM screen (shown below).
  2. The name of the Transform is generated automatically as Weather transform, but you can override it and choose your own name. You may optionally provide a description for this Transform.
  3. Select the Query Type based on the type of Transform you're building. In this part of the tutorial, we're going to use SQL code to do the cleanup, so we'll be selecting SQL. Alternatively, if you wanted to create a PySpark function, you could do that here as well by selecting PySpark.

  4. Selecting SQL will provide you with a simple SELECT statement as boilerplate, but we'll be editing this. To do that we'll use the RAW BUILDER option, which provides a rich editing experience with autocomplete suggestions. Alternatively, you may use the QUERY BUILDER, which offers a GUI-based, wizard-driven approach for those with novice SQL skills.

  5. Enter the following code into the RAW BUILDER editor.

-- Clean up the Weather data: standardize the precipitation column name, add a
-- TIMESTAMP column, and keep only days with recorded precipitation
SELECT
  ww.Date,
  ww.Weather,
  ww.Precipitation AS precipitation,
  TIMESTAMP(ww.Date) AS weather_date_ts
FROM Weather AS ww
WHERE LENGTH(TRIM(ww.Precipitation)) != 0
  6. Ascend will automatically detect any SQL syntax errors ahead of actually creating the Transform.

  7. ASSIGNED PRIORITY: Leave unconfigured as 0 (default)

  8. Click the CREATE button and Ascend will begin processing your SQL query immediately. When it's finished it should look like this.
  9. You can preview and inspect the results of the Transform by clicking on it: the Records tab shows its contents, and the Partitions tab shows column stats and job details such as elapsed time. If any errors are detected, you can view them on the Errors and Warnings tab.

Create "Yellow Cab" Transform

We need to do some cleanup and prep on the Yellow Cab dataset we just ingested: rename some column headings, convert the pickup and dropoff times to TIMESTAMPs, add new columns identifying these as "yellow" cab trips, and filter out all bad records that have no recorded distance. Let's do that with some SQL in a Transform, using the same set of steps we used to create the Weather Transform.

  1. Create the new Transform by right-clicking the Yellow Cabs Read Connector and selecting Create new transform from the drop-down menu.

  2. The name of the Transform is generated automatically as Yellow Cabs transform.

  3. Select SQL as the Query Type.

  4. Replace the boilerplate code in the RAW BUILDER editor with the following:

-- Standardize Yellow Cab columns, convert times to TIMESTAMP, tag each row as
-- a 'yellow' cab, and drop trips with no recorded distance
SELECT
  i.VendorID,
  TIMESTAMP(i.tpep_pickup_datetime) AS pickup_datetime,
  TIMESTAMP(i.tpep_dropoff_datetime) AS dropoff_datetime,
  i.RatecodeID AS ratecodeID,
  i.passenger_count AS passenger_count,
  i.trip_distance AS trip_distance,
  i.fare_amount AS fare_amount,
  i.mta_tax AS mta_tax,
  i.tip_amount AS tip_amount,
  i.tolls_amount AS tolls_amount,
  i.improvement_surcharge AS improvement_surcharge,
  i.total_amount AS total_amount,
  i.payment_type AS payment_type,
  'yellow' AS cab_type,
  DATE_TRUNC('DAY', TIMESTAMP(i.tpep_pickup_datetime)) AS pickup_date
FROM
  Yellow_Cabs AS i
WHERE
  i.trip_distance > 0
  5. ASSIGNED PRIORITY: Leave unconfigured as 0 (default)

  6. Click the CREATE button and Ascend will begin processing your SQL query immediately. When it's finished it should look like this.

  7. Again, you can preview and inspect the Transform by clicking on it: the Records tab shows its contents, and the Partitions tab shows column stats and job details such as elapsed time.

Create "Green Cab" Transform

We need to do some cleanup and prep on the Green Cab dataset we just ingested: rename the column headings to match the ones in the Yellow Cabs transform, convert the pickup and dropoff times to TIMESTAMPs, add new columns identifying these as "green" cab trips, and filter out all bad records that have no recorded distance. Let's do that with some SQL in a Transform, using the same set of steps we used to create the Weather Transform.

  1. Create the new Transform by right-clicking the Green Cabs Read Connector and selecting Create new transform from the drop-down menu.

  2. The name of the Transform is generated automatically as Green Cabs transform.

  3. Select SQL as the Query Type.

  4. Replace the boilerplate code in the RAW BUILDER editor with the following:

-- Standardize Green Cab columns to match the Yellow Cabs transform, tag each
-- row as a 'green' cab, and drop trips with no recorded distance
SELECT
  i.VendorID,
  TIMESTAMP(i.lpep_pickup_datetime) AS pickup_datetime,
  TIMESTAMP(i.Lpep_dropoff_datetime) AS dropoff_datetime,
  i.RateCodeID AS ratecodeID,
  i.Passenger_count AS passenger_count,
  i.Trip_distance AS trip_distance,
  i.Fare_amount AS fare_amount,
  i.MTA_tax AS mta_tax,
  i.Tip_amount AS tip_amount,
  i.Tolls_amount AS tolls_amount,
  i.improvement_surcharge AS improvement_surcharge,
  i.Total_amount AS total_amount,
  i.Payment_type AS payment_type,
  "green" AS cab_type,
  TIMESTAMP(SUBSTR(i.lpep_pickup_datetime, 1, 10)) AS pickup_date
FROM Green_Cabs as i
WHERE i.trip_distance > 0
  5. ASSIGNED PRIORITY: Leave unconfigured as 0 (default)

  6. Click the CREATE button and Ascend will begin processing your SQL query immediately. When it's finished it should look like this.

  7. Again, you can preview and inspect the results of the Transform by clicking on it: the Records tab shows its contents, and the Partitions tab shows column stats and job details such as elapsed time.

Create "All Cabs" Transform

Now we want to combine the Yellow Cab and Green Cab data into a single dataset using a UNION. Let's do that with some SQL in a Transform, using the same set of steps we used to create the Yellow Cabs Transform.

  1. Create the new Transform by right-clicking the Yellow Cabs Transform and selecting Create new transform from the drop-down menu.

  2. Overwrite the name of this transform with All Cabs transform.

  3. Select SQL as the Query Type.

  4. Replace the boilerplate code in the RAW BUILDER editor with the following:

-- Combine the cleaned Green and Yellow Cab datasets into a single table
SELECT * FROM Green_Cabs_transform
UNION ALL
SELECT * FROM Yellow_Cabs_transform

  5. ASSIGNED PRIORITY: Leave unconfigured as 0 (default)

  6. Click the CREATE button and Ascend will begin processing your SQL query immediately. When it's finished it should look like this.
  7. Again, you can preview and inspect the results of the Transform by clicking on it: the Records tab shows its contents, and the Partitions tab shows column stats and job details such as elapsed time.

Create "NYC Cab Rides on Rainy Days" Transform

Now let's do something interesting: join the result datasets from the All Cabs and Weather Transforms on date, and filter for rides taken when the weather was rainy and the number of passengers was greater than 3. Let's do that with SQL in a Transform, but this time we'll use the QUERY BUILDER.

  1. Create the new Transform by right-clicking the All Cabs Transform and selecting Create new transform from the drop-down menu.

  2. Overwrite the name of this transform with NYC Cab Rides on Rainy Days Transform.

  3. Select SQL as the Query Type.

  4. Click QUERY BUILDER and copy and paste the following into the appropriate dialog boxes on the QUERY BUILDER screen. You'll notice as you type that autocompletion will offer suggestions you can accept immediately. (The equivalent raw SQL is shown after the field values below.)

FROM: 
All_Cabs_transform AS cabs

INNER JOIN: 
Weather_transform AS w

ON: 
cabs.pickup_date = w.weather_date_ts

SELECT:
cabs.VendorID,
cabs.pickup_datetime,
cabs.dropoff_datetime,
cabs.passenger_count,
cabs.trip_distance,
cabs.fare_amount,
cabs.mta_tax,
cabs.tip_amount,
cabs.tolls_amount,
cabs.improvement_surcharge,
cabs.total_amount,
cabs.payment_type,
cabs.cab_type,
cabs.pickup_date,
w.Date,
w.Weather,
w.precipitation

WHERE: 
w.Weather = 'Rainy' and cabs.passenger_count > 3
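For reference, the query assembled from these Query Builder fields should be equivalent to the following SQL, which you could also paste directly into the RAW BUILDER:

-- Join cab rides to weather by pickup date; keep rainy days with more than 3 passengers
SELECT
  cabs.VendorID,
  cabs.pickup_datetime,
  cabs.dropoff_datetime,
  cabs.passenger_count,
  cabs.trip_distance,
  cabs.fare_amount,
  cabs.mta_tax,
  cabs.tip_amount,
  cabs.tolls_amount,
  cabs.improvement_surcharge,
  cabs.total_amount,
  cabs.payment_type,
  cabs.cab_type,
  cabs.pickup_date,
  w.Date,
  w.Weather,
  w.precipitation
FROM All_Cabs_transform AS cabs
INNER JOIN Weather_transform AS w
  ON cabs.pickup_date = w.weather_date_ts
WHERE w.Weather = 'Rainy' AND cabs.passenger_count > 3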
  5. ASSIGNED PRIORITY: Leave unconfigured as 0 (default)

  6. Click the CREATE button and Ascend will begin processing your SQL query immediately. When it's finished it should look like this.

  7. Again, you can preview and inspect the results of the Transform by clicking on it: the Records tab shows its contents, and the Partitions tab shows column stats and job details such as elapsed time.
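If you'd like to experiment further (this isn't required for the tutorial), you could layer one more aggregating transform on top of this result, for example to compare rainy-day ride volume and tipping across cab types. Here is a sketch, assuming the transform above keeps the generated name NYC_Cab_Rides_on_Rainy_Days_Transform:

-- Optional follow-on transform (sketch): rainy-day rides and average tip by cab type
SELECT
  r.cab_type,
  COUNT(*) AS rides,
  AVG(r.tip_amount) AS avg_tip
FROM NYC_Cab_Rides_on_Rainy_Days_Transform AS r
GROUP BY r.cab_type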

Creating Write Connectors

The last step in our Dataflow is to create a Write Connector that writes the results of the NYC Cab Rides on Rainy Days Transform you just created to a destination file. For this, you are going to write to your own S3 account; the public bucket we used earlier in this tutorial does not have WRITE access. Every object in Amazon S3 is stored in a bucket, so before you can store data in Amazon S3, you must create an S3 bucket. If you don't have an S3 account you can write to, you can skip to the next section of this tutorial, where we'll create a Data Feed.

Create "NYC Cab Rides on Rainy Days" Write Connector

  1. Create the new Write Connector by right-clicking the NYC Cab Rides on Rainy Days Transform and selecting Create new write connector from the drop-down menu. Alternatively, you can click the + ADD menu and select Connector to bring up the Connectors panel, which lists all of the pre-built Write Connectors; select the S3 Write Connector from there.

This will bring up the CREATE NEW WRITE CONNECTOR screen (shown below).

  2. The name of the Write Connector is generated automatically as NYC Cab Rides on Rainy Days Transform Connector. You can choose to override this with a different name.
  3. For Location (WHERE IS DATA) select Amazon S3.
  4. Location:
    • S3 bucket: Enter a name for the S3 bucket you're going to use from your S3 account.
    • Prefix: Enter an S3 folder path within your S3 bucket.
    • Pattern: Leave empty (for this example).
    • Suffix: Enter .csv as the file suffix.

❗️

WARNING!

Any data under the object prefix that is not being propagated by the upstream transform will automatically be deleted. Therefore, be sure the bucket and object prefix path don't already contain any pre-existing data, as it will be overwritten.

📘

S3 Write Location Details

  • Bucket: The S3 bucket name (e.g. my_data_bucket). Do not put in any folder information in this box.
  • Object Prefix: The S3 folders hierarchical prefix (e.g. good_data/my_awesome_table). Do not include any leading forward slashes.
  • Partition Folder Pattern: The folder pattern that will be generated based on the values from the upstream partition column (e.g. you can use ${at_hour_ts(yyyy-MM-dd/HH)} where column at_hour_ts is a partition column from the upstream transform).
  5. Unselect Partition Output Files since we don't have a partitioned data set for this tutorial.

  6. Leave Manifest File unselected.

  7. Enter My Creds for Credential Name and your S3 Access Key and Secret Key in the fields provided.

  8. Click TEST CONNECTION to validate S3 bucket settings and permissions.

  9. Format Settings:

    • Select XSV as the FORMAT TO USE to write the data in a .csv-compatible format.
    • Select Comma as the FIELD DELIMITER
    • Select CR+LF as the LINE TERMINATOR
    • For HEADER ROW check the Files have a header row checkbox to indicate that the header row will contain the column names.
    • COMPRESSION METHOD: Leave unconfigured. (optional).
  10. ASSIGNED PRIORITY: Leave unconfigured as 0 (default)

  11. Click the CREATE button and Ascend will write the file out to S3.

When it's complete you should see the Write Connector on your Dataflow canvas (as shown below).

Awesome! You've now seen a very simple example of a Dataflow which contains a few Read Connectors, several Transforms, and a Write Connector. Let's add one final component to our Dataflow in the final step.

Creating Data Feed Connectors

The final step in our Dataflow is to create a Data Feed. Data Feeds are powerful constructs designed to let Dataflows and Data Services share live data with each other. This means you can create a very complex Dataflow that outputs a single result table and share that table with others in your organization (say, in a different Data Service) without having to give them access to your actual Dataflow. Existing Data Feed Connectors can then be connected as inputs into other Dataflows.
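For example, a team in another Data Service could connect to the Data Feed we're about to create and build their own transforms on top of it without ever touching this Dataflow. Here's a sketch of such a downstream transform, assuming the feed is wired into their Dataflow under the name NYC_Cab_Rides_on_Rainy_Days_Data_Feed:

-- Downstream Dataflow (sketch): total rainy-day fares per day, read from the shared Data Feed
SELECT
  f.pickup_date,
  SUM(f.total_amount) AS total_fares
FROM NYC_Cab_Rides_on_Rainy_Days_Data_Feed AS f
GROUP BY f.pickup_date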

Create "Rainy Day Cab Rides" Data Feed Connector

  1. Create the new Data Feed Connector by right-clicking the NYC Cab Rides on Rainy Days Transform and selecting Create new data feed from the drop-down menu. Alternatively, you can click + ADD menu and select Data Feed and then Create to open the CREATE NEW DATA FEED panel (shown below).
  2. The name of the Data Feed Connector is generated automatically as NYC Cab Rides on Rainy Days Transform data feed, but we're going to override this with the name NYC Cab Rides on Rainy Days Data Feed.

  3. The upstream is already pre-selected for you as NYC_Cab_Rides_on_Rainy_Days_Transform.

  4. You can choose to apply access permissions here if you want, but for now we'll make it available to everyone in our environment, so we'll leave the default "All teams in all Data Services can access this Data feed".

  5. Click CREATE and that's it. You should see the Data Feed Connector on your Dataflow canvas (as shown below). You now have a dedicated data spigot containing a very specific data set that you can allow others to subscribe to and build their Dataflows on.

Wrapping it up

Congratulations on creating your first orchestrated Dataflow with Ascend. Hopefully you now have a good sense of what Ascend can do for you, and you're ready to build your own Dataflows. You've seen how easy it is to create Read Connectors, use SQL inside Transforms, write out results with Write Connectors, and make those results available and reusable as Data Feeds. Here's what your Dataflow should look like.

In the next tutorial, we'll take the Data Feed you just created, use it in an entirely new Dataflow, and introduce the use of PySpark in some transforms as well.
