Data Plane Configuration
Detailed descriptions of the specific parameters required for configuring Data Services in Ascend Cloud.
The Data Plane Configuration pane found within Data Service Settings lets you tailor Data Services to your preferred data cloud. Each data cloud has specific parameters.
BigQuery
Field | Required | Description |
---|---|---|
BigQuery Connection | Required | Choose a BigQuery connection configured for use by the Data Service. This is the processing connection used by Data Service. The connection cannot be changed once set. If you have not created one yet, go to Connections to create a new BigQuery Connection. |
Dataset Template | Optional | The template used to create a dataset. By default, the template is ascend\_\_{{data_service_id}} . |
Table Template | Optional | The template used to create a table. By default, the template is {{dataflow_id}}\_\_{{component_id}} . |
Spark Cluster Pool ID | Optional | Choose a Spark cluster pool for the Data Service. This is the processing Spark cluster pool used by the Data Service (unless overridden by Dataflow settings). |
Default Run Mode | Optional | Choose the default run mode for new components in this Data Service. |
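The Dataset and Table templates above use `{{placeholder}}` tokens that are filled in from the Data Service, Dataflow, and component IDs. A minimal sketch of that substitution, assuming simple Jinja-style token replacement (the `render_template` helper is illustrative, not Ascend's API):

```python
import re

def render_template(template: str, context: dict) -> str:
    """Replace each {{placeholder}} token with its value from context."""
    return re.sub(r"\{\{\s*(\w+)\s*\}\}", lambda m: context[m.group(1)], template)

# Default templates from the table above
dataset = render_template("ascend__{{data_service_id}}",
                          {"data_service_id": "sales"})
table = render_template("{{dataflow_id}}__{{component_id}}",
                        {"dataflow_id": "daily_etl", "component_id": "orders"})
print(dataset)  # ascend__sales
print(table)    # daily_etl__orders
```

With the defaults, a component `orders` in Dataflow `daily_etl` of Data Service `sales` would land in dataset `ascend__sales` as table `daily_etl__orders`.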
Databricks
Databricks Connection Configuration
To use Unity Catalog with the Databricks Data Plane, the all-purpose cluster configured for the Databricks Connection must be set up as Single user. Indicate that cluster's endpoint ID in the Execution Context for SQL Endpoint ID field of the Databricks Connection used for the Data Plane.
Field | Required | Description |
---|---|---|
Connection | Required | Choose a Databricks connection for the Data Service. This is the processing connection used by the Data Service. The connection cannot be changed once set. If you have not created one yet, go to Connections to create a new Databricks Connection. |
SQL Endpoint ID Override | Optional | The ID of the Databricks SQL Warehouse. For more information, see the Databricks documentation. |
Databricks Cluster ID Override | Optional | To override the cluster used in an existing Databricks Connection, enter the cluster ID (cluster_id) here. |
Database Template | Optional | The template used to create a database. By default, the template is {{data_service_id}}\_\_{{dataflow_id}} . |
Table Template | Optional | The template used to create a table. By default, the template is {{component_id}} . |
Catalog | Optional | When the catalog field is empty, the data is assumed to exist or be created in the default hive_metastore . There are some restrictions on the type of all-purpose cluster that can be enabled for Unity Catalog: for Ascend to access Unity Catalog, the all-purpose cluster within Databricks must be configured as Single User. |
Spark Cluster Pool ID | Optional | Choose a Spark cluster pool for the Data Service. This is the processing Spark cluster pool used by the Data Service (unless overridden by Dataflow settings). |
Default Run Mode | Optional | Choose the default run mode for new components in this Data Service. |
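The two override fields above take precedence over the values configured on the Databricks Connection itself. A hedged sketch of that fallback behavior (the class and field names are illustrative, not Ascend's API):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DatabricksDataPlaneSettings:
    """Illustrative model: connection-level values plus optional overrides."""
    connection_sql_endpoint_id: Optional[str] = None
    connection_cluster_id: Optional[str] = None
    sql_endpoint_id_override: Optional[str] = None
    cluster_id_override: Optional[str] = None

    def effective_sql_endpoint_id(self) -> Optional[str]:
        # Override wins when set; otherwise fall back to the Connection's value.
        return self.sql_endpoint_id_override or self.connection_sql_endpoint_id

    def effective_cluster_id(self) -> Optional[str]:
        return self.cluster_id_override or self.connection_cluster_id

settings = DatabricksDataPlaneSettings(
    connection_sql_endpoint_id="abc123",
    connection_cluster_id="0101-123456-cluster1",
    sql_endpoint_id_override="xyz789",
)
print(settings.effective_sql_endpoint_id())  # xyz789 (override wins)
print(settings.effective_cluster_id())       # 0101-123456-cluster1 (no override set)
```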
Snowflake
Field | Required | Description |
---|---|---|
Snowflake Connection | Required | Choose a Snowflake Connection configured for use by the Data Service. This is the processing connection used by the Data Service. The connection cannot be changed once set. If you have not created one yet, go to Connections to create a new Snowflake Connection. |
Warehouse Override | Optional | If you do not want to use the default warehouse from the Snowflake Connection, enter the warehouse you'd like to use here. |
Read Connector Warehouse | Optional | The warehouse to use for ingesting data. The Read Connector warehouse may only be used for ingesting data and cannot be used for transformation or data delivery. |
Metadata Warehouse | Optional | The warehouse Ascend uses to perform metadata tasks. |
Database Template | Optional | The template used to create a database. By default, the template is {{data_service_id}} . |
Schema Template | Optional | The template used to create the schema. By default, the template is {{dataflow_id}} . |
Table Template | Optional | The template used to create a table. By default, the template is {{component_id}} . |
Data Cleanup Strategy | Optional | Choose a data cleanup strategy applied when a component, Dataflow, or Data Service is deleted. The default is Delete Tables; other options are Archive Tables and Keep Tables. |
Default Cluster Pool | Optional | Choose a default Spark cluster pool for the Data Service. This is the processing Spark cluster pool used by the whole Data Service (unless overridden by Dataflow settings). |
Read Connector Cluster Pool | Optional | This cluster pool is used by the Data Service for data ingestion workloads. If a default cluster pool is selected above, the Read Connector cluster pool will default to the selection. |
Interactive Cluster Pool | Optional | This cluster pool is used by the Data Service for interactive workloads such as Transforms. If a default cluster pool is selected above, the Interactive cluster pool will default to the selection. |
Default Run Mode | Optional | Choose the default run mode for new components in this Data Service. |
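Combining the three default templates above yields the fully qualified Snowflake name for a component's table. A small illustrative sketch, assuming simple Jinja-style token substitution (the helper is hypothetical, not Ascend's API):

```python
import re

# Default templates from the table above
DATABASE_TEMPLATE = "{{data_service_id}}"
SCHEMA_TEMPLATE = "{{dataflow_id}}"
TABLE_TEMPLATE = "{{component_id}}"

def qualified_table_name(context: dict) -> str:
    """Render database.schema.table from the three default templates."""
    def render(template: str) -> str:
        return re.sub(r"\{\{\s*(\w+)\s*\}\}", lambda m: context[m.group(1)], template)
    return ".".join(render(t) for t in (DATABASE_TEMPLATE, SCHEMA_TEMPLATE, TABLE_TEMPLATE))

print(qualified_table_name({"data_service_id": "ANALYTICS",
                            "dataflow_id": "DAILY",
                            "component_id": "ORDERS"}))
# ANALYTICS.DAILY.ORDERS
```

So with the defaults, a component `ORDERS` in Dataflow `DAILY` of Data Service `ANALYTICS` maps to database `ANALYTICS`, schema `DAILY`, table `ORDERS`.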
Spark with Iceberg
Field | Required | Description |
---|---|---|
Iceberg Connection | Required | Choose an Iceberg connection for the Data Service. This is the processing connection used by the Data Service. The connection cannot be changed once set. If you have not created one yet, go to Connections to create a new Iceberg Connection. |
Iceberg Root Path | Optional | Where to store the Iceberg data. Include a path to a folder for Ascend to access. If you are using an existing Iceberg Connection, this field will act as an override to a new folder. |
Default Cluster Pool | Optional | Choose a default Spark cluster pool for the Data Service. This is the processing Spark cluster pool used by the whole Data Service (unless overridden by Dataflow settings). |
Read Connector Cluster Pool | Optional | This cluster pool is used by the Data Service for data ingestion workloads. If a default cluster pool is selected above, the Read Connector cluster pool will default to the selection. |
Interactive Cluster Pool | Optional | This cluster pool is used by the Data Service for interactive workloads such as Transforms. If a default cluster pool is selected above, the Interactive cluster pool will default to the selection. |
Default Run Mode | Optional | Choose the default run mode for new components in this Data Service. |
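As the table notes, the Read Connector and Interactive cluster pools fall back to the Default Cluster Pool when they are not set. A minimal sketch of that resolution (the function and key names are illustrative):

```python
from typing import Optional

def resolve_cluster_pools(default_pool: Optional[str],
                          read_connector_pool: Optional[str],
                          interactive_pool: Optional[str]) -> dict:
    """Specialized pools default to the Default Cluster Pool when unset."""
    return {
        "read_connector": read_connector_pool or default_pool,
        "interactive": interactive_pool or default_pool,
    }

# Only the interactive pool is set explicitly; ingestion falls back to the default.
print(resolve_cluster_pools("small-pool", None, "large-pool"))
# {'read_connector': 'small-pool', 'interactive': 'large-pool'}
```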