Component Logs

Ascend components retain Spark logs for each partition of data processed. The UI surfaces these logs under the "Partitions" tab, where developers can view them in the browser or download the files. Developers can also fetch and download logs programmatically through the Ascend SDK.

Logs Availability

Ascend manages Spark logs for Read Connectors and Spark transformations (SQL, PySpark, Scala/Java). Write Connectors and Read Connectors (Legacy) do not currently support logging.

Ascend also surfaces logs for in-progress Spark jobs; developers can view these logs and refresh to see the most up-to-date output.

Accessing Logs from the UI

  1. Locate the component on the Dataflow whose logs you wish to view.
  2. Open the component detail view.
  3. Navigate to the "Partitions" tab.
  4. Under the "Logs" column, click on either View to open the logs in a new browser tab or Download to download a folder of the log files.

Instrumenting Logs in PySpark

Developers can instrument their own logging in a PySpark transform and have the log statements appear in the partition logs. This logging must go through the Ascend logging interface in order to add the correct logging labels and route to the correct partition.

A small example:

from pyspark.sql import DataFrame, SparkSession
from typing import List
import ascend.log as log

def transform(spark_session: SparkSession, inputs: List[DataFrame], credentials=None) -> DataFrame:
    df = inputs[0]
    log.info("I am logging!")
    return df

The log module provides functions compatible with the Python glog package; see the Ascend Log Module Reference below for the full list.

The logger stores the required labels in thread-local variables. By default, logs from the Spark driver are collected, but logs from the executors are not. The labels can be propagated to executors by using get_log_label and set_log_label and threading the label through. However, instead of logging from Spark executors, it is often better to append a result column to the record outputs, which keeps log volume constrained.

Ascend Log Module Reference

| Function | Description | Example |
| --- | --- | --- |
| `debug` | Log a message with level DEBUG. | `debug("debug statement")` |
| `info` | Log a message with level INFO. | `info("info statement")` |
| `warning` | Log a message with level WARNING. | `warning("warning statement")` |
| `warn` | Log a message with level WARNING (alias for `warning`). | `warn("warning statement")` |
| `error` | Log a message with level ERROR. | `error("error statement")` |
| `exception` | Log a message with level EXCEPTION. | `exception("exception statement")` |
| `fatal` | Log a message with level FATAL. | `fatal("fatal statement")` |
| `log` | Log a message at a given level; takes the logging level as the first argument and the message as the second. Log levels are found in the native Python `logging` module. | `log(logging.INFO, "info statement")` |

The module also provides a function to set the severity threshold of messages to emit. Logs with a severity higher than this threshold will be emitted; log levels are found in the native Python `logging` module.
