Ascend Developer Hub

Databricks

Reading data into Databricks Spark using Structured Data Lake

sc._jsc.hadoopConfiguration().set("fs.s3a.endpoint", "https://s3.ascend.io")
sc._jsc.hadoopConfiguration().set("fs.s3n.awsAccessKeyId", 'YOUR ACCESS KEY ID')
sc._jsc.hadoopConfiguration().set("fs.s3n.awsSecretAccessKey", 'YOUR SECRET')
sc._jsc.hadoopConfiguration().set("fs.s3a.attempts.maximum", "1")
data = spark.read.parquet("s3a://trial/Getting_Started_with_Ascend/IoT_Device_and_Weather_Analysis/K_Means_Cluster")
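To confirm the read succeeded, you can inspect the loaded DataFrame. This is a minimal sketch using standard PySpark calls; `display()` is Databricks-specific.

# Inspect the inferred Parquet schema and preview a few rows.
data.printSchema()
data.show(5, truncate=False)

# In a Databricks notebook, render the DataFrame as an interactive table.
display(data)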
