
Apache Spark

Reading data into Databricks Spark using Structured Data Lake

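The snippet below configures the Hadoop S3A connector in a Databricks notebook to talk to Ascend's Structured Data Lake endpoint, supplies your Ascend access key and secret, and reads a component's records as Parquet. Replace the access key ID and secret placeholders with your own credentials.
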
# Point the Hadoop S3A connector at Ascend's Structured Data Lake endpoint
sc._jsc.hadoopConfiguration().set("fs.s3a.endpoint", "https://s3.ascend.io")
# Authenticate with the S3A credential keys (fs.s3n.* keys are not read for s3a:// paths)
sc._jsc.hadoopConfiguration().set("fs.s3a.access.key", "YOUR ACCESS KEY ID")
sc._jsc.hadoopConfiguration().set("fs.s3a.secret.key", "YOUR SECRET")
# Fail fast rather than retrying failed requests
sc._jsc.hadoopConfiguration().set("fs.s3a.attempts.maximum", "1")
# Read the component's records as Parquet via the s3a:// scheme
data = spark.read.parquet("s3a://trial/Getting_Started_with_Ascend/IoT_Device_and_Weather_Analysis/K_Means_Cluster")
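
Once the read succeeds, the result is an ordinary Spark DataFrame, so the usual DataFrame and Spark SQL APIs apply. A minimal sketch of inspecting the data follows; the temporary view name k_means_cluster is illustrative, not part of the Ascend API.

# Inspect the schema and preview a few rows of the loaded DataFrame
data.printSchema()
data.show(5, truncate=False)

# Optionally expose the DataFrame to Spark SQL (view name is illustrative)
data.createOrReplaceTempView("k_means_cluster")
spark.sql("SELECT COUNT(*) FROM k_means_cluster").show()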
