Amazon S3

Learn the required and optional properties for creating an Amazon S3 Connection, Credential, Read Connector, and Write Connector.


Prerequisites

  • Access Credentials
  • Data location on AWS S3
  • Data Schema (column names and column type)

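Before creating the connection, it can help to confirm each prerequisite outside of Ascend. The sketch below is a minimal example using boto3 (the AWS SDK for Python); the bucket name, prefix, and CSV assumption are placeholders to replace with your own values.

```python
import boto3

# Placeholder values -- replace with your own bucket and data location.
BUCKET = "ascend-io-sample-read-data"
PREFIX = "telemetry/2024/"

# 1. Access credentials: boto3 picks up the AWS access key ID and secret
#    access key from the environment or ~/.aws/credentials; a successful
#    STS call confirms they are valid.
identity = boto3.client("sts").get_caller_identity()
print("Authenticated as:", identity["Arn"])

# 2. Data location: list a few objects under the prefix to confirm the
#    bucket and path exist and the credentials can read them.
s3 = boto3.client("s3")
resp = s3.list_objects_v2(Bucket=BUCKET, Prefix=PREFIX, MaxKeys=5)
for obj in resp.get("Contents", []):
    print(obj["Key"], obj["Size"])

# 3. Data schema: for CSV data, the header row of one file gives the
#    column names you will need when configuring the connector.
if resp.get("Contents"):
    first_key = resp["Contents"][0]["Key"]
    head = s3.get_object(Bucket=BUCKET, Key=first_key)["Body"].read(4096)
    print("Header:", head.decode("utf-8", errors="replace").splitlines()[0])
```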
Connection Properties

The following table describes the fields available when creating a new Amazon S3 Connection. Create a connection using the information below and these step-by-step instructions.

| Field | Required | Description |
| --- | --- | --- |
| Access Type | Required | Whether the connection is Read-Only, Write-Only, or Read-Write. |
| Connection Name | Required | Input your desired name. |
| Bucket | Optional | The bucket name, such as ascend-io-sample-read-data. |
| S3 API Endpoint | Optional | An alternate endpoint for S3-compatible storage that is not Amazon S3. |
| S3 Region | Optional | The region name, such as us-west-2. |
| Requires Credentials | Optional | Check this box to create a new credential or select an existing credential. |
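For intuition on the S3 API Endpoint and S3 Region fields, they play the same role as boto3's endpoint_url and region_name parameters. The sketch below is only an illustration; the MinIO-style endpoint URL is a hypothetical placeholder for whatever S3-compatible service you use.

```python
import boto3

# Plain Amazon S3: only the region is needed.
s3 = boto3.client("s3", region_name="us-west-2")

# S3-compatible storage (for example, a self-hosted MinIO deployment):
# point the client at the service's own endpoint instead of Amazon S3.
s3_compatible = boto3.client(
    "s3",
    region_name="us-west-2",
    endpoint_url="https://minio.example.internal:9000",  # hypothetical URL
)

print(s3.meta.endpoint_url)             # default AWS endpoint for the region
print(s3_compatible.meta.endpoint_url)  # the custom S3-compatible endpoint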

Credential Properties

The following table describes the fields available when creating a new Amazon S3 credential.

| Field Name | Required | Description |
| --- | --- | --- |
| Credential Name | Required | The name used to identify this credential. The credential will be available as a selection for future use. |
| Credential Type | Required | This field will populate automatically. |
| AWS Access Key ID | Required | The access key ID for your account. |
| AWS Secret Access Key | Required | The secret access key for your account. |

Read Connector Properties

The following table describes the fields available when creating a new Amazon S3 Read Connector. Create a new Read Connector using the information below and these step-by-step instructions.

| Field Name | Required | Description |
| --- | --- | --- |
| Name | Required | Provide a name for your connector. We recommend using lowercase with underscores in place of spaces. |
| Description | Optional | Describes the connector. We recommend providing a description if you are ingesting information from the same source multiple times for different reasons. |
| Bucket | Required | Name of the S3 bucket. |
| Object Pattern Matching | Required | The pattern strategy used to identify eligible files (see the pattern-matching sketch after this table). Glob applies simple wildcard pattern matching; Match matches the pattern exactly, character for character; Prefix matches object names that begin with the specified prefix; Regex matches object names against a regular expression. |
| Parser | Required | We support several data formats: Avro, CSV, Excel, JSON, ORC, Parquet, Python, and Text. See Blob Storage Read Connector Parsers for more information about the CSV, Excel, JSON, and Python parsers. |
| Path Delimiter | Optional | Example: a newline (\n) delimited file. |
| Object Aggregation Strategy | Required | Currently available strategies are Adaptive, Leaf Directory, and Prefix Regex Match. |
| Data Replication Strategy | Optional | Defines the data replication strategy. If left unselected, the default replicates all source changes. See Replication Strategies for Blob Store. |
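The four pattern strategies behave much like ordinary glob, exact, prefix, and regular-expression matching. The sketch below approximates them with Python's fnmatch and re modules against a few hypothetical object keys; it is an illustration of the idea, not Ascend's exact implementation (for instance, fnmatch's * also crosses "/" boundaries).

```python
import fnmatch
import re

# Hypothetical object keys in the bucket.
keys = [
    "events/2024-01-01.csv",
    "events/2024-01-02.csv",
    "events/archive/2023-12-31.csv",
    "events/readme.txt",
]

pattern_glob   = "events/*.csv"                      # Glob: wildcard matching
pattern_match  = "events/2024-01-01.csv"             # Match: exact key
pattern_prefix = "events/2024-"                      # Prefix: key starts with
pattern_regex  = r"events/\d{4}-\d{2}-\d{2}\.csv$"   # Regex: regular expression

print([k for k in keys if fnmatch.fnmatch(k, pattern_glob)])
print([k for k in keys if k == pattern_match])
print([k for k in keys if k.startswith(pattern_prefix)])
print([k for k in keys if re.search(pattern_regex, k)])
```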

Write Connector Properties

The following table describes the fields available when creating a new Amazon S3 Write Connector. Create a new Write Connector using the information below and these step-by-step instructions.

📘

Browse and Select Data lets you select a location within an S3 bucket to write data. This will automatically fill in the Bucket and Output Directory fields.

| Field Name | Required | Description |
| --- | --- | --- |
| Name | Required | Provide a name for your connector. We recommend using lowercase with underscores in place of spaces. |
| Description | Optional | Describes the connector. We recommend providing a description if you are ingesting information from the same source multiple times for different reasons. |
| Upstream | Required | The name of the previous connector the Write Connector will pull data from. |
| Bucket | Required | The bucket the output data will be written to. |
| Output Directory | Required | The directory within the bucket to write the data to. If the folder does not exist, it will be created. |
| Partition Interpolation Template | Optional | Includes a value from the partition profile as part of the output directory name (see the partition-path sketch after this table). For example, to create Hive-style partitioning on a dataset partitioned daily on the timestamp event_ts, specify the pattern dt={{event_ts(yyyy-MM-dd)}}/. |
| Output File Syntax | Optional | A suffix to attach to each file name. By default, Ascend includes the extension of the file format, but you may optionally choose a different suffix. |
| Format | Required | We support several data formats: Avro, CSV, JSON, ORC, Parquet, and Text. See Amazon S3 Write Connector Parsers for more information. |
| Path Delimiter | Optional | Example: a newline (\n) delimited file. |
| Manifest File | Optional | Specify a manifest file that will be updated with the list of files each time they are updated. |
| Write Strategy | Optional | Pick the strategy for writing files to storage. Default (Mirror to Blob Store) keeps the blob store aligned with Ascend, allowing partitions to be inserted, updated, and deleted on the blob store; Ascend Upsert Partitions appends new partitions and updates existing partitions without deleting partitions from the blob store that are no longer in Ascend; Custom Function lets you implement the write logic that Ascend will execute. |
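To illustrate what the dt={{event_ts(yyyy-MM-dd)}}/ template in the Partition Interpolation Template field produces, the sketch below builds a Hive-style output path from one daily partition's event_ts value. The output directory, timestamp, and file name are hypothetical; Ascend performs the interpolation for you, this only shows the resulting layout.

```python
from datetime import datetime, timezone

# Hypothetical values: the Write Connector's output directory and the
# event_ts value of one daily partition.
output_directory = "warehouse/daily_events/"
event_ts = datetime(2024, 1, 15, tzinfo=timezone.utc)

# dt={{event_ts(yyyy-MM-dd)}}/ interpolates the partition timestamp into a
# Hive-style directory name, so each daily partition lands in its own folder.
partition_dir = f"dt={event_ts.strftime('%Y-%m-%d')}/"
print(output_directory + partition_dir + "part-00000.csv")
# -> warehouse/daily_events/dt=2024-01-15/part-00000.csv
```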