Ascend Glossary
This section provides a list of commonly used terms and concepts within the Ascend Platform.
| Term | Definition |
|---|---|
| **A** | |
| Analyzing | A component state indicating that the Ascend Platform is determining the optimal data processing strategy |
| Ascend Dataflow | A single data workflow (a DAG, or Directed Acyclic Graph) defined on the Ascend Platform, consisting of Connectors, Transforms, and Data Feeds |
| Ascend Platform | An Autonomous Dataflow Platform that enables users to self-serve and iterate on data in minutes, not weeks. With the platform, users can discover and trace existing Dataflows; self-serve as they build, iterate on, and enrich Dataflows; and collaborate with their team and other organizations to share and reuse Dataflows |
| Ascend Documentation | Online documentation covering the query expressions supported by the Ascend Platform, along with resources such as tutorials |
| Ascend System Dashboard | An overview of the types and statuses of all components across the system that the User has access to |
| **B** | |
| Business Logic | The logic that defines data transformation in an Ascend Dataflow, usually written as SQL or PySpark within Ascend Transforms, or as code in Parser Functions and Custom Source Functions |
| Business Requirements Document (BRD) | A formal document for capturing the business requirements for an Ascend Data Service |
| **C** | |
| Clean Transform | A Transform on the Ascend Platform that produces clean, canonical data by performing data cleansing, deduplication, and other normalization; usually the first Transform created for each Read Connector |
| Component | Any object on the Ascend Platform, such as a Read or Write Connector, a Transform, or a Data Feed |
| Component Grouping | Multiple components on the Ascend Platform that are grouped together |
| Context Menu | A drop-down menu accessed by right-clicking a component on an Ascend Dataflow |
| **D** | |
| Data Admin | An Ascend user with full access to all Dataflows and their data within a Data Service |
| Data Feed | A mechanism in Ascend by which Dataflows and Data Services can communicate live data with each other |
| Dataflow | Unlike point-to-point pipelines, which can only be queried at their static endpoints, Dataflows treat data as inherently connected and dynamically changing. With Dataflows, you can analyze, iterate, and reuse data and logic at any stage, and always trust that the data remains live and up to date |
| Data Service | The highest-level object in the Ascend hierarchy; a Data Service contains Teams, Users, and Dataflows |
| Development Data | A subset of Production Data suitable for testing and developing a Dataflow |
| **E** | |
| Environment | A unique deployment of the Ascend system |
| Everyone | The default Team, which has no active permissions other than access to the Data Service |
| Export | The process of retrieving the JSON representation of a collection of components from a Dataflow |
| **F** | |
| Full Reduction | A scenario in which a single output partition is produced from all input partitions |
| **I** | |
| Import | The process of adding a collection of components, represented in JSON, to a Dataflow |
| Ingestion | The process of importing data into the Ascend Platform for processing and storage |
| Intercom | The in-application messaging interface for Users to chat directly with Ascend customer support |
| **L** | |
| Listing | A component state indicating that the Ascend Platform is checking for unprocessed files in the designated data location |
| **M** | |
| Maintenance | The process of updating a Dataflow or fixing bugs in it |
| Mapping | A scenario in which each output partition is produced from exactly one input partition |
| Materialized View | The result of a query that is stored in the Ascend Platform; all Transforms in Ascend are materialized |
| Migration | The process of promoting a Development Dataflow to a Production Dataflow |
| **O** | |
| Out-Of-Date | A component state indicating that the Ascend Platform has discovered new work that needs to be completed |
| **P** | |
| Parser Function | A function that lets users embed custom code to extend support to custom data formats |
| Parsing | A component state in which the Ascend Platform is transforming data files into rows of records |
| Partition | A single logical chunk of data processed by Ascend; it can be materialized as a single data fragment, a table or file in the Ascend backend, a file in a Read Connector, or a file in a Write Connector |
| Partitioning | A data partitioning scenario in which each output partition is produced from data in one or more input partitions |
| Permissions | Rules for allowing or denying access to a Data Service or Dataflow |
| Production Data | The entire dataset currently available for ingestion that the production Dataflow requires |
| Production Dataflow | A stable Ascend Dataflow in which fully developed Dataflows run in production mode, integrated with upstream and downstream production systems |
| **R** | |
| Raw Builder | An Ascend query builder that supports SQL syntax highlighting, comprehensive auto-complete, and code formatting |
| Read Connector | An Ascend component that pulls data from an upstream storage location into the Ascend Platform |
| Read Connector Update Interval | An Ascend parameter that controls how frequently the system checks Read Connectors for updates; Users can set this parameter in the UI |
| Reading | A component state in which the Ascend Platform is reading in the source data |
| Reduction | A data partitioning scenario that reduces the number of partitions associated with a Transform |
| Reshaping | A component state indicating that the Ascend Platform is modifying the internal data storage format for optimal processing |
| Running | A component state in which the Ascend Platform is processing work |
| **S** | |
| Smart Partitioning | A feature in Ascend that can automatically bucket data based on a Timestamp-type field in the GROUP BY clause |
| SQL Builder | A query-building tool in Ascend with auto-completion of built-in clauses and keywords to assist users in query building |
| Staging | A component state indicating that the Ascend Platform is persisting data for optimal loading performance |
| Super Admin | An Ascend user who is both a Data Admin and a User Admin |
| Sweep Task | An internal task that automatically deletes all outdated files in file-based Write Connectors |
| Sweeping | A component state indicating that the Ascend Platform is removing outdated data |
| **T** | |
| Team | A group of Users in Ascend who share the same set of permissions and access |
| Transform | A data transformation specified in SQL or PySpark in a Dataflow |
| **U** | |
| Up-To-Date | A component state indicating that the data in the component is fully processed and internally consistent with the other data in the Dataflow |
| User | An individual using the Ascend platform |
| User Admin | An Ascend User responsible for user management within a Data Service |
| **V** | |
| Validation | The process of confirming that the results of a Dataflow conform to the documented business requirements |
| Version Control | A system for tracking changes to a file, typically applicable to Ascend JSON exports |
| **W** | |
| Workspace | A working area where Users can pin reference components to view side by side while working on another component |
| Write Connector | A data integration point that pushes data from the Ascend Platform to a downstream storage location |
| Writing | A component state indicating that the Ascend Platform is writing data to the specified downstream location |
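Several entries above (Mapping, Partitioning, Reduction, Full Reduction) describe how output partitions relate to input partitions. As a minimal illustrative sketch (plain Python, not Ascend code), modeling each partition as a list of records:

```python
# Illustrative sketch of the partition-relationship scenarios above.
# Each inner list represents one partition of records; the function names
# are hypothetical and chosen to mirror the glossary terms.

def mapping(partitions, fn):
    # Mapping: each output partition is produced from exactly one input partition.
    return [[fn(record) for record in p] for p in partitions]

def reduction(partitions, group_size):
    # Reduction: the number of partitions shrinks; each output partition
    # combines data from one or more input partitions.
    return [
        [record for p in partitions[i:i + group_size] for record in p]
        for i in range(0, len(partitions), group_size)
    ]

def full_reduction(partitions):
    # Full Reduction: a single output partition from all input partitions.
    return [[record for p in partitions for record in p]]

inputs = [[1, 2], [3, 4], [5, 6], [7, 8]]
print(len(mapping(inputs, lambda r: r * 10)))  # 4 partitions -> 4 partitions
print(len(reduction(inputs, 2)))               # 4 partitions -> 2 partitions
print(len(full_reduction(inputs)))             # 4 partitions -> 1 partition
```

The fewer partitions downstream work touches, the less of a Dataflow must be recomputed when one input partition changes; that is why the distinction between these scenarios matters.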