10.11.2023 Release Notes
These are the release notes for October 11, 2023.
📰 NEWS 📰
- 🌍 Hello EMEA! Ascend Cloud has now expanded its presence in the EU region (eu-central-1 in AWS)!
- Ascend Hosted customers have always been able to choose their cloud/region.
- We are now welcoming customers (and prospects) to our new Ascend cloud region - open for business!
- Questions? Contact Ascend Support or your Ascend sales representative for more details.
- Interested in Ascend Cloud support in other regions? Contact us and let us know!
✨ FEATURES ✨
- All environments (Gen1/Gen2)
- Incremental backfill - adds the ability to specify a maximum partition row count when using the incremental replication strategy with JDBC and CDATA-based connectors:
- When set, partitions will be limited (approximately) to this row count.
- When a partition completes after hitting the row count limit, an ingest for the next set of rows is triggered immediately.
- This effectively allows large data backfills to be done in an incremental manner rather than one single, long-running job.
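The backfill loop described above can be sketched in a few lines. This is an illustrative Python sketch only, not Ascend's implementation; the names `MAX_PARTITION_ROWS` and `fetch_rows` are hypothetical stand-ins for the new setting and a JDBC/CDATA read:

```python
# Hypothetical sketch of the incremental-backfill loop (not Ascend code).
MAX_PARTITION_ROWS = 1000  # stands in for the new "maximum partition row count" setting

def fetch_rows(offset, limit):
    # Stand-in for a JDBC/CDATA-based read; returns up to `limit` rows.
    source = list(range(2500))  # pretend source table with 2500 rows
    return source[offset:offset + limit]

def incremental_backfill():
    partitions = []
    offset = 0
    while True:
        rows = fetch_rows(offset, MAX_PARTITION_ROWS)
        partitions.append(rows)
        offset += len(rows)
        # A full partition means the row-count limit was hit, so the next
        # ingest is triggered immediately; a short partition means the
        # backfill has caught up with the source.
        if len(rows) < MAX_PARTITION_ROWS:
            break
    return partitions

parts = incremental_backfill()
```

The effect is that a 2,500-row backfill runs as three bounded partitions (1000, 1000, 500) instead of one long-running job.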
- Gen2 environments
- Concurrency controls - adds a new Connection setting to limit the number of concurrently running Ascend tasks spawned to service work on the Connection:
- This can be used to manage the load and/or rate of requests (incurred through parallel tasks) made to external data sources/destinations.
- This setting applies to Connections made via Read and/or Write Connectors.
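Conceptually, the new setting acts like a semaphore around connector tasks: many tasks may be queued, but only a capped number touch the external system at once. A minimal Python sketch under that assumption (`MAX_CONCURRENT_TASKS` and the task body are illustrative, not Ascend APIs):

```python
# Hypothetical sketch of a connection-level concurrency cap (not Ascend code).
import threading
from concurrent.futures import ThreadPoolExecutor

MAX_CONCURRENT_TASKS = 2  # stands in for the new Connection setting
gate = threading.Semaphore(MAX_CONCURRENT_TASKS)

in_flight = 0
peak = 0
lock = threading.Lock()

def connector_task(partition_id):
    global in_flight, peak
    with gate:  # blocks while MAX_CONCURRENT_TASKS tasks are already running
        with lock:
            in_flight += 1
            peak = max(peak, in_flight)
        # ... read from / write to the external source or destination here ...
        with lock:
            in_flight -= 1
    return partition_id

# Even with 8 worker threads, at most MAX_CONCURRENT_TASKS tasks
# hit the external system concurrently.
with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(connector_task, range(20)))
```

This is how a cap like this manages the load and request rate seen by the external data source or destination, independent of how much parallelism the platform itself has available.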
✨ ENHANCEMENTS ✨
- All environments (Gen1/Gen2)
- ⚡ The Ascend Observe Dataflow Timeline reports now run approximately 4 times faster than before! ⚡
- The Spark BQ connector library has been upgraded to version 0.32.2.
- Ascend Cloud and Gen2 environments
- Default Ascend Cluster pools that have not been customized will be set with smaller initial default configurations:
- This change impacts all Gen2 and Ascend Cloud customers, and all new environments going forward.
- Customized Ascend Clusters (default or ones created by the customer) are unaffected.
- This will help customers save money by starting with a smaller cluster that can be scaled up as needed.
- Stay tuned for guidance on recommended cluster sizing!
🔧 BUGFIXES 🔧
- All environments (Gen1/Gen2)
- Fix a bug causing "unexpectedly-failed-to-bind-predicate" errors in some partition filter configurations in Transforms.
- Fix the component detail view display issue (in the Dataflow UI) where the component name didn't refresh immediately after it was updated.
- Remove the deprecated Favorites feature from the Build Panel > Browse in the Dataflow UI.
- Fix a bug that overwrote column descriptions when using the BigQuery Write Connector to write data into an existing table.
- This fix ensures that user-defined metadata remains intact when the table's schema is altered or modified.
- Revert a change made by the upstream vendor (Google) to the Spark BigQuery Connector that caused it to treat BigQuery RECORD REPEATED data types as an invalid/non-standard Map type in Spark.
- This restores the previous behavior of treating this data type as an ARRAY of STRUCT.
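The restored mapping can be illustrated with a toy translator. This is a simplified Python sketch of the type mapping only, not the Spark BigQuery Connector's actual code; the `bq_to_spark_type` helper and its field dictionaries are hypothetical:

```python
# Toy sketch of the restored BigQuery -> Spark type mapping (illustrative only).
def bq_to_spark_type(field):
    """Translate a simplified BigQuery field description to a Spark-style type string."""
    base = {"STRING": "string", "INTEGER": "bigint"}.get(field["type"])
    if field["type"] == "RECORD":
        inner = ", ".join(
            f"{f['name']}: {bq_to_spark_type(f)}" for f in field["fields"]
        )
        base = f"struct<{inner}>"
    if field.get("mode") == "REPEATED":
        # Restored behavior: REPEATED fields become ARRAY, never MAP.
        return f"array<{base}>"
    return base

# A BigQuery RECORD REPEATED field: events (name STRING, ts INTEGER)
events = {
    "name": "events", "type": "RECORD", "mode": "REPEATED",
    "fields": [
        {"name": "name", "type": "STRING"},
        {"name": "ts", "type": "INTEGER"},
    ],
}
spark_type = bq_to_spark_type(events)  # "array<struct<name: string, ts: bigint>>"
```

Under the reverted upstream change, a field like `events` would have been rejected as an invalid Map type; with the restore it maps to an ARRAY of STRUCT as before.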
- Gen2 environments
- All Data Planes
- Prevent an issue where large record preview requests caused Data Plane Manager to restart.
- Fix a bug causing "NoSuchElementException: None.get" errors during Data Plane Manager restart.
- BigQuery Data Plane
- Fix schema inference for SQL transforms that had a reserved SQL keyword as a column name within a nested type (such as "map") in input components.
- Spark Data Plane
- Fix the "Component metadata operation failed" error when renaming an in-progress component or a Data Service.
- Gen1 environments
- Downgrade the Python version in the Ascend Cluster Spark 3.1.0 Docker image from Python 3.9 to Python 3.7, along with the cryptography module.
- The earlier Python 3.9 upgrade was not compatible with older Python libraries; this downgrade resolves the issue.