Using a Custom Image
Prerequisites:
- If you would like to use your own Docker image in Ascend, first work through the previous article, "Preparing a Custom Image", so that the image is properly set up (see the sketch below for an example of why this matters).
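As a quick illustration, here is a minimal sketch of a PySpark transform that depends on a third-party package (the `holidays` library, chosen purely as an example) that would only be importable if your custom image installs it. The `transform` entry-point signature and the `event_date` column are illustrative assumptions, not the exact Ascend interface.

```python
from typing import List

from pyspark.sql import DataFrame, SparkSession
from pyspark.sql import functions as F

import holidays  # example dependency: available only because the custom image installs it


def transform(spark_session: SparkSession, inputs: List[DataFrame]) -> DataFrame:
    # Take the first upstream dataset and flag rows whose (hypothetical) event_date
    # column falls on a US public holiday, using the library baked into the image.
    df = inputs[0]
    us_holidays = holidays.US()
    is_holiday = F.udf(lambda d: d is not None and d in us_holidays, "boolean")
    return df.withColumn("is_us_holiday", is_holiday(F.col("event_date")))
```

Without the custom image, the `import holidays` line would fail at runtime on the default cluster image.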
Running Components with Custom Images
PySpark Transforms and Read/Write Connectors run on the Ascend Cluster Pools configured for the corresponding Data Service. To change this setup, you first need to create a Cluster Pool that uses your custom image. For detailed instructions on creating or editing a Cluster Pool, see the guide here: https://developer.ascend.io/docs/spark-cluster-pools
Once you've set up a Cluster Pool with a custom image, use it as follows:
- Open the desired Data Service or Dataflow.
- Access the "Data Service Settings" by clicking on the small "Gear" icon located in the upper section of the left panel.
- Proceed to the "Data Plane Configuration" section.
- In the "Ascend Cluster Pools" area, select which Cluster Pool to use for each purpose. For details, see: https://developer.ascend.io/docs/data-plane-configuration
Custom images are cached on the underlying infrastructure as they are pulled and used. To avoid running components against a stale cached image, change the Label every time the image is updated; you can also verify which build a component actually ran on, as sketched below.
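One hedged way to confirm the new image was picked up (this is not an official Ascend feature, and the `transform` signature and package names below are illustrative) is to log version information from inside the transform and compare it against what the updated image should contain:

```python
import sys
from importlib import metadata
from typing import List

from pyspark.sql import DataFrame, SparkSession


def transform(spark_session: SparkSession, inputs: List[DataFrame]) -> DataFrame:
    # Log the interpreter and a few package versions so the component's logs show
    # which image build the driver is actually running on.
    print(f"Python: {sys.version.split()[0]}")
    for pkg in ("pyspark", "pandas"):  # packages you expect the custom image to pin
        try:
            print(f"{pkg}: {metadata.version(pkg)}")
        except metadata.PackageNotFoundError:
            print(f"{pkg}: not installed")
    return inputs[0]  # pass the first input through unchanged
```

If the logged versions still match the old image, the cluster is likely serving a cached image under the old Label.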