Pipeline to wait until spark cluster is available
Hello,
We have a cluster with a very small number of nodes, and because multiple Spark notebooks run at the same time, the notebooks or pipelines fail with a 6002 timeout error. Some notebooks do not start at all and fail. Is there a way we can queue the notebooks or pipelines until the Spark cluster or node is available?
Thanks,
Arun
Azure Synapse Analytics
2 answers
Dileep Raj Narayan Thumula 5 Reputation points Microsoft External Staff
2025-05-01T10:32:58.0733333+00:00 Glad to hear that you have resolved the issue. As you mentioned, you applied a workaround by using three different pools.
Just to share some info from experience: the "6002 - timed out while waiting for cluster" error in Spark typically occurs when a session or job cannot start within the expected time because the cluster resources are not yet available. This usually happens when:

- There are not enough nodes or executors available to serve new sessions.
- The cluster is scaling up but has not provisioned resources fast enough.
- Other workloads are currently occupying the available capacity, causing delays for new jobs.

If multiple Spark notebooks are triggered simultaneously on a small Spark pool, they may fail with a 6002 timeout error while waiting for the cluster because there are not enough available nodes to handle all sessions at once.
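Coming back to the original ask about queueing notebooks until the pool has capacity: sessions that cannot get capacity fail with this error rather than waiting, so one possible workaround is to call the child notebooks one at a time from a single driver notebook, so that only one session is ever active on the small pool. The sketch below assumes mssparkutils (available in Synapse notebooks); the notebook paths, timeout, and retry values are placeholders.

```python
# Driver notebook: run child notebooks sequentially inside this session,
# so the small Spark pool never has to start several sessions at once.
import time
from notebookutils import mssparkutils  # built into Synapse Spark notebooks

# Placeholder paths and tuning values - adjust for your workspace.
CHILD_NOTEBOOKS = ["/load_bronze", "/build_silver", "/publish_gold"]
TIMEOUT_SECONDS = 1800      # per-notebook timeout passed to notebook.run
MAX_RETRIES = 3             # retries for a run that fails (e.g. a 6002 timeout)
RETRY_WAIT_SECONDS = 300    # pause so the pool can free up capacity

for path in CHILD_NOTEBOOKS:
    for attempt in range(1, MAX_RETRIES + 1):
        try:
            # notebook.run executes the child notebook in the current session.
            result = mssparkutils.notebook.run(path, TIMEOUT_SECONDS)
            print(f"{path} finished with exit value: {result}")
            break
        except Exception as err:
            print(f"{path} failed on attempt {attempt}: {err}")
            if attempt == MAX_RETRIES:
                raise
            time.sleep(RETRY_WAIT_SECONDS)
```

In a pipeline, a driver notebook like this can replace several parallel Notebook activities, so the pool only ever has to serve one session; the trade-off is a longer end-to-end runtime.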
In such scenarios, review and scale your Spark pool. If feasible, consider increasing the size of your Spark pool or enabling dynamic allocation to automatically provision additional resources as demand grows.
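Pool-level autoscale is configured on the Spark pool itself (Azure portal, CLI, or ARM). At the session level, a Synapse notebook can also request dynamic allocation through the %%configure magic, assuming the pool allows dynamic executor allocation. A minimal sketch is below, with placeholder executor counts; it must run before the session starts (the -f flag forces a session restart if one is already running).

```
%%configure -f
{
    "conf": {
        "spark.dynamicAllocation.enabled": "true",
        "spark.dynamicAllocation.minExecutors": "1",
        "spark.dynamicAllocation.maxExecutors": "4"
    }
}
```

Keeping maxExecutors small for each notebook also leaves room on a small pool for other sessions to start, which reduces the chance of hitting the 6002 timeout.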
Kindly consider upvoting the comment if the information provided is helpful. This can assist other community members in resolving similar issues.