Target system temporarily unavailable, Please try again later. [Databricks][DatabricksJDBCDriver](500593) Communication link failure. Failed to connect to server. Reason: HTTP retry after response received with no Retry-After header, error: HTTP Response

MDB Admin 0 Reputation points
2025-04-25T00:13:07.9466667+00:00

Our SaaS connector, writing through a Serverless SQL Warehouse, is getting this error when trying to write to an Azure Databricks table:

Target system temporarily unavailable, Please try again later. [Databricks][DatabricksJDBCDriver] Communication link failure. Failed to connect to server. Reason: HTTP retry after response received with no Retry-After header, error: HTTP Response code: 503, Error message: TEMPORARILY_UNAVAILABLE: HTTP Response code: 500.

Azure Databricks

1 answer

  1. Smaran Thoomu 22,840 Reputation points Microsoft External Staff
    2025-04-25T10:34:55.6133333+00:00

    Hi @MDB Admin
    The error you're encountering:

    [Databricks][DatabricksJDBCDriver] Communication link failure. 
    Failed to connect to server. Reason: HTTP retry after response received with no Retry-After header. 
    Error: HTTP Response code: 503, Error message: TEMPORARILY_UNAVAILABLE
    
    ...typically indicates a transient availability issue with the Serverless SQL Warehouse in Azure Databricks.

    Here's what it means and what you can do:

    • The HTTP 503 error suggests that the target service (Databricks SQL Warehouse) was temporarily unavailable at the time of the connection attempt.
    • This may be due to high load, auto-scaling delays, or internal transient faults within the Databricks backend.
    • The absence of a Retry-After header means the client isn't given guidance on when to retry, which is why the connector fails without auto-recovery.
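A client can compensate for the missing Retry-After header by falling back to its own backoff schedule. Here is a minimal sketch in Python; the helper name and parameters are illustrative, not part of the Databricks JDBC driver:

```python
import random

def retry_delay(attempt, retry_after_header=None, base=1.0, cap=60.0):
    """Pick a wait time before the next retry attempt (illustrative helper).

    Honor the server's Retry-After header when present; otherwise fall
    back to capped exponential backoff with jitter.
    """
    if retry_after_header is not None:
        try:
            return float(retry_after_header)  # server-directed delay wins
        except ValueError:
            pass  # malformed header: fall through to backoff
    delay = min(cap, base * (2 ** attempt))   # 1s, 2s, 4s, ... up to cap
    return delay * (0.5 + random.random() / 2)  # jitter to avoid thundering herd
```

With a well-formed header, `retry_delay(0, retry_after_header="30")` returns 30.0; without one, the delay grows exponentially but never exceeds `cap`.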

    Recommended Steps:

    1. Retry Logic: Make sure your SaaS connector or JDBC client implements robust retry logic with exponential backoff for transient errors like this one.
    2. Check SQL Warehouse Status: Confirm that the target SQL Warehouse is running and healthy. If it's set to auto-stop, it may take some time to start up under load.
    3. Review Capacity Settings: If you're seeing this issue frequently:
      • Consider increasing the min/max cluster size for your SQL Warehouse.
      • Review your concurrency limits and query volume.
    4. Monitor via Databricks: Use the SQL Warehouse event logs to see recent startup attempts, scaling actions, or failed connection events for more insight.
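    The retry logic in step 1 can be sketched as a small wrapper that retries only on transient HTTP statuses and backs off exponentially between attempts. This is a generic Python sketch, assuming your connector surfaces the HTTP status of a failed write; the exception class and function names here are hypothetical:

    ```python
    import time

    TRANSIENT_CODES = {500, 503}  # statuses worth retrying

    class TransientHTTPError(Exception):
        """Illustrative stand-in for the driver's 'temporarily unavailable' error."""
        def __init__(self, status):
            super().__init__(f"HTTP {status}")
            self.status = status

    def with_retries(op, max_attempts=5, base=1.0, sleep=time.sleep):
        """Run op(); on a transient error, back off exponentially and retry."""
        for attempt in range(max_attempts):
            try:
                return op()
            except TransientHTTPError as e:
                # Re-raise non-transient errors and the final failed attempt.
                if e.status not in TRANSIENT_CODES or attempt == max_attempts - 1:
                    raise
                sleep(base * (2 ** attempt))  # 1s, 2s, 4s, ...
    ```

    You would wrap the table write in `op` (for example, `with_retries(lambda: execute_insert(conn))`, where `execute_insert` is your own write function). If the warehouse was merely auto-stopped or briefly overloaded, the later attempts succeed without surfacing the 503 to the caller.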

    If the issue persists, contact Databricks Support with request IDs, timestamps, and the SQL Warehouse name so they can investigate backend service availability.

    I hope this information helps. Please do let us know if you have any further queries.

    Kindly consider upvoting the comment if the information provided is helpful. This can assist other community members in resolving similar issues.

