Databricks cluster has no access to on-prem server, while serverless has access

Alex Young 0 Reputation points
2025-04-30T17:00:30.1266667+00:00

In Azure Databricks, serverless compute has connectivity to our on-prem server, but a classic cluster fails with TimeoutError: [Errno 110] Connection timed out.

Our Databricks contact suggested the note below. Can you help expand it with step-by-step details on the Azure-side setup needed, given that connectivity works for serverless but not for classic clusters?

the IG uses classic clusters, so the instances are created in your cloud account's VPC; the instances will use whatever NAT you have set up in that VPC.


1 answer

  1. Chandra Boorla 12,100 Reputation points Microsoft External Staff
    2025-04-30T17:32:16.21+00:00

    @Alex Young

    Thanks for reaching out. It sounds like you're running into a common connectivity issue where your Databricks classic cluster can’t reach your on-prem server, while serverless compute works fine. This usually happens due to differences in how network access is configured between serverless and classic clusters.

    Why Serverless Works:

    Serverless compute in Azure Databricks is hosted on Microsoft-managed infrastructure, which generally has broader, pre-configured network access. This setup likely already includes routes or firewall exceptions for connecting to your on-prem environment, whether through VPN, ExpressRoute, or public IP.

    Why Classic Clusters Timeout:

    In contrast, classic clusters are deployed in your Azure VNet. For outbound connectivity to your on-prem environment, such as the connection you're trying to establish, the VNet's network settings (e.g., routing and NAT configurations) must be explicitly set up. This means:

    • Site-to-Site VPN or ExpressRoute must be configured between your VNet and your on-prem network.
    • The subnet used by Databricks clusters must be allowed to route traffic through that connection.
    • Firewall, NSG (Network Security Group), or route table settings must allow traffic from your Databricks cluster to the required on-prem IPs and ports (a quick reachability check from a notebook is sketched below).
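
    As a quick first check from a notebook on the classic cluster, a plain TCP connect attempt can separate a routing/NAT/firewall drop (which surfaces as the same Errno 110 timeout) from an application-level problem. This is a minimal sketch; the host and port are placeholders for your on-prem server:

        import socket

        # Placeholders - replace with your on-prem server's address and port.
        HOST = "10.10.0.5"
        PORT = 1433

        try:
            # A short timeout keeps the test quick. Hitting it here points at
            # routing, NAT, or a firewall, not at your application code.
            with socket.create_connection((HOST, PORT), timeout=10):
                print(f"TCP connection to {HOST}:{PORT} succeeded")
        except socket.timeout:
            print("Timed out - traffic is likely dropped by routing, NAT, or a firewall")
        except ConnectionRefusedError:
            print("Refused - the host was reached, but nothing is listening on that port")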

    Here are some troubleshooting steps that might help you:

    Network Configuration Review - Check the subnet and VNet where your Databricks cluster is deployed. Verify whether your VNet is connected to your on-prem network through VPN or ExpressRoute.
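
    If you prefer to verify this programmatically rather than in the portal, a sketch along the following lines (assuming the azure-identity and azure-mgmt-network packages; the subscription and resource group values are placeholders) lists the gateways that would carry traffic from your VNet to on-prem:

        from azure.identity import DefaultAzureCredential
        from azure.mgmt.network import NetworkManagementClient

        # Placeholders - replace with your own values.
        SUBSCRIPTION_ID = "<subscription-id>"
        RESOURCE_GROUP = "<gateway-resource-group>"

        client = NetworkManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

        # A gateway of type Vpn or ExpressRoute is what connects the VNet to
        # on-prem; if none exists, classic clusters have no private path there.
        for gw in client.virtual_network_gateways.list(RESOURCE_GROUP):
            print(gw.name, gw.gateway_type, gw.vpn_type)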

    Firewall & NSG Rules - Ensure your on-prem firewall allows traffic from the Azure subnet hosting your Databricks cluster. Confirm that NSGs don’t block outbound access from the cluster to the required on-prem IP and port.

    NAT Gateway / Outbound Access - Confirm whether a NAT Gateway or outbound access rules are properly configured for connectivity to your on-prem server. If your on-prem firewall expects traffic from a known public IP, set up a static outbound IP via NAT and whitelist that IP.
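
    If the on-prem firewall filters by source IP and your traffic reaches it over the public internet (rather than over VPN/ExpressRoute), it helps to confirm which public IP the cluster actually egresses from. A simple hedged check, using the third-party echo service api.ipify.org purely for illustration:

        import requests

        # Reports the public IP that outbound traffic from this cluster NATs to.
        # Only relevant when traffic reaches on-prem over the public internet;
        # over VPN/ExpressRoute the firewall sees the cluster's private IP instead.
        egress_ip = requests.get("https://api.ipify.org", timeout=10).text
        print(f"Cluster egress public IP: {egress_ip}")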

    DNS Configuration - If your cluster connects to your on-prem server via hostname, ensure that your DNS resolution is correctly configured. Use Azure Private DNS or a custom DNS server that can resolve your on-prem server's hostname.
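
    A quick way to check name resolution from the cluster itself (the hostname is a placeholder):

        import socket

        HOSTNAME = "onprem-db.corp.example.com"  # placeholder for your on-prem hostname

        try:
            print(f"{HOSTNAME} resolves to {socket.gethostbyname(HOSTNAME)}")
        except socket.gaierror as exc:
            # Failure here points at DNS configuration, not routing or firewalls.
            print(f"DNS lookup failed: {exc}")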

    Private Link (if applicable) - If your on-prem services are exposed through Private Endpoints, consider using Azure Private Link for secure communication.

    Test the Connection - From a notebook in your Databricks cluster, run a shell cell. Note that ICMP (ping) is often blocked along the path even when TCP traffic is allowed, so a TCP-level check such as nc is usually more telling:

        %sh
        ping -c 3 <on-prem IP or hostname>
        nc -vz <on-prem IP or hostname> <port>
        curl -v <on-prem URL>


    Alternatively, test connectivity by connecting to your on-prem database using JDBC or ODBC.
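
    For instance, a minimal JDBC read against an on-prem SQL Server could look like the sketch below (the URL, table, and credentials are placeholders, and the appropriate JDBC driver must be available on the cluster; spark is predefined in Databricks notebooks):

        # Placeholder connection details - substitute your own.
        jdbc_url = "jdbc:sqlserver://onprem-db.corp.example.com:1433;databaseName=mydb"

        df = (spark.read.format("jdbc")
              .option("url", jdbc_url)
              .option("dbtable", "dbo.my_table")
              .option("user", "<username>")
              .option("password", "<password>")
              .load())

        df.show(5)  # a successful read confirms end-to-end connectivity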

    I hope this information helps. Please do let us know if you have any further queries.


    If this answers your query, please click Accept Answer and Yes for "Was this answer helpful". And if you have any further queries, do let us know.

    Thank you.

