I am building a simple pipeline to copy data from an on-prem-connected source to ADLS Gen2. I am using a Copy activity with Amazon Redshift as the source, inside a ForEach loop over a given table array.

Tushar Singh 0 Reputation points
2025-05-06T07:27:43.5466667+00:00

The error is the following -

Failure happened on 'Source' side. ErrorCode=UserErrorUnclassifiedError,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=Odbc Operation Failed.,Source=Microsoft.DataTransfer.ClientLibrary.Odbc.OdbcConnector,''Type=Microsoft.DataTransfer.ClientLibrary.Odbc.Interop.OdbcException,Message=ERROR [57014] [Microsoft][Amazon Redshift] (30) Error occurred while trying to execute a query: [SQLState 57014] ERROR: Query (361084429) cancelled on user's request

,Source=Microsoft.DataTransfer.ClientLibrary.Odbc.Wrapper,'

Clarifications - I run the ForEach loop sequentially because of the SHIR's limited memory, and every iteration completes successfully except the last one. Yet when I run a standalone Copy activity for that same table, it finishes without any error.

Azure Data Factory

2 answers

  1. phemanth 15,490 Reputation points Microsoft External Staff Moderator
    2025-05-06T08:00:24+00:00

    @Tushar Singh

    It looks like you're experiencing an issue with your Azure Data Factory (ADF) pipeline, specifically when copying data from Amazon Redshift to Azure Data Lake Storage (ADLS) Gen2. The error indicates that the query is being canceled, which can sometimes happen due to timeouts or execution limits.

    To troubleshoot this issue:

    Check Query Timeout Settings: SQLSTATE 57014 in Redshift generally means the query was cancelled, often by a statement timeout or a WLM query monitoring rule rather than by an actual user. Review the timeout settings on your Redshift cluster and consider increasing them if the extract query is long-running.
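    To check this from the Redshift side, something along these lines should work (a value of 0 means the timeout is disabled; `SET` only affects the current session, which is enough for testing):

    ```sql
    -- Show the current statement timeout for this session (milliseconds, 0 = disabled)
    SHOW statement_timeout;

    -- Raise or disable it for the session before re-testing the long-running extract
    SET statement_timeout TO 0;
    ```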

    Monitor Resource Utilization: Since you've mentioned that you're using a Self-hosted Integration Runtime (SHIR) with limited memory, ensure that there are sufficient resources available. Monitor the memory and CPU usage during the execution of your pipeline.

    Log Redshift Execution: Look up the cancelled query in Redshift's system tables (the ADF error message includes the query id) to see what actually cancelled it. If the query simply takes too long to execute, you might need to optimize it.
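    For example, using the query id 361084429 reported in the error above, queries like these against the standard system tables should show the query text and whether a WLM rule aborted it:

    ```sql
    -- Look up the cancelled query by the id reported in the ADF error
    SELECT query, starttime, endtime, aborted, TRIM(querytxt) AS sql_text
    FROM stl_query
    WHERE query = 361084429;

    -- Check whether a WLM query monitoring rule is what cancelled it
    SELECT query, rule, action, recordtime
    FROM stl_wlm_rule_action
    WHERE query = 361084429;
    ```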

    Sequential Processing: You mentioned you're running the loop sequentially due to memory constraints. Confirm that each iteration fully completes before the next one starts; that pacing helps avoid memory pressure building up on the SHIR.

    Debugging: If running individual copy activities works without a hitch, try isolating the last iteration to see if specific data in that table is causing issues. You could perform a simpler query or a sample operation just to confirm if it’s related to the data itself.

    If these steps don't resolve the issue, here are some follow-up questions that may help narrow it down:

    1. What is the size of the data you are attempting to copy in the last iteration?
    2. Are there any specific queries or parameters you are using in this particular copy activity that differ from the others?
    3. What timeout settings are currently configured in your Redshift instance?
    4. Have you looked at Redshift logs to check for any other errors or clues related to the canceled query?
    5. Are there any network latency issues that might affect the integration between SHIR and Redshift?

  2. Tushar Singh 0 Reputation points
    2025-05-07T09:22:17.28+00:00

    The following solution worked for us, given the constraints we had:

    1. Changed the compression codec from gzip to snappy. Snappy is a much faster compressor than gzip.
    2. Increased the integration units count to reduce the copy run time.
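    For reference, a minimal sketch of where those two settings live in the Copy activity and sink dataset JSON (the names and the values 16 and 2 are placeholders; note that data integration units apply to the Azure integration runtime, while on a SHIR the closest tuning knob is `parallelCopies`):

    ```json
    {
      "name": "CopyRedshiftToAdls",
      "type": "Copy",
      "typeProperties": {
        "source": { "type": "AmazonRedshiftSource" },
        "sink": { "type": "ParquetSink" },
        "dataIntegrationUnits": 16,
        "parallelCopies": 2
      }
    }
    ```

    and the snappy codec on the Parquet sink dataset:

    ```json
    {
      "name": "AdlsParquetDataset",
      "properties": {
        "type": "Parquet",
        "typeProperties": {
          "compressionCodec": "snappy"
        }
      }
    }
    ```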
