Add stages, dependencies, and conditions

Azure DevOps Services | Azure DevOps Server 2022 - Azure DevOps Server 2019

A stage is a logical boundary in an Azure DevOps pipeline. Stages group actions in your software development process, like building the app, running tests, and deploying to preproduction. Each stage contains one or more jobs.

When you define multiple stages in a pipeline, they run one after the other by default. Stages can also depend on each other; you can use the dependsOn keyword to define dependencies. You can also use conditions to run a stage based on the result of a previous stage.

To learn how stages work with parallel jobs and licensing, see Configure and pay for parallel jobs.

To find out how stages relate to other parts of a pipeline, such as jobs, see Key pipelines concepts.

You can also learn more about how stages relate to parts of a pipeline in the YAML schema stages article.

You can organize pipeline jobs into stages. Stages are the major divisions in a pipeline: build this app, run these tests, and deploy to preproduction are good examples of stages. They're logical boundaries in your pipeline where you can pause the pipeline and perform various checks.

Every pipeline has at least one stage, even if you don't explicitly define it. You can also arrange stages into a dependency graph so that one stage runs before another one. A stage can have up to 256 jobs.

Specify stages

In the simplest case, you don't need logical boundaries in your pipeline. For those scenarios, you can directly specify the jobs in your YAML file without the stages keyword. For example, if you have a simple pipeline that builds and tests a small application without requiring separate environments or deployment steps, you can define all the jobs directly without using stages.

pool:
  vmImage: 'ubuntu-latest'

jobs:
- job: BuildAndTest
  steps:
  - script: echo "Building the application"
  - script: echo "Running tests"

This pipeline has one implicit stage and one job. The stages keyword isn't used because there's only one stage.

The following pipeline also has one implicit stage, this time with two jobs:

jobs:
- job: Build
  steps:
  - bash: echo "Building"

- job: Test
  steps:
  - bash: echo "Testing"

To organize your pipeline into multiple stages, use the stages keyword. This YAML defines a pipeline with two stages where each stage contains multiple jobs, and each job has specific steps to execute.

stages:
- stage: A
  displayName: "Stage A - Build and Test"
  jobs:
  - job: A1
    displayName: "Job A1 - build"
    steps:
    - script: echo "Building the application in Job A1"
      displayName: "Build step"
  - job: A2
    displayName: "Job A2 - Test"
    steps:
    - script: echo "Running tests in Job A2"
      displayName: "Test step"

- stage: B
  displayName: "Stage B - Deploy"
  jobs:
  - job: B1
    displayName: "Job B1 - Deploy to Staging"
    steps:
    - script: echo "Deploying to staging in Job B1"
      displayName: "Staging deployment step"
  - job: B2
    displayName: "Job B2 - Deploy to Production"
    steps:
    - script: echo "Deploying to production in Job B2"
      displayName: "Production deployment step"

If you specify a pool at the stage level, all jobs in that stage use that pool unless a pool is specified at the job level.

stages:
- stage: A
  pool: StageAPool
  jobs:
  - job: A1 # will run on "StageAPool" pool based on the pool defined on the stage
  - job: A2 # will run on "JobPool" pool
    pool: JobPool

Specify dependencies

When you define multiple stages in a pipeline, they run sequentially by default, in the order you define them in the YAML file. The exception is when you add dependencies: with dependencies, stages run in the order of their dependsOn requirements.

Pipelines must contain at least one stage with no dependencies.

For more information on how to define stages, see stages in the YAML schema.

The following example stages run sequentially. If you don't use a dependsOn keyword, stages run in the order they're defined.

stages:
- stage: Build
  displayName: "Build Stage"
  jobs:
  - job: BuildJob
    steps:
    - script: echo "Building the application"
      displayName: "Build Step"

- stage: Test
  displayName: "Test Stage"
  jobs:
  - job: TestJob
    steps:
    - script: echo "Running tests"
      displayName: "Test Step"

Example stages that run in parallel:

stages:
- stage: FunctionalTest
  displayName: "Functional Test Stage"
  jobs:
  - job: FunctionalTestJob
    steps:
    - script: echo "Running functional tests"
      displayName: "Run Functional Tests"

- stage: AcceptanceTest
  displayName: "Acceptance Test Stage"
  dependsOn: [] # Runs in parallel with FunctionalTest
  jobs:
  - job: AcceptanceTestJob
    steps:
    - script: echo "Running acceptance tests"
      displayName: "Run Acceptance Tests"

Example of fan-out and fan-in behavior:

stages:
- stage: Test

- stage: DeployUS1
  dependsOn: Test    # stage runs after Test

- stage: DeployUS2
  dependsOn: Test    # stage runs in parallel with DeployUS1, after Test

- stage: DeployEurope
  dependsOn:         # stage runs after DeployUS1 and DeployUS2
  - DeployUS1
  - DeployUS2

Define conditions

You can specify the conditions under which each stage runs with expressions. By default, a stage runs if it doesn't depend on any other stage, or if all of the stages that it depends on have completed and succeeded. You can customize this behavior by forcing a stage to run even if a previous stage fails or by specifying a custom condition.

If you specify a custom condition for a stage, you remove the implicit conditions that the preceding stages completed and succeeded. So, when you use a custom condition, it's common to write and(succeeded(), custom_condition) to check that the preceding stage ran successfully. Otherwise, the stage runs regardless of the outcome of the preceding stage.
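
For example, to force a final stage to run even when an earlier stage fails, you can use the always() condition. The following is a minimal sketch; the Build and Cleanup stage names and their scripts are hypothetical:

stages:
- stage: Build
  jobs:
  - job: BuildJob
    steps:
    - script: echo "Building"

# Hypothetical cleanup stage: always() forces it to run
# regardless of whether the Build stage succeeds or fails.
- stage: Cleanup
  dependsOn: Build
  condition: always()
  jobs:
  - job: CleanupJob
    steps:
    - script: echo "Cleaning up"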

Note

Conditions that use failed('JOBNAME/STAGENAME') and succeeded('JOBNAME/STAGENAME'), as shown in the following example, work only for YAML pipelines.

Example of running a stage based on the status of a previous stage:

stages:
- stage: A

# stage B runs if A fails
- stage: B
  condition: failed()

# stage C runs if B succeeds
- stage: C
  dependsOn:
  - A
  - B
  condition: succeeded('B')

Example of using a custom condition:

stages:
- stage: A

- stage: B
  condition: and(succeeded(), eq(variables['build.sourceBranch'], 'refs/heads/main'))

Specify queuing policies

YAML pipelines don't support queuing policies. Each run of a pipeline is independent from and unaware of other runs. In other words, two successive commits might trigger two pipelines, and both of them execute the same sequence of stages without waiting for each other. While we work to bring queuing policies to YAML pipelines, we recommend that you use manual approvals to sequence and control the order of execution when the order matters.

Specify approvals

You can manually control when a stage should run using approval checks. This is commonly used to control deployments to production environments. Checks are a mechanism available to the resource owner to control if and when a stage in a pipeline can consume a resource. As an owner of a resource, such as an environment, you can define checks that must be satisfied before a stage consuming that resource can start.

Currently, manual approval checks are supported on environments. For more information, see Approvals.
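
For example, the following sketch shows a stage with a deployment job that targets an environment. Assuming an environment named production exists and has an approval check configured on it, the stage waits for that approval before its jobs start. The stage, job, and script names here are illustrative:

stages:
- stage: Production
  jobs:
  - deployment: DeployProduction
    # Any checks configured on the 'production' environment,
    # such as a manual approval, must pass before this job runs.
    environment: production
    strategy:
      runOnce:
        deploy:
          steps:
          - script: echo "Deploying to production"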

Add a manual trigger

Manually triggered YAML pipeline stages enable you to have a unified pipeline without always running it to completion.

For instance, your pipeline might include stages for building, testing, deploying to a staging environment, and deploying to production. You might want all stages to run automatically except for the production deployment, which you prefer to trigger manually when ready.

To use this feature, add the trigger: manual property to a stage.

In the following example, the development stage runs automatically, while the production stage requires manual triggering. Both stages run a hello world output script.

stages:
- stage: Development
  displayName: Deploy to development
  jobs:
  - job: DeployJob
    steps:
    - script: echo 'hello, world'
      displayName: 'Run script'
- stage: Production
  displayName: Deploy to production
  trigger: manual
  jobs:
  - job: DeployJob
    steps:
    - script: echo 'hello, world'
      displayName: 'Run script'

Mark a stage as unskippable

Set isSkippable: false on a stage to prevent pipeline users from skipping it. For example, you might have a YAML template that injects a stage that performs malware detection into all pipelines. If you set isSkippable: false for this stage, your pipeline can't skip malware detection.

In the following example, the Malware detection stage is marked as nonskippable, meaning it must be executed as part of the pipeline run.

- stage: malware_detection
  displayName: Malware detection
  isSkippable: false
  jobs:
  - job: check_job
    ...

When a stage is nonskippable, it shows with a disabled checkbox in the Stages to run configuration panel.

Screenshot of the Stages to run panel with a disabled stage.