Azure DevOps Pipelines Basics

1. Developing Azure Pipeline jobs

1.1. The original method for developing pipeline jobs was the Classic UI

1.1.1. This provides a GUI-based drag and drop approach

1.1.2. It's useful for learning YAML as you can convert jobs defined via the classic UI to YAML

1.2. YAML pipelines are a relatively recent addition to Azure DevOps (as of February 2021) but have very rapidly become the de facto standard for CI/CD pipeline jobs

1.2.1. YAML pipelines enable unified CI/CD pipelines

1.2.1.1. This means there is no longer a need for separate build and release pipelines (which the classic UI requires - the build pipelines produce artifacts that act as inputs to the release pipelines)
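
A minimal sketch of what a unified pipeline can look like, with a build stage and a deploy stage in a single YAML file (the stage, job and artifact names here are illustrative, not a prescribed convention):

trigger:
- main

stages:
- stage: Build
  jobs:
  - job: BuildJob
    pool:
      vmImage: 'ubuntu-latest'
    steps:
    - script: echo "build output" > output.txt
      displayName: Produce a build output
    - publish: output.txt     # shortcut step that publishes a pipeline artifact
      artifact: drop
- stage: Deploy
  dependsOn: Build
  jobs:
  - job: DeployJob
    pool:
      vmImage: 'ubuntu-latest'
    steps:
    - download: current       # fetch the artifact published by the Build stage
      artifact: drop
    - script: cat $(Pipeline.Workspace)/drop/output.txt
      displayName: Use the build output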

1.2.2. Use Visual Studio Code to develop your YAML pipelines and install the Azure Pipelines extension to get syntax highlighting and Intellisense

1.3. As of the time of writing (Feb 2021), Azure DevOps is in a transitional period of moving from the Classic UI to YAML Pipelines, so for some time we will need to understand both

1.3.1. Comparing the Classic UI with YAML Pipelines

1.3.1.1. YAML Pipelines don't yet support all the scenarios the Classic UI does, but already they support some scenarios that Classic UI does not

1.3.1.2. For mature DevOps platforms with heavy investment in pipeline jobs built using Classic UI, you will likely need to use this approach until such time as a decision is made to migrate to YAML Pipelines

1.3.1.3. The Classic UI is gradually being phased out and will eventually be replaced entirely by YAML Pipelines

2. Extending Azure DevOps functionality

2.1. In addition to the built-in tasks provided by Microsoft, you can extend functionality via DevOps extensions that are published via the Visual Studio Marketplace

2.2. Azure DevOps can also be part of an integrated CI/CD framework that includes other external solutions

2.2.1. This enables you to preserve existing investments in CI/CD solutions like Jenkins or TeamCity

3. Microsoft-hosted Agents

3.1. These are single-use virtual machines provided by Microsoft

3.2. They have a host OS of a specific version that is automatically patched and upgraded by Microsoft

3.3. Pre-defined software packages necessary to run DevOps pipeline job tasks are installed on each VM

3.4. You may need to wait a while for an agent to become available when you trigger a pipeline job, but normally the wait is no more than 1 or 2 minutes

3.5. All tasks run with elevated (administrative) permissions

3.5.1. This means that on Windows machines tasks run under local administrator credentials, and on Linux the agent account has passwordless sudo (root) access

3.5.2. There is no interactive permission elevation to factor into your tasks - e.g. sudo commands in Bash scripts running on Linux will not prompt for a password

3.5.3. All agents in the pool are wholly isolated from each other, so these high permissions do not constitute a security threat

3.6. Each agent is torn down after the single job allocated to it has completed - every job in a pipeline run gets a freshly provisioned VM

3.6.1. This means that any data cached on the agent during a pipeline run is lost and does not persist

3.6.1.1. For example, data fetched from a Git repo does not persist on any agent beyond the agent's lifecycle (i.e. the job run window)

3.7. Additional packages can be installed

3.7.1. This allows you to install later or earlier versions of a particular package, which might be required for your task(s)

3.7.2. There is no interactive access to the agents, so any installation technique must be implemented via code that does not require any manual intervention

3.7.3. It's generally a bad idea to have your pipelines spend a lot of time downloading and installing software packages

3.7.3.1. If you find yourself in this position, then you should consider using self-hosted agents
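
For example, a step like the following (a sketch - the package is purely illustrative) can install an extra tool on a hosted Ubuntu agent at the start of a job:

steps:
- bash: |
    # the hosted agent account can sudo without a password
    sudo apt-get update
    sudo apt-get install -y jq
  displayName: Install additional packages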

3.8. All the details about the latest images for DevOps Agents are available in GitHub

3.8.1. To see the documentation, navigate into the windows or linux folder and open the README.md file for the version of interest; it lists everything included in that image

4. Self-hosted Agents

4.1. Provide much greater control over application binaries

4.1.1. You can add custom applications not part of hosted agent images

4.1.2. You can add older versions of a built-in binary where these are needed for compatibility with a legacy application build/deployment

4.2. Data caches and config persist between runs unlike on hosted agents, which tear down after each job run

4.2.1. This can be an advantage in some scenarios (e.g. faster runs because sources and packages are already present) and a problem in others (e.g. stale state leaking between runs)

4.2.1.1. Often there is a need to explicitly add a step to clear out cached data and config after a job run is done
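
One way to do this in YAML (a sketch, assuming a clean-up at the start of each run is acceptable rather than one at the end) is the job-level workspace setting:

jobs:
- job: Build
  workspace:
    clean: all   # wipe the agent's workspace before this job runs
  steps:
  - script: echo "running against a clean workspace"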

4.3. Self-hosted agents can run on Windows, Linux, macOS and in Docker containers

4.3.1. Agents can run on non-server versions too, such as Windows 10 instead of Windows Server

4.4. Agents can run either interactively or as a service

4.4.1. Running interactively makes sense for an agent hosted on your own machine during development; outside of that, running the agent as a service is more typical

4.5. The user is responsible for all management and configuration, and for all major-version agent upgrades

4.6. Networking is an important consideration for self-hosted agents and their interaction with the Azure Pipelines cloud service

4.6.1. Microsoft-hosted agents automatically have access to the Azure Pipelines service without any effort needed from you; self-hosted agents need network/firewall configuration, and they also represent the only secure way to make deployments against on-prem targets

4.6.1.1. Self-hosted agents need outbound port 443 (for https protocol) to be allowed in order to communicate with the Azure Pipelines service that encapsulates the self-hosted agent pool definition

4.6.1.2. Microsoft hosted agents have no access by default to any on-prem deployment targets and best security practice dictates that self-hosted agents with "line of sight" access (i.e. on the same Vnet) are used to make those deployments

4.6.1.3. Self-hosted agents don't have to be on-prem, they could be VMs hosted in an Azure Vnet

4.7. Provisioning self-hosted agents

4.7.1. Before installing a self-hosted agent on a host, you should check the prerequisites required for running the agent software

4.7.1.1. Agents on Windows require minimum versions for OS, PowerShell and .NET Framework

4.7.1.2. Agents on Linux support specific flavours of Linux with minimum versions

4.7.2. Identify a user with permission to administer agent pools in your DevOps project

4.7.2.1. Under Project Settings | Pipelines | Agent pools, you can click the Security button and see the groups with permission to administer the agent pools

4.7.2.1.1. For example, the Build Administrators group has this permission

4.7.2.2. Under Project Settings | Permissions, you can select a group that has permission for the agent pool administration and if required, add a user

4.7.2.3. Create a Personal Access Token for the user, which will be used by the agent in order to register itself into the DevOps agent pool

4.7.3. You also need to consider networking and local/domain user requirements for the self-hosted agent

4.7.3.1. The agent needs to be able to make outbound connections via port 443 (HTTPS connections), and in some organisations this may require configuration using an Internet proxy server

4.7.3.2. If the agent will need to deploy to on-prem targets then it is going to need to run under user credentials that have sufficient permission to connect to those targets and perform the deployment tasks

4.7.3.3. If the agent will only need to deploy to Azure hosted targets then the local user permissions are not important and it's recommended to run the agent under a local service account

4.7.4. Proof of concept: provision self-hosted agent using Docker container

4.7.4.1. On Windows, we can use Docker for Windows

4.7.4.2. Step 1: Run a new container from a community base image

4.7.4.2.1. We will use PowerShell to run the Docker container

4.7.4.2.2. Ubuntu 20.04 is a suitable base image at the time of writing (Feb 2021), and available via Docker Hub

4.7.4.2.3. PowerShell command:
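
A likely form of the command (assuming the public ubuntu:20.04 image from Docker Hub and an interactive Bash session, so that tools can be installed inside the container in the next steps):

docker run -it ubuntu:20.04 bash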

4.7.4.3. Step 2: Create a personal access token in Azure DevOps for the agent to use for authentication

4.7.4.3.1. Go to User Settings in Azure DevOps and create a personal access token (PAT), and keep hold of the token value for use later

4.7.4.4. Step 3: Add some essentials for the container

4.7.4.4.1. mkdir /usr/ian/
cd /usr/ian/

4.7.4.4.2. apt-get update

4.7.4.4.3. apt-get install git

4.7.4.4.4. git config --global user.name "Ian Bradshaw"
git config --global user.email "[email protected]"

4.7.4.4.5. apt-get install python3.8

4.7.4.4.6. apt-get install wget

4.7.4.5. Step 4: Create agent in Azure DevOps agent pool

4.7.4.5.1. In Azure DevOps go to Organization Settings and under Agent pools, select the Default pool and click the New agent button

4.7.4.6. Step 5: Download the agent on the Ubuntu Docker container and install dependencies

4.7.4.6.1. wget https://vstsagentpackage.azureedge.net/agent/2.181.2/vsts-agent-linux-x64-2.181.2.tar.gz

4.7.4.6.2. mkdir agent
cd agent

4.7.4.6.3. tar xzf ../vsts-agent-linux-x64-2.181.2.tar.gz

4.7.4.6.4. ./bin/installdependencies.sh

4.7.4.7. Step 6: Save container as a new custom image

4.7.4.7.1. exit

4.7.4.7.2. docker ps --all

4.7.4.7.3. docker commit <container_name_or_id> <new_image_name>

4.7.4.7.4. docker image ls

4.7.4.8. Step 7: Configure agent and run interactively

4.7.4.8.1. docker run -it devop-ubuntu-agent-01 bash

4.7.4.8.2. cd /usr/ian/agent/

4.7.4.8.3. ./config.sh

4.7.4.8.4. ./run.sh
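
config.sh prompts interactively for the organisation URL, the PAT and the pool; it can also be run unattended, which is useful if you want to script the whole provisioning process (a sketch - substitute your own organisation URL, PAT, pool and agent name):

./config.sh --unattended \
  --url https://dev.azure.com/<your-organisation> \
  --auth pat --token <your-pat> \
  --pool Default \
  --agent docker-ubuntu-agent-01 \
  --acceptTeeEula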

4.7.4.9. Step 8: Verify that Azure DevOps agent pool recognises the self-hosted agent

4.7.4.9.1. Go to Azure DevOps Organization Settings and verify that the new self-hosted agent shows as online in the Default agent pool

4.7.4.9.2. If you look at the details of the agent, you should see that its system capabilities have been automatically reported to DevOps and are listed

4.7.4.10. Step 9: Verify that the self-hosted agent is able to successfully execute a simple pipeline

4.7.4.10.1. Create a very simple pipeline for test purposes, specifying that it should run on the Default agent pool and demand an agent running Linux

4.7.4.10.2. On the very first pipeline job submitted to a new self-hosted agent you may see the job remain queued; if so, click on it and you will be prompted to grant permission

4.7.4.10.3. Verify that the pipeline job completes successfully

5. Azure DevOps REST API

5.1. Microsoft provides a comprehensive REST API for the Azure DevOps service

5.2. Microsoft also provides a reference for properly forming URLs when working with the REST APIs
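
For example, listing the projects in an organisation with curl (a sketch - substitute your own organisation name and a PAT with sufficient scope; the empty username before the colon is deliberate, as the PAT is passed as the password):

curl -u :<your-pat> "https://dev.azure.com/<your-organisation>/_apis/projects?api-version=6.0"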

6. What are pipeline jobs?

6.1. Every pipeline consists of 1 or more jobs

6.2. Each job consists of 1 or more steps

6.3. Jobs can be grouped into stages

6.3.1. A single pipeline can have multiple stages

6.3.2. Each stage can run on different computing platforms

6.3.2.1. This can make sense when you have jobs for running on a Linux platform vs another job that needs a Windows platform
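
A sketch of a two-stage pipeline where each stage targets a different hosted platform (stage and job names are illustrative):

stages:
- stage: LinuxStage
  jobs:
  - job: LinuxJob
    pool:
      vmImage: 'ubuntu-latest'
    steps:
    - bash: uname -a
- stage: WindowsStage
  jobs:
  - job: WindowsJob
    pool:
      vmImage: 'windows-latest'
    steps:
    - powershell: Write-Host "running on Windows"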

6.4. The language for developing jobs is YAML

6.4.1. A "hello world" sample job for an Azure Pipeline

6.4.1.1. job key is only needed when you want to provide additional job-level properties like timeoutInMinutes

6.4.1.2. pool key with nested vmImage key is needed when you want to run the job against a hosted agent

6.4.1.3. steps key is always required

6.4.1.3.1. The "-" prefix is interpreted as indentation

6.4.1.3.2. In this very simple example, steps consists of a single step that runs an echo command in Bash
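
A minimal sketch of the sample job being described (the display name is illustrative):

pool:
  vmImage: 'ubuntu-latest'

steps:
- bash: echo "Hello world"
  displayName: Say hello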

7. Running pipeline jobs

7.1. Agent pool jobs

7.1.1. A pool of physical or virtual machines running a version of Windows or Linux, with each machine in the pool sharing a common hardware and software configuration

7.1.1.1. Each machine in the pool performs the function of an agent for receiving and running Azure Pipeline jobs

7.1.2. The characteristics of a job determine the required agent configuration

7.1.2.1. Jobs that require running a Windows executable will require a Windows agent

7.1.2.2. Jobs that require running a Debian based application will require a Linux agent

7.1.2.3. Jobs that require running Bash, PowerShell or Python scripts can run on either a Windows or Linux based agent

7.1.2.4. In all cases, you need to ensure that not only does the agent have the right host OS but also the necessary runtimes installed for the job

7.1.3. Jobs can only run if the pool has an agent available

7.2. Server jobs

7.2.1. These are also known as agentless jobs, and they run directly on the Azure DevOps (cloud) or TFS (on prem) server

7.2.2. The range of jobs that can run as server jobs is limited

7.2.3. In the YAML definition, specify a server job using one of the following:

pool: server

server: true
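
A sketch of an agentless job that simply pauses a pipeline, using the built-in Delay task (the job name and delay length are illustrative):

jobs:
- job: WaitBeforeNextStage
  pool: server
  steps:
  - task: Delay@1
    inputs:
      delayForMinutes: '2'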

7.3. Agent demands

7.3.1. Demands specify what capabilities the agent must have in order to satisfy the requirements of a job

7.3.2. Demands are linked to OS, applications and versions

7.3.3. Multiple demands can be specified for each job

7.3.4. Demands can be asserted manually or automatically

7.3.4.1. Manual demands are explicitly specified in the YAML for the job

7.3.4.2. Automatic demands occur for various built-in tasks, such as the Visual Studio build task, which automatically and implicitly asserts demands for msbuild and Visual Studio to be installed on a Windows based agent

7.3.5. Agents will only proceed with running jobs if all demand assertions are satisfied

7.3.5.1. If demands are not met, the pipeline job will fail and abort

7.3.5.2. The server will not attempt to remediate demand assertion failures

7.3.6. Example of a YAML pipeline job with demands (a sketch follows the notes below)

7.3.6.1. The demands list includes two:

7.3.6.1.1. Agent.OS -equals Linux

7.3.6.1.2. python3 -equals /usr/bin/python3
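
A sketch of the pool section of a YAML job carrying these demands (it assumes a self-hosted pool named Default, since demands apply to self-hosted agents):

pool:
  name: Default
  demands:
  - Agent.OS -equals Linux
  - python3 -equals /usr/bin/python3

steps:
- bash: python3 --version
  displayName: Confirm python3 is available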

7.4. Container jobs

7.4.1. Jobs can run inside a Docker container

7.4.1.1. Both Windows and Linux support Docker containers, so your agent can be either Windows or Linux for running jobs this way

7.4.2. Docker containers provide more control over the job execution environment

7.4.2.1. The agent requires less initial setup and is easier to maintain

7.4.3. When jobs are submitted to an agent as container jobs, the agent will first fetch and run the specified container before submitting each step of the job to run inside that container

7.4.3.1. Container images can be retrieved from Docker Hub or private registries
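
A sketch of a container job (the image tag is illustrative - any suitable Linux image from Docker Hub or a private registry could be used):

pool:
  vmImage: 'ubuntu-latest'

container: ubuntu:20.04

steps:
- bash: cat /etc/os-release
  displayName: Show the OS the step is running in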

7.4.4. Container jobs are generally considered for introduction in more mature DevOps environments

7.4.4.1. If just starting out, then stick to agent jobs for the time being

8. Agent pools

8.1. Instead of managing each agent individually, you organize agents into agent pools

8.2. In Azure Pipelines, pools are scoped to the entire organization, so you can share the agent machines across projects

8.3. When you configure an agent, it is registered with a single pool, and when you create a pipeline, you specify which pool the pipeline uses

8.4. When you run the pipeline, it runs on an agent from that pool that meets the demands of the pipeline

8.5. By default, each Azure DevOps organisation gets two agent pools

8.5.1. Azure Pipelines is a hosted pool with various Windows, Linux, and macOS images

8.5.1.1. For details of the images that are available within the hosted pool, see GitHub

8.5.1.2. The Azure Pipelines hosted pool replaces the previous hosted pools that had names that mapped to the corresponding images

8.5.1.2.1. Although the Azure DevOps UI makes it look like you can have multiple hosted agent pools, in effect there is only 1 hosted pool available, which is the managed service provided by Microsoft in the cloud

8.5.1.2.2. In some circumstances, you may still see the old pool names, but behind the scenes the hosted jobs are run using the Azure Pipelines pool

8.5.2. Default is used to register self-hosted agents that you've set up

8.6. In a YAML pipeline, you specify that you want to use the hosted Azure Pipelines pool by adding the pool tag with a valid vmImage property

8.6.1. pool:
  vmImage: 'windows-latest'

8.6.2. There are currently 10 different valid values for vmImage

8.6.2.1. The Azure Pipelines hosted agent pool includes many agents spanning all 10 of these VM images

8.6.3. To specify a self-hosted pool in a YAML pipeline, you name it in the pool tag

8.6.3.1. So, to specify the Default pool:

8.6.3.1.1. pool: 'Default'

8.6.3.2. Or another self-hosted pool you've provisioned named My Pool

8.6.3.2.1. pool: 'My Pool'

8.7. In a classic pipeline, you explicitly specify the hosted pool name and pick the agent specification (VM image) via drop down menus

9. Agent capabilities and demands

9.1. Every self-hosted agent has a set of capabilities that indicate what it can do

9.2. Capabilities are name-value pairs that are either automatically discovered by the agent software, in which case they are called system capabilities, or those that you define, in which case they are called user capabilities

9.3. The agent software automatically determines various system capabilities such as the name of the machine, type of operating system, and versions of certain software installed on the machine

9.3.1. Also, environment variables defined in the machine automatically appear in the list of system capabilities

9.4. When you author a pipeline you specify certain demands of the agent

9.4.1. The system sends the job only to agents that have capabilities matching the demands specified in the pipeline

9.4.1.1. If a demand is specified that cannot be met by any available agent in the agent pool, the job will fail immediately with a message to that effect

9.5. When using Microsoft-hosted agents, you select an image for the agent that matches the requirements of the job, so although it is possible to add capabilities to a Microsoft-hosted agent, you don't need to use capabilities with Microsoft-hosted agents