Azure SQL Database Basics


1. Feature Limitations vs On-Prem SQL Server

1.1. No Native SQL Backups (.bak)

1.1.1. But see sections on "Backup and Restore" and "Import and Export with BACPAC"

1.2. No Windows Authentication

1.2.1. May need to re-engineer to use Azure Active Directory (or SQL Authentication)

1.3. No Change data capture

1.4. No SQL Agent

1.4.1. But see "Partial functionality with workarounds" below

1.5. No Cross database queries

1.5.1. But see "Partial functionality with workarounds" below

1.6. No Database Mail

1.7. No Linked Servers

1.8. No PolyBase

1.9. No Service Broker

1.10. No MDS/DQS

1.11. No BI Services (SSIS, SSRS, SSAS)

1.12. Restricted VNet Integration

1.13. Restricted Replication

1.13.1. SQL Database can act only as a subscriber for transactional and snapshot replication

1.14. No Common Language Runtime (CLR)

1.15. Many limitations solved by Azure SQL Database Managed Instance

1.15.1. Removes limitations for all except Windows Authentication, PolyBase, MDS/DQS and BI Services (SSIS, SSRS, SSAS)

1.15.2. Full VNet Integration made possible because it does not have any public IP endpoints

1.15.3. Overall, Azure SQL Database Managed Instance greatly closes the feature gap with on-premises SQL Server, delivering roughly 95-98% of the features you expect from the on-premises version

1.15.4. Easy lift-and-shift from on-premises to cloud

1.16. Service Limitations

1.16.1. DTU resource limits

1.16.2. vCore resource limits

1.16.3. IO bound apps must scale with compute

1.16.3.1. If an application requires high IO throughput but only modest CPU and memory, you are forced to pay for more CPU and memory in order to scale up IO throughput, because compute resources bundle all three together in balanced configurations

1.16.4. Service cannot be stopped

1.16.4.1. Note: other services can be paused to stop paying for compute resources and only pay for storage

1.16.4.2. This might change in future

1.16.4.2.1. It might be possible to do via REST API

1.16.5. Public endpoints only

1.16.5.1. This means that all traffic to/from Azure SQL Database must flow across the public Internet

1.16.5.2. Private endpoints may become available in future

1.16.5.3. Private endpoints available in Azure SQL Database Managed Instance

1.17. Partial functionality with workarounds

1.17.1. Some T-SQL syntax (e.g. cross-database queries) will not be supported

1.17.1.1. Some re-engineering required

1.17.2. Certain DMVs and system stored procedures won't be available

1.17.2.1. Typically ones that operate at server level, rather than database level

1.17.3. Always On Availability Groups feature not directly available

1.17.3.1. But Geo-Replication is (and it uses Always On Availability Groups under the hood); we just have less control over configuration scenarios

1.17.4. SQL Agent is not available

1.17.4.1. But we can schedule actions using Azure Functions

1.17.5. Replication is only supported for Azure SQL Database as a subscriber, not a publisher

1.17.6. Very limited cross database query functionality provided via external tables

1.17.6.1. External tables provide read-only functionality

1.17.6.2. Cannot write to other databases, nor invoke stored procedures
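The external-table mechanism above is the "elastic query" feature. A minimal sketch of the setup is below; all object names, the server name and the credentials are hypothetical, and the exact tiers that support elastic query should be checked against current documentation.

```sql
-- Run in the database that will issue the cross-database (read-only) queries.
-- Skip CREATE MASTER KEY if the database already has one.
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong password>';

-- Credential for a SQL user that exists in the remote database
CREATE DATABASE SCOPED CREDENTIAL RemoteDbCred
WITH IDENTITY = 'remote_sql_user', SECRET = '<remote user password>';

-- Points at another Azure SQL database (can be on the same logical server)
CREATE EXTERNAL DATA SOURCE RemoteDb
WITH (
    TYPE = RDBMS,
    LOCATION = 'myserver.database.windows.net',
    DATABASE_NAME = 'OtherDb',
    CREDENTIAL = RemoteDbCred
);

-- Column definitions must match the remote table
CREATE EXTERNAL TABLE dbo.Customers (
    CustomerId INT NOT NULL,
    Name NVARCHAR(100)
)
WITH (DATA_SOURCE = RemoteDb);

-- The external table can now be queried/joined like a local table,
-- but it is read-only: no writes, no remote stored procedure calls
SELECT TOP (10) * FROM dbo.Customers;
```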

2. Documentation

2.1. Go to azure.com and click link for Documentation

2.1.1. Find Azure SQL Database

2.1.1.1. Click Download PDF option

3. Management Tools

3.1. SSMS

3.2. Visual Studio with SQL Server Data Tools option selected

3.2.1. From 2019 version, SSIS, SSRS and SSAS are installed as Visual Studio extensions (VSIX), available from Visual Studio Marketplace

3.3. Powershell

3.3.1. Install-Module -Name AzureRM

4. Connecting to Azure SQL Server

4.1. Need to configure firewall to whitelist our IP

4.1.1. choose a helpful naming convention for rules - e.g. IanHomeIP

4.2. By default, only resources allowed to connect to Azure SQL Server are other resources existing inside Azure

4.2.1. Access is controlled via firewall and virtual networks

4.3. Use SSMS to connect to server

4.3.1. <servername>.database.windows.net

4.4. Need to have outbound port 1433 open to make successful connection

5. Security

5.1. Role Based Access Control (RBAC)

5.1.1. Azure service level security

5.1.2. Comparable to Windows permissions

5.1.2.1. If you don't have the right Windows permissions granted, then you cannot install SQL Server

5.1.3. Manages access to Azure resources

5.1.4. Built on top of Azure Resource Manager

5.1.5. No RBAC needed to give user permission to access a database

5.1.6. Types of usage:

5.1.6.1. Allow one user to manage a virtual network and another to manage virtual machines

5.1.6.2. Allow the DBA group to manage the SQL Server and SQL Databases in a subscription

5.1.6.3. Allow a user to manage all the resources in a resource group

5.1.6.4. Allow an application to access the resources in a resource group

5.1.7. Users and groups given RBAC using Azure Active Directory

5.1.8. RBAC consists of Role + Scope

5.1.8.1. Roles

5.1.8.1.1. Reader

5.1.8.1.2. Resource-specific or custom role

5.1.8.1.3. Contributor

5.1.8.1.4. Owner

5.1.8.2. Scopes

5.1.8.2.1. Subscription

5.1.8.2.2. Resource group

5.1.8.2.3. Resource

5.1.8.3. In addition to the universal Reader, Contributor, Owner roles, there are specific roles available within context of resource

5.1.8.3.1. For Azure SQL Database, we have 3 specific roles available:

5.1.9. Setting via Azure Portal for SQL Database

5.1.9.1. Go to SQL servers resource and choose a server

5.1.9.1.1. Note: if you go to SQL database resource, you won't see an Access control option

5.1.9.2. Choose Access control (IAM)

5.1.9.2.1. Note: IAM = Identity and Access Management, which you can consider as a synonym for RBAC

5.2. Server and Database Firewall

5.2.1. First line of defence for your cloud hosted databases

5.2.2. Determines if source of connection is allowed or not

5.2.3. SQL Server and Databases resources are secure by default

5.2.3.1. No external access allowed from any public IP address until you allow it through new firewall rule

5.2.4. Allow Azure services and resources to access this server

5.2.4.1. When this is turned on, it means that any Azure service or resource in ANY subscription can access your SQL server and databases

5.2.4.1.1. Includes Azure services and resources provisioned outside of your organisation

5.2.4.2. You MUST enable this (at least temporarily) if you want to restore to Azure SQL Database from a backup

5.2.4.2.1. This uses a mechanism called BACPAC files, which must be stored in Azure Blob storage; the service that runs the import/export process for BACPAC files does so via a collection of virtual machines that use dynamic (constantly rotating) IP addresses

5.2.5. IP address rules

5.2.5.1. Use for enabling access from the public Internet

5.2.5.2. Only works for static IP addresses

5.2.5.2.1. If you or your company has dynamic IP addresses that change daily, you'll need to update your firewall rules daily

5.2.5.3. Some Azure services have dynamic IPs, including Power BI service and some Virtual Machines

5.2.5.4. Can be applied at server or database level

5.2.5.4.1. For replication scenarios, you'll need to manually copy server-level firewall rules between your primary and secondary server, whereas rules applied at database level automatically carry across in failover scenarios

5.2.6. Database level rules take precedence over server level rules

5.2.7. Setting Rules

5.2.7.1. Locate SQL Server in Azure Portal

5.2.7.1.1. Select Firewalls and virtual networks

5.2.7.2. Currently you can only add database firewall rules using T-SQL (not via Azure Portal)

5.2.7.2.1. you can also add server level rules via T-SQL

5.2.7.2.2. Database level rules

5.2.7.2.3. Requires CONTROL DATABASE permission as minimum in order to use these system views and stored procs
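A sketch of managing firewall rules in T-SQL; the rule name and IP addresses are placeholders.

```sql
-- Server-level rule: run in the master database
EXECUTE sp_set_firewall_rule
    @name = N'IanHomeIP',
    @start_ip_address = '203.0.113.5',
    @end_ip_address   = '203.0.113.5';

-- Database-level rule: run in the user database
-- (requires CONTROL DATABASE permission as a minimum)
EXECUTE sp_set_database_firewall_rule
    @name = N'IanHomeIP',
    @start_ip_address = '203.0.113.5',
    @end_ip_address   = '203.0.113.5';

-- Inspect existing rules
SELECT * FROM sys.firewall_rules;            -- server level (query from master)
SELECT * FROM sys.database_firewall_rules;   -- database level

-- Remove a database-level rule
EXECUTE sp_delete_database_firewall_rule @name = N'IanHomeIP';
```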

5.2.8. Virtual Networks

5.2.8.1. Another way to secure your SQL Servers and Databases

5.2.8.2. Only apply at server level

5.2.8.2.1. for replication scenarios, requires manually copying virtual network rules from primary to secondary server

5.2.8.3. Works only in single region

5.2.8.4. Not compatible with Web Apps on Vnet

5.2.8.4.1. If you have a Web App on a VNet and a SQL Database secured on a VNet, the traffic between the Web App and the SQL Database will route via the public Internet, so you will need an additional IP address rule to facilitate that

5.2.8.5. Not compatible with VPN or ExpressRoute

5.2.8.5.1. Same thing as with Web Apps on Vnet

5.2.8.6. Does not expose any public Internet access

5.2.8.7. Can be created in Azure Portal or PowerShell

5.2.8.8. Primary use case is for accessing Azure storage or virtual machines

5.3. Transparent Data Encryption (TDE)

5.3.1. Encrypts your database files and log files at rest, and can also encrypt your backup files

5.3.2. TDE does not automatically carry over in a replication failover scenario; you need to intervene manually to apply it to your secondary (once it has taken over primary duties)

5.3.3. Two methods for managing your TDE keys

5.3.3.1. Service Managed

5.3.3.1.1. Manages rotation of keys for you

5.3.3.1.2. Stores older keys for use if restoring old backups

5.3.3.2. Bring Your Own Keys (BYOK)

5.3.4. BACPAC exports are not encrypted

5.3.4.1. BACPAC export decrypts data extracted from disk and stores in a secondary file that is not encrypted

5.3.4.1.1. .bak and .bacpac files are very different. .bak files are copies of the database file pages. .bacpac files are created by scripting out the T-SQL statements to recreate the schema of a database and using bulk export to query and export the rows of all the tables.

5.3.4.1.2. TDE works by encrypting the database file pages but leaving the in-memory version in plaintext so it can be queried. Since .bak files are copies of the database file pages, they are encrypted. Since the data in .bacpac files is the output of queries, it is plaintext and unencrypted, just as bcp out files are from TDE databases.

5.3.5. Once TDE enabled, all data files, log files and tempdb files are encrypted and all backup (.bak) files are encrypted going forward

5.3.6. A single TDE key applies at server level and is used to encrypt all associated databases, for each of which you can optionally enable/disable TDE

5.3.7. T-SQL to query database encryption state (for active database + tempdb)

5.3.7.1. SELECT DB_NAME(DATABASE_ID), * FROM sys.dm_database_encryption_keys

5.3.7.1.1. encryption_state = 3 means encrypted

5.3.7.1.2. encryption_state = 1 means not encrypted

5.3.7.1.3. encryption_state = 2 means encryption in progress

5.4. Database Permissions

5.4.1. User credential authentication and authorization determine what a user can or cannot do within a database

5.4.2. Very similar to on-prem

5.4.3. No SA account

5.4.3.1. The server admin account you create when provisioning the Azure SQL Server is equivalent

5.4.4. Permissions can be set at server level or database level

5.4.5. Contained database users are recommended

5.4.5.1. This is due to making databases more portable and supporting replication failover scenarios

5.4.5.2. If you use server level permissions for users, you will need to manually copy those permissions to any secondary server in order for users to access the databases in event of a failover

5.4.5.3. Ultimately it is a trade-off: if the overhead of maintaining contained database users outweighs the overhead of recreating server-level permissions to support portable database scenarios, then you may choose server-level users with a manual plan for transferring server-level permissions

5.4.6. No Windows Authentication

5.4.6.1. No Active Directory

5.4.6.2. But Azure Active Directory IS supported

5.4.6.2.1. Azure Active Directory enables optional Multi Factor Authentication (MFA) for users

5.4.7. Admin accounts

5.4.7.1. Server admin account is created at moment that Azure SQL Server is provisioned

5.4.7.1.1. Equivalent of on-prem SA user

5.4.7.1.2. Cannot change server admin user name after SQL Server is provisioned

5.4.7.1.3. Can reset password via Azure Portal, subject to your Azure account having sufficient RBAC permissions

5.4.7.2. Azure Active Directory admin account

5.4.7.2.1. Optional if you use SQL Authentication, mandatory only for Azure AD authentication

5.4.7.2.2. Can be an individual or a group

5.4.7.2.3. Grants effectively same permissions to server and all associated databases that the server admin account has

5.4.7.2.4. Enable and set this in Azure Portal, via SQL Server | Active Directory admin | Set Admin

5.4.8. Non-admin users

5.4.8.1. Can be created at server level or database level

5.4.8.1.1. When created at database level, these will be contained database users

5.4.8.2. Can use SQL Authentication or Azure AD Authentication

5.4.8.3. Create using SSMS

5.4.8.3.1. Server level, connected to master database

5.4.8.3.2. Database level, connected to specific SQL database
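The two creation paths above can be sketched in T-SQL; all user names and passwords here are placeholders, and the AAD example assumes an Azure Active Directory admin has been configured on the server.

```sql
-- 1. Server-level login plus database user
-- Run in master:
CREATE LOGIN app_login WITH PASSWORD = '<strong password>';
-- Then run in the target database:
CREATE USER app_user FOR LOGIN app_login;
ALTER ROLE db_datareader ADD MEMBER app_user;

-- 2. Contained database user (recommended) - run in the target database only,
--    so the user travels with the database in failover scenarios
CREATE USER contained_user WITH PASSWORD = '<strong password>';
ALTER ROLE db_datawriter ADD MEMBER contained_user;

-- 3. Contained user mapped to an Azure Active Directory identity
CREATE USER [jane@contoso.com] FROM EXTERNAL PROVIDER;
```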

6. Backup and Restore

6.1. Built-in Backups

6.1.1. Automatic backups at no extra charge

6.1.2. Backups to read-only geo-redundant storage

6.1.2.1. Guarantees that if data centre goes down, a backup copy of the data will be held in another data centre

6.1.3. If TDE is enabled on database, backup will automatically be encrypted

6.1.4. Automatically performs appropriate Full, Differential and Transaction Log backups

6.1.4.1. Service decides when to take each type of backup

6.1.4.1.1. Full backup will happen within 30 mins of database creation

6.1.4.1.2. Transaction log backups typically every 5 to 10 mins

6.1.4.1.3. Differential every 12 hours approx

6.1.5. Retention of the built-in backups depends on pricing model and service tier

6.1.5.1. e.g. DTU Basic tier gives you 1 week retention, whilst Standard tier gives you 5 weeks

6.1.5.2. Deleted databases retain their backups

6.1.5.2.1. Retention period is tied to pricing model and service tier

6.1.5.3. Scaling down from one service tier to another means you automatically lose retained backups

6.1.5.3.1. e.g. scaling down from DTU Standard to Basic would lose 4 weeks of backups and leave you with only most recent week of retained backups

6.1.5.3.2. Long term retention backups are protected and immune from automated discarding

6.1.6. Backups can be restored to any location

6.1.6.1. e.g. a database originally created in East US region can be restored to West US region

6.1.7. Restore supports point in time - i.e. any time within your retention period

6.1.7.1. Go to SQL database, click Restore

6.1.7.1.1. 1. Select date and time

6.1.7.1.2. 2. Enter new database name

6.1.7.1.3. 3. Configure database

6.1.7.1.4. 4. Create

6.1.7.2. Service automatically decides which full, differential and transaction log backups to use for the restore

6.1.8. Restoring a database to its originating server requires you to create a new database with a new name

6.1.8.1. To restore the original database to a point in time, you would delete the database and then restore it

6.1.9. To speed up restore operation, you can choose a higher service tier and then scale it back down post restore

6.2. Long Term Retention (LTR)

6.2.1. Feature that attracts additional charges but enables you to retain backups for up to 10 years

6.2.1.1. May be required for regulatory requirements

6.2.1.2. Charge will be per GB of storage required over time

6.2.1.2.1. Prices for storage generally quoted per month (as per billing cycle) but apply per hour

6.2.2. Backups are retained on both database deletion and server deletion

6.2.2.1. If needing to restore database after server deletion, this can be done using PowerShell

6.2.3. Retention policies defined on server and can vary per database

6.2.3.1. e.g. 1 database can have backup retention of 6 months, another 2 years

6.2.3.2. Frequency of backups for long term retention can be specified as weekly, monthly or yearly

6.2.3.3. New policies only apply going forward

6.2.3.3.1. e.g. if a new monthly-backup policy with 6-month retention is created on 17 June, the first backup made under that policy will not happen until 1 July

6.2.3.4. You can choose any combination of weekly, monthly and yearly backups in a new LTR policy

6.2.4. Restore can be to any server in the same subscription

6.2.5. LTR does not automatically apply to secondary servers in geo-replication

6.2.5.1. You will need to manually configure separate LTR policies on the secondary servers

6.2.6. Restoring from LTR

6.2.6.1. SQL servers | Settings | Manage Backups | Available Long-Term Backups

6.3. Recovering Deleted Databases

6.3.1. Deleted databases are retained at server level

6.3.2. When you delete, a final backup is automatically created to enable restore to the point of deletion event

6.3.3. Azure Portal supports restoring a deleted database to the point of deletion event, PowerShell supports restoring to prior point in time within automatic retention window (7 to 35 days, depending on service tier)

6.3.4. Deleted database backups roll off according to automatic retention window

6.3.4.1. e.g. a deleted database in the DTU Basic tier is available for restore for up to 7 days after the deletion event

6.3.5. Restoring deleted database

6.3.5.1. SQL servers | Settings | Deleted databases

6.4. Import and Export with BACPAC

6.4.1. Native (.bak) on-prem backups cannot be restored into Azure SQL Database, nor can you produce a native backup file from Azure for restoring on-prem

6.4.2. A BACPAC is a zip file with a .bacpac extension

6.4.2.1. Contains metadata and user data

6.4.3. The progress of BACPAC Import/Export operations can be monitored at server level

6.4.4. Premium storage not supported

6.4.4.1. BACPAC files must be stored on Azure Blob storage for import/export operations

6.4.4.2. But import from local disk storage is available for on-prem BACPAC files

6.4.5. Consider scaling up temporarily to reduce import/export execution time

6.4.6. Import is initiated at server level

6.4.6.1. Creates new database

6.4.6.2. Compatibility inherited from BACPAC file

6.4.6.3. Import is from local disk or blob storage

6.4.6.3.1. The local disk import option can only be done client-side via SSMS, not through the Azure Portal

6.4.7. Export is initiated at database level

6.4.7.1. Max BACPAC size on blob storage is 200GB

6.4.7.1.1. Larger files allowed on local storage (i.e. on-prem)

6.4.7.2. Export process does not put database into read-only mode

6.4.7.2.1. Database remains active for transactions, so if table content changes during the export operation, you are not guaranteed to capture all of those changes in the final BACPAC file
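Because of this, a common approach is to export from a transactionally consistent copy rather than from the live database. A sketch, with hypothetical database names:

```sql
-- Run in the master database of the logical server.
-- CREATE DATABASE ... AS COPY OF produces a transactionally
-- consistent snapshot copy of the source database.
CREATE DATABASE MyDb_ExportCopy AS COPY OF MyDb;

-- Export MyDb_ExportCopy to BACPAC instead of MyDb, then drop the copy:
-- DROP DATABASE MyDb_ExportCopy;
```

Note the copy is itself a billable database while it exists, so drop it as soon as the export completes.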

6.4.7.3. Export time cannot exceed 20 hours

6.4.7.3.1. Process likely to be automatically killed if it goes beyond that

6.4.8. In order to perform a BACPAC import/export operation in Azure SQL Server/Database, you must provision a storage account attached to the same resource group as the SQL Server

6.4.8.1. Storage Accounts | Add

6.4.8.1.1. 1. Select Subscription

6.4.8.1.2. 2. Select Resource Group

6.4.8.1.3. 3. Enter Storage account name

6.4.8.1.4. 4. Select Location

6.4.8.1.5. 5. Set Performance to Standard

6.4.8.1.6. 6. Set Account kind

6.4.8.1.7. 7. Set Replication

6.4.8.1.8. 8. Set Access tier

6.4.8.1.9. 9. Click Review + create

6.4.9. Exporting from Azure SQL Database

6.4.9.1. Prerequisite

6.4.9.1.1. SQL Server | Security | Firewalls and virtual networks

6.4.9.2. SQL Database | Export

6.4.9.2.1. 1. Set File name

6.4.9.2.2. 2. Set Subscription

6.4.9.2.3. 3. Configure Storage

6.4.9.2.4. 4. Enter your SQL Server admin login and password

6.4.9.2.5. 5. Click OK

6.4.9.2.6. If you get authentication error, take note of message and look at SQL Server firewall rules

6.4.10. Exporting from SSMS

6.4.10.1. Object Explorer | Right-click database

6.4.10.1.1. Tasks | Export Data-tier Application

6.4.11. Import to Azure SQL Server

6.4.11.1. SQL Server | Import database

6.4.11.1.1. 1. Set Subscription

6.4.11.1.2. 2. Configure Storage

6.4.11.1.3. 3. Choose Pricing/Service Tier

6.4.11.1.4. 4. Set Database name

6.4.11.1.5. 5. Set Collation

6.4.11.1.6. 6. Enter SQL Server admin login and password

6.4.11.1.7. 7. Click OK

6.4.11.1.8. If import fails, follow link to Activity log to find more information about reason for failure

6.4.12. Importing from SSMS

6.4.12.1. Object Explorer | Right-click Databases folder

6.4.12.1.1. Tasks | Import Data-tier application

7. High Availability and Disaster Recovery

7.1. High Availability

7.1.1. Built in, no additional cost

7.1.2. 99.99% SLA on all service tiers

7.1.2.1. Equates to no more than roughly 4.5 minutes of downtime per month (0.01% of a month)

7.1.2.2. Automatically given credit if any SLA breaches occur

7.1.3. Always 3 copies of data made across different racks in same data centre

7.1.4. Redundant compute layer provided - works in different ways dependent on service tier

7.1.4.1. Standard/General Purpose tiers auto-migrate the compute to another VM and reattach the remote storage to it

7.1.4.1.1. The VM is part of Azure's stateless compute service, aka Azure Service Fabric

7.1.4.2. Premium/Business Critical tiers use Always On Availability Groups

7.1.4.2.1. Automatic failover

7.2. Disaster Recovery

7.2.1. All built on top of Geo-Replication

7.2.1.1. Protects against data centre failure

7.2.1.2. Configured on individual databases

7.2.1.2.1. Meaning we must choose what databases require the extra protection of geo-replication

7.2.1.3. Uses Always On Availability Groups under the hood

7.2.1.4. You can configure up to 4 secondaries

7.2.1.4.1. For multiple secondaries, you may consider adding one into same data centre as primary

7.2.1.5. Azure Portal supports the initiation of a manual failover

7.2.1.5.1. Helpful in order to test your applications in event of Azure SQL Database failover

7.2.1.6. Primary and secondary databases can be sized at different service tiers

7.2.1.6.1. Testing needed to ensure secondaries don't lag behind

7.2.1.6.2. Secondaries can be sized larger than primaries, which can make sense when using secondaries to serve read-only requests and you want the secondary to handle the majority of that traffic

7.2.1.7. Need to plan VNet/Firewall rules for failover

7.2.1.7.1. VNet rules apply only at server level, so you'll need to set these up on both your primary and secondary

7.2.1.7.2. Firewall rules can apply at both levels, but it saves maintenance effort to set them up at database level, so that they travel with the database during a failover

7.2.1.8. Failover Groups

7.2.1.8.1. Built on top of geo-replication

7.2.1.8.2. Tied to single secondary region

7.2.1.8.3. Groups multiple databases for simultaneous failover

7.2.1.8.4. Any single database failure in a group will cause the whole group to failover

7.2.1.8.5. Provides automatic failover functionality

7.2.2. Configuring Geo-Replication in Azure Portal

7.2.2.1. SQL Database | Settings | Geo-Replication

7.2.2.1.1. 1. Select Secondary region

7.2.2.1.2. 2. Set target server

7.2.2.1.3. 3. Set pricing tier

7.2.2.1.4. To manually failover to a provisioned secondary, select secondary, click ellipses and choose Forced Failover

7.2.3. Configuring Failover Group in Azure Portal

7.2.3.1. SQL Server | Settings | Failover groups

7.2.3.1.1. 1. Add group

7.2.3.1.2. 2. Set Failover group name

7.2.3.1.3. 3. Set Secondary server

7.2.3.1.4. 4. Set Read/Write failover policy

7.2.3.1.5. 5. Set Read/Write grace period (hours)

7.2.3.1.6. 6. Select databases to add

7.2.3.1.7. 7. Click Create

7.2.3.2. Once created, you get two listener endpoints

7.2.3.2.1. Read/write listener endpoint

7.2.3.2.2. Read-only listener endpoint

7.2.3.2.3. Your applications will connect to these listeners

8. Platform as a Service

8.1. You manage development, deployment and maintenance of ETL and BI applications and data.

8.2. Microsoft manages networking, storage, servers, virtualization, operating system and SQL Server itself

8.2.1. Includes management of database backups

8.3. Analogy for comparing on-prem vs cloud solutions

8.3.1. On Premises

8.3.1.1. You own a car and are responsible for everything to do with its ongoing maintenance

8.3.2. Infrastructure as a Service

8.3.2.1. You rent a car, with all the flexibility of owning your own car but reduced responsibility on maintenance

8.3.3. Platform as a Service

8.3.3.1. You get an Uber to take you from A-to-B, with zero concerns about maintenance but still a high degree of flexibility in the journeys you can make and when

8.3.4. Software as a Service

8.3.4.1. You take public transport, accepting restrictions on your options for A-to-B journeys, which are limited to set schedules and stations, but very easy to use and relatively economic

8.4. Going for a PaaS solution as a Data Developer means you don't have to worry about a whole host of issues; these are either taken care of invisibly under the surface or can be set via very simple button clicks

8.4.1. Scaling

8.4.2. High Availability

8.4.2.1. Always On Availability

8.4.3. Disaster Recovery

8.4.4. Monitoring

8.4.5. Backups

8.4.6. Database Patches

8.4.7. Operating System Patches

9. Built-in Features

9.1. 3 built-in replicas for high availability

9.1.1. database files always available on 3 separate disks within data centre

9.1.1.1. automatic failover

9.1.1.1.1. automatically provisions database on new 3rd disk if one disk goes down

9.2. Automatic backups with point-in-time restore

9.3. Geo-replication

9.3.1. for increasing high availability for mission critical applications that depend on SQL Database

9.3.2. protects against data centre going down

9.4. Restore deleted databases

9.4.1. a common issue: someone goes in to delete a Dev/Test database as the first step in restoring a copy of the Production database, but accidentally deletes the Production database instead

9.5. TDE, RLS and Always-Encrypted

9.5.1. TDE = Transparent Data Encryption

9.5.1.1. applies to whole database

9.5.1.2. protects data at rest

9.5.1.3. helps with regulatory compliance

9.5.2. RLS = Row Level Security

9.5.3. Always-Encrypted

9.5.3.1. applies to specific columns

9.5.3.2. protects sensitive data

9.6. Auditing and Advanced Threat Protection

9.6.1. Driven by machine learning resources available in Azure

9.7. Automatic database tuning

9.7.1. Optional, also driven by Azure machine learning

9.8. Azure Active Directory Integration

10. Provisioning Azure SQL Servers

10.1. Tied to Azure region

10.2. Global settings

10.2.1. Firewall rules

10.2.1.1. Can be set at server level or individual database level

10.2.2. Logins

10.2.2.1. Can be set at server level or individual database level (contained users)

10.3. Transparent Data Encryption (TDE)

10.3.1. TDE key generated at server level and TDE enabled (using key) at database level

10.4. Is parent resource for SQL Databases, elastic pools and data warehouses

10.4.1. When moving or deleting SQL Server resource, this affects all child resources too

10.5. Can be moved from one resource group to another (e.g. from Production resource group to Development resource group)

10.6. Role Based Access Control (RBAC)

10.6.1. Allows us to grant access to functionality related to SQL Server without giving access to data in the databases

10.7. Provides connection endpoint

10.7.1. servername property in connection string

10.8. DMVs provided

10.8.1. Excluding DMVs that relate to features only found in on-prem versions

10.9. Limit of 6 servers per subscription

10.9.1. this is a soft default limit, as you can ask Azure Support to increase this up to 200

10.9.1.1. 200 is also a soft limit, as Azure Support will grant justified requests for more than 200

10.10. Access to master database only

10.10.1. no access to tempdb or msdb

10.11. Server name must be unique across Azure universe

10.11.1. when you see .database.windows.net next to the server name property, this tells you you're creating a public endpoint

10.12. Server admin login forbids certain keywords including sa and admin

10.13. Provisioning a new SQL Server resource is free!

10.13.1. Can't do much with it until you add child resources (i.e. Database, Elastic Pool or Synapse SQL Pool)

10.13.1.1. Note: Synapse SQL Pool is for data warehouses

10.14. Azure SQL Server is just a logical server consisting only of meta-data

10.14.1. When you add databases to Azure SQL Server, these databases may be running on different SQL Server instances under the hood, even though they are all associated with your single Azure SQL Server resource

11. Provisioning Azure SQL Database

11.1. Every database provisioned is a chargeable resource that will appear on your Azure bill as a separate line item

11.2. Must be provisioned in same subscription and region as your Azure SQL Server resource

11.3. Very limited cross database join functionality due to fact that databases will typically not be running on same host under the hood, even though they all appear on same Azure SQL Server when browsing in SSMS

11.4. Contained users are preferable to server level users

11.5. Each database has 99.99% SLA

11.5.1. Equates to no more than roughly 4.5 minutes of downtime per month

11.5.1.1. Breach triggers compensation in form of refund credits

11.6. 3 methods for provisioning new database

11.6.1. Azure Portal

11.6.1.1. you can do this from the SQL Server resource or the SQL Database resource

11.6.1.2. there is also an option to provision a new server as part of the provisioning dialog under SQL Database

11.6.2. SSMS

11.6.2.1. Standard right-click New Database (like on-prem), with the only difference being an Azure section in the Options

11.6.2.1.1. Very important to change the options, because the default is to provision the Standard S2 tier in the DTU pricing model, and you may want to drop down to the Basic tier to save money until you are sure you need to scale up

11.6.3. PowerShell

11.7. As each database is tied to one of your Azure SQL Server resources, it must be given a name that is unique within the context of that SQL Server parent resource

11.8. At point of creation you can choose from 3 options:

11.8.1. Blank database

11.8.2. Adventure Works sample

11.8.3. Backup

11.8.3.1. Must exist in Azure cloud environment

11.9. Azure SQL Pricing Models

11.9.1. DTU vs vCore

11.9.1.1. DTU = Database Transaction Unit

11.9.1.1.1. All-in-one performance metric

11.9.1.1.2. Simple pricing but hard to judge what DTU level is appropriate in advance

11.9.1.1.3. Each DTU level gives certain balanced amount of Compute + Storage + IO

11.9.1.1.4. Backups included in price for DTU model

11.9.1.1.5. Consistent pricing

11.9.1.1.6. 3 service tiers to choose from when opting for the DTU pricing model

11.9.1.1.7. To help estimate the DTUs you will need to migrate an on-prem database to Azure SQL Database, go to this website:

11.9.1.2. vCore is the newer pricing model, introduced in 2018

11.9.1.2.1. Enables more fine grained control

11.9.1.2.2. Storage and compute scale independently

11.9.1.2.3. Metrics more familiar than with DTU model

11.9.1.2.4. May become the dominant pricing model

11.9.1.2.5. Minimum is 1 virtual core

11.9.1.2.6. Azure Hybrid Benefit allows you to apply on-prem licenses to vCore pricing

11.9.1.2.7. 2 service tiers: General Purpose vs Business Critical

11.9.1.2.8. 1 vCore can refer to 1 physical CPU core or 1 hyper-thread, depending on hardware generation chosen

11.9.2. Comparing DTU to vCore

11.9.2.1. 100 Standard DTU = 1 General Purpose vCore (approx)

11.9.2.2. 125 Premium DTU = 1 Business Critical vCore (approx)

11.9.2.3. You can scale up and down service tiers within pricing models

11.9.2.3.1. e.g. Basic-to-Standard-to-Premium and back down for DTU model

11.9.2.3.2. e.g. General Purpose-to-Business Critical for vCore model

11.9.2.4. You can also scale from one pricing model to the other and back

11.9.2.4.1. e.g. DTU model to vCore model and then back down to a cheaper DTU model

12. Scaling Databases and Elastic Pools

12.1. Scaling Database

12.1.1. Can be done any time, on demand, scaling up or down

12.1.1.1. Can be done via Azure Portal or via code

12.1.1.2. It's an online operation, which means you can keep using your database whilst the scaling operation executes in the background

12.1.1.3. There will be a short disconnection at end of scaling operation

12.1.1.3.1. Recommended to build "retry logic" into your applications that use Azure SQL Database so that they gracefully handle small disconnection events that occur due to scaling
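The recommended retry logic might look like the sketch below. This is a minimal, generic pattern, not Azure-specific code: `TransientDbError` stands in for whatever transient connection error your database driver raises (e.g. a pyodbc operational error), and the attempt counts and delays are illustrative assumptions:

```python
# Minimal retry-with-exponential-backoff sketch for handling the brief
# disconnection at the end of a scaling operation. TransientDbError is a
# placeholder for your driver's transient connection error type.
import time

class TransientDbError(Exception):
    """Stand-in for a driver's transient connection error."""

def with_retries(operation, max_attempts=5, base_delay=1.0):
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except TransientDbError:
            if attempt == max_attempts:
                raise  # give up after the final attempt
            # Exponential backoff: base_delay * 1, 2, 4, 8, ...
            time.sleep(base_delay * 2 ** (attempt - 1))

# Example: a query that fails twice (simulating the disconnection), then succeeds.
attempts = {"n": 0}
def flaky_query():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise TransientDbError("connection dropped during scaling")
    return "rows"

print(with_retries(flaky_query, base_delay=0.01))  # prints: rows
```

In production you would typically catch only the specific transient error codes your driver documents, rather than a broad exception class.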

12.1.2. Use Cases:

12.1.2.1. Usage patterns change

12.1.2.1.1. Typically for predictable changes, such as a well known peak day or other period for customer traffic

12.1.2.2. Database outgrows storage

12.1.2.2.1. vCore model provides more flexibility for this than DTU model

12.1.2.3. Overloading compute resources

12.1.2.3.1. e.g. compute resource utilization running at 80-90% or higher

12.1.2.4. Change tiers, change pricing models

12.1.3. Scaling can take minutes to hours depending on the nature of the change requested

12.1.3.1. For example, a scaling option that moves from remote storage to local SSD storage is likely to take longer, because Azure must not only provision new resources but also copy data onto the new storage medium

12.1.4. Scaling can be done via Azure Portal or PowerShell

12.1.4.1. Benefit of PowerShell is that it enables automation for more dynamic scenarios

12.1.4.2. Scaling in Azure Portal:

12.1.4.2.1. Navigate to SQL Database

12.1.4.2.2. Select Settings | Configure

12.1.4.2.3. Choose Service Level

12.1.4.2.4. Click Apply

12.1.5. Handy T-SQL

12.1.5.1. Check current database service level

12.1.5.1.1. SELECT d.name, dso.* FROM sys.databases d INNER JOIN sys.database_service_objectives dso ON dso.database_id = d.database_id;

12.1.5.2. Check resource usage now (stats every 15 seconds retained for last hour)

12.1.5.2.1. SELECT * FROM sys.dm_db_resource_stats;

12.1.5.3. Check resource usage stats over time (stats every 5 mins retained for 14 days)

12.1.5.3.1. SELECT * FROM sys.resource_stats;

12.2. Elastic Pools

12.2.1. Way to share resources across multiple databases for a more cost effective solution

12.2.2. Primarily suited to scenarios where you have multiple databases that have non correlated usage patterns

12.2.2.1. i.e. one database spikes in usage, whilst the other is low, and then vice versa at another time

12.2.3. Usage scenario:

12.2.3.1. Source application database + data warehouse staging DB + data warehouse DB

12.2.3.1.1. During normal business hours, source application database is busy with many user transactions

12.2.3.1.2. Data warehouse is busy during second phase of overnight ETL window, and then relatively low usage during day as it delivers data for reports and analytics

12.2.3.1.3. Data warehouse staging database is busy during first phase of overnight ETL window and then idle for the rest of the time

12.2.3.2. Multi-tenant scenarios can also benefit from elastic pools, as each tenant is typically a different organisation and therefore it is likely that resource usage across databases will be non-correlated

12.2.3.3. Generally good for pooling a relatively large number of databases, each of which has a low daily average utilization

12.2.3.3.1. Short, infrequent, unpredictable spikes are ok, but monitor to ensure no more than 2 or 3 databases hit peak resource utilization at same time

12.2.3.3.2. Rule of thumb is that individual databases should have peaks that are <= 1.5x average utilization
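The two rules of thumb above can be expressed as simple checks. This is an illustrative sketch only; the function names and the default limit of 3 concurrently peaking databases come from the guidance above, not from any Azure API:

```python
# Elastic pool candidacy checks based on the rules of thumb above:
# a database fits well if peak utilization <= 1.5x its average, and the
# pool is healthy if no more than 2-3 databases peak at the same time.

def good_pool_candidate(avg_util: float, peak_util: float) -> bool:
    """Is an individual database's usage flat enough for pooling?"""
    return peak_util <= 1.5 * avg_util

def pool_peaks_ok(concurrent_peaking_dbs: int, limit: int = 3) -> bool:
    """Are few enough databases hitting peak utilization simultaneously?"""
    return concurrent_peaking_dbs <= limit

print(good_pool_candidate(avg_util=20, peak_util=28))  # True: 28 <= 1.5 * 20
print(good_pool_candidate(avg_util=10, peak_util=40))  # False: too spiky to pool
print(pool_peaks_ok(2))                                # True
```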

12.2.4. Supports both DTU and vCore pricing models

12.2.4.1. Every database in an elastic pool must use the same pricing model (i.e. you cannot mix DTU and vCore databases in one pool)

12.2.5. Databases can be added to and removed from elastic pools at any time

12.2.6. Databases within elastic pools have parameters that can be set to control scaling behaviour within the pool

12.2.7. Elastic pools are resources in their own right that are associated with a SQL Server and provisioned in similar way to a new SQL database

12.2.7.1. You choose pricing model (DTU or vCore) and service level

12.2.7.2. Like databases, elastic pools can be scaled up and down on demand

12.2.8. Sizing elastic pools is done based on picking from the larger of two metrics

12.2.8.1. eDTUs

12.2.8.1.1. Number of Databases x Average DTU Utilization

12.2.8.1.2. Number of Concurrent Peaking Databases x Peak DTU Utilization

12.2.8.2. vCores

12.2.8.2.1. Number of Databases x Average vCore Utilization

12.2.8.2.2. Number of Concurrent Peaking Databases x Peak vCore Utilization
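The sizing approach above, picking the larger of the two estimates, can be sketched as follows. The formula structure is taken directly from the two metrics listed above; the example numbers are illustrative, and the same calculation applies whether the units are eDTUs or vCores:

```python
# Pool sizing per the guidance above: compute both estimates and size the
# pool to whichever is larger. Units can be eDTUs or vCores.

def pool_size(num_dbs: int, avg_util: float,
              concurrent_peaks: int, peak_util: float) -> float:
    by_average = num_dbs * avg_util            # Number of DBs x Average Utilization
    by_peaks = concurrent_peaks * peak_util    # Concurrent Peaking DBs x Peak Utilization
    return max(by_average, by_peaks)

# 20 databases averaging 10 eDTUs each, with at most 3 peaking at 100 eDTUs:
print(pool_size(20, 10, 3, 100))  # 300.0 via peaks (vs 200 via averages)
```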

12.2.9. Example Elastic Pool Sizing

12.2.9.1. Example Cost Comparison for Elastic Pool vs Individual Databases

12.2.10. Elastic Pools can be provisioned via Azure Portal in one of two ways: via SQL elastic pools or via SQL servers

12.2.10.1. In both cases you have to name the pool, size it and add databases

12.2.10.2. If provisioning via SQL elastic pools, you will also be prompted to choose Subscription and Resource Group

12.2.10.2.1. Not necessary if provisioning via SQL servers, as your SQL server resources are already tied to a Subscription and Resource Group

12.2.10.3. SQL servers | New elastic pool

12.2.10.3.1. Name pool

12.2.10.4. SQL elastic pools | Add

12.2.10.4.1. Set Subscription + Resource Group, Name pool, Create or Select Server

12.2.11. Note: Elastic pools for DTU model and Basic service tier start at 50 DTUs and 4.88GB storage, with an approximate cost of £70 per month

12.2.12. Once databases are added to an elastic pool, you get per-database statistics that help you monitor whether the elastic pool is scaled optimally

12.2.12.1. Average eDTU/vCore

12.2.12.2. Peak eDTU/vCore

12.2.12.3. Size (i.e. storage consumption of database)

12.2.13. Removing databases from an elastic pool is also done via the pool's Configure | Databases

12.2.13.1. Beware the pricing model and service tier that the removed database will be restored to, because it may not match the settings the database had before it joined the pool

12.2.14. Once all databases are removed from elastic pool, you can delete it via the pool Overview screen

12.2.14.1. Delete

12.2.14.1.1. You'll be warned that deletion is irreversible and will need to type the name of the pool to confirm, then click a second Delete button to confirm the action

13. Database Migrations

13.1. Migration tools

13.1.1. Export to BACPAC

13.1.1.1. Can run into issues with compatibility

13.1.2. Azure SQL Database Migration Service

13.1.2.1. Paid service in general

13.1.3. Data Migration Assistant

13.1.3.1. Most popular tool for migrating on-prem databases to Azure - free to use

13.1.4. Manual schema and data copy

13.1.4.1. Might be the most practical method if your on-prem database has a lot of features in use that are not supported by Azure SQL Database

13.1.5. Third party tools

13.1.5.1. Will be developed in collaboration with Azure SQL Database team, and will generally look and feel similar to Database Migration Service and Data Migration Assistant

13.1.5.2. May be good option for migrating non SQL Server databases into Azure SQL Database

13.2. Data Migration Assistant (DMA)

13.2.1. Use for modernizing on-prem SQL database to PAAS Azure SQL Database

13.2.2. Identifies migration issues and recommends fixes

13.2.2.1. Blocking issues

13.2.2.1.1. Cannot easily be moved from on-prem to Azure SQL Database

13.2.2.2. Partially supported features

13.2.2.2.1. DMVs, extended events, T-SQL

13.2.2.2.2. Feature may also be supported only in certain Azure service tiers

13.2.2.3. Unsupported features

13.2.2.3.1. Likely to require more substantial workarounds

13.2.2.3.2. e.g. SSIS not supported

13.2.2.4. Breaking changes, behaviour changes & deprecated features

13.2.3. Recommends performance improvements

13.2.3.1. New performance, security and storage features available after upgrade will be highlighted by DMA report

13.2.4. Scripts out schema, data and objects, and moves it all into Azure SQL Database for you

13.2.4.1. Option to move schema and objects only, and move data later - perhaps via different method

13.2.5. Moving data works best with smaller databases

13.2.5.1. For larger data movement scenarios, consider using the paid Azure SQL Database Migration Service

13.2.6. Download latest from Microsoft Download Center

13.2.6.1. Search "Data Migration Assistant"

13.2.6.1.1. Follow Download link

13.2.7. Using DMA

13.2.7.1. Create New Project

13.2.7.1.1. Assessment

13.2.7.1.2. Migration

14. Database Monitoring

14.1. Database Alerts

14.1.1. Provides notifications for specific events

14.1.2. Metric value threshold triggers

14.1.2.1. Event is specific metric crossing a configured threshold

14.1.2.2. Examples

14.1.2.2.1. DTU utilization percentage above 80%

14.1.2.2.2. Database size falls below 5GB

14.1.3. Activity log events

14.1.3.1. Examples

14.1.3.1.1. Database exported to BACPAC

14.1.3.1.2. Name of database is changed

14.1.3.2. Not limited to user-triggered events; can notify on system-driven events too

14.1.4. General approach

14.1.4.1. Add Rule

14.1.4.1.1. Select database

14.1.4.1.2. Provide rule name

14.1.4.1.3. Choose metric

14.1.4.1.4. Build condition

14.1.4.1.5. Configure who gets notified and how

14.1.5. You get 2 alert notifications per rule for metric driven alerts

14.1.5.1. One alert when threshold is breached and a second alert when metric falls back below threshold
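This breach-then-recovery behaviour can be modelled as a simple state machine, as sketched below. This is an illustration of the notification pattern described above, not how Azure Monitor is implemented; the sample values and threshold are made up:

```python
# Illustrative model of the two-notification behaviour: one alert when the
# metric crosses the threshold, and a second when it falls back below.
# Tracking the breached state prevents duplicate alerts while over threshold.

def evaluate(samples, threshold):
    breached = False
    notifications = []
    for value in samples:
        if not breached and value > threshold:
            breached = True
            notifications.append(("breached", value))
        elif breached and value <= threshold:
            breached = False
            notifications.append(("resolved", value))
    return notifications

# DTU% samples against an 80% threshold: exactly two notifications fire,
# even though utilization stays above threshold for three samples.
print(evaluate([60, 85, 92, 88, 70], threshold=80))
# -> [('breached', 85), ('resolved', 70)]
```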

14.1.6. Setting up alerts via Azure Portal

14.1.6.1. SQL database | Monitoring | Alerts

14.1.6.1.1. New Alert Rule

14.2. Database Auditing

14.2.1. Reasons for auditing

14.2.1.1. Regulatory compliance

14.2.1.2. Gathering diagnostic information

14.2.1.2.1. Often used to investigate root cause of a database alert

14.2.2. Auditing policies set at two levels

14.2.2.1. Server level policy

14.2.2.1.1. Applies to all databases

14.2.2.1.2. New databases are automatically audited

14.2.2.1.3. Cannot disable at database level

14.2.2.2. Database level policy

14.2.2.2.1. Does not override server level policy

14.2.2.2.2. Can be different retention period from server

14.2.2.3. For replication failover scenarios, database level auditing policies automatically carry across to the secondary server (as it takes over as primary)

14.2.2.3.1. But for server level audit policies, you will need to set them up manually on the secondary server to ensure there is no break in auditing in the event of a failover

14.2.3. Approach to auditing

14.2.3.1. 1. Enable

14.2.3.2. 2. Select (or create on fly) storage account

14.2.3.2.1. for storing the audit logs

14.2.3.3. 3. Choose retention period

14.2.3.3.1. Remember there is a cost implication

14.2.3.4. As well as saving audit logs to Blob storage, there are new (Preview) options for sending audit logs to Azure services

14.2.3.4.1. Log Analytics

14.2.3.4.2. Event Hub

14.2.4. Setting up SQL server auditing via Azure Portal

14.2.4.1. SQL server | Security | Auditing

14.2.4.1.1. 1. Auditing = ON

14.2.4.1.2. 2. Check Storage checkbox

14.2.4.1.3. 3. Configure Storage

14.2.4.1.4. 4. Click Save

14.2.5. Setting up SQL database auditing via Azure Portal

14.2.5.1. SQL database | Security | Auditing

14.2.5.1.1. Same process as server level

14.2.6. Once you have enabled a new auditing policy, you will be able to see the audit logs stored in Blob storage using Azure Storage Explorer

14.2.6.1. To view the logs, go to Security | Auditing (server or database level as appropriate) and click View Audit Logs

14.2.6.1.1. The logs are very verbose, so look at connecting Power BI, which supports consumption of Azure SQL Database audit logs and provides built-in dashboards and reports for this