<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>David Barbarin &#187; SQL Azure</title>
	<atom:link href="https://blog.developpez.com/mikedavem/pcategory/sql-azure/feed" rel="self" type="application/rss+xml" />
	<link>https://blog.developpez.com/mikedavem</link>
	<description>MVP DataPlatform - MCM SQL Server</description>
	<lastBuildDate>Thu, 09 Sep 2021 21:19:50 +0000</lastBuildDate>
	<language>fr-FR</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=4.1.42</generator>
	<item>
		<title>FinOps with Azure Cost management and Azure Log Analytics</title>
		<link>https://blog.developpez.com/mikedavem/p13208/sql-azure/finops-with-azure-cost-management-and-azure-log-analytics</link>
		<comments>https://blog.developpez.com/mikedavem/p13208/sql-azure/finops-with-azure-cost-management-and-azure-log-analytics#comments</comments>
		<pubDate>Wed, 12 May 2021 15:37:47 +0000</pubDate>
		<dc:creator><![CDATA[mikedavem]]></dc:creator>
				<category><![CDATA[DevOps]]></category>
		<category><![CDATA[SQL Azure]]></category>
		<category><![CDATA[Azure]]></category>
		<category><![CDATA[devops]]></category>
		<category><![CDATA[finops]]></category>
		<category><![CDATA[Log Analytics]]></category>
		<category><![CDATA[observability]]></category>

		<guid isPermaLink="false">http://blog.developpez.com/mikedavem/?p=1801</guid>
		<description><![CDATA[In a previous blog post, I surfaced Azure monitor capabilities for extending observability of Azure SQL databases. We managed to correlate different metrics and SQL logs to identify new execution patterns against our Azure SQL DB, and we finally went &#8230; <a href="https://blog.developpez.com/mikedavem/p13208/sql-azure/finops-with-azure-cost-management-and-azure-log-analytics">Lire la suite <span class="meta-nav">&#8594;</span></a>]]></description>
				<content:encoded><![CDATA[<p>In a <a href="https://blog.developpez.com/mikedavem/p13205/sql-azure/azure-monitor-as-observability-platform-for-azure-sql-databases" rel="noopener" target="_blank">previous blog post</a>, I surfaced Azure Monitor capabilities for extending observability of Azure SQL databases. We managed to correlate different metrics and SQL logs to identify new execution patterns against our Azure SQL DB, and we finally went through a new compute tier model that fits better with our new context. In this blog post, I would like to share some new experiences about combining Azure cost analysis and Azure Log Analytics to spot an “abnormal” trend and fix it. </p>
<p><span id="more-1801"></span></p>
<p>If you deal with Cloud services and infrastructure, FinOps is a discipline you should get into for keeping your costs under control and getting actionable insights that can make cloud spending more efficient. Azure Cost Management provides visibility and control. Azure cost analysis is my favorite tool when I want to figure out the costs of the different services and to visualize improvements after applying quick wins or architecture upgrades to the environment. It is also a good place to identify stale resources to clean up. I will focus on Azure SQL DB here. From a cost perspective, the Azure SQL DB service includes different meter subcategories depending on the options and the service tier you use. You may have to pay for the compute, the dedicated storage for your database, your backups (PITR or LTR) and so on … Cost analysis allows drill-down analysis through different axes with aggregation or forecast capabilities. </p>
<p>In our context, we would like to know if moving from the Azure SQL DB Serverless compute tier (Pay-As-You-Go) to the Provisioned tier (+ Azure Hybrid Benefit + Reserved Instances for 3 years) has some good effects on costs.  A first look at the cost analysis section, applying the correct filters and aggregating data by compute tier, confirmed our initial assumption that Serverless no longer fits our context. The chart uses a monthly timeframe with daily aggregation. We switched to a different model mid-April, as shown below:</p>
<p><a href="http://blog.developpez.com/mikedavem/files/2021/05/blog-176-serveless-vs-compute-tier-e1620817381744.png"><img src="http://blog.developpez.com/mikedavem/files/2021/05/blog-176-serveless-vs-compute-tier-1024x355.png" alt="blog 176 - serveless vs compute tier" width="584" height="202" class="alignnone size-large wp-image-1802" /></a></p>
<p>Real numbers are confidential but not so important here. We can easily notice a drop in daily cost (~ 0.5) between the Serverless and Provisioned compute tiers. </p>
<p>If we get a higher-level view of all services and costs for the previous months, the trend is also confirmed for April, with serverless + provisioned tier combined costs lower than the serverless compute tier alone in previous months. But we need to wait for the coming months to confirm the trend. </p>
<p><a href="http://blog.developpez.com/mikedavem/files/2021/05/blog-176-compute-vs-backup-storage-e1620817431402.png"><img src="http://blog.developpez.com/mikedavem/files/2021/05/blog-176-compute-vs-backup-storage-1024x727.png" alt="blog 176 - compute vs backup storage" width="584" height="415" class="alignnone size-large wp-image-1803" /></a></p>
<p>At the same time (and this is the focus of this write-up), we detected a sudden increase in backup storage cost in March that could ruin the optimization efforts made on compute, right? :). To explain this new trend, Log Analytics came to the rescue. As explained in the previous blog post, we configured streaming of Azure SQL DB telemetry into a Log Analytics target to benefit from solutions like SQL Insights and custom queries across the different Azure logs. </p>
<p>Basic metrics are part of the Azure SQL DB telemetry and stored in the AzureMetrics table. We can use a Kusto query to extract backup metrics and get an idea of the different backup type trends over time, including FULL, DIFF and LOG backups. The following query shows backup trends within the same timeframe used for billing in cost management (February to May). It also includes the <a href="https://docs.microsoft.com/en-us/azure/data-explorer/kusto/query/series-fit-linefunction" rel="noopener" target="_blank">series_fit_line</a> function to draw a trendline in the time chart.</p>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap">AzureMetrics<br />
| where TimeGenerated &gt;= ago(90d)<br />
| where Resource == 'myDB'<br />
| where MetricName == 'full_backup_size_bytes' // in ('full_backup_size_bytes','diff_backup_size_bytes','log_backup_size_bytes')<br />
| make-series SizeBackupTB=max(Maximum/1024/1024/1024/1024) on TimeGenerated in range(ago(90d),now(), 1d)<br />
| extend (RSquare,Slope,Variance,RVariance,Interception,TrendLine)=series_fit_line(SizeBackupTB)<br />
| render timechart</div></div>
<p><strong>Full backup time chart</strong></p>
<p><a href="http://blog.developpez.com/mikedavem/files/2021/05/blog-176-backup-full-trend-e1620817591328.png"><img src="http://blog.developpez.com/mikedavem/files/2021/05/blog-176-backup-full-trend-1024x464.png" alt="blog 176 - backup full trend" width="584" height="265" class="alignnone size-large wp-image-1804" /></a></p>
<p>FULL backup size is relatively steady and cannot explain the sudden increase in backup storage cost in our case. </p>
<p><strong>DIFF and LOG backup time chart</strong></p>
<p><a href="http://blog.developpez.com/mikedavem/files/2021/05/blog-176-backup-diff-trend-e1620832494315.png"><img src="http://blog.developpez.com/mikedavem/files/2021/05/blog-176-backup-diff-trend-1024x463.png" alt="blog 176 - backup diff trend" width="584" height="264" class="alignnone size-large wp-image-1806" /></a></p>
<p>&#8230;</p>
<p><a href="http://blog.developpez.com/mikedavem/files/2021/05/blog-176-backup-log-trend-e1620832522947.png"><img src="http://blog.developpez.com/mikedavem/files/2021/05/blog-176-backup-log-trend-1024x469.png" alt="blog 176 - backup log trend" width="584" height="267" class="alignnone size-large wp-image-1807" /></a></p>
<p>The LOG and DIFF backup charts are more relevant, and the trendlines suggest a noticeable change starting mid-March. From the first part of the month, the trendline starts diverging from the backup size series. </p>
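<p>For reference, here is a variant of the query above (a sketch reusing the metric names from its inline comment) that charts the DIFF and LOG series side by side, each with its own trendline:</p>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap">AzureMetrics<br />
| where TimeGenerated &gt;= ago(90d)<br />
| where Resource == 'myDB'<br />
// one series per backup type<br />
| where MetricName in ('diff_backup_size_bytes','log_backup_size_bytes')<br />
| make-series SizeBackupTB=max(Maximum/1024/1024/1024/1024) on TimeGenerated in range(ago(90d),now(), 1d) by MetricName<br />
| extend (RSquare,Slope,Variance,RVariance,Interception,TrendLine)=series_fit_line(SizeBackupTB)<br />
| render timechart</div></div>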
<p>At this stage, we had found the cause of the cost increase, but we were interested in understanding the reasons that may explain such a trend. After investigating our ITSM system, we were able to find a correlation with the deployment of a new maintenance tool &#8211; the <a href="https://ola.hallengren.com/" rel="noopener" target="_blank">Ola Hallengren</a> maintenance solution + custom scripts to rebuild columnstore indexes. The latter aggressively rebuilds 2 big fact tables with CCI in our DW (unlike the former tool), which explains the increase in DIFF and LOG backup sizes (~ 1TB). </p>
<p><a href="http://blog.developpez.com/mikedavem/files/2021/05/blog-176-fact-tables.png"><img src="http://blog.developpez.com/mikedavem/files/2021/05/blog-176-fact-tables.png" alt="blog 176 - fact tables" width="664" height="120" class="alignnone size-full wp-image-1808" /></a></p>
<p>This is where collaboration with the data engineering team starts, to find an efficient and durable way to minimize the impact of the maintenance:</p>
<p>&#8211; Reviewing the custom script threshold may result in a more relaxed detection of fragmented columnstore indexes. However, this is only a piece of the solution, because when a columnstore index becomes a good candidate for the next maintenance operation, it will lead to a resource-intensive and time-consuming operation (&gt; 2.5h dedicated to these two tables). We are using Azure automation jobs with fair share to execute the maintenance and we are limited to 3h max per job execution. We may use a divide and conquer strategy to fit within the permitted execution timeframe, but it would lead to more complexity and we want to keep maintenance as simple as possible. </p>
<p>&#8211; We need to find another way to keep index and statistics maintenance job execution time under control.  Introducing partitioning for these tables is probably a good catch and another piece of the solution. Indeed, the concerned tables are currently not partitioned, and we could benefit from partition-level maintenance for both indexes and statistics, as sketched below.</p>
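<p>As a sketch of what partition-level maintenance could look like once the tables are partitioned (table, index and partition number below are hypothetical):</p>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap">-- Rebuild a single partition of the clustered columnstore index<br />
ALTER INDEX [cci_FactSales] ON [dbo].[FactSales] REBUILD PARTITION = 42;<br />
<br />
-- With incremental statistics, resample only the partitions that changed<br />
UPDATE STATISTICS [dbo].[FactSales] WITH RESAMPLE ON PARTITIONS (42);</div></div>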
<p><strong>Bottom line</strong></p>
<p>Azure Cost Management and Log Analytics are a powerful recipe in the FinOps practice. The Kusto query language is a flexible tool for finding and correlating all kinds of log entries and events, assuming you routed telemetry to the right target. I definitely like an annotation-like system, as we use with Grafana, because it makes correlation with external changes and workflows easier. Next step: investigate annotations on metric charts in Application Insights? </p>
<p>See you!!</p>
]]></content:encoded>
			<wfw:commentRss></wfw:commentRss>
		<slash:comments>1</slash:comments>
		</item>
		<item>
		<title>Azure monitor as observability platform for Azure SQL Databases and more</title>
		<link>https://blog.developpez.com/mikedavem/p13205/sql-azure/azure-monitor-as-observability-platform-for-azure-sql-databases</link>
		<comments>https://blog.developpez.com/mikedavem/p13205/sql-azure/azure-monitor-as-observability-platform-for-azure-sql-databases#comments</comments>
		<pubDate>Mon, 08 Feb 2021 16:57:26 +0000</pubDate>
		<dc:creator><![CDATA[mikedavem]]></dc:creator>
				<category><![CDATA[DevOps]]></category>
		<category><![CDATA[SQL Azure]]></category>
		<category><![CDATA[Azure Monitor]]></category>
		<category><![CDATA[Azure SQL Analytics]]></category>
		<category><![CDATA[Azure SQL Database]]></category>
		<category><![CDATA[devops]]></category>
		<category><![CDATA[Log Analytics]]></category>
		<category><![CDATA[observability]]></category>
		<category><![CDATA[performance]]></category>

		<guid isPermaLink="false">http://blog.developpez.com/mikedavem/?p=1762</guid>
		<description><![CDATA[In a previous blog post, I wrote about reasons we moved our monitoring of on-prem SQL Server instances to Prometheus and Grafana. But what about Cloud and database services? We have different options and obviously in my company we thought &#8230; <a href="https://blog.developpez.com/mikedavem/p13205/sql-azure/azure-monitor-as-observability-platform-for-azure-sql-databases">Lire la suite <span class="meta-nav">&#8594;</span></a>]]></description>
				<content:encoded><![CDATA[<p>In a previous <a href="https://blog.developpez.com/mikedavem/p13203/sql-server-2014/why-we-moved-sql-server-monitoring-on-prometheus-and-grafana" rel="noopener" target="_blank">blog post</a>, I wrote about the reasons we moved our monitoring of on-prem SQL Server instances to Prometheus and Grafana. But what about Cloud and database services? </p>
<p><span id="more-1762"></span></p>
<p>We have different options, and obviously in my company we first thought of moving our Azure SQL Database workload telemetry onto the on-prem central monitoring infrastructure as well. But the main blocker is the serverless compute tier: the Telegraf server agent would imply initiating a connection that could prevent auto-pausing the database, or at least it would make monitoring more complex because it would assume a predictable workload all the time. </p>
<p>The second option was to rely on Azure Monitor, which is a common platform combining several logging, monitoring and dashboard solutions across a wide set of Azure resources. It is a scalable, fully managed platform and provides a powerful query language and native features like alerts when logs or metrics match specific conditions. Another important point is that there is no vendor lock-in with this solution, as we can always fall back to our self-hosted Prometheus and Grafana instances if the compute tier no longer fits or if Azure Monitor stops being an option! </p>
<p>Firstly, to achieve good observability with Azure SQL Database we need to put both diagnostic telemetry and SQL Server audit events in a common Log Analytics workspace. A quick illustration below: </p>
<p><a href="http://blog.developpez.com/mikedavem/files/2021/02/173-0-Azure-SQL-DB-Monitor-architecture.jpg"><img src="http://blog.developpez.com/mikedavem/files/2021/02/173-0-Azure-SQL-DB-Monitor-architecture-1024x387.jpg" alt="173 - 0 - Azure SQL DB Monitor architecture" width="584" height="221" class="alignnone size-large wp-image-1763" /></a></p>
<p>Diagnostic settings are configured per database and include basic metrics (CPU, IO, memory, etc.) as well as different SQL Server internal metrics such as deadlocks, blocked processes or Query Store information about query execution statistics and waits. For more details please refer to the Microsoft <a href="https://docs.microsoft.com/en-us/azure/azure-sql/database/metrics-diagnostic-telemetry-logging-streaming-export-configure?tabs=azure-portal" rel="noopener" target="_blank">BOL</a>.</p>
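<p>As an illustration, here is a hedged PowerShell sketch of wiring a database&rsquo;s diagnostic settings to a workspace. Resource names are placeholders, the category list is only a subset, and Set-AzDiagnosticSetting has since been superseded by New-AzDiagnosticSetting in recent Az.Monitor versions:</p>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap"># Placeholders: rg-xxxx / sql-xxxx / law-xxxx<br />
$dbId = (Get-AzSqlDatabase -ResourceGroupName 'rg-xxxx' -ServerName 'sql-xxxx' -DatabaseName 'xxxx').ResourceId<br />
$lawId = (Get-AzOperationalInsightsWorkspace -ResourceGroupName 'rg-xxxx' -Name 'law-xxxx').ResourceId<br />
<br />
# Stream basic metrics and a subset of the SQL diagnostic categories to Log Analytics<br />
Set-AzDiagnosticSetting -ResourceId $dbId -WorkspaceId $lawId -Enabled $true `<br />
&nbsp; &nbsp; -Category 'QueryStoreRuntimeStatistics','QueryStoreWaitStatistics','Blocks','Deadlocks' `<br />
&nbsp; &nbsp; -MetricCategory 'Basic'</div></div>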
<p>Azure SQL DB auditing is a server-level or database-level configuration setting. In our context, we defined a template of events at the server level, which is then applied to all databases within the logical server. By default, 3 action groups are automatically audited:<br />
&#8211;	BATCH_COMPLETED_GROUP<br />
&#8211;	SUCCESSFUL_DATABASE_AUTHENTICATION_GROUP<br />
&#8211;	FAILED_DATABASE_AUTHENTICATION_GROUP</p>
<p>The first one in the list is probably worth discussing depending on the environment, because of its impact, but in our context that&rsquo;s ok since we face a data warehouse workload. However, we added other ones to meet our security requirements (a configuration sketch follows the list):<br />
&#8211;	PERMISSION_CHANGE_GROUP<br />
&#8211;	DATABASE_PRINCIPAL_CHANGE_GROUP<br />
&#8211;	DATABASE_ROLE_MEMBER_CHANGE_GROUP<br />
&#8211;	USER_CHANGE_PASSWORD_GROUP</p>
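<p>Putting it together, a hedged Az.Sql sketch (server and workspace names are placeholders) that applies this audit template at the server level with Log Analytics as the target:</p>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap"># Placeholders: rg-xxxx / sql-xxxx / law-xxxx<br />
Set-AzSqlServerAudit -ResourceGroupName 'rg-xxxx' -ServerName 'sql-xxxx' `<br />
&nbsp; &nbsp; -LogAnalyticsTargetState Enabled `<br />
&nbsp; &nbsp; -WorkspaceResourceId '/subscriptions/xxxx/resourceGroups/rg-xxxx/providers/Microsoft.OperationalInsights/workspaces/law-xxxx' `<br />
&nbsp; &nbsp; -AuditActionGroup 'BATCH_COMPLETED_GROUP','SUCCESSFUL_DATABASE_AUTHENTICATION_GROUP', `<br />
&nbsp; &nbsp; &nbsp; &nbsp; 'FAILED_DATABASE_AUTHENTICATION_GROUP','PERMISSION_CHANGE_GROUP','DATABASE_PRINCIPAL_CHANGE_GROUP', `<br />
&nbsp; &nbsp; &nbsp; &nbsp; 'DATABASE_ROLE_MEMBER_CHANGE_GROUP','USER_CHANGE_PASSWORD_GROUP'</div></div>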
<p>But if you look closely at Log Analytics as a target for SQL audits, you will notice it is still a feature in preview, as shown below:</p>
<p><a href="http://blog.developpez.com/mikedavem/files/2021/02/173-4-Audit-target.jpg"><img src="http://blog.developpez.com/mikedavem/files/2021/02/173-4-Audit-target.jpg" alt="173 - 4 - Audit target" width="484" height="153" class="alignnone size-full wp-image-1765" /></a></p>
<p>To be clear, we usually don’t consider using Azure preview features in production, especially when they remain in this state for a long time, but in this specific context we were interested in the observability capabilities of the platform. On the one hand, we get very useful performance insights through SQL Analytics dashboards (again in preview) and, on the other hand, we can easily query logs and traces through Log Analytics for correlation with other metrics. Obviously, we hope Microsoft moves a step further and brings this feature to GA in the near future. </p>
<p>Let’s talk briefly about SQL Analytics first. It is an advanced and free cloud solution for monitoring Azure SQL Database performance, and it relies mainly on your Azure diagnostic metrics and Azure Monitor views to present data in a structured way through performance dashboards.</p>
<p>Here is an example of the built-in dashboards we are using to track activity and high CPU / IO bound queries against our data warehouse.</p>
<p><a href="http://blog.developpez.com/mikedavem/files/2021/02/173-1-SQL-Analytics-general-dashboard-e1612797920282.jpg"><img src="http://blog.developpez.com/mikedavem/files/2021/02/173-1-SQL-Analytics-general-dashboard-1024x410.jpg" alt="173 - 1 - SQL Analytics general dashboard" width="584" height="234" class="alignnone size-large wp-image-1768" /></a></p>
<p>You can drill down to different contextual dashboards to get insights into resource-intensive queries. For example, we identified some LOG IO intensive queries against a clustered columnstore index and, after refactoring an UPDATE statement into DELETE + INSERT, we drastically reduced LOG IO waits.</p>
<p><a href="http://blog.developpez.com/mikedavem/files/2021/02/173-2-SQL-Analytics-IO-e1612797960660.jpg"><img src="http://blog.developpez.com/mikedavem/files/2021/02/173-2-SQL-Analytics-IO-1024x316.jpg" alt="173 - 2 - SQL Analytics IO" width="584" height="180" class="alignnone size-large wp-image-1767" /></a></p>
<p>In addition, Azure Monitor helped us in another scenario where we tried to figure out recent workload patterns and to know if the current compute tier still fits them. As said previously, we rely on the Serverless compute tier to handle the data warehouse-oriented workload with both auto-scaling and auto-pausing capabilities. At first glance, we might expect a typical nightly workload as illustrated in the Microsoft <a href="https://docs.microsoft.com/en-us/azure/azure-sql/database/serverless-tier-overview#:~:text=Serverless%20is%20a%20compute%20tier,of%20compute%20used%20per%20second." rel="noopener" target="_blank">BOL</a>, and a cost optimized for this workload:</p>
<p><a href="http://blog.developpez.com/mikedavem/files/2021/02/173-6-Serverless-pattern.jpg"><img src="http://blog.developpez.com/mikedavem/files/2021/02/173-6-Serverless-pattern.jpg" alt="173 - 6 - Serverless pattern" width="516" height="316" class="alignnone size-full wp-image-1769" /></a></p>
<p><em>Images from Microsoft BOL</em></p>
<p>It may have been true when the activity started on Azure, but the game has changed with new incoming projects over time. Starting with the general performance dashboard, the workload seems to follow the right pattern for the Serverless compute tier, but we noticed that billing kept going during unexpected timeframes, as shown below. Note that I deliberately show only a two-day sample, but this pattern is a good representation of the general workload in our context. </p>
<p><a href="http://blog.developpez.com/mikedavem/files/2021/02/173-3-General-performance-dashboard.jpg"><img src="http://blog.developpez.com/mikedavem/files/2021/02/173-3-General-performance-dashboard-1024x556.jpg" alt="173 - 3 - General performance dashboard" width="584" height="317" class="alignnone size-large wp-image-1771" /></a></p>
<p>Indeed, the workload should be mostly nightly-oriented with sporadic activity during the day, but a quick correlation with other basic metrics like CPU or memory percentage usage confirmed persistent activity all day. We have CPU spikes and probably small batches that keep a minimum of memory in use at other moments. </p>
<p>As per the <a href="https://docs.microsoft.com/en-us/azure/azure-sql/database/serverless-tier-overview#:~:text=Serverless%20is%20a%20compute%20tier,of%20compute%20used%20per%20second." rel="noopener" target="_blank">Microsoft documentation</a>, the minimum auto-pausing delay value is 1h and requires an inactive database (number of sessions = 0 and CPU = 0 for the user workload) during this timeframe. Basic metrics didn’t provide any further insights about the connections, applications or users that could generate such &laquo;&nbsp;noisy&nbsp;&raquo; activity, so we had to go another way, looking at the SQL audit logs stored in Azure Monitor Logs. Data can be read through KQL, which stands for Kusto Query Language (and not Kibana Query Language <img src="https://blog.developpez.com/mikedavem/wp-includes/images/smilies/icon_smile.gif" alt=":-)" class="wp-smiley" /> ). It’s the language used to query the Azure log databases (Azure Monitor Logs, Azure Monitor Application Insights and others) and it is pretty similar to SQL in its constructs. </p>
<p>Here is the first query I used to correlate, with the metrics, the number of events that could prevent auto-pausing from kicking in for the concerned database, including RPC COMPLETED, BATCH COMPLETED, DATABASE AUTHENTICATION SUCCEEDED or DATABASE AUTHENTICATION FAILED:</p>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap">AzureDiagnostics<br />
| where Category == 'SQLSecurityAuditEvents' and (action_name_s in ('RPC COMPLETED','BATCH COMPLETED') or action_name_s contains &quot;DATABASE AUTHENTICATION&quot;) and LogicalServerName_s == 'xxxx' and database_name_s == 'xxxx'<br />
| summarize count() by bin(event_time_t, 1h),action_name_s<br />
| render columnchart</div></div>
<p>Results are aggregated and bucketed per hour on the event generation time with the bin() function. Finally, for a quick and easy read, I chose a simple, unformatted column chart render. Here is the outcome:</p>
<p><a href="http://blog.developpez.com/mikedavem/files/2021/02/173-7-Audit-per-hour-per-event-e1612798279257.jpg"><img src="http://blog.developpez.com/mikedavem/files/2021/02/173-7-Audit-per-hour-per-event-1024x459.jpg" alt="173 - 7 - Audit per hour per event" width="584" height="262" class="alignnone size-large wp-image-1772" /></a></p>
<p>As you probably noticed, daily activity is pretty small compared to the nightly one and seems to consist of SQL batches and remote procedure calls. Even from this unclear picture, we can confirm that the daily workload is enough to keep the billing going, because there is no one-hour timeframe without any activity, as the query below double-checks. </p>
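<p>To double-check that claim, a small KQL sketch (server and database names are placeholders) lists any one-hour bucket with zero audited events over the last week; an empty result means the database never stays inactive long enough to auto-pause:</p>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap">AzureDiagnostics<br />
| where Category == 'SQLSecurityAuditEvents'<br />
| where LogicalServerName_s == 'xxxx' and database_name_s == 'xxxx'<br />
// make-series fills the empty count() buckets with 0<br />
| make-series nb=count() on event_time_t in range(ago(7d), now(), 1h)<br />
| mv-expand event_time_t to typeof(datetime), nb to typeof(long)<br />
| where nb == 0<br />
| project event_time_t</div></div>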
<p>Let’s write another KQL query to draw a clearer picture of which applications ran during the daily timeframe 07:00 – 20:00:</p>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap">let start=datetime(&quot;2021-01-26&quot;);<br />
let end=datetime(&quot;2021-01-29&quot;);<br />
let dailystart=7;<br />
let dailyend=20;<br />
let timegrain=1d;<br />
AzureDiagnostics<br />
| project action_name_s, event_time_t, application_name_s, server_principal_name_s, Category, LogicalServerName_s, database_name_s<br />
| where Category == 'SQLSecurityAuditEvents' and (action_name_s in ('RPC COMPLETED','BATCH COMPLETED') or action_name_s contains &quot;DATABASE AUTHENTICATION&quot;)<br />
| where LogicalServerName_s == 'xxxx' and database_name_s == 'xxxx' <br />
| where event_time_t &gt; start and event_time_t &lt; end<br />
| where datetime_part(&quot;Hour&quot;, event_time_t) between (dailystart .. dailyend)<br />
| summarize count() by bin(event_time_t, 1h), application_name_s<br />
| render columnchart with (xtitle = 'Date', ytitle = 'Nb events', title = 'Prod SQL Workload pattern')</div></div>
<p>And here is the new outcome:</p>
<p><a href="http://blog.developpez.com/mikedavem/files/2021/02/173-8-Audit-per-hour-per-application.jpg"><img src="http://blog.developpez.com/mikedavem/files/2021/02/173-8-Audit-per-hour-per-application-1024x380.jpg" alt="173 - 8 - Audit per hour per application" width="584" height="217" class="alignnone size-large wp-image-1774" /></a></p>
<p>The new chart reveals some activity from SQL Server Management Studio, but the biggest part concerns applications using the .NET SQL Data Provider. For better clarity, we need more information about those applications and, in my context, I managed to address the point by reducing the search scope with the service principal name that issued the related audit event. It results in this new outcome, which is pretty similar to the previous one (the added filter is shown right after the chart):</p>
<p><a href="http://blog.developpez.com/mikedavem/files/2021/02/173-9-Audit-per-hour-per-sp.jpg"><img src="http://blog.developpez.com/mikedavem/files/2021/02/173-9-Audit-per-hour-per-sp-1024x362.jpg" alt="173 - 9 - Audit per hour per sp" width="584" height="206" class="alignnone size-large wp-image-1775" /></a></p>
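<p>For reference, the scope reduction boils down to one more filter on the previous query; a sketch with placeholder values:</p>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap">AzureDiagnostics<br />
| where Category == 'SQLSecurityAuditEvents'<br />
| where LogicalServerName_s == 'xxxx' and database_name_s == 'xxxx'<br />
| where action_name_s in ('RPC COMPLETED','BATCH COMPLETED') or action_name_s contains &quot;DATABASE AUTHENTICATION&quot;<br />
// reduce the scope to the suspected service principal<br />
| where server_principal_name_s == 'xxxx'<br />
| summarize count() by bin(event_time_t, 1h), server_principal_name_s<br />
| render columnchart</div></div>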
<p>Good job so far. For the sake of clarity, the service principal obfuscated above is used by our Reporting Server infrastructure and its reports to get data from this data warehouse.  By investigating daily activity at different moments on the concerned Azure SQL database this way, we came to the conclusion that using the Serverless compute tier didn’t make sense anymore and that we likely need to move to another compute tier.</p>
<p><strong>Additional thoughts</strong></p>
<p>Azure Monitor is definitely a must-have if you are running resources on Azure and don’t own an observability platform (metrics, logs and traces). Otherwise, it can even be beneficial for freeing up your on-prem monitoring infrastructure resources if scalability is a concern. Furthermore, there is no vendor lock-in and you can decide to stream Azure Monitor data elsewhere, but at the cost of additional network transfer fees depending on the target scenario. For example, Azure Monitor can be used directly as a data source with Grafana, Azure SQL telemetry can be collected with the Telegraf agent, and audit logs can be recorded in another logging system like Kibana. In this blog post we just surfaced Azure Monitor capabilities but, as demonstrated above, performing deep correlation analysis across different sources in very few steps is a strong point of this platform.</p>
]]></content:encoded>
			<wfw:commentRss></wfw:commentRss>
		<slash:comments>2</slash:comments>
		</item>
		<item>
		<title>Extending SQL Server monitoring with Raspberry PI and Lametric</title>
		<link>https://blog.developpez.com/mikedavem/p13204/sql-server-2005/extending-sql-server-monitoring-with-raspberry-pi-and-lametric</link>
		<comments>https://blog.developpez.com/mikedavem/p13204/sql-server-2005/extending-sql-server-monitoring-with-raspberry-pi-and-lametric#comments</comments>
		<pubDate>Thu, 07 Jan 2021 21:59:25 +0000</pubDate>
		<dc:creator><![CDATA[mikedavem]]></dc:creator>
				<category><![CDATA[DevOps]]></category>
		<category><![CDATA[Docker]]></category>
		<category><![CDATA[K8s]]></category>
		<category><![CDATA[SQL Azure]]></category>
		<category><![CDATA[SQL Server 2005]]></category>
		<category><![CDATA[SQL Server 2008]]></category>
		<category><![CDATA[SQL Server 2008 R2]]></category>
		<category><![CDATA[SQL Server 2014]]></category>
		<category><![CDATA[SQL Server 2016]]></category>
		<category><![CDATA[SQL Server 2017]]></category>
		<category><![CDATA[SQL Server 2019]]></category>
		<category><![CDATA[Lametric]]></category>
		<category><![CDATA[monitoring]]></category>
		<category><![CDATA[Powershell]]></category>
		<category><![CDATA[Raspberry]]></category>
		<category><![CDATA[sqlserver]]></category>

		<guid isPermaLink="false">http://blog.developpez.com/mikedavem/?p=1742</guid>
		<description><![CDATA[First blog of this new year 2021 and I will start with a fancy and How-To Geek topic In my last blog post, I discussed about monitoring and how it should help to address quickly a situation that is going &#8230; <a href="https://blog.developpez.com/mikedavem/p13204/sql-server-2005/extending-sql-server-monitoring-with-raspberry-pi-and-lametric">Lire la suite <span class="meta-nav">&#8594;</span></a>]]></description>
				<content:encoded><![CDATA[<p>First blog post of this new year 2021, and I will start with a fancy, How-To Geek topic. </p>
<p>In my <a href="https://blog.developpez.com/mikedavem/p13203/sql-server-2014/why-we-moved-sql-server-monitoring-on-prometheus-and-grafana" rel="noopener" target="_blank">last blog post</a>, I discussed monitoring and how it should help to quickly address a degrading situation. Alerts are probably the first way to catch your attention and, in my case, they often come as emails in a dedicated folder. That remains a good thing, at least if you’re not focused too long on other daily tasks or projects. At the office, I know I would probably notice new alerts sooner but, as I said previously, telework has definitely changed the game.  </p>
<p><span id="more-1742"></span></p>
<p>I wanted to find a way to address this concern, at least for the main critical SQL Server alerts, and I thought about relying on my existing home lab infrastructure for it. It is always a good opportunity to learn something and to improve my skills on a real-case scenario. </p>
<p>My home lab infrastructure includes a cluster of <a href="https://www.raspberrypi.org/products/raspberry-pi-4-model-b/" rel="noopener" target="_blank">Raspberry PI 4</a> nodes. Initially, I used it to improve my skills on K8s or to study some IoT stuff, for instance. It is a good candidate for developing and deploying a new app that detects new incoming alerts in my mailbox and sends notifications to my Lametric accordingly. </p>
<p><a href="https://lametric.com/" rel="noopener" target="_blank">Lametric</a> is a basically a connected clock but works also as a highly-visible display showing notifications from devices or apps via REST APIs. First time I saw such device in action was in a DevOps meetup in 2018 around Docker and Jenkins deployment with <a href="https://www.linkedin.com/in/duquesnoyeric/" rel="noopener" target="_blank">Eric Dusquenoy</a> and Tim Izzo (<a href="https://twitter.com/5ika_" rel="noopener" target="_blank">@5ika_</a>). In addition, one of my previous customers had also one in his office and we had some discussions about cool customization through Lametric apps. </p>
<p>Connecting through VPN to my company network is mandatory to work from home, and unfortunately the Lametric device doesn’t support this scenario because communication is limited to the local network only. So, I need an app that runs on my local (home) network and is able to connect to my mailbox, get new incoming emails and finally send notifications to my Lametric device. </p>
<p>Here is my setup:</p>
<p><a href="http://blog.developpez.com/mikedavem/files/2021/01/171-0-lametric_infra.jpg"><img src="http://blog.developpez.com/mikedavem/files/2021/01/171-0-lametric_infra-1024x711.jpg" alt="171 - 0 - lametric_infra" width="584" height="405" class="alignnone size-large wp-image-1743" /></a></p>
<p>There are plenty of good blog posts about creating a Raspberry cluster on the internet, and I would suggest reading <a href="https://dbafromthecold.com/2020/11/30/building-a-raspberry-pi-cluster-to-run-azure-sql-edge-on-kubernetes/" rel="noopener" target="_blank">the one</a> by Andrew Pruski (<a href="https://twitter.com/dbafromthecold" rel="noopener" target="_blank">@dbafromthecold</a>). </p>
<p>As shown above, there are different paths for SQL alerts across our infrastructure (on-prem and Azure SQL databases), but all of them are sent to a dedicated DBA distribution list. </p>
<p>The app is a simple PowerShell script that relies on the Exchange Web Services APIs to connect to the mailbox and get new mails. Sending notifications to my Lametric device is achieved by a simple REST API call with a well-formatted body. Details can be found in the <a href="https://lametric-documentation.readthedocs.io/en/latest/reference-docs/device-notifications.html" rel="noopener" target="_blank">Lametric documentation</a>. As a prerequisite, you need to create a notification app from the Lametric Developer site as follows:</p>
<p><a href="http://blog.developpez.com/mikedavem/files/2021/01/171-3-lametric-app-token.jpg"><img src="http://blog.developpez.com/mikedavem/files/2021/01/171-3-lametric-app-token-1024x364.jpg" alt="171 - 3 - lametric app token" width="584" height="208" class="alignnone size-large wp-image-1744" /></a></p>
<p>As said previously, I used PowerShell for this app. It helps to find documentation and tutorials when it comes to Microsoft products. But if you are more comfortable with Python, the APIs are also available in a <a href="https://pypi.org/project/py-ews/" rel="noopener" target="_blank">dedicated package</a>. Let’s note that using PowerShell doesn’t necessarily mean using a Windows-based container; instead I relied on a Linux-based image with PowerShell Core for the ARM architecture, provided by Microsoft on <a href="https://hub.docker.com/_/microsoft-powershell" rel="noopener" target="_blank">Docker Hub</a>. Finally, sensitive information like the Lametric token or mailbox credentials is stored in a K8s secret for security reasons. My app project is available on my <a href="https://github.com/mikedavem/lametric" rel="noopener" target="_blank">GitHub</a>. Feel free to use it.</p>
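<p>Stripped of the mailbox logic, the notification call itself is only a few lines. Here is a minimal sketch, where the device IP, API key and icon id are placeholders (the device exposes the API on port 8080 with basic auth, per the Lametric documentation linked above):</p>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap"># Placeholders: device IP, API key and icon id<br />
$deviceIp = '192.168.0.50'<br />
$apiKey = ConvertTo-SecureString 'xxxx' -AsPlainText -Force<br />
$cred = New-Object System.Management.Automation.PSCredential('dev', $apiKey)<br />
$body = @{<br />
&nbsp; &nbsp; priority = 'critical'<br />
&nbsp; &nbsp; model = @{ frames = @(@{ icon = 'i555'; text = 'SQL ALERT' }); cycles = 2 }<br />
} | ConvertTo-Json -Depth 4<br />
# -AllowUnencryptedAuthentication is required by PowerShell Core for basic auth over plain HTTP<br />
Invoke-RestMethod -Method Post -Uri &quot;http://$($deviceIp):8080/api/v2/device/notifications&quot; `<br />
&nbsp; &nbsp; -Credential $cred -AllowUnencryptedAuthentication -Body $body -ContentType 'application/json'</div></div>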
<p>Here are some results:</p>
<p>&#8211; After deploying my pod:</p>
<p><a href="http://blog.developpez.com/mikedavem/files/2021/01/171-1-lametric-pod.jpg"><img src="http://blog.developpez.com/mikedavem/files/2021/01/171-1-lametric-pod.jpg" alt="171 - 1 - lametric pod" width="483" height="82" class="alignnone size-full wp-image-1745" /></a></p>
<p>&#8211; The app is running and checking for new incoming emails (kubectl logs command):</p>
<p><a href="http://blog.developpez.com/mikedavem/files/2021/01/171-2-lametric-pod-logs.jpg"><img src="http://blog.developpez.com/mikedavem/files/2021/01/171-2-lametric-pod-logs.jpg" alt="171 - 2 - lametric pod logs" width="828" height="438" class="alignnone size-full wp-image-1747" /></a></p>
<p>When an email is detected, a <a href="https://youtu.be/EcdSFziNc3U" title="Notification" rel="noopener" target="_blank">notification</a> is sent to the Lametric device accordingly.</p>
<p>Geeky fun and a good (bad?) idea to start this new year 2021 <img src="https://blog.developpez.com/mikedavem/wp-includes/images/smilies/icon_smile.gif" alt=":-)" class="wp-smiley" /></p>
]]></content:encoded>
			<wfw:commentRss></wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Monitoring Azure SQL Databases with Azure Monitor and Automation</title>
		<link>https://blog.developpez.com/mikedavem/p13198/sql-server-2012/monitoring-azure-sql-databases-with-azure-monitor-and-automation</link>
		<comments>https://blog.developpez.com/mikedavem/p13198/sql-server-2012/monitoring-azure-sql-databases-with-azure-monitor-and-automation#comments</comments>
		<pubDate>Sun, 23 Aug 2020 15:32:07 +0000</pubDate>
		<dc:creator><![CDATA[mikedavem]]></dc:creator>
				<category><![CDATA[SQL Azure]]></category>
		<category><![CDATA[SQL Server 2012]]></category>
		<category><![CDATA[Azure]]></category>
		<category><![CDATA[Azure Alerts]]></category>
		<category><![CDATA[Azure Monitor]]></category>
		<category><![CDATA[Azure SQL Database]]></category>
		<category><![CDATA[Cloud]]></category>
		<category><![CDATA[monitoring]]></category>
		<category><![CDATA[SQL Server]]></category>

		<guid isPermaLink="false">http://blog.developpez.com/mikedavem/?p=1653</guid>
		<description><![CDATA[Supervising Cloud Infrastructure is an important aspect of Cloud administration and Azure SQL Databases are no exception. This is something we are continuously improving at my company. On-prem, DBAs often rely on well-established products but with Cloud-based architectures, often implemented &#8230; <a href="https://blog.developpez.com/mikedavem/p13198/sql-server-2012/monitoring-azure-sql-databases-with-azure-monitor-and-automation">Lire la suite <span class="meta-nav">&#8594;</span></a>]]></description>
				<content:encoded><![CDATA[<p>Supervising Cloud Infrastructure is an important aspect of Cloud administration and Azure SQL Databases are no exception. This is something we are continuously improving at my company. </p>
<p>On-prem, DBAs often rely on well-established products, but with Cloud-based architectures, often implemented through DevOps projects and developers, monitoring should be redefined to include some new topics such as:</p>
<p><span id="more-1653"></span></p>
<p>1)	Cloud service usage and fees observability<br />
2)	Detection of metrics and events that could affect the bottom line<br />
3)	Implementing a single platform to report all data coming from different sources<br />
4)	Triggering rules on the data when the workload rises above or drops below certain levels, or when a relevant event breaks the configuration standard and implies unwanted extra billing or compromises the company security rules<br />
5)	Monitoring of the user experience</p>
<p>A key benefit often discussed about Cloud computing, and mainly driven by DevOps, is how it enables agility. One of the meanings of the term agility is tied to the rapid provisioning of compute resources (in seconds or minutes), and this shortened provisioning path enables work to start quickly. You may be tempted to grant some provisioning permissions to DEV teams and, in my opinion, this is not a bad thing, but it may come with some drawbacks if it is not kept under control by the Ops team, including in the database area. Indeed, I have in mind some real cases including architecture configuration drift, security breaches created by unwanted item changes, or idle orphan resources for which you keep being charged. All of these scenarios may lead either to security issues or extra billing, and I believe it is important to get clear visibility of such events. </p>
<p>In my company, Azure built-in capabilities with the Azure Monitor architecture are our first target (at least as a first stage) and seem to address the aforementioned topics. To set the context, we already relied on the Azure Monitor infrastructure for different things, including Query Performance Insight, SQL audit analysis through Log Analytics and Azure alerts for some performance metrics. Therefore, it was the obvious way to go further by adding activity log events to the story. </p>
<p><a href="http://blog.developpez.com/mikedavem/files/2020/08/165-1-Azure-Monitor.jpg"><img src="http://blog.developpez.com/mikedavem/files/2020/08/165-1-Azure-Monitor.jpg" alt="165 - 1 - Azure Monitor" width="843" height="474" class="alignnone size-full wp-image-1655" /></a></p>
<p>In this blog post, let’s focus on items 2) and 4). I would like to share some experimentations and thoughts about them. As a reminder, items 2) and 4) are about catching relevant events to help identify configuration and security drifts and performing actions accordingly. In addition, as in many event-based architectures, additional events may appear or evolve over time, and we started thinking about the concept with the following basic diagram …</p>
<p><a href="http://blog.developpez.com/mikedavem/files/2020/08/165-2-Workflow-chart-e1598182358607.jpg"><img src="http://blog.developpez.com/mikedavem/files/2020/08/165-2-Workflow-chart-e1598182358607.jpg" alt="165 - 2 - Workflow chart" width="800" height="533" class="alignnone size-full wp-image-1657" /></a></p>
<p>… that led to the creation of the two following workflows:<br />
&#8211;	Workflow 1: To get notified immediately of critical events that may compromise security or quickly lead to significant extra billing<br />
&#8211;	Workflow 2: To get a report of other misconfigured items (including critical ones) on a schedule basis, for events that don’t require quick responsiveness from the Ops team.</p>
<p>Concerning the first workflow, using <a href="https://docs.microsoft.com/en-us/azure/azure-monitor/platform/activity-log-alerts" rel="noopener" target="_blank">alerts on activity logs</a>, action groups and webhooks as input of an Azure Automation runbook appeared to be a good solution. On the other side, the second one only requires running an Azure Automation runbook on a schedule basis. In fact, this is the same runbook but with different input parameters according to the targeted environment (e.g. PROD / ACC / INT). In addition, the runbook should be able to identify unmanaged events and notify the Ops team, who will decide either to skip them or to integrate them into the runbook processing.</p>
<p>Azure alerts can be divided into different categories, including metric alerts, log alerts and activity log alerts. The last one drew our attention because it allows getting notified of operations on specific resources, by email or by generating a JSON payload reusable from an Azure Automation runbook. Focusing on the latter, we came up with what we believe is a reasonable solution. </p>
<p>Here is the high-level picture of the architecture we have implemented:</p>
<p><a href="http://blog.developpez.com/mikedavem/files/2020/08/165-3-Architecture-e1598182462929.jpg"><img src="http://blog.developpez.com/mikedavem/files/2020/08/165-3-Architecture-e1598182462929.jpg" alt="165 - 3 - Architecture" width="800" height="347" class="alignnone size-full wp-image-1659" /></a></p>
<p>1-	During the creation of an Azure SQL Server or a database, corresponding alerts are added with Administrative category with a specific scope. Let&rsquo;s precise that concerned operations must be registered with Azure Resource Manager in order to be used in Activity Log and fortunately they are all including in the <a href="https://docs.microsoft.com/en-us/azure/role-based-access-control/resource-provider-operations" rel="noopener" target="_blank">Microsoft.Sql</a> resource provider in this case.<br />
2-	When an event occurs on the targeted environment, an alert is triggered as well as the concerned runbook.<br />
3-	The execution of the same runbook but with different input parameters is scheduled on weekly basis to a general configuration report of our Azure SQL environments.<br />
4-	According the event, Ops team gets notified and acts (either to update misconfigured item, or to delete the unauthorized item, or to update runbook code on Git Repo to handle the new event and so on …)</p>
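<p>As a sketch of step 1, with the Az.Monitor cmdlets of that era (the activity log alert cmdlets have since been reworked; IDs and names are placeholders, and the action group is assumed to already hold the webhook pointing at the runbook):</p>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap"># A hedged sketch: alert on any database write operation in the subscription<br />
$cond1 = New-AzActivityLogAlertCondition -Field 'category' -Equal 'Administrative'<br />
$cond2 = New-AzActivityLogAlertCondition -Field 'operationName' -Equal 'Microsoft.Sql/servers/databases/write'<br />
$agId = '/subscriptions/xxxx/resourceGroups/rg-xxxx/providers/microsoft.insights/actionGroups/ag-dba'<br />
$action = New-AzActionGroup -ActionGroupId $agId<br />
Set-AzActivityLogAlert -Location 'Global' -Name 'alert-sqldb-write' -ResourceGroupName 'rg-xxxx' `<br />
&nbsp; &nbsp; -Scope '/subscriptions/xxxx' -Action $action -Condition $cond1, $cond2</div></div>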
<p>The skeleton of the Azure automation runbook is pretty similar to the following one:</p>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;height:450px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap">[OutputType(&quot;PSAzureOperationResponse&quot;)]<br />
param<br />
(<br />
&nbsp; &nbsp; [Parameter (Mandatory=$false)]<br />
&nbsp; &nbsp; [object] $WebhookData<br />
&nbsp; &nbsp; ,<br />
&nbsp; &nbsp; [parameter(Mandatory=$False)]<br />
&nbsp; &nbsp; [ValidateSet(&quot;PROD&quot;,&quot;ACC&quot;,&quot;INT&quot;)]<br />
&nbsp; &nbsp; [String]$EnvTarget<br />
&nbsp; &nbsp; ,<br />
&nbsp; &nbsp; [parameter(Mandatory=$False)]<br />
&nbsp; &nbsp; [Boolean]$DebugMode = $False<br />
)<br />
<br />
<br />
<br />
<br />
If ($WebhookData)<br />
{<br />
<br />
&nbsp; &nbsp; # Logic to allow for testing in test pane<br />
&nbsp; &nbsp; If (-Not $WebhookData.RequestBody){<br />
&nbsp; &nbsp; &nbsp; &nbsp; $WebhookData = (ConvertFrom-Json -InputObject $WebhookData)<br />
&nbsp; &nbsp; }<br />
<br />
&nbsp; &nbsp; $WebhookBody = (ConvertFrom-Json -InputObject $WebhookData.RequestBody)<br />
<br />
&nbsp; &nbsp; $schemaId = $WebhookBody.schemaId<br />
<br />
&nbsp; &nbsp; If ($schemaId -eq &quot;azureMonitorCommonAlertSchema&quot;) {<br />
&nbsp; &nbsp; &nbsp; &nbsp; # This is the common Metric Alert schema (released March 2019)<br />
&nbsp; &nbsp; &nbsp; &nbsp; $Essentials = [object] ($WebhookBody.data).essentials<br />
&nbsp; &nbsp; &nbsp; &nbsp; # Get the first target only as this script doesn't handle multiple<br />
&nbsp; &nbsp; &nbsp; &nbsp; $status = $Essentials.monitorCondition<br />
<br />
&nbsp; &nbsp; &nbsp; &nbsp; # Focus only on succeeded or Fired Events<br />
&nbsp; &nbsp; &nbsp; &nbsp; If ($status -eq &quot;Succeeded&quot; -Or $Status -eq &quot;Fired&quot;)<br />
&nbsp; &nbsp; &nbsp; &nbsp; {<br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; # Extract info from webhook <br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; $alertTargetIdArray = (($Essentials.alertTargetIds)[0]).Split(&quot;/&quot;)<br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; $SubId = ($alertTargetIdArray)[2]<br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; $ResourceGroupName = ($alertTargetIdArray)[4]<br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; $ResourceType = ($alertTargetIdArray)[6] + &quot;/&quot; + ($alertTargetIdArray)[7]<br />
<br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; # Determine code path depending on the resourceType<br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; if ($ResourceType -eq &quot;microsoft.sql/servers&quot;)<br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; {<br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; # DEBUG<br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; Write-Output &quot;This is a SQL Server Resource.&quot;<br />
<br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; $firedDate = $Essentials.firedDateTime<br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; $AlertContext = [object] ($WebhookBody.data).alertContext<br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; $channel = $AlertContext.channels<br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; $EventSource = $AlertContext.eventSource<br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; $Level = $AlertContext.level<br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; $Operation = $AlertContext.operationName<br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; $Properties = [object] ($WebhookBody.data).alertContext.properties<br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; $EventName = $Properties.eventName<br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; $EventStatus = $Properties.status<br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; $Description = $Properties.description_scrubbed<br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; $Caller = $Properties.caller<br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; $IPAddress = $Properties.ipAddress<br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; $ResourceName = ($alertTargetIdArray)[8]<br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; $DatabaseName = ($alertTargetIdArray)[10]<br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; $Operation_detail = $Operation.Split('/')<br />
<br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; # Check firewall rules<br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; If ($EventName -eq 'OverwriteFirewallRules'){<br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; Write-Output &quot;Firewall Overwrite is detected ...&quot;<br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; # Code to handle firewall update event<br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; }<br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; # Update DB =&gt; No need to be monitored in real time<br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; Elseif ($EventName -eq 'UpdateDatabase') {<br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; # Code to handle Database config update event or skip <br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; }<br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; # Create DB<br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; Elseif ($EventName -eq 'CreateDatabase' -Or `<br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; $Operation -eq 'Microsoft.Sql/servers/databases/write'){<br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; Write-Output &quot;Azure Database creation has been detected ...&quot;<br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; # Code to handle Database creation event or skip <br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; }<br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; # Delete DB<br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; Elseif ($EventName -eq 'DeleteDatabase' -Or `<br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; $Operation -eq 'Microsoft.Sql/servers/databases/delete') {<br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; Write-Output &quot;Azure Database has been deleted ...&quot;<br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; # Code to handle Database deletion event or skip <br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; }<br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; Elseif ($Operation -eq 'Microsoft.Sql/servers/databases/transparentDataEncryption/write') {<br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; Write-Output &quot;Azure Database Encryption update has been detected ...&quot;<br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; # Code to handle Database encryption update event or skip <br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; }<br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; Elseif ($Operation -eq 'Microsoft.Sql/servers/databases/auditingSettings/write') {<br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; Write-Output &quot;Azure Database Audit update has been detected ...&quot;<br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; # Code to handle Database audit update event or skip <br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; }<br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; Elseif ($Operation -eq 'Microsoft.Sql/servers/databases/securityAlertPolicies/write' -or $Operation -eq 'Microsoft.Sql/servers/databases/vulnerabilityAssessments/write') {<br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; Write-Output &quot;Azure ADS update has been detected ...&quot;<br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; # Code to handle ADS update event or skip <br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; }<br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; ElseIf ($Operation -eq 'Microsoft.Sql/servers/databases/backupShortTermRetentionPolicies/write'){<br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; Write-Output &quot;Azure Retention Backup has been modified ...&quot;<br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; # Code to handle Database retention backup update event or skip <br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; }<br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; # ... other ones <br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; Else {<br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; Write-Output &quot;Event not managed yet&quot;<br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; }<br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; }<br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; else {<br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; # ResourceType not supported<br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; Write-Error &quot;$ResourceType is not a supported resource type for this runbook.&quot;<br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; }<br />
&nbsp; &nbsp; &nbsp; &nbsp; }<br />
&nbsp; &nbsp; &nbsp; &nbsp; Else {<br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; # The alert status was not 'Succeeded' or 'Fired' so no action taken<br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; Write-Verbose (&quot;No action taken. Alert status: &quot; + $status) -Verbose<br />
&nbsp; &nbsp; &nbsp; &nbsp; }<br />
&nbsp; &nbsp; }<br />
&nbsp; &nbsp; Else{<br />
&nbsp; &nbsp; &nbsp; &nbsp;# SchemaID doesn't correspond to azureMonitorCommonAlertSchema =&gt;&gt; Skip<br />
&nbsp; &nbsp; &nbsp; &nbsp;Write-Host &quot;Skip ...&quot; <br />
&nbsp; &nbsp; }<br />
}<br />
Else {<br />
&nbsp; &nbsp; Write-Output &quot;No Webhook detected ... switch to normal mode ...&quot;<br />
<br />
&nbsp; &nbsp; If ([String]::IsNullOrEmpty($EnvTarget)){<br />
&nbsp; &nbsp; &nbsp; &nbsp; Write-Error '$EnvTarget is mandatory in normal mode'<br />
&nbsp; &nbsp; }<br />
<br />
&nbsp; &nbsp; #########################################################<br />
&nbsp; &nbsp; # Code for a complete check of Azure SQL DB environment #<br />
&nbsp; &nbsp; #########################################################<br />
}</div></div>
<p>Some comments about the PowerShell script:</p>
<p>1)	Input parameters should include either the Webhook data or specific parameter values for a complete Azure SQL DB check.<br />
2)	The first section should include your own functions to respond to the different events. In our context, we currently drew on <a href="https://github.com/sqlcollaborative/dbachecks" rel="noopener" target="_blank">DBAChecks</a> with the idea of developing a derived model, but why not use DBAChecks directly in the near future?<br />
3)	When an event is triggered, a JSON payload is generated and provides insight. The point here is that you must navigate through different properties according to the operation type (cf. <a href="https://docs.microsoft.com/en-us/azure/azure-monitor/platform/activity-log-schema" rel="noopener" target="_blank">BOL</a>).<br />
4)	The increase in events to manage could become an issue, making the runbook bloated, especially if we keep both the core functions and the event processing together. To mitigate this, we are thinking of moving functions into modules in Azure Automation (next step).</p>
<p><strong>Bottom line</strong></p>
<p>Thanks to Azure built-in capabilities, we improved our visibility of events that occur in the Azure SQL environment (both expected and unexpected) and we’re now able to act accordingly. But I should tell you that going this way is not a free lunch: we achieved a reasonable solution only after some programming and testing efforts. If you can invest the time, it is probably the kind of solution worth adding to your study.</p>
<p>See you</p>
]]></content:encoded>
			<wfw:commentRss></wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>AAD user creation on behalf AAD Service Principal with Azure SQL DB</title>
		<link>https://blog.developpez.com/mikedavem/p13197/sql-azure/aad-user-creation-on-behalf-aad-service-principal-with-azure-sql-db</link>
		<comments>https://blog.developpez.com/mikedavem/p13197/sql-azure/aad-user-creation-on-behalf-aad-service-principal-with-azure-sql-db#comments</comments>
		<pubDate>Sun, 02 Aug 2020 22:28:06 +0000</pubDate>
		<dc:creator><![CDATA[mikedavem]]></dc:creator>
				<category><![CDATA[DevOps]]></category>
		<category><![CDATA[PowerShell]]></category>
		<category><![CDATA[SQL Azure]]></category>
		<category><![CDATA[Authentication]]></category>
		<category><![CDATA[Azure Automation]]></category>
		<category><![CDATA[Azure SQL Database]]></category>
		<category><![CDATA[Azure SQL DB]]></category>
		<category><![CDATA[Powershell]]></category>
		<category><![CDATA[Runbook]]></category>
		<category><![CDATA[Security]]></category>
		<category><![CDATA[Service Principal]]></category>
		<category><![CDATA[SQL Server]]></category>
		<category><![CDATA[System managed identity]]></category>

		<guid isPermaLink="false">http://blog.developpez.com/mikedavem/?p=1643</guid>
		<description><![CDATA[An interesting improvement was announced by the SQL AAD team on Monday 27th July 2020 and concerns the support for Azure AD user creation on behalf of Azure AD Applications for Azure SQL as mentioned in this Microsoft blog post. &#8230; <a href="https://blog.developpez.com/mikedavem/p13197/sql-azure/aad-user-creation-on-behalf-aad-service-principal-with-azure-sql-db">Lire la suite <span class="meta-nav">&#8594;</span></a>]]></description>
				<content:encoded><![CDATA[<p>An interesting improvement was announced by the SQL AAD team on Monday 27th July 2020 and concerns the support for Azure AD user creation on behalf of Azure AD Applications for Azure SQL, as mentioned in this <a href="https://techcommunity.microsoft.com/t5/azure-sql-database/support-for-azure-ad-user-creation-on-behalf-of-azure-ad/ba-p/1491121" rel="noopener" target="_blank">Microsoft blog post</a>. </p>
<p><span id="more-1643"></span></p>
<p>In my company, this is something we had been looking for for a while for our database refresh process in Azure. Before talking about this new feature, let me share a brief history of the different considerations we had for this DB refresh process over time, with the different approaches we went through. First, let’s clarify that a DB refresh usually includes at least two steps: restoring a backup / copying a database – you have both options in Azure SQL Database – and realigning the security context with specific users for your target environment (ACC / INT …).  But the latter is not as trivial as you may expect if you opted to use either a SQL Login / User or a Service Principal to carry out this operation in your process. Indeed, in both cases creating an Azure AD User or Group is not supported, and if you try you will face this error message:</p>
<blockquote><p>‘’ is not a valid login or you do not have permission. </p></blockquote>
<p>All the work done so far (either Azure Automation runbooks or PowerShell modules on-prem) and described afterwards follows the same process:</p>
<p><a href="http://blog.developpez.com/mikedavem/files/2020/08/164-1-DB-Refresh-process-e1596406580306.jpg"><img src="http://blog.developpez.com/mikedavem/files/2020/08/164-1-DB-Refresh-process-e1596406580306.jpg" alt="164 - 1 - DB Refresh process" width="800" height="566" class="alignnone size-full wp-image-1645" /></a></p>
<p>First, we used Invoke-Sqlcmd in an Azure Automation runbook with a T-SQL query to create a copy of a source database on the target server. T-SQL is mandatory in this case, as <a href="https://docs.microsoft.com/en-us/azure/azure-sql/database/database-copy?tabs=azure-powershell" rel="noopener" target="_blank">documented</a> in the Microsoft BOL, because the PROD and ACC or INT servers are not in the same subscription. Here is a simplified code sample:</p>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap">...<br />
$CopyDBCMD = @{<br />
&nbsp; &nbsp; 'Database' = 'master'<br />
&nbsp; &nbsp; 'ServerInstance' = $TargetServerName<br />
&nbsp; &nbsp; 'Username' = $SQLUser<br />
&nbsp; &nbsp; 'Password' = $SQLPWD<br />
&nbsp; &nbsp; 'Query' = 'CREATE DATABASE '+ '[' + $DatabaseName + '] ' + 'AS COPY OF ' + '[' + $SourceServerName + '].[' + $DatabaseName + ']'<br />
} <br />
<br />
Invoke-Sqlcmd @CopyDBCMD <br />
...</div></div>
<p>But as you likely know, Invoke-Sqlcmd doesn’t support AAD authentication, and because SQL Login authentication was the only option here, it left us with an annoying issue in the security configuration step with AAD users or groups, as you may imagine. </p>
<p>Then, because we base authentication mainly on a trust architecture and our security rules require using it everywhere, including apps with managed identities or service principals, we also wanted to introduce this concept into our database refresh process. Fortunately, service principals have been supported with <a href="https://techcommunity.microsoft.com/t5/azure-sql-database/token-based-authentication-support-for-azure-sql-db-using-azure/ba-p/386091" rel="noopener" target="_blank">Azure SQL DB since v12</a> through access-token authentication with ADALSQL. The corresponding DLL is required on your server; if you use it from Azure Automation like us, add the ADAL.PS module, but be aware it is now deprecated and I strongly advise you to invest in moving to MSAL. Here is a sample we used:</p>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;height:450px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap">...<br />
$response = Get-ADALToken `<br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; -ClientId $clientId `<br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; -ClientSecret $clientSecret `<br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; -Resource $resourceUri `<br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; -Authority $authorityUri `<br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; -TenantId $tenantName<br />
<br />
...<br />
<br />
$connectionString = &quot;Server=tcp:$SqlInstanceFQDN,1433;Initial Catalog=master;Persist Security Info=False;MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=False;&quot;<br />
# Create the connection object<br />
$connection = New-Object System.Data.SqlClient.SqlConnection($connectionString)<br />
# Set AAD generated token to SQL connection token<br />
$connection.AccessToken = $response.AccessToken<br />
<br />
Try {<br />
&nbsp; &nbsp; $connection.Open()<br />
&nbsp; &nbsp; ...<br />
} <br />
...</div></div>
<p>But again, even if the copy or restore steps are well managed, we were still stuck with the security reconfiguration, because service principals were not supported for creating AAD users or groups so far &#8230;</p>
<p>In the meantime, we found a temporary and interesting solution based on the <a href="https://dbatools.io/" rel="noopener" target="_blank">dbatools framework</a> and the <a href="https://docs.dbatools.io/#Invoke-DbaQuery" rel="noopener" target="_blank">Invoke-DbaQuery command</a>, which supports AAD authentication (login + password). As we could not rely on a service principal in this case, using a dedicated AAD account was an acceptable tradeoff to manage all the database refresh steps. But going this way comes with some disadvantages, because running Invoke-DbaQuery in full Azure Automation mode is not possible due to the missing ADALSQL.dll. A workaround may be to use a Hybrid Runbook Worker, but we didn’t want to add complexity to our current architecture only for this special case. Instead, we decided to move the logic of the Azure Automation runbook into our on-prem PowerShell framework, which already includes the DB refresh logic for on-prem SQL Server instances. </p>
<p>Here is a simplified sample of the code we are using:</p>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;height:450px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap">...<br />
Try {<br />
&nbsp; &nbsp; # Connect to get access to Key Vault info<br />
&nbsp; &nbsp; Connect-AzAccount | Out-Null<br />
<br />
&nbsp; &nbsp; [String]$user = (Get-AzKeyVaultSecret -VaultName $KeyvaultName -Name &quot;AZSQL-SQLBCKUSER&quot;).SecretValueText<br />
&nbsp; &nbsp; [System.Security.SecureString]$pwd = &nbsp;ConvertTo-SecureString (Get-AzKeyVaultSecret -VaultName $KeyvaultName -Name &quot;AZSQL-SQLBCKPWD&quot;).SecretValueText -AsPlainText -Force<br />
&nbsp; &nbsp; [String]$SourceServerName = (Get-AzKeyVaultSecret -VaultName $KeyvaultName -Name &quot;AZSQL-NAME&quot;).SecretValueText<br />
&nbsp; &nbsp; [String]$TargetServerName = (Get-AzKeyVaultSecret -VaultName $KeyvaultName -Name &quot;AZSQL-TARGETNAME&quot;).SecretValueText + '.database.windows.net'<br />
<br />
&nbsp; &nbsp; # DB Restore will be performed in the context of dedicated AAD account <br />
&nbsp; &nbsp; $pscredential = New-Object -TypeName System.Management.Automation.PSCredential($user, $pwd)<br />
<br />
&nbsp; &nbsp; Write-Host &quot;Restoring DB:$DatabaseName from Source Server: $SourceServerName to Target Server: $TargetServerName&quot;<br />
&nbsp; &nbsp; <br />
&nbsp; &nbsp; $Query = &quot;CREATE DATABASE [$DatabaseName] AS COPY OF [$SourceServerName].[$DatabaseName]&quot;<br />
&nbsp; &nbsp; Invoke-DbaQuery `<br />
&nbsp; &nbsp; &nbsp; &nbsp; -SqlInstance $TargetServerName `<br />
&nbsp; &nbsp; &nbsp; &nbsp; -Database master `<br />
&nbsp; &nbsp; &nbsp; &nbsp; -SqlCredential $pscredential `<br />
&nbsp; &nbsp; &nbsp; &nbsp; -Query $Query `<br />
&nbsp; &nbsp; &nbsp; &nbsp; -EnableException <br />
<br />
&nbsp; &nbsp; # Wait for DB online and ready ... <br />
&nbsp; &nbsp; # Code should be implemented for this check <br />
<br />
<br />
&nbsp; &nbsp; Write-Output &quot;Applying security configuration to DB: $DatabaseName on Server:$TargetServerName&quot;<br />
<br />
&nbsp; &nbsp; $Query = &quot;<br />
&nbsp; &nbsp; &nbsp; &nbsp; DROP USER [az_sql_ro];CREATE USER [az_sql_ro] FROM EXTERNAL PROVIDER;<br />
&nbsp; &nbsp; &quot;<br />
&nbsp; &nbsp; Invoke-DbaQuery `<br />
&nbsp; &nbsp; &nbsp; &nbsp; -SqlInstance $TargetServerName `<br />
&nbsp; &nbsp; &nbsp; &nbsp; -Database $DatabaseName `<br />
&nbsp; &nbsp; &nbsp; &nbsp; -SqlCredential $pscredential `<br />
&nbsp; &nbsp; &nbsp; &nbsp; -Query $Query `<br />
&nbsp; &nbsp; &nbsp; &nbsp; -EnableException<br />
<br />
}<br />
Catch {<br />
&nbsp; &nbsp; Write-Host &quot;Error encountered: $($_.Exception.Message)&quot;<br />
} <br />
...</div></div>
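<p>Regarding the "Wait for DB online and ready" placeholder above, a minimal polling sketch could look like this (an illustration on my side, not the code we run in production):</p>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap"># Poll sys.databases on the target server until the copied DB reports ONLINE<br />
$Query = &quot;SELECT state_desc FROM sys.databases WHERE name = '$DatabaseName'&quot;<br />
Do {<br />
&nbsp; &nbsp; Start-Sleep -Seconds 30<br />
&nbsp; &nbsp; $state = Invoke-DbaQuery `<br />
&nbsp; &nbsp; &nbsp; &nbsp; -SqlInstance $TargetServerName `<br />
&nbsp; &nbsp; &nbsp; &nbsp; -Database master `<br />
&nbsp; &nbsp; &nbsp; &nbsp; -SqlCredential $pscredential `<br />
&nbsp; &nbsp; &nbsp; &nbsp; -Query $Query<br />
&nbsp; &nbsp; # state_desc shows COPYING while the copy is still in progress<br />
} Until ($state.state_desc -eq 'ONLINE')</div></div>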
<p>Referring to the PowerShell code above, in the second step we create the AAD user [az_sql_ro] on behalf of the dedicated AAD account with the FROM EXTERNAL PROVIDER clause. </p>
<p>Finally, with the latest news published by the SQL AAD team, we will likely consider switching back to a service principal instead of the dedicated AAD account. <a href="https://techcommunity.microsoft.com/t5/azure-sql-database/support-for-azure-ad-user-creation-on-behalf-of-azure-ad/ba-p/1491121" rel="noopener" target="_blank">This Microsoft blog post</a> explains in detail how it works and what you have to set up to make it work correctly. I don’t want to duplicate what is already explained, so I will just apply the new feature to my context. </p>
<p>Referring to the above blog post, you first need to set up a server identity for your Azure SQL Server, as shown below:</p>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap">Set-AzSqlServer `<br />
&nbsp; &nbsp; -ResourceGroupName sandox-rg `<br />
&nbsp; &nbsp; -ServerName a-s-sql02 `<br />
&nbsp; &nbsp; -AssignIdentity<br />
<br />
# Check server identity<br />
Get-AzSqlServer `<br />
&nbsp; &nbsp; -ResourceGroupName sandox-rg `<br />
&nbsp; &nbsp; -ServerName a-s-sql02 | `<br />
&nbsp; &nbsp; Select-Object ServerName, Identity</div></div>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap">ServerName Identity &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;<br />
---------- -------- &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;<br />
a-s-sql02 &nbsp;Microsoft.Azure.Management.Sql.Models.ResourceIdentity</div></div>
<p>Let&rsquo;s have a look at the server identity:</p>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap"># Get identity details<br />
$identity = Get-AzSqlServer `<br />
&nbsp; &nbsp; &nbsp; &nbsp; -ResourceGroupName sandox-rg `<br />
&nbsp; &nbsp; &nbsp; &nbsp; -ServerName a-s-sql02<br />
<br />
$identity.identity</div></div>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap">PrincipalId &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;Type &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; TenantId &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;<br />
----------- &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;---- &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; -------- &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;<br />
7f0d16f7-b172-4c97-94d3-34f0f7ed93cf SystemAssigned 2fcd19a7-ab24-4aef-802b-6851ef5d1ed5</div></div>
<p>In fact, assigning a server identity means creating a system-assigned managed identity in the Azure AD tenant that&rsquo;s trusted by the subscription of the instance. To keep things simple, let’s say that a system-managed identity in Azure is akin to a Managed Service Account or Group Managed Service Account on-prem: those identities are self-managed by the system. Then you need to grant this identity the Azure AD &laquo;&nbsp;Directory Readers&nbsp;&raquo; permission so it gets the rights to create AAD users or groups on its behalf. A complete PowerShell script for this is provided by Microsoft <a href="https://docs.microsoft.com/en-us/azure/azure-sql/database/authentication-aad-service-principal-tutorial" rel="noopener" target="_blank">here</a>.</p>
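<p>As a minimal sketch (assuming the AzureAD module and sufficient directory privileges on my side), the role assignment boils down to:</p>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap"># Grant the 'Directory Readers' role to the server identity (AzureAD module)<br />
Connect-AzureAD<br />
<br />
# The role must already be enabled in the tenant to show up here<br />
$role = Get-AzureADDirectoryRole | Where-Object { $_.DisplayName -eq 'Directory Readers' }<br />
<br />
# $identity comes from the Get-AzSqlServer call shown earlier<br />
Add-AzureADDirectoryRoleMember `<br />
&nbsp; &nbsp; -ObjectId $role.ObjectId `<br />
&nbsp; &nbsp; -RefObjectId $identity.Identity.PrincipalId</div></div>
<p>With the identity and permission in place, here is a sample of code I applied in my context for testing:</p>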
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;height:450px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap">...<br />
Try {<br />
&nbsp; &nbsp; $DatabaseName = &quot;test-DBA&quot; &nbsp; <br />
&nbsp; &nbsp; &nbsp; <br />
&nbsp; &nbsp; # Connect to get access to Key Vault info<br />
&nbsp; &nbsp; Connect-AzAccount | Out-Null<br />
<br />
&nbsp; &nbsp; [String]$user = (Get-AzKeyVaultSecret -VaultName $KeyvaultName -Name &quot;AZSQL-SQLBCKAPPID&quot;).SecretValueText<br />
&nbsp; &nbsp; [System.Security.SecureString]$pwd = &nbsp;ConvertTo-SecureString (Get-AzKeyVaultSecret -VaultName $KeyvaultName -Name &quot;AZSQL-SQLBCKAPPSECRET&quot;).SecretValueText -AsPlainText -Force<br />
&nbsp; &nbsp; [String]$SourceServerName = (Get-AzKeyVaultSecret -VaultName $KeyvaultName -Name &quot;AZSQL-NAME&quot;).SecretValueText<br />
&nbsp; &nbsp; [String]$TargetServerName = (Get-AzKeyVaultSecret -VaultName $KeyvaultName -Name &quot;AZSQL-TARGETNAME&quot;).SecretValueText + '.database.windows.net'<br />
<br />
&nbsp; &nbsp; # DB Restore will be performed in the context of dedicated AAD account <br />
&nbsp; &nbsp; $pscredential = New-Object -TypeName System.Management.Automation.PSCredential($user, $pwd)<br />
<br />
&nbsp; &nbsp; $adalPath &nbsp;= &quot;${env:ProgramFiles}\WindowsPowerShell\Modules\Az.Profile.7.0\PreloadAssemblies&quot;<br />
&nbsp; &nbsp; # To install the latest AzureRM.profile version, execute Install-Module -Name AzureRM.profile<br />
&nbsp; &nbsp; $adal &nbsp; &nbsp; &nbsp;= &quot;$adalPath\Microsoft.IdentityModel.Clients.ActiveDirectory.dll&quot;<br />
&nbsp; &nbsp; $adalforms = &quot;$adalPath\Microsoft.IdentityModel.Clients.ActiveDirectory.WindowsForms.dll&quot;<br />
&nbsp; &nbsp; [System.Reflection.Assembly]::LoadFrom($adal) | Out-Null<br />
&nbsp; &nbsp; $resourceAppIdURI = 'https://database.windows.net/'<br />
<br />
&nbsp; &nbsp; # Set Authority to Azure AD Tenant<br />
&nbsp; &nbsp; $authority = 'https://login.windows.net/' + $tenantId<br />
<br />
&nbsp; &nbsp; $ClientCred = [Microsoft.IdentityModel.Clients.ActiveDirectory.ClientCredential]::new($clientId, $clientSecret)<br />
&nbsp; &nbsp; $authContext = [Microsoft.IdentityModel.Clients.ActiveDirectory.AuthenticationContext]::new($authority)<br />
&nbsp; &nbsp; $authResult = $authContext.AcquireTokenAsync($resourceAppIdURI,$ClientCred)<br />
&nbsp; &nbsp; $Tok = $authResult.Result.CreateAuthorizationHeader()<br />
&nbsp; &nbsp; $Tok=$Tok.Replace(&quot;Bearer &quot;,&quot;&quot;)<br />
&nbsp; &nbsp; <br />
&nbsp; &nbsp; Write-host &quot;Token generated is ...&quot;<br />
&nbsp; &nbsp; $Tok<br />
&nbsp; &nbsp; Write-host &nbsp;&quot;&quot;<br />
<br />
&nbsp; &nbsp; Write-Host &quot;Create SQL connectionstring&quot;<br />
&nbsp; &nbsp; $conn = New-Object System.Data.SqlClient.SQLConnection <br />
&nbsp; &nbsp; <br />
&nbsp; &nbsp; $conn.ConnectionString = &quot;Data Source=$TargetServerName;Initial Catalog=master;Connect Timeout=30&quot;<br />
&nbsp; &nbsp; $conn.AccessToken = $Tok<br />
<br />
&nbsp; &nbsp; Write-host &quot;Connect to database and execute SQL script&quot;<br />
&nbsp; &nbsp; $conn.Open() <br />
<br />
&nbsp; &nbsp; Write-Host &quot;Check connected user ...&quot;<br />
&nbsp; &nbsp; $Query = &quot;SELECT USER_NAME() AS [user_name];&quot;<br />
&nbsp; &nbsp; $command = New-Object -TypeName System.Data.SqlClient.SqlCommand($Query, $conn)<br />
&nbsp; &nbsp; $Command.ExecuteScalar()<br />
&nbsp; &nbsp; $conn.Close()<br />
<br />
&nbsp; &nbsp; Write-Host &quot;Restoring DB:$DatabaseName from Source Server: $SourceServerName to Target Server: $TargetServerName&quot;<br />
<br />
&nbsp; &nbsp; $conn.ConnectionString = &quot;Data Source=$TargetServerName;Initial Catalog=master;Connect Timeout=30&quot;<br />
&nbsp; &nbsp; $conn.AccessToken = $Tok<br />
&nbsp; &nbsp; $conn.Open()<br />
&nbsp; &nbsp; $Query = &quot;DROP DATABASE IF EXISTS [$DatabaseName]; CREATE DATABASE [$DatabaseName] AS COPY OF [$SourceServerName].[$DatabaseName]&quot;<br />
&nbsp; &nbsp; $command = New-Object -TypeName System.Data.SqlClient.SqlCommand($Query, $conn)<br />
&nbsp; &nbsp; $command.CommandTimeout = 1200<br />
&nbsp; &nbsp; $command.ExecuteNonQuery()<br />
&nbsp; &nbsp; $conn.Close()<br />
<br />
&nbsp; &nbsp; # Wait for DB online and ready ... <br />
&nbsp; &nbsp; # Code should be implemented for this check <br />
<br />
&nbsp; &nbsp; <br />
&nbsp; &nbsp; Write-Output &quot;Applying security configuration to DB: $DatabaseName on Server:$TargetServerName&quot;<br />
<br />
&nbsp; &nbsp; $conn.ConnectionString = &quot;Data Source=$TargetServerName;Initial Catalog=$DatabaseName;Connect Timeout=30&quot;<br />
&nbsp; &nbsp; $conn.AccessToken = $Tok<br />
&nbsp; &nbsp; $conn.Open() <br />
&nbsp; &nbsp; $Query = 'CREATE USER [az_sql_ro] FROM EXTERNAL PROVIDER;'<br />
&nbsp; &nbsp; $command = New-Object -TypeName System.Data.SqlClient.SqlCommand($Query, $conn) &nbsp; &nbsp; &nbsp; <br />
&nbsp; &nbsp; $command.ExecuteNonQuery()<br />
&nbsp; &nbsp; $conn.Close()<br />
<br />
}<br />
Catch {<br />
&nbsp; &nbsp; Write-Output &quot;Error encountered: $($_.Exception.Message)&quot;<br />
} <br />
...</div></div>
<p>Using a service principal required few changes in my case. I now get the credentials of the service principal (ClientId and secret) from Azure Key Vault instead of those of the dedicated AAD account used in the previous example. I also changed the way to connect to SQL Server by relying on ADALSQL to get the access token instead of using dbatools commands. Indeed, as far as I know, dbatools doesn’t support this authentication method (yet?). </p>
<p>The authentication process becomes as follows:</p>
<p><a href="http://blog.developpez.com/mikedavem/files/2020/08/164-3-new-auth-process-e1596407082747.jpg"><img src="http://blog.developpez.com/mikedavem/files/2020/08/164-3-new-auth-process-e1596407082747.jpg" alt="164 - 3 - new auth process" width="800" height="610" class="alignnone size-full wp-image-1647" /></a></p>
<p>My first test seems conclusive:</p>
<p><a href="http://blog.developpez.com/mikedavem/files/2020/08/164-4-test-with-SP-e1596407153885.jpg"><img src="http://blog.developpez.com/mikedavem/files/2020/08/164-4-test-with-SP-e1596407153885.jpg" alt="164 - 4 - test with SP" width="800" height="301" class="alignnone size-full wp-image-1648" /></a></p>
<p>This improvement looks promising and may cover broader scenarios than the one I described in this blog post. The feature is in preview at the time of this write-up and I hope to see it reach GA soon, as well as potential support in my preferred PowerShell framework dbatools <img src="https://blog.developpez.com/mikedavem/wp-includes/images/smilies/icon_smile.gif" alt=":)" class="wp-smiley" /></p>
<p>See you!</p>
]]></content:encoded>
			<wfw:commentRss></wfw:commentRss>
		<slash:comments>2</slash:comments>
		</item>
		<item>
		<title>Database maintenance thoughts with Azure SQL databases</title>
		<link>https://blog.developpez.com/mikedavem/p13192/sql-azure/database-maintenance-concerns-with-azure-sql-databases</link>
		<comments>https://blog.developpez.com/mikedavem/p13192/sql-azure/database-maintenance-concerns-with-azure-sql-databases#comments</comments>
		<pubDate>Sun, 29 Mar 2020 20:55:33 +0000</pubDate>
		<dc:creator><![CDATA[mikedavem]]></dc:creator>
				<category><![CDATA[DevOps]]></category>
		<category><![CDATA[SQL Azure]]></category>
		<category><![CDATA[Azure Automation]]></category>
		<category><![CDATA[Azure SQL Scheduling]]></category>
		<category><![CDATA[devops]]></category>
		<category><![CDATA[Git]]></category>
		<category><![CDATA[Powershell]]></category>
		<category><![CDATA[Runbook]]></category>

		<guid isPermaLink="false">http://blog.developpez.com/mikedavem/?p=1552</guid>
		<description><![CDATA[As a DBA, your priority is to ensure your data is consistent, safely backed up, and that your database delivers steady performance. In on-prem environments, these tasks are generally performed through scheduled jobs including backups, integrity checks and index / &#8230; <a href="https://blog.developpez.com/mikedavem/p13192/sql-azure/database-maintenance-concerns-with-azure-sql-databases">Lire la suite <span class="meta-nav">&#8594;</span></a>]]></description>
				<content:encoded><![CDATA[<p>As a DBA, your priority is to ensure your data is consistent, safely backed up, and that your database delivers steady performance. In on-prem environments, these tasks are generally performed through scheduled jobs including backups, integrity checks and index / statistics maintenance tasks. </p>
<p>But moving databases to the cloud in Azure (and others) tells a different story. Indeed, even if the same concerns and tasks remain, some of them fall under the responsibility of the cloud provider and some others do not. If you’re working with Azure SQL databases – like me – some questions arise very quickly on this topic, and that was my motivation for this write-up. I would like to share some new experiences by digging into the different maintenance items. If you have a different story to tell, please feel free to comment and share your own experience!</p>
<p><span id="more-1552"></span></p>
<p><strong>Database backups</strong></p>
<p>Microsoft takes over the database backups with a strategy based on FULL (every week), DIFF (every 12 hours) and LOG (every 5 to 10 min) backups, with cross-datacenter replication of the backup data. As far as I know, we cannot change this strategy, but we may change the retention period and extend it with an archiving period of up to 10 years by enabling long-term retention (LTR). The latter assumes it is supported by your database service level and the options that come with it. For instance, we are using some Azure SQL databases in serverless mode, which doesn’t support LTR. This strategy provides different methods to restore an Azure database, including PITR, geo-restore or the ability to restore a deleted database. We are using some of them for our database refreshes between Azure SQL servers, or sometimes to restore previous database states for testing. However, just be aware that even if restoring a database is a trivial operation in Azure, it may take a long time depending on your context and the factors described <a href="https://docs.microsoft.com/en-us/azure/sql-database/sql-database-recovery-using-backups" rel="noopener" target="_blank">here</a>. In our context, a restore operation may take up to 2.5h (600GB of data to restore on Gen5).</p>
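<p>As an illustration, a PITR restore can be scripted with the Az.Sql module; here is a minimal sketch with hypothetical resource names:</p>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap"># Point-in-time restore to a new database on the same server<br />
$db = Get-AzSqlDatabase -ResourceGroupName 'my-rg' -ServerName 'my-server' -DatabaseName 'MyDB'<br />
<br />
Restore-AzSqlDatabase `<br />
&nbsp; &nbsp; -FromPointInTimeBackup `<br />
&nbsp; &nbsp; -PointInTime (Get-Date).AddHours(-2) `<br />
&nbsp; &nbsp; -ResourceGroupName $db.ResourceGroupName `<br />
&nbsp; &nbsp; -ServerName $db.ServerName `<br />
&nbsp; &nbsp; -TargetDatabaseName 'MyDB_restored' `<br />
&nbsp; &nbsp; -ResourceId $db.ResourceId</div></div>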
<p>In addition, it is worth noting that there is no free lunch here and you will pay for storing your backups, probably more than you initially expect. The cost is obviously tied to your backup size for FULL, DIFF and LOG and to the retention period, making the budget sometimes hard to predict. According to discussions with some colleagues and other MVPs, it seems we are not alone in this case, and my advice is to keep an eye on your costs. Here is a quick and real picture of the cost ratio between compute + database storage versus backup storage (PITR + LTR), with a PITR retention of 35 days and LTR (max retention of one year):</p>
<p><a href="http://blog.developpez.com/mikedavem/files/2020/03/159-3-DB-retention-policies-PITR-LTR-1.jpg"><img src="http://blog.developpez.com/mikedavem/files/2020/03/159-3-DB-retention-policies-PITR-LTR-1.jpg" alt="159 - 3 - DB retention policies PITR LTR" width="1492" height="376" class="alignnone size-full wp-image-1565" /></a></p>
<p>&#8230;</p>
<p><a href="http://blog.developpez.com/mikedavem/files/2020/03/159-2-Cost-ratio-compute-storage-backup-.jpg"><img src="http://blog.developpez.com/mikedavem/files/2020/03/159-2-Cost-ratio-compute-storage-backup-.jpg" alt="159 - 2 - Cost ratio compute - storage - backup" width="819" height="263" class="alignnone size-full wp-image-1555" /></a></p>
<p>As you may notice, half of the total fees for the Azure SQL database may relate to backup storage alone. On our side, we are working hard on reducing this ratio, but this is another topic, out of the scope of this blog post.</p>
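<p>As a starting point for that work, retention policies can be reviewed and adjusted by script; here is a minimal sketch with hypothetical names (assuming the Az.Sql module):</p>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap"># Check then reduce the PITR retention<br />
Get-AzSqlDatabaseBackupShortTermRetentionPolicy `<br />
&nbsp; &nbsp; -ResourceGroupName 'my-rg' -ServerName 'my-server' -DatabaseName 'MyDB'<br />
<br />
Set-AzSqlDatabaseBackupShortTermRetentionPolicy `<br />
&nbsp; &nbsp; -ResourceGroupName 'my-rg' -ServerName 'my-server' -DatabaseName 'MyDB' `<br />
&nbsp; &nbsp; -RetentionDays 14<br />
<br />
# LTR policy uses ISO 8601 durations (P12M = 12 months, P1Y = 1 year ...)<br />
Set-AzSqlDatabaseBackupLongTermRetentionPolicy `<br />
&nbsp; &nbsp; -ResourceGroupName 'my-rg' -ServerName 'my-server' -DatabaseName 'MyDB' `<br />
&nbsp; &nbsp; -WeeklyRetention P4W -MonthlyRetention P12M -YearlyRetention P1Y -WeekOfYear 1</div></div>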
<p><strong>Database integrity check</strong></p>
<p>Should we continue to use the famous DBCC CHECKDB command? Well, the answer is no: the Azure SQL Database engineering team takes responsibility for managing data integrity. During internal team discussions we wondered what the process would be to recover corrupt data and how fast corruptions are treated by the Azure team. All these questions seem to be addressed in a dedicated Microsoft blog post, and for us it was important to know the Microsoft response time in case of database corruption because it may impact the retention policy. The faster Microsoft warns you about an integrity issue, the shorter the retention needed to rewind to the last consistent point (within a reasonable order of magnitude, obviously). </p>
<p><strong>Database maintenance (statistics and indexes)</strong></p>
<p>Something that is often misunderstood with Azure SQL Database is the idea that the maintenance of indexes and statistics would no longer be under the responsibility of the DBA. Referring to some discussions around me, this seems to be a misconception, and automatic index tuning was often mentioned in those discussions. Automatic tuning aims to adapt the database dynamically to a changing workload by applying tuning recommendations: creating new indexes, dropping redundant and duplicate indexes, or forcing the last good plan for queries. Even if this feature (not enabled by default) certainly helps improve performance, it substitutes neither for updating statistics nor for rebuilding fragmented indexes. Concerning statistics, it is true that some improvements have shipped with SQL Server over time, like TF2371, which makes the update threshold for large tables more dynamic (on by default since SQL Server 2016+), but there remain situations where updating statistics should be done manually, and as a database administrator it is still your responsibility to maintain them, as sketched below.</p>
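<p>To make it concrete, here is the kind of statement that stays on your plate; an illustrative sketch only, against a hypothetical fact table and assuming a token-authenticated SqlConnection like the one shown later in this post:</p>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap"># Statistics and index upkeep remain a DBA task in Azure SQL DB<br />
$Query = 'UPDATE STATISTICS dbo.FactTable WITH FULLSCAN;' +<br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp;' ALTER INDEX ALL ON dbo.FactTable REORGANIZE;'<br />
$command = New-Object System.Data.SqlClient.SqlCommand($Query, $connection)<br />
$command.CommandTimeout = 3600<br />
$command.ExecuteNonQuery() | Out-Null</div></div>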
<p><strong>Database maintenance and scheduling in Azure?</strong></p>
<p>As said at the beginning of this write-up, with Azure SQL DB database maintenance is a different story, and the same applies when it comes to scheduling. Indeed, you quickly notice the lack of built-in job scheduler capabilities like the traditional SQL Server Agent of on-premises installations, but it doesn’t mean we cannot schedule any job at all. In fact, different options exist to replace the traditional SQL Server Agent for database maintenance in Azure, and we had to look at them: </p>
<p>1) SQL Agent jobs still exist but are only available for SQL Managed Instances. In our context, we use Azure single databases with the GP_S_Gen5 SKU, so this is definitely not an option for us.</p>
<p>2) Elastic database jobs can run across multiple servers and allow writing DB maintenance tasks in T-SQL or PowerShell. But this feature has some limitations which excluded it from the equation:<br />
&#8211; It’s still in preview and we cannot rely on it for production scenarios<br />
&#8211; Serverless and the auto-pausing / auto-resuming used with our GP_S_Gen5 SKU database are not supported </p>
<p>3) Data Factory could be an option because it is already part of the Azure services consumed in our context, but we wanted to stay decoupled from the ETL / business workflows. </p>
<p>4) Finally, we went for Azure Automation: what interested us in Data Factory, especially the integration with Git and Azure DevOps, also ships with Azure Automation. Another important decision factor was the cost, because Azure Automation runs for free up to 500 minutes of job execution per month. In our context, we have a weekly schedule for our maintenance plan and we estimated one hour per runbook execution; thus, we stay under the limit and avoid additional fees.</p>
<p>Azure Automation brings good control of credentials, but we already use Azure Key Vault to protect sensitive information. We found that using both Azure Automation’s native capabilities and Azure Key Vault would duplicate things, decentralize our secret management and make it more complex. Here is a big picture of the process to perform the maintenance of our Azure databases from a scheduled runbook in Azure Automation:</p>
<p><a href="http://blog.developpez.com/mikedavem/files/2020/03/159-1-Azure-automation-DB-maintenance-process--e1585510197141.jpg"><img src="http://blog.developpez.com/mikedavem/files/2020/03/159-1-Azure-automation-DB-maintenance-process--e1585510197141.jpg" alt="159 - 1 - Azure automation DB maintenance process" width="1000" height="373" class="alignnone size-full wp-image-1557" /></a></p>
<p>Firstly, we use a PowerShell-based runbook which in turn calls different stored procedures on the target Azure database to perform the database maintenance. To be compliant with our DevOps processes, the runbook is stored in a source control repository (Git) and published to Azure Automation through the built-in sync process. The runbook runs with the “Run As Account” option to get access to Azure Key Vault and to the AppID of the dedicated application identity. Finally, this identity is used to connect to the Azure SQL DB and to perform the database maintenance, based on the corresponding authentication token and the permissions granted on the DB side. Token-based authentication, available since Azure SQL DB v12, helped us meet our security policy that prevents using SQL logins when possible. To generate the token, we still use the old ADAL.PS module; this is something we need to update in the future.</p>
<p>Here is a sample of the interesting parts of the PowerShell code used to authenticate to the Azure database:</p>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap"># Run runbook as special account to get access the Azure Key Vault<br />
$AzureAutomationConnectionName = &quot;xxxx&quot;<br />
$ServicePrincipalConnection = Get-AutomationConnection -Name $AzureAutomationConnectionName<br />
<br />
…<br />
<br />
$clientId = (Get-AzKeyVaultSecret -VaultName $KeyvaultName -Name &quot;xxxxx&quot;).SecretValueText<br />
$response = Get-ADALToken -ClientId $clientId -ClientSecret $clientSecret -Resource $resourceUri -Authority $authorityUri -TenantId $tenantName<br />
<br />
# Connection String<br />
$connectionString = &quot;Server=tcp:$SqlInstance,1433;Initial Catalog=$Database;Persist Security Info=False;MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=False;&quot;<br />
<br />
# Create the connection object<br />
$connection = New-Object System.Data.SqlClient.SqlConnection($connectionString)<br />
<br />
# Set identity by using the corresponding token to connect to the Azure DB<br />
$connection.AccessToken = $response.AccessToken<br />
<br />
...</div></div>
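<p>The connection can then be used to call the maintenance stored procedures, along these lines (hypothetical procedure names, not our actual ones):</p>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap"># Invoke the maintenance stored procedures through the token-authenticated connection<br />
$connection.Open()<br />
$command = $connection.CreateCommand()<br />
$command.CommandText = 'EXEC dbo.usp_IndexMaintenance; EXEC dbo.usp_UpdateStatistics;'<br />
$command.CommandTimeout = 3600<br />
$command.ExecuteNonQuery() | Out-Null<br />
$connection.Close()</div></div>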
<p>Yes, Azure is a different beast (like other clouds) and requires DBAs to review their habits. It may be very confusing at the beginning, but everything you did in the past is still possible, or at least can be achieved in a different way, in Azure. “Just think differently” would be my best advice in this case! </p>
]]></content:encoded>
			<wfw:commentRss></wfw:commentRss>
		<slash:comments>1</slash:comments>
		</item>
		<item>
		<title>SQL DB Azure, performance scaling thoughts</title>
		<link>https://blog.developpez.com/mikedavem/p13188/sql-azure/sql-db-azure-performance-scaling-thoughts</link>
		<comments>https://blog.developpez.com/mikedavem/p13188/sql-azure/sql-db-azure-performance-scaling-thoughts#comments</comments>
		<pubDate>Thu, 20 Feb 2020 21:09:54 +0000</pubDate>
		<dc:creator><![CDATA[mikedavem]]></dc:creator>
				<category><![CDATA[SQL Azure]]></category>
		<category><![CDATA[monitoring]]></category>
		<category><![CDATA[performance]]></category>
		<category><![CDATA[SQL Azure DB]]></category>

		<guid isPermaLink="false">http://blog.developpez.com/mikedavem/?p=1490</guid>
		<description><![CDATA[Let’s continue with Azure stories and performance scaling &#8230; A couple of weeks ago, we studied opportunities to replace existing clustered indexes (CI) with columnstore indexes (CCI) for some facts. To cut the story short and to focus on the &#8230; <a href="https://blog.developpez.com/mikedavem/p13188/sql-azure/sql-db-azure-performance-scaling-thoughts">Lire la suite <span class="meta-nav">&#8594;</span></a>]]></description>
				<content:encoded><![CDATA[<p>Let’s continue with Azure stories and performance scaling &#8230;</p>
<p><a href="http://blog.developpez.com/mikedavem/files/2020/02/155-0-banner-e1582232926354.jpg"><img src="http://blog.developpez.com/mikedavem/files/2020/02/155-0-banner-e1582232926354.jpg" alt="155 - 0 - banner" width="500" height="288" class="alignnone size-full wp-image-1507" /></a></p>
<p>A couple of weeks ago, we studied opportunities to replace existing clustered indexes (CI) with clustered columnstore indexes (CCI) for some fact tables. To cut the story short and to focus on the right topic of this write-up, we prepared a creation script for specific CCIs based on a variation of <a href="http://www.nikoport.com/2014/04/16/clustered-columnstore-indexes-part-29-data-loading-for-better-segment-elimination/" rel="noopener" target="_blank">Niko’s technique</a> (no MAXDOP = 1, meaning we enable parallelism) in order to get better segment alignment. </p>
<p><span id="more-1490"></span></p>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap">-- Recreation of clustered index<br />
CREATE CLUSTERED INDEX [PK_FACT_IDX] <br />
ON dbo.FactTable (KeyColumn)<br />
WITH (DROP_EXISTING = ON, DATA_COMPRESSION = PAGE);<br />
<br />
-- Creation of the CCI<br />
CREATE CLUSTERED COLUMNSTORE INDEX [PK_FACT_IDX] <br />
ON dbo.FactTable <br />
WITH (DROP_EXISTING = ON);<br />
<br />
-- Recreation of the [1 ... n] nonclustered indexes<br />
CREATE INDEX [IDX_xxx … n]<br />
ON dbo.FactTable (column)<br />
WITH (DROP_EXISTING = ON, DATA_COMPRESSION = PAGE);</div></div>
<p>Before deploying those indexes in our Azure SQL DB environment, we first staged the scenario on an on-premises instance, where the creation of all indexes took ~ 1h. It is worth noting that our tests are based on the same database with the same data in all cases. But guess what, the story was different in Azure <img src="https://blog.developpez.com/mikedavem/wp-includes/images/smilies/icon_smile.gif" alt=":)" class="wp-smiley" />: the feedback from the team responsible for deploying the indexes in Azure was that the creation script took quite a bit longer (~ 4h).<br />
I definitely enjoyed this story because it gave us a deeper understanding of the Azure SQL DB performance topic.</p>
<p><strong>=&gt; Moving to the cloud means we’ll get slower performance? </strong></p>
<p>Before drawing conclusions too quickly, a good habit is to compare specifications between environments; it’s not about comparing apples and oranges. Well, let’s set my own context: on one side, the on-premises virtual SQL Server environment specification includes 8 vCPUs (Intel(R) Xeon(R) CPU E5-2670 v3 @ 2.30GHz), 64 GB of RAM and a high-performance storage array with micro-latency devices dedicated to our IO-intensive workloads. From the vendor specifications, we may expect very interesting IO performance, with a general throughput greater than 100 KIOPs (random) or 1GB/s (sequential). On the other side, the Azure SQL DB is based on the General Purpose: Serverless Gen5, 8 vCores service pricing tier. We use the vCore purchasing model and, referring to the <a href="https://docs.microsoft.com/bs-latn-ba/Azure/sql-database/sql-database-vcore-resource-limits-single-databases" rel="noopener" target="_blank">Microsoft documentation</a>, hardware generation 5 includes a compute specification based on Intel E5-2673 v4 (Broadwell) 2.3-GHz and Intel SP-8160 (Skylake) processors. Added to this, the service pricing tier comes with remote SSD-based storage with an IO latency around 5-7ms and 2560 IOPs max. Given the elasticity of the infrastructure, we could scale up to 16 vCores, 48GB of RAM and 5120 IOPs for data. Obviously, latency remains the same in this case.</p>
<p>As an illustration, the creation of all indexes (CI + CCI + NCIs) performed in our on-premises environment gave the following storage performance figures: ~ 700MB/s and 13K IOPs as maximum values, aggregating DATA + LOG activity on the D: drive. Rebuilding indexes is a resource-intensive operation in terms of CPU as well, and we obviously noticed CPU saturation at different steps of the operation.</p>
<p><a href="http://blog.developpez.com/mikedavem/files/2020/02/155-1-on-premises-storage-performance-e1582231532623.gif"><img src="http://blog.developpez.com/mikedavem/files/2020/02/155-1-on-premises-storage-performance-e1582231532623.gif" alt="155 - 1 - on-premises-storage-performance" width="900" height="448" class="alignnone size-full wp-image-1491" /></a></p>
<p>&#8230;</p>
<p><a href="http://blog.developpez.com/mikedavem/files/2020/02/155-2-on-premises-cpu-performance-e1582231566692.gif"><img src="http://blog.developpez.com/mikedavem/files/2020/02/155-2-on-premises-cpu-performance-e1582231566692.gif" alt="155 - 2 - on-premises-cpu-performance" width="800" height="398" class="alignnone size-full wp-image-1492" /></a></p>
<p>As an aside, we may notice the creation of the CCI is a less intensive operation in terms of resources, and we see the same pattern in Azure below. Talking of which, let’s compare with our Azure SQL DB. There are different ways to get performance metrics, including the portal, which enables monitoring performance through an easy-to-use interface, or DMVs available in each Azure DB like sys.dm_db_resource_stats. It is worth noting that in Azure SQL DB metrics are expressed as a percentage of the service tier limit, so you need to adjust your analysis to the tier you’re using. First, we observed the same resource utilization pattern for all steps of the creation script but within a different timeline – the duration increased to 4h (as mentioned by the other team). There is a clear picture of reaching the limit of the configured service tier, especially for Log IO (green line), even after we had already switched from the GP_S_Gen5_8 to the GP_S_Gen5_16 service tier.</p>
<p><a href="http://blog.developpez.com/mikedavem/files/2020/02/155-3-Az-CCI_Gen5_16_General_Purpose_CI_CCI_compressed_page-e1582231670221.jpg"><img src="http://blog.developpez.com/mikedavem/files/2020/02/155-3-Az-CCI_Gen5_16_General_Purpose_CI_CCI_compressed_page-e1582231670221.jpg" alt="155 - 3 - Az - CCI_Gen5_16_General_Purpose_CI_CCI_compressed_page" width="1200" height="278" class="alignnone size-full wp-image-1494" /></a></p>
<p>In addition, Wait stats gave interesting insights as well:</p>
<p><a href="http://blog.developpez.com/mikedavem/files/2020/02/155-5-wait_stats_CCI_index_Gen5_8_16_GP_CI_CCI_compressed_page_-e1582231763875.jpg"><img src="http://blog.developpez.com/mikedavem/files/2020/02/155-5-wait_stats_CCI_index_Gen5_8_16_GP_CI_CCI_compressed_page_-e1582231763875.jpg" alt="155 - 5 - wait_stats_CCI_index_Gen5_8_16_GP_CI_CCI_compressed_page_" width="1200" height="226" class="alignnone size-full wp-image-1496" /></a></p>
<p>Excluding the traditional PAGEIOLATCH_xx waits, the LOG_RATE_GOVERNOR wait type appeared in the top waits and confirms that we bumped into the limits imposed on transaction log I/O by our performance tier.</p>
<p><strong>=&gt; Scaling vs Upgrading the Service for better performance?  </strong></p>
<p>With Azure SQL DB PaaS, we may benefit from an elastic architecture. Firstly, scaling the number of vCores is a factor of improvement, and there is a direct relationship with storage (IOPs), memory or the disk space allocated to tempdb, for instance. But the order of magnitude varies with the service tier, as shown below:</p>
<p>For the General Purpose Serverless Generation 5 service tier &#8211; Resources per Core</p>
<p><a href="http://blog.developpez.com/mikedavem/files/2020/02/155-6-Gen5_8_16_GP_service_tier_perf_.jpg"><img src="http://blog.developpez.com/mikedavem/files/2020/02/155-6-Gen5_8_16_GP_service_tier_perf_.jpg" alt="155 - 6 - Gen5_8_16_GP_service_tier_perf_" width="1002" height="175" class="alignnone size-full wp-image-1499" /></a></p>
<p>Something relevant here: even if performance increases with the number of vCores provisioned, the Log IO saturation we can deduce from our test in Azure (especially in the first step of the CI creation) results from the max log rate limitation, which doesn’t scale in the same way. This is especially relevant because, as said previously, index creation can be a resource-intensive operation with a huge impact on the transaction log.</p>
<p><strong>What would be a solution to speed up this operation? </strong></p>
<p>The first viable solution in our context would be to switch to the SIMPLE recovery model, which would fit perfectly with our scenario because we could get minimally-logged capabilities and a lower impact on the transaction log, and because it is suitable for DW environments. Unfortunately, at the time of this write-up, this is not supported, and I suggest you vote on <a href="https://feedback.azure.com/forums/217321-sql-database/suggestions/36400585-allow-recovery-model-to-be-changed-to-simple-in-az" rel="noopener" target="_blank">Azure feedback</a> if you are interested.<br />
From an infrastructure standpoint, improving the max log rate throughput is only possible by upgrading to a higher service tier (at the cost of higher fees, obviously). For the sake of curiosity, I did a try with the <strong>BC_Gen5_16</strong> service tier specifications:</p>
<p><a href="http://blog.developpez.com/mikedavem/files/2020/02/155-6-Gen5_8_16_BC_service_tier_perf_.jpg"><img src="http://blog.developpez.com/mikedavem/files/2020/02/155-6-Gen5_8_16_BC_service_tier_perf_.jpg" alt="155 - 6 - Gen5_8_16_BC_service_tier_perf_" width="1002" height="175" class="alignnone size-full wp-image-1500" /></a></p>
<p>Even if this new service tier seems to be a better fit (suggested by the relative percentage of resource usage) …</p>
<p><a href="http://blog.developpez.com/mikedavem/files/2020/02/155-4-CCI_index_Gen5_16_Business_Critical_CI_CCI_compressed_page_-e1582232230338.jpg"><img src="http://blog.developpez.com/mikedavem/files/2020/02/155-4-CCI_index_Gen5_16_Business_Critical_CI_CCI_compressed_page_-e1582232230338.jpg" alt="155 - 4 - CCI_index_Gen5_16_Business_Critical_CI_CCI_compressed_page_" width="1200" height="203" class="alignnone size-full wp-image-1501" /></a></p>
<p>… there are important notes here:</p>
<p>1) Business Critical Tier is not available for Serverless architecture</p>
<p>2) Moving to a different service tier is not instantaneous and may require several hours depending on the database size (~ 3h for a total database size of ~500GB in my case). So this is not a viable option even if we get better performance: if we add the time to upgrade to a higher service tier (~3h) to the time to run the creation script (~3h, i.e. a 25% performance gain compared to the previous GP_S_Gen5_16 service tier), we are no better off. We may obviously upgrade again to reach performance closer to our on-premises environment, but is it worth fighting for here, only for an index creation script? </p>
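<p>For completeness, the service tier switch itself can be scripted; a sketch with hypothetical names (keeping in mind the operation is asynchronous and can take hours on large databases):</p>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap"># Request the Business Critical Gen5 16 vCores service objective (Az.Sql module)<br />
Set-AzSqlDatabase `<br />
&nbsp; &nbsp; -ResourceGroupName 'my-rg' `<br />
&nbsp; &nbsp; -ServerName 'my-server' `<br />
&nbsp; &nbsp; -DatabaseName 'MyDB' `<br />
&nbsp; &nbsp; -RequestedServiceObjectiveName 'BC_Gen5_16'</div></div>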
<p>Concerning our scenario (data warehouse), it is generally easy to schedule an off-peak time frame that doesn&rsquo;t overlap with the processing-oriented workload, but that may not be the case for everyone!  </p>
<p>See you!</p>
]]></content:encoded>
			<wfw:commentRss></wfw:commentRss>
		<slash:comments>1</slash:comments>
		</item>
		<item>
		<title>Configuring Integrated Windows Authentication with SSRS and SQL DB Azure</title>
		<link>https://blog.developpez.com/mikedavem/p13187/sql-azure/configuring-integrated-windows-authentication-with-ssrs-and-sql-db-azure</link>
		<comments>https://blog.developpez.com/mikedavem/p13187/sql-azure/configuring-integrated-windows-authentication-with-ssrs-and-sql-db-azure#comments</comments>
		<pubDate>Wed, 12 Feb 2020 21:40:26 +0000</pubDate>
		<dc:creator><![CDATA[mikedavem]]></dc:creator>
				<category><![CDATA[SQL Azure]]></category>
		<category><![CDATA[AD Connect]]></category>
		<category><![CDATA[ADFS]]></category>
		<category><![CDATA[Authentication]]></category>
		<category><![CDATA[Azure]]></category>
		<category><![CDATA[SQL Azure DB]]></category>
		<category><![CDATA[SSRS]]></category>

		<guid isPermaLink="false">http://blog.developpez.com/mikedavem/?p=1451</guid>
		<description><![CDATA[Today let’s talk about cloud and Azure. My new job now gives me the opportunity to work in a hybrid environment with some components hosted in the cloud, including Azure SQL databases. To get straight to the point, PaaS databases &#8230; <a href="https://blog.developpez.com/mikedavem/p13187/sql-azure/configuring-integrated-windows-authentication-with-ssrs-and-sql-db-azure">Lire la suite <span class="meta-nav">&#8594;</span></a>]]></description>
				<content:encoded><![CDATA[<p>Today let’s talk about cloud and Azure. My new job now gives me the opportunity to work in a hybrid environment with some components hosted in the cloud, including Azure SQL databases. To get straight to the point, PaaS databases are different beasts and I may confirm the DBA role is shifting to another dimension. The focus is more on providing higher value in architecture design and tuning, because resources are a big concern, at least in a different order of magnitude, now that they are treated as operational expenses (OpEx). Entering a world of infrastructure as code, provisioning such a service has become an easy game and can be automated through cloud provider APIs and specialized tools. My colleagues already did a lot of good work on this topic. </p>
<p><span id="more-1451"></span></p>
<p>In this blog post I would like to focus on the authentication architecture design to connect to an Azure SQL DB. I had already played with some Azure SQL DBs with a simple authentication setup that consisted in using SQL logins and opening some firewall ports to expose the DB service on the internet. I guess this is the basic scenario for a lot of people (including me) who want to play with such a service. But what about enterprise-class scenarios? I had the chance to get involved in the implementation of end-to-end Integrated Windows Authentication (IWA) between on-premises SSRS data sources and one of our Azure SQL DBs. SQL login is likely the most common connection method for its simplicity but, as in on-premises scenarios, it is not the best one in terms of security. </p>
<p>So where to start? First of all, let’s say that Microsoft provides some configuration clues in the <a href="https://docs.microsoft.com/en-us/sql/reporting-services/report-data/sql-azure-connection-type-ssrs?view=sql-server-ver15" rel="noopener" target="_blank">BOL</a> (Azure SQL Database and AAD section).</p>
<p>From an architecture standpoint we must meet the following prerequisites:</p>
<li>The Active Directory Authentication Library (ADALSQL) installed on the concerned SSRS servers</li>
<li>Active Directory Federation Services (ADFS) configured to federate your on-premises Active Directory (AD) and Azure AD (AAD)</li>
<li>Kerberos Constrained Delegation (KCD) enabled between the SSRS and ADFS services</li>
<li>Kerberos authentication enabled in the SSRS report server (RSReportServer.config)</li>
<li>Azure Active Directory authentication configured with SQL DB Azure</li>
<p>Sounds complicated, right? It could be, depending on your context and which components you have access to.</p>
<p>Let’s show a very simplified chart of the authentication flow:</p>
<p><a href="http://blog.developpez.com/mikedavem/files/2020/02/154-0-SSRS-SQL-DB-Azure-Login-flow.jpg"><img src="http://blog.developpez.com/mikedavem/files/2020/02/154-0-SSRS-SQL-DB-Azure-Login-flow.jpg" alt="154 - 0 - SSRS - SQL DB Azure Login flow" width="1024" height="500" class="alignnone size-full wp-image-1457" /></a></p>
<p>Usually, shops which own a hybrid environment with Azure have already implemented the federation, which was the case for me. So as a DBA you will likely delegate some of the Active Directory work to the right team.<br />
Let’s start with the easiest part of this big cake: installing the ADALSQL library on the SSRS servers is pretty straightforward, so no need to talk further about it. Once the library is installed, it gives access to a new <strong>Microsoft Azure SQL Database</strong> connection type in the SSRS data source, as shown below:</p>
<p><a href="http://blog.developpez.com/mikedavem/files/2020/02/154-1-SSRS-datasource-DB-Azure-Type.jpg"><img src="http://blog.developpez.com/mikedavem/files/2020/02/154-1-SSRS-datasource-DB-Azure-Type.jpg" alt="154 - 1 - SSRS datasource DB Azure Type" width="1131" height="535" class="alignnone size-full wp-image-1458" /></a></p>
<p><strong>=&gt; Kerberos delegation</strong></p>
<p>If you are already confident with Kerberos delegation in your SQL Server environment, configuring Kerberos delegation on ADFS doesn’t raise special difficulties. You must still configure the SPNs correctly and then configure the Kerberos delegation. In this case, the SSRS service must be configured to delegate the authentication to the ADFS service.</p>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap"># Retrieve ADFS hostname<br />
Get-AdfsProperties | Select-Object Hostname<br />
setspn -A http/adfs_host_name.domainname domain-user-account<br />
<br />
# Configure SPN SSRS + ADFS<br />
setspn -A http/computername.domainname domain-user-account<br />
… <br />
…</div></div>
<p><strong>=&gt; AAD authentication with SQL DB Azure</strong></p>
<p>AAD authentication must be enabled on the SQL Azure DB before using IWA. As you know, SQL DB Azure provides different ways to connect including:</p>
<li>SQL based Login – the most basic method we may use when Windows authentication is not available</li>
<li>Azure Active Directory based login – this method requires an underlying AAD infrastructure configured, plus one AAD account as Active Directory admin of the SQL server in Azure. This is mandatory to allow the Azure SQL server to get permissions to read Azure AD and successfully accomplish tasks such as authenticating users through security group membership or creating new users</li>
<p>Well, you may notice that enabling AAD authentication is not trivial <img src="https://blog.developpez.com/mikedavem/wp-includes/images/smilies/icon_smile.gif" alt=":)" class="wp-smiley" /> So let’s dig further into the AAD authentication methods. There are two non-interactive ways, <strong>Active Directory – Password</strong> and <strong>Active Directory &#8211; Integrated</strong>, that are suitable for many applications based on ADO.NET, JDBC or ODBC used by SSRS data sources, because these methods never result in pop-up dialog boxes. Another method is the interactive <strong>Active Directory &#8211; Universal with MFA</strong>, suitable for administrative accounts (including DBAs) for obvious security reasons. These methods are illustrated below:</p>
<p><a href="http://blog.developpez.com/mikedavem/files/2020/02/154-2-SQL-DB-Azure-Login.jpg"><img src="http://blog.developpez.com/mikedavem/files/2020/02/154-2-SQL-DB-Azure-Login.jpg" alt="154 - 2 - SQL DB Azure - Login" width="1689" height="629" class="alignnone size-full wp-image-1461" /></a></p>
<p>After authentication, authorization comes into play. As DBAs we must be aware of the different security model implied by this Azure service. In addition, depending on the login / user type, the access path will differ as follows:</p>
<p><a href="http://blog.developpez.com/mikedavem/files/2020/02/154-8-Access-Path.jpg"><img src="http://blog.developpez.com/mikedavem/files/2020/02/154-8-Access-Path.jpg" alt="154 - 8 - Access Path" width="764" height="438" class="alignnone size-full wp-image-1462" /></a></p>
<p>Compared to on-premises SQL databases, there are only two unrestricted accounts: a server admin (SQL-based login) and an AAD admin, both created in master. These accounts are automatically mapped to the dbo user of each Azure SQL DB and are implicitly DB owners. And even if these accounts are considered unrestricted, in fact they are not, because they are not members of the sysadmin server role (which is not available in Azure SQL DB, by the way). For more details the <a href="https://docs.microsoft.com/en-us/azure/sql-database/sql-database-manage-logins" rel="noopener" target="_blank">BOL</a> is your friend <img src="https://blog.developpez.com/mikedavem/wp-includes/images/smilies/icon_smile.gif" alt=":)" class="wp-smiley" /></p>
<p>Here is a PowerShell sample that helps identify each of them:</p>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap"># Server admin <br />
(Get-AzSqlServer -ResourceGroupName $ResourceGroup).SqlAdministratorLogin &nbsp;<br />
<br />
# Active Directory admin<br />
(Get-AzSqlServerActiveDirectoryAdministrator -ResourceGroupName $ResourceGroup -ServerName $ServerName).DisplayName</div></div>
<p>Other administrative roles are <strong>dbmanager</strong> and <strong>loginmanager</strong>, which are respectively able to create Azure databases and to create logins. Members of these roles must be created in the master database.<br />
Finally, non-administrative users don’t need access to the master database and may be SQL or AAD contained authentication-based users (making your database portable).<br />
Creating an AAD login / user requires using the <strong>FROM EXTERNAL PROVIDER</strong> clause in the CREATE USER T-SQL command. Like in on-premises environments, you can rely on AAD groups for your security strategy.<br />
The <em>sys.database_principals</em> catalog view is still available to get a picture of the different users in each database:</p>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap">select name as username,<br />
&nbsp; &nbsp; &nbsp; &nbsp;create_date,<br />
&nbsp; &nbsp; &nbsp; &nbsp;modify_date,<br />
&nbsp; &nbsp; &nbsp; &nbsp;type_desc as type,<br />
&nbsp; &nbsp; &nbsp; &nbsp;authentication_type_desc as authentication_type<br />
from sys.database_principals<br />
where type not in ('A', 'G', 'R')<br />
&nbsp; &nbsp; &nbsp; and sid is not null<br />
order by username;</div></div>
<p>&#8230;</p>
<p><a href="http://blog.developpez.com/mikedavem/files/2020/02/154-9-DB-users.jpg"><img src="http://blog.developpez.com/mikedavem/files/2020/02/154-9-DB-users.jpg" alt="154 - 9 - DB users" width="987" height="234" class="alignnone size-full wp-image-1465" /></a></p>
<p>Username values are not important here <img src="https://blog.developpez.com/mikedavem/wp-includes/images/smilies/icon_smile.gif" alt=":)" class="wp-smiley" /> Note the type and authentication_type column values. In my context, we have a mix of AAD groups and users (EXTERNAL_USER/GROUP). Users with the INSTANCE authentication type are server-level users with a corresponding entry in the master database. In the first line, the user is a member of the <strong>dbmanager</strong> role.</p>
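<p>For illustration, here is a sketch of how such a membership is typically granted, connecting to master with an AAD access token. <strong>ops_user@domain.com</strong> is a hypothetical account, and Invoke-Sqlcmd from the SqlServer module is assumed to support the -AccessToken parameter:</p>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap"># Get an AAD token for Azure SQL and run the grant in master<br />
$token = (Get-AzAccessToken -ResourceUrl "https://database.windows.net/").Token<br />
Invoke-Sqlcmd -ServerInstance "myserver.database.windows.net" -Database "master" `<br />
&nbsp;&nbsp;&nbsp;&nbsp;-AccessToken $token -Query @"<br />
CREATE USER [ops_user@domain.com] FROM EXTERNAL PROVIDER;<br />
ALTER ROLE dbmanager ADD MEMBER [ops_user@domain.com];<br />
"@</div></div>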
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap">SELECT DP1.name AS DatabaseRoleName, &nbsp; <br />
&nbsp; &nbsp; isnull (DP2.name, 'No members') AS DatabaseUserName &nbsp; <br />
FROM sys.database_role_members AS DRM &nbsp;<br />
RIGHT OUTER JOIN sys.database_principals AS DP1 &nbsp;<br />
&nbsp; &nbsp; ON DRM.role_principal_id = DP1.principal_id &nbsp;<br />
LEFT OUTER JOIN sys.database_principals AS DP2 &nbsp;<br />
&nbsp; &nbsp; ON DRM.member_principal_id = DP2.principal_id &nbsp;<br />
WHERE DP1.type = 'R'<br />
ORDER BY DP1.name; &nbsp;<br />
GO</div></div>
<p>&#8230;</p>
<p><a href="http://blog.developpez.com/mikedavem/files/2020/02/154-9-2-DB-roles.jpg"><img src="http://blog.developpez.com/mikedavem/files/2020/02/154-9-2-DB-roles.jpg" alt="154 - 9 - 2 - DB roles" width="379" height="300" class="alignnone size-full wp-image-1466" /></a></p>
<p>Given this quick overview of Azure SQL DB authentication and permissions, in the context of my project we only need to configure a dedicated AD account to get access to the data in Azure. Because the security standard is group-based, we only need to add this user to the corresponding group, which is a member of the db_datareader role on the Azure SQL DB side, as sketched below.</p>
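<p>In T-SQL terms, the group-based setup boils down to the following sketch, reusing the token-based connection from the previous sample (<strong>DataReaders</strong> is a hypothetical AAD group name):</p>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap"># Map the AAD group to the user database and grant read access<br />
Invoke-Sqlcmd -ServerInstance "myserver.database.windows.net" -Database "mydb" `<br />
&nbsp;&nbsp;&nbsp;&nbsp;-AccessToken $token -Query @"<br />
CREATE USER [DataReaders] FROM EXTERNAL PROVIDER;<br />
ALTER ROLE db_datareader ADD MEMBER [DataReaders];<br />
"@</div></div>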
<p>For DBAs the previous topics are familiar. However, the biggest part of the cake, the underlying AD and federation infrastructure, is likely less so. For modern DBAs in the cloud, authentication and connectivity are part of the new skills to gain. I don’t pretend to be an expert on this topic, so the idea here is to share the experience I had with an issue we faced during the IWA implementation, which drove me to improve my skills on the different components of the federation infrastructure: the ADFS server, the Azure AD Connect component and Azure Active Directory. After configuring the different components of the authentication infrastructure, we tried to configure the dedicated SSRS service account with the new SSRS data source, and here is the message we got when attempting to connect to the Azure SQL DB: </p>
<blockquote><p>Could not discover a user realm. (System.Data)</p></blockquote>
<p>Before continuing, let’s point out that the SSRS service account name (or User Principal Name) we configured on the on-premises AD side was in the form <strong>domain.local\ssrs_account</strong>. After the Azure AD sync came into play, the corresponding AAD identity was <strong>ssrs_account@domain.onmicrosoft.com</strong></p>
<p>At this stage, I didn’t pay attention to this UPN and naively believed that creating the user referencing this identity on the Azure SQL DB would be the final step of the authentication architecture implementation:</p>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap">CREATE USER [ssrs_account@domain.onmicrosoft.com]<br />
FROM EXTERNAL PROVIDER;<br />
<br />
ALTER ROLE db_datareader ADD MEMBER [ssrs_account@domain.onmicrosoft.com];<br />
GO</div></div>
<p>But as you can guess, it was not the case and the authentication process failed. I spent some time figuring out what could be wrong. I admit my first DBA habit led me to look for an eventual SQL Server error log but, you know, we are dealing with a different beast and there is no SQL Server error log anymore to look at … In fact, to get a status of connections, Azure SQL DB provides auditing capabilities at different levels (server and database), with a default policy that includes the SUCCESSFUL_DATABASE_AUTHENTICATION_GROUP and FAILED_DATABASE_AUTHENTICATION_GROUP actions to monitor successful and failed logins. In addition, consuming these events depends on the target storage (storage account, Log Analytics). For instance, here is a sample of results I got from our dedicated Log Analytics workspace with a search on the <strong><em>SQLSecurityAuditEvents</em></strong> category: </p>
<p><a href="http://blog.developpez.com/mikedavem/files/2020/02/154-7-Logs-Analytics.jpg"><img src="http://blog.developpez.com/mikedavem/files/2020/02/154-7-Logs-Analytics.jpg" alt="154 - 7 - Logs Analytics" width="1901" height="597" class="alignnone size-full wp-image-1469" /></a></p>
<p>But nothing relevant and related to my SSRS service account … After reading the BOL more carefully, I finally figured out that if I wanted a chance to find (maybe) something related to the AAD identity, I had to look at the AAD side instead, as stated in the following section of the documentation:</p>
<blockquote><p>When using AAD Authentication, failed logins records will not appear in the SQL audit log. To view failed login audit records, you need to visit the Azure Active Directory portal, which logs details of these events.</p></blockquote>
<p>But again, the AAD audit trail didn’t contain any record relevant to my issue. So where to look in this case? Well, I decided to review the federation part of the authentication process. The error message was about a missing realm; realm names come from the Kerberos authentication protocol and serve practically the same purpose as domains and domain names. In my context, because we have a hybrid authentication infrastructure, a federation must be configured for the authentication process to work correctly, and it was effective in my case. As said previously, the federation part includes an ADFS server (providing SSO capabilities and IWA authentication for applications), an AD sync component (to sync identities between AD and AAD) and a configured AAD, as shown below: </p>
<p><a href="http://blog.developpez.com/mikedavem/files/2020/02/154-10-ADFS.jpg"><img src="http://blog.developpez.com/mikedavem/files/2020/02/154-10-ADFS.jpg" alt="154 - 10 - ADFS" width="792" height="327" class="alignnone size-full wp-image-1471" /></a></p>
<p>A basic configuration we may find in many shops. However, the remarkable thing was the different form of the SSRS realm between the on-premises AD and AAD after the Azure AD Connect synchronization. With the support of my team, it quickly became obvious that the root cause was on the federation infrastructure side. In the normal case, with verified users, we get the same UPN on both sides, as shown below:</p>
<p><strong>domain.com\user</strong> (AD on-premises) =&gt; <strong>domain.com\user</strong> (AAD)</p>
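<p>Comparing both sides is a quick check from PowerShell. A sketch, assuming the ActiveDirectory and AzureAD modules are available (the account name is a placeholder):</p>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap"># UPN as known by the on-premises AD<br />
(Get-ADUser -Identity "ssrs_account").UserPrincipalName<br />
<br />
# UPN as synced to Azure AD<br />
(Get-AzureADUser -SearchString "ssrs_account").UserPrincipalName</div></div>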
<p>Something important to bear in mind is that the Azure AD Connect tool comes with a troubleshooting section, which was helpful in my case because it helped validate our suspicion:</p>
<p><a href="http://blog.developpez.com/mikedavem/files/2020/02/154-4-AD-Connect-troubleshooting.jpg"><img src="http://blog.developpez.com/mikedavem/files/2020/02/154-4-AD-Connect-troubleshooting.jpg" alt="154 - 4 - AD Connect troubleshooting" width="1207" height="305" class="alignnone size-full wp-image-1472" /></a></p>
<p>&#8230;</p>
<p><a href="http://blog.developpez.com/mikedavem/files/2020/02/154-5-AD-Connect-troubleshooting-Result-e1581542912771.jpg"><img src="http://blog.developpez.com/mikedavem/files/2020/02/154-5-AD-Connect-troubleshooting-Result-e1581542912771.jpg" alt="154 - 5 - AD Connect troubleshooting Result" width="800" height="460" class="alignnone size-full wp-image-1474" /></a></p>
<p>The tool diagnosed a mismatch in the userPrincipalName attribute value between the on-premises AD and the AAD, because the UPN suffix was not verified by the AAD tenant.<br />
Using the <strong>Get-AzureADDomain</strong> cmdlet confirmed that domain.com was the only federated domain name, as shown below:</p>
<p><a href="http://blog.developpez.com/mikedavem/files/2020/02/154-11-AAD-domain-e1581542982838.jpg"><img src="http://blog.developpez.com/mikedavem/files/2020/02/154-11-AAD-domain-e1581542982838.jpg" alt="154 - 11 - AAD domain" width="800" height="209" class="alignnone size-full wp-image-1475" /></a></p>
<p>As a reminder, the SSRS account was created with a UPN domain suffix (domain.local) which was not verified by the AAD (domain.local is different from domain.com). As stated in the troubleshooting section, when the UPN suffix is not verified with the AAD tenant, Azure AD takes different inputs into account, in a given order, to calculate the UPN prefix in the cloud, resulting in the wrong one being created in my case (ssrs_account@domain.onmicrosoft.com). </p>
<p>Finally, changing the UPN suffix to the verified domain.com fixed this annoying issue!</p>
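<p>In practice, the fix amounts to the following sketch, run on the on-premises side (Set-ADUser from the ActiveDirectory module and Start-ADSyncSyncCycle from the ADSync module; the account name is a placeholder):</p>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap"># Set the verified UPN suffix on the on-premises account ...<br />
Set-ADUser -Identity "ssrs_account" -UserPrincipalName "ssrs_account@domain.com"<br />
<br />
# ... then force a delta synchronization to Azure AD<br />
Start-ADSyncSyncCycle -PolicyType Delta</div></div>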
<p><a href="http://blog.developpez.com/mikedavem/files/2020/02/154-12-SSRS-auth-ok.jpg"><img src="http://blog.developpez.com/mikedavem/files/2020/02/154-12-SSRS-auth-ok.jpg" alt="154 - 12 - SSRS auth ok" width="423" height="613" class="alignnone size-full wp-image-1477" /></a></p>
<p>See you! </p>
]]></content:encoded>
			<wfw:commentRss></wfw:commentRss>
		<slash:comments>4</slash:comments>
		</item>
		<item>
		<title>Windows Server Vnext and cloud witness</title>
		<link>https://blog.developpez.com/mikedavem/p12860/sql-azure/windows-server-vnext-and-cloud-witness</link>
		<comments>https://blog.developpez.com/mikedavem/p12860/sql-azure/windows-server-vnext-and-cloud-witness#comments</comments>
		<pubDate>Mon, 30 Mar 2015 06:09:36 +0000</pubDate>
		<dc:creator><![CDATA[mikedavem]]></dc:creator>
				<category><![CDATA[SQL Azure]]></category>
		<category><![CDATA[WFSC;Windows failover cluster;FCI;Failover;Basculement;Witness;témoin;Cloud;Windows 10]]></category>

		<guid isPermaLink="false">http://blog.developpez.com/mikedavem/?p=1014</guid>
		<description><![CDATA[The next version of Windows will provide interesting features for failover cluster architectures, one of which is a new quorum type, &#171;&#160;Node majority and cloud witness&#160;&#187;. It will most certainly address many scenarios where the use &#8230; <a href="https://blog.developpez.com/mikedavem/p12860/sql-azure/windows-server-vnext-and-cloud-witness">Read more <span class="meta-nav">&#8594;</span></a>]]></description>
				<content:encoded><![CDATA[<p>The next version of Windows will provide interesting features for failover cluster architectures, one of which is a new quorum type, &laquo;&nbsp;Node majority and cloud witness&nbsp;&raquo;. It will most certainly address many scenarios where a third datacenter is mandatory to achieve true quorum resiliency. </p>
<p>&gt; <a href="http://www.dbi-services.com/index.php/blog/entry/windows-cluster-vnext-and-cloud-witness" target="_blank">Read more</a></p>
<p>David Barbarin<br />
MVP &amp; MCM SQL Server</p>
]]></content:encoded>
			<wfw:commentRss></wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Les journ&#233;es SQL Server&#8211;part two</title>
		<link>https://blog.developpez.com/mikedavem/p11302/sql-server-2005/les-journes-sql-serversecond-volet</link>
		<comments>https://blog.developpez.com/mikedavem/p11302/sql-server-2005/les-journes-sql-serversecond-volet#comments</comments>
		<pubDate>Thu, 13 Sep 2012 06:03:38 +0000</pubDate>
		<dc:creator><![CDATA[mikedavem]]></dc:creator>
				<category><![CDATA[Evénements]]></category>
		<category><![CDATA[SQL Azure]]></category>
		<category><![CDATA[SQL Server 2000]]></category>
		<category><![CDATA[SQL Server 2005]]></category>
		<category><![CDATA[SQL Server 2008]]></category>
		<category><![CDATA[SQL Server 2012]]></category>

		<guid isPermaLink="false">http://blog.developpez.com/mikedavem/?p=170</guid>
		<description><![CDATA[The first edition of the Journées SQL Server was clearly a great success. That is why the GUSS has started preparing a second edition, which should take place in December. The final dates &#8230; <a href="https://blog.developpez.com/mikedavem/p11302/sql-server-2005/les-journes-sql-serversecond-volet">Read more <span class="meta-nav">&#8594;</span></a>]]></description>
				<content:encoded><![CDATA[<p><img src="http://blog.capdata.fr/wp-content/uploads/2012/09/jss2012-full-header31.png" /></p>
<p>The first edition of the Journées SQL Server was clearly a great success. That is why the GUSS has started preparing a second edition, which should take place in December. The final dates will be announced a little later. However, for this event to be a success again, we need your feedback, through a survey that will only take a few minutes of your time <img class="wlEmoticon wlEmoticon-smile" style="border-top-style: none;border-left-style: none;border-bottom-style: none;border-right-style: none" alt="Smile" src="http://blog.developpez.com/mikedavem/files/2012/09/wlEmoticon-smile1.png" /> and will help us better target your expectations at every level (session content, organization, etc.)</p>
<p>To fill it in, go <a href="http://guss.fr/jss-2012.aspx">here</a></p>
<p>Thanks in advance!!</p>
<p>David BARBARIN (Mikedavem)    <br />MVP SQL Server </p>
]]></content:encoded>
			<wfw:commentRss></wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
	</channel>
</rss>
