<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>David Barbarin</title>
	<atom:link href="https://blog.developpez.com/mikedavem/feed" rel="self" type="application/rss+xml" />
	<link>https://blog.developpez.com/mikedavem</link>
	<description>MVP DataPlatform - MCM SQL Server</description>
	<lastBuildDate>Thu, 09 Sep 2021 21:19:50 +0000</lastBuildDate>
	<language>fr-FR</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=4.1.42</generator>
	<item>
		<title>Graphing SQL Server wait stats on Prometheus and Grafana</title>
		<link>https://blog.developpez.com/mikedavem/p13209/devops/graphing-sql-server-wait-stats-on-prometheus-and-grafana</link>
		<comments>https://blog.developpez.com/mikedavem/p13209/devops/graphing-sql-server-wait-stats-on-prometheus-and-grafana#comments</comments>
		<pubDate>Thu, 09 Sep 2021 21:19:22 +0000</pubDate>
		<dc:creator><![CDATA[mikedavem]]></dc:creator>
				<category><![CDATA[DevOps]]></category>
		<category><![CDATA[Performance]]></category>
		<category><![CDATA[grafana]]></category>
		<category><![CDATA[monitoring]]></category>
		<category><![CDATA[observability]]></category>
		<category><![CDATA[prometheus]]></category>
		<category><![CDATA[prompQL]]></category>
		<category><![CDATA[SQL Server]]></category>
		<category><![CDATA[telegraf]]></category>

		<guid isPermaLink="false">http://blog.developpez.com/mikedavem/?p=1816</guid>
		<description><![CDATA[Wait stats are essential performance metrics for diagnosing SQL Server Performance problems. Related metrics can be monitored from different DMVs including sys.dm_os_wait_stats and sys.dm_db_wait_stats (Azure). As you probably know, there are 2 categories of DMVs in SQL Server: Point in &#8230; <a href="https://blog.developpez.com/mikedavem/p13209/devops/graphing-sql-server-wait-stats-on-prometheus-and-grafana">Lire la suite <span class="meta-nav">&#8594;</span></a>]]></description>
				<content:encoded><![CDATA[<p>Wait stats are essential performance metrics for diagnosing SQL Server performance problems. Related metrics can be monitored from different DMVs, including sys.dm_os_wait_stats and sys.dm_db_wait_stats (Azure).</p>
<p>As you probably know, there are two categories of DMVs in SQL Server: point-in-time versus cumulative, and the DMVs mentioned previously are in the second category. It means the data in these DMVs is cumulative and incremented every time a wait event occurs. Values reset only when SQL Server restarts or when you intentionally run the DBCC SQLPERF command. Baselining these metric values requires taking snapshots to compare day-to-day activity, or simply trends over a given timeline. Paul Randal kindly provided a TSQL script for trend analysis over a specified time range in this <a href="https://www.sqlskills.com/blogs/paul/capturing-wait-statistics-period-time/" rel="noopener" target="_blank">blog post</a>. The interesting part of this script is its focus on the most relevant wait types and their corresponding statistics. This is basically the kind of script I used for many years when I performed SQL Server audits at customer shops, but today, working as a database administrator for a company, I can rely on our observability stack (Telegraf, Prometheus and Grafana) to do the job.</p>
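<p>As a quick aside, before moving to Prometheus, here is a minimal sketch of that snapshot-and-compare idea (this is not Paul’s script, just the bare principle; the temporary table name is arbitrary):</p>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap">-- snapshot the cumulative counters<br />
SELECT wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms<br />
INTO #wait_stats_snapshot<br />
FROM sys.dm_os_wait_stats;<br />
<br />
-- ... let the workload run for the period you want to baseline ...<br />
<br />
-- compare the current counters with the snapshot<br />
SELECT w.wait_type,<br />
w.wait_time_ms - s.wait_time_ms AS delta_wait_time_ms,<br />
w.waiting_tasks_count - s.waiting_tasks_count AS delta_waiting_tasks<br />
FROM sys.dm_os_wait_stats AS w<br />
JOIN #wait_stats_snapshot AS s ON s.wait_type = w.wait_type<br />
ORDER BY delta_wait_time_ms DESC;<br />
<br />
-- counters reset only on instance restart or with:<br />
-- DBCC SQLPERF('sys.dm_os_wait_stats', CLEAR);</div></div>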
<p><span id="more-1816"></span></p>
<p>In a previous <a href="https://blog.developpez.com/mikedavem/p13203/sql-server-2014/why-we-moved-sql-server-monitoring-on-prometheus-and-grafana" rel="noopener" target="_blank">write-up</a>, I explained the choice of such a platform for SQL Server. Transposing Paul’s script logic to Prometheus and Grafana was not trivial, but the result was worth it. It is an interesting topic that I want to share with Ops and DBAs who want to baseline SQL Server telemetry on a Prometheus and Grafana observability platform.  </p>
<p>So, let’s start with the metrics provided by the Telegraf collector agent and scraped by the Prometheus job:<br />
&#8211;	sqlserver_waitstats_wait_time_ms<br />
&#8211;	sqlserver_waitstats_waiting_tasks_count<br />
&#8211;	sqlserver_waitstats_resource_wait_time_ms<br />
&#8211;	sqlserver_waitstats_signal_wait_time_ms</p>
<p>In this blog post we will focus only on the first two of the list, but the same logic applies to the others. </p>
<p>As a reminder, we want to graph the most relevant wait types and their average value within a time range specified in a Grafana dashboard. In fact, this is a two-step process: </p>
<p>1) Identify the most relevant wait types by computing their ratio of the total amount of wait time within the specified time range.<br />
2) Graph these wait types in Grafana with their corresponding average value for every Prometheus step in the time range.</p>
<p>To address the first point, we need to rely on the Prometheus <a href="https://prometheus.io/docs/prometheus/latest/querying/functions/#rate" rel="noopener" target="_blank">rate()</a> function and the <a href="https://prometheus.io/docs/prometheus/latest/querying/operators/" rel="noopener" target="_blank">group_left</a> modifier. </p>
<p>As per the Prometheus documentation, rate() gives the per-second average rate of change over the specified range interval by using the boundary metric points in it. That is exactly what we need to compute the total average wait time (in ms) per wait type in a specified time range. rate() needs a range vector as input. Let’s illustrate what a range vector is with the following example. For the sake of simplicity, I filtered the sqlserver_waitstats_wait_time_ms metric to one specific SQL Server instance and wait type (PAGEIOLATCH_EX). A range vector is expressed with a range interval at the end of the query, as you can see below:</p>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap">sqlserver_waitstats_wait_time_ms{sql_instance=&quot;$Instance&quot;,wait_type=&quot;PAGEIOLATCH_EX&quot;}[1m]</div></div>
<p>The result is a set of data points within the specified range interval, as shown below:</p>
<p><a href="http://blog.developpez.com/mikedavem/files/2021/09/blog-177-range-vector.png"><img src="http://blog.developpez.com/mikedavem/files/2021/09/blog-177-range-vector.png" alt="blog 177 - range vector" width="238" height="256" class="alignnone size-full wp-image-1818" /></a></p>
<p>For each sample we get the value and the corresponding timestamp in epoch format. You can convert this epoch format to a user-friendly one by using <strong>date -r 1631125746</strong> (on BSD/macOS) for example. Another important point here: the sqlserver_waitstats_wait_time_ms metric is a counter in the Prometheus world because its value keeps increasing over time, as you can see above (from top to bottom). The same concept exists in SQL Server with the cumulative DMV category explained at the beginning. This is why we need the rate() function to get the right representation of the increase / decrease rate over time between data points. We got 12 data points with an interval of 5s between each value. This is because in my context we defined a Prometheus scrape interval of 5s for SQL Server =&gt; 60s/5s = 12 data points and 11 steps. The next question is how rate() calculates the per-second rate of change between data points. Referring to my previous example, I can get the rate value by using the following PromQL query:</p>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap">rate(sqlserver_waitstats_wait_time_ms{sql_instance=&quot;$Instance&quot;,wait_type=&quot;PAGEIOLATCH_EX&quot;}[1m])</div></div>
<p>&#8230; and the corresponding value:</p>
<p><a href="http://blog.developpez.com/mikedavem/files/2021/09/blog-177-rate-value.png"><img src="http://blog.developpez.com/mikedavem/files/2021/09/blog-177-rate-value.png" alt="blog 177 - rate value" width="211" height="67" class="alignnone size-full wp-image-1820" /></a></p>
<p>To understand this value, let’s revisit a school math lesson on <a href="https://en.wikipedia.org/wiki/Slope" rel="noopener" target="_blank">slope calculation</a>. </p>
<p><a href="http://blog.developpez.com/mikedavem/files/2021/09/Tangent_function_animation.gif"><img src="http://blog.developpez.com/mikedavem/files/2021/09/Tangent_function_animation.gif" alt="Tangent_function_animation" width="300" height="285" class="alignnone size-full wp-image-1823" /></a></p>
<p><em>Image from Wikipedia</em></p>
<p>The basic idea of the slope is to find the rate of change of one variable compared to another. The smaller the distance between two data points, the better our chance of getting a precise approximation of the slope. And this is exactly what happens with Prometheus when you zoom in or out by changing the range interval. Resolution is also determined by the Prometheus scrape interval, especially when your metrics are extremely volatile. This is something to keep in mind with Prometheus: we are working with approximations by design. So let&rsquo;s do some math with a slope calculation of the above range vector:</p>
<p>Slope = DV/DT = (332628-332582)/(@1631125796.971 &#8211; @1631125746.962) =~ 0.83</p>
<p>Excellent! This is how rate() works, and the beauty of this function is that the slope calculation is done automatically for all the steps within the range interval.</p>
<p>But let’s go back to the initial requirement: we need to calculate, per wait type, the average wait time between the first and last points in the specified range vector. We can now go a step further by using a Prometheus aggregation operator as follows:</p>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap">sum by (wait_type) (rate(sqlserver_waitstats_wait_time_ms{sql_instance=&quot;$Instance&quot;}[1m]))</div></div>
<p>Please note we could have written it another way without the sum by aggregator, but this form naturally excludes all unwanted labels from the result metric, which will be particularly helpful for the next part. Anyway, here is a sample of the output:</p>
<p><a href="http://blog.developpez.com/mikedavem/files/2021/09/blog-177-aggregation-by-waittype.png"><img src="http://blog.developpez.com/mikedavem/files/2021/09/blog-177-aggregation-by-waittype-1024x145.png" alt="blog 177 - aggregation by waittype" width="584" height="83" class="alignnone size-large wp-image-1826" /></a></p>
<p>Then we can compute the per-label (wait type) ratio (or percentage). A first, naïve attempt could be as follows:</p>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap">sum by (wait_type) (rate(sqlserver_waitstats_wait_time_ms{sql_instance=&quot;$Instance&quot;}[1m]))/ sum(rate(sqlserver_waitstats_wait_time_ms{sql_instance='$Instance'}[1m]))</div></div>
<p>But we get an empty query result. Bad joke, right? We need to understand why. </p>
<p>The first part of the query gives the total amount of wait time per wait type. Here is a sample of the results for simplicity:</p>
<p><a href="http://blog.developpez.com/mikedavem/files/2021/09/blog-177-aggregation-by-waittype1.png"><img src="http://blog.developpez.com/mikedavem/files/2021/09/blog-177-aggregation-by-waittype1-1024x145.png" alt="blog 177 - aggregation by waittype" width="584" height="83" class="alignnone size-large wp-image-1828" /></a></p>
<p>It results in a new set of metrics with only the wait_type label. The second part gives the total amount of wait time for all wait types, as shown below:</p>
<p><a href="http://blog.developpez.com/mikedavem/files/2021/09/blog-177-total-waits.png"><img src="http://blog.developpez.com/mikedavem/files/2021/09/blog-177-total-waits.png" alt="blog 177 - total waits" width="479" height="39" class="alignnone size-full wp-image-1829" /></a></p>
<p>With a SQL statement, we instinctively join columns that have matching values in the concerned tables; those columns are often primary or foreign keys. In the Prometheus world, vector matching works the same way by using all labels as the starting point, but samples are selected or dropped from the result vector based on the &laquo;&nbsp;ignoring&nbsp;&raquo; and &laquo;&nbsp;on&nbsp;&raquo; keywords. In my case, there are no matching labels, so we must tell Prometheus to ignore the remaining label (wait_type) in the first part of the query:</p>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap">sum by (wait_type) (rate(sqlserver_waitstats_wait_time_ms{sql_instance=&quot;$Instance&quot;}[1m]))/ ignoring(wait_type) sum(rate(sqlserver_waitstats_wait_time_ms{sql_instance='$Instance'}[1m]))</div></div>
<p>But we get another error message &#8230;</p>
<p><strong>Error executing query: multiple matches for labels: many-to-one matching must be explicit (group_left/group_right)</strong></p>
<p>In many-to-one or one-to-many vector matching with Prometheus, samples are selected using the group_left or group_right keywords. In other words, with this final query we are telling Prometheus to perform a cross join before dividing the values:</p>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap">sum by (wait_type) (rate(sqlserver_waitstats_wait_time_ms{sql_instance=&quot;$Instance&quot;}[1m]))/ ignoring(wait_type) group_left sum(rate(sqlserver_waitstats_wait_time_ms{sql_instance='$Instance'}[1m]))</div></div>
<p>Here we go!</p>
<p><a href="http://blog.developpez.com/mikedavem/files/2021/09/blog-177-ratio-per-label.png"><img src="http://blog.developpez.com/mikedavem/files/2021/09/blog-177-ratio-per-label-1024x149.png" alt="blog 177 - ratio per label" width="584" height="85" class="alignnone size-large wp-image-1830" /></a></p>
<p>We finally managed to calculate the ratio per wait type over a specified range interval. The last thing is to select the most relevant wait types by first excluding the irrelevant ones. Most of the excluded wait types come from the exclusion list provided by Paul Randal’s script. We also decided to focus only on the top 5 wait types with a ratio &gt; 10%, but it is up to you to change these values:</p>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap">topk(5, sum by (wait_type) (rate(sqlserver_waitstats_wait_time_ms{sql_instance='$Instance',measurement_db_type=&quot;SQLServer&quot;,wait_type!~'(BROKER_EVENTHANDLER|BROKER_RECEIVE_WAITFOR|BROKER_TASK_STOP|BROKER_TO_FLUSH|BROKER_TRANSMITTER|CHECKPOINT_QUEUE|CHKPT|CLR_AUTO_EVENT|CLR_MANUAL_EVENT|CLR_SEMAPHORE|DBMIRROR_DBM_EVENT|DBMIRROR_EVENTS_QUEUE|DBMIRROR_WORKER_QUEUE|DBMIRRORING_CMD|DIRTY_PAGE_POLL|DISPATCHER_QUEUE_SEMAPHORE|EXECSYNC|FSAGENT|FT_IFTS_SCHEDULER_IDLE_WAIT|FT_IFTSHC_MUTEX|KSOURCE_WAKEUP|LAZYWRITER_SLEEP|LOGMGR_QUEUE|MEMORY_ALLOCATION_EXT|ONDEMAND_TASK_QUEUE|PARALLEL_REDO_DRAIN_WORKER|PARALLEL_REDO_LOG_CACHE|PARALLEL_REDO_TRAN_LIST|PARALLEL_REDO_WORKER_SYNC|PARALLEL_REDO_WORKER_WAIT_WORK|PREEMPTIVE_OS_FLUSHFILEBUFFERS|PREEMPTIVE_XE_GETTARGETSTATE|PWAIT_ALL_COMPONENTS_INITIALIZED|PWAIT_DIRECTLOGCONSUMER_GETNEXT|QDS_PERSIST_TASK_MAIN_LOOP_SLEEP|QDS_ASYNC_QUEUE|QDS_CLEANUP_STALE_QUERIES_TASK_MAIN_LOOP_SLEEP|QDS_SHUTDOWN_QUEUE|REDO_THREAD_PENDING_WORK|REQUEST_FOR_DEADLOCK_SEARCH|RESOURCE_QUEUE|SERVER_IDLE_CHECK|SLEEP_BPOOL_FLUSH|SLEEP_DBSTARTUP|SLEEP_DCOMSTARTUP|SLEEP_MASTERDBREADY|SLEEP_MASTERMDREADY|SLEEP_MASTERUPGRADED|SLEEP_MSDBSTARTUP|SLEEP_SYSTEMTASK|SLEEP_TASK|SLEEP_TEMPDBSTARTUP|SNI_HTTP_ACCEPT|SOS_WORK_DISPATCHER|SP_SERVER_DIAGNOSTICS_SLEEP|SQLTRACE_BUFFER_FLUSH|SQLTRACE_INCREMENTAL_FLUSH_SLEEP|SQLTRACE_WAIT_ENTRIES|VDI_CLIENT_OTHER|WAIT_FOR_RESULTS|WAITFOR|WAITFOR_TASKSHUTDOW|WAIT_XTP_RECOVERY|WAIT_XTP_HOST_WAIT|WAIT_XTP_OFFLINE_CKPT_NEW_LOG|WAIT_XTP_CKPT_CLOSE|XE_DISPATCHER_JOIN|XE_DISPATCHER_WAIT|XE_TIMER_EVENT|MEMORY_ALLOCATION_EXT|ONDEMAND_TASK_QUEUE|PREEMPTIVE_HADR_LEASE_MECHANISM|PREEMPTIVE_SP_SERVER_DIAGNOSTICS|PREEMPTIVE_ODBCOPS|PREEMPTIVE_OS_LIBRARYOPS|PREEMPTIVE_OS_COMOPS|PREEMPTIVE_OS_CRYPTOPS|PREEMPTIVE_OS_PIPEOPS|PREEMPTIVE_OS_AUTHENTICATIONOPS|PREEMPTIVE_OS_GENERICOPS|PREEMPTIVE_OS_VERIFYTRUST|PREEMPTIVE_OS_FILEOPS|PREEMPTIVE_OS_DEVICEOPS|PREEMPTIVE_OS_QUERYREGISTRY|PREEMPTIVE_OS_WRITEFILE|PREEMPTIVE_XE_CALLBACKEXECUTEPREEMPTIVE_XE_DISPATCHER|PREEMPTIVE_XE_GETTARGETSTATEPREEMPTIVE_XE_SESSIONCOMMIT|PREEMPTIVE_XE_TARGETINITPREEMPTIVE_XE_TARGETFINALIZE|PREEMPTIVE_XHTTP|PWAIT_EXTENSIBILITY_CLEANUP_TASK|PREEMPTIVE_OS_DISCONNECTNAMEDPIPE|PREEMPTIVE_OS_DELETESECURITYCONTEXT|PREEMPTIVE_OS_CRYPTACQUIRECONTEXT|PREEMPTIVE_HTTP_REQUEST|RESOURCE_GOVERNOR_IDLE|HADR_FABRIC_CALLBACK|PVS_PREALLOCATE)'}[1m])) / ignoring(wait_type) group_left 
sum(rate(sqlserver_waitstats_wait_time_ms{sql_instance='$Instance',measurement_db_type=&quot;SQLServer&quot;,wait_type!~'(BROKER_EVENTHANDLER|BROKER_RECEIVE_WAITFOR|BROKER_TASK_STOP|BROKER_TO_FLUSH|BROKER_TRANSMITTER|CHECKPOINT_QUEUE|CHKPT|CLR_AUTO_EVENT|CLR_MANUAL_EVENT|CLR_SEMAPHORE|DBMIRROR_DBM_EVENT|DBMIRROR_EVENTS_QUEUE|DBMIRROR_WORKER_QUEUE|DBMIRRORING_CMD|DIRTY_PAGE_POLL|DISPATCHER_QUEUE_SEMAPHORE|EXECSYNC|FSAGENT|FT_IFTS_SCHEDULER_IDLE_WAIT|FT_IFTSHC_MUTEX|KSOURCE_WAKEUP|LAZYWRITER_SLEEP|LOGMGR_QUEUE|MEMORY_ALLOCATION_EXT|ONDEMAND_TASK_QUEUE|PARALLEL_REDO_DRAIN_WORKER|PARALLEL_REDO_LOG_CACHE|PARALLEL_REDO_TRAN_LIST|PARALLEL_REDO_WORKER_SYNC|PARALLEL_REDO_WORKER_WAIT_WORK|PREEMPTIVE_OS_FLUSHFILEBUFFERS|PREEMPTIVE_XE_GETTARGETSTATE|PWAIT_ALL_COMPONENTS_INITIALIZED|PWAIT_DIRECTLOGCONSUMER_GETNEXT|QDS_PERSIST_TASK_MAIN_LOOP_SLEEP|QDS_ASYNC_QUEUE|QDS_CLEANUP_STALE_QUERIES_TASK_MAIN_LOOP_SLEEP|QDS_SHUTDOWN_QUEUE|REDO_THREAD_PENDING_WORK|REQUEST_FOR_DEADLOCK_SEARCH|RESOURCE_QUEUE|SERVER_IDLE_CHECK|SLEEP_BPOOL_FLUSH|SLEEP_DBSTARTUP|SLEEP_DCOMSTARTUP|SLEEP_MASTERDBREADY|SLEEP_MASTERMDREADY|SLEEP_MASTERUPGRADED|SLEEP_MSDBSTARTUP|SLEEP_SYSTEMTASK|SLEEP_TASK|SLEEP_TEMPDBSTARTUP|SNI_HTTP_ACCEPT|SOS_WORK_DISPATCHER|SP_SERVER_DIAGNOSTICS_SLEEP|SQLTRACE_BUFFER_FLUSH|SQLTRACE_INCREMENTAL_FLUSH_SLEEP|SQLTRACE_WAIT_ENTRIES|VDI_CLIENT_OTHER|WAIT_FOR_RESULTS|WAITFOR|WAITFOR_TASKSHUTDOW|WAIT_XTP_RECOVERY|WAIT_XTP_HOST_WAIT|WAIT_XTP_OFFLINE_CKPT_NEW_LOG|WAIT_XTP_CKPT_CLOSE|XE_DISPATCHER_JOIN|XE_DISPATCHER_WAIT|XE_TIMER_EVENT|MEMORY_ALLOCATION_EXT|ONDEMAND_TASK_QUEUE|PREEMPTIVE_HADR_LEASE_MECHANISM|PREEMPTIVE_SP_SERVER_DIAGNOSTICS|PREEMPTIVE_ODBCOPS|PREEMPTIVE_OS_LIBRARYOPS|PREEMPTIVE_OS_COMOPS|PREEMPTIVE_OS_CRYPTOPS|PREEMPTIVE_OS_PIPEOPS|PREEMPTIVE_OS_AUTHENTICATIONOPS|PREEMPTIVE_OS_GENERICOPS|PREEMPTIVE_OS_VERIFYTRUST|PREEMPTIVE_OS_FILEOPS|PREEMPTIVE_OS_DEVICEOPS|PREEMPTIVE_OS_QUERYREGISTRY|PREEMPTIVE_OS_WRITEFILE|PREEMPTIVE_XE_CALLBACKEXECUTEPREEMPTIVE_XE_DISPATCHER|PREEMPTIVE_XE_GETTARGETSTATEPREEMPTIVE_XE_SESSIONCOMMIT|PREEMPTIVE_XE_TARGETINITPREEMPTIVE_XE_TARGETFINALIZE|PREEMPTIVE_XHTTP|PWAIT_EXTENSIBILITY_CLEANUP_TASK|PREEMPTIVE_OS_DISCONNECTNAMEDPIPE|PREEMPTIVE_OS_DELETESECURITYCONTEXT|PREEMPTIVE_OS_CRYPTACQUIRECONTEXT|PREEMPTIVE_HTTP_REQUEST|RESOURCE_GOVERNOR_IDLE|HADR_FABRIC_CALLBACK|PVS_PREALLOCATE)'}[1m]))) &amp;gt;= 0.1</div></div>
<p>I got 3 relevant wait types with their corresponding ratios in the specified time range.</p>
<p><a href="http://blog.developpez.com/mikedavem/files/2021/09/blog-177-ratio-per-label-top-5.png"><img src="http://blog.developpez.com/mikedavem/files/2021/09/blog-177-ratio-per-label-top-5-1024x67.png" alt="blog 177 - ratio per label top 5" width="584" height="38" class="alignnone size-large wp-image-1832" /></a></p>
<p>Pretty cool stuff, but we must now go through the second requirement: graphing the average value of the identified wait types within a specified time range in a Grafana dashboard. The first step consists of including the above Prometheus query as a variable in the Grafana dashboard. Here is how I set up my Top5Waits variable in Grafana:</p>
<p><a href="http://blog.developpez.com/mikedavem/files/2021/09/blog-177-granafa-top5waits.png"><img src="http://blog.developpez.com/mikedavem/files/2021/09/blog-177-granafa-top5waits-1024x501.png" alt="blog 177 - granafa top5waits" width="584" height="286" class="alignnone size-large wp-image-1833" /></a></p>
<p>Some interesting points here: variable dependency kicks in because my $Top5Waits variable depends hierarchically on another $Instance variable in my dashboard (populated by another Prometheus query). You have probably noticed the use of [${__range_s}s] to determine the range interval, but depending on your context the Grafana $__interval may be a good fit as well. </p>
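<p>For illustration, the PromQL expression at the heart of the $Top5Waits variable boils down to the stripped-down sketch below (the real query also carries the full wait-type exclusion list shown earlier; in Grafana, such an expression is typically wrapped in query_result() with a regex to extract the wait_type label, as in the screenshot above):</p>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap">topk(5, sum by (wait_type) (rate(sqlserver_waitstats_wait_time_ms{sql_instance=&quot;$Instance&quot;}[${__range_s}s])) / ignoring(wait_type) group_left sum(rate(sqlserver_waitstats_wait_time_ms{sql_instance=&quot;$Instance&quot;}[${__range_s}s]))) &gt;= 0.1</div></div>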
<p>In turn, $Top5Waits can be used by another query, this time directly in a Grafana dashboard panel, to show the average value of the most relevant wait types as shown below:</p>
<p><a href="http://blog.developpez.com/mikedavem/files/2021/09/blog-177-grafana-avg-wait-stats.png"><img src="http://blog.developpez.com/mikedavem/files/2021/09/blog-177-grafana-avg-wait-stats-1024x400.png" alt="blog 177 - grafana avg wait stats" width="584" height="228" class="alignnone size-large wp-image-1834" /></a></p>
<p>Calculating the wait type average is not a hard task by itself. In fact, we can apply the same method as previously by matching the sqlserver_waitstats_wait_time_ms and sqlserver_waitstats_waiting_tasks_count metrics and dividing their corresponding values to obtain the average wait time (in ms) for each step within the time range (remember how the rate() function works). Both metrics carry the same set of labels, so we don’t need to use the &laquo;&nbsp;on&nbsp;&raquo; or &laquo;&nbsp;ignoring&nbsp;&raquo; keywords in this case. But we must introduce the $Top5Waits variable in the label filters as follows:</p>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap">rate(sqlserver_waitstats_wait_time_ms{sql_instance='$Instance',wait_type=~&quot;$Top5Waits&quot;,measurement_db_type=&quot;SQLServer&quot;}[$__rate_interval])/rate(sqlserver_waitstats_waiting_tasks_count{sql_instance='$Instance',wait_type=~&quot;$Top5Waits&quot;,measurement_db_type=&quot;SQLServer&quot;}[$__rate_interval])</div></div>
<p>We finally managed to get an interesting dynamic measurement of SQL Server telemetry wait stats. Hope this blog post helps!<br />
Let me know your feedback if you are using SQL Server wait stats with Prometheus and Grafana in a different way!</p>
]]></content:encoded>
			<wfw:commentRss></wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>FinOps with Azure Cost management and Azure Log Analytics</title>
		<link>https://blog.developpez.com/mikedavem/p13208/sql-azure/finops-with-azure-cost-management-and-azure-log-analytics</link>
		<comments>https://blog.developpez.com/mikedavem/p13208/sql-azure/finops-with-azure-cost-management-and-azure-log-analytics#comments</comments>
		<pubDate>Wed, 12 May 2021 15:37:47 +0000</pubDate>
		<dc:creator><![CDATA[mikedavem]]></dc:creator>
				<category><![CDATA[DevOps]]></category>
		<category><![CDATA[SQL Azure]]></category>
		<category><![CDATA[Azure]]></category>
		<category><![CDATA[devops]]></category>
		<category><![CDATA[finops]]></category>
		<category><![CDATA[Log Analytics]]></category>
		<category><![CDATA[observability]]></category>

		<guid isPermaLink="false">http://blog.developpez.com/mikedavem/?p=1801</guid>
		<description><![CDATA[In a previous blog post, I surfaced Azure monitor capabilities for extending observability of Azure SQL databases. We managed to correlate different metrics and SQL logs to identify new execution patterns against our Azure SQL DB, and we finally go &#8230; <a href="https://blog.developpez.com/mikedavem/p13208/sql-azure/finops-with-azure-cost-management-and-azure-log-analytics">Lire la suite <span class="meta-nav">&#8594;</span></a>]]></description>
				<content:encoded><![CDATA[<p>In a <a href="https://blog.developpez.com/mikedavem/p13205/sql-azure/azure-monitor-as-observability-platform-for-azure-sql-databases" rel="noopener" target="_blank">previous blog post</a>, I surfaced Azure Monitor capabilities for extending the observability of Azure SQL databases. We managed to correlate different metrics and SQL logs to identify new execution patterns against our Azure SQL DB, and we finally went with a new compute tier model that fits our new context better. In this blog post, I would like to share some new experience combining Azure cost analysis and Azure Log Analytics to spot an “abnormal” trend and fix it. </p>
<p><span id="more-1801"></span></p>
<p>If you deal with cloud services and infrastructure, FinOps is a discipline you should get into to keep your costs under control and get actionable insights that can lead to more efficient cloud spending. Azure Cost Management provides visibility and control. Azure cost analysis is my favorite tool when I want to figure out the costs of the different services and visualize improvements after applying quick wins or architecture upgrades to the environment. It is also a good place to identify stale resources to clean up. I will focus on Azure SQL DB here. From a cost perspective, the Azure SQL DB service includes different meter subcategories depending on the options and the service tier you use. You may have to pay for the compute, the dedicated storage for your database, your backups (PITR or LTR) and so on &#8230; Cost analysis allows drill-down analysis through different axes with aggregation and forecast capabilities. </p>
<p>In our context, we would like to know if moving from the Azure SQL DB serverless compute tier (pay-as-you-go) to the provisioned tier (+ Azure Hybrid Benefit + reserved instances for 3 years) has a positive effect on costs. A first look at the cost analysis section, after applying the correct filters and aggregating data by compute tier, confirmed our initial assumption that serverless no longer fit our context. The chart uses a monthly timeframe with daily aggregation. We switched to a different model in mid-April as shown below:</p>
<p><a href="http://blog.developpez.com/mikedavem/files/2021/05/blog-176-serveless-vs-compute-tier-e1620817381744.png"><img src="http://blog.developpez.com/mikedavem/files/2021/05/blog-176-serveless-vs-compute-tier-1024x355.png" alt="blog 176 - serveless vs compute tier" width="584" height="202" class="alignnone size-large wp-image-1802" /></a></p>
<p>Real numbers are confidential but not so important here. We can easily notice a drop in daily cost (~ 0.5) between the serverless and provisioned compute tiers. </p>
<p>If we take a higher-level view of all services and costs for the previous months, the trend is also confirmed for April, with the combined serverless + provisioned tier costs lower than the serverless compute tier alone in previous months. But we need to wait for the next months to confirm the trend. </p>
<p><a href="http://blog.developpez.com/mikedavem/files/2021/05/blog-176-compute-vs-backup-storage-e1620817431402.png"><img src="http://blog.developpez.com/mikedavem/files/2021/05/blog-176-compute-vs-backup-storage-1024x727.png" alt="blog 176 - compute vs backup storage" width="584" height="415" class="alignnone size-large wp-image-1803" /></a></p>
<p>At the same time (and this is the focus of this write-up), we detected a sudden increase in backup storage cost in March that could ruin our optimization efforts on compute, right? :) To explain this new trend, Log Analytics came to the rescue. As explained in the previous blog post, we configured streaming of the Azure SQL DB telemetry to a Log Analytics target to benefit from solutions like SQL Insights and custom queries across the different Azure logs. </p>
<p>Basic metrics are part of the Azure SQL DB telemetry and are stored in the AzureMetrics table. We can use a Kusto query to extract backup metrics and get an idea of the trend of each backup type over time, including FULL, DIFF and LOG backups. The following query shows backup trends within the same timeframe used for billing in cost management (February to May). It also uses the <a href="https://docs.microsoft.com/en-us/azure/data-explorer/kusto/query/series-fit-linefunction" rel="noopener" target="_blank">series_fit_line</a> function to draw a trendline in the time chart.</p>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap">AzureMetrics<br />
| where TimeGenerated &gt;= ago(90d)<br />
| where Resource == 'myDB'<br />
| where MetricName == 'full_backup_size_bytes' // in ('full_backup_size_bytes','diff_backup_size_bytes','log_backup_size_bytes')<br />
| make-series SizeBackupDiffTB=max(Maximum/1024/1024/1024/1024) on TimeGenerated in range(ago(90d),now(), 1d)<br />
| extend (RSquare,Slope,Variance,RVariance,Interception,TrendLine)=series_fit_line(SizeBackupDiffTB)<br />
| render timechart</div></div>
<p><strong>Full backup time chart</strong></p>
<p><a href="http://blog.developpez.com/mikedavem/files/2021/05/blog-176-backup-full-trend-e1620817591328.png"><img src="http://blog.developpez.com/mikedavem/files/2021/05/blog-176-backup-full-trend-1024x464.png" alt="blog 176 - backup full trend" width="584" height="265" class="alignnone size-large wp-image-1804" /></a></p>
<p>FULL backup size is relatively steady and cannot explain the sudden increase in backup storage cost in our case. </p>
<p><strong>DIFF and LOG backup time chart</strong></p>
<p><a href="http://blog.developpez.com/mikedavem/files/2021/05/blog-176-backup-diff-trend-e1620832494315.png"><img src="http://blog.developpez.com/mikedavem/files/2021/05/blog-176-backup-diff-trend-1024x463.png" alt="blog 176 - backup diff trend" width="584" height="264" class="alignnone size-large wp-image-1806" /></a></p>
<p>&#8230;</p>
<p><a href="http://blog.developpez.com/mikedavem/files/2021/05/blog-176-backup-log-trend-e1620832522947.png"><img src="http://blog.developpez.com/mikedavem/files/2021/05/blog-176-backup-log-trend-1024x469.png" alt="blog 176 - backup log trend" width="584" height="267" class="alignnone size-large wp-image-1807" /></a></p>
<p>The LOG and DIFF backup charts are more relevant, and the trendlines suggest a noticeable change starting mid-March, where the trendline starts misaligning with the backup size series. </p>
<p>At this stage we had found the cause of the cost increase, but we were interested in understanding the reasons that may explain such a trend. After investigating our ITSM system, we were able to find a correlation with the deployment of a new maintenance tool &#8211; the <a href="https://ola.hallengren.com/" rel="noopener" target="_blank">Ola Hallengren</a> maintenance solution plus custom scripts to rebuild columnstore indexes. The latter aggressively rebuilds 2 big fact tables with clustered columnstore indexes (CCI) in our DW (unlike the former tool), which explains the increase in DIFF and LOG backup sizes (~ 1TB). </p>
<p><a href="http://blog.developpez.com/mikedavem/files/2021/05/blog-176-fact-tables.png"><img src="http://blog.developpez.com/mikedavem/files/2021/05/blog-176-fact-tables.png" alt="blog 176 - fact tables" width="664" height="120" class="alignnone size-full wp-image-1808" /></a></p>
<p>This is where collaboration with the data engineering team starts, to find an efficient and durable way to minimize the impact of the maintenance:</p>
<p>&#8211; Reviewing the custom script threshold may result in a more relaxed detection of fragmented columnstore indexes. However, this is only a piece of the solution because once a columnstore index becomes a candidate for the next maintenance operation, it leads to a resource-intensive and time-consuming operation (&gt; 2.5h dedicated to these two tables). We are using Azure Automation jobs with fair share to execute the maintenance and we are limited to 3h max per job execution. We could use a divide-and-conquer strategy to fit within the permitted execution timeframe, but it would add complexity and we want to keep maintenance as simple as possible. </p>
<p>&#8211; We need to find another way to keep the execution time of index and statistics maintenance jobs under control. Introducing partitioning for these tables is probably a good catch and another piece of the solution. Indeed, the concerned tables are currently not partitioned, and we could benefit from partition-level maintenance for both indexes and statistics, as sketched below.</p>
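<p>For illustration only, partition-level maintenance boils down to statements like the following (the table, index and partition number are hypothetical, and the statistics variant assumes incremental statistics are enabled):</p>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap">-- rebuild only the partition of the clustered columnstore index that needs it<br />
ALTER INDEX cci_FactSales ON dbo.FactSales REBUILD PARTITION = 42;<br />
<br />
-- refresh the statistics of that partition only (requires incremental statistics)<br />
UPDATE STATISTICS dbo.FactSales (stats_FactSales) WITH RESAMPLE ON PARTITIONS (42);</div></div>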
<p><strong>Bottom line</strong></p>
<p>Azure Cost Management and Log Analytics are a powerful recipe in the FinOps practice. The Kusto query language is a flexible tool for finding and correlating all kinds of log entries and events, assuming you configured telemetry to the right target. I definitely like annotation-like systems such as the one we are using with Grafana because they make correlation with external changes and workflows easier. Next step: investigate annotations on metric charts in Application Insights? </p>
<p>See you!!</p>
]]></content:encoded>
			<wfw:commentRss></wfw:commentRss>
		<slash:comments>1</slash:comments>
		</item>
		<item>
		<title>Creating dynamic Grafana dashboard for SQL Server</title>
		<link>https://blog.developpez.com/mikedavem/p13207/sql-server-2008-r2/creating-dynamic-grafana-dashboard-for-sql-server</link>
		<comments>https://blog.developpez.com/mikedavem/p13207/sql-server-2008-r2/creating-dynamic-grafana-dashboard-for-sql-server#comments</comments>
		<pubDate>Sun, 11 Apr 2021 19:52:09 +0000</pubDate>
		<dc:creator><![CDATA[mikedavem]]></dc:creator>
				<category><![CDATA[DevOps]]></category>
		<category><![CDATA[SQL Server 2008 R2]]></category>
		<category><![CDATA[SQL Server 2014]]></category>
		<category><![CDATA[SQL Server 2016]]></category>
		<category><![CDATA[SQL Server 2017]]></category>
		<category><![CDATA[SQL Server 2019]]></category>
		<category><![CDATA[AlwaysOn;groupes de disponibilité;availability groups]]></category>
		<category><![CDATA[grafana]]></category>
		<category><![CDATA[monitoring]]></category>
		<category><![CDATA[observability]]></category>
		<category><![CDATA[prometheus]]></category>
		<category><![CDATA[SQL Server]]></category>

		<guid isPermaLink="false">http://blog.developpez.com/mikedavem/?p=1784</guid>
		<description><![CDATA[A couple of months ago I wrote about “Why we moved SQL Server monitoring to Prometheus and Grafana”. I talked about the creation of two dashboards. The first one is blackbox monitoring-oriented and aims to spot in (near) real-time resource &#8230; <a href="https://blog.developpez.com/mikedavem/p13207/sql-server-2008-r2/creating-dynamic-grafana-dashboard-for-sql-server">Lire la suite <span class="meta-nav">&#8594;</span></a>]]></description>
				<content:encoded><![CDATA[<p>A couple of months ago I wrote about “<a href="https://blog.developpez.com/mikedavem/p13203/sql-server-2014/why-we-moved-sql-server-monitoring-on-prometheus-and-grafana" rel="noopener" target="_blank">Why we moved SQL Server monitoring to Prometheus and Grafana</a>”. I talked about the creation of two dashboards. The first one is blackbox monitoring-oriented and aims to spot, in (near) real time, resource pressure / saturation issues with self-explanatory gauges, numbers and colors indicating healthy (green) or unhealthy (orange / red) resources. We also include an availability group synchronization health metric in the dashboard, and this is the one we will focus on in this write-up.</p>
<p><span id="more-1784"></span></p>
<p><a href="http://blog.developpez.com/mikedavem/files/2021/04/174-1-mssql-dashboard.jpg"><img src="http://blog.developpez.com/mikedavem/files/2021/04/174-1-mssql-dashboard-1024x158.jpg" alt="174 - 1 - mssql dashboard" width="584" height="90" class="alignnone size-large wp-image-1785" /></a></p>
<p>As a reminder, this Grafana dashboard gets its information from a Prometheus server and metrics related to MSSQL environments. For the sake of clarity, in this dashboard an environment defines one availability group and a set of 2 AG replicas (A and B) in synchronous replication mode. In other words, the <strong>ENV1</strong> value corresponds to the availability group name and to the SQL instance names that are members of the AG: <strong>SERVERA\ENV1</strong> (first replica) and <strong>SERVERB\ENV1</strong> (second replica). </p>
<p>In the picture above, you can notice 2 sections. One is for availability group health monitoring and the second includes a set of black box metrics related to saturation and latency (CPU, RAM, network, AG replication delay, SQL buffer pool, blocked processes &#8230;). Good enough for one single environment, but what if I want to bring more availability groups and SQL instances into the game?</p>
<p>The first and easiest (or naïve) way, which we went through when we started writing this dashboard, was to copy / paste all the panels built for one environment, as shown below:</p>
<p><a href="http://blog.developpez.com/mikedavem/files/2021/04/174-2-mssql-dashboard-static.jpg"><img src="http://blog.developpez.com/mikedavem/files/2021/04/174-2-mssql-dashboard-static-1024x242.jpg" alt="174 - 2 - mssql dashboard static" width="584" height="138" class="alignnone size-large wp-image-1786" /></a></p>
<p>After creating a new row (which corresponds to a section in the present context) at the bottom, all panels were copied from ENV1 to the fresh new ENV2 section. A new row is created by converting a new panel into a row as shown below:</p>
<p><a href="http://blog.developpez.com/mikedavem/files/2021/04/174-3-convert-panel-to-row.jpg"><img src="http://blog.developpez.com/mikedavem/files/2021/04/174-3-convert-panel-to-row-1024x199.jpg" alt="174 - 3 - convert panel to row" width="584" height="113" class="alignnone size-large wp-image-1787" /></a></p>
<p>Then I need to manually modify ALL the new metrics for the new environment. Let’s illustrate the point with the Batch Requests/sec metric as an example. The corresponding Prometheus query for the first replica (A) is as follows (the initial query has been simplified for the purpose of this blog post):</p>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap">irate(sqlserver_performance{sql_instance='SERVERA:ENV1',counter=&quot;Batch Requests/sec&quot;}[$__range])</div></div>
<p>The same query exists for the secondary replica (B) but with a different label value:</p>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap">irate(sqlserver_performance{sql_instance='SERVERB:ENV1',counter=&quot;Batch Requests/sec&quot;}[$__range])</div></div>
<p>SERVERA:ENV1 and SERVERB:ENV1 are static values that correspond to the name of each SQL Server instance &#8211; respectively SERVERA\ENV1 and SERVERB\ENV1. As you probably already guessed, and according to our naming convention, for the new environment and related panels we simply replaced the initial ENV1 values with the new ENV2 ones. But having more environments, or providing filtering capabilities to focus only on specific environments, makes the current process tedious and we need to introduce something dynamic into the game &#8230; Good news: Grafana provides such capabilities with dynamic creation of rows and panels. </p>
<p><strong>Generating dynamic panels in the same section (row)</strong></p>
<p>Referring to the dashboard, the first section concerns the availability group health metric. When adding a new environment – meaning a new availability group – we want a new dedicated panel created automatically in the same section (AG health).<br />
Firstly, we need to add a multi-value variable to the dashboard. Values can be static or populated dynamically from another query; it is up to you to choose the right solution according to your context.</p>
<p><a href="http://blog.developpez.com/mikedavem/files/2021/04/174-4-grafana_variable.jpg"><img src="http://blog.developpez.com/mikedavem/files/2021/04/174-4-grafana_variable.jpg" alt="174 - 4 - grafana_variable" width="968" height="505" class="alignnone size-full wp-image-1789" /></a></p>
<p>Once created, a drop-down list appears at the upper left of the dashboard and we can now select multiple environments or filter down to specific ones.</p>
<p><a href="http://blog.developpez.com/mikedavem/files/2021/04/174-5-grafana_variable.jpg"><img src="http://blog.developpez.com/mikedavem/files/2021/04/174-5-grafana_variable.jpg" alt="174 - 5 - grafana_variable" width="202" height="346" class="alignnone size-full wp-image-1790" /></a></p>
<p>Then we need to make the panel in the AG Health section dynamic as follows:<br />
&#8211; Change the title value to include the corresponding variable (optional)<br />
&#8211; Configure the repeat option with the variable (mandatory). You can also define the max number of panels per row</p>
<p><a href="http://blog.developpez.com/mikedavem/files/2021/04/174-6-panel-variabilisation.jpg"><img src="http://blog.developpez.com/mikedavem/files/2021/04/174-6-panel-variabilisation.jpg" alt="174 - 6 - panel variabilisation" width="279" height="414" class="alignnone size-full wp-image-1792" /></a></p>
<p>With this setup, we can display a maximum of 4 panels (or availability groups) per row. The 5th will be created and placed on a new line in the same section, as shown below:<br />
<a href="http://blog.developpez.com/mikedavem/files/2021/04/174-7-panel-same-section.jpg"><img src="http://blog.developpez.com/mikedavem/files/2021/04/174-7-panel-same-section-1024x125.jpg" alt="174 - 7 - panel same section" width="584" height="71" class="alignnone size-large wp-image-1793" /></a></p>
<p>Finally, we must replace the static label values defined in the query with their variable counterpart. For the availability group we are using the <strong>sqlserver_hadr_replica_states_replica_synchronization_health</strong> metric as follows (again, I voluntarily show only a sample of the entire query for simplicity):</p>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap">… sqlserver_hadr_replica_states_replica_synchronization_health{sql_instance=~'SERVER[A|B]:$ENV',measurement_db_type=&quot;SQLServer&quot;}) …</div></div>
<p>You can notice the regex used to get information from the SQL instances, either the primary (A) or the secondary (B). The most interesting part concerns the environment, which is now dynamic thanks to the $ENV variable.</p>
<p><strong>Generating dynamic sections (rows)</strong></p>
<p>As said previously, sections are in fact rows in the Grafana dashboard and rows can contain panels. If we add a new environment, we also want to see a new section (and panels) related to it. Configuring dynamic rows is pretty similar to panels: we only need to set the “Repeat for” option with the environment variable as follows (the title remains optional):</p>
<p><a href="http://blog.developpez.com/mikedavem/files/2021/04/174-8-row.jpg"><img src="http://blog.developpez.com/mikedavem/files/2021/04/174-8-row-1024x173.jpg" alt="174 - 8 - row" width="584" height="99" class="alignnone size-large wp-image-1794" /></a></p>
<p>As for the AG Health panel, we also need to replace the static label values in ALL panels with the new environment variable. Thus, referring to the previous Batch Requests/sec example, the updated Prometheus queries will be as follows (respectively for the primary and secondary replicas):</p>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap">irate(sqlserver_performance{sql_instance='SERVERA:$ENV',counter=&quot;Batch Requests/sec&quot;}[$__range])</div></div>
<p>&#8230;</p>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap">irate(sqlserver_performance{sql_instance='SERVERB:$ENV',counter=&quot;Batch Requests/sec&quot;}[$__range])</div></div>
<p>The dashboard is now ready, and all the dynamic behavior kicks in when a new SQL Server instance is added to the list of monitored items. Here is an example of the outcome in our context:</p>
<p><a href="http://blog.developpez.com/mikedavem/files/2021/04/174-0-final-dashboard.jpg"><img src="http://blog.developpez.com/mikedavem/files/2021/04/174-0-final-dashboard-1024x404.jpg" alt="174 - 0 - final dashboard" width="584" height="230" class="alignnone size-large wp-image-1795" /></a></p>
<p>Happy monitoring!</p>
]]></content:encoded>
			<wfw:commentRss></wfw:commentRss>
		<slash:comments>1</slash:comments>
		</item>
		<item>
		<title>Azure monitor as observability platform for Azure SQL Databases and more</title>
		<link>https://blog.developpez.com/mikedavem/p13205/sql-azure/azure-monitor-as-observability-platform-for-azure-sql-databases</link>
		<comments>https://blog.developpez.com/mikedavem/p13205/sql-azure/azure-monitor-as-observability-platform-for-azure-sql-databases#comments</comments>
		<pubDate>Mon, 08 Feb 2021 16:57:26 +0000</pubDate>
		<dc:creator><![CDATA[mikedavem]]></dc:creator>
				<category><![CDATA[DevOps]]></category>
		<category><![CDATA[SQL Azure]]></category>
		<category><![CDATA[Azure Monitor]]></category>
		<category><![CDATA[Azure SQL Analytics]]></category>
		<category><![CDATA[Azure SQL Database]]></category>
		<category><![CDATA[devops]]></category>
		<category><![CDATA[Log Analytics]]></category>
		<category><![CDATA[observability]]></category>
		<category><![CDATA[performance]]></category>

		<guid isPermaLink="false">http://blog.developpez.com/mikedavem/?p=1762</guid>
		<description><![CDATA[In a previous blog post, I wrote about reasons we moved our monitoring of on-prem SQL Server instances on Prometheus and Grafana. But what about Cloud and database services? We have different options and obviously in my company we thought &#8230; <a href="https://blog.developpez.com/mikedavem/p13205/sql-azure/azure-monitor-as-observability-platform-for-azure-sql-databases">Lire la suite <span class="meta-nav">&#8594;</span></a>]]></description>
				<content:encoded><![CDATA[<p>In a previous <a href="https://blog.developpez.com/mikedavem/p13203/sql-server-2014/why-we-moved-sql-server-monitoring-on-prometheus-and-grafana" rel="noopener" target="_blank">blog post</a>, I wrote about the reasons we moved our monitoring of on-prem SQL Server instances to Prometheus and Grafana. But what about cloud database services? </p>
<p><span id="more-1762"></span></p>
<p>We have different options and, obviously, in my company we first thought of moving our Azure SQL Database workload telemetry to the on-prem central monitoring infrastructure as well. But the main blocker is the serverless compute tier: the Telegraf server agent would keep initiating connections that could prevent the database from auto-pausing, or at least it would make monitoring more complex because it would assume a predictable workload all the time. </p>
<p>The second option was to rely on Azure Monitor, which is a common platform combining several logging, monitoring and dashboarding solutions across a wide set of Azure resources. It is a scalable, fully managed platform that provides a powerful query language and native features like alerts when logs or metrics match specific conditions. Another important point is that there is little vendor lock-in with this solution, as we can always fall back to our self-hosted Prometheus and Grafana instances if the compute tier is no longer a blocker or if Azure Monitor is not an option anymore! </p>
<p>Firstly, to achieve good observability with Azure SQL Database we need to put both the diagnostic telemetry and the SQL Server audit events in a common Log Analytics workspace. A quick illustration below: </p>
<p><a href="http://blog.developpez.com/mikedavem/files/2021/02/173-0-Azure-SQL-DB-Monitor-architecture.jpg"><img src="http://blog.developpez.com/mikedavem/files/2021/02/173-0-Azure-SQL-DB-Monitor-architecture-1024x387.jpg" alt="173 - 0 - Azure SQL DB Monitor architecture" width="584" height="221" class="alignnone size-large wp-image-1763" /></a></p>
<p>Diagnostic settings are configured per database and include basic metrics (CPU, IO, memory, etc.) as well as different SQL Server internal metrics such as deadlocks, blocked processes or Query Store information about query execution statistics and waits. For more details please refer to the Microsoft <a href="https://docs.microsoft.com/en-us/azure/azure-sql/database/metrics-diagnostic-telemetry-logging-streaming-export-configure?tabs=azure-portal" rel="noopener" target="_blank">BOL</a>.</p>
<p>Azure SQL DB auditing is either a server-level or a database-level configuration setting. In our context, we defined a template of events at the server level which is then applied to all databases within the logical server. By default, 3 action groups are automatically audited:<br />
&#8211;	BATCH_COMPLETED_GROUP<br />
&#8211;	SUCCESSFUL_DATABASE_AUTHENTICATION_GROUP<br />
&#8211;	FAILED_DATABASE_AUTHENTICATION_GROUP</p>
<p>The first one of the list is probably debatable depending on the environment because of its impact, but in our context that&rsquo;s OK because we face a data warehouse workload. However, we added other ones to meet our security requirements:<br />
&#8211;	PERMISSION_CHANGE_GROUP<br />
&#8211;	DATABASE_PRINCIPAL_CHANGE_GROUP<br />
&#8211;	DATABASE_ROLE_MEMBER_CHANGE_GROUP<br />
&#8211;	USER_CHANGE_PASSWORD_GROUP</p>
<p>But if you look closely at Log Analytics as a target for SQL audits, you will notice it is still a feature in preview, as shown below:</p>
<p><a href="http://blog.developpez.com/mikedavem/files/2021/02/173-4-Audit-target.jpg"><img src="http://blog.developpez.com/mikedavem/files/2021/02/173-4-Audit-target.jpg" alt="173 - 4 - Audit target" width="484" height="153" class="alignnone size-full wp-image-1765" /></a></p>
<p>To be clear, we usually don’t consider using Azure preview features in production, especially when they remain in this state for a long time, but in this specific context we were interested in the observability capabilities of the platform. On the one hand, we get very useful performance insights through SQL Analytics dashboards (again in preview) and, on the other hand, we can easily query logs and traces through Log Analytics to correlate with other metrics. Obviously, we hope Microsoft moves a step further and makes this feature generally available in the near future. </p>
<p>Let’s talk briefly about SQL Analytics first. It is an advanced, free cloud monitoring solution for Azure SQL Database performance, and it relies mainly on your Azure diagnostic metrics and Azure Monitor views to present data in a structured way through performance dashboards.</p>
<p>Here is an example of the built-in dashboards we are using to track activity and high CPU / IO bound queries against our data warehouse.</p>
<p><a href="http://blog.developpez.com/mikedavem/files/2021/02/173-1-SQL-Analytics-general-dashboard-e1612797920282.jpg"><img src="http://blog.developpez.com/mikedavem/files/2021/02/173-1-SQL-Analytics-general-dashboard-1024x410.jpg" alt="173 - 1 - SQL Analytics general dashboard" width="584" height="234" class="alignnone size-large wp-image-1768" /></a></p>
<p>You can use the drill-down capabilities to reach different contextual dashboards and get insights into resource-intensive queries. For example, we identified some LOG IO intensive queries against a clustered columnstore index and, after refactoring an UPDATE statement into DELETE + INSERT, we drastically reduced LOG IO waits.</p>
<p><a href="http://blog.developpez.com/mikedavem/files/2021/02/173-2-SQL-Analytics-IO-e1612797960660.jpg"><img src="http://blog.developpez.com/mikedavem/files/2021/02/173-2-SQL-Analytics-IO-1024x316.jpg" alt="173 - 2 - SQL Analytics IO" width="584" height="180" class="alignnone size-large wp-image-1767" /></a></p>
<p>In addition, Azure Monitor helped us in another scenario where we tried to figure out recent workload patterns and to know whether the current compute tier still fits. As said previously, we rely on the serverless compute tier to handle the data warehouse-oriented workload with both auto-scaling and auto-pausing capabilities. At first glance, we might expect a typical nightly workload as illustrated in the Microsoft <a href="https://docs.microsoft.com/en-us/azure/azure-sql/database/serverless-tier-overview#:~:text=Serverless%20is%20a%20compute%20tier,of%20compute%20used%20per%20second." rel="noopener" target="_blank">BOL</a>, and a cost optimized for this workload:</p>
<p><a href="http://blog.developpez.com/mikedavem/files/2021/02/173-6-Serverless-pattern.jpg"><img src="http://blog.developpez.com/mikedavem/files/2021/02/173-6-Serverless-pattern.jpg" alt="173 - 6 - Serverless pattern" width="516" height="316" class="alignnone size-full wp-image-1769" /></a></p>
<p><em>Images from Microsoft BOL</em></p>
<p>It may have been true when the activity started on Azure, but the game has changed with new incoming projects over time. Starting with the general performance dashboard, the workload seems to follow the right pattern for the serverless compute tier, but we noticed billing kept going during unexpected timeframes as shown below. Note that I deliberately show only a sample of two days, but this pattern is a good representation of the general workload in our context. </p>
<p><a href="http://blog.developpez.com/mikedavem/files/2021/02/173-3-General-performance-dashboard.jpg"><img src="http://blog.developpez.com/mikedavem/files/2021/02/173-3-General-performance-dashboard-1024x556.jpg" alt="173 - 3 - General performance dashboard" width="584" height="317" class="alignnone size-large wp-image-1771" /></a></p>
<p>Indeed, the workload should be mostly nightly-oriented with sporadic activity during the day, but a quick correlation with other basic metrics like CPU or memory percentage usage confirmed persistent activity all day. We have CPU spikes and probably small batches that keep a minimum amount of memory around at other moments. </p>
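<p>That kind of correlation can also be scripted with a quick KQL query against the AzureMetrics table, where the basic diagnostic metrics land. The sketch below assumes the standard cpu_percent metric name and the same placeholder resource name as the audit queries:</p>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap">AzureMetrics<br />
| where TimeGenerated &gt;= ago(2d)<br />
| where Resource == 'xxxx' and MetricName == 'cpu_percent'<br />
| summarize avg(Average), max(Maximum) by bin(TimeGenerated, 1h)<br />
| render timechart</div></div>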
<p>As per the <a href="https://docs.microsoft.com/en-us/azure/azure-sql/database/serverless-tier-overview#:~:text=Serverless%20is%20a%20compute%20tier,of%20compute%20used%20per%20second." rel="noopener" target="_blank">Microsoft documentation</a>, the minimum auto-pausing delay is 1h and it requires an inactive database (number of sessions = 0 and CPU = 0 for the user workload) during this timeframe. Basic metrics didn’t provide any further insight about the connections, applications or users that could generate such &laquo;&nbsp;noisy&nbsp;&raquo; activity, so we had to go another way by looking at the SQL audit logs stored in Azure Monitor Logs. Data can be read through KQL, which stands for Kusto Query Language (and not Kibana Query Language <img src="https://blog.developpez.com/mikedavem/wp-includes/images/smilies/icon_smile.gif" alt=":-)" class="wp-smiley" /> ). It is the language used to query the Azure log databases (Azure Monitor Logs, Azure Monitor Application Insights and others) and it is pretty similar to SQL in its constructs. </p>
<p>Here is the first query I used to correlate the number of events that could prevent auto-pausing from kicking in for the concerned database, including RPC COMPLETED, BATCH COMPLETED, DATABASE AUTHENTICATION SUCCEEDED or DATABASE AUTHENTICATION FAILED:</p>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap">AzureDiagnostics<br />
| where Category == 'SQLSecurityAuditEvents' and (action_name_s in ('RPC COMPLETED','BATCH COMPLETED') or action_name_s contains &quot;DATABASE AUTHENTICATION&quot;) &nbsp;and LogicalServerName_s == 'xxxx' and database_name_s == 'xxxx'<br />
| summarize count() by bin(event_time_t, 1h),action_name_s<br />
| render columnchart</div></div>
<p>Results are aggregated and bucketed per hour on the generated event time with the bin() function. Finally, for a quick and easy read, I chose a simple and unformatted column chart render. Here is the outcome:</p>
<p><a href="http://blog.developpez.com/mikedavem/files/2021/02/173-7-Audit-per-hour-per-event-e1612798279257.jpg"><img src="http://blog.developpez.com/mikedavem/files/2021/02/173-7-Audit-per-hour-per-event-1024x459.jpg" alt="173 - 7 - Audit per hour per event" width="584" height="262" class="alignnone size-large wp-image-1772" /></a></p>
<p>As you probably noticed, the daily activity is pretty small compared to the nightly one and seems to consist of SQL batches and remote procedure calls. Even from this unclear picture, we can confirm that the daily workload is enough to keep the billing going, because there is no one-hour timeframe without any activity. </p>
<p>Let’s write another KQL query to draw a clearer picture of which applications ran during the daily timeframe 07:00 – 20:00:</p>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap">let start=datetime(&quot;2021-01-26&quot;);<br />
let end=datetime(&quot;2021-01-29&quot;);<br />
let dailystart=7;<br />
let dailyend=20;<br />
let timegrain=1d;<br />
AzureDiagnostics<br />
| project &nbsp;action_name_s, event_time_t, application_name_s, server_principal_name_s, Category, LogicalServerName_s, database_name_s<br />
| where Category == 'SQLSecurityAuditEvents' and (action_name_s in ('RPC COMPLETED','BATCH COMPLETED') or action_name_s contains &quot;DATABASE AUTHENTICATION&quot;) &nbsp;<br />
| where LogicalServerName_s == 'xxxx' and database_name_s == 'xxxx' <br />
| where event_time_t &gt; start and event_time_t &lt; end<br />
| where datetime_part(&quot;Hour&quot;,event_time_t) between (dailystart .. dailyend)<br />
| summarize count() by bin(event_time_t, 1h), application_name_s<br />
| render columnchart with (xtitle = 'Date', ytitle = 'Nb events', title = 'Prod SQL Workload pattern')</div></div>
<p>And here the new outcome:</p>
<p><a href="http://blog.developpez.com/mikedavem/files/2021/02/173-8-Audit-per-hour-per-application.jpg"><img src="http://blog.developpez.com/mikedavem/files/2021/02/173-8-Audit-per-hour-per-application-1024x380.jpg" alt="173 - 8 - Audit per hour per application" width="584" height="217" class="alignnone size-large wp-image-1774" /></a></p>
<p>The new chart reveals some activity from SQL Server Management Studio, but most of it concerns applications using the .NET SQL Data Provider. For better clarity, we needed more information about those applications and, in my context, I managed to address the point by reducing the search scope to the service principal name that issued the related audit event. It results in this new outcome, which is pretty similar to the previous one:</p>
<p><a href="http://blog.developpez.com/mikedavem/files/2021/02/173-9-Audit-per-hour-per-sp.jpg"><img src="http://blog.developpez.com/mikedavem/files/2021/02/173-9-Audit-per-hour-per-sp-1024x362.jpg" alt="173 - 9 - Audit per hour per sp" width="584" height="206" class="alignnone size-large wp-image-1775" /></a></p>
<p>Good job so far. For the sake of clarity, the service principal obfuscated above is used by our reporting infrastructure and its reports to get data from this data warehouse.  By investigating the daily activity at different moments on the concerned Azure SQL database this way, we came to the conclusion that using the Serverless compute tier didn’t make sense anymore and that we likely need to move to another compute tier.</p>
<p><strong>Additional thoughts</strong></p>
<p>Azure Monitor is definitely a must-have if you are running resources on Azure and don’t own a platform for observability (metrics, logs and traces). Otherwise, it can even be beneficial to free up your on-prem monitoring infrastructure resources if scalability is a concern. Furthermore, there is no vendor lock-in: you can decide to stream Azure Monitor data somewhere else, at the cost of additional network transfer fees depending on the target scenario. For example, Azure Monitor can be used directly as a data source in Grafana, Azure SQL telemetry can be collected with the Telegraf agent, and audit logs can be recorded in another logging system like Kibana. In this blog post we only scratched the surface of the Azure Monitor capabilities but, as demonstrated above, performing deep correlation analysis across different sources in a very few steps is a strong point of this platform.</p>
]]></content:encoded>
			<wfw:commentRss></wfw:commentRss>
		<slash:comments>2</slash:comments>
		</item>
		<item>
		<title>Extending SQL Server monitoring with Raspberry PI and Lametric</title>
		<link>https://blog.developpez.com/mikedavem/p13204/sql-server-2005/extending-sql-server-monitoring-with-raspberry-pi-and-lametric</link>
		<comments>https://blog.developpez.com/mikedavem/p13204/sql-server-2005/extending-sql-server-monitoring-with-raspberry-pi-and-lametric#comments</comments>
		<pubDate>Thu, 07 Jan 2021 21:59:25 +0000</pubDate>
		<dc:creator><![CDATA[mikedavem]]></dc:creator>
				<category><![CDATA[DevOps]]></category>
		<category><![CDATA[Docker]]></category>
		<category><![CDATA[K8s]]></category>
		<category><![CDATA[SQL Azure]]></category>
		<category><![CDATA[SQL Server 2005]]></category>
		<category><![CDATA[SQL Server 2008]]></category>
		<category><![CDATA[SQL Server 2008 R2]]></category>
		<category><![CDATA[SQL Server 2014]]></category>
		<category><![CDATA[SQL Server 2016]]></category>
		<category><![CDATA[SQL Server 2017]]></category>
		<category><![CDATA[SQL Server 2019]]></category>
		<category><![CDATA[Lametric]]></category>
		<category><![CDATA[monitoring]]></category>
		<category><![CDATA[Powershell]]></category>
		<category><![CDATA[Raspberry]]></category>
		<category><![CDATA[sqlserver]]></category>

		<guid isPermaLink="false">http://blog.developpez.com/mikedavem/?p=1742</guid>
		<description><![CDATA[First blog of this new year 2021 and I will start with a fancy and How-To Geek topic In my last blog post, I discussed about monitoring and how it should help to address quickly a situation that is going &#8230; <a href="https://blog.developpez.com/mikedavem/p13204/sql-server-2005/extending-sql-server-monitoring-with-raspberry-pi-and-lametric">Lire la suite <span class="meta-nav">&#8594;</span></a>]]></description>
				<content:encoded><![CDATA[<p>First blog post of this new year 2021, and I will start with a fancy How-To Geek topic. </p>
<p>In my <a href="https://blog.developpez.com/mikedavem/p13203/sql-server-2014/why-we-moved-sql-server-monitoring-on-prometheus-and-grafana" rel="noopener" target="_blank">last blog post</a>, I discussed monitoring and how it should help address quickly a situation that is degrading. Alerts are probably the first way to catch your attention and, in my case, they often take the form of emails in a dedicated folder. That remains a good thing, at least if you’re not focused for too long on other daily tasks or projects. At the office, I know I would probably notice new alerts faster but, as I said previously, teleworking definitely changed the game.  </p>
<p><span id="more-1742"></span></p>
<p>I wanted to find a way to address this concern, at least for the main SQL Server critical alerts, and I thought about relying on my existing home lab infrastructure to do so. It is always a good opportunity to learn something and to improve my skills on a real case scenario. </p>
<p>My home lab infrastructure includes a cluster of <a href="https://www.raspberrypi.org/products/raspberry-pi-4-model-b/" rel="noopener" target="_blank">Raspberry PI 4</a> nodes. I initially used it to improve my skills on K8s or to study some IoT stuff, for instance. It is a good candidate for developing and deploying a new app that detects new incoming alerts in my mailbox and sends notifications to my Lametric accordingly. </p>
<p><a href="https://lametric.com/" rel="noopener" target="_blank">Lametric</a> is a basically a connected clock but works also as a highly-visible display showing notifications from devices or apps via REST APIs. First time I saw such device in action was in a DevOps meetup in 2018 around Docker and Jenkins deployment with <a href="https://www.linkedin.com/in/duquesnoyeric/" rel="noopener" target="_blank">Eric Dusquenoy</a> and Tim Izzo (<a href="https://twitter.com/5ika_" rel="noopener" target="_blank">@5ika_</a>). In addition, one of my previous customers had also one in his office and we had some discussions about cool customization through Lametric apps. </p>
<p>Connecting through a VPN to my company network is mandatory to work from home and, unfortunately, the Lametric device doesn’t support this scenario because communication is limited to the local network only. So, I need an app that runs on my local (home) network and is able to connect to my mailbox, fetch new incoming emails and finally send notifications to my Lametric device. </p>
<p>Here is my setup:</p>
<p><a href="http://blog.developpez.com/mikedavem/files/2021/01/171-0-lametric_infra.jpg"><img src="http://blog.developpez.com/mikedavem/files/2021/01/171-0-lametric_infra-1024x711.jpg" alt="171 - 0 - lametric_infra" width="584" height="405" class="alignnone size-large wp-image-1743" /></a></p>
<p>There are plenty of good blog posts about creating a Raspberry cluster on the internet and I would suggest reading <a href="https://dbafromthecold.com/2020/11/30/building-a-raspberry-pi-cluster-to-run-azure-sql-edge-on-kubernetes/" rel="noopener" target="_blank">the one</a> by Andrew Pruski (<a href="https://twitter.com/dbafromthecold" rel="noopener" target="_blank">@dbafromthecold</a>). </p>
<p>As shown above, there are different paths for SQL alerts across our infrastructure (on-prem and Azure SQL databases) but all of them are sent to a dedicated DBA distribution list. </p>
<p>The app is a simple PowerShell script that relies on the Exchange Web Services APIs to connect to the mailbox and get new mails. Sending notifications to my Lametric device is achieved by a simple REST API call with a well-formatted body. Details can be found in the <a href="https://lametric-documentation.readthedocs.io/en/latest/reference-docs/device-notifications.html" rel="noopener" target="_blank">Lametric documentation</a>. As a prerequisite, you need to create a notification app from the Lametric Developer site as follows:</p>
<p><a href="http://blog.developpez.com/mikedavem/files/2021/01/171-3-lametric-app-token.jpg"><img src="http://blog.developpez.com/mikedavem/files/2021/01/171-3-lametric-app-token-1024x364.jpg" alt="171 - 3 - lametric app token" width="584" height="208" class="alignnone size-large wp-image-1744" /></a></p>
<p>As said previously, I used PowerShell for this app. It helps finding documentation and tutorials when it comes to Microsoft products. But if you are more comfortable with Python, APIs are also available in a <a href="https://pypi.org/project/py-ews/" rel="noopener" target="_blank">dedicated package</a>. Let’s also point out that using PowerShell doesn’t necessarily mean using a Windows-based container: instead, I relied on a Linux-based image with PowerShell Core for the ARM architecture, provided by Microsoft on <a href="https://hub.docker.com/_/microsoft-powershell" rel="noopener" target="_blank">Docker Hub</a>. Finally, sensitive information like the Lametric token or mailbox credentials is stored in a K8s secret for security reasons. My app project is available on my <a href="https://github.com/mikedavem/lametric" rel="noopener" target="_blank">GitHub</a>. Feel free to use it.</p>
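<p>To give a flavour of the notification part, here is a minimal sketch of the REST call described above, based on the Lametric device notifications documentation; the device IP, the API key, the icon id and the alert text are placeholders to adapt to your own device, and the real script wraps this call in the mailbox polling loop:</p>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap"># Minimal sketch: push one notification to a Lametric device on the local network<br />
# Device IP, API key and icon id are placeholders - adapt them to your device<br />
$DeviceIp = '192.168.1.50'<br />
$ApiKey = 'your-device-api-key'<br />
<br />
# The local device API uses Basic authentication with the fixed user name 'dev'<br />
$AuthValue = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes("dev:$ApiKey"))<br />
<br />
# Notification body: one frame with an icon and the alert text<br />
$Body = @{<br />
&nbsp; &nbsp; priority = 'critical'<br />
&nbsp; &nbsp; model = @{ frames = @( @{ icon = 'i555'; text = 'SQL alert: AG replica not healthy' } ) }<br />
} | ConvertTo-Json -Depth 5<br />
<br />
Invoke-RestMethod -Method Post -Uri "http://${DeviceIp}:8080/api/v2/device/notifications" `<br />
&nbsp; &nbsp; -Headers @{ Authorization = "Basic $AuthValue" } `<br />
&nbsp; &nbsp; -ContentType 'application/json' -Body $Body</div></div>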
<p>Here are some results:</p>
<p>&#8211; After deploying my pod:</p>
<p><a href="http://blog.developpez.com/mikedavem/files/2021/01/171-1-lametric-pod.jpg"><img src="http://blog.developpez.com/mikedavem/files/2021/01/171-1-lametric-pod.jpg" alt="171 - 1 - lametric pod" width="483" height="82" class="alignnone size-full wp-image-1745" /></a></p>
<p>&#8211; The app is running and checking new incoming emails (kubectl logs command)</p>
<p><a href="http://blog.developpez.com/mikedavem/files/2021/01/171-2-lametric-pod-logs.jpg"><img src="http://blog.developpez.com/mikedavem/files/2021/01/171-2-lametric-pod-logs.jpg" alt="171 - 2 - lametric pod logs" width="828" height="438" class="alignnone size-full wp-image-1747" /></a></p>
<p>When an email is detected, a <a href="https://youtu.be/EcdSFziNc3U" title="Notification" rel="noopener" target="_blank">notification</a> is sent to the Lametric device accordingly.</p>
<p>Geeky fun and a good (bad?) idea to start this new year 2021 <img src="https://blog.developpez.com/mikedavem/wp-includes/images/smilies/icon_smile.gif" alt=":-)" class="wp-smiley" /></p>
]]></content:encoded>
			<wfw:commentRss></wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Why we moved SQL Server monitoring on Prometheus and Grafana</title>
		<link>https://blog.developpez.com/mikedavem/p13203/sql-server-2014/why-we-moved-sql-server-monitoring-on-prometheus-and-grafana</link>
		<comments>https://blog.developpez.com/mikedavem/p13203/sql-server-2014/why-we-moved-sql-server-monitoring-on-prometheus-and-grafana#comments</comments>
		<pubDate>Tue, 22 Dec 2020 16:55:12 +0000</pubDate>
		<dc:creator><![CDATA[mikedavem]]></dc:creator>
				<category><![CDATA[DevOps]]></category>
		<category><![CDATA[Performance]]></category>
		<category><![CDATA[SQL Server 2014]]></category>
		<category><![CDATA[SQL Server 2016]]></category>
		<category><![CDATA[SQL Server 2017]]></category>
		<category><![CDATA[SQL Server 2019]]></category>
		<category><![CDATA[Continuous Delivery]]></category>
		<category><![CDATA[database]]></category>
		<category><![CDATA[devops]]></category>
		<category><![CDATA[grafana]]></category>
		<category><![CDATA[monitoring]]></category>
		<category><![CDATA[observability]]></category>
		<category><![CDATA[prometheus]]></category>
		<category><![CDATA[RED]]></category>
		<category><![CDATA[sqlserver]]></category>
		<category><![CDATA[telegraf]]></category>
		<category><![CDATA[USE]]></category>

		<guid isPermaLink="false">http://blog.developpez.com/mikedavem/?p=1722</guid>
		<description><![CDATA[During this year, I spent a part of my job on understanding the processes and concepts around monitoring in my company. The DevOps mindset mainly drove the idea to move our SQL Server monitoring to the existing Prometheus and Grafana &#8230; <a href="https://blog.developpez.com/mikedavem/p13203/sql-server-2014/why-we-moved-sql-server-monitoring-on-prometheus-and-grafana">Lire la suite <span class="meta-nav">&#8594;</span></a>]]></description>
				<content:encoded><![CDATA[<p>During this year, I spent a part of my job on understanding the processes and concepts around monitoring in my company. The DevOps mindset mainly drove the idea to move our SQL Server monitoring to the existing Prometheus and Grafana infrastructure. Obviously, there were some technical decisions behind the scenes, but the most important part of this write-up is dedicated to explaining the other, and likely more important, reasons for this decision. </p>
<p><span id="more-1722"></span></p>
<p>But let’s first be clear: this write-up doesn’t constitute guidance or any kind of best practice for DBAs, it is only a sharing of my own experience on the topic. As usual, any comment will be appreciated.</p>
<p>That said, let’s continue with the context. At the beginning of this year, I started my new DBA position in a customer-centric company where DevOps culture, microservices and CI/CD are omnipresent. What does it mean exactly? To cut the story short, development and operation teams use a common approach for agile software development and delivery. Tools and processes are used to automate build, test and deployment, and to monitor applications with speed, quality and control. In other words, we are talking about Continuous Delivery and, in my company, the release cycle is faster than in the traditional shops I encountered so far, with several releases per day including database changes. Another interesting point is that we follow the &laquo;&nbsp;Operate what you build&nbsp;&raquo; principle: each team that develops a service is also responsible for operating and supporting it. It presents some advantages for both developers and operations, but pushing out changes requires getting feedback and observing the impact on the system on both sides. </p>
<p>In addition, on the operations side we try to act as a centralized team and each member should understand the global scope and the topics related to the infrastructure and its ecosystem. This is especially true when you&rsquo;re dealing with nightly on-calls. Each member has their own area of responsibility (their specialized domain) but, following DevOps principles, we encourage shared ownership to break down internal silos and optimize feedback and learning. It implies anyone should be able to temporarily take over any operational task to some extent, assuming the process is well documented and learning has been done correctly. But the world is not perfect and this model has its downsides. For example, it prioritizes effectiveness in broader domains, which increases the cognitive load of each team member and lowers visibility on vertical topics where deeper expertise is sometimes required. Having an end-to-end observable system, including the infrastructure layer and databases, may help to reduce the time spent investigating and fixing issues before end users experience them. </p>
<p><strong>The initial scenario</strong></p>
<p>Let me give some background info and illustration of the initial scenario:</p>
<p><a href="http://blog.developpez.com/mikedavem/files/2020/12/170-0-initial-scenario.jpg"><img src="http://blog.developpez.com/mikedavem/files/2020/12/170-0-initial-scenario-1024x704.jpg" alt="170 - 0 - initial scenario" width="584" height="402" class="alignnone size-large wp-image-1725" /></a></p>
<p>… and my feeling of what could be improved:</p>
<p>1) From a DBA perspective, at first glance there are many potential issues. Indeed, a lot of automated or semi-manual deployment processes are out of our control and may have a direct impact on the stability of the database environment. Without better visibility, there is likely no easy way to address the famous question: Hey, we have been experiencing performance degradation for two days, has something happened on the database side?  </p>
<p>2) Silos are encouraged between DBAs and DEVs in this scenario. The direct consequence is to drastically limit the added value of the DBA role in a DevOps context. Obviously, primary concerns include production tasks like ensuring integrity, backups and maintenance of databases. But in a DevOps-oriented company where we have automated &laquo;&nbsp;database-as-code&nbsp;&raquo; pipelines, there remains a lot of unnecessary complexity and disruptive scripts the DBA should take care of. If this role is placed only at the end of the delivery pipeline, collaboration and continuous learning with developer teams will be restricted to a minimum.  </p>
<p>3) There is a dedicated monitoring tool for the SQL Server infrastructure and this is a good point. It provides the necessary baselining and performance insights for DBAs. On the other hand, the tool in place targets only DBA profiles and its usage is limited to the infrastructure team. This doesn’t help improve scalability in the operations team and beyond. Another issue with the existing tooling is that correlation can be difficult with external events coming from either the continuous delivery pipeline or configuration changes performed by operations teams on the SQL Server instances. In this case, establishing observability (the why) may be limited, and this is exactly what teams need to respond quickly and resolve emergencies in modern, distributed software.</p>
<p><strong>What is observability?</strong></p>
<p>You probably noticed the word &laquo;&nbsp;observability&nbsp;&raquo; in my previous sentence, so I think it deserves some explanation before continuing. Observability might seem like a buzzword but it is not a new concept; it simply became prominent with DevOps software development lifecycle (SDLC) methodologies and distributed infrastructure systems. Referring to the <a href="https://en.wikipedia.org/wiki/Observability" rel="noopener" target="_blank">Wikipedia</a> definition, <strong>Observability is the ability to infer internal states of a system based on the system’s external outputs</strong>. To be honest, it has not helped me very much and further reading was necessary to shed light on what observability exactly is and how it differs from monitoring. </p>
<p>Let’s start instead with monitoring, which is the ability to translate infrastructure logs and metrics into meaningful and actionable insights. It helps you know when something goes wrong and start your response quickly. This is the basis of any monitoring tool and the existing one was doing a good job at it. In the DBA world, monitoring is often related to performance, but reporting performance is only as useful as that reporting accurately represents the internal state of the global system, and not only your database environment. For example, in the past I went to some customer shops where I was in charge of auditing their SQL Server infrastructure. Generally, customers were able to present their context, but they were not able to provide real facts or performance metrics about their application. In this case, you usually rely on a top-down approach and, if you’re either lucky or experienced enough, you manage to find what is going wrong. But sometimes I had relevant SQL Server metrics that would have highlighted a database performance issue, yet we couldn’t make a clear correlation with those identified on the application side. In this case, relying only on database performance metrics was not enough to infer the internal state of the application. From my experience, many shops deal with such applications that have been designed for success and not for failure: they often lack debuggability, and monitoring and telemetry are often missing. Collecting data is the base of observability.</p>
<p>Observability provides not only the when of an error or issue, but more importantly the why. With modern software architectures including microservices and the emphasis on DevOps, monitoring goals are no longer limited to collecting and processing log data, metrics and event traces. Instead, monitoring should be employed to improve observability by getting a better understanding of the properties of an application and its performance across distributed systems and the delivery pipeline. In the new context I&rsquo;m working in, metric capture and analysis starts with the deployment of each microservice and provides better observability by measuring all the work done across all dependencies.</p>
<p><strong>White-Box vs. Black-Box Monitoring </strong></p>
<p>In my company, as in many others, different approaches are used when it comes to monitoring: white-box and black-box monitoring.<br />
White-box monitoring focuses on exposing the internals of a system. For example, this approach is used by many SQL Server performance tools on the market, which strive to build a map of the system with a bunch of internal statistical data about index or internal cache usage, wait stats, locks and so on…</p>
<p>In contrast, black-box monitoring is symptom-oriented and tests externally visible behavior as a user would see it. The goal is only to monitor the system from the outside and to spot ongoing problems in it. There are many ways to achieve black-box monitoring: the first obvious one is using probes which collect CPU or memory usage, network communications, HTTP health checks or latency and so on… Another option is to use a set of integration tests that run all the time to test the system from a behavior / business perspective.</p>
<p>White-box vs. black-box monitoring: which one is more important in the end? Both are, and they can work together. In my company, both are used at different layers of the microservice architecture, including software and infrastructure components. </p>
<p><strong>RED vs USE monitoring</strong></p>
<p>When you’re working in a web-oriented and customer-centric company, you are quickly introduced to the Four Golden Signals monitoring concept, which defines a series of metrics originally from <a href="https://sre.google/sre-book/monitoring-distributed-systems/" rel="noopener" target="_blank">Google Site Reliability Engineering</a>: latency, traffic, errors and saturation. The RED method is a subset of the “Four Golden Signals”, focuses on microservice architectures and includes the following metrics:</p>
<ul>
<li>Rate: number of requests our service is serving per second</li>
<li>Error: number of failed requests per second </li>
<li>Duration: amount of time it takes to process a request</li>
</ul>
<p>Those metrics are relatively straightforward to understand and may reduce the time needed to figure out which service is throwing errors, before eventually looking at its logs or restarting it. </p>
<p>For HTTP metrics the RED method is a good fit, while the USE method is more suitable for the infrastructure side where the main concern is to keep physical resources under control. The latter is based on 3 metrics:</p>
<ul>
<li>Utilization: mainly expressed as a percentage, it indicates whether a resource is underloaded or overloaded.</li>
<li>Saturation: work queued and waiting to be processed</li>
<li>Errors: count of error events</li>
</ul>
<p>Those metrics are commonly used by DBAs to monitor performance. It is worth noting that the utilization metric can sometimes be misinterpreted, especially when the maximum value depends on the context and can go over 100%. </p>
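<p>To make these two methods a bit more concrete, here is a small, hedged sketch showing how a RED-style rate and a USE-style CPU utilization can be pulled from Prometheus through its HTTP query API. The Prometheus URL and the metric names (http_requests_total, node_cpu_seconds_total) are generic examples and not our actual telemetry:</p>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap"># Sketch only: query Prometheus for a RED-style rate and a USE-style utilization<br />
# The Prometheus URL and the metric names are illustrative examples<br />
$Prometheus = 'http://prometheus.local:9090'<br />
<br />
# RED - Rate: requests per second served over the last 5 minutes<br />
$RateQuery = 'sum(rate(http_requests_total[5m]))'<br />
<br />
# USE - Utilization: CPU busy percentage derived from the idle counter<br />
$CpuQuery = '100 - (avg(rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100)'<br />
<br />
foreach ($Query in $RateQuery, $CpuQuery) {<br />
&nbsp; &nbsp; # For a GET request, -Body is sent as query string parameters<br />
&nbsp; &nbsp; $Result = Invoke-RestMethod -Uri "$Prometheus/api/v1/query" -Method Get -Body @{ query = $Query }<br />
&nbsp; &nbsp; # Each result item carries the label set and a [timestamp, value] pair<br />
&nbsp; &nbsp; $Result.data.result | ForEach-Object { "$Query : $($_.value[1])" }<br />
}</div></div>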
<p><strong>SQL Server infrastructure monitoring expectations</strong></p>
<p>Referring to the starting scenario and all the concepts surfaced above, it was clear we had to evolve our existing SQL Server monitoring architecture to improve our ability to reach the following goals:</p>
<ul>
<li>Keep analyzing long-term trends to answer usual questions like: how is my daily workload evolving? How big is my database? …</li>
<li>Alert on something broken we need to fix now, or on an issue that is developing and that we must check soon.</li>
<li>Build comprehensive dashboards – dashboards should answer basic questions about our SQL Server instances, and should include some form of advanced SQL telemetry and logging for deeper analysis.</li>
<li>Conduct ad-hoc retrospective analysis with easier correlation: for example, an HTTP response latency that increased for one service. What happened around that time? Is it related to a database issue? Or a blocking issue on the SQL Server instance? Is it related to a new query or schema change deployed from the automated delivery pipeline? In other words, good observability should be part of the new solution.</li>
<li>Automated discovery and telemetry collection for every SQL Server instance installed in our environment, either on a VM or in a container.</li>
<li>Rely entirely on the common monitoring platform based on Prometheus and Grafana. Having the same tooling often makes communication between people easier (the human factor is also an important aspect of DevOps). </li>
</ul>
<p><strong>Prometheus, Grafana and Telegraf</strong></p>
<p>Prometheus and Grafana are the central monitoring solution for our microservice architecture. Others exist but we’ll focus on these tools in the context of this write-up.<br />
Prometheus is an open-source ecosystem for monitoring and alerting. It uses a multi-dimensional data model based on time series identified by a metric name and key/value pairs. PromQL is the query language used by Prometheus to aggregate data in real time, and data is directly exposed or consumed via an HTTP API by external systems like Grafana. Unlike the previous tooling, we appreciated collecting SQL Server metrics as well as those of the underlying infrastructure like VMware and others. It gives a comprehensive picture of the full path between the database services and the infrastructure components they rely on. </p>
<p>Grafana is open-source software used to display time series analytics. It allows us to query, visualize and generate alerts from our metrics. It is also possible to integrate a variety of data sources in addition to Prometheus, increasing the correlation and aggregation capabilities across metrics from different sources. Finally, Grafana comes with a native annotation store and the ability to add annotation events directly from the graph panel or via the HTTP API. This feature is especially useful to store annotations and tags related to external events, and we decided to use it for tracking software releases or SQL Server configuration changes. Having such events directly on the dashboard may reduce the troubleshooting effort by answering the why of an issue faster.  </p>
<p>For collecting data we use the <a href="https://github.com/influxdata/telegraf/tree/master/plugins/inputs/sqlserver" rel="noopener" target="_blank">Telegraf plugin</a> for SQL Server. The plugin exposes all configured metrics to be scraped by a Prometheus server. It can be used for both on-prem and Azure instances, including Azure SQL DB and Azure SQL MI. Automated deployment and configuration require little effort as well. </p>
<p>The high-level overview of the new implemented monitoring solution is as follows:</p>
<p><a href="http://blog.developpez.com/mikedavem/files/2020/12/170-3-monitoring-architecture.jpg"><img src="http://blog.developpez.com/mikedavem/files/2020/12/170-3-monitoring-architecture-1024x776.jpg" alt="170 - 3 - monitoring architecture" width="584" height="443" class="alignnone size-large wp-image-1729" /></a></p>
<p>SQL Server telemetry is achieved through Telegraf + Prometheus and includes both black-box and white-box oriented metrics. External events like automated deployments and server-level or database-level configuration changes are monitored through a centralized, scheduled framework based on PowerShell. Annotations and tags are then written to Grafana accordingly, and event details are recorded in logging tables for further troubleshooting.</p>
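<p>As an illustration of that last point, here is a minimal sketch of what posting such an event to the Grafana annotations HTTP API can look like from PowerShell; the Grafana URL, the API key and the tag and text values are placeholders, and the real framework obviously adds the event detection and the logging part around it:</p>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap"># Sketch: push a release / configuration-change event as a Grafana annotation<br />
# Grafana URL, API key, tags and text are placeholders for the real framework<br />
$GrafanaUrl = 'https://grafana.local'<br />
$ApiKey = 'your-grafana-api-key'<br />
<br />
$Annotation = @{<br />
&nbsp; &nbsp; # Grafana expects the event time as Unix epoch in milliseconds<br />
&nbsp; &nbsp; time = [DateTimeOffset]::UtcNow.ToUnixTimeMilliseconds()<br />
&nbsp; &nbsp; tags = @('sqlserver', 'config-change', 'SRV01')<br />
&nbsp; &nbsp; text = 'sp_configure: max degree of parallelism changed from 0 to 8'<br />
} | ConvertTo-Json<br />
<br />
Invoke-RestMethod -Method Post -Uri "$GrafanaUrl/api/annotations" `<br />
&nbsp; &nbsp; -Headers @{ Authorization = "Bearer $ApiKey" } `<br />
&nbsp; &nbsp; -ContentType 'application/json' -Body $Annotation</div></div>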
<p><strong>Did the new monitoring meet our expectations?</strong></p>
<p>Well, having experienced the new monitoring solution during this year, I would say we are on a good track. We worked mainly on 2 dashboards. The first one exposes basic black-box metrics to show quickly if something is going wrong, while the second one is DBA-oriented with plenty of internal counters to dig further and perform retrospective analysis.</p>
<p>Here is a sample of representative issues we faced this year and managed to fix with the new monitoring solution:</p>
<p>1) Resource pressure and black-box monitoring in action:</p>
<p><a href="http://blog.developpez.com/mikedavem/files/2020/12/170-4-grafana-1.jpg"><img src="http://blog.developpez.com/mikedavem/files/2020/12/170-4-grafana-1-1024x168.jpg" alt="170 - 4 - grafana 1" width="584" height="96" class="alignnone size-large wp-image-1730" /></a></p>
<p>For this scenario, the first dashboard highlighted resource pressure issues, but it is worth noting that even though the infrastructure was burning, users didn’t experience any side effects or slowness on the application side. After the corresponding alerts were raised on our side, we applied proactive and temporary fixes before users could experience them. I would say this scenario is something we would have been able to manage with the previous monitoring, and the good news is we didn’t notice any regression on this topic. </p>
<p>2) Better observability for better resolution of complex issue</p>
<p>This scenario was more interesting because the first symptom started from the application side without any alert at the infrastructure layer. We started suffering from HTTP request slowness one day in November around 12:00 and developers got alerted by sporadic timeout issues from the logging system. After traversing the service graph, they spotted that something had gone wrong on the database service by correlating HTTP slowness with blocked processes on the SQL Server dashboard, as shown below. I show a simplified view of the dashboards, but we had to cross several routes between the front-end services and the databases.</p>
<p><a href="http://blog.developpez.com/mikedavem/files/2020/12/170-6-grafana-3.jpg"><img src="http://blog.developpez.com/mikedavem/files/2020/12/170-6-grafana-3-1024x235.jpg" alt="170 - 6 - grafana 3" width="584" height="134" class="alignnone size-large wp-image-1732" /></a></p>
<p>Then I got a call from them and we started investigating blocking processes from the logging tables in place on the SQL Server side. At first glance, different queries had a longer execution time than usual, and neither release deployments nor configuration updates could explain such a sudden behavior change. The issue stuck around and at 15:42 it started appearing frequently enough to deserve a deeper look at the SQL Server internal metrics. We quickly found some interesting correlations with other metrics and finally managed to figure out why things went wrong, as shown below:</p>
<p><a href="http://blog.developpez.com/mikedavem/files/2020/12/170-7-grafana-4.jpg"><img src="http://blog.developpez.com/mikedavem/files/2020/12/170-7-grafana-4-706x1024.jpg" alt="170 - 7 - grafana 4" width="584" height="847" class="alignnone size-large wp-image-1733" /></a></p>
<p>The root cause was related to transaction replication slowness within the Always On availability group databases, and we jumped directly to a storage issue according to the error log details on the secondary: </p>
<p><a href="http://blog.developpez.com/mikedavem/files/2020/12/170-8-errorlog.jpg"><img src="http://blog.developpez.com/mikedavem/files/2020/12/170-8-errorlog-1024x206.jpg" alt="170 - 8 - errorlog" width="584" height="117" class="alignnone size-large wp-image-1734" /></a></p>
<p>End-to-end observability, by including the database services in the new monitoring system, drastically reduced the time needed to find the root cause. But we also learnt from this experience and, to continuously improve observability, we added a black-box oriented metric related to availability group replication latency (see below) to detect any potential issue faster.</p>
<p><a href="http://blog.developpez.com/mikedavem/files/2020/12/170-9-avg-replication-metric.jpg"><img src="http://blog.developpez.com/mikedavem/files/2020/12/170-9-avg-replication-metric.jpg" alt="170 - 9 - avg replication metric" width="160" height="113" class="alignnone size-full wp-image-1736" /></a></p>
<p><strong>And what’s next? </strong></p>
<p>Having such monitoring is not the end of this story. As said at the beginning of this write-up, continuous delivery comes with its own DBA challenges, illustrated by the starting scenario. Traditionally the DBA role is siloed, turning requests or tickets into work, and DBAs can lack context about the broader business or the technology used in the company. I experienced myself several situations where you get alerted during the night because a developer’s query exceeds some usage threshold. Having discussed the point with many DBAs, they tend to be conservative about database changes (a normal reaction?), especially when you are at the end of the delivery process without a clear view of what exactly will be deployed. </p>
<p>Here is the new situation:</p>
<p><a href="http://blog.developpez.com/mikedavem/files/2020/12/170-2-new-scenario.jpg"><img src="http://blog.developpez.com/mikedavem/files/2020/12/170-2-new-scenario-1024x641.jpg" alt="170 - 2 - new scenario" width="584" height="366" class="alignnone size-large wp-image-1737" /></a></p>
<p>Implementing the new monitoring stack changed the way we observe the system (at least from a DBA perspective). Again, I believe the added value of the DBA role in a company with a strong DevOps mindset lies in being part of both production DBA and development DBA work. Making observability consistent across the whole delivery pipeline, including databases, is likely part of the success and may help the DBA get a broader picture of the system components. In my context, I’m now able to interact more with developer teams in early phases and to provide them contextual feedback (and not generic feedback) for improvements regarding SQL production telemetry. They also have access to it and can check by themselves the impact of their developments. In the same way, feedback and work with my team around database infrastructure topics feel more relevant. </p>
<p>In the end, it is a matter of collaboration. </p>
]]></content:encoded>
			<wfw:commentRss></wfw:commentRss>
		<slash:comments>5</slash:comments>
		</item>
		<item>
		<title>Interesting use case of using dummy columnstore indexes and temp tables</title>
		<link>https://blog.developpez.com/mikedavem/p13202/sql-server-vnext/interesting-use-case-of-using-dummy-columnstore-indexes-and-temp-tables</link>
		<comments>https://blog.developpez.com/mikedavem/p13202/sql-server-vnext/interesting-use-case-of-using-dummy-columnstore-indexes-and-temp-tables#comments</comments>
		<pubDate>Fri, 20 Nov 2020 17:06:06 +0000</pubDate>
		<dc:creator><![CDATA[mikedavem]]></dc:creator>
				<category><![CDATA[Performance]]></category>
		<category><![CDATA[SQL Server 2017]]></category>
		<category><![CDATA[SQL Server 2019]]></category>
		<category><![CDATA[batch mode]]></category>
		<category><![CDATA[columnstore]]></category>
		<category><![CDATA[inline index]]></category>
		<category><![CDATA[operation analytics]]></category>
		<category><![CDATA[reporting]]></category>

		<guid isPermaLink="false">http://blog.developpez.com/mikedavem/?p=1710</guid>
		<description><![CDATA[Columnstore indexes are a very nice feature and well-suited for analytics queries. Using them for our datawarehouse helped to accelerate some big ETL processing and to reduce resource footprint such as CPU, IO and memory as well. In addition, SQL &#8230; <a href="https://blog.developpez.com/mikedavem/p13202/sql-server-vnext/interesting-use-case-of-using-dummy-columnstore-indexes-and-temp-tables">Lire la suite <span class="meta-nav">&#8594;</span></a>]]></description>
				<content:encoded><![CDATA[<p>Columnstore indexes are a very nice feature and well suited for analytics queries. Using them for our data warehouse helped accelerate some big ETL processing and reduce the resource footprint in terms of CPU, IO and memory as well. In addition, SQL Server 2016 took the columnstore index to a new level by allowing a fully updateable non-clustered columnstore index on a rowstore table, making combined operational and analytics workloads possible. Non-clustered columnstore indexes are a different beast to manage with an OLTP workload and we have had both good and bad experiences with them. In this blog post, let’s talk about the good effects and an interesting case where we used them to reduce the CPU consumption of a big reporting query.</p>
<p><span id="more-1710"></span></p>
<p>In fact, the concerned query follows a common T-SQL anti-pattern for performance: complex layers of nested views and CTEs, an interesting mix with good chances of preventing a clean execution plan. The SQL optimizer gets tricked easily in this case. So, for illustration, let’s start with the following query pattern:</p>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;height:450px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap">;WITH CTE1 AS (<br />
&nbsp; &nbsp; SELECT col ..., SUM(col2), ...<br />
&nbsp; &nbsp; FROM [VIEW]<br />
&nbsp; &nbsp; GROUP BY col ...<br />
),<br />
CTE2 AS (<br />
&nbsp; &nbsp; SELECT col ..., ROW_NUMBER() <br />
&nbsp; &nbsp; FROM (<br />
&nbsp; &nbsp; &nbsp; &nbsp; SELECT col ...<br />
&nbsp; &nbsp; &nbsp; &nbsp; JOIN CTE1 ON ...<br />
&nbsp; &nbsp; &nbsp; &nbsp; JOIN [VIEW2] ON ...<br />
&nbsp; &nbsp; &nbsp; &nbsp; JOIN [TABLE] ON ...<br />
&nbsp; &nbsp; ) AS VT <br />
),<br />
CTE3 AS (<br />
&nbsp; &nbsp; SELECT col ...<br />
&nbsp; &nbsp; FROM [VIEW]<br />
&nbsp; &nbsp; JOIN [VIEW4] ON ...<br />
)<br />
...<br />
SELECT col ...<br />
FROM (<br />
&nbsp; &nbsp; SELECT <br />
&nbsp; &nbsp; &nbsp; &nbsp; col,<br />
&nbsp; &nbsp; &nbsp; &nbsp; STUFF((SELECT ', ' + col <br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;FROM CTE2 <br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;WHERE CTE2.ID = CTE1.ID<br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;FOR XML PATH('')), 1, 1, '') AS colconcat, &nbsp; <br />
&nbsp; &nbsp; &nbsp; &nbsp; ...<br />
&nbsp; &nbsp; FROM (<br />
&nbsp; &nbsp; &nbsp; &nbsp; SELECT col ...<br />
&nbsp; &nbsp; &nbsp; &nbsp; FROM CTE1<br />
&nbsp; &nbsp; &nbsp; &nbsp; LEFT JOIN CTE2 ON ... &nbsp;<br />
&nbsp; &nbsp; &nbsp; &nbsp; LEFT JOIN (<br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; SELECT col <br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; FROM CTE3<br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; GROUP BY col<br />
&nbsp; &nbsp; &nbsp; &nbsp; ) AS T1 ON ...<br />
&nbsp; &nbsp; ) AS T2 <br />
&nbsp; &nbsp; GROUP BY col ...<br />
)</div></div>
<p>Sometimes splitting a big query into small pieces and storing pre-aggregations in temporary tables may help. This is what was done here and it led to some good effects, with a global reduction of the query execution time.</p>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;height:450px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap">CREATE TABLE #T1 ...<br />
CREATE TABLE #T2 ...<br />
CREATE TABLE #T3 ...<br />
<br />
<br />
;WITH CTE1 AS (<br />
&nbsp; &nbsp; SELECT col ..., SUM(col2), ...<br />
&nbsp; &nbsp; FROM [VIEW]<br />
&nbsp; &nbsp; GROUP BY col ...<br />
)<br />
INSERT INTO #T1 ...<br />
SELECT col FROM CTE1 ...<br />
;<br />
<br />
WITH CTE2 AS (<br />
&nbsp; &nbsp; SELECT col ..., ROW_NUMBER() <br />
&nbsp; &nbsp; FROM (<br />
&nbsp; &nbsp; &nbsp; &nbsp; SELECT col ...<br />
&nbsp; &nbsp; &nbsp; &nbsp; JOIN #T1 ON ...<br />
&nbsp; &nbsp; &nbsp; &nbsp; JOIN [VIEW2] ON ...<br />
&nbsp; &nbsp; &nbsp; &nbsp; JOIN [TABLE] ON ...<br />
&nbsp; &nbsp; ) AS VT <br />
)<br />
INSERT INTO #T2 ...<br />
SELECT col FROM CTE2 ...<br />
;<br />
<br />
WITH CTE3 AS (<br />
&nbsp; &nbsp; SELECT col ...<br />
&nbsp; &nbsp; FROM [VIEW]<br />
&nbsp; &nbsp; JOIN [VIEW4] ON ...<br />
)<br />
INSERT INTO #T3 ...<br />
SELECT col FROM CTE3 ...<br />
;<br />
<br />
<br />
SELECT col ...<br />
FROM (<br />
&nbsp; &nbsp; SELECT <br />
&nbsp; &nbsp; &nbsp; &nbsp; col,<br />
&nbsp; &nbsp; &nbsp; &nbsp; STUFF((SELECT ', ' + col <br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;FROM CTE2 <br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;WHERE CTE2.ID = CTE1.ID<br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;FOR XML PATH('')), 1, 1, '') AS colconcat, &nbsp; <br />
&nbsp; &nbsp; &nbsp; &nbsp; ...<br />
&nbsp; &nbsp; FROM (<br />
&nbsp; &nbsp; &nbsp; &nbsp; SELECT col ...<br />
&nbsp; &nbsp; &nbsp; &nbsp; FROM #T1<br />
&nbsp; &nbsp; &nbsp; &nbsp; LEFT JOIN #T2 ON ... &nbsp;<br />
&nbsp; &nbsp; &nbsp; &nbsp; LEFT JOIN (<br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; SELECT col <br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; FROM #T3<br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; GROUP BY col<br />
&nbsp; &nbsp; &nbsp; &nbsp; ) AS T1 ON ...<br />
&nbsp; &nbsp; ) AS T2 <br />
&nbsp; &nbsp; GROUP BY col ...<br />
)</div></div>
<p>However, it was not enough, and the query continued to consume a lot of CPU time as shown below:</p>
<p><a href="http://blog.developpez.com/mikedavem/files/2020/11/169-1-profiler-performance-current-e1605891229645.jpg"><img src="http://blog.developpez.com/mikedavem/files/2020/11/169-1-profiler-performance-current-e1605891229645.jpg" alt="169 - 1 - profiler performance current" width="800" height="65" class="alignnone size-full wp-image-1711" /></a></p>
<p>CPU time was around 20s per execution. CPU time is greater than the duration due to parallelism. Depending on the environment you’re in, you might say such CPU time can be common for reporting queries, and you’re probably right. But in my context, where all reporting queries are offloaded to a secondary availability group replica (SQL Server 2017), we wanted to keep the read-only CPU footprint as low as possible to guarantee a safety margin of CPU resources for scenarios where all the traffic (both R/W and R/O queries) is redirected to the primary replica (maintenance, failure and so on).  The concerned report is executed on demand by users and contributes greatly to high CPU spikes among other reporting queries, as shown below:</p>
<p><a href="http://blog.developpez.com/mikedavem/files/2020/11/169-2-grafana-CPU-current.jpg"><img src="http://blog.developpez.com/mikedavem/files/2020/11/169-2-grafana-CPU-current.jpg" alt="169 - 2 - grafana CPU current" width="406" height="204" class="alignnone size-full wp-image-1712" /></a></p>
<p>Testing this query on the DEV environment gave the following execution statistics:</p>
<p>SQL Server Execution Times:<br />
   <strong>CPU time = 12988 ms,  elapsed time = 6084 ms.</strong><br />
SQL Server parse and compile time:<br />
   CPU time = 0 ms, elapsed time = 0 ms.</p>
<p>… with the related (actual, not estimated) execution plan. In fact, I show only the final SELECT step because it was the main culprit of the high CPU consumption for this query (plan anonymized with SQL Sentry Plan Explorer):</p>
<p><a href="http://blog.developpez.com/mikedavem/files/2020/11/169-3-query-execution-plan-current.jpg"><img src="http://blog.developpez.com/mikedavem/files/2020/11/169-3-query-execution-plan-current-1024x149.jpg" alt="169 - 3 - query execution plan current" width="584" height="85" class="alignnone size-large wp-image-1713" /></a></p>
<p>The real content of the query doesn’t matter for this write-up, but you probably noticed I explicitly showed the concatenation with the XML PATH construct previously, and I identified the corresponding execution path in the query plan above. This point will be important in the last section of this write-up. </p>
<p>First, because CPU is my main concern, I only selected CPU cost, and you may notice the top consumers are the repartition streams and hash match operators, followed by the lazy spool used by the XML PATH correlated subquery. </p>
<p>Rewriting the query could be a good option, but we first tried to find some quick wins to avoid spending too much time on refactoring. Focusing on the different branches of this query plan and the operators engaged from right to left, we made the assumption that experimenting with <a href="https://techcommunity.microsoft.com/t5/sql-server/columnstore-index-performance-batchmode-execution/ba-p/385054" rel="noopener" target="_blank">batch mode</a> could help reduce the overall CPU time on the highlighted branch. But because we are not dealing with billions of rows within the temporary tables, we didn’t want the extra overhead of maintaining a compressed columnstore index structure. I remembered reading a very interesting <a href="https://www.itprotoday.com/sql-server/what-you-need-know-about-batch-mode-window-aggregate-operator-sql-server-2016-part-1" rel="noopener" target="_blank">article</a> in 2016 about the creation of dummy non-clustered columnstore indexes (NCCI) with filter capabilities to enable batch mode, and it seemed to fit our scenario perfectly. In parallel, we used inline index creation so as to neither trigger a recompilation of the batch statement nor prevent temp table caching. The target is to save CPU time <img src="https://blog.developpez.com/mikedavem/wp-includes/images/smilies/icon_smile.gif" alt=":)" class="wp-smiley" /></p>
<p>So, the temp table and inline non-clustered columnstore  index DDL was as follows:</p>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap">CREATE TABLE #T1 ( col ..., INDEX CCI_IDX_T1 NONCLUSTERED COLUMNSTORE (col) ) WHERE col &amp;lt; 1<br />
CREATE TABLE #T2 ( col ..., INDEX CCI_IDX_T2 NONCLUSTERED COLUMNSTORE (col) ) WHERE col &amp;lt; 1<br />
CREATE TABLE #T3 ( col ..., INDEX CCI_IDX_T3 NONCLUSTERED COLUMNSTORE (col) ) WHERE col &amp;lt; 1<br />
…</div></div>
<p>Note the WHERE clause here with an out-of-range value to create an empty NCCI. </p>
<p>After applying these changes, here are the new execution statistics we got:</p>
<p>SQL Server Execution Times:<br />
   <strong>CPU time = 2842 ms,  elapsed time = 6536 ms.</strong><br />
SQL Server parse and compile time:<br />
   CPU time = 0 ms, elapsed time = 0 ms.</p>
<p>&#8230; and the related execution plan:</p>
<p><a href="http://blog.developpez.com/mikedavem/files/2020/11/169-4-query-execution-plan-first-optimization.jpg"><img src="http://blog.developpez.com/mikedavem/files/2020/11/169-4-query-execution-plan-first-optimization-1024x277.jpg" alt="169 - 4 - query execution plan first optimization" width="584" height="158" class="alignnone size-large wp-image-1715" /></a></p>
<p>A big drop in CPU time (2.8s vs 12s) per execution once batch mode kicked in. Good news for sure, but something continued to draw my attention: even though batch mode came into play here, it was not propagated to the left and seemed to stop at the level of the XML PATH execution. After reading my <a href="http://www.nikoport.com/2018/10/12/batch-mode-part-4-some-of-the-limitations/" rel="noopener" target="_blank">preferred reference</a> on this topic (thank you Niko), I was able to confirm my suspicion that XML operations are not supported with batch mode. Unfortunately, I was out of luck trying to confirm it with the <strong>column_store_expression_filter_apply</strong> extended event, which didn’t seem to work for me. </p>
<p>Well, to allow the propagation of batch mode to the left side of the execution plan, it was necessary to rewrite the correlated subquery with XML PATH as a simple JOIN with the STRING_AGG() function – available since SQL Server 2017:</p>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;height:450px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap">-- Concat with XML PATH<br />
SELECT <br />
&nbsp; &nbsp; col,<br />
&nbsp; &nbsp; STUFF((SELECT ', ' + col <br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; FROM CTE2 <br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; WHERE CTE2.col = CTE1.col<br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; FOR XML PATH('')), 1, 1, '') AS colconcat,<br />
&nbsp; &nbsp; ...<br />
FROM [TABLE]<br />
<br />
-- Concat with STRING_AGG<br />
SELECT <br />
&nbsp; &nbsp; col,<br />
&nbsp; &nbsp; V.colconcat,<br />
&nbsp; &nbsp; ...<br />
FROM [TABLE] AS T<br />
JOIN (<br />
&nbsp; &nbsp; SELECT <br />
&nbsp; &nbsp; &nbsp; &nbsp; col,<br />
&nbsp; &nbsp; &nbsp; &nbsp; STRING_AGG(col2, ', ') AS colconcat<br />
&nbsp; &nbsp; FROM #T2 <br />
&nbsp; &nbsp; GROUP BY col<br />
) AS V ON V.col = T.col</div></div>
<p>The new change gave the following outcome:</p>
<p>SQL Server Execution Times:<br />
   <strong>CPU time = 2109 ms,  elapsed time = 1872 ms</strong>.</p>
<p>and new execution plan:</p>
<p><a href="http://blog.developpez.com/mikedavem/files/2020/11/169-5-query-execution-plan-2n-optimization.jpg"><img src="http://blog.developpez.com/mikedavem/files/2020/11/169-5-query-execution-plan-2n-optimization-1024x189.jpg" alt="169 - 5 - query execution plan 2n optimization" width="584" height="108" class="alignnone size-large wp-image-1719" /></a></p>
<p>First, batch mode is now propagated from the right to the left of the query execution plan because we eliminated all inhibitors, including the XML construct. We got no real CPU reduction this time, but we managed to reduce the global execution time. The hash match aggregate operator is the main CPU consumer and the main candidate to benefit from batch mode. All remaining operators on the left side process few rows, and my guess is that batch mode brings less there than for the main consumer. But anyway, note we also got rid of the lazy spool operator by replacing the XML PATH correlated subquery with the STRING_AGG() and JOIN construct.</p>
<p><a href="http://blog.developpez.com/mikedavem/files/2020/11/169-6-profiler-performance-optimization-e1605891681487.jpg"><img src="http://blog.developpez.com/mikedavem/files/2020/11/169-6-profiler-performance-optimization-1024x59.jpg" alt="169 - 6 - profiler performance optimization" width="584" height="34" class="alignnone size-large wp-image-1716" /></a></p>
<p>The new result is far better compared to the initial scenario (new CPU time: 3s vs old CPU time: 20s). It also had a good effect on the overall workload of the AG read-only replica:</p>
<p><a href="http://blog.developpez.com/mikedavem/files/2020/11/169-7-grafana-CPU-optimization-e1605891720209.jpg"><img src="http://blog.developpez.com/mikedavem/files/2020/11/169-7-grafana-CPU-optimization-1024x199.jpg" alt="169 - 7 - grafana CPU optimization" width="584" height="113" class="alignnone size-large wp-image-1717" /></a></p>
<p>Not so bad for a quick win!<br />
See you</p>
]]></content:encoded>
			<wfw:commentRss></wfw:commentRss>
		<slash:comments>1</slash:comments>
		</item>
		<item>
		<title>Building a more robust and efficient statistic maintenance with large tables</title>
		<link>https://blog.developpez.com/mikedavem/p13201/sql-server-vnext/building-a-more-robust-and-efficient-statistic-maintenance-with-large-tables</link>
		<comments>https://blog.developpez.com/mikedavem/p13201/sql-server-vnext/building-a-more-robust-and-efficient-statistic-maintenance-with-large-tables#comments</comments>
		<pubDate>Mon, 26 Oct 2020 21:05:34 +0000</pubDate>
		<dc:creator><![CDATA[mikedavem]]></dc:creator>
				<category><![CDATA[SQL Server 2017]]></category>
		<category><![CDATA[maintenance]]></category>
		<category><![CDATA[performance]]></category>
		<category><![CDATA[rebuild index]]></category>
		<category><![CDATA[update statistic]]></category>

		<guid isPermaLink="false">http://blog.developpez.com/mikedavem/?p=1689</guid>
		<description><![CDATA[In a past, I went to different ways for improving update statistic maintenance in different shops according to their context, requirement and constraints as well as the SQL Server version used at this moment. All are important inputs for creating &#8230; <a href="https://blog.developpez.com/mikedavem/p13201/sql-server-vnext/building-a-more-robust-and-efficient-statistic-maintenance-with-large-tables">Lire la suite <span class="meta-nav">&#8594;</span></a>]]></description>
				<content:encoded><![CDATA[<p>In the past, I went different ways to improve update statistics maintenance at different shops, according to their context, requirements and constraints as well as the SQL Server version in use at the time. All are important inputs for creating a good maintenance strategy, which can be as simple as running sp_updatestats or based on specialized scripts focusing on some tables.  </p>
<p><span id="more-1689"></span></p>
<p>One of my latest experiences on this topic was probably one of the best, although we went a circuitous way to deal with a long update statistics maintenance task on a large database. We used a mix of statistic analysis work and improvements provided by SQL Server 2014 SP1 CU6 and parallel update statistics capabilities. I wrote a <a href="https://blog.dbi-services.com/experiencing-updating-statistics-on-a-big-table-by-unusual-ways/" rel="noopener" target="_blank">blog post</a> if you are interested in learning more about this experience.</p>
<p>I’m now working for a new company, which means a different context … At the time of this write-up, we are running SQL Server 2017 CU21 and database sizes are in a different order of magnitude (more than 100GB compressed) compared to my previous experience. However, switching from the default sampling method to FULLSCAN for some large tables drastically increased the update statistics task beyond the allowed maintenance window (00:00AM to 03:00AM) without any optimization. </p>
<p><strong>Why change the update statistics sampling method? </strong></p>
<p>Let’s start from the beginning: why do we need to change the default statistics sample? This topic has already been covered in detail on the internet and, to make the story short, good statistics are part of the recipe for efficient execution plans and queries. The default sampling size used by both the auto update mechanism and the UPDATE STATISTICS command without any specification comes from a <a href="https://docs.microsoft.com/en-us/archive/blogs/srgolla/sql-server-statistics-explained" rel="noopener" target="_blank">non-linear algorithm</a> and may not produce a good histogram with large tables. Indeed, the sampling size decreases as the table gets bigger, leading to a rough picture of the values in the table which may affect cardinality estimation in execution plans … exactly the side effects we experienced with a couple of our queries and wanted to minimize in the future. Therefore, we decided to improve cardinality estimation by switching to the FULLSCAN method, only for some big tables, to produce better histograms. But this method also comes at the cost of a direct impact on consumed resources and execution time, because the engine needs to read more data to build a better picture of the data distribution, sometimes with a higher <a href="https://docs.microsoft.com/en-us/sql/t-sql/statements/update-statistics-transact-sql?redirectedfrom=MSDN&amp;view=sql-server-ver15" rel="noopener" target="_blank">tempdb usage</a>. Our first attempt on the ACC environment increased the update statistics maintenance task from initially 5min with the default sampling size to 3.5 hours with the FULLSCAN method and only for large tables … obviously an unsatisfactory solution because we were outside the allowed maintenance window. </p>
<p><strong>Context matters</strong></p>
<p>But first let’s set the context a little bit more: the term “large” can be relative depending on the environment. In my context, it means tables with more than 100M rows and less than 100GB in size for the biggest ones, and 10M rows and 10GB in size for the smaller ones. For partitioned tables, the total size includes the compressed archive partition. </p>
<p>Another interesting detail: the concerned databases are part of availability groups and maxdop on the primary replica was set to 1. There is a long story behind this value, with some side effects encountered in the past when switching to <strong>maxdop &gt; 1 and cost threshold for parallelism = 50</strong>. At certain times of the year, the workload increases a lot and we faced memory allocation issues with some parallel queries (parallel queries usually require more memory). This is something we need to investigate further, but we switched back to maxdop=1 for now and I would say so far so good …</p>
<p>Because our index structures are not heavily fragmented between two index rebuild operations, we’re not in favor of frequent rebuilds. Even if such an operation can either be done online or be resumable with SQL Server 2017 EE, it remains a very resource-intensive operation, including log block replication on the underlying Always On infrastructure. In addition, there is a strong commitment to minimizing resource overhead during the maintenance window because of the concurrent business workload in the same timeframe.  </p>
<p><strong>Options available to speed up the update statistics task</strong></p>
<p> <strong>Using MAXDOP / PERSIST_SAMPLE_PERCENT with UPDATE STATISTICS command</strong></p>
<p><a href="https://support.microsoft.com/en-us/help/4041809/kb4041809-update-adds-support-for-maxdop-for-create-statistics-and-upd" rel="noopener" target="_blank">KB4041809</a> describes new support added for MAXDOP option for the CREATE STATISTICS and UPDATE STATISTICS statements in Microsoft SQL Server 2014, 2016 and 2017. This is especially helpful to override MAXDOP settings defined at the server or database-scope level. As a reminder, maxdop value is forced to 1 in our context on availability group primary replicas. </p>
<p>For partitioned tables we don’t go through this setting because update statistics is done at the partition level (see next section).  The concerned tables have 2 partitions, respectively CURRENT and ARCHIVE. We keep the former small in size and with a relatively low number of rows (only the last 2 weeks of data). Therefore, there is no real benefit in using MAXDOP to force update statistics to run with parallelism in this case.</p>
<p>But non-partitioned large tables (&gt;= 10 GB) are good candidates. According to the following picture, we noticed an execution time reduction of 57% by increasing the maxdop value to 4 for some large tables with these specifications (a command sketch follows the picture):<br />
&#8211;	~= 10GB<br />
&#8211;	~ 11M rows<br />
&#8211;	112 columns<br />
&#8211;	71 statistics</p>
<p><a href="http://blog.developpez.com/mikedavem/files/2020/10/168-11-maxdop-nonpartitioned-tables.jpg"><img src="http://blog.developpez.com/mikedavem/files/2020/10/168-11-maxdop-nonpartitioned-tables.jpg" alt="168 - 11 - maxdop - nonpartitioned tables" width="481" height="289" class="alignnone size-full wp-image-1691" /></a></p>
<p>Another feature we went through is described in <a href="https://support.microsoft.com/en-us/help/4039284/kb4039284-enhancement-new-keyword-is-added-to-create-and-update-statis" rel="noopener" target="_blank">KB4039284</a> and has been available since SQL Server 2016+.  In our context, the maintenance of statistics relies on a custom stored procedure (not Ola&rsquo;s maintenance scripts yet); we have configured the default sampling rate method for all statistics and wanted to make an exception only for targeted large tables. In the past, we had to use the <a href="https://docs.microsoft.com/en-us/sql/t-sql/statements/update-statistics-transact-sql?view=sql-server-ver15" rel="noopener" target="_blank">NO_RECOMPUTE</a> option to exclude statistics from automatic updates. The new PERSIST_SAMPLE_PERCENT option tells SQL Server to lock the sampling rate for future update operations, and we are using it for non-partitioned large tables. </p>
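<p>Here is a minimal sketch of how this option can be combined with FULLSCAN and how the persisted rate can be checked afterwards (the table name is a placeholder; the persisted_sample_percent column of sys.dm_db_stats_properties is available on builds shipping this feature):</p>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap">-- FULLSCAN now, and keep the same sampling rate for future (auto or manual) updates<br />
UPDATE STATISTICS dbo.BigNonPartitionedTable<br />
WITH FULLSCAN, PERSIST_SAMPLE_PERCENT = ON;<br />
<br />
-- Check which statistics have a persisted sample rate<br />
SELECT OBJECT_NAME(s.object_id) AS table_name,<br />
&nbsp; &nbsp; s.name AS stat_name,<br />
&nbsp; &nbsp; sp.persisted_sample_percent<br />
FROM sys.stats AS s<br />
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp<br />
WHERE sp.persisted_sample_percent &gt; 0;</div></div>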
<p> <strong>Incremental statistics</strong></p>
<p>SQL Server 2017 provides interesting options to reduce maintenance overhead. Surprisingly, some large tables were already partitioned but no incremental statistics were configured. Incremental statistics are especially useful for tables where only a few partitions change at a time, and they are a great feature to improve the efficiency of statistics maintenance because operations are done at the partition level since SQL Server 2014. I wrote another <a href="https://blog.dbi-services.com/sql-server-2014-new-incremental-statistics/" rel="noopener" target="_blank">blog post</a> on this topic a couple of years ago, and this was a great opportunity to apply theoretical concepts to a practical use case. Because we had already implemented partition-level maintenance for indexes, it made sense to apply the same method to statistics, to minimize the overhead of the FULLSCAN method and to benefit from the statistics update threshold at the partition level. As said in the previous section, partitioned tables have 2 partitions, CURRENT (last 2 weeks) and ARCHIVE, and the goal was to update statistics only on the CURRENT partition on a daily basis. However, let’s mention that although statistic objects are maintained at the partition level, the SQL Server optimizer is not able to use them directly (no change from SQL Server 2014 to SQL Server 2019 as far as I know) and refers instead to the global statistic object.</p>
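<p>Before looking at how the optimizer actually uses these statistics, here is a minimal sketch of how incremental statistics can be enabled, either as a database default for auto-created statistics or on an existing statistic of a partitioned table (the database name is a placeholder, the statistic name comes from the demo below):</p>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap">-- Database-wide default: auto-created statistics on partitioned tables become incremental<br />
ALTER DATABASE [MyDatabase] SET AUTO_CREATE_STATISTICS ON (INCREMENTAL = ON);<br />
<br />
-- Recreate an existing statistic as incremental (per-partition) with a full scan<br />
UPDATE STATISTICS [dbo].[BIG TABLE] ([XXXX_OID])<br />
WITH FULLSCAN, INCREMENTAL = ON;</div></div>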
<p>Let’s demonstrate with the following example:</p>
<p>Let&rsquo;s consider BIG TABLE with 2 partitions for CURRENT (last 2 weeks) and ARCHIVE values as shown below:</p>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap">SELECT <br />
&nbsp; &nbsp; s.object_id,<br />
&nbsp; &nbsp; s.name AS stat_name,<br />
&nbsp; &nbsp; sp.rows,<br />
&nbsp; &nbsp; sp.rows_sampled,<br />
&nbsp; &nbsp; sp.node_id,<br />
&nbsp; &nbsp; sp.left_boundary,<br />
&nbsp; &nbsp; sp.right_boundary,<br />
&nbsp; &nbsp; sp.partition_number<br />
FROM sys.stats AS s<br />
CROSS APPLY sys.dm_db_stats_properties_internal(s.object_id, s.stats_id) AS sp<br />
WHERE s.object_id = OBJECT_ID('[dbo].[BIG TABLE]')<br />
AND s.name = 'XXXX_OID'</div></div>
<p><a href="http://blog.developpez.com/mikedavem/files/2020/10/168-2-Stats-Partition-e1603745274341.jpg"><img src="http://blog.developpez.com/mikedavem/files/2020/10/168-2-Stats-Partition-e1603745274341.jpg" alt="168 - 2 - Stats Partition" width="800" height="113" class="alignnone size-full wp-image-1692" /></a></p>
<p>The statistic object is incremental, and we get an internal picture of the per-partition statistics and the global one. You need to enable trace flag 2309 and add the node id reference to the DBCC SHOW_STATISTICS command as well.  Let’s dig into the ARCHIVE partition to find a specific value within the histogram steps:</p>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap">DBCC TRACEON ( 2309 );<br />
GO<br />
DBCC SHOW_STATISTICS('[dbo].[BIG TABLE]', 'XXX_OID', 7) WITH HISTOGRAM;</div></div>
<p><a href="http://blog.developpez.com/mikedavem/files/2020/10/168-3-histogram-partition-1.jpg"><img src="http://blog.developpez.com/mikedavem/files/2020/10/168-3-histogram-partition-1.jpg" alt="168 - 3 - histogram partition 1" width="825" height="157" class="alignnone size-full wp-image-1693" /></a></p>
<p>Then, I used the value 9246258 in the WHERE clause of the following query:</p>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap">SELECT *<br />
FROM dbo.[BIG TABLE]<br />
WHERE XXXX_OID = 9246258</div></div>
<p>It gives an estimated cardinality of 37.689 rows as shown below …</p>
<p><a href="http://blog.developpez.com/mikedavem/files/2020/10/168-4-query.jpg"><img src="http://blog.developpez.com/mikedavem/files/2020/10/168-4-query.jpg" alt="168 - 4 - query" width="614" height="186" class="alignnone size-full wp-image-1694" /></a></p>
<p>… Cardinality estimation is 37.689 while we should expect a value of 12 rows here referring to the statistic histogram above. Let’s now have a look at the global statistic (nodeid = 1):</p>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap">DBCC SHOW_STATISTICS('[dbo].[BIG TABLE]', 'XXX_OID', 1) WITH HISTOGRAM;</div></div>
<p><a href="http://blog.developpez.com/mikedavem/files/2020/10/168-5-histogram-partition-global.jpg"><img src="http://blog.developpez.com/mikedavem/files/2020/10/168-5-histogram-partition-global.jpg" alt="168 - 5 - histogram partition global" width="822" height="139" class="alignnone size-full wp-image-1695" /></a></p>
<p>In fact, the query optimizer estimates rows by using the AVG_RANGE_ROWS value between 9189129 and 9473685 in the global statistic. Well, it is likely not as perfect as we may expect. Incremental statistics do help in reducing the time taken to gather stats, but they may not be enough to represent the entire data distribution of the table – we are still limited to 200 steps in the global statistic object. Pragmatically, I think we may mitigate this point by saying things could be worse if we had either to use the default sample algorithm or to decrease the sample size of the update statistics operation. </p>
<p>Let’s illustrate with the BIG TABLE. To keep things simple, I have voluntarily chosen a (real) statistic where data is evenly distributed.  Here are some pictures of the real data distribution:</p>
<p>The first one is a simple view of MIN, MAX boundaries as well as AVG of occurrences (let’s say duplicate records for a better understanding) by distinct value:</p>
<p><a href="http://blog.developpez.com/mikedavem/files/2020/10/168-6-nb_occurences_per_value.jpg"><img src="http://blog.developpez.com/mikedavem/files/2020/10/168-6-nb_occurences_per_value.jpg" alt="168 - 6 - nb_occurences_per_value" width="457" height="104" class="alignnone size-full wp-image-1696" /></a></p>
<p>Referring to the picture above, we may notice there is no high variation in the number of occurrences per distinct value of the leading XXX_OID column in the related index. The picture below shows another representation of the data distribution, where each histogram bucket contains the number of distinct values per number of occurrences. </p>
<p><a href="http://blog.developpez.com/mikedavem/files/2020/10/168-10-histogram_per_nb-occurences.jpg"><img src="http://blog.developpez.com/mikedavem/files/2020/10/168-10-histogram_per_nb-occurences.jpg" alt="168 - 10 - histogram_per_nb occurences" width="481" height="289" class="alignnone size-full wp-image-1697" /></a></p>
<p>For example, roughly 2.3% of the distinct values in the BIG TABLE have 29 duplicate records. The same applies to values 28, 31 and so on … In short, this histogram confirms a certain degree of homogeneity in the data distribution and the avg_occurences value is not so far from the truth.</p>
<p>Let’s use the default sample value for UPDATE STATISTICS. A very low sample of rows is taken into account, leading to very approximate statistics as shown below:</p>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap">SELECT <br />
&nbsp; &nbsp; rows,<br />
&nbsp; &nbsp; rows_sampled,<br />
&nbsp; &nbsp; CAST(rows_sampled * 100. / rows AS DECIMAL(5,2)) AS [sample_%],<br />
&nbsp; &nbsp; steps<br />
FROM sys.dm_db_stats_properties(OBJECT_ID('[dbo].[BIG TABLE]'), 1)</div></div>
<p><a href="http://blog.developpez.com/mikedavem/files/2020/10/168-7-default_sample_value.jpg"><img src="http://blog.developpez.com/mikedavem/files/2020/10/168-7-default_sample_value.jpg" alt="168 - 7 - default_sample_value" width="421" height="56" class="alignnone size-full wp-image-1699" /></a></p>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap">SELECT *<br />
FROM sys.dm_db_stats_histogram(OBJECT_ID('[dbo].[BIG TABLE]'), 1)</div></div>
<p><a href="http://blog.developpez.com/mikedavem/files/2020/10/168-8-default_sample_histogram-e1603745718861.jpg"><img src="http://blog.developpez.com/mikedavem/files/2020/10/168-8-default_sample_histogram-e1603745718861.jpg" alt="168 - 8 - default_sample_histogram" width="800" height="218" class="alignnone size-full wp-image-1700" /></a></p>
<p>Focusing on the average_range_rows column values, we may notice the estimation is not representative of the real distribution in the BIG TABLE. </p>
<p>After running the UPDATE STATISTICS command with the FULLSCAN method, the story changes, and the estimation is now much closer to reality:</p>
<p><a href="http://blog.developpez.com/mikedavem/files/2020/10/168-9-fullscan_histogram-e1603745769635.jpg"><img src="http://blog.developpez.com/mikedavem/files/2020/10/168-9-fullscan_histogram-e1603745769635.jpg" alt="168 - 9 - fullscan_histogram" width="800" height="255" class="alignnone size-full wp-image-1701" /></a></p>
<p>As a side note, one additional benefit of using the FULLSCAN method is getting a representative statistic histogram in fewer steps. This is well explained in the SQL Tiger team&rsquo;s <a href="https://docs.microsoft.com/en-us/archive/blogs/sql_server_team/perfect-statistics-histogram-in-just-few-steps" rel="noopener" target="_blank">blog post</a> and we noticed this specific behavior with some statistic histograms where frequency is low … mainly primary key and unique index related statistics.</p>
<p><a href="http://blog.developpez.com/mikedavem/files/2020/10/168-1-statistic-histogram-before-after-e1603745894373.jpg"><img src="http://blog.developpez.com/mikedavem/files/2020/10/168-1-statistic-histogram-before-after-e1603745894373.jpg" alt="168 - 1 - statistic histogram before after" width="800" height="196" class="alignnone size-full wp-image-1702" /></a></p>
<p><strong>How beneficial were incremental statistics? </strong></p>
<p>The picture below refers to one of our biggest partitioned tables, with the following characteristics:<br />
&#8211;	~ 410M rows<br />
&#8211;	~ 63GB in size (including compressed partition size)<br />
&#8211;	67 columns<br />
&#8211;	30 statistics </p>
<p><a href="http://blog.developpez.com/mikedavem/files/2020/10/168-12-maxdop-partitioned-tables.jpg"><img src="http://blog.developpez.com/mikedavem/files/2020/10/168-12-maxdop-partitioned-tables.jpg" alt="168 - 12 - maxdop - partitioned tables" width="738" height="289" class="alignnone size-full wp-image-1703" /></a></p>
<p>As noticed in the picture above, overriding the maxdop setting at the database-scoped level resulted in an interesting drop in execution time when the FULLSCAN method is used (from 03h30 to 17s in the best case).<br />
Similarly, combining the efforts done for both non-partitioned and partitioned large tables reduced the execution time of the update statistics task from ~ 03h30 to 15min – 30min in production, which is a better fit with our requirements. </p>
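<p>For the daily run, only the CURRENT partition of the incremental statistic needs to be refreshed, which can be sketched as follows (the partition number is purely illustrative):</p>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap">-- Daily run: refresh only the CURRENT partition of the incremental statistic<br />
UPDATE STATISTICS [dbo].[BIG TABLE] ([XXXX_OID])<br />
WITH RESAMPLE ON PARTITIONS (2); -- 2 = CURRENT partition in this sketch</div></div>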
<p>Going through a more sophisticated process to update statistics may seem more complicated, but it is strongly required in some specific scenarios. Fortunately, SQL Server provides different features to help optimize this process. I’m looking forward to the features that will ship with the next versions of SQL Server.</p>
]]></content:encoded>
			<wfw:commentRss></wfw:commentRss>
		<slash:comments>1</slash:comments>
		</item>
		<item>
		<title>Curious case of locking scenario with SQL Server audits</title>
		<link>https://blog.developpez.com/mikedavem/p13200/sql-server-vnext/curious-case-of-locking-scenario-including-sql-server-audits</link>
		<comments>https://blog.developpez.com/mikedavem/p13200/sql-server-vnext/curious-case-of-locking-scenario-including-sql-server-audits#comments</comments>
		<pubDate>Mon, 05 Oct 2020 19:25:47 +0000</pubDate>
		<dc:creator><![CDATA[mikedavem]]></dc:creator>
				<category><![CDATA[SQL Server 2017]]></category>
		<category><![CDATA[blocking]]></category>
		<category><![CDATA[dbatools]]></category>
		<category><![CDATA[performance]]></category>
		<category><![CDATA[SQL Server audit]]></category>

		<guid isPermaLink="false">http://blog.developpez.com/mikedavem/?p=1673</guid>
		<description><![CDATA[In high mission-critical environments, ensuring high level of availability is a prerequisite and usually IT department addresses required SLAs (the famous 9’s) with high available architecture solutions. As stated by Wikipedia: availability measurement is subject to some degree of interpretation. &#8230; <a href="https://blog.developpez.com/mikedavem/p13200/sql-server-vnext/curious-case-of-locking-scenario-including-sql-server-audits">Lire la suite <span class="meta-nav">&#8594;</span></a>]]></description>
				<content:encoded><![CDATA[<p>In high mission-critical environments, ensuring a high level of availability is a prerequisite and the IT department usually addresses the required SLAs (the famous 9’s) with highly available architecture solutions. As stated by <a href="https://en.wikipedia.org/wiki/High_availability" rel="noopener" target="_blank">Wikipedia</a>: <strong><em>availability measurement is subject to some degree of interpretation</em></strong>. Thus, IT departments generally focus on the uptime metric, whereas for other departments availability is often related to application response time or tied to slowness / unresponsiveness complaints. The latter is about application throughput, and database locks may contribute to reducing it. This is something we are constantly monitoring in addition to the uptime in my company. </p>
<p><span id="more-1673"></span></p>
<p>A couple of weeks ago, we suddenly began to experience some unexpected blocking issues involving some specific query patterns and the SQL Server audit feature. This is all the more important as this specific scenario started in one specific database and created a long hierarchy tree of blocked processes, with a blocked SQL Server audit operation first, and then propagated to all databases on the SQL Server instance. A very bad scenario we definitely want to avoid … Here is a sample of the blocking processes tree:</p>
<p><a href="http://blog.developpez.com/mikedavem/files/2020/10/167-1-blocking-scenarios-e1601924652500.jpg"><img src="http://blog.developpez.com/mikedavem/files/2020/10/167-1-blocking-scenarios-e1601924652500.jpg" alt="167 - 1 - blocking scenarios" width="800" height="56" class="alignnone size-full wp-image-1674" /></a></p>
<p>First, let’s set the context :</p>
<p>We have been using SQL Server audit for different purposes since SQL Server 2014 and we are running on SQL Server 2017 CU21 at the time of this write-up. The obvious one is security regulatory compliance with login events. We also rely on SQL Server audits to extend the observability of our monitoring system (based on Prometheus and Grafana). Configuration changes are audited with specific events and we link the concerned events with annotations in our SQL Server Grafana dashboards. Thus, we are able to quickly correlate events with behavior changes that may occur on the database side. The high-level view of the audit infrastructure is as follows:</p>
<p><a href="http://blog.developpez.com/mikedavem/files/2020/10/167-0-audit-architecture-e1601924728531.jpg"><img src="http://blog.developpez.com/mikedavem/files/2020/10/167-0-audit-architecture-e1601924728531.jpg" alt="167 - 0 - audit architecture" width="800" height="417" class="alignnone size-full wp-image-1675" /></a></p>
<p>As shown in the picture above, a PowerShell script stops and restarts the audit target, and then we use the archived audit file to import the related data into a dedicated database.<br />
Let’s mention we have used this process without any issue for a couple of years and we were surprised to experience such behavior now. Surprising enough for me to write a blog post <img src="https://blog.developpez.com/mikedavem/wp-includes/images/smilies/icon_smile.gif" alt=":)" class="wp-smiley" /> &#8230; Digging into the root cause, we pointed out a specific query pattern that seemed to be at the origin of our issue:</p>
<p><strong><br />
1.	Open transaction<br />
2.	Foreach row in a file execute an UPSERT statement<br />
3.	Commit transaction<br />
</strong></p>
<p>This is an <a href="https://www.red-gate.com/simple-talk/sql/t-sql-programming/rbar-row-by-agonizing-row/" rel="noopener" target="_blank">RBAR pattern</a> and it may become slow depending on the number of lines it has to deal with. In addition, the logic is encapsulated within a single transaction, leading to locks accumulating for the whole transaction duration. Thinking about it, we hadn’t faced this specific locking issue with other queries so far because they are executed within short transactions by design. </p>
<p>This point is important because enabling SQL Server audits also implies extra metadata locks. We decided to mimic this behavior on a TEST environment in order to figure out exactly what happened.</p>
<p>Here are the scripts we used for that purpose:</p>
<p><strong>TSQL script:</strong></p>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;height:450px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap">- Create audit<br />
USE [master]<br />
GO<br />
<br />
CREATE SERVER AUDIT [Audit-Target-Login]<br />
TO FILE <br />
( &nbsp; FILEPATH = N'/var/opt/mssql/log/'<br />
&nbsp; &nbsp; ,MAXSIZE = 0 MB<br />
&nbsp; &nbsp; ,MAX_ROLLOVER_FILES = 2147483647<br />
&nbsp; &nbsp; ,RESERVE_DISK_SPACE = OFF<br />
)<br />
WITH<br />
( &nbsp; QUEUE_DELAY = 1000<br />
&nbsp; &nbsp; ,ON_FAILURE = CONTINUE<br />
)<br />
WHERE (<br />
&nbsp; &nbsp; [server_principal_name] like '%\%' <br />
&nbsp; &nbsp; AND NOT [server_principal_name] like '%\svc%' <br />
&nbsp; &nbsp; AND NOT [server_principal_name] like 'NT SERVICE\%' <br />
&nbsp; &nbsp; AND NOT [server_principal_name] like 'NT AUTHORITY\%' <br />
&nbsp; &nbsp; AND NOT [server_principal_name] like '%XDCP%'<br />
);<br />
<br />
ALTER SERVER AUDIT [Audit-Target-Login] WITH (STATE = ON);<br />
GO<br />
<br />
CREATE SERVER AUDIT SPECIFICATION [Server-Audit-Target-Login]<br />
FOR SERVER AUDIT [Audit-Target-Login]<br />
ADD (FAILED_DATABASE_AUTHENTICATION_GROUP),<br />
ADD (SUCCESSFUL_DATABASE_AUTHENTICATION_GROUP),<br />
ADD (FAILED_LOGIN_GROUP),<br />
ADD (SUCCESSFUL_LOGIN_GROUP),<br />
ADD (LOGOUT_GROUP)<br />
WITH (STATE = ON)<br />
GO<br />
<br />
USE [DBA] <br />
GO <br />
<br />
-- Tables to simulate the scenario<br />
CREATE TABLE dbo.T ( <br />
&nbsp; &nbsp; id INT, <br />
&nbsp; &nbsp; col1 VARCHAR(50) <br />
);<br />
<br />
CREATE TABLE dbo.T2 ( <br />
&nbsp; &nbsp; id INT, <br />
&nbsp; &nbsp; col1 VARCHAR(50) <br />
); <br />
<br />
INSERT INTO dbo.T VALUES (1, REPLICATE('T',20));<br />
INSERT INTO dbo.T2 VALUES (1, REPLICATE('T',20));</div></div>
<p><strong>PowerShell scripts:</strong></p>
<p>Session 1: Simulating SQL pattern</p>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;height:450px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap"># Scenario simulation &nbsp;<br />
$server ='127.0.0.1' <br />
$Database ='DBA' <br />
<br />
$Connection =New-Object System.Data.SQLClient.SQLConnection <br />
$Connection.ConnectionString = &quot;Server=$server;Initial Catalog=$Database;Integrated Security=false;User ID=sa;Password=P@SSw0rd1;Application Name=TESTLOCK&quot; <br />
$Connection.Open() <br />
<br />
$Command = New-Object System.Data.SQLClient.SQLCommand <br />
$Command.Connection = $Connection <br />
$Command.CommandTimeout = 500<br />
<br />
$sql = <br />
&quot; <br />
MERGE T AS T <br />
USING T2 AS S ON T.id = S.id <br />
WHEN MATCHED THEN UPDATE SET T.col1 = 'TT' <br />
WHEN NOT MATCHED THEN INSERT (col1) VALUES ('TT'); <br />
<br />
WAITFOR DELAY '00:00:03' &nbsp;<br />
&quot; &nbsp;<br />
<br />
#Begin Transaction <br />
$command.Transaction = $connection.BeginTransaction() <br />
<br />
# Simulate for each file =&gt; Execute merge statement<br />
while(1 -eq 1){<br />
<br />
&nbsp; &nbsp; $Command.CommandText =$sql <br />
&nbsp; &nbsp; $Result =$Command.ExecuteNonQuery() <br />
<br />
}<br />
&nbsp; &nbsp; &nbsp;<br />
$command.Transaction.Commit() <br />
$Connection.Close()</div></div>
<p>Session 2: Simulating stopping / starting SQL Server audit for archiving purpose</p>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap">$creds = New-Object System.Management.Automation.PSCredential -ArgumentList ($user, $password)<br />
<br />
$Query = &quot;<br />
&nbsp; &nbsp; USE master;<br />
&nbsp; &nbsp; ALTER SERVER AUDIT [Audit-Target-Login]<br />
&nbsp; &nbsp; WITH ( STATE = OFF );<br />
<br />
&nbsp; &nbsp; ALTER SERVER AUDIT [Audit-Target-Login]<br />
&nbsp; &nbsp; WITH ( STATE = ON );<br />
&quot;<br />
<br />
Invoke-DbaQuery `<br />
&nbsp; &nbsp; -SqlInstance $server `<br />
&nbsp; &nbsp; -Database $Database `<br />
&nbsp; &nbsp; -SqlCredential $creds `<br />
&nbsp; &nbsp; -Query $Query</div></div>
<p>First, we wanted to get a comprehensive picture of the locks acquired during the execution of this specific SQL pattern, with an extended event session on the lock_acquired event as follows:</p>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;height:450px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap">CREATE EVENT SESSION [locks] <br />
ON SERVER <br />
ADD EVENT sqlserver.lock_acquired<br />
(<br />
&nbsp; &nbsp; ACTION(sqlserver.client_app_name,<br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;sqlserver.session_id,<br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;sqlserver.transaction_id)<br />
&nbsp; &nbsp; WHERE ([sqlserver].[client_app_name]=N'TESTLOCK'))<br />
ADD TARGET package0.histogram<br />
(<br />
&nbsp; &nbsp; SET filtering_event_name=N'sqlserver.lock_acquired',<br />
&nbsp; &nbsp; source=N'resource_type',source_type=(0)<br />
)<br />
WITH <br />
(<br />
&nbsp; &nbsp; MAX_MEMORY=4096 KB,<br />
&nbsp; &nbsp; EVENT_RETENTION_MODE=ALLOW_SINGLE_EVENT_LOSS,<br />
&nbsp; &nbsp; MAX_DISPATCH_LATENCY=30 SECONDS,<br />
&nbsp; &nbsp; MAX_EVENT_SIZE=0 KB,<br />
&nbsp; &nbsp; MEMORY_PARTITION_MODE=NONE,<br />
&nbsp; &nbsp; TRACK_CAUSALITY=OFF,<br />
&nbsp; &nbsp; STARTUP_STATE=OFF<br />
)<br />
GO</div></div>
<p>Here is the output we got after running the first PowerShell session:</p>
<p><a href="http://blog.developpez.com/mikedavem/files/2020/10/167-2-xe-lock-output.jpg"><img src="http://blog.developpez.com/mikedavem/files/2020/10/167-2-xe-lock-output.jpg" alt="167 - 2 - xe lock output" width="327" height="158" class="alignnone size-full wp-image-1676" /></a></p>
<p>We confirm METADATA locks in addition to the usual locks acquired on the concerned structures. We correlated this output with sp_WhoIsActive (and @get_locks = 1) after running the second PowerShell session. Let’s mention that you will likely have to run the 2nd query several times to reproduce the initial issue.  </p>
<p>Here is a picture of the locks respectively acquired by session 1 and in waiting state for session 2:</p>
<p><a href="http://blog.developpez.com/mikedavem/files/2020/10/167-3-sp_WhoIsActiveGetLocks-e1601925071999.jpg"><img src="http://blog.developpez.com/mikedavem/files/2020/10/167-3-sp_WhoIsActiveGetLocks-e1601925071999.jpg" alt="167 - 3 - sp_WhoIsActiveGetLocks" width="800" height="344" class="alignnone size-full wp-image-1677" /></a></p>
<p>&#8230;</p>
<p><a href="http://blog.developpez.com/mikedavem/files/2020/10/167-4-sp_WhoIsActiveGetLocks2-e1601925104990.jpg"><img src="http://blog.developpez.com/mikedavem/files/2020/10/167-4-sp_WhoIsActiveGetLocks2-e1601925104990.jpg" alt="167 - 4 - sp_WhoIsActiveGetLocks2" width="800" height="122" class="alignnone size-full wp-image-1678" /></a></p>
<p>We may clearly identify metadata locks acquired on the SQL Server audit itself (METADATA.AUDIT_ACTIONS with Sch-S) and the second query with the ALTER SERVER AUDIT … WITH (STATE = OFF) statement waiting on the same resource (Sch-M). Unfortunately, my google Fu didn’t provide any relevant information on this topic except the documentation related to the <a href="https://docs.microsoft.com/en-us/sql/relational-databases/system-dynamic-management-views/sys-dm-tran-locks-transact-sql?view=sql-server-ver15" rel="noopener" target="_blank">sys.dm_tran_locks</a> DMV. My guess is that writing events to audits requires a stable underlying infrastructure, and SQL Server needs to protect the concerned components (with Sch-S) against concurrent modifications (Sch-M). Anyway, it is easy to figure out that subsequent queries could be blocked (their Sch-S requests on the audit resource queuing behind the pending Sch-M) while the previous ones are running.  </p>
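<p>For reference, a minimal sketch of how these metadata locks can also be observed directly from the sys.dm_tran_locks DMV (instead of sp_WhoIsActive):</p>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap">-- Who holds or waits on METADATA resources (audit-related ones show up here)<br />
SELECT request_session_id,<br />
&nbsp; &nbsp; resource_type,<br />
&nbsp; &nbsp; resource_subtype,<br />
&nbsp; &nbsp; request_mode,<br />
&nbsp; &nbsp; request_status<br />
FROM sys.dm_tran_locks<br />
WHERE resource_type = 'METADATA'<br />
ORDER BY request_session_id;</div></div>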
<p>The query pattern exposed previously (unlike short transactions) is a good catalyst for such a blocking scenario due to the accumulation and duration of locks within a single transaction. This is confirmed by the XE output:</p>
<p><a href="http://blog.developpez.com/mikedavem/files/2020/10/167-5-lock_sch_s_same_transaction-e1601925276612.jpg"><img src="http://blog.developpez.com/mikedavem/files/2020/10/167-5-lock_sch_s_same_transaction-e1601925276612.jpg" alt="167 - 5 - lock_sch_s_same_transaction" width="800" height="543" class="alignnone size-full wp-image-1681" /></a></p>
<p>We managed to get a reproducible scenario with the TSQL and PowerShell scripts. In addition, I also ran queries from other databases to confirm it may compromise the responsiveness of the entire workload on the same instance (respectively the DBA3 and DBA4 databases in my test). </p>
<p><a href="http://blog.developpez.com/mikedavem/files/2020/10/167-6-lock_tree-e1601925310889.jpg"><img src="http://blog.developpez.com/mikedavem/files/2020/10/167-6-lock_tree-e1601925310889.jpg" alt="167 - 6 - lock_tree" width="800" height="78" class="alignnone size-full wp-image-1682" /></a></p>
<p><strong>How did we fix this issue?</strong></p>
<p>Even if it is only one part of the solution, I’m a strong believer that this pattern remains a performance killer, and using a set-based approach may help drastically reduce the number and duration of locks and, implicitly, the chances of this blocking scenario happening again. Let&rsquo;s mention it is not only about the MERGE statement, because I managed to reproduce the same issue with INSERT and UPDATE statements as well.</p>
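<p>As an illustration, here is a minimal sketch of the set-based alternative: the file is bulk loaded once into a staging table (dbo.T_Staging is a placeholder) and a single MERGE replaces the row-by-row loop inside the long transaction:</p>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap">-- Bulk load the file into dbo.T_Staging first (BULK INSERT, bcp, ...), then run one MERGE<br />
MERGE dbo.T AS T<br />
USING dbo.T_Staging AS S<br />
&nbsp; &nbsp; ON T.id = S.id<br />
WHEN MATCHED THEN<br />
&nbsp; &nbsp; UPDATE SET T.col1 = S.col1<br />
WHEN NOT MATCHED THEN<br />
&nbsp; &nbsp; INSERT (id, col1) VALUES (S.id, S.col1);</div></div>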
<p>Then, this scenario really made us think about a long-term solution because we cannot guarantee this pattern will not be used by other teams in the future. Looking further at the PowerShell script which carries out the steps of archiving the audit file and inserting data into the audit database, we finally added a QueryTimeout parameter value of 5s to the concerned Invoke-DbaQuery command as follows:</p>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;height:450px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap">...<br />
<br />
$query = &quot;<br />
&nbsp; &nbsp; USE [master];<br />
<br />
&nbsp; &nbsp; IF EXISTS (SELECT 1<br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; FROM &nbsp;sys.dm_server_audit_status<br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; WHERE [name] = '$InstanceAuditPrefix-$AuditName')<br />
&nbsp; &nbsp; BEGIN<br />
&nbsp; &nbsp; &nbsp; &nbsp; ALTER SERVER AUDIT [$InstanceAuditPrefix-$AuditName]<br />
&nbsp; &nbsp; &nbsp; &nbsp; WITH (STATE = OFF);<br />
&nbsp; &nbsp; END<br />
<br />
&nbsp; &nbsp; ALTER SERVER AUDIT [$InstanceAuditPrefix-$AuditName]<br />
&nbsp; &nbsp; WITH (STATE = ON);<br />
&quot;<br />
<br />
Invoke-DbaQuery `<br />
&nbsp; &nbsp; -SqlInstance $Instance `<br />
&nbsp; &nbsp; -SqlCredential $SqlCredential `<br />
&nbsp; &nbsp; -Database master `<br />
&nbsp; &nbsp; -Query $query `<br />
&nbsp; &nbsp; -EnableException `<br />
&nbsp; &nbsp; -QueryTimeout 5 <br />
<br />
...</div></div>
<p>Therefore, because we want to prioritize the business workload over the SQL Server audit operation, if such a situation occurs again, stopping the SQL Server audit will time out after 5s, which was relevant in our context. The next iteration of the PowerShell script is able to restart from the last stage executed previously. </p>
<p>Hope this blog post helps.</p>
<p>See you!</p>
]]></content:encoded>
			<wfw:commentRss></wfw:commentRss>
		<slash:comments>1</slash:comments>
		</item>
		<item>
		<title>SQL Server index rebuild online and blocking scenario</title>
		<link>https://blog.developpez.com/mikedavem/p13199/sql-server-2012/sql-server-index-rebuid-online-and-blocking-scenario</link>
		<comments>https://blog.developpez.com/mikedavem/p13199/sql-server-2012/sql-server-index-rebuid-online-and-blocking-scenario#comments</comments>
		<pubDate>Sun, 30 Aug 2020 21:18:28 +0000</pubDate>
		<dc:creator><![CDATA[mikedavem]]></dc:creator>
				<category><![CDATA[SQL Server 2012]]></category>
		<category><![CDATA[SQL Server 2014]]></category>
		<category><![CDATA[SQL Server 2016]]></category>
		<category><![CDATA[SQL Server 2017]]></category>
		<category><![CDATA[SQL Server 2019]]></category>
		<category><![CDATA[blocking]]></category>
		<category><![CDATA[online operation]]></category>
		<category><![CDATA[performance]]></category>
		<category><![CDATA[SQL]]></category>
		<category><![CDATA[SQL Server]]></category>

		<guid isPermaLink="false">http://blog.developpez.com/mikedavem/?p=1664</guid>
		<description><![CDATA[A couple of months ago, I experienced a problem about index rebuild online operation on SQL Server. In short, the operation was supposed to be online and to never block concurrent queries. But in fact, it was not the case &#8230; <a href="https://blog.developpez.com/mikedavem/p13199/sql-server-2012/sql-server-index-rebuid-online-and-blocking-scenario">Lire la suite <span class="meta-nav">&#8594;</span></a>]]></description>
				<content:encoded><![CDATA[<p>A couple of months ago, I experienced a problem with an online index rebuild operation on SQL Server. In short, the operation was supposed to be online and to never block concurrent queries. But in fact, it was not the case (or to be more precise, it was only partially the case) and, to make the scenario more complex, we experienced different behaviors depending on the context. Let’s start the story with the initial context: in my company, we usually go through continuous deployment including SQL modification scripts and, because we usually rely on a daily pipeline, we must ensure the related SQL operations are not too disruptive, to avoid impacting the user experience.</p>
<p><span id="more-1664"></span></p>
<p>Sometimes, we must introduce new indexes in deployment scripts and, according to how disruptive the script can be, a discussion between Devs and Ops is initiated; it results either in manual handling by the Ops team or in automatic deployment through the pipeline by the Devs. </p>
<p>Non-disruptive operations can be achieved in many ways, and the ONLINE capabilities of SQL Server may be part of the solution; this is what I suggested for one of our scripts. Let’s illustrate this context with the following example. I created a table named dbo.t1 with a bunch of rows:</p>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap">USE [test];<br />
<br />
SET NOCOUNT ON;<br />
<br />
DROP TABLE IF EXISTS dbo.t1;<br />
GO<br />
<br />
CREATE TABLE dbo.t1 (<br />
&nbsp; &nbsp; id INT IDENTITY(1,1) NOT NULL PRIMARY KEY,<br />
&nbsp; &nbsp; col1 VARCHAR(50) NULL<br />
);<br />
GO<br />
<br />
INSERT INTO dbo.t1 (col1) VALUES (REPLICATE('T', 50));<br />
GO …<br />
EXEC sp_spaceused 'dbo.t1'<br />
--name&nbsp; rows&nbsp; &nbsp; reserved&nbsp; &nbsp; data&nbsp; &nbsp; index_size&nbsp; unused<br />
--t1&nbsp; &nbsp; 5226496 &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; 1058000 KB&nbsp; 696872 KB &nbsp; 342888 KB &nbsp; 18240 KB</div></div>
<p>Go ahead and let’s set the context with the pattern of script deployment we went through during this specific release. Let’s mention this script is oversimplified, but I keep it voluntarily simple to focus only on the most important part.  You will notice the script includes two steps with operations on the same table: updating / fixing values in col1 first and then creating an index on col1.</p>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap">/* Code before */<br />
<br />
-- Update some values in the col1 column<br />
UPDATE [dbo].[t1]<br />
SET col1 = REPLICATE('B', 50)<br />
<br />
-- Then create an index on col1 column<br />
CREATE INDEX [col1]<br />
ON [dbo].[t1] (col1) WITH (ONLINE = ON);<br />
GO</div></div>
<p>At the initial stage, the index creation was left to the default (OFFLINE). Having discussed this point with the DEV team, we decided to create the index ONLINE in this context. The choice between an OFFLINE and an ONLINE operation is often not trivial and should be evaluated carefully, but to keep it simple, let’s say it was the right way to go in our context. Generally speaking, online operations are slower, but the tradeoff was acceptable in order to minimize blocking issues during this deployment. At least, this is what I thought …</p>
<p>In my demo, without any concurrent workload against the dbo.t1 table, creating the index offline took 6s compared to the online method with 12s. So, an expected result here …</p>
<p>Let’s run another query in a second session:</p>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap">SELECT id, col1<br />
FROM dbo.t1<br />
WHERE id BETWEEN 1 AND 2</div></div>
<p>In a normal situation, this query should only be blocked for a short time corresponding to the duration of the update operation. Once the update is done, the blocking situation should disappear, even while the index creation is performed ONLINE. </p>
<p>But now let’s add <a href="https://flywaydb.org/" rel="noopener" target="_blank">Flyway</a> to the context. Flyway is an open source tool we are using for the automatic deployment of SQL objects. The deployment script was executed from it in the ACC environment and this time we noticed concurrent accesses blocked for longer. This goes against what we would ideally like. Digging through this issue with the DEV team, we also noticed the following message when running the deployment script:</p>
<p><em>Warning: Online index operation on table &lsquo;dbo.t1&rsquo; will proceed but concurrent access to the table may be limited due to residual lock on the table from a previous operation in the same transaction.<br />
</em></p>
<p>This is something I didn’t notice from SQL Server Management Studio when I tested the same deployment script. So, what happened here?</p>
<p>Referring to the <a href="https://flywaydb.org/documentation/migrations#transactions" rel="noopener" target="_blank">Flyway documentation</a>, it is mentioned that Flyway always wraps the execution of an entire migration within a single transaction by default and it was exactly the root cause of the issue.</p>
<p>Let’s try some experiments: </p>
<p><strong>Test 1</strong>: Update + online index creation in autocommit mode (one transaction per statement).</p>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap">-- Update some values in the col1 colum<br />
UPDATE [dbo].[t1]<br />
SET col1 = REPLICATE('B', 50)<br />
<br />
-- Then create an index on col1 column<br />
CREATE INDEX [col1]<br />
ON [dbo].[t1] (col1) WITH (ONLINE = ON);<br />
GO<br />
-- In another session<br />
SELECT id, col1<br />
FROM dbo.t1<br />
WHERE id BETWEEN 1 AND 2</div></div>
<p><strong>Test 2</strong>: Update + online index creation within a single explicit transaction</p>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap">BEGIN TRAN;<br />
<br />
-- Update some values in the col1 column<br />
UPDATE [dbo].[t1]<br />
SET col1 = REPLICATE('B', 50)<br />
<br />
-- Then create an index on col1 column<br />
CREATE INDEX [col1]<br />
ON [dbo].[t1] (col1) WITH (ONLINE = ON);<br />
GO<br />
COMMIT TRAN;<br />
-- In another session<br />
SELECT id, col1<br />
FROM dbo.t1<br />
WHERE id BETWEEN 1 AND 2</div></div>
<p>After running these two scripts, we can notice the blocking duration of the SELECT query is longer in test 2, as shown in the picture below:</p>
<p><a href="http://blog.developpez.com/mikedavem/files/2020/08/166-1-blocked-process.jpg"><img src="http://blog.developpez.com/mikedavem/files/2020/08/166-1-blocked-process.jpg" alt="166 - 1 - blocked process" width="890" height="358" class="alignnone size-full wp-image-1665" /></a></p>
<p>In test 1, the duration of the blocking corresponds to that of the update operation (first step of the script). However, in test 2, we must also include the time taken to create the index; let’s mention the index creation is not the blocking operation itself, but it extends the residual lock taken by the previous update operation. In short, this is exactly what the warning message is telling us. You can easily imagine the impact such a situation may have if the index creation takes a long time: you may get exactly the opposite of what you expected. </p>
<p>Obviously, this is not a recommended situation and creating an index should run in a very narrow and constrained transaction. But from my experience, things are not always obvious and, depending on your context, you should keep an eye on how transactions are managed, especially when it comes to automatic deployment tooling that can quickly fall out of the scope of the DBA / Ops team. Strong collaboration with the DEV team is recommended to anticipate this kind of issue.</p>
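<p>As an illustration, and assuming the deployment tool wraps each migration script in its own transaction, one simple way to avoid the residual lock is to split the work into two separate migration scripts so the update commits before the online index creation starts (the file names below are purely illustrative):</p>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap">-- V1__fix_col1_values.sql : the data fix commits in its own migration<br />
UPDATE [dbo].[t1]<br />
SET col1 = REPLICATE('B', 50);<br />
GO<br />
<br />
-- V2__create_index_col1.sql : the online index creation runs outside the update transaction<br />
CREATE INDEX [col1]<br />
ON [dbo].[t1] (col1) WITH (ONLINE = ON);<br />
GO</div></div>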
<p>See you !!</p>
]]></content:encoded>
			<wfw:commentRss></wfw:commentRss>
		<slash:comments>1</slash:comments>
		</item>
	</channel>
</rss>
