<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>David Barbarin &#187; Docker</title>
	<atom:link href="https://blog.developpez.com/mikedavem/pcategory/docker/feed" rel="self" type="application/rss+xml" />
	<link>https://blog.developpez.com/mikedavem</link>
	<description>MVP DataPlatform - MCM SQL Server</description>
	<lastBuildDate>Thu, 09 Sep 2021 21:19:50 +0000</lastBuildDate>
	<language>fr-FR</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=4.1.42</generator>
	<item>
		<title>Extending SQL Server monitoring with Raspberry PI and Lametric</title>
		<link>https://blog.developpez.com/mikedavem/p13204/sql-server-2005/extending-sql-server-monitoring-with-raspberry-pi-and-lametric</link>
		<comments>https://blog.developpez.com/mikedavem/p13204/sql-server-2005/extending-sql-server-monitoring-with-raspberry-pi-and-lametric#comments</comments>
		<pubDate>Thu, 07 Jan 2021 21:59:25 +0000</pubDate>
		<dc:creator><![CDATA[mikedavem]]></dc:creator>
				<category><![CDATA[DevOps]]></category>
		<category><![CDATA[Docker]]></category>
		<category><![CDATA[K8s]]></category>
		<category><![CDATA[SQL Azure]]></category>
		<category><![CDATA[SQL Server 2005]]></category>
		<category><![CDATA[SQL Server 2008]]></category>
		<category><![CDATA[SQL Server 2008 R2]]></category>
		<category><![CDATA[SQL Server 2014]]></category>
		<category><![CDATA[SQL Server 2016]]></category>
		<category><![CDATA[SQL Server 2017]]></category>
		<category><![CDATA[SQL Server 2019]]></category>
		<category><![CDATA[Lametric]]></category>
		<category><![CDATA[monitoring]]></category>
		<category><![CDATA[Powershell]]></category>
		<category><![CDATA[Raspberry]]></category>
		<category><![CDATA[sqlserver]]></category>

		<guid isPermaLink="false">http://blog.developpez.com/mikedavem/?p=1742</guid>
		<description><![CDATA[First blog post of this new year 2021, and I will start with a fancy, How-To-Geek-style topic. In my last blog post, I discussed monitoring and how it should help you quickly address a situation that is &#8230; <a href="https://blog.developpez.com/mikedavem/p13204/sql-server-2005/extending-sql-server-monitoring-with-raspberry-pi-and-lametric">Lire la suite <span class="meta-nav">&#8594;</span></a>]]></description>
				<content:encoded><![CDATA[<p>First blog post of this new year 2021, and I will start with a fancy, How-To-Geek-style topic. </p>
<p>In my <a href="https://blog.developpez.com/mikedavem/p13203/sql-server-2014/why-we-moved-sql-server-monitoring-on-prometheus-and-grafana" rel="noopener" target="_blank">last blog post</a>, I discussed monitoring and how it should help you quickly address a degrading situation. Alerts are probably the first thing that catches your attention and, in my case, they often arrive as emails in a dedicated folder. That remains a good approach, at least as long as you are not absorbed for too long in other daily tasks or projects. At the office, I know I would probably notice new alerts more quickly, but as I said previously, teleworking has definitely changed the game.  </p>
<p><span id="more-1742"></span></p>
<p>I wanted to find a way to address this concern, at least for the main critical SQL Server alerts, and I thought about relying on my existing home lab infrastructure to do it. It is always a good opportunity to learn something and to improve my skills with a real-case scenario. </p>
<p>My home lab infrastructure includes a cluster of <a href="https://www.raspberrypi.org/products/raspberry-pi-4-model-b/" rel="noopener" target="_blank">Raspberry Pi 4</a> nodes. I initially use it to improve my skills on K8s or to study some IoT topics, for instance. It is a good candidate for developing and deploying a new app that detects new incoming alerts in my mailbox and sends notifications to my Lametric accordingly. </p>
<p><a href="https://lametric.com/" rel="noopener" target="_blank">Lametric</a> is basically a connected clock, but it also works as a highly visible display showing notifications from devices or apps via REST APIs. The first time I saw such a device in action was at a DevOps meetup in 2018 about Docker and Jenkins deployments with <a href="https://www.linkedin.com/in/duquesnoyeric/" rel="noopener" target="_blank">Eric Dusquenoy</a> and Tim Izzo (<a href="https://twitter.com/5ika_" rel="noopener" target="_blank">@5ika_</a>). In addition, one of my previous customers also had one in his office and we had some discussions about cool customizations through Lametric apps. </p>
<p>Connecting through VPN to my company network is mandatory to work from home, and unfortunately the Lametric device doesn’t support this scenario because communication is limited to the local network only. So, I need an app that runs on my local (home) network and is able to connect to my mailbox, fetch new incoming emails and finally send notifications to my Lametric device. </p>
<p>Here is my setup:</p>
<p><a href="http://blog.developpez.com/mikedavem/files/2021/01/171-0-lametric_infra.jpg"><img src="http://blog.developpez.com/mikedavem/files/2021/01/171-0-lametric_infra-1024x711.jpg" alt="171 - 0 - lametric_infra" width="584" height="405" class="alignnone size-large wp-image-1743" /></a></p>
<p>There are plenty of good blog posts on the internet about building a Raspberry Pi cluster, and I would suggest reading <a href="https://dbafromthecold.com/2020/11/30/building-a-raspberry-pi-cluster-to-run-azure-sql-edge-on-kubernetes/" rel="noopener" target="_blank">the one</a> by Andrew Pruski (<a href="https://twitter.com/dbafromthecold" rel="noopener" target="_blank">@dbafromthecold</a>). </p>
<p>As shown above, SQL alerts follow different paths depending on our infrastructure (on-prem and Azure SQL databases), but all of them are sent to a dedicated DBA distribution list. </p>
<p>The app is a simple PowerShell script that relies on the Exchange Web Services (EWS) APIs to connect to the mailbox and fetch new mails. Sending notifications to my Lametric device is achieved by a simple REST API call with a well-formatted body. Details can be found in the <a href="https://lametric-documentation.readthedocs.io/en/latest/reference-docs/device-notifications.html" rel="noopener" target="_blank">Lametric documentation</a>. As a prerequisite, you need to create a notification app on the Lametric Developer site as follows:</p>
<p><a href="http://blog.developpez.com/mikedavem/files/2021/01/171-3-lametric-app-token.jpg"><img src="http://blog.developpez.com/mikedavem/files/2021/01/171-3-lametric-app-token-1024x364.jpg" alt="171 - 3 - lametric app token" width="584" height="208" class="alignnone size-large wp-image-1744" /></a></p>
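<p>Following the Lametric documentation linked above, the notification itself is just an HTTP POST to the device&rsquo;s local API with a JSON body describing one or more frames. The actual app is written in PowerShell; the snippet below is a minimal Python sketch of the idea, where the device IP, API key and icon ID are placeholders, not values from my setup:</p>

```python
import json
import urllib.request
from base64 import b64encode

# Placeholder values: replace with your device IP and the device API key
# shown in the LaMetric mobile app.
DEVICE_IP = "192.168.1.50"
API_KEY = "your-device-api-key"

def build_notification(text, icon="i555", priority="warning"):
    """Build the 'well-formatted body' expected by the device
    notifications endpoint: a model holding a list of frames."""
    return {
        "priority": priority,
        "model": {"frames": [{"icon": icon, "text": text}]},
    }

def send_notification(text):
    """POST the notification to the device on the local network."""
    body = json.dumps(build_notification(text)).encode()
    req = urllib.request.Request(
        f"http://{DEVICE_IP}:8080/api/v2/device/notifications",
        data=body,
        headers={
            "Content-Type": "application/json",
            # The local API uses HTTP basic auth: user 'dev', password = device key
            "Authorization": "Basic "
            + b64encode(f"dev:{API_KEY}".encode()).decode(),
        },
    )
    return urllib.request.urlopen(req, timeout=5)
```

<p>Calling <code>send_notification("SQL alert: severity 17")</code> from the home network is then enough to make the clock scroll the message.</p>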
<p>As said previously, I used PowerShell for this app; it makes it easier to find documentation and tutorials when it comes to Microsoft products. But if you are more comfortable with Python, the APIs are also available in a <a href="https://pypi.org/project/py-ews/" rel="noopener" target="_blank">dedicated package</a>. Note that using PowerShell doesn’t necessarily mean using a Windows-based container: I relied instead on a Linux-based image with PowerShell Core for the ARM architecture, provided by Microsoft on <a href="https://hub.docker.com/_/microsoft-powershell" rel="noopener" target="_blank">Docker Hub</a>. Finally, sensitive information like the Lametric token or mailbox credentials is stored in a K8s secret for security reasons. My app project is available on my <a href="https://github.com/mikedavem/lametric" rel="noopener" target="_blank">GitHub</a>. Feel free to use it.</p>
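<p>The real script relies on the Exchange Web Services APIs, but the overall shape of the app is easy to sketch. The snippet below is an illustrative Python version, not the code from my GitHub project: credentials come from environment variables populated by the K8s secret, and a simple filter decides which mails count as critical alerts. The variable names and the severity pattern are assumptions for the example:</p>

```python
import os
import re

# In the pod spec, the K8s secret entries (LaMetric token, mailbox
# credentials) would be exposed as environment variables; these names
# are illustrative, not the ones used in the actual project.
LAMETRIC_TOKEN = os.environ.get("LAMETRIC_TOKEN", "")
MAILBOX_USER = os.environ.get("MAILBOX_USER", "")
MAILBOX_PASSWORD = os.environ.get("MAILBOX_PASSWORD", "")

# Only forward mails that look like critical SQL Server alerts:
# severity 17-25 errors, or anything flagged as critical.
ALERT_PATTERN = re.compile(r"(severity\s*(1[7-9]|2[0-5])|critical)", re.IGNORECASE)

def is_critical_alert(subject):
    """Return True when a mail subject should trigger a notification."""
    return bool(ALERT_PATTERN.search(subject))
```

<p>The main loop of the app then simply polls the mailbox, applies this filter to new messages, and forwards the matching subjects to the device notification endpoint.</p>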
<p>Here are some results:</p>
<p>&#8211; After deploying my pod:</p>
<p><a href="http://blog.developpez.com/mikedavem/files/2021/01/171-1-lametric-pod.jpg"><img src="http://blog.developpez.com/mikedavem/files/2021/01/171-1-lametric-pod.jpg" alt="171 - 1 - lametric pod" width="483" height="82" class="alignnone size-full wp-image-1745" /></a></p>
<p>&#8211; The app is running and checking for new incoming emails (kubectl logs command)</p>
<p><a href="http://blog.developpez.com/mikedavem/files/2021/01/171-2-lametric-pod-logs.jpg"><img src="http://blog.developpez.com/mikedavem/files/2021/01/171-2-lametric-pod-logs.jpg" alt="171 - 2 - lametric pod logs" width="828" height="438" class="alignnone size-full wp-image-1747" /></a></p>
<p>When an email is detected, a <a href="https://youtu.be/EcdSFziNc3U" title="Notification" rel="noopener" target="_blank">notification</a> is sent to the Lametric device accordingly.</p>
<p>A geeky, fun, good (bad?) idea to start this new year 2021 <img src="https://blog.developpez.com/mikedavem/wp-includes/images/smilies/icon_smile.gif" alt=":-)" class="wp-smiley" /></p>
]]></content:encoded>
			<wfw:commentRss></wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Introducing SQL Server with Portworx and storage orchestration</title>
		<link>https://blog.developpez.com/mikedavem/p13184/docker/introducing-sql-server-with-portworx-and-storage-orchestration</link>
		<comments>https://blog.developpez.com/mikedavem/p13184/docker/introducing-sql-server-with-portworx-and-storage-orchestration#comments</comments>
		<pubDate>Sun, 15 Dec 2019 22:08:03 +0000</pubDate>
		<dc:creator><![CDATA[mikedavem]]></dc:creator>
				<category><![CDATA[DevOps]]></category>
		<category><![CDATA[Docker]]></category>
		<category><![CDATA[K8s]]></category>
		<category><![CDATA[Kubernetes]]></category>
		<category><![CDATA[Portworx]]></category>
		<category><![CDATA[SQL Server]]></category>
		<category><![CDATA[Storage orchestration]]></category>

		<guid isPermaLink="false">http://blog.developpez.com/mikedavem/?p=1402</guid>
		<description><![CDATA[Stateful applications like databases need special considerations in the K8s world. This is because data persistence is important, and we also need something at the storage layer communicating with the container orchestrator to take advantage of its scheduling capabilities. For stateful &#8230; <a href="https://blog.developpez.com/mikedavem/p13184/docker/introducing-sql-server-with-portworx-and-storage-orchestration">Lire la suite <span class="meta-nav">&#8594;</span></a>]]></description>
				<content:encoded><![CDATA[<p>Stateful applications like databases need special considerations in the K8s world. This is because data persistence is important, and we also need something at the storage layer that communicates with the container orchestrator to take advantage of its scheduling capabilities. For stateful applications, a StatefulSet may be only part of the solution, because it primarily focuses on Pod availability and we have to rely on the application's own capabilities for data replication. A StatefulSet doesn’t address the underlying storage at all. At the time of this write-up, StatefulSet-based solutions for SQL Server, such as availability groups, are not yet supported in production. </p>
<p><span id="more-1402"></span></p>
<p>So, with stateful applications we may consider other solutions like GlusterFS or NFS as distributed storage spanning all the nodes of the K8s cluster, but they often don’t meet the requirements of a production database workload in terms of throughput, IOPS and data migration.</p>
<p>Products on the market seem to address these specific requirements, and I was very curious to get a better picture of their capabilities. During my investigation for a potential customer&rsquo;s project, I went through a very interesting one named Portworx. The interesting part of Portworx is its container-native, orchestration-aware storage fabric, which brings storage operation and administration inside K8s. It aggregates the underlying storage and exposes it as a software-defined, programmable block device. </p>
<p>From a high-level perspective, Portworx uses a custom scheduler, <a href="https://portworx.com/stork-storage-orchestration-kubernetes/">STORK</a> (STorage Orchestration Runtime for Kubernetes), to assist K8s in placing a Pod on the same node where the associated PVC resides. It drastically reduces the complex annotation and labeling work otherwise needed to implement affinity rules. </p>
<p>In this blog post, I will focus only on the high-availability topic, which Portworx addresses by synchronizing volume content between K8s nodes and aggregated disks. To do so, Portworx requires you to define the redundancy of the dataset between replicas through a replication factor value. </p>
<p>I cannot expose my customer&rsquo;s architecture here, but let&rsquo;s try to apply the concept to my lab environment. </p>
<p><a href="http://blog.developpez.com/mikedavem/files/2019/12/151-0-0-K-Lab-architecture.jpg"><img src="http://blog.developpez.com/mikedavem/files/2019/12/151-0-0-K-Lab-architecture.jpg" alt="151 - 0 - 0 - K Lab architecture" width="675" height="495" class="alignnone size-full wp-image-1414" /></a></p>
<p>As shown above, my lab environment includes 4 K8s nodes, 3 of which act as workers. Each worker node owns its local storage based on SSD disks (one for the SQL Server data files, and another one handling the Portworx metadata activity &#8211; the journal disk). After deploying Portworx on my K8s cluster, here is a big picture of my configuration:</p>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap">$ kubectl get daemonset -n kube-system | egrep &quot;(stork|portworx|px)&quot;<br />
portworx &nbsp; &nbsp; &nbsp; 3 &nbsp; &nbsp; &nbsp; &nbsp; 3 &nbsp; &nbsp; &nbsp; &nbsp; 3 &nbsp; &nbsp; &nbsp; 3 &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;3 &nbsp; &nbsp; &nbsp; &nbsp;<br />
portworx-api &nbsp; 3 &nbsp; &nbsp; &nbsp; &nbsp; 3 &nbsp; &nbsp; &nbsp; &nbsp; 3 &nbsp; &nbsp; &nbsp; 3 &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;3</div></div>
<p>Portworx is a DaemonSet-based installation. Each Portworx node will discover the available storage to create a container-native block storage device with:<br />
&#8211;	/dev/sdb for my SQL Server data<br />
&#8211;	/dev/sdc for hosting my journal</p>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap">$ kubectl get pod -n kube-system | egrep &quot;(stork|portworx|px)&quot;<br />
<br />
portworx-555wf &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;1/1 &nbsp; &nbsp; Running &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;18 &nbsp; &nbsp; &nbsp; &nbsp; 2d23h<br />
portworx-api-2pv6s &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;1/1 &nbsp; &nbsp; Running &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;8 &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;2d23h<br />
portworx-api-s8zzr &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;1/1 &nbsp; &nbsp; Running &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;8 &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;2d23h<br />
portworx-api-vnqh2 &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;1/1 &nbsp; &nbsp; Running &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;4 &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;2d23h<br />
portworx-pjxl8 &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;1/1 &nbsp; &nbsp; Running &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;17 &nbsp; &nbsp; &nbsp; &nbsp; 2d23h<br />
portworx-wrcdf &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;1/1 &nbsp; &nbsp; Running &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;389 &nbsp; &nbsp; &nbsp; &nbsp;2d10h<br />
px-lighthouse-55db75b59c-qd2nc &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;3/3 &nbsp; &nbsp; Running &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;0 &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;35h<br />
stork-5d568485bb-ghlt9 &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;1/1 &nbsp; &nbsp; Running &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;0 &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;35h<br />
stork-5d568485bb-h2sqm &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;1/1 &nbsp; &nbsp; Running &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;13 &nbsp; &nbsp; &nbsp; &nbsp; 2d23h<br />
stork-5d568485bb-xxd4b &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;1/1 &nbsp; &nbsp; Running &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;1 &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;2d4h<br />
stork-scheduler-56574cdbb5-7td6v &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;1/1 &nbsp; &nbsp; Running &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;0 &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;35h<br />
stork-scheduler-56574cdbb5-skw5f &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;1/1 &nbsp; &nbsp; Running &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;4 &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;2d4h<br />
stork-scheduler-56574cdbb5-v5slj &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;1/1 &nbsp; &nbsp; Running &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;9 &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;2d23h</div></div>
<p>The above picture shows the different stork pods that may influence scheduling based on the location of the volumes a pod requires. In addition, the PX cluster (part of the Portworx Enterprise platform) includes all the Portworx pods and provides monitoring and performance insights for each related pod (the SQL Server instance here). </p>
<p>Let’s have a look at the global configuration by using the <strong>pxctl</strong> command (first section):</p>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap">$ PX_POD=$(kubectl get pods -l name=portworx -n kube-system -o jsonpath='{.items[0].metadata.name}')<br />
$ kubectl exec $PX_POD -n kube-system -- /opt/pwx/bin/pxctl status<br />
Status: PX is operational<br />
License: Trial (expires in 28 days)<br />
Node ID: 590d7afd-9d30-4624-8082-5f9cb18ecbfd<br />
&nbsp; &nbsp; &nbsp; &nbsp; IP: 192.168.90.63<br />
&nbsp; &nbsp; &nbsp; &nbsp; Local Storage Pool: 1 pool<br />
&nbsp; &nbsp; &nbsp; &nbsp; POOL &nbsp; &nbsp;IO_PRIORITY &nbsp; &nbsp; RAID_LEVEL &nbsp; &nbsp; &nbsp;USABLE &nbsp;USED &nbsp; &nbsp;STATUS &nbsp;ZONE &nbsp; &nbsp;REGION<br />
&nbsp; &nbsp; &nbsp; &nbsp; 0 &nbsp; &nbsp; &nbsp; HIGH &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;raid0 &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; 20 GiB &nbsp;8.5 GiB Online &nbsp;default default<br />
&nbsp; &nbsp; &nbsp; &nbsp; Local Storage Devices: 1 device<br />
&nbsp; &nbsp; &nbsp; &nbsp; Device &nbsp;Path &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;Media Type &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;Size &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;Last-Scan<br />
&nbsp; &nbsp; &nbsp; &nbsp; 0:1 &nbsp; &nbsp; /dev/sdb &nbsp; &nbsp; &nbsp; &nbsp;STORAGE_MEDIUM_MAGNETIC 20 GiB &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;08 Dec 19 21:59 UTC<br />
&nbsp; &nbsp; &nbsp; &nbsp; total &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; - &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; 20 GiB<br />
&nbsp; &nbsp; &nbsp; &nbsp; Cache Devices:<br />
&nbsp; &nbsp; &nbsp; &nbsp; No cache devices<br />
&nbsp; &nbsp; &nbsp; &nbsp; Journal Device:<br />
&nbsp; &nbsp; &nbsp; &nbsp; 1 &nbsp; &nbsp; &nbsp; /dev/sdc1 &nbsp; &nbsp; &nbsp; STORAGE_MEDIUM_MAGNETIC<br />
…</div></div>
<p>Portworx has created a pool composed of my 3 replicas / Kubernetes nodes with a 20 GiB SSD each. I just used a default configuration without specifying any zone or region settings for fault tolerance; this is not my focus at the moment. According to Portworx’s performance tuning documentation, I configured a journal device to improve I/O performance by offloading PX metadata writes to separate storage. </p>
<p>Second section:</p>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap">…<br />
Nodes: 3 node(s) with storage (3 online)<br />
&nbsp; &nbsp; &nbsp; &nbsp; IP &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;ID &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;SchedulerNodeName &nbsp; &nbsp; &nbsp; StorageNode &nbsp; &nbsp; &nbsp;Used &nbsp; &nbsp;Capacity &nbsp; &nbsp; &nbsp; &nbsp;Status &nbsp;StorageStatus &nbsp; Version &nbsp; &nbsp; &nbsp; &nbsp; Kernel &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;OS<br />
&nbsp; &nbsp; &nbsp; &nbsp; 192.168.5.62 &nbsp; &nbsp;b0ac4fa3-29c2-40a8-9033-1d0558ec31fd &nbsp; &nbsp;k8n2.dbi-services.test &nbsp;Yes &nbsp; &nbsp; 3.1 GiB &nbsp;20 GiB &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;Online &nbsp;Up &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;2.3.0.0-103206b 3.10.0-1062.1.2.el7.x86_64 &nbsp; &nbsp; &nbsp;CentOS Linux 7 (Core)<br />
&nbsp; &nbsp; &nbsp; &nbsp; 192.168.40.61 &nbsp; 9fc5bc45-5602-4926-ab38-c74f0a8a8b2c &nbsp; &nbsp;k8n1.dbi-services.test &nbsp;Yes &nbsp; &nbsp; 8.6 GiB &nbsp;20 GiB &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;Online &nbsp;Up &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;2.3.0.0-103206b 3.10.0-1062.1.2.el7.x86_64 &nbsp; &nbsp; &nbsp;CentOS Linux 7 (Core)<br />
&nbsp; &nbsp; &nbsp; &nbsp; 192.168.80.63 &nbsp; 590d7afd-9d30-4624-8082-5f9cb18ecbfd &nbsp; &nbsp;k8n3.dbi-services.test &nbsp;Yes &nbsp; &nbsp; 8.5 GiB &nbsp;20 GiB &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;Online &nbsp;Up (This node) &nbsp;2.3.0.0-103206b 3.10.0-1062.1.2.el7.x86_64 &nbsp; &nbsp; &nbsp;CentOS Linux 7 (Core)<br />
Global Storage Pool<br />
&nbsp; &nbsp; &nbsp; &nbsp; Total Used &nbsp; &nbsp; &nbsp;: &nbsp;20 GiB<br />
&nbsp; &nbsp; &nbsp; &nbsp; Total Capacity &nbsp;: &nbsp;60 GiB</div></div>
<p>All my nodes are up for a total storage of 60 GiB. Let&rsquo;s deploy a Portworx Storage Class with the following specification:</p>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap">kind: StorageClass<br />
apiVersion: storage.k8s.io/v1<br />
metadata:<br />
&nbsp; name: portworx-sc<br />
provisioner: kubernetes.io/portworx-volume<br />
parameters:<br />
&nbsp; repl: &quot;3&quot;<br />
&nbsp; nodes: &quot;b0ac4fa3-29c2-40a8-9033-1d0558ec31fd,9fc5bc45-5602-4926-ab38-c74f0a8a8b2c,590d7afd-9d30-4624-8082-5f9cb18ecbfd&quot;<br />
&nbsp; label: &quot;name=mssqlvol&quot;<br />
&nbsp; fs: &quot;xfs&quot;<br />
&nbsp; io_profile: &quot;db&quot;<br />
&nbsp; priority_io: &quot;high&quot;<br />
&nbsp; journal: &quot;true&quot;<br />
allowVolumeExpansion: true</div></div>
<p>The important parameters are:</p>
<p><strong>repl: &laquo;&nbsp;3&nbsp;&raquo;</strong> =&gt; Number of replicas (K8s nodes) where data will be replicated</p>
<p><strong>nodes: &laquo;&nbsp;b0ac4fa3-29c2-40a8-9033-1d0558ec31fd,9fc5bc45-5602-4926-ab38-c74f0a8a8b2c,590d7afd-9d30-4624-8082-5f9cb18ecbfd&nbsp;&raquo;</strong> =&gt; The list of replica nodes, identified by their IDs, where data will be replicated. Each write is synchronously replicated to a quorum set of nodes, whereas read throughput is aggregated: multiple nodes can service one read request in parallel streams.</p>
<p><strong>fs: &laquo;&nbsp;xfs&nbsp;&raquo;</strong> =&gt; I used a Linux file system supported by SQL Server on Linux</p>
<p><strong>io_profile: &laquo;&nbsp;db&nbsp;&raquo;</strong> =&gt; By default, Portworx is able to pick a profile according to the access pattern. Here I just forced it to use the db profile, which implements a write-back flush coalescing algorithm. </p>
<p><strong>priority_io: &laquo;&nbsp;high&nbsp;&raquo;</strong> =&gt; I deliberately set the IO priority value to high for my pool in order to favor maximum throughput and low-latency transactional workloads. I used SSD storage accordingly.</p>
<p><strong>journal: &laquo;&nbsp;true&nbsp;&raquo;</strong> =&gt; The volumes created from this storage class will use the dedicated journal device</p>
<p><strong>allowVolumeExpansion: true</strong> =&gt; an interesting parameter that allows online expansion of the concerned volume(s). As an aside, it is worth noting that volume expansion capabilities are pretty new (K8s v1.11+) in the K8s world for the following in-tree volume plugins: AWS-EBS, GCE-PD, Azure Disk, Azure File, Glusterfs, Cinder, Portworx, and Ceph RBD</p>
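<p>To make the impact of the replication factor concrete, here is a small sketch of the arithmetic, assuming each byte is stored once per replica and that the write quorum is a strict majority of the replica set:</p>

```python
def raw_capacity_gib(volume_size_gib, repl):
    """Raw pool capacity consumed by a replicated volume:
    every byte is written on `repl` nodes."""
    return volume_size_gib * repl

def write_quorum(repl):
    """Minimum number of replicas that must acknowledge a write,
    assuming quorum means a strict majority of the replica set."""
    return repl // 2 + 1

# With repl=3, a 20 GiB volume consumes 20 * 3 = 60 GiB of raw
# capacity, i.e. the whole 60 GiB pool shown in the pxctl output,
# and each write must reach at least 2 of the 3 replicas.
print(raw_capacity_gib(20, 3), write_quorum(3))
```

<p>This is why the usable capacity of a pool shrinks with the replication factor: the redundancy is paid for in raw storage.</p>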
<p>Then, let&rsquo;s use Dynamic Provisioning with the following PVC specification:</p>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap">kind: PersistentVolumeClaim<br />
apiVersion: v1<br />
metadata:<br />
&nbsp; name: pvcsc001<br />
&nbsp; annotations:<br />
&nbsp; &nbsp; volume.beta.kubernetes.io/storage-class: portworx-sc<br />
spec:<br />
&nbsp; accessModes:<br />
&nbsp; &nbsp; - ReadWriteOnce<br />
&nbsp; resources:<br />
&nbsp; &nbsp; requests:<br />
&nbsp; &nbsp; &nbsp; storage: 20Gi</div></div>
<p>A usual specification for a PVC &#8230; I just claimed 20Gi of storage based on my Portworx storage class. After deploying both the Storage Class and the PVC, here is the new picture of my configuration:</p>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap">$ kubectl get sc<br />
NAME &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; PROVISIONER &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; AGE<br />
portworx-sc &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;kubernetes.io/portworx-volume &nbsp; 3d14h<br />
stork-snapshot-sc &nbsp; &nbsp; &nbsp; &nbsp;stork-snapshot &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;3d23h<br />
<br />
$ kubectl get pvc<br />
NAME &nbsp; &nbsp; &nbsp; STATUS &nbsp; VOLUME &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; CAPACITY &nbsp; ACCESS MODES &nbsp; STORAGECLASS &nbsp; AGE<br />
pvcsc001 &nbsp; Bound &nbsp; &nbsp;pvc-98d12db5-17ff-11ea-9d3a-00155dc4b604 &nbsp; 20Gi &nbsp; &nbsp; &nbsp; RWO &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;portworx-sc &nbsp; &nbsp;3d13h</div></div>
<p>Note that there is also a special storage class implementation for snapshot capabilities; we will talk about it in a future write-up. My PVC <strong>pvcsc001</strong> is ready to be used by my stateful application. Now it&rsquo;s time to deploy a stateful application with my SQL Server pod and the specification below. Note that Portworx volumes are usable by non-root containers when specifying the fsGroup parameter (securityContext section), so this is a good fit with the non-root execution capabilities shipped with the SQL Server pod <img src="https://blog.developpez.com/mikedavem/wp-includes/images/smilies/icon_smile.gif" alt=":)" class="wp-smiley" /> You will also notice there is no special labeling or affinity stuff between my pod and the PVC: I just defined the volume mount, the corresponding PVC, and that&rsquo;s it!</p>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;height:450px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap">apiVersion: apps/v1beta1<br />
kind: Deployment<br />
metadata:<br />
&nbsp; name: mssql-deployment<br />
spec:<br />
&nbsp; replicas: 1<br />
&nbsp; template:<br />
&nbsp; &nbsp; metadata:<br />
&nbsp; &nbsp; &nbsp; labels:<br />
&nbsp; &nbsp; &nbsp; &nbsp; app: mssql<br />
&nbsp; &nbsp; spec:<br />
&nbsp; &nbsp; &nbsp; securityContext:<br />
&nbsp; &nbsp; &nbsp; &nbsp; runAsUser: 10001<br />
&nbsp; &nbsp; &nbsp; &nbsp; runAsGroup: 10001<br />
&nbsp; &nbsp; &nbsp; &nbsp; fsGroup: 10001<br />
&nbsp; &nbsp; &nbsp; terminationGracePeriodSeconds: 10<br />
&nbsp; &nbsp; &nbsp; containers:<br />
&nbsp; &nbsp; &nbsp; - name: mssql<br />
&nbsp; &nbsp; &nbsp; &nbsp; image: mcr.microsoft.com/mssql/server:2019-GA-ubuntu-16.04<br />
&nbsp; &nbsp; &nbsp; &nbsp; ports:<br />
&nbsp; &nbsp; &nbsp; &nbsp; - containerPort: 1433<br />
&nbsp; &nbsp; &nbsp; &nbsp; env:<br />
&nbsp; &nbsp; &nbsp; &nbsp; - name: MSSQL_PID<br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; value: &quot;Developer&quot;<br />
&nbsp; &nbsp; &nbsp; &nbsp; - name: ACCEPT_EULA<br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; value: &quot;Y&quot;<br />
&nbsp; &nbsp; &nbsp; &nbsp; - name: MSSQL_SA_PASSWORD<br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; valueFrom:<br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; secretKeyRef:<br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; name: sql-secrets<br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; key: sapassword<br />
&nbsp; &nbsp; &nbsp; &nbsp; volumeMounts:<br />
&nbsp; &nbsp; &nbsp; &nbsp; - name: mssqldb<br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; mountPath: /var/opt/mssql<br />
&nbsp; &nbsp; &nbsp; &nbsp; resources:<br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; limits:<br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; cpu: &quot;3500m&quot;<br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; requests:<br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; cpu: &quot;2000m&quot;<br />
&nbsp; &nbsp; &nbsp; volumes:<br />
&nbsp; &nbsp; &nbsp; - name: mssqldb<br />
&nbsp; &nbsp; &nbsp; &nbsp; persistentVolumeClaim:<br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; claimName: pvcsc001<br />
<br />
---<br />
apiVersion: v1<br />
kind: Service<br />
metadata:<br />
&nbsp; name: mssql-deployment<br />
spec:<br />
&nbsp; selector:<br />
&nbsp; &nbsp; app: mssql<br />
&nbsp; ports:<br />
&nbsp; &nbsp; - protocol: TCP<br />
&nbsp; &nbsp; &nbsp; port: 1470<br />
&nbsp; &nbsp; &nbsp; targetPort: 1433<br />
&nbsp; type: LoadBalancer</div></div>
<p>Let&rsquo;s take a look at the deployment status:</p>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap">$ kubectl get deployment,pod,svc<br />
NAME &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; READY &nbsp; UP-TO-DATE &nbsp; AVAILABLE &nbsp; AGE<br />
deployment.extensions/mssql-deployment &nbsp; 1/1 &nbsp; &nbsp; 1 &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;1 &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; 3d7h<br />
<br />
NAME &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; READY &nbsp; STATUS &nbsp; &nbsp;RESTARTS &nbsp; AGE<br />
pod/mssql-deployment-67fdd4759-vtzmz &nbsp; 1/1 &nbsp; &nbsp; Running &nbsp; 0 &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;45m<br />
<br />
NAME &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; TYPE &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; CLUSTER-IP &nbsp; &nbsp; &nbsp;EXTERNAL-IP &nbsp; &nbsp; PORT(S) &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;AGE<br />
service/kubernetes &nbsp; &nbsp; &nbsp; &nbsp; ClusterIP &nbsp; &nbsp; &nbsp;10.96.0.1 &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; 443/TCP &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;4d<br />
service/mssql-deployment &nbsp; LoadBalancer &nbsp; 10.98.246.160 &nbsp; 192.168.40.61 &nbsp; 1470:32374/TCP &nbsp; 3d7h</div></div>
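<p>With the LoadBalancer service up, we can already check that the instance answers on the exposed endpoint. A minimal sketch, assuming <strong>sqlcmd</strong> is installed on the client and using the external IP and port reported by kubectl above; the sa password is an assumption and must match whatever was supplied to the deployment:</p>

```shell
# Connect through the LoadBalancer service (external IP 192.168.40.61,
# service port 1470 mapped to the container's 1433).
# The sa password below is a placeholder - use the one configured
# in the deployment's SA_PASSWORD environment variable / secret.
sqlcmd -S 192.168.40.61,1470 -U sa -P '<YourStrong!Passw0rd>' \
       -Q "SELECT @@SERVERNAME AS server_name;"
```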
<p>We&rsquo;re now ready to test the HA capabilities of Portworx! Let&rsquo;s see how STORK influences scheduling to place my SQL Server pod on the same node where my PVC resides. The <strong>pxctl</strong> command provides different options to get information about the PX cluster and volumes, as well as configuration and management capabilities. Here is a picture of my volumes:</p>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap">$ kubectl exec $PX_POD -n kube-system -- /opt/pwx/bin/pxctl volume list<br />
ID &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;NAME &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;SIZE &nbsp; &nbsp;HA &nbsp; &nbsp; &nbsp;SHARED &nbsp;ENCRYPTED &nbsp; &nbsp; &nbsp; &nbsp;IO_PRIORITY &nbsp; &nbsp; STATUS &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;SNAP-ENABLED<br />
675137742462835449 &nbsp; &nbsp; &nbsp;pvc-98d12db5-17ff-11ea-9d3a-00155dc4b604 &nbsp; &nbsp; &nbsp; &nbsp;20 GiB &nbsp;2 &nbsp; &nbsp; &nbsp; no &nbsp; &nbsp; &nbsp;no &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; HIGH &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;up - attached on 192.168.40.61 &nbsp;no<br />
$ kubectl get pod -o wide<br />
NAME &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; READY &nbsp; STATUS &nbsp; &nbsp;RESTARTS &nbsp; AGE &nbsp; IP &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;NODE &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; NOMINATED NODE &nbsp; READINESS GATES<br />
mssql-deployment-67fdd4759-vtzmz &nbsp; 1/1 &nbsp; &nbsp; Running &nbsp; 0 &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;48m &nbsp; 172.16.160.54 &nbsp; k8n1.dbi-services.test</div></div>
<p>My SQL Server pod and my Portworx storage sit together on the k8n1.dbi-services.test node. The PX web console is also available and provides the same kind of information as the pxctl command does. </p>
<p><a href="http://blog.developpez.com/mikedavem/files/2019/12/151-1-PX-web-console-volume.jpg"><img src="http://blog.developpez.com/mikedavem/files/2019/12/151-1-PX-web-console-volume.jpg" alt="151 - 1 - PX web console volume" width="1795" height="913" class="alignnone size-full wp-image-1408" /></a></p>
<p>Let&rsquo;s now simulate a failure of the k8n1.dbi-services.test node. In this scenario, both my PVC and my SQL Server pod are going to move to the next available node &#8211; k8n2 (192.168.20.62). This is where STORK comes into play to keep my pod on the same node as my PVC. </p>
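<p>One possible way to trigger this without powering off the VM &#8211; a sketch, not necessarily the method used here &#8211; is to drain the node so the scheduler evicts the pod and recreates it elsewhere:</p>

```shell
# Cordon and drain k8n1 so its pods are evicted and rescheduled
# on the remaining nodes (DaemonSet pods such as Portworx are skipped).
kubectl drain k8n1.dbi-services.test --ignore-daemonsets

# Watch the SQL Server pod come back up on another node
kubectl get pod -o wide --watch
```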
<p><a href="http://blog.developpez.com/mikedavem/files/2019/12/151-2-PX-web-console-volume-after-failover.jpg"><img src="http://blog.developpez.com/mikedavem/files/2019/12/151-2-PX-web-console-volume-after-failover.jpg" alt="151 - 2 - PX web console volume after failover" width="1835" height="941" class="alignnone size-full wp-image-1410" /></a></p>
<p>&#8230;</p>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap">$ kubectl get pod -o wide<br />
NAME &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; READY &nbsp; STATUS &nbsp; &nbsp;RESTARTS &nbsp; AGE &nbsp; IP &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; NODE &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; NOMINATED NODE &nbsp; READINESS GATES<br />
mssql-deployment-67fdd4759-rbxcb &nbsp; 1/1 &nbsp; &nbsp; Running &nbsp; 0 &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;31m &nbsp; 172.16.197.157 &nbsp; k8n2.dbi-services.test</div></div>
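<p>After the failover, a quick way to check that the databases followed the pod is to connect again through the same LoadBalancer endpoint. As before, the sa password shown is an assumption:</p>

```shell
# The service endpoint is unchanged; only the backing pod moved to k8n2.
sqlcmd -S 192.168.40.61,1470 -U sa -P '<YourStrong!Passw0rd>' \
       -Q "SELECT name, state_desc FROM sys.databases;"
```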
<p>Another important point: my SQL Server data survived my pod restart and remained available through my SQL Server instance, as expected! This was a short introduction to Portworx capabilities, and I will continue to share about it in the near future!</p>
<p>See you !</p>
<p>David Barbarin</p>
]]></content:encoded>
			<wfw:commentRss></wfw:commentRss>
		<slash:comments>1</slash:comments>
		</item>
		<item>
		<title>SQL Server on Docker and the bridge network</title>
		<link>https://blog.developpez.com/mikedavem/p13174/sql-server-vnext/sql-server-sur-docker-et-reseau-bridge</link>
		<comments>https://blog.developpez.com/mikedavem/p13174/sql-server-vnext/sql-server-sur-docker-et-reseau-bridge#comments</comments>
		<pubDate>Tue, 20 Feb 2018 12:12:50 +0000</pubDate>
		<dc:creator><![CDATA[mikedavem]]></dc:creator>
				<category><![CDATA[Docker]]></category>
		<category><![CDATA[SQL Server 2017]]></category>
		<category><![CDATA[bridge]]></category>
		<category><![CDATA[container]]></category>
		<category><![CDATA[Linux]]></category>

		<guid isPermaLink="false">http://blog.developpez.com/mikedavem/?p=1395</guid>
		<description><![CDATA[Continuing the series of posts about SQL Server on Docker. A few days ago, I was at a customer site that had already implemented SQL Server 2017 on Linux in containers. It was obviously a very &#8230; <a href="https://blog.developpez.com/mikedavem/p13174/sql-server-vnext/sql-server-sur-docker-et-reseau-bridge">Read more <span class="meta-nav">&#8594;</span></a>]]></description>
				<content:encoded><![CDATA[<p>Continuing the series of posts about SQL Server on Docker. A few days ago, I was at a customer site that had already implemented SQL Server 2017 on Linux in containers. It was obviously a very rewarding day, with plenty of hands-on experience and feedback shared. We discussed architecture scenarios at length. </p>
<p>The interesting point here is that I was able to compare the scenario in place with that of another customer who had implemented it for a while, but in a completely different way. </p>
<p>&gt; Read more (<a href="https://blog.dbi-services.com/sql-server-on-docker-and-network-bridge-considerations/" rel="noopener" target="_blank">in English</a>)</p>
<p>David Barbarin<br />
MVP &amp; MCM SQL Server</p>
]]></content:encoded>
			<wfw:commentRss></wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>SQL Server on Docker Swarm</title>
		<link>https://blog.developpez.com/mikedavem/p13172/docker/sql-server-sur-docker-swarm</link>
		<comments>https://blog.developpez.com/mikedavem/p13172/docker/sql-server-sur-docker-swarm#comments</comments>
		<pubDate>Mon, 12 Feb 2018 17:51:48 +0000</pubDate>
		<dc:creator><![CDATA[mikedavem]]></dc:creator>
				<category><![CDATA[Docker]]></category>
		<category><![CDATA[Linux]]></category>
		<category><![CDATA[SQL Server]]></category>
		<category><![CDATA[Swarm]]></category>

		<guid isPermaLink="false">http://blog.developpez.com/mikedavem/?p=1387</guid>
		<description><![CDATA[SQL Server 2017 is available on multiple platforms: Windows, Linux, and Docker. The latter provides containerization features with a fast setup and no specific prerequisites before running SQL Server databases, which are probably the &#8230; <a href="https://blog.developpez.com/mikedavem/p13172/docker/sql-server-sur-docker-swarm">Read more <span class="meta-nav">&#8594;</span></a>]]></description>
				<content:encoded><![CDATA[<p>SQL Server 2017 is available on multiple platforms: Windows, Linux, and Docker. The latter provides containerization features with a fast setup and no specific prerequisites before running SQL Server databases, which is probably the key to its success with developers.</p>
<p>&gt; <a href="https://blog.dbi-services.com/introducing-sql-server-on-docker-swarm-orchestrator/" rel="noopener" target="_blank">Read more</a> (in English)</p>
<p>David Barbarin<br />
MVP &amp; MCM SQL Server</p>
]]></content:encoded>
			<wfw:commentRss></wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
	</channel>
</rss>
