<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>David Barbarin &#187; Kubernetes</title>
	<atom:link href="https://blog.developpez.com/mikedavem/ptag/kubernetes/feed" rel="self" type="application/rss+xml" />
	<link>https://blog.developpez.com/mikedavem</link>
	<description>MVP DataPlatform - MCM SQL Server</description>
	<lastBuildDate>Thu, 09 Sep 2021 21:19:50 +0000</lastBuildDate>
	<language>fr-FR</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=4.1.42</generator>
	<item>
		<title>Introducing SQL Server with Portworx and storage orchestration</title>
		<link>https://blog.developpez.com/mikedavem/p13184/docker/introducing-sql-server-with-portworx-and-storage-orchestration</link>
		<comments>https://blog.developpez.com/mikedavem/p13184/docker/introducing-sql-server-with-portworx-and-storage-orchestration#comments</comments>
		<pubDate>Sun, 15 Dec 2019 22:08:03 +0000</pubDate>
		<dc:creator><![CDATA[mikedavem]]></dc:creator>
				<category><![CDATA[DevOps]]></category>
		<category><![CDATA[Docker]]></category>
		<category><![CDATA[K8s]]></category>
		<category><![CDATA[Kubernetes]]></category>
		<category><![CDATA[Portworx]]></category>
		<category><![CDATA[SQL Server]]></category>
		<category><![CDATA[Storage orchestration]]></category>

		<guid isPermaLink="false">http://blog.developpez.com/mikedavem/?p=1402</guid>
		<description><![CDATA[Stateful applications like databases need special consideration in the K8s world. Data persistence is important, and we also need something at the storage layer that communicates with the container orchestrator to take advantage of its scheduling capabilities. For stateful &#8230; <a href="https://blog.developpez.com/mikedavem/p13184/docker/introducing-sql-server-with-portworx-and-storage-orchestration">Read more <span class="meta-nav">&#8594;</span></a>]]></description>
				<content:encoded><![CDATA[<p>Stateful applications like databases need special consideration in the K8s world. Data persistence is important, and we also need something at the storage layer that communicates with the container orchestrator to take advantage of its scheduling capabilities. For stateful applications, a StatefulSet may only be part of the solution because it primarily focuses on Pod availability, and we have to rely on the application’s own capabilities for data replication. But a StatefulSet doesn’t address the underlying storage at all. At the time of this write-up, StatefulSet-based solutions for SQL Server, such as availability groups, are not yet supported in production. </p>
<p><span id="more-1402"></span></p>
<p>So, for stateful applications we may consider other solutions like GlusterFS or NFS as distributed storage spanning all the nodes of the K8s cluster, but they often don’t meet the requirements of a production database workload: high throughput, high IOPS and data migration.</p>
<p>Products exist on the market that seem to address these specific requirements, and I was very curious to get a better picture of their capabilities. During my investigation for a potential customer&rsquo;s project, I went through a very interesting one named Portworx. The interesting part of Portworx is its container-native, orchestration-aware storage fabric, which brings storage operation and administration inside K8s. It aggregates the underlying storage and exposes it as a software-defined, programmable block device. </p>
<p>From a high-level perspective, Portworx uses a custom scheduler – <a href="https://portworx.com/stork-storage-orchestration-kubernetes/">STORK</a> (STorage Orchestration Runtime for Kubernetes) – to assist K8s in placing a Pod on the same node where the associated PVC resides. It drastically reduces the complexity of annotations and labeling otherwise needed to enforce affinity rules. </p>
<p>In this blog post, I will focus only on the high-availability topic, which Portworx addresses by synchronizing the volume&rsquo;s content between K8s nodes and aggregated disks. Portworx therefore requires defining the redundancy of the dataset across replicas through a replication factor value. </p>
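<p>As a side note, the replication factor of an existing volume can also be changed afterwards. Here is a minimal sketch, assuming the pxctl volume ha-update command and reusing the PX_POD variable that is set further down in this post (the volume name is only a placeholder):</p>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap">$ # Hypothetical example: raise the replication factor of an existing volume to 3<br />
$ kubectl exec $PX_POD -n kube-system -- /opt/pwx/bin/pxctl volume ha-update --repl 3 my-volume</div></div>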
<p>I cannot expose my customer&rsquo;s architecture here, but let&rsquo;s try to apply the concept to my lab environment. </p>
<p><a href="http://blog.developpez.com/mikedavem/files/2019/12/151-0-0-K-Lab-architecture.jpg"><img src="http://blog.developpez.com/mikedavem/files/2019/12/151-0-0-K-Lab-architecture.jpg" alt="151 - 0 - 0 - K Lab architecture" width="675" height="495" class="alignnone size-full wp-image-1414" /></a></p>
<p>As shown above, my lab environment includes 4 K8s nodes, 3 of which act as workers. Each worker node owns its local storage based on SSD disks (one for the SQL Server data files and another one to handle Portworx metadata activity &#8211; the journal disk). After deploying Portworx on my K8s cluster, here is a big picture of my configuration:</p>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap">$ kubectl get daemonset -n kube-system | egrep &quot;(stork|portworx|px)&quot;<br />
portworx &nbsp; &nbsp; &nbsp; 3 &nbsp; &nbsp; &nbsp; &nbsp; 3 &nbsp; &nbsp; &nbsp; &nbsp; 3 &nbsp; &nbsp; &nbsp; 3 &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;3 &nbsp; &nbsp; &nbsp; &nbsp;<br />
portworx-api &nbsp; 3 &nbsp; &nbsp; &nbsp; &nbsp; 3 &nbsp; &nbsp; &nbsp; &nbsp; 3 &nbsp; &nbsp; &nbsp; 3 &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;3</div></div>
<p>Portworx is a DaemonSet-based installation. Each Portworx node will discover the available storage to create a container-native block storage device with:<br />
&#8211;	/dev/sdb for my SQL Server data<br />
&#8211;	/dev/sdc for hosting my journal</p>
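<p>For reference, the storage and journal devices are typically passed to Portworx at install time. The snippet below is only a hypothetical excerpt of the DaemonSet container arguments, assuming the -s (storage device) and -j (journal device) installer flags and a placeholder cluster name:</p>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap"># Hypothetical excerpt of the Portworx DaemonSet container spec (not my exact install spec)<br />
containers:<br />
- name: portworx<br />
&nbsp; args:<br />
&nbsp; - &quot;-c&quot;<br />
&nbsp; - &quot;px-cluster&quot; &nbsp; &nbsp;# cluster name (placeholder)<br />
&nbsp; - &quot;-s&quot;<br />
&nbsp; - &quot;/dev/sdb&quot; &nbsp; &nbsp; &nbsp;# storage device backing the PX pool<br />
&nbsp; - &quot;-j&quot;<br />
&nbsp; - &quot;/dev/sdc&quot; &nbsp; &nbsp; &nbsp;# dedicated journal device</div></div>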
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap">$ kubectl get pod -n kube-system | egrep &quot;(stork|portworx|px)&quot;<br />
<br />
portworx-555wf &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;1/1 &nbsp; &nbsp; Running &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;18 &nbsp; &nbsp; &nbsp; &nbsp; 2d23h<br />
portworx-api-2pv6s &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;1/1 &nbsp; &nbsp; Running &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;8 &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;2d23h<br />
portworx-api-s8zzr &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;1/1 &nbsp; &nbsp; Running &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;8 &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;2d23h<br />
portworx-api-vnqh2 &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;1/1 &nbsp; &nbsp; Running &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;4 &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;2d23h<br />
portworx-pjxl8 &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;1/1 &nbsp; &nbsp; Running &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;17 &nbsp; &nbsp; &nbsp; &nbsp; 2d23h<br />
portworx-wrcdf &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;1/1 &nbsp; &nbsp; Running &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;389 &nbsp; &nbsp; &nbsp; &nbsp;2d10h<br />
px-lighthouse-55db75b59c-qd2nc &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;3/3 &nbsp; &nbsp; Running &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;0 &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;35h<br />
stork-5d568485bb-ghlt9 &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;1/1 &nbsp; &nbsp; Running &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;0 &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;35h<br />
stork-5d568485bb-h2sqm &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;1/1 &nbsp; &nbsp; Running &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;13 &nbsp; &nbsp; &nbsp; &nbsp; 2d23h<br />
stork-5d568485bb-xxd4b &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;1/1 &nbsp; &nbsp; Running &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;1 &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;2d4h<br />
stork-scheduler-56574cdbb5-7td6v &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;1/1 &nbsp; &nbsp; Running &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;0 &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;35h<br />
stork-scheduler-56574cdbb5-skw5f &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;1/1 &nbsp; &nbsp; Running &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;4 &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;2d4h<br />
stork-scheduler-56574cdbb5-v5slj &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;1/1 &nbsp; &nbsp; Running &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;9 &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;2d23h</div></div>
<p>The above picture shows the different stork pods that may influence scheduling based on the location of the volumes a pod requires. In addition, the PX cluster (part of the Portworx Enterprise Platform) includes all the Portworx pods and provides monitoring and performance insights for each related pod (the SQL Server instance here). </p>
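<p>One documented way to let STORK drive the placement of a workload is to reference it explicitly as the scheduler in the pod template. The deployment used later in this post does not set it, so the snippet below is only an illustration:</p>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap"># Hypothetical deployment excerpt: hand scheduling over to the stork scheduler<br />
spec:<br />
&nbsp; template:<br />
&nbsp; &nbsp; spec:<br />
&nbsp; &nbsp; &nbsp; schedulerName: stork</div></div>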
<p>Let’s have a look at the global configuration by using the <strong>pxctl</strong> command (first section):</p>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap">$ PX_POD=$(kubectl get pods -l name=portworx -n kube-system -o jsonpath='{.items[0].metadata.name}')<br />
$ kubectl exec $PX_POD -n kube-system -- /opt/pwx/bin/pxctl status<br />
Status: PX is operational<br />
License: Trial (expires in 28 days)<br />
Node ID: 590d7afd-9d30-4624-8082-5f9cb18ecbfd<br />
&nbsp; &nbsp; &nbsp; &nbsp; IP: 192.168.90.63<br />
&nbsp; &nbsp; &nbsp; &nbsp; Local Storage Pool: 1 pool<br />
&nbsp; &nbsp; &nbsp; &nbsp; POOL &nbsp; &nbsp;IO_PRIORITY &nbsp; &nbsp; RAID_LEVEL &nbsp; &nbsp; &nbsp;USABLE &nbsp;USED &nbsp; &nbsp;STATUS &nbsp;ZONE &nbsp; &nbsp;REGION<br />
&nbsp; &nbsp; &nbsp; &nbsp; 0 &nbsp; &nbsp; &nbsp; HIGH &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;raid0 &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; 20 GiB &nbsp;8.5 GiB Online &nbsp;default default<br />
&nbsp; &nbsp; &nbsp; &nbsp; Local Storage Devices: 1 device<br />
&nbsp; &nbsp; &nbsp; &nbsp; Device &nbsp;Path &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;Media Type &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;Size &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;Last-Scan<br />
&nbsp; &nbsp; &nbsp; &nbsp; 0:1 &nbsp; &nbsp; /dev/sdb &nbsp; &nbsp; &nbsp; &nbsp;STORAGE_MEDIUM_MAGNETIC 20 GiB &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;08 Dec 19 21:59 UTC<br />
&nbsp; &nbsp; &nbsp; &nbsp; total &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; - &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; 20 GiB<br />
&nbsp; &nbsp; &nbsp; &nbsp; Cache Devices:<br />
&nbsp; &nbsp; &nbsp; &nbsp; No cache devices<br />
&nbsp; &nbsp; &nbsp; &nbsp; Journal Device:<br />
&nbsp; &nbsp; &nbsp; &nbsp; 1 &nbsp; &nbsp; &nbsp; /dev/sdc1 &nbsp; &nbsp; &nbsp; STORAGE_MEDIUM_MAGNETIC<br />
…</div></div>
<p>Portworx has created a pool composed of my 3 replicas / Kubernetes nodes with a 20 GiB SSD each. I just used a default configuration without specifying any zone or region for fault-tolerance capabilities; this is not my focus at the moment. Following Portworx’s performance tuning documentation, I configured a journal device to improve I/O performance by offloading PX metadata writes to separate storage. </p>
<p>Second section:</p>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap">…<br />
Nodes: 3 node(s) with storage (3 online)<br />
&nbsp; &nbsp; &nbsp; &nbsp; IP &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;ID &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;SchedulerNodeName &nbsp; &nbsp; &nbsp; StorageNode &nbsp; &nbsp; &nbsp;Used &nbsp; &nbsp;Capacity &nbsp; &nbsp; &nbsp; &nbsp;Status &nbsp;StorageStatus &nbsp; Version &nbsp; &nbsp; &nbsp; &nbsp; Kernel &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;OS<br />
&nbsp; &nbsp; &nbsp; &nbsp; 192.168.5.62 &nbsp; &nbsp;b0ac4fa3-29c2-40a8-9033-1d0558ec31fd &nbsp; &nbsp;k8n2.dbi-services.test &nbsp;Yes &nbsp; &nbsp; 3.1 GiB &nbsp;20 GiB &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;Online &nbsp;Up &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;2.3.0.0-103206b 3.10.0-1062.1.2.el7.x86_64 &nbsp; &nbsp; &nbsp;CentOS Linux 7 (Core)<br />
&nbsp; &nbsp; &nbsp; &nbsp; 192.168.40.61 &nbsp; 9fc5bc45-5602-4926-ab38-c74f0a8a8b2c &nbsp; &nbsp;k8n1.dbi-services.test &nbsp;Yes &nbsp; &nbsp; 8.6 GiB &nbsp;20 GiB &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;Online &nbsp;Up &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;2.3.0.0-103206b 3.10.0-1062.1.2.el7.x86_64 &nbsp; &nbsp; &nbsp;CentOS Linux 7 (Core)<br />
&nbsp; &nbsp; &nbsp; &nbsp; 192.168.80.63 &nbsp; 590d7afd-9d30-4624-8082-5f9cb18ecbfd &nbsp; &nbsp;k8n3.dbi-services.test &nbsp;Yes &nbsp; &nbsp; 8.5 GiB &nbsp;20 GiB &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;Online &nbsp;Up (This node) &nbsp;2.3.0.0-103206b 3.10.0-1062.1.2.el7.x86_64 &nbsp; &nbsp; &nbsp;CentOS Linux 7 (Core)<br />
Global Storage Pool<br />
&nbsp; &nbsp; &nbsp; &nbsp; Total Used &nbsp; &nbsp; &nbsp;: &nbsp;20 GiB<br />
&nbsp; &nbsp; &nbsp; &nbsp; Total Capacity &nbsp;: &nbsp;60 GiB</div></div>
<p>All my nodes are up for a total storage of 60 GiB. Let&rsquo;s deploy a Portworx Storage Class with the following specification:</p>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap">kind: StorageClass<br />
apiVersion: storage.k8s.io/v1<br />
metadata:<br />
&nbsp; name: portworx-sc<br />
provisioner: kubernetes.io/portworx-volume<br />
parameters:<br />
&nbsp; repl: &quot;3&quot;<br />
&nbsp; nodes: &quot;b0ac4fa3-29c2-40a8-9033-1d0558ec31fd,9fc5bc45-5602-4926-ab38-c74f0a8a8b2c,590d7afd-9d30-4624-8082-5f9cb18ecbfd&quot;<br />
&nbsp; label: &quot;name=mssqlvol&quot;<br />
&nbsp; fs: &quot;xfs&quot;<br />
&nbsp; io_profile: &quot;db&quot;<br />
&nbsp; priority_io: &quot;high&quot;<br />
&nbsp; journal: &quot;true&quot;<br />
allowVolumeExpansion: true</div></div>
<p>The important parameters are:</p>
<p><strong>repl: &laquo;&nbsp;3&nbsp;&raquo;</strong> =&gt; Number of replicas (K8s nodes) where data will be replicated</p>
<p><strong>nodes: &laquo;&nbsp;b0ac4fa3-29c2-40a8-9033-1d0558ec31fd,9fc5bc45-5602-4926-ab38-c74f0a8a8b2c,590d7afd-9d30-4624-8082-5f9cb18ecbfd&nbsp;&raquo;</strong> =&gt; The list of nodes, identified by their ID, used for data replication. Each write is synchronously replicated to a quorum set of nodes, whereas read throughput is aggregated: multiple nodes can service one read request in parallel streams.</p>
<p><strong>fs: &laquo;&nbsp;xfs&nbsp;&raquo;</strong> =&gt; I used a Linux FS supported by SQL Server on Linux</p>
<p><strong>io_profile: &laquo;&nbsp;db&nbsp;&raquo;</strong> =&gt; By default, Portworx can pick a profile according to the access pattern. Here I just forced it to use the db profile, which implements a write-back flush coalescing algorithm. </p>
<p><strong>priority_io: &laquo;&nbsp;high&nbsp;&raquo;</strong> =&gt; I deliberately configured the IO priority value to high for my pool in order to favor maximum throughput and low-latency transactional workloads. I used SSD storage accordingly.</p>
<p><strong>journal: &laquo;&nbsp;true&nbsp;&raquo;</strong> =&gt; The volumes used by this storage class will use the dedicated journal device</p>
<p><strong>allowVolumeExpansion: true</strong> =&gt; an interesting parameter that allows online expansion of the concerned volume(s). As an aside, it is worth noting that volume expansion is fairly new (v1.11+) in the K8s world and applies to the following in-tree volume plugins: AWS-EBS, GCE-PD, Azure Disk, Azure File, Glusterfs, Cinder, Portworx, and Ceph RBD</p>
<p>Then, let&rsquo;s use Dynamic Provisioning with the following PVC specification:</p>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap">kind: PersistentVolumeClaim<br />
apiVersion: v1<br />
metadata:<br />
&nbsp; name: pvcsc001<br />
&nbsp; annotations:<br />
&nbsp; &nbsp; volume.beta.kubernetes.io/storage-class: portworx-sc<br />
spec:<br />
&nbsp; accessModes:<br />
&nbsp; &nbsp; - ReadWriteOnce<br />
&nbsp; resources:<br />
&nbsp; &nbsp; requests:<br />
&nbsp; &nbsp; &nbsp; storage: 20Gi</div></div>
<p>A usual specification for a PVC &#8230; I just claimed 20Gi of storage based on my Portworx storage class. After deploying both the Storage Class and the PVC, here is the new picture of my configuration:</p>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap">$ kubectl get sc<br />
NAME &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; PROVISIONER &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; AGE<br />
portworx-sc &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;kubernetes.io/portworx-volume &nbsp; 3d14h<br />
stork-snapshot-sc &nbsp; &nbsp; &nbsp; &nbsp;stork-snapshot &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;3d23h<br />
<br />
$ kubectl get pvc<br />
NAME &nbsp; &nbsp; &nbsp; STATUS &nbsp; VOLUME &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; CAPACITY &nbsp; ACCESS MODES &nbsp; STORAGECLASS &nbsp; AGE<br />
pvcsc001 &nbsp; Bound &nbsp; &nbsp;pvc-98d12db5-17ff-11ea-9d3a-00155dc4b604 &nbsp; 20Gi &nbsp; &nbsp; &nbsp; RWO &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;portworx-sc &nbsp; &nbsp;3d13h</div></div>
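<p>Since the storage class allows volume expansion, the bound claim could later be grown online simply by patching its storage request. A minimal sketch, where 30Gi is only an example target value:</p>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap">$ # Hypothetical example: grow the claim from 20Gi to 30Gi<br />
$ kubectl patch pvc pvcsc001 -p '{&quot;spec&quot;:{&quot;resources&quot;:{&quot;requests&quot;:{&quot;storage&quot;:&quot;30Gi&quot;}}}}'</div></div>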
<p>Note that there is also a special storage class implementation for snapshot capabilities; we will talk about it in a future write-up. My PVC <strong>pvcsc001</strong> is ready to be used by my stateful application. Now it&rsquo;s time to deploy a stateful application with my SQL Server pod and the specification below. Portworx volumes are usable by non-root containers when the fsGroup parameter is specified (securityContext section), so this is a good fit with the non-root execution capabilities shipped with the SQL Server pod <img src="https://blog.developpez.com/mikedavem/wp-includes/images/smilies/icon_smile.gif" alt=":)" class="wp-smiley" /> You will also notice that there is no special labeling or affinity between my pod and the PVC: I just defined the volume mount, the corresponding PVC and that&rsquo;s it!</p>
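<p>The specification below references a secret named <strong>sql-secrets</strong> holding the sa password, so it has to exist beforehand. A minimal sketch of how such a secret could be created, with a placeholder password value:</p>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap">$ # Hypothetical example: the sapassword value is only a placeholder<br />
$ kubectl create secret generic sql-secrets --from-literal=sapassword='P@ssw0rd1!'</div></div>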
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;height:450px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap">apiVersion: apps/v1beta1<br />
kind: Deployment<br />
metadata:<br />
&nbsp; name: mssql-deployment<br />
spec:<br />
&nbsp; replicas: 1<br />
&nbsp; template:<br />
&nbsp; &nbsp; metadata:<br />
&nbsp; &nbsp; &nbsp; labels:<br />
&nbsp; &nbsp; &nbsp; &nbsp; app: mssql<br />
&nbsp; &nbsp; spec:<br />
&nbsp; &nbsp; &nbsp; securityContext:<br />
&nbsp; &nbsp; &nbsp; &nbsp; runAsUser: 10001<br />
&nbsp; &nbsp; &nbsp; &nbsp; runAsGroup: 10001<br />
&nbsp; &nbsp; &nbsp; &nbsp; fsGroup: 10001<br />
&nbsp; &nbsp; &nbsp; terminationGracePeriodSeconds: 10<br />
&nbsp; &nbsp; &nbsp; containers:<br />
&nbsp; &nbsp; &nbsp; - name: mssql<br />
&nbsp; &nbsp; &nbsp; &nbsp; image: mcr.microsoft.com/mssql/server:2019-GA-ubuntu-16.04<br />
&nbsp; &nbsp; &nbsp; &nbsp; ports:<br />
&nbsp; &nbsp; &nbsp; &nbsp; - containerPort: 1433<br />
&nbsp; &nbsp; &nbsp; &nbsp; env:<br />
&nbsp; &nbsp; &nbsp; &nbsp; - name: MSSQL_PID<br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; value: &quot;Developer&quot;<br />
&nbsp; &nbsp; &nbsp; &nbsp; - name: ACCEPT_EULA<br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; value: &quot;Y&quot;<br />
&nbsp; &nbsp; &nbsp; &nbsp; - name: MSSQL_SA_PASSWORD<br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; valueFrom:<br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; secretKeyRef:<br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; name: sql-secrets<br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; key: sapassword<br />
&nbsp; &nbsp; &nbsp; &nbsp; volumeMounts:<br />
&nbsp; &nbsp; &nbsp; &nbsp; - name: mssqldb<br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; mountPath: /var/opt/mssql<br />
&nbsp; &nbsp; &nbsp; &nbsp; resources:<br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; limits:<br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; cpu: &quot;3500m&quot;<br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; requests:<br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; cpu: &quot;2000m&quot;<br />
&nbsp; &nbsp; &nbsp; volumes:<br />
&nbsp; &nbsp; &nbsp; - name: mssqldb<br />
&nbsp; &nbsp; &nbsp; &nbsp; persistentVolumeClaim:<br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; claimName: pvcsc001<br />
<br />
---<br />
apiVersion: v1<br />
kind: Service<br />
metadata:<br />
&nbsp; name: mssql-deployment<br />
spec:<br />
&nbsp; selector:<br />
&nbsp; &nbsp; app: mssql<br />
&nbsp; ports:<br />
&nbsp; &nbsp; - protocol: TCP<br />
&nbsp; &nbsp; &nbsp; port: 1470<br />
&nbsp; &nbsp; &nbsp; targetPort: 1433<br />
&nbsp; type: LoadBalancer</div></div>
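<p>Assuming both the deployment and the service are saved in a single manifest file (the file name below is only an example), they can be applied in one shot:</p>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap">$ kubectl apply -f mssql-deployment.yaml</div></div>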
<p>Let&rsquo;s take a look at the deployment status:</p>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap">$ kubectl get deployment,pod,svc<br />
NAME &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; READY &nbsp; UP-TO-DATE &nbsp; AVAILABLE &nbsp; AGE<br />
deployment.extensions/mssql-deployment &nbsp; 1/1 &nbsp; &nbsp; 1 &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;1 &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; 3d7h<br />
<br />
NAME &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; READY &nbsp; STATUS &nbsp; &nbsp;RESTARTS &nbsp; AGE<br />
pod/mssql-deployment-67fdd4759-vtzmz &nbsp; 1/1 &nbsp; &nbsp; Running &nbsp; 0 &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;45m<br />
<br />
NAME &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; TYPE &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; CLUSTER-IP &nbsp; &nbsp; &nbsp;EXTERNAL-IP &nbsp; &nbsp; PORT(S) &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;AGE<br />
service/kubernetes &nbsp; &nbsp; &nbsp; &nbsp; ClusterIP &nbsp; &nbsp; &nbsp;10.96.0.1 &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; 443/TCP &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;4d<br />
service/mssql-deployment &nbsp; LoadBalancer &nbsp; 10.98.246.160 &nbsp; 192.168.40.61 &nbsp; 1470:32374/TCP &nbsp; 3d7h</div></div>
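<p>With the LoadBalancer service in place, the instance should be reachable from outside the cluster on the external IP and port shown above. A minimal sketch with the sqlcmd client, reusing the placeholder password from the sql-secrets secret:</p>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap">$ sqlcmd -S 192.168.40.61,1470 -U sa -P 'P@ssw0rd1!' -Q &quot;SELECT @@SERVERNAME AS server_name&quot;</div></div>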
<p>We&rsquo;re now ready to test the HA capabilities of Portworx! Let&rsquo;s see how STORK influences the scheduling to get my SQL Server pod on the same node where my PVC resides. The <strong>pxctl</strong> command provides different options to get information about the PX cluster and volumes, as well as configuration and management capabilities. Here is a picture of my volumes:</p>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap">$ kubectl exec $PX_POD -n kube-system -- /opt/pwx/bin/pxctl volume list<br />
ID &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;NAME &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;SIZE &nbsp; &nbsp;HA &nbsp; &nbsp; &nbsp;SHARED &nbsp;ENCRYPTED &nbsp; &nbsp; &nbsp; &nbsp;IO_PRIORITY &nbsp; &nbsp; STATUS &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;SNAP-ENABLED<br />
675137742462835449 &nbsp; &nbsp; &nbsp;pvc-98d12db5-17ff-11ea-9d3a-00155dc4b604 &nbsp; &nbsp; &nbsp; &nbsp;20 GiB &nbsp;2 &nbsp; &nbsp; &nbsp; no &nbsp; &nbsp; &nbsp;no &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; HIGH &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;up - attached on 192.168.40.61 &nbsp;no<br />
$ kubectl get pod -o wide<br />
NAME &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; READY &nbsp; STATUS &nbsp; &nbsp;RESTARTS &nbsp; AGE &nbsp; IP &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;NODE &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; NOMINATED NODE &nbsp; READINESS GATES<br />
mssql-deployment-67fdd4759-vtzmz &nbsp; 1/1 &nbsp; &nbsp; Running &nbsp; 0 &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;48m &nbsp; 172.16.160.54 &nbsp; k8n1.dbi-services.test</div></div>
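<p>To double-check on which nodes the replicas of this volume actually live, the volume can be inspected by name. A minimal sketch, reusing the volume name from the pxctl volume list output above:</p>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap">$ kubectl exec $PX_POD -n kube-system -- /opt/pwx/bin/pxctl volume inspect pvc-98d12db5-17ff-11ea-9d3a-00155dc4b604</div></div>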
<p>My SQL Server pod and my Portworx storage sit together on the k8n1.dbi-services.test node. The PX web console is also available and provides the same kind of information as the pxctl command does. </p>
<p><a href="http://blog.developpez.com/mikedavem/files/2019/12/151-1-PX-web-console-volume.jpg"><img src="http://blog.developpez.com/mikedavem/files/2019/12/151-1-PX-web-console-volume.jpg" alt="151 - 1 - PX web console volume" width="1795" height="913" class="alignnone size-full wp-image-1408" /></a></p>
<p>Let&rsquo;s now simulate the failure of the k8n1.dbi-services.test node. In this scenario, both my PVC and my SQL Server pod are going to move to the next available node &#8211; k8n2 (192.168.20.62). This is where STORK comes into play to keep my pod on the same node as my PVC. </p>
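<p>There are several ways to trigger such a failover in a lab. A minimal sketch, assuming we simply drain the node instead of powering it off:</p>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap">$ # Hypothetical alternative to powering off the node: evict the pods from k8n1<br />
$ kubectl drain k8n1.dbi-services.test --ignore-daemonsets --delete-local-data</div></div>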
<p><a href="http://blog.developpez.com/mikedavem/files/2019/12/151-2-PX-web-console-volume-after-failover.jpg"><img src="http://blog.developpez.com/mikedavem/files/2019/12/151-2-PX-web-console-volume-after-failover.jpg" alt="151 - 2 - PX web console volume after failover" width="1835" height="941" class="alignnone size-full wp-image-1410" /></a></p>
<p>&#8230;</p>
<div class="codecolorer-container text default" style="overflow:auto;white-space:nowrap;border:1px solid #9F9F9F;width:650px;"><div class="text codecolorer" style="padding:5px;font:normal 12px/1.4em Monaco, Lucida Console, monospace;white-space:nowrap">$ kubectl get pod -o wide<br />
NAME &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; READY &nbsp; STATUS &nbsp; &nbsp;RESTARTS &nbsp; AGE &nbsp; IP &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; NODE &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; NOMINATED NODE &nbsp; READINESS GATES<br />
mssql-deployment-67fdd4759-rbxcb &nbsp; 1/1 &nbsp; &nbsp; Running &nbsp; 0 &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;31m &nbsp; 172.16.197.157 &nbsp; k8n2.dbi-services.test</div></div>
<p>Another important point: my SQL Server data survived the pod restart and remained available through my SQL Server instance, as expected! This was a short introduction to Portworx capabilities, and I will continue to share more about it in the near future!</p>
<p>See you !</p>
<p>David Barbarin</p>
]]></content:encoded>
			<wfw:commentRss></wfw:commentRss>
		<slash:comments>1</slash:comments>
		</item>
	</channel>
</rss>
