<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[The Kanto Blog]]></title><description><![CDATA[My random thoughts and mistakes published in hopes it may help someone else]]></description><link>https://blog.kanto.cloud/</link><image><url>https://blog.kanto.cloud/favicon.png</url><title>The Kanto Blog</title><link>https://blog.kanto.cloud/</link></image><generator>Ghost 2.38</generator><lastBuildDate>Tue, 07 Apr 2026 20:46:25 GMT</lastBuildDate><atom:link href="https://blog.kanto.cloud/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[PRTG StatusPage.io Sensor]]></title><description><![CDATA[<!--kg-card-begin: image--><figure class="kg-card kg-image-card"><img src="https://blog.kanto.cloud/content/images/2020/06/2020-06-15_12-47-linode-statuspage.png" class="kg-image"></figure><!--kg-card-end: image--><!--kg-card-begin: markdown--><p>Ever wanted to monitor Linode's or DigitalOcean's status page? Or how about Zoom's now that it's so popular in 2020?</p>
<p>You have the option to subscribe to each of them, but who wants to be subscribed to 50 different services, all with different notifications? Personally, I prefer to have</p>]]></description><link>https://blog.kanto.cloud/prtg-statuspage-io-sensor/</link><guid isPermaLink="false">5eca073f67e42c00016260a2</guid><category><![CDATA[prtg]]></category><dc:creator><![CDATA[Cody Ernesti]]></dc:creator><pubDate>Mon, 15 Jun 2020 18:06:24 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: image--><figure class="kg-card kg-image-card"><img src="https://blog.kanto.cloud/content/images/2020/06/2020-06-15_12-47-linode-statuspage.png" class="kg-image"></figure><!--kg-card-end: image--><!--kg-card-begin: markdown--><p>Ever wanted to monitor Linode's or DigitalOcean's status page? Or how about Zoom's now that it's so popular in 2020?</p>
<p>You have the option to subscribe to each of them, but who wants to be subscribed to 50 different services, all with different notifications? Personally, I prefer to have all my monitoring happen within PRTG, even for services I am not hosting.</p>
<p>In comes the StatusPage.io sensor. This sensor allows you to parse the JSON endpoints of a status page site, and report the values to PRTG via a custom PowerShell script.</p>
<h2 id="installation">Installation</h2>
<p>The script is hosted on my custom PRTG sensors GitHub repo: <a href="https://github.com/SoarinFerret/prtg-custom-sensors/tree/master/statuspage-io">https://github.com/SoarinFerret/prtg-custom-sensors/tree/master/statuspage-io</a></p>
<ul>
<li>Copy the script <code>Get-StatusPageData.ps1</code> to <code>C:\Program Files (x86)\PRTG Network Monitor\Custom Sensors\EXEXML</code></li>
<li>For the lookup in PRTG to work, copy the file <code>custom.statuspage.status.ovl</code> to <code>C:\Program Files (x86)\PRTG Network Monitor\lookups\custom\</code> on your core server, then reload the lookups<br>
(Setup / System Administration / Administrative Tools -&gt; Load Lookups).</li>
</ul>
<h2 id="usage">Usage</h2>
<p>By default, all it needs is the URI parameter to be populated. The URI should end with <code>/api/v2/components.json</code>, so for example: <code>https://status.linode.com/api/v2/components.json</code></p>
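<p>For reference, a <code>components.json</code> endpoint returns a document shaped roughly like this (abridged, with illustrative values; the <code>status</code> field is what the sensor reports on):</p>

```json
{
  "page": {
    "id": "...",
    "name": "Linode",
    "updated_at": "2020-06-15T12:47:00Z"
  },
  "components": [
    {
      "name": "US-Central (Dallas)",
      "status": "operational",
      "group_id": null
    },
    {
      "name": "Hosted DNS Service",
      "status": "degraded_performance",
      "group_id": null
    }
  ]
}
```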
<p>However, there are a couple of other parameters available. <code>-PrependGroupNames</code> prepends the name of the group each item belongs to. The other is the <code>-Name</code> parameter, which supports wildcards so you can filter the items you want to see. For example, to monitor only items from Linode's Dallas datacenter, use <code>-Name &quot;*Dallas*&quot;</code>.</p>
<hr>
<p>If you have any issues, go ahead and file an issue on GitHub.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Retrieving Linux Hyper-V KVPs in PowerShell]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>This is a small PowerShell script I wrote to retrieve the key value pairs that Hyper-V exposes to Linux VMs. To view the most up to date version of this script, go to this <a href="https://github.com/SoarinFerret/PowerShell/blob/master/HyperV-KVP/Get-LinuxHypervKvpValues.ps1">GitHub link</a>, otherwise look at the initial public code at the bottom of this blog post.</p>]]></description><link>https://blog.kanto.cloud/retrieving-linux-hyper-v-kvps-in-powershell/</link><guid isPermaLink="false">5e94cf3967e42c0001625fdd</guid><dc:creator><![CDATA[Cody Ernesti]]></dc:creator><pubDate>Mon, 13 Apr 2020 23:02:57 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>This is a small PowerShell script I wrote to retrieve the key value pairs that Hyper-V exposes to Linux VMs. To view the most up to date version of this script, go to this <a href="https://github.com/SoarinFerret/PowerShell/blob/master/HyperV-KVP/Get-LinuxHypervKvpValues.ps1">GitHub link</a>, otherwise look at the initial public code at the bottom of this blog post.</p>
<h1 id="whatishypervkeyvaluepairdataexchange">What is Hyper-V Key-Value Pair Data Exchange?</h1>
<p>Simply put, Hyper-V Key-Value Pair Data Exchange is a way of transferring data between Hyper-V and a VM, in both directions. In Windows, this is exposed as registry keys in <code>HKLM:\Software\Microsoft\Virtual Machine</code>.</p>
<p>I am definitely not the most informed on this; if you would like to learn more, please see the following links:</p>
<ul>
<li><a href="https://www.altaro.com/hyper-v/key-value-pair-data-exchange-3-linux/">Eric Siron on Altaro</a> (bonus: he wrote a C++ program to read AND write to the kvp_pools in Linux!)</li>
<li><a href="https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/dn798287(v=ws.11)">Microsoft's Documentation</a></li>
</ul>
<h1 id="howmyscriptworks">How my script works</h1>
<p>This script doesn't really do anything fancy. In Linux, these values are exposed via a file. My script primarily targets the content in <code>/var/lib/hyperv/.kvp_pool_3</code>. In fact, you could access them without a script at all, just by using <code>cat</code> in bash or <code>Get-Content</code> in PowerShell.</p>
<!--kg-card-end: markdown--><!--kg-card-begin: code--><pre><code class="language-bash">$ cat /var/lib/hyperv/.kvp_pool_3
HostNameHyperV.ad.example.comHostingSystemEditionId8HostingSystemNestedLevel0HostingSystemOsMajor10HostingSystemOsMinor0HostingSystemProcessorArchitecture9HostingSystemProcessorIdleStateMax0HostingSystemProcessorThrottleMax100HostingSystemProcessorThrottleMin5HostingSystemSpMajor0HostingSystemSpMinor0PhysicalHostNameHYPERVPhysicalHostNameFullyQualifiedHyperV.ad.example.comVirtualMachineDynamicMemoryBalancingEnabled0VirtualMachineId5A535AFF-1708-4F6B-9B23-E506DF8CC5C5VirtualMachineNamekvp-testingtu18.04
</code></pre><!--kg-card-end: code--><!--kg-card-begin: markdown--><p>But, that is literally just a long string with no delimiter. Or is it?</p>
<p>Each KVP in the file is stored as a 512-byte key followed by a 2048-byte value. Everything after the actual content is null padding, so we can use that to parse the data.</p>
<p>Essentially, my script reads the file as a byte stream and parses out 512 bytes for the key and 2048 bytes for the value, then starts on the next record. I store the pairs in a hashtable, and then return it as a PSObject (because objects are awesome).</p>
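<p>The same fixed-width parsing can be sketched in bash. This is a toy example that builds a synthetic one-record pool file (with made-up values) instead of reading the real <code>/var/lib/hyperv/.kvp_pool_3</code>, so you can see the offset math in isolation:</p>

```shell
#!/usr/bin/env bash
# Build a synthetic single-record pool file: a 512-byte null-padded key
# followed by a 2048-byte null-padded value (values are made up).
key_str="HostName"
val_str="HyperV.ad.example.com"
{ printf '%s' "$key_str"; head -c $((512  - ${#key_str})) /dev/zero; } >  pool.bin
{ printf '%s' "$val_str"; head -c $((2048 - ${#val_str})) /dev/zero; } >> pool.bin

# Read each field at its fixed offset, then strip the null padding
key=$(dd if=pool.bin bs=1 count=512 2>/dev/null | tr -d '\000')
value=$(dd if=pool.bin bs=1 skip=512 count=2048 2>/dev/null | tr -d '\000')
echo "$key = $value"   # prints "HostName = HyperV.ad.example.com"
```

<p>A real pool file is just many of these 2560-byte records back to back, which is exactly what the PowerShell loop below walks through.</p>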
<p>Here is a sample output:</p>
<!--kg-card-end: markdown--><!--kg-card-begin: code--><pre><code class="language-powershell">PS /home/serveradmin&gt; Get-LinuxHypervKvpValues

HostingSystemOsMinor                        : 0
HostingSystemProcessorThrottleMax           : 100
HostingSystemProcessorIdleStateMax          : 0
PhysicalHostName                            : HYPERV
HostingSystemEditionId                      : 8
HostingSystemOsMajor                        : 10
HostingSystemNestedLevel                    : 0
HostingSystemSpMajor                        : 0
HostName                                    : HyperV.ad.example.com
VirtualMachineDynamicMemoryBalancingEnabled : 0
HostingSystemSpMinor                        : 0
VirtualMachineName                          : kvp-testingtu18.04
HostingSystemProcessorThrottleMin           : 5
HostingSystemProcessorArchitecture          : 9
PhysicalHostNameFullyQualified              : HyperV.ad.example.com
VirtualMachineId                            : 5A535AFF-1708-4F6B-9B23-E506DF8CC5C5</code></pre><!--kg-card-end: code--><!--kg-card-begin: markdown--><p>In addition, I added the ability to pass a PSSession to the script, so I can get info about remote VMs without having to copy this function over first.</p>
<!--kg-card-end: markdown--><h1 id="the-script">The Script</h1><!--kg-card-begin: code--><pre><code class="language-powershell">Function Get-LinuxHypervKvpValues {
    &lt;#
    .SYNOPSIS
    Retrieve Hyper-V KVP Data Exchange values from a Linux-based Hyper-V virtual machine.

    .DESCRIPTION
    Hyper-V provides key value pairs to VMs to send VM/Host info in a safe way. This function retrieves those key value pairs and returns them as a PowerShell object.

    .PARAMETER Path
    Location of the kvp_pool you would like to access. They are usually named .kvp_pool_x, where x is an integer between 0 and 4.

    .PARAMETER Session
    Optional parameter for a PSSession to remotely retrieve the KVP values.

    .EXAMPLE
    Get-LinuxHypervKvpValues -Session (New-PSSession -Hostname 192.168.1.1 -Username serveradmin)

    .NOTES
    Cody Ernesti
    github.com/soarinferret

    .LINK
    https://github.com/Soarinferret/PowerShell
    https://blog.kanto.cloud/retrieving-linux-hyper-v-kvps-in-powershell

    #&gt;

    Param(
        [ValidateScript({Test-Path $_ -PathType 'Leaf'})] 
        [String]$Path = "/var/lib/hyperv/.kvp_pool_3",

        [Parameter(Mandatory=$false)]
        [ValidateScript({$_.State -eq "Opened"})] 
        [System.Management.Automation.Runspaces.PSSession]$Session
    )
    function get-kvp ($KvpPath){
        $KEY_LENGTH = 512
        $VALUE_LENGTH = 2048

        $KVP_POOL = Get-Content $KvpPath -AsByteStream 

        $properties = @{}
        for($y = 0; $y -lt $KVP_POOL.Length; $y = $y + $KEY_LENGTH + $VALUE_LENGTH){

            $properties.add(
                # Key
                $([System.Text.Encoding]::UTF8.GetString($KVP_POOL[$y..$($y+$KEY_LENGTH -1)]) -replace "`0", ""),
                
                # Value
                $([System.Text.Encoding]::UTF8.GetString($KVP_POOL[$($y+$KEY_LENGTH)..$($y+$KEY_LENGTH+$VALUE_LENGTH -1)]) -replace "`0", "")
            )
        }
        return New-Object PSObject -Property $properties
    }

    if($Session -and $Session.State -eq "Opened"){
        Invoke-Command -Session $Session -ScriptBlock ${function:get-kvp} -ArgumentList $Path
    }else{
        get-kvp $Path
    }
}
</code></pre><!--kg-card-end: code-->]]></content:encoded></item><item><title><![CDATA[BookStack ADFS SAML2 Setup]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p><em>This post was updated on 2/15/2021 with an updated config to bypass the Single Logout issues.</em></p>
<!--kg-card-end: markdown--><p>In the last few weeks, v0.28 was released for <a href="https://github.com/BookStackApp/BookStack/releases/tag/v0.28.0">BookStack</a>, bringing lots of awesome new features and bug fixes, like their baseline API. </p><p>However, my favorite addition is the inclusion of</p>]]></description><link>https://blog.kanto.cloud/bookstack-adfs-setup/</link><guid isPermaLink="false">5e4dfd327408ef0001a7034c</guid><dc:creator><![CDATA[Cody Ernesti]]></dc:creator><pubDate>Thu, 20 Feb 2020 04:37:02 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p><em>This post was updated on 2/15/2021 with an updated config to bypass the Single Logout issues.</em></p>
<!--kg-card-end: markdown--><p>In the last few weeks, v0.28 was released for <a href="https://github.com/BookStackApp/BookStack/releases/tag/v0.28.0">BookStack</a>, bringing lots of awesome new features and bug fixes, like their baseline API. </p><p>However, my favorite addition is the inclusion of SAML2 as a built-in authentication option. Looking through the code, they are taking advantage of the <a href="https://github.com/onelogin/php-saml">onelogin/php-saml</a> library, which is very popular in a lot of other projects.</p><h2 id="bookstack-setup">BookStack Setup</h2><p>Not a lot of setup is involved; simply edit your <code>.env</code> file with the following values:</p><!--kg-card-begin: code--><pre><code>## SAML Config
# Set authentication method to be saml2
AUTH_METHOD=saml2

# Set the display name to be shown on the login button.
# (Login with &lt;name&gt;)
SAML2_NAME=ADFS

# Name of the attribute which provides the user's email address
SAML2_EMAIL_ATTRIBUTE=mail

# Name of the attribute to use as an ID for the SAML user.
SAML2_EXTERNAL_ID_ATTRIBUTE=http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn

# Name of the attribute(s) to use for the user's display name
# Can have multiple attributes listed, separated with a '|' in which
# case those values will be joined with a space.
# Example: SAML2_DISPLAY_NAME_ATTRIBUTES=firstName|lastName
# Defaults to the ID value if not found.
SAML2_DISPLAY_NAME_ATTRIBUTES=displayName

# Identity Provider entityID URL
SAML2_IDP_ENTITYID=http://sts.example.com/adfs/services/trust

# Auto-load metadata from the IDP
# Setting this to true negates the need to specify the next three options
SAML2_AUTOLOAD_METADATA=false

# Identity Provider single-sign-on service URL
# Not required if using the autoload option above.
SAML2_IDP_SSO=https://sts.example.com/adfs/ls/

# Identity Provider single-logout-service URL
# Not required if using the autoload option above.
# Not required if your identity provider does not support SLS.
#SAML2_IDP_SLO=null

# Identity Provider x509 public certificate data.
# Not required if using the autoload option above.
SAML2_IDP_x509="MIIC2...."
</code></pre><!--kg-card-end: code--><h2 id="adfs-setup">ADFS Setup</h2><p>I do not use ADFS with a GUI, so I don't have screenshots of what the ADFS Management MMC would show. I do however have the PowerShell and the claims rules you need.<br><br>Simply copy the claims rules into a file, and use that file in the PowerShell command provided below.</p><h3 id="claims-rules">Claims Rules</h3><!--kg-card-begin: code--><pre><code>@RuleTemplate = "LdapClaims"
@RuleName = "User Attributes"
c:[Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname", Issuer == "AD AUTHORITY"]
 =&gt; issue(store = "Active Directory", types = ("http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn", "mail", "groups", "displayName"), query = ";userPrincipalName,otherMailbox,tokenGroups,displayName;{0}", param = c.Value);

@RuleName = "Transform UPN to Name ID"
c:[Type == "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn"]
 =&gt; issue(Type = "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier", Issuer = c.Issuer, OriginalIssuer = c.OriginalIssuer, Value = c.Value, ValueType = c.ValueType, Properties["http://schemas.xmlsoap.org/ws/2005/05/identity/claimproperties/format"] = "urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress");</code></pre><!--kg-card-end: code--><!--kg-card-begin: markdown--><h3 id="powershell">PowerShell</h3>
<!--kg-card-end: markdown--><!--kg-card-begin: code--><pre><code class="language-powershell">Add-AdfsRelyingPartyTrust -Name Bookstack `
           -MetadataUrl https://docs.example.com/saml2/metadata `
           -IssuanceAuthorizationRules '@RuleTemplate = "AllowAllAuthzRule" =&gt; issue(Type = "http://schemas.microsoft.com/authorization/claims/permit", Value = "true");' `
           -IssuanceTransformRulesFile C:\bookstack-claimrules.txt</code></pre><!--kg-card-end: code-->]]></content:encoded></item><item><title><![CDATA[Install Rancher 2.0 using Docker-Compose]]></title><description><![CDATA[<p>Another short and sweet one here - installing Rancher 2.0 using a <code>docker-compose.yml</code> file. Why? Why not! I prefer to have a docker-compose file every time I set up a docker container, even if it's just one container. It's an easy way to document how you want to configure</p>]]></description><link>https://blog.kanto.cloud/rancher-docker-compose/</link><guid isPermaLink="false">5c7df882c68d790001d13c8e</guid><category><![CDATA[docker]]></category><category><![CDATA[rancher]]></category><category><![CDATA[docker-compose]]></category><dc:creator><![CDATA[Cody Ernesti]]></dc:creator><pubDate>Tue, 05 Mar 2019 04:51:48 GMT</pubDate><content:encoded><![CDATA[<p>Another short and sweet one here - installing Rancher 2.0 using a <code>docker-compose.yml</code> file. Why? Why not! I prefer to have a docker-compose file every time I set up a docker container, even if it's just one container. It's an easy way to document how you want to configure your setup, no matter how simple or complex!</p>
<p>But, this also allows me to show different setups that may vary from the norm. For example, I am using external SSL termination on a completely different host. These are the methods I will go over:</p>
<ul>
<li>Rancher 2.0 with self-signed cert</li>
<li>Rancher 2.0 with your own cert</li>
<li>Rancher 2.0 using Let's Encrypt</li>
<li>Rancher 2.0 using External SSL Termination</li>
<li>Rancher 2.0 with SSL termination using <a href="https://github.com/jwilder/nginx-proxy">nginx-proxy</a> and <a href="https://github.com/JrCs/docker-letsencrypt-nginx-proxy-companion">Let's Encrypt</a></li>
</ul>
<p>Bonus: All of these already include the necessary docker volume for persistent storage!</p>
<p>This does not include the steps for installing docker, or docker-compose. I recommend using the documentation provided on <a href="https://docs.docker.com">https://docs.docker.com</a>, and in addition, take a look at the Docker version requirements on Ranchers website <a href="https://rancher.com/docs/rancher/v2.x/en/installation/requirements/">here</a>.</p>
<h2 id="rancher20withselfsignedcert">Rancher 2.0 with self-signed cert</h2>
<pre><code class="language-lang-yaml">version: '3'

services:
  rancher:
    image: rancher/rancher:latest
    restart: unless-stopped
    ports:
      - '443:443'
      - '80:80'
    volumes:
      - rancher-vol:/var/lib/rancher

volumes:
  rancher-vol:
</code></pre>
<h2 id="rancher20withyourowncert">Rancher 2.0 with your own cert</h2>
<p>If you use this, make sure you update the certificate volume mappings to the location of your PEM files!</p>
<pre><code class="language-lang-yaml">version: '3'

services:
  rancher:
    image: rancher/rancher:latest
    restart: unless-stopped
    ports:
      - '443:443'
      - '80:80'
    volumes:
      - rancher-vol:/var/lib/rancher
      - ./full_chain.pem:/etc/rancher/ssl/cert.pem:ro
      - ./privatekey.pem:/etc/rancher/ssl/key.pem:ro

volumes:
  rancher-vol:
</code></pre>
<p>If you are using a self-signed certificate, make sure you add the following line to your volumes with the location of your <code>cacerts.pem</code> file:</p>
<pre><code class="language-lang-yaml">      - ./cacerts.pem:/etc/rancher/ssl/cacerts.pem:ro
</code></pre>
<h2 id="rancher20usingletsencrypt">Rancher 2.0 using Let's Encrypt</h2>
<p>For this one to work, Rancher needs to be sitting on a machine with a public IP, or ports 80 and 443 forwarded to it. Be sure to update your domain and double-check your public DNS record is pointed to your public IP address.</p>
<pre><code class="language-lang-yaml">version: '3'

services:
  rancher:
    image: rancher/rancher:latest
    restart: unless-stopped
    ports:
      - '80:80'
    volumes:
      - rancher-vol:/var/lib/rancher
    command: --acme-domain rancher.example.com

volumes:
  rancher-vol:
</code></pre>
<h2 id="rancher20withexternalssltermination">Rancher 2.0 with External SSL Termination</h2>
<p>Since the SSL termination is happening on a different host or load balancer, we only need to expose port 80.</p>
<pre><code class="language-lang-yaml">version: '3'

services:
  rancher:
    image: rancher/rancher:latest
    restart: unless-stopped
    ports:
      - '80:80'
    volumes:
      - rancher-vol:/var/lib/rancher
    command: --no-cacerts

volumes:
  rancher-vol:
</code></pre>
<h2 id="rancher20withsslterminationusingnginxproxyandletsencrypt">Rancher 2.0 with SSL termination using <a href="https://github.com/jwilder/nginx-proxy">Nginx-Proxy</a> and <a href="https://github.com/JrCs/docker-letsencrypt-nginx-proxy-companion">Let's Encrypt</a></h2>
<p>This one may seem a little weird. You may be thinking, &quot;If Rancher already has built-in Let's Encrypt support, why would I use this?&quot; To which I would reply: what if you'd like SSL termination for more than one service on the same host?</p>
<pre><code class="language-lang-yaml">version: '3'

services:
  nginx-proxy:
    restart: always
    image: jwilder/nginx-proxy:alpine
    container_name: nginx-proxy
    ports:
      - &quot;80:80&quot;
      - &quot;443:443&quot;
    volumes:
      - &quot;/etc/nginx/vhost.d&quot;
      - &quot;/usr/share/nginx/html&quot;
      - &quot;certs:/etc/nginx/certs:ro&quot;
      - &quot;/var/run/docker.sock:/tmp/docker.sock:ro&quot;
      - &quot;./custom_nginx_settings.conf:/etc/nginx/conf.d/custom_nginx_settings.conf&quot;

  nginx-letsencrypt:
    restart: always
    image: jrcs/letsencrypt-nginx-proxy-companion
    container_name: nginx-letsencrypt
    depends_on:
      - nginx-proxy
    volumes_from:
      - nginx-proxy
    volumes:
      - &quot;/var/run/docker.sock:/var/run/docker.sock:ro&quot;
      - &quot;certs:/etc/nginx/certs:rw&quot;

  rancher:
    image: rancher/rancher:latest
    restart: unless-stopped
    expose:
      - '80'
    volumes:
      - rancher-vol:/var/lib/rancher
    command: --no-cacerts
    environment:
      - &quot;VIRTUAL_HOST=rancher.example.com&quot;
      - &quot;LETSENCRYPT_HOST=rancher.example.com&quot;

# other proxied services below here

volumes:
  rancher-vol:
</code></pre>
<p></p><p></p>]]></content:encoded></item><item><title><![CDATA[Inject a Static IP Address in Ubuntu 18.04 on Hyper-V]]></title><description><![CDATA[<p>If you didn't already know this, you can use the WMI class Msvm_GuestNetworkAdapterConfiguration and insert a static IP address into a Hyper-V VM. Ravikanth Chaganti provides a <a href="https://www.ravichaganti.com/blog/set-or-inject-guest-network-configuration-from-hyper-v-host-windows-server-2012/">PowerShell script</a> capable of setting a static IP on a VM. I slightly modified this to work on a remote Hyper-V host.</p>]]></description><link>https://blog.kanto.cloud/static-ip-injection-in-linux-on-hyper-v/</link><guid isPermaLink="false">5c6cc742c68d790001d13c65</guid><category><![CDATA[hyperv]]></category><category><![CDATA[linux]]></category><dc:creator><![CDATA[Cody Ernesti]]></dc:creator><pubDate>Tue, 26 Feb 2019 00:56:00 GMT</pubDate><content:encoded><![CDATA[<p>If you didn't already know this, you can use the WMI class Msvm_GuestNetworkAdapterConfiguration and insert a static IP address into a Hyper-V VM. Ravikanth Chaganti provides a <a href="https://www.ravichaganti.com/blog/set-or-inject-guest-network-configuration-from-hyper-v-host-windows-server-2012/">PowerShell script</a> capable of setting a static IP on a VM. I slightly modified this to work on a remote Hyper-V host. I will provide a Gist of it at the bottom of this post.</p>
<p>Using this script, you can assign an IP to a Windows VM with ease:</p>
<pre><code class="language-lang-powershell">Get-VMNetworkAdapter -VMName Win2016 | `
     Set-VMNetworkConfiguration.ps1 -IPAddress 192.168.1.2 `
                                    -Subnet 255.255.255.0 `
                                    -DNSServer 192.168.1.1 `
                                    -DefaultGateway 192.168.1.1
</code></pre>
<p>If you try the same thing with an Ubuntu VM without any initial setup, it will fail. Below are the steps necessary to make it work on Ubuntu 16.04 Server and Ubuntu 18.04 Server. <a href="https://docs.microsoft.com/en-us/windows-server/virtualization/hyper-v/supported-ubuntu-virtual-machines-on-hyper-v">This document</a> gives the steps for Ubuntu 16.04, but doesn't provide the full steps for Ubuntu 18.04.</p>
<h1 id="ubuntu1604">Ubuntu 16.04</h1>
<p>Ubuntu 16.04 is quite simple, just a few lines and a full shutdown:</p>
<pre><code class="language-lang-bash">sudo apt update
sudo apt install linux-azure -y
sudo shutdown now
</code></pre>
<p>It's important to note that you must do a full shutdown; otherwise, the changes will not be reflected.</p>
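<p>To confirm the VM actually came back up on the new kernel, check the running kernel release. My assumption here is that the <code>linux-azure</code> package installs a kernel whose release string ends in <code>-azure</code>:</p>

```shell
# Check which kernel the VM booted after the full shutdown and power-on
kernel="$(uname -r)"
case "$kernel" in
  *-azure) echo "linux-azure kernel active: $kernel" ;;
  *)       echo "not on an azure kernel yet: $kernel" ;;
esac
```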
<h1 id="ubuntu1804">Ubuntu 18.04</h1>
<p>Ubuntu 18.04 is slightly more complicated: it ships with netplan.io by default, which is incompatible with the linux-azure module. Hyper-V will show the netplan-assigned IP address, but IP injection will not update it. To make it work, you will need to install ifupdown and resolvconf, and then remove netplan.io.</p>
<pre><code class="language-lang-bash"># install ifupdown, resolvconf, and linux-azure
sudo apt update
sudo apt install ifupdown resolvconf linux-azure -y

# add a default config
cat &lt;&lt;EOF | sudo tee /etc/network/interfaces
source /etc/network/interfaces.d/*

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto eth0
iface eth0 inet dhcp
EOF

# remove netplan and shutdown
sudo apt purge netplan.io -y
sudo shutdown now
</code></pre>
<h1 id="modifiedpowershellscriptgist">Modified PowerShell Script Gist</h1>
<p>Here is the gist promised from above:</p>
<script src="https://gist.github.com/SoarinFerret/8497f123ae1d6790b2de740010ca61fe.js"></script>]]></content:encoded></item><item><title><![CDATA[A Quick and Dirty Docker-Compose Managed Reverse Proxy]]></title><description><![CDATA[<p>This one is going to be short and sweet. Many of you have probably read and/or used the <em>very</em> popular docker nginx-proxy by <a href="https://github.com/jwilder/nginx-proxy">jwilder</a>. This is not another guide for that. I recommend you read the readme on the linked GitHub for that, or find a different guide.</p>
<p>No,</p>]]></description><link>https://blog.kanto.cloud/docker-reverse-proxy/</link><guid isPermaLink="false">5c6b5bdb0ebf0900015aadb7</guid><category><![CDATA[docker]]></category><dc:creator><![CDATA[Cody Ernesti]]></dc:creator><pubDate>Wed, 16 Jan 2019 02:17:00 GMT</pubDate><content:encoded><![CDATA[<p>This one is going to be short and sweet. Many of you have probably read and/or used the <em>very</em> popular docker nginx-proxy by <a href="https://github.com/jwilder/nginx-proxy">jwilder</a>. This is not another guide for that. I recommend you read the readme on the linked GitHub for that, or find a different guide.</p>
<p>No, this guide is for proxying non-dockerized services through the same reverse proxy, and the same <code>docker-compose.yml</code> file. I made a small alpine container just for this purpose. You can find it on <a href="https://github.com/SoarinFerret/iptablesproxy">GitHub</a> and <a href="https://cloud.docker.com/repository/docker/soarinferret/iptablesproxy">Docker Hub</a>.</p>
<p>For example, I have a VM running docker, with nginx-proxy performing proxy services for a Ghost blog and Grafana. The file would look something like this:</p>
<pre><code class="language-prettyprint">version: '3'

services:
  nginx-proxy:
    restart: always
    image: jwilder/nginx-proxy:alpine
    container_name: nginx-proxy
    ports:
      - &quot;80:80&quot;
      - &quot;443:443&quot;
    volumes:
      - &quot;/etc/nginx/vhost.d&quot;
      - &quot;/usr/share/nginx/html&quot;
      - &quot;/var/run/docker.sock:/tmp/docker.sock:ro&quot;
  
  ghost:
    image: ghost:latest
    restart: always
    environment:
      url: https://blog.example.com
      VIRTUAL_HOST: blog.example.com
  
  grafana:
    image: grafana/grafana:latest
    restart: always
    environment:
      VIRTUAL_HOST: monitor.example.com
</code></pre>
<p>Now let's say I have a PRTG server running on a separate VM that I want to add to the proxy service. You could add the following to the bottom of the <code>docker-compose.yml</code> file:</p>
<pre><code class="language-prettyprint">...
  prtg:
    image: soarinferret/iptablesproxy:latest
    restart: always
    cap_add:
      - NET_ADMIN
      - NET_RAW
    environment:
      SERVERIP: 192.168.0.5
      SERVERPORT: 80
      HOSTPORT: 80
      VIRTUAL_HOST: prtg.kanto.cloud
    expose:
      - '80'
</code></pre>
<p>Now, after running <code>docker-compose up -d</code>, the new container will spin up and simply forward all traffic destined for port 80 on the container to port 80 of 192.168.0.5.</p>
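<p>This is also why the compose entry grants <code>NET_ADMIN</code> and <code>NET_RAW</code>: inside the container, forwarding like this boils down to a couple of NAT rules. The following is my own sketch of the general technique, not the image's actual startup script, and it requires root to run:</p>

```shell
# Redirect anything hitting this container's port 80 to the backend VM,
# and masquerade so return traffic flows back through the container.
# (Illustrative only; the image's real rules may differ.)
iptables -t nat -A PREROUTING  -p tcp --dport 80 -j DNAT --to-destination 192.168.0.5:80
iptables -t nat -A POSTROUTING -p tcp -d 192.168.0.5 --dport 80 -j MASQUERADE
```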
]]></content:encoded></item><item><title><![CDATA[How to Merge VHDS Checkpoint Files]]></title><description><![CDATA[<h1 id="backstoryhowiaccidentlymade35tbsofdatatakeup5tbsofspace">Backstory - how I accidentally made 3.5 TBs of data take up 5 TBs of space</h1>
<p>Ok, technically Veeam created the checkpoints on my VHDS due to a misconfiguration on my part. And I didn't notice that they were there for a few months. Oops. Until one day, I</p>]]></description><link>https://blog.kanto.cloud/howto-merge-vhds-checkpoints/</link><guid isPermaLink="false">5c6a41040ebf0900015aadaa</guid><category><![CDATA[hyperv]]></category><category><![CDATA[powershell]]></category><dc:creator><![CDATA[Cody Ernesti]]></dc:creator><pubDate>Tue, 24 Jul 2018 22:09:00 GMT</pubDate><content:encoded><![CDATA[<h1 id="backstoryhowiaccidentlymade35tbsofdatatakeup5tbsofspace">Backstory - how I accidentally made 3.5 TBs of data take up 5 TBs of space</h1>
<p>Ok, technically Veeam created the checkpoints on my VHDS due to a misconfiguration on my part. And I didn't notice that they were there for a few months. Oops. Until one day, I got an alert from PRTG saying that the cluster shared volume (CSV) on the cluster was running out of space; however, the volume on the guest cluster was still showing plenty of free space.</p>
<p>And then I saw it. 3.5 TBs of data taking up 5 TBs of my available space.</p>
<p>So I did some research. Sure, some people have complained about similar issues. Their solution? Copy all the data off onto new storage and try again, or revert to a VHDX. That wasn't really an option for me. So I started digging into how a VHDS works.</p>
<h1 id="howavhdsworks">How a VHDS works</h1>
<p>Quoted by <a href="https://docs.microsoft.com/en-us/windows-server/virtualization/hyper-v/manage/create-vhdset-file">Microsoft</a>:</p>
<blockquote>
<p>VHD Set files are a new shared Virtual Disk model for guest clusters in Windows Server 2016. VHD Set files support online resizing of shared virtual disks, support Hyper-V Replica, and can be included in application-consistent checkpoints.</p>
</blockquote>
<p>... That definition makes it sound pretty cool, but it also gives off the idea that a VHDS is some fancy new format. In reality, at its core, it's simply a binary file with a <code>.vhds</code> extension that holds a pointer to a VHDX with an <code>.avhdx</code> file extension.</p>
<p>Don't believe me? Opening up the <code>.vhds</code> file in notepad will be hectic, but if you dig around towards the bottom of the file, you will see the reference to the <code>.avhdx</code> file.</p>
<p><img src="https://blog.kanto.cloud/content/images/2019/08/vhds-avhdx-reference.PNG" alt="vhds-avhdx-reference"></p>
<h1 id="theproblem">The Problem</h1>
<p>So the real problem is I have a VHDS pointing to an <code>.avhdx</code>; however, that <code>.avhdx</code> file has a chain of parent <code>.avhdx</code> files that I need to merge together. Here is what an example directory (sorry, no production screenshots here!) reflecting the problem looks like:</p>
<p><img src="https://blog.kanto.cloud/content/images/2019/08/vhds-dir.PNG" alt="vhds-dir"></p>
<p>Also, given my basic understanding of how a VHDS works behind the scenes, we have to account for some potential problems:</p>
<ul>
<li>The VHDS might somehow be aware of the other checkpoints (though after some tests I doubt it!)</li>
<li>Depending on the size of the checkpoints and speed of the underlying storage, it could take a VERY long time to merge</li>
<li>This will break any replication going on, and a re-seed will be necessary</li>
</ul>
<h1 id="thesolution">The Solution</h1>
<p>For every problem, a solution exists. And that solution can probably be written in PowerShell!</p>
<p><em>Disclaimer</em>: Your results may vary. TEST THIS IN A LAB FIRST! Also, double-check your backups. And then triple-check them. I do not take any fault if you lose all of your data. Merging checkpoints is dangerous, and if done wrong, can and will lose all your data.</p>
<h2 id="ourtasks">Our tasks</h2>
<p>Here are the basic tasks we will need to accomplish:</p>
<ol>
<li>Power down the VMs referencing this VHDS file, and pause any replication going on</li>
<li>Sort the <code>.avhdx</code> files based on the parent-child relationship</li>
<li>Merge the files without breaking the parent-child relationship</li>
<li>This step has a few possibilities.<br>
a. According to <a href="https://docs.microsoft.com/en-us/windows-server/virtualization/hyper-v/manage/create-vhdset-file">this article</a>, you can convert a VHDX to a VHDS using <code>Convert-VHD</code>. Since the <code>.avhdx</code> file in use is technically just a VHDX, we could rename it and run the PowerShell conversion. However, the conversion creates a DUPLICATE of the original VHDX file, even if you specify <code>-DeleteSource</code> when running the command. If you have a 5 TB volume, you will need 5 TB of scratch space.<br>
b. If you are confident that the VHDS has no knowledge of the checkpoints, you can rename the final merged file to the name of the <code>.avhdx</code> file the VHDS references.<br>
c. If you want to be tricky, you can create a new VHDS, delete its newly created <code>.avhdx</code> file, and rename our merged file to that name.</li>
</ol>
<h2 id="step1">Step 1</h2>
<p>I'm leaving this to you to handle!</p>
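<p>That said, if you want a starting point, something along these lines should do it. The VM names here are made up, so substitute your own guest cluster nodes, and skip the replication line if the VM isn't replicated:</p>
<pre><code class="language-prettyprint"># Hypothetical VM names - substitute the nodes of your guest cluster
$vms = 'guest-node1', 'guest-node2'

foreach ($vm in $vms) {
    # Pause replication first (ignored if the VM has no replica)
    Suspend-VMReplication -VMName $vm -ErrorAction SilentlyContinue

    # Shut the guest down cleanly
    Stop-VM -Name $vm
}
</code></pre>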
<h2 id="step2sortingtheavhdxfiles">Step 2: Sorting the AVHDX Files</h2>
<p>The explanation for each step of this is in the comments of the code!</p>
<pre><code class="language-prettyprint"># Get the VHD files, excluding the .vhds file itself
$disks = Get-VHD &quot;.\temp*&quot; | select -Property Path,ParentPath | ? Path -NotLike &quot;*.vhds&quot;

# Create an ArrayList. Better than arrays for performance reasons
$list = New-Object System.Collections.ArrayList($null)

# Add the highest parent VHDX first. [void] suppresses the index Add() returns,
# and -not matches a ParentPath that is either $null or an empty string
[void]$list.Add($($disks | where { -not $_.ParentPath }).Path)

# Cycle through the rest, appending each file as we find its parent
while($list.Count -lt $disks.Count)
{
    forEach($x in $disks)
    {
        if($x.ParentPath -eq $list[$list.Count-1]){
            [void]$list.Add($x.Path)
        }
    }
}
</code></pre>
<h2 id="step3mergingtheavhdxfiles">Step 3: Merging the AVHDX Files</h2>
<p>So, in case you did not know this, you actually do NOT have to merge each <code>.avhdx</code> file into its immediate parent one at a time. If you pass the lowest child as the path and the topmost parent as the destination, <code>Merge-VHD</code> will merge all of the in-between files too.</p>
<pre><code class="language-prettyprint">Merge-VHD -Path $list[$list.Count - 1] -DestinationPath $list[0]
</code></pre>
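<p>Before moving on, it doesn't hurt to sanity-check the result. The surviving file should report no parent once the chain has collapsed:</p>
<pre><code class="language-prettyprint"># ParentPath should be empty on the fully merged file
Get-VHD $list[0] | Select-Object Path, ParentPath, VhdType, FileSize
</code></pre>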
<h2 id="step4">Step 4</h2>
<p>Okay, pick your poison on this one.</p>
<h3 id="step4aconvertingthefile">Step 4a: Converting the file</h3>
<p>One more reminder: this option requires enough scratch space to hold a full copy of the disk, so keep that in mind if storage is tight.</p>
<pre><code class="language-prettyprint"># Build the avhdx file's new base name, and save its directory path
$path = Split-Path $list[0]
$leaf = Split-Path $list[0] -Leaf
$baseFileName = $leaf.Substring(0, $leaf.IndexOf('_'))

# Rename the file
Rename-Item $list[0] &quot;$baseFileName.vhdx&quot;

# Perform the conversion
Convert-VHD &quot;$path\$baseFileName.vhdx&quot; &quot;$path\$baseFileName.vhds&quot;
</code></pre>
<h3 id="step4brenamingthefile">Step 4b: Renaming the file</h3>
<pre><code class="language-prettyprint"># -Leaf is needed here; without it Split-Path returns the directory
Rename-Item $list[0] $(Split-Path $list[$list.Count - 1] -Leaf)
</code></pre>
</code></pre>
<h3 id="step4cnewvhds">Step 4c: New VHDS</h3>
<pre><code class="language-prettyprint"># Create the new VHDS with the same size as previous VHDS
New-VHD .\new.vhds -SizeBytes $(Get-VHD $list[0]).Size

# Rename the newly created avhdx to .old
$newAvhdxName = Split-Path $(Get-VHD .\new_*).Path -Leaf
Rename-Item .\$newAvhdxName &quot;$newAvhdxName.old&quot;

# Rename old avhdx to new avhdx
Rename-Item $list[0] $newAvhdxName
</code></pre>
<h1 id="results">Results</h1>
<p>Well, I ended up testing all three of these in a lab environment, and all of them worked as far as I can tell.</p>
<p>To fix my actual problem though, I ended up going with Option C. I didn't have enough scratch space to perform Option A, and I just didn't quite feel comfortable running Option B.</p>
<p>Good luck and good day to you!</p>
]]></content:encoded></item></channel></rss>