Levelling up your PowerShell modules with Plaster

Between personal, community and team projects, I am involved in the maintenance of over 30 PowerShell modules. I have two recommendations for anyone starting with PowerShell: read the community style and formatting guidelines, and, most importantly, standardise your work. I have developed a structure for all of my projects over the years, and now that I am performing more mentoring and code reviews, I thought I would share some of my tips.

In the past, I would manually create the framework for my new modules. I would create a folder, copy some files from a previous project, rename some things, etc. It was ALL MANUAL! A few weeks ago, I attended the Melbourne PowerShell Meetup, where a fellow Cloud and Datacenter MVP, Rob Sewell, spoke about how he uses Plaster. Rob did a wonderful job showing how Plaster and Pester can be used to develop PowerShell modules and got me thinking about how I too could make use of Plaster.

I grabbed a copy of Rob’s Plaster template and then set about making it my own, changing it to meet my requirements. Going through the process of making my own template caused me to look critically at some of the decisions I was making when creating new modules, and I thought that others might benefit from hearing some of my thought process.

Why use Plaster?

The reason you should use Plaster is to save time!

If you have spent much time doing web development, Plaster is like Yeoman but with a PowerShell focus. With Plaster, I create a template that defines the structure of my modules, including what files and folders should be created. The next time I want to create a new module, I use Plaster, specify the template and some parameters and presto! The module is created just as I like it.

Plaster allows us to quickly make modules that follow the same structure and allows us to get on with coding and delivering quality PowerShell code.

Why should you standardize your file and folder layout?

Sticking to some simple rules about where you place your module’s code, including folder structure, will help you as a maintainer and anyone else who might want to contribute.

Each module needs a starting point, and for PowerShell modules, that will be a root folder. This root folder has the same name as the module, as do the psd1 and psm1 files.

I create a new Git repository for each module, even if it is for an internal project or I am experimenting with something. This goes for repositories in VSTS and GitHub. The reason for this is that it keeps items like issues, pull requests, build, and release pipelines separate. This might seem a bit messy, and you might be worried about having many repositories; however, the cost of repositories is free or negligible, and with this structure I know that a pull request on my Posh-SYSLOG module will only impact a single module.

Now, I don’t place any functions within the psm1 file; instead, each function has its own ps1 file. The benefits of maintaining one function per file are:

  1. Functions are clearly defined and contained,
  2. It is easier to read and search through the module’s code,
  3. Adding new functions is easier,
  4. Removing functions is easier,
  5. The option to reuse functions is possible (as it is often as simple as copying the file), and
  6. Reviewing changes to code is easier, especially for GitHub pull requests as a change to a function only impacts a single file.

Obviously, I don’t leave the ps1 files sitting within the root folder. In most of my current modules, you will see that all of the functions are placed in a folder aptly named functions. I am now separating these into:

  • Public functions, that is, functions intended for users, are placed in functions;
  • Internal functions, those that shouldn’t be available to users, are placed in internal.

There is a simple reason to maintain separate folders. It allows for anyone looking at the code to quickly and visually determine which functions will be exposed to an end user of the module. There is no digging around in files to determine which functions should be, and which ones should not be available to the user.

So, what do I put in the psm1 file? For many years, my psm1 file has contained some dynamic loading code from the Chocolatey project. I have recently switched this code for that created by David Christian. The advantage of David’s code is that it correctly handles the file and folder structure described above and only exports the functions I intend to make public.
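
To give a concrete feel for this approach, here is a minimal sketch of that kind of loader (not David’s exact code). It assumes the functions and internal folders described above, with one function per file, named after the file:

# Find every function file in the public and internal folders.
$Public   = @(Get-ChildItem -Path "$PSScriptRoot\functions\*.ps1" -ErrorAction SilentlyContinue)
$Internal = @(Get-ChildItem -Path "$PSScriptRoot\internal\*.ps1" -ErrorAction SilentlyContinue)

# Dot-source each file so the functions are defined within the module.
foreach ($File in @($Public + $Internal))
{
    . $File.FullName
}

# Only the public functions are exported; the file base names match the function names.
Export-ModuleMember -Function $Public.BaseName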

Since PowerShell 5, there has been support for classes in PowerShell. I think it makes sense to give each class its own ps1 file, bearing the name of the class, and to place these files into a folder of their own, classes.
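
For example, a hypothetical class named LogEntry would live in classes\LogEntry.ps1 and contain nothing but the class definition:

class LogEntry
{
    [datetime] $TimeStamp
    [string]   $Message

    # Simple constructor that stamps the entry with the current time.
    LogEntry ([string] $Message)
    {
        $this.TimeStamp = Get-Date
        $this.Message   = $Message
    }
}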

You are writing Pester tests, aren’t you??? I prefer to give a similar treatment to my Pester files, so I put them into a folder called tests.
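
As a rough example, a unit test file in tests for a hypothetical public function called Get-Something might start out like this:

Describe 'Get-Something' {
    It 'Returns output' {
        Get-Something | Should Not BeNullOrEmpty
    }

    It 'Throws when the name is empty' {
        { Get-Something -Name '' } | Should Throw
    }
}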

The last folder I want to discuss is resources. I use this folder to place any additional files, like executables, template files, or other files that my module may need and that need to be maintained in source control.

Helping users with a Readme and Change Log

I must admit, I haven’t done an excellent job maintaining readmes and change logs for my PowerShell modules. Most of my modules don’t have either of these files, so to encourage me to keep and maintain these files, I have created two template files, README.MD and CHANGELOG.MD.

Using the template file functionality in Plaster, the template will create a readme file containing a description of the module, the author (that’s you), and information on how to install, update and even remove the module.

If you are hosting your module on GitHub, your README.MD is incredibly important. When you browse to a repository on GitHub, this file will be rendered (from markdown) as the homepage. What I have learnt is that this is one of your big opportunities to sell your code to the world; it isn’t just about explaining how to install and use the code, it is also about selling you as a professional.

Including a License

One thing that is often overlooked in the PowerShell community is licensing. No, I am not talking about charging money for your modules, I am talking about open source licenses.

Whilst the licensing of a PowerShell module might not seem important to you, I can assure you that for some, it is extremely critical. In some organisations, developers and administrators may not be able to use an application or library unless they can clearly determine what license is applied to it.

My preference is for the MIT license. Firstly, it is really short and easy to understand; others can be extremely long and discourage people from reading and understanding what they are agreeing to. The MIT license contains conditions requiring the preservation of the original copyright and license notices whilst allowing commercial use, modification, distribution, and private use. Importantly, it clearly defines your liability, or lack thereof.

The License parameter within the Plaster template controls whether the license is included in any new module that you are creating.

A great tool to assist in deciding what license is right for your project is choosealicense.com. It asks you a few questions and will then make recommendations. GitHub also provides some great guidance on licenses.

Include customized VS Code settings

After many years of using the PowerShell ISE and ISE Steroids, I have made the switch to using Visual Studio Code as my primary PowerShell IDE. One of the things I love about VS Code is that it is highly customisable.

With VS Code we can specify a variety of settings that impact how our PowerShell is formatted, for instance:

  • Tabs or spaces?
  • Tab size. Do you like 2 spaces or 4?
  • Where to place open and close braces? Same line? New line?
  • Do we insert whitespace after operators?
  • Etc.

Now some of these things are personal preferences, however I want each of the projects I work on to follow the same format, no matter who is working on the code base. After all, maintaining a uniform coding style across a project assists in its maintenance!

You can actually specify VS Code settings at a global level and at a project level. By specifying how I want the code to be formatted in the settings.json file, located in the .vscode folder within the project’s root folder, I can ensure that anyone who works on the code will produce consistently formatted code.

I also like to include some common tasks that I often want to run from within VS Code. In my template, I currently only include a task to run Pester tests that has been taken from the template included with Plaster. The tasks.json file defines what shell and what tasks can be executed.
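
Under the hood, that test task simply launches PowerShell and runs something along these lines from the project root (IncludeVSCodeMarker just formats failures so VS Code can jump straight to them):

PS> Invoke-Pester -PesterOption @{ IncludeVSCodeMarker = $true }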

Encouraging the use of Pester

One of the things I have been trying to push more for my personal projects is the creation and maintenance of Pester tests. Testing is crucial for delivering reliable code, and Pester provides us with a foundation to create reusable tests for our code.

Rob’s original template included some Pester scaffolds based upon the work of June Blender. June was a driving force behind PowerShell’s documentation and Sapien’s PowerShell HelpWriter.

These scaffolds separate tests into:

  • Unit tests – testing individual functions in isolation.
  • Comment based help tests – validating that comment based help has been written.
  • Feature Tests – Do the features of the module work as a whole?
  • Project tests – Focus on Script Analyzer and that the module loads cleanly.

I have taken these and made some minor modifications. My modifications of Rob and June's templates were just to fix some performance issues I had seen, and to separate the project and help test exceptions into separate files (that are now txt files).

I have included a rough guide on the different files in README_TESTS.md.
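
To give a flavour of the project tests, here is a cut-down sketch (not the exact scaffold) that runs PSScriptAnalyzer across every ps1 file in the module; it assumes the tests folder sits alongside the function folders:

Describe 'PSScriptAnalyzer rules' {
    # Find every script in the module, relative to the tests folder.
    $Scripts = Get-ChildItem -Path "$PSScriptRoot\.." -Filter '*.ps1' -Recurse

    foreach ($Script in $Scripts)
    {
        It "$($Script.Name) produces no Script Analyzer warnings" {
            Invoke-ScriptAnalyzer -Path $Script.FullName -Severity Warning | Should BeNullOrEmpty
        }
    }
}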


Encouraging contributions via GitHub

It is often said that GitHub is the social network for coding. Users can interact with projects, create issues, share knowledge, and contribute to projects together. GitHub isn’t just for hardcore software developers, it is also a place for script developers like PowerShell developers!

It isn’t just open source, Linux types that use GitHub. Microsoft is now one of the biggest organisations on GitHub and one of the biggest contributors. Don’t believe me? PowerShell, Pester, Plaster and even the Azure PowerShell modules can all be found on GitHub. If you haven’t created a GitHub account, now is the time to make a start.

As a developer, putting your code up on GitHub is a great start. It can allow you to participate in the PowerShell community and develop a public profile. If you want others to get involved in your own projects, then you need to take some steps to encourage them.

One step I believe is a fantastic way to encourage participation is to define a code of conduct. You often associate these with larger projects; however, I believe they are a positive sign, a sign of a welcoming and inclusive project, no matter the size of the project. A code of conduct defines how you will treat people, and how you, as a project maintainer, expect to be treated. There are several sites that provide resources for developing a code of conduct; I used Contributor Covenant as I liked the style and language it uses.

There are three other files that I recommend you include: a guide to contributing (contributing.md) and issue and pull request templates (issue_template.md and pull_request_template.md). I based all three of the files contained in my Plaster template on the Atom project, but you can really put whatever you want into them; there isn’t a set structure.

Issue and pull request templates are a fantastic idea as they allow you, the maintainer, to provide some prompts or hints as to what should be included. For instance, in the issue template I ask about what operating system and PowerShell version a user is running, helping me to reproduce the issues that a user might have.

Not all of my projects are hosted on GitHub, so I used a Plaster parameter, GitHub, to control the creation of these files.

The final layout

The final layout for a project would look something like this when viewed from VS Code:

(Screenshot: the generated project layout in VS Code.)

Using this Plaster template

Installing Plaster

You can install Plaster from the PowerShell Gallery:

PS> Install-Module -Name Plaster

Clone the template

You can obtain the template from its GitHub repository; from the command line, you can use the following command to clone it to your local system.

PS> git clone https://github.com/poshsecurity/PlasterTemplate

Creating a new module

Now that you have the template locally, you can run Invoke-Plaster to create a new module based upon the template.

I typically follow this workflow:

  1. Create a public (or private) repository on GitHub
  2. Clone the repository locally
    PS> git clone <Path to repository>
  3. Create a hash table containing the required parameters, and then call Invoke-Plaster

    PS> $PlasterParameters = @{
         TemplatePath      = "<path to the Plaster Template above>"
         DestinationPath   = "<path to the new repository you cloned>"
         AuthorName        = "Cool PowerShell Developer"
         AuthorEmail       = "[email protected]"
         ModuleName        = "MyNewModule"
         ModuleDescription = "This is my awesome PowerShell Module!"
         ModuleVersion     = "0.1"
         ModuleFolders     = @("functions", "internal")
         GitHub            = "Yes"
         License           = "Yes"
     }
    
     PS> Invoke-Plaster @PlasterParameters
  4. Plaster should then execute, creating the required files and folders.
  5. When you are ready you can push everything up to GitHub.

Congratulations, you are ready to start coding!

Wrapping Up

You can get my Plaster Template here.

The benefits of using Plaster and a standardized structure for PowerShell module development are:

  • Faster project start-up,
  • Clear delineation of internal and public functions,
  • Separation of functions, classes, tests, and resources, and,
  • Simpler psm1 files.

We have also seen how we can use:

  • README.MD and CHANGELOG.MD files to create better documentation,
  • A license to give users and other developers guidance on how they can use and extend your code,
  • Per-project VS Code settings to allow for a uniform developer experience,
  • Pester scaffolds and a standardised structure for our tests, and
  • A code of conduct, contribution guide, and issue and pull request templates to encourage contributions.

Big thanks to Rob Sewell for his Plaster template and June Blender for her Pester templates.

Kieran Jacobsen

DNS Squatting with Azure App Services

TL;DR

  • SaifAllah benMassaoud from Vulnerability Lab discovered that resnet.microsoft.com was resolving to resnetportal-prod.azurewebsites.net; however, there was no Azure App Service at this address.
  • Anyone could have established an App Service with this DNS name, and thus squat on a microsoft.com subdomain.
  • This vulnerability has the potential to affect any organisation that is using App Services (or similar PaaS services) and custom domain names where they do not have appropriate controls in place.

Like many involved in the InfoSec industry, I monitor a bunch of vulnerability disclosure feeds. They can be a valuable source of knowledge: new techniques, new bugs, new breaches or just interesting tools and technology.

Several days ago, a post titled Microsoft Resnet - DNS Configuration Web Vulnerability grabbed my interest. It has an innocuous title, and I couldn’t recall anyone else talking about a Microsoft DNS vulnerability. The post isn’t that long; the description and the proof-of-concept are only a few paragraphs in length. What I discovered, however, was an interesting vulnerability, one that, I feel, is going to become more and more prevalent with the use of Platform as a Service (PaaS) technologies like Azure App Services.

Simply put, in this situation, someone had created a CNAME entry within Microsoft’s DNS to point resnet.microsoft.com to resnetportal-prod.azurewebsites.net; unfortunately, resnetportal-prod.azurewebsites.net didn’t exist. That doesn’t sound so bad, right?


For those who are not familiar with the azurewebsites.net domain, this is the domain used by Azure App Services to host services. When you create an App Service, you specify a name, like myawesomewebapp.azurewebsites.net, and you can then deploy your application to that App Service. You can pick whatever you want as the application’s name, as long as no one else has taken it before you.

You can optionally specify a custom domain name for your App Service, like myawesomewebapp.com, and use a CNAME entry to map your custom domain to your azurewebsites.net domain (you can now also use an A record).

You should now be able to see the problem: resnetportal-prod.azurewebsites.net didn’t exist, yet resnet.microsoft.com pointed to this App Service. What did exist was a great squatting/hijacking opportunity. Anyone could have signed up for an Azure subscription, created an App Service with the name resnetportal-prod.azurewebsites.net, and then hijacked resnet.microsoft.com. Vulnerability Lab managed to discover a pretty significant issue.

What could one do with a subdomain of microsoft.com? Phishing, credential theft, and ransomware come to mind pretty quickly. I am sure an APT crew would love to have a domain or subdomain like this. There are probably only a few domain names in the world where the average user, and even the average system administrator, is extremely trusting, and microsoft.com would have to be one of them.

It isn't just the big organisations like Microsoft at risk. Any company that makes use of PaaS services like Azure App Services and CNAME entries could potentially become the next victim. Attackers might use your domain name to attack others or perhaps create more effective attacks against your own users.

Let’s consider our friends at Contoso Limited; they deployed an application for their users to contosoapp.azurewebsites.net, and they also established a custom domain name, home.contoso.com. The app was used for some time, and eventually they decided to decommission it. A developer, or maybe a sysadmin, deletes the Azure App Service, but in their haste, they forget about the DNS entries. More time goes by, and now Bob from an APT group finds the entry for home.contoso.com pointing to contosoapp.azurewebsites.net; he then goes and sets up his own App Service and hijacks home.contoso.com.

Bob then sends out this email to some Contoso email addresses:

Subject: New Employee Experience
From: Contoso Marketing
Body:
    Hi Team,

    We have launched a new employee portal, it is great and has a bunch of awesome features. The site can be found at http://home.contoso.com. 

    From,

    The Contoso Marketing Team

Bob doesn’t even need to hide the links in the email, he doesn’t need any of the usual masking techniques, he can simply display the company’s domain name. If the email structure, text and links are well crafted, how would Fred from Accounting determine if this was a legitimate email?

When a user navigates to the page, perhaps it prompts for credentials, maybe it tries to run a browser exploit? I have no doubt that a campaign against an organisation like this would be extremely successful.

Now, the details of how SaifAllah benMassaoud from Vulnerability Lab initially discovered the misconfiguration are not described in the release. I am going to guess that he probably used an automated DNS enumeration tool like DNSRecon or DNSNinja. These make the discovery of DNS records easy, and it would be simple to automate additional checks based upon their results to find vulnerable configurations.

In terms of defending against these issues, there are two methods, both of which need to be implemented by organisations:

  1. Appropriate change control processes: If an App Service or similar PaaS solution is being decommissioned, processes should be in place to ensure that any associated DNS records are removed;
  2. Monitor your DNS zones for configuration issues: Have automated scripts that check and send alerts if configuration issues are found (a sketch of such a check follows this list).
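
As a rough sketch of the second point, a monitoring script only needs to resolve each CNAME target and raise an alert when that target no longer exists. The record list below is made up for illustration; in practice it would come from your DNS provider’s API or zone files:

# Hypothetical list of CNAME records that we manage.
$CnameRecords = @(
    @{ Name = 'home.contoso.com';   Target = 'contosoapp.azurewebsites.net' }
    @{ Name = 'portal.contoso.com'; Target = 'contosoportal.azurewebsites.net' }
)

foreach ($Record in $CnameRecords)
{
    # If the CNAME target no longer resolves, anyone could register it and hijack our name.
    $Target = Resolve-DnsName -Name $Record.Target -ErrorAction SilentlyContinue
    if ($null -eq $Target)
    {
        Write-Warning "Dangling CNAME: $($Record.Name) points to $($Record.Target), which does not resolve."
    }
}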

If you haven’t looked at a DNS management tool like DNSControl, do so now! DNSControl was originally developed by Stack Overflow, and you don’t need to have hundreds of domain names and records to gain value from a tool that allows you to manage DNS as code.

DNSControl uses a Domain Specific Language for defining domains and their records, independently of your provider. You can use macros and variables to simplify your configuration. My favourite feature is the ability to configure multiple DNS providers; this is great for migrations and for fault tolerance. The CloudFlare provider still allows for control over their proxy as well, ensuring that all of our configuration remains in source control.

Defending against this vulnerability is fairly simple: practice good change control processes and monitor your DNS zones. DNS enumeration tools like DNSRecon and DNSNinja can also assist in determining your organisation’s risk, whilst DNS-as-code tools like DNSControl will give us better control over our DNS change processes.

Kieran Jacobsen

Sending SYSLOG messages to TCP hosts and POSH-SYSLOG V3.0

The Posh-SYSLOG module has been very popular since its first release in 2013. SYSLOG provides a common integration point into enterprise monitoring, alerting and security systems, and administrators and developers often need to push messages from their scripts and automation activities to a SYSLOG server.

There have been two common pieces of feedback: requests for TCP support and for improved performance. I am excited to announce a new version of Posh-SYSLOG, which introduces TCP support and a number of performance improvements.

Kudos must go to Jared Poeppelman (powershellshock) from Microsoft who provided the TCP logic, optimised performance, and added additional Pester cases. I took some time out to make some additional improvements on top of Jared’s work. Due to the significant changes and additional functionality, I am considering this to be version 3.0.

The easiest way to get the module is from the PowerShell Gallery using Install-Module -Name Posh-SYSLOG. You can also clone the GitHub repository.

The first big change by Jared was to implement the Begin {} Process {} End {} structure for Send-SyslogMessage. With this structure, we can leverage the pipeline to send multiple messages. I have developed scripts where I needed to read application logs and then send them to a SIEM product using SYSLOG; access via the pipeline simplifies these scripts and hopefully improves their performance.
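
As an illustration of the pipeline support (the server name and log file here are purely examples), you can now do something like this:

PS> Get-Content -Path .\application.log |
        Send-SyslogMessage -Server 'syslog.contoso.com' -Severity 'Informational' -Facility 'local7'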

The logic around determining the correct value to be sent as the hostname has been cleaned up and refined. This piece was free of issues; however, there were some small tweaks that potentially improved performance. The function is now called as part of the Begin {} block, improving performance for bulk message transmissions. The logic has been moved out to its own internal function, allowing for separate testing and better mocking opportunities in Pester.

Another source of performance improvement is the removal of the need to call Test-NetConnection. This is a dramatic source of improvement when Send-SyslogMessage is executed in a workgroup (that is, a non-domain joined) environment. Previously, we called Test-NetConnection to determine the correct network interface that we are using to communicate with the SYSLOG server; now we simply ask the socket for the source IP address and then use Get-NetIPAddress to check if this is a statically assigned address.
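
Conceptually, the new approach looks something like this simplified sketch (not the module’s exact code; the server name is an example):

# Connect a UDP socket to the SYSLOG server and ask it which local address would be used.
$UdpClient = New-Object -TypeName System.Net.Sockets.UdpClient
$UdpClient.Connect('syslog.contoso.com', 514)
$SourceIP = $UdpClient.Client.LocalEndPoint.Address.ToString()
$UdpClient.Close()

# Get-NetIPAddress reveals whether that address is statically assigned (PrefixOrigin of Manual) or from DHCP.
$IPDetails = Get-NetIPAddress -IPAddress $SourceIP -ErrorAction SilentlyContinue
$IsStatic  = ($IPDetails.PrefixOrigin -eq 'Manual')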

All the network functionality and calls have been moved to internal functions. This helped with testing; I can now mock all the network activities, which allows for better testing with Pester. The network tests are now much more reliable.

Finally, Jared and I have increased the number of Pester tests. I have tried to test everything in the hope that all potential issues are flushed out; a massive upgrade to the functionality and such a refactoring have the potential to introduce issues, but I am confident that things have been appropriately tested. If issues are found, please raise them via GitHub.

So what is the future for Posh-SYSLOG? Well, for now, I just want to ensure there are no bugs or issues; after that, I want to look at implementing the other commonly requested feature: TLS support.

The PowerShell community has been amazing, I have been lucky to have such wonderful community contributions over the past few years. A massive thanks to Jared, Ronald, Xtrahost and Fredruk.

Kieran Jacobsen

Securing PowerShell DSC within Azure ARM Templates

There is no doubt that Azure ARM templates are now the best way to deploy and manage resources within Azure. Recently, I found myself creating an ARM template that deployed some Windows Server 2016 virtual machines with the PowerShell DSC extension to configure them. This is typically very simple: define the virtual machine and include the extension. This time, however, I also needed to include sensitive pieces of information, API keys and credentials, and I wondered: what is the best way to protect these?

If you are familiar with DSC, you will know that you can either leave these assets in plaintext or encrypt them with a certificate. Most documentation and posts found online will direct you towards the plaintext approach when using the DSC extension, but there is a significantly better approach, one which allows sensitive information to be encrypted throughout the deployment process. Even better, it is really simple to implement.

Whilst I was working through setting this up, I found there are some quirks in how you need to put the ARM template together, and I thought it would make a good topic for a post. I am going to talk about some of the challenges that I found, as well as provide some working examples.

Understand the apiVersion and typeHandlerVersion

When you define the PowerShell DSC extension for an Azure virtual machine, there are two fields that control the version and functionality that the extension will provide, apiVersion and typeHandlerVersion.

In terms of ARM templates, the apiVersion specifies how the deployment process talks to Azure, that is, the layout and content of calls that are made to the underlying API; as such, it controls what we can place into our template files. Microsoft uses the apiVersion to provide backward compatibility. When breaking changes are introduced, typically as part of introducing new features, Microsoft will define a new version. This allows older scripts and templates to continue to function long after they have been written. For the PowerShell DSC extension, the apiVersion will impact what fields are valid in your template definition.

The second, typeHandlerVersion, is extremely critical: it defines what version of the extension will be deployed to the virtual machine and then executed. Whilst apiVersion controls talking to the API, typeHandlerVersion controls what happens on the virtual machine itself. This extension regularly receives new features and bug fixes, from support for new operating systems, privacy settings and WMF versions, to a tonne of issues fixed in between.

Now for the catch, where the devil enters the details. Depending upon what versions you specify, your template syntax may need to change. I found out about these quirks the hard way; this tiny detail can cost you quite a bit of time troubleshooting syntax issues. A mismatch between the apiVersion, typeHandlerVersion and your template’s syntax could lead to errors during template validation, DSC compilation or DSC application.

So, what happens if you use an older version or mix versions and syntax? Well, nothing bad might happen, or you might end up with template validation errors, DSC compilation errors, or even errors during DSC processing (that is, as the DSC is applied to the server).

But what about autoUpgradeMinorVersion?

According to the documentation, the extensions have a Boolean attribute, autoUpgradeMinorVersion, that allows a user to simply pick the major version and have Azure install the latest version on their virtual machine at provisioning time. It should be highlighted that this only happens during the provisioning of the extension; extensions will not be upgraded unless the user explicitly removes and then re-provisions the extension. Hot fixes (that is, those of the format 2.9.x.x) are automatically selected; you don’t get any control. The problem is, this only upgrades the extension; it isn’t going to change your ARM template syntax.

Be Wary of Visual Studio Created Templates

If you are developing your ARM templates within Visual Studio, you are probably using the “Add New Resource” window to include the PowerShell DSC extension in your virtual machine. It so happens that, at least in Visual Studio 2017, it will default to an old typeHandlerVersion of 2.9. For most, this isn’t a problem, especially if autoUpgradeMinorVersion is set to true.

I do want to point out that versions 2.4 up to 2.13 were retired in August 2016.

If you do need to handle sensitive data, then work with the release history for the PowerShell DSC extension, and the documentation on the syntax to ensure that you are working against the latest versions.

How to correctly handle sensitive data?

Note/Warning: At the time of writing, this all worked correctly for versions 2.24 to 2.26. It may not be correct for later versions.

So how do we correctly pass sensitive data from our template through to our virtual machine and DSC?

To begin with, you need to understand that within the properties for the DSC extension there are two attributes that can be used to specify settings. The first, settings, you are probably familiar with. The other, protectedSettings, you might also have seen; typically it holds configurationUrlSasToken, but you can specify sensitive data to be passed to the DSC configuration here as well.

So, I have a super simple DSC configuration; it will create a user account using the specified details. I have opted to provide the account’s username and password using a PSCredential parameter, whilst also setting a description for the account, specified as a string parameter.

Configuration Main
{
    Param
    (
        [Parameter(Mandatory=$true)]
        [ValidateNotNullorEmpty()]
        [PSCredential]
        $Credential,

        [Parameter(Mandatory=$true)]
        [ValidateNotNullorEmpty()]
        [String]
        $AccountDescription
    )

    Node Localhost
    {
        User NewUser
        {
            UserName             = $Credential.UserName
            Description          = $AccountDescription
            Disabled             = $false
            Ensure               = 'Present'
            Password             = $Credential
            PasswordNeverExpires = $true
        }
    }
}

Obviously, I want to ensure that the credential is kept secure; I don’t want it left in plaintext as part of the DSC compilation.

The ARM template looks like this:

{
    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "VirtualMachineName": {
            "type": "string",
            "metadata": {
                "description": "Name of the virtual machine"
            }
        },
        "Username": {
            "type": "string",
            "metadata": {
                "description": "Account Username"
            }
        },
        "Password": {
            "type": "securestring",
            "metadata": {
                "description": "Account Password"
            }
        },
        "DSCPackagePath": {
            "type": "string",
            "metadata": {
                "description": "Path to DSC Package"
            }
        },
        "DSCPackageSasToken": {
            "type": "securestring",
            "metadata": {
                "description": "Sas Token"
            }
        }
    },
    "variables": {
        "AccountDescription": "Account Created as part of DSC"
    },
    "resources": [
        {
            "name": "[concat(parameters('VirtualMachineName'),'/Microsoft.Powershell.DSC')]",
            "type": "Microsoft.Compute/virtualMachines/extensions",
            "location": "[resourceGroup().location]",
            "apiVersion": "2016-03-30",
            "dependsOn": [
                "[concat('Microsoft.Compute/virtualMachines/', parameters('VirtualMachineName'))]"
            ],
            "properties": {
                "publisher": "Microsoft.Powershell",
                "type": "DSC",
                "typeHandlerVersion": "2.24",
                "autoUpgradeMinorVersion": true,
                "protectedSettings": {
                    "configurationUrlSasToken": "[parameters('DSCPackageSasToken')]",
                    "configurationArguments": {
                        "Credential": {
                            "Username": "[parameters('Username')]",
                            "Password": "[parameters('Password')]"
                        }
                    }
                },
                "settings": {
                    "configuration": {
                        "url": "[parameters('DSCPackagePath')]",
                        "script": "MyDsc.ps1",
                        "function": "Main"
                    },
                    "configurationArguments": {
                        "AccountDescription": "[variables('AccountDescription')]"
                    }
                }
            }
        }
    ]
}

See the cool shortcut I have used in the protectedSettings? See how the Credential is built within the template? I provide the username and password, Azure takes care of the rest, and DSC just gets a PSCredential. Pretty neat!

It is worth highlighting that my non-sensitive data, like the AccountDescription, is still found within the settings attribute; I have only moved my sensitive data to protectedSettings. Of course, this decision is entirely up to you.

Now, it might just be my paranoia, but I found that things went much more smoothly if I placed protectedSettings before settings. It shouldn’t matter, but things just seemed happier, and I know that doesn’t sound very logical.

If you have organisational privacy concerns

It is worth noting that if you have strict organisational privacy or data sharing controls, you should disable data collection:

"settings": {
    "privacy": {
        "DataCollection": "Disable"
    }
}

In Summary

The two examples shown here are up on GitHub; I have also included a more detailed example as a Visual Studio 2017 project.

Kieran Jacobsen

PS. I am still looking for talented individuals to join my team at Readify. I am after people with a passion for infrastructure, Azure, operations, and everything in between. Does that sound like you? Hit me up on Twitter or LinkedIn!

Azure Automation and DevSecOps Presentation Content

I have had a number of speaking opportunities over the last two months, and after each one, I have promised that I would post the slides up here. So, without further ado, you can find the slides for each of the following sessions:

  • Readify Dev Breakfast WA
  • Infrastructure Saturday 2017
  • CrikeyCon 2017

I want to thank all of those who attended; your interest and your questions make all of the challenging work worthwhile. I also want to thank all of the organisers, as without your work there simply wouldn’t be any conference for me to present at.

My next presentation will be my Ransomware 0, Admins 1 session at Experts Live Australia on the 6th of April.