DNS Squatting with Azure App Services

TL;DR

  • SaifAllah benMassaoud from Vulnerability Lab discovered that resnet.microsoft.com resolved to resnetportal-prod.azurewebsites.net; however, there was no Azure App Service at that address.
  • Anyone could have established an App Service with this DNS name, and thus squat on a microsoft.com subdomain.
  • This vulnerability has the potential to affect any organisation that uses App Services (or similar PaaS services) with custom domain names and does not have appropriate controls in place.

Like many involved in the InfoSec industry, I monitor a bunch of vulnerability disclosure feeds. They can be a valuable source of knowledge: new techniques, new bugs, new breaches or just interesting tools and technology.

Several days ago, a post titled Microsoft Resnet - DNS Configuration Web Vulnerability grabbed my interest. The title is innocuous, and I didn't recall anyone else talking about a Microsoft DNS vulnerability. The post isn't long, with the description and proof-of-concept only a few paragraphs in length; what I discovered, however, was an interesting vulnerability, one that I feel is going to become more and more prevalent with the use of Platform as a Service (PaaS) technologies like Azure App Services.

Simply put, someone had created a CNAME entry within Microsoft's DNS to point resnet.microsoft.com to resnetportal-prod.azurewebsites.net. Unfortunately, resnetportal-prod.azurewebsites.net didn't exist. That doesn't sound so bad, right?


For those not familiar with the azurewebsites.net domain, this is the domain used by Azure App Services to host services. When you create an App Service, you specify a name, like myawesomewebapp.azurewebsites.net, and you can then deploy your application to that App Service. You can pick whatever you want as the application's name, as long as no one else has taken it before you.

You can optionally specify a custom domain name for your App Service, like myawesomewebapp.com, and use a CNAME entry to map your custom domain to your azurewebsites.net domain (you can now also use an A record).
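As a sketch, that mapping is just an ordinary CNAME record in your zone; the names below are hypothetical:

```text
; hypothetical zone fragment -- a subdomain pointed at an App Service
www.myawesomewebapp.com.    IN  CNAME   myawesomewebapp.azurewebsites.net.
```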

You should now be able to see the problem: resnetportal-prod.azurewebsites.net didn't exist, yet the name resnet.microsoft.com pointed to this App Service. What did exist was a great squatting/hijacking opportunity. Anyone could have signed up for an Azure subscription, created an App Service with the name resnetportal-prod.azurewebsites.net, and then hijacked resnet.microsoft.com. Vulnerability Lab managed to discover a pretty significant issue.

What could one do with a subdomain of microsoft.com? Phishing, credential theft, and ransomware come to mind pretty quickly. I am sure an APT crew would love to have a domain or subdomain like this. There are probably only a few domain names in the world that the average user, and even the average system administrator, trusts implicitly, and microsoft.com would have to be one of them.

It isn't just the big organisations like Microsoft at risk. Any company that makes use of PaaS services like Azure App Services and CNAME entries could potentially become the next victim. Attackers might use your domain name to attack others or perhaps create more effective attacks against your own users.

Let’s consider our friends at Contoso Limited; they deployed an application for their users to contosoapp.azurewebsites.net, and they also established a custom domain name, home.contoso.com. The app was used for some time, and eventually they decided to decommission it. A developer, or maybe a sysadmin, deleted the Azure App Service, but in their haste forgot about the DNS entries. More time goes by, and now Bob from an APT group finds the entry for home.contoso.com pointing to contosoapp.azurewebsites.net. He sets up his own App Service and hijacks home.contoso.com.

Bob then sends out this email to some Contoso email addresses:

Subject: New Employee Experience
From: Contoso Marketing
Body:
    Hi Team,

    We have launched a new employee portal, it is great and has a bunch of awesome features. The site can be found at http://home.contoso.com. 

    From,

    The Contoso Marketing Team

Bob doesn’t even need to hide the links in the email; he doesn’t need any of the usual masking techniques, because he can simply display the company’s domain name. If the email’s structure, text and links are well crafted, how would Fred from Accounting determine whether this was a legitimate email?

When a user navigates to the page, perhaps it prompts for credentials, maybe it tries to run a browser exploit? I have no doubt that a campaign against an organisation like this would be extremely successful.

The details of how SaifAllah benMassaoud from Vulnerability Lab initially discovered the misconfiguration are not described in the release. My guess is that he used an automated DNS enumeration tool like DNSRecon or DNSNinja. These make the discovery of DNS records easy, and it would be straightforward to automate additional checks based upon their results to find vulnerable configurations.

In terms of defending against these issues, there are two methods, both of which need to be implemented by organisations:

  1. Appropriate change control processes: If an App Service or similar PaaS solution is being decommissioned, processes should be in place to ensure that any associated DNS records are removed;
  2. Monitor your DNS zones for configuration issues: Have automated scripts that check and send alerts if configuration issues are found.
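As an illustration of the second point, here is a minimal Python sketch of such a check (the function and record names are hypothetical, and this is not a production tool). It takes a mapping of your subdomains to their CNAME targets, such as a zone export, and flags any whose target no longer resolves; the resolver is injectable so the logic can be tested without network access:

```python
import socket

def find_dangling_cnames(records, resolve=socket.gethostbyname):
    """records maps a subdomain to its CNAME target (e.g. from a zone
    export). Returns the subdomains whose target no longer resolves --
    these are candidates for squatting and should be removed."""
    dangling = []
    for subdomain, target in sorted(records.items()):
        try:
            resolve(target)  # raises socket.gaierror on NXDOMAIN
        except OSError:
            dangling.append(subdomain)
    return dangling
```

Run on a schedule against your zone data and send an alert whenever the result is non-empty.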

If you haven’t looked at a DNS management tool like DNSControl, do so now! DNSControl was originally developed by Stack Overflow, and you don’t need to have hundreds of domain names and records to gain value from a tool that allows you to manage DNS as code.

DNSControl uses a domain-specific language for defining domains and their records, independently of your provider. You can use macros and variables to simplify your configuration. My favourite feature is the ability to configure multiple DNS providers; this is great for migrations and for fault tolerance. The CloudFlare provider even allows control over their proxy, ensuring that all of our configuration remains in source control.

Defending against this vulnerability is fairly simple: practise good change control processes and monitor your DNS zones. DNS enumeration tools like DNSRecon and DNSNinja can also assist in determining your organisation's risk, whilst DNS-as-code tools like DNSControl will give us better control over our DNS change processes.

Kieran Jacobsen

Sending SYSLOG messages to TCP hosts and POSH-SYSLOG V3.0

The Posh-SYSLOG module has been very popular since its first release in 2013. SYSLOG provides a common integration point into enterprise monitoring, alerting and security systems, and administrators and developers often need to push messages from their scripts and automation activities to a SYSLOG server.

Two pieces of feedback have been common: requests for TCP support and for better performance. I am excited to announce a new version of Posh-SYSLOG, which introduces TCP support and a number of performance improvements.
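This post doesn't reproduce the module's TCP code, but it is worth noting why TCP support is more than swapping the socket type: unlike UDP, TCP is a stream, so each syslog message must be framed so the receiver can find its boundaries. RFC 6587 describes octet-counting, which a short Python sketch (illustrative only, not the module's implementation) makes clear:

```python
def frame_syslog_tcp(message: str) -> bytes:
    """RFC 6587 octet-counting: prefix each syslog message with its
    length in bytes, followed by a single space."""
    payload = message.encode("utf-8")
    return str(len(payload)).encode("ascii") + b" " + payload

# "<13>hello" is 9 bytes, so the frame on the wire is b"9 <13>hello"
```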

Kudos must go to Jared Poeppelman (powershellshock) from Microsoft who provided the TCP logic, optimised performance, and added additional Pester cases. I took some time out to make some additional improvements on top of Jared’s work. Due to the significant changes and additional functionality, I am considering this to be version 3.0.

The easiest way to get the module is from the PowerShell Gallery using Install-Module -Name Posh-SYSLOG. You can also clone the GitHub repository.

The first big change by Jared was to implement the Begin {} Process {} End {} structure for Send-SyslogMessage. With this structure, we can leverage the pipeline to send multiple messages. I have developed scripts where I needed to read application logs and then send them to a SIEM product using SYSLOG; access via the pipeline simplifies these scripts and hopefully improves their performance.

The logic around determining the correct value to be sent as the hostname has been cleaned up and refined. This piece was free of issues; however, some small tweaks potentially improved performance. The function is now called as part of the Begin {} block, improving performance for bulk message transmissions. The logic has also been moved out to its own internal function, allowing for separate testing and better mocking opportunities in Pester.

Another source of performance improvement is the removal of the need to call Test-NetConnection. This is a dramatic improvement when Send-SyslogMessage is executed in a workgroup (that is, a non-domain joined) environment. Previously we called Test-NetConnection to determine the correct network interface used to communicate with the SYSLOG server; now we simply ask the socket for the source IP address and then use Get-NetIPAddress to check if this is a statically assigned address.
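The "ask the socket" trick is easiest to show in Python (a sketch of the idea, not the module's code): connecting a UDP socket performs the kernel's routing decision without sending a single packet, and getsockname() then reports the source address that would be used:

```python
import socket

def get_source_ip(server: str, port: int = 514) -> str:
    """Return the local IP the OS would use to reach the given
    SYSLOG server. UDP connect() sends nothing on the wire; it
    just binds the socket to the route's source address."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.connect((server, port))
        return s.getsockname()[0]
```

In the module, the address found this way is then checked with Get-NetIPAddress to see whether it is statically assigned.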

All the network functionality and calls have been moved to internal functions. This helped with testing: I can now mock all the network activities, which allows for better testing with Pester. The network tests are now much more reliable.

Finally, Jared and I have increased the number of Pester tests. I have aimed to over-test everything in the hope that all potential issues are flushed out; a massive upgrade in functionality and such a refactoring have the potential to introduce issues. I am confident that things have been appropriately tested. If issues are found, please raise them via GitHub.

So what is the future for Posh-SYSLOG? For now, I just want to ensure there are no bugs or issues; after that, I want to look at implementing the other commonly requested feature: TLS support.

The PowerShell community has been amazing, I have been lucky to have such wonderful community contributions over the past few years. A massive thanks to Jared, Ronald, Xtrahost and Fredruk.

Kieran Jacobsen

Securing PowerShell DSC within Azure ARM Templates

There is no doubt that Azure ARM templates are now the best way to deploy and manage resources within Azure. Recently, I found myself creating an ARM template that deployed some Windows Server 2016 virtual machines with the PowerShell DSC extension to configure them. This is typically very simple: define the virtual machine and include the extension. This time, however, I also needed to include sensitive pieces of information, API keys and credentials, and I wondered: what is the best way to protect these?

If you are familiar with DSC, you know you can either leave these assets in plaintext or encrypt them with a certificate. Most documentation and posts found online will direct you towards the plaintext approach when using the DSC extension, but there is a significantly better approach, one which allows sensitive information to be encrypted throughout the deployment process. Even better, it is really simple to implement.

Whilst I was working through setting this up, I found there are some quirks in how you need to put the ARM template together, and thought it would be a good topic for a post. I am going to talk about some of the challenges that I found, as well as provide some working examples.

Understand the apiVersion and typeHandlerVersion

When you define the PowerShell DSC extension for an Azure virtual machine, there are two fields that control the version and functionality that the extension will provide, apiVersion and typeHandlerVersion.

In terms of ARM templates, the apiVersion specifies how the deployment process talks to Azure, that is, the layout and content of the calls made to the underlying API; as such, it controls what we can place into our template files. Microsoft uses the apiVersion to provide backward compatibility. When breaking changes are introduced, typically as part of introducing new features, Microsoft will define a new version. This allows older scripts and templates to continue to function long after they were written. For the PowerShell DSC extension, the apiVersion will impact what fields are valid in your template definition.

The second, typeHandlerVersion, is critical: it defines what version of the extension will be deployed to the virtual machine and then executed. Whilst apiVersion controls talking to the API, typeHandlerVersion controls what happens on the virtual machine itself. This extension regularly receives new features and bug fixes, from support for new operating systems, privacy settings and WMF versions, to a tonne of issues fixed in between.
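Both values sit on the extension resource in the template. A minimal fragment (the versions shown are only examples, matching those used later in this post):

```json
{
    "type": "Microsoft.Compute/virtualMachines/extensions",
    "apiVersion": "2016-03-30",
    "properties": {
        "publisher": "Microsoft.Powershell",
        "type": "DSC",
        "typeHandlerVersion": "2.24"
    }
}
```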

Now for the catch, where the devil enters the details. Depending upon what versions you specify, your template syntax may need to change. I found out about these quirks the hard way; this tiny detail can cost you quite a bit of time troubleshooting syntax issues. A mismatch between the apiVersion, typeHandlerVersion and your template’s syntax can lead to errors during template validation, DSC compilation or DSC application.

So, what happens if you use an older version or mix versions and syntax? Well, nothing bad might happen, or you might end up with template validation errors, DSC compilation errors, or even errors during DSC processing (that is, as the DSC is applied to the server).

But what about autoUpgradeMinorVersion?

According to the documentation, the extensions have a Boolean attribute, autoUpgradeMinorVersion, that allows a user to simply pick the major version and have Azure install the latest matching version on their virtual machine at provisioning time. It should be highlighted that this only happens during the provisioning of the extension; extensions will not be upgraded unless the user explicitly removes and then re-provisions the extension. Hotfixes (that is, those of the format 2.9.x.x) are automatically selected; you don’t get any control. The problem is, this only upgrades the extension; it isn’t going to change your ARM template syntax.

Be Wary of Visual Studio Created Templates

If you are developing your ARM templates within Visual Studio, you are probably using the “Add New Resource” window to include the PowerShell DSC extension in your virtual machine. It so happens that, at least in Visual Studio 2017, this defaults to an old typeHandlerVersion of 2.9. For most, this isn’t a problem, especially if autoUpgradeMinorVersion is set to true.

I do want to point out that versions 2.4 up to 2.13 were retired in August 2016.

If you do need to handle sensitive data, then work with the release history for the PowerShell DSC extension, and the documentation on the syntax to ensure that you are working against the latest versions.

How to correctly handle sensitive data?

Note/Warning: At the time of writing, this all worked correctly for versions 2.24 to 2.26. It may not be correct for later versions.

So how do we correctly pass sensitive data between our template through to our virtual machine and DSC?

To begin with, you need to understand that within the properties for the DSC extension, there are two attributes that can be used to specify settings. The first, settings, you are probably familiar with. The second, protectedSettings, you may also have seen; typically it holds configurationUrlSasToken, but you can also use it to pass sensitive data to the DSC configuration.

So, I have a super simple DSC configuration that creates a user account using the specified details. I have opted to provide the account’s username and password using a PSCredential parameter, whilst the account’s description is specified as a string parameter.

Configuration Main
{
    Param
    (
        [Parameter(Mandatory=$true)]
        [ValidateNotNullorEmpty()]
        [PSCredential]
        $Credential,

        [Parameter(Mandatory=$true)]
        [ValidateNotNullorEmpty()]
        [String]
        $AccountDescription
    )

    Node Localhost
    {
        User NewUser
        {
            UserName             = $Credential.UserName
            Description          = $AccountDescription
            Disabled             = $false
            Ensure               = 'Present'
            Password             = $Credential.Password
            PasswordNeverExpires = $true
        }
    }
}

Obviously, I want to ensure that the credential is kept secure; I don’t want it left in plaintext as part of the DSC compilation.

The ARM template looks like this:

{
    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "VirtualMachineName": {
            "type": "string",
            "metadata": {
                "description": "Name of the virtual machine"
            }
        },
        "Username": {
            "type": "string",
            "metadata": {
                "description": "Account Username"
            }
        },
        "Password": {
            "type": "securestring",
            "metadata": {
                "description": "Account Password"
            }
        },
        "DSCPackagePath": {
            "type": "string",
            "metadata": {
                "description": "Path to DSC Package"
            }
        },
        "DSCPackageSasToken": {
            "type": "securestring",
            "metadata": {
                "description": "Sas Token"
            }
        }
    },
    "variables": {
        "AccountDescription": "Account Created as part of DSC"
    },
    "resources": [
        {
            "name": "[concat(parameters('VirtualMachineName'),'/Microsoft.Powershell.DSC')]",
            "type": "Microsoft.Compute/virtualMachines/extensions",
            "location": "[resourceGroup().location]",
            "apiVersion": "2016-03-30",
            "dependsOn": [
                "[concat('Microsoft.Compute/virtualMachines/', parameters('VirtualMachineName'))]"
            ],
            "properties": {
                "publisher": "Microsoft.Powershell",
                "type": "DSC",
                "typeHandlerVersion": "2.24",
                "autoUpgradeMinorVersion": true,
                "protectedSettings": {
                    "configurationUrlSasToken": "[parameters('DSCPackageSasToken')]",
                    "configurationArguments": {
                        "Credential": {
                            "Username": "[parameters('Username')]",
                            "Password": "[parameters('Password')]"
                        }
                    }
                },
                "settings": {
                    "configuration": {
                        "url": "[parameters('DSCPackagePath')]",
                        "script": "MyDsc.ps1",
                        "function": "Main"
                    },
                    "configurationArguments": {
                        "AccountDescription": "[variables('AccountDescription')]"
                    }
                }
            }
        }
    ]
}

See the cool shortcut I have used in the protectedSettings? The Credential is built within the template; I provide the username and password, Azure takes care of the rest, and DSC just gets a PSCredential. Pretty neat!

It is worth highlighting that my non-sensitive data, like the AccountDescription, is still found within the settings attribute; I have only moved my sensitive data to protectedSettings. Of course, this decision is entirely up to you.

Now, it might just be my paranoia, but I found that things went much more smoothly if I placed protectedSettings before settings. It shouldn’t matter, but things just seemed happier, and I know that doesn’t sound very logical.

If you have organisational privacy concerns

It is worth noting that if you have strict organisational privacy or data sharing controls, you should disable data collection:

"settings": {
    "privacy": {
        "DataCollection": "Disable"
    }
}

In Summary

The two examples shown here are up on GitHub; I have also included a more detailed example as a Visual Studio 2017 project.

Kieran Jacobsen

PS. I am still looking for talented individuals to join my team at Readify. I am after people with a passion for infrastructure, Azure, operations, and everything in between. Does that sound like you? Hit me up on Twitter or LinkedIn!

Azure Automation and DevSecOps Presentation Content

I have had a number of speaking opportunities over the last 2 months, and after each one, I have promised that I would post the slides up here. So, without further ado, you can find the slides for each of the following sessions:

I want to thank all of those who attended, your interest and your questions make all of the challenging work worthwhile. I also want to thank all of the organisers, without your work there simply wouldn’t be any conference for me to present at.

My next presentation will be Ransomware 0, Admins 1 at Experts Live Australia on the 6th of April.


Readify Dev Breakfast WA

Infrastructure Saturday 2017

CrikeyCon 2017