Channel: Matthew Dowst – Catapult Systems

SQL 2014 Install Error: The specified account already exists (Solved)


I was recently attempting to install SQL Server 2014 on Windows Server 2016, and the install kept hanging at the step “Install_sqlncli_Cpu64_Action: PublishProduct. Publishing product information.”


After checking the Summary.txt log in the folder “C:\Program Files\Microsoft SQL Server\120\Setup Bootstrap\Log”, I noticed that every feature failed with the same details:

Feature: Management Tools – Complete
Status: Failed
Reason for failure: An error occurred for a dependency of the feature causing the setup process for the feature to fail.
Next Step: Use the following information to resolve the error, and then try the setup process again.
Component name: SQL Server Native Client Access Component
Component error code: 1316
Component log file: C:\Program Files\Microsoft SQL Server\120\Setup Bootstrap\Log\20161010_084553\sqlncli_Cpu64_1.log
Error description: The specified account already exists.
Error help link: http://go.microsoft.com/fwlink?LinkId=20476&ProdName=Microsoft+SQL+Server&EvtSrc=setup.rll&EvtID=50000&ProdVer=12.0.2000.8&EvtType=sqlncli.msi%40PublishProduct%401316

When I checked the sqlncli_Cpu64_1.log that was listed as the failed “Component log file”, I noticed that it failed with error 1603, which is a pretty generic error message. However, I also noticed that the product name in the log was Microsoft SQL Server 2012 Native Client, not 2014. I checked my installed programs and confirmed that the Microsoft SQL Server 2012 Native Client was indeed installed on this server.

Solution

All I had to do to resolve this error was uninstall the Microsoft SQL Server 2012 Native Client. Once I did that, I was able to install SQL Server 2014 without any issues.
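If you prefer to script the removal, something along these lines should work; this is just a rough sketch that assumes the Windows PowerShell 5 PackageManagement cmdlets are available, and the display name may differ slightly on your system. Uninstalling from Programs and Features works just as well.

# Sketch: remove the existing SQL Server 2012 Native Client before re-running setup
# (assumes the PackageManagement cmdlets; adjust the name if it differs on your system)
Get-Package -Name 'Microsoft SQL Server 2012 Native Client' -ErrorAction SilentlyContinue |
    Uninstall-Package -Force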


Backup Your Azure Automation Modules


I recently ran into a situation where I had created some custom Azure Automation modules, and I wanted to be able to make a backup of all of them to a centralized location. Since Azure Automation does not have any source control integration for assets, I decided I needed to come up with a way of backing these up.

After doing some digging, I discovered that all the custom modules are loaded to the directory C:\Modules\User when you run an Azure Automation runbook in Azure. Using this, and a blog I had previously seen by Robin Shahan, Uploading and downloading files to Azure Blob Storage with PowerShell, I was able to create a runbook that will back up all my custom modules. The runbook will connect to your Azure Storage Account, create a container named with the current date and time, and then upload all the files and folders from C:\Modules\User to the blob storage.
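To give you an idea of what the runbook does under the hood, below is a minimal sketch of the core upload logic. It assumes the classic Azure storage cmdlets and that $StorageAccountName and $StorageAccountKey are supplied as parameters; the published runbook also handles connecting with the credential asset and error handling.

# Minimal sketch of the core backup logic (classic Azure storage cmdlets assumed)
$context = New-AzureStorageContext -StorageAccountName $StorageAccountName -StorageAccountKey $StorageAccountKey

# Create a container named with the current date and time (container names must be lowercase)
$container = Get-Date -Format 'yyyyMMdd-HHmmss'
New-AzureStorageContainer -Name $container -Context $context | Out-Null

# Upload every file under C:\Modules\User, keeping the relative path as the blob name
$moduleRoot = 'C:\Modules\User'
Get-ChildItem -Path $moduleRoot -Recurse -File | ForEach-Object {
    $blobName = $_.FullName.Substring($moduleRoot.Length + 1)
    Set-AzureStorageBlobContent -File $_.FullName -Container $container -Blob $blobName -Context $context | Out-Null
}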

Follow the instructions below to implement this solution for yourself.

  1. Create a Classic Storage Account in the Azure Portal.
    Note: It must be a classic storage account because the cmdlets to upload files to Azure blob storage do not exist in the AzureRM modules yet.
  2. Once you create your storage account click on Access Keys under Settings, and make a note of the Storage Account Name and Primary Access Key.
  3. Import the Runbook from the Gallery in Azure Automation by searching for “Backup Azure Automation Modules”, or download it from TechNet Gallery and manually import the Backup-AAModules.ps1 to your Azure Automation runbooks.
  4. Create a credential asset for an account with access to upload to the blob storage.
  5. Open the Backup-AAModules runbook, then save and publish it.
  6. Start the runbook using the name of the credential asset you created, and the Storage Account Name and Primary Access Key you noted in step 2.

The runbook should start and your custom modules should be uploaded to your storage account.

Install Office 2016 OneDrive for All Users


I recently ran into an interesting issue when deploying Office 2016 ProPlus on Windows 7. I discovered that the OneDrive application installs under the user context, not to the system like the rest of the suite does. This is not a big deal if you are using Windows 10, as the OneDrive client is built in, but I was deploying to Windows 7 machines. On Windows 7, only the user who ran the installer would have the OneDrive client installed. Any new profile added to the machine would also receive the OneDrive client, but it would not be available to users with existing profiles on the machine. To work around this, I created an Active Setup registry key and pointed it to the OneDriveSetup.exe that is installed along with the suite.

Active Setup is used to execute commands once per user during the login process. In this case I am using it to run the OneDriveSetup.exe silently for anyone who logs onto the machine, regardless of whether or not they had an existing profile.

Below is a copy of the PowerShell I used to create the registry entry. I have also included the command-line version in case you are using a batch file.

$activeSetupPath = "HKLM:\SOFTWARE\Microsoft\Active Setup\Installed Components\OneDrive"
# Create the Active Setup key if it does not already exist
if(!(Test-Path $activeSetupPath))
{
    New-Item -Path $activeSetupPath -Force | Out-Null
}
# Set the Active Setup version and the command to run once per user at logon
New-ItemProperty -Path $activeSetupPath -Name "Version" -Value "1" -PropertyType String -Force | Out-Null
New-ItemProperty -Path $activeSetupPath -Name "StubPath" -Value '"C:\Program Files\Microsoft Office\root\Integration\OneDriveSetup.exe" /silent' -PropertyType String -Force | Out-Null

reg add "HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Active Setup\Installed Components\OneDrive" /v "Version" /d "1" /t REG_SZ /f

reg add "HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Active Setup\Installed Components\OneDrive" /v "StubPath" /d "C:\Program Files\Microsoft Office\root\Integration\OneDriveSetup.exe /silent" /f

 

Replacing Verizon’s Message+ for Android


Yesterday, while I was working, my Message+ application, which I use for text messaging, updated itself to version 6.4.1. I soon discovered the app now requires you to create a public profile, with no way to opt out or cancel. So, in order to read the text my wife sent me, I had to agree to it. When I got home I called Verizon, ended up having to press the 0 key a few dozen times just to talk to a person, and got no real answer. Then I checked their privacy policy from the link in the app, and it gave no clear indication of what this public profile is or who can access the information. Luckily, when it forced me to create the profile, I just put in some random characters instead of my name. Now I’ve decided it is time to use a new app.

Fortunately, with Android this is a very simple task. After some research, I decided to switch to Signal. Signal is a free, open-source app that is similar to Messenger in look and feel. Plus, it has the added feature of encrypting messages when both the sender and recipient are using Signal. What really sealed the deal for me was that it also has a Chrome extension, allowing you to send messages from your desktop.

Replacing Message+

Installing and setting up Signal to replace Message+ was a breeze. All you have to do is download it from the Google Play Store: https://play.google.com/store/apps/details?id=org.thoughtcrime.securesms&hl=en. Once it finishes installing, launch the application. It will ask if you want to make it the default SMS app; just tap the banner to do so. It then prompted me to import my current SMS messages. I did this as well, and all my previous messages showed up. As of right now, my only complaint is that it prompts you to invite contacts who are not using Signal. It is kind of annoying, but so far it does not reappear for a contact once you click the X on the invite banner.

Follow Up

I’ll be sure to report back after a few days, to see if Signal is going to meet my messaging needs.

Fix Azure Storage Queue Error 403 from Web App


I recently ran into a problem uploading to an Azure Storage Queue from an Azure App Services web app. The problem began when I moved the web app from one subscription to another. After the move, I received a (403) Forbidden message when attempting to write to a queue. The storage account for the queue did not move and other deployments of this app were still able to write to it. If I ran it locally, from my computer, it worked.

After trying multiple different combinations of storage locations, connection strings, NuGet package versions, etc., I decided to create a new App Service, this time back in the original subscription. I deployed the web app to this new App Service, and it worked. This made me think about Resource Providers.

So, I checked the registered resource providers in both subscriptions and noticed that the original subscription had over a dozen more resource providers enabled than the subscription I was moving the App Service to. I started going down the list and registering the ones that looked like they might play a role in this issue.

Solution

After I registered the Microsoft.ServiceBus provider, the request worked and the web app was able to write to the queue once again. The other two providers I enabled before Microsoft.ServiceBus were Microsoft.ApiManagement and Microsoft.AppService. I’m not sure if it was just Microsoft.ServiceBus or a combination of the three, but it is working now!

Below is a list of things I tried prior to enabling the Resource Providers. These are things you might want to consider as well if you run into a similar situation.

  1. Confirmed the storage name and key are correct.
  2. Created a new storage account in the same subscription
  3. Updated the Microsoft.WindowsAzure.Storage package to the latest version
  4. Confirmed that the web.config is being updated on publish
  5. Logged the connection string and ensured it was passing the right value
  6. Confirmed that the time on the App Service server is correct
  7. Set the time zone on the App Service to Central Standard Time. (The storage account is in South Central US)

Also, I have included a sample script below that you can use to compare the resource providers between two different subscriptions. It will output the provider names, which you can then use with the Register-AzureRmResourceProvider cmdlet to quickly enable them in your new subscription.

$creds = Get-Credential
$sourceSubscription = 'GUID of the source subscription'
$destinationSubscription = 'GUID of the destination subscription'

# Get the resource providers from the source subscription
Add-AzureRmAccount -Credential $creds -SubscriptionId $sourceSubscription 
$source = Get-AzureRmResourceProvider 

# Get the resource providers from the destination subscription
Add-AzureRmAccount -Credential $creds -SubscriptionId $destinationSubscription 
$destination = Get-AzureRmResourceProvider

# Check each enabled resource providers from the source against the destination
Foreach($resource in $source)
{
    # Check if the resource is enabled in the destination and display if not
    if(!($destination | ?{$_.ProviderNamespace -eq $resource.ProviderNamespace}))
    {
        Write-Output $resource.ProviderNamespace
    }
}
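If you want to take it one step further, you can register the missing providers directly from the comparison output. Here is a rough example, assuming you are still connected to the destination subscription:

# Register each provider that is missing from the destination subscription
$source | Where-Object { $destination.ProviderNamespace -notcontains $_.ProviderNamespace } |
    ForEach-Object { Register-AzureRmResourceProvider -ProviderNamespace $_.ProviderNamespace }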

 

Azure Webjob Schedule Deployment Error 409 Conflict


If you receive an Error 409 Conflict when deploying a scheduled Webjob to Azure, there are a few things you’ll want to check.

Pricing Tier

First, in the Azure Portal, navigate to Schedule Job Collections and check the Pricing Tier that your collection is using. If it is set to the Free tier, then you are limited to a maximum frequency of once an hour. So, if you are attempting to create a job with a frequency of less than an hour, you will receive the Error 409. You need to be on the Standard tier or higher to create jobs that run more frequently than once an hour.

Quotas

I recently discovered that after changing the Pricing tier from Free to Standard, my jobs still failed to deploy. Making the change does not automatically update the quotas on the Schedule Job Collection. To update them in the Azure Portal, navigate to Schedule Job Collections, select your collection, and then check the Quotas blade for your max recurrence. This must be set to a value less than or equal to the frequency set in your scheduler configuration file.

Create Zero-Touch Windows 10 ISO


I’ve recently been doing some testing between the different Windows 10 releases and wanted a quick way to install new VMs without maintaining a bunch of different VM templates or using MDT. To do this, I made an ISO image that installs the base Windows 10 image without any manual interaction required. This post will go over the steps you can use to make your own Windows 10 Zero-Touch ISO.

Prerequisites

Before you begin, you must have a Windows 10 ISO. If you don’t have one, you can use the Windows Media Creation Tool to create one from any licensed installation of Windows 10.

You will also need to install the Windows Assessment and Deployment Kit (ADK). When installing the ADK you only need to install the Deployment Tools feature.

Prep the ISO Files

Since we want this to be Zero-Touch, there are a few things you need to do to prevent the “Press any key to boot from CD or DVD” prompt.

  1. Extract the contents of your ISO. In the examples used below I extracted mine to E:\Win10_ISO.
  2. Delete the file .\boot\bootfix.bin (This will prevent the prompt when booting using BIOS.)
  3. In the folder .\efi\microsoft\boot\ rename the following files (This will prevent the boot prompt on UEFI systems)
    1. efisys.bin –> efisys_prompt.bin
    2. efisys_noprompt.bin –> efisys.bin
  4. The next step only applies if you used the Media Creation Tool. If you used an ISO image you can skip to the next section.
  5. In the folder .\sources\ rename the file install.esd –> install.wim

Script It

$ISO = "E:\Windows.iso"
$FolderPath = "E:\Win10_ISO\"

# Get current drive letters
$drives = (Get-Volume).DriveLetter

# Mount the ISO image
$image = Mount-DiskImage -ImagePath $ISO -PassThru

# Get the new drive letter
$drive = (Get-Volume | ?{$drives -notcontains $_.DriveLetter -and $_.DriveLetter -ne $null}).DriveLetter

# Create destination folder if it doesn't exist
If (!(test-path $FolderPath)){
    New-Item -type directory -Path $FolderPath}

# Copy the ISO files
Get-ChildItem -Path "$($drive):\" | %{
    Copy-Item -Path $_.FullName -Destination $FolderPath -recurse -Force}

# dismount the ISO
$image | Dismount-DiskImage

# Delete the bootfix.bin
Remove-Item (Join-Path $FolderPath "boot\bootfix.bin") -Force

# Rename the efisys files
Rename-Item (Join-Path $FolderPath "efi\microsoft\boot\efisys.bin") "efisys_prompt.bin" 
Rename-Item (Join-Path $FolderPath "efi\microsoft\boot\efisys_noprompt.bin") "efisys.bin" 

# Rename install.esd to install.wim
If (Test-Path $(Join-Path $FolderPath "source\install.esd")){
	Rename-Item $(Join-Path $FolderPath "source\install.esd") "install.wim"
}

Create Autounattend.xml

The Autounattend.xml is used to answer all the questions that you are asked during the installation process. I have uploaded a sample file to my Gist profile that you can download to get started quickly and easily. This Autounattend.xml has been tested on versions 1511, 1607, and 1709, using the x64 architecture.

  1. Download the Autounattend.xml and open in your preferred text editor
  2. The DiskConfiguration section sets the partitions. This file will create a 100 MB EFI partition, a 4 GB recovery volume, and assign the rest of the disk space to the OS partition. You should not need to make any changes here.
  3. Update the sections SetupUILanguage, InputLocale, SystemLocale, UILanguage, and UserLocale to your required language and location.
  4. In the UserAccounts section you can set the password for the administrator account.
  5. Update the AutoLogon section with the same password as the UserAccounts
  6. Make note of the SkipMachineOOBE and SkipUserOOBE. These are set to true to allow you to bypass the initial setup screens after Windows is installed. These should only be used for testing purposes. If you are creating a production image be sure to remove these two sections.
  7. ComputerName is set to * to generate a random name.
  8. Update the RegisteredOwner and RegisteredOrganization to your organization.
  9. Place the xml in the top level of the folder you extracted the ISO to.

If you want to do any customization beyond what is covered above you can use the Windows System Image Manager that was included with the Windows ADK you installed earlier.

Script It

$FolderPath = "E:\Win10_ISO\"
[string]$password = Read-Host -Prompt "Enter the admin password to use"
[string]$ComputerName = Read-Host -Prompt "Enter the computer name use '*' to randomly generate it"
[string]$RegisteredOwner = Read-Host -Prompt "Enter the Registered Owner"
[string]$RegisteredOrganization = Read-Host -Prompt "Enter the Registered Organization"

$AutounattendXML = $(Join-Path $FolderPath "Autounattend.xml")
# Download the sample Autounattend.xml
$Uri = "https://gist.githubusercontent.com/mdowst/e81cc0608a0c554d8c3381ebc7b6e15e/raw/dc55c6c1eef66fc0c4db0652ce8300e9ff507e0f/Autounattend.xml"
Invoke-WebRequest -Uri $Uri -OutFile $AutounattendXML

# load the Autounattend.xml
[xml]$Autounattend = Get-Content $AutounattendXML

# Update the values
($Autounattend.unattend.settings | ?{$_.pass -eq 'oobeSystem'}).component.AutoLogon.Password.Value = $password
($Autounattend.unattend.settings | ?{$_.pass -eq 'oobeSystem'}).component.UserAccounts.AdministratorPassword.Value = $password
($Autounattend.unattend.settings | ?{$_.pass -eq 'specialize'}).component.ComputerName = $ComputerName
($Autounattend.unattend.settings | ?{$_.pass -eq 'specialize'}).component.RegisteredOwner = $RegisteredOwner
($Autounattend.unattend.settings | ?{$_.pass -eq 'specialize'}).component.RegisteredOrganization = $RegisteredOrganization

# Save the updated XML file
$Autounattend.Save($AutounattendXML)

Create the ISO

  1. Go to Start > Windows Kits > Deployment and Imaging Tools Environment
  2. Run the command below to generate your ISO.

oscdimg.exe -m -o -u2 -udfver102 -bootdata:2#p0,e,bE:\Win10_ISO\boot\etfsboot.com#pEF,e,bE:\Win10_ISO\efi\microsoft\boot\efisys.bin E:\Win10_ISO E:\Win10Ent1607x64.iso

That’s it! Your ISO is now ready to use.

Script It!

# Create the ISO image
$DevToolsDirectory = "C:\Program Files (x86)\Windows Kits\10\Assessment and Deployment Kit\Deployment Tools\"
$FolderPath = "E:\Win10_ISO\"
$ISO = 'E:\Win10_ISO.iso'

$command = $(Join-Path $DevToolsDirectory 'amd64\Oscdimg\oscdimg.exe')
$arguments = '-m','-o','-u2','-udfver102',"-bootdata:2#p0,e,b$(Join-Path $FolderPath 'boot\etfsboot.com')#pEF,e,b$(Join-Path $FolderPath 'efi\microsoft\boot\efisys.bin')",$FolderPath,$ISO

& $command $arguments

Bonus Script: Create Virtual Machine

The script below can be used to create a Hyper-V virtual machine, mount your newly created ISO, and start the VM so the Windows 10 installation begins.

# Create new VM and install OS
$VM = 'blog01'
$ISO = "E:\Win10Entx64.iso"
$Path = "E:\Virtual Machines"
$SwitchName = 'External NIC'

New-VM -Name $VM -NewVHDPath (Join-Path $Path "$VM\$VM.vhdx") -NewVHDSizeBytes 40GB -SwitchName $SwitchName -Path $Path -Generation 2
Set-VMMemory -VMName $VM -DynamicMemoryEnabled $true -MinimumBytes 512MB -MaximumBytes 2048MB -Buffer 20 -StartupBytes 1024MB
Add-VMDvdDrive -VMName $VM -Path $ISO
Set-VMFirmware -VMName $VM -BootOrder $(Get-VMDvdDrive -VMName $VM),$(Get-VMHardDiskDrive -VMName $VM)
Start-VM $VM
vmconnect localhost $VM

All scripts from this post can also be found on my Gist page.

Azure Storage PowerShell Error: Cannot find an overload


I recently ran into an issue where, when I attempted to query or write to an Azure Storage table, I would sometimes receive an error similar to the ones below.

  • Cannot find an overload for “Insert” and the argument count: “1”.
  • Cannot find an overload for “ExecuteQuery” and the argument count: “1”.

The strange thing is it only seemed to happen on certain machines or scripts. Then I found a bug report on GitHub describing just that. It turns out this can happen when you have both the Azure.Storage and AzureRM.Storage modules loaded at the same time. Each ships its own version of the DLL file Microsoft.WindowsAzure.Storage.dll, and in some cases PowerShell would grab the Azure one and in other cases the AzureRM one. The problem is that the Azure.Storage module is required to query or write to a storage table, regardless of whether the storage account is a Classic or Resource Manager (RM) account.
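If you want to confirm which copies of the assembly are loaded in your current session, a quick check like the one below will list them. This is just a diagnostic aid, not part of the workaround.

# List every copy of Microsoft.WindowsAzure.Storage currently loaded in the session
[AppDomain]::CurrentDomain.GetAssemblies() |
    Where-Object { $_.GetName().Name -eq 'Microsoft.WindowsAzure.Storage' } |
    Select-Object FullName, Location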

The development team is aware of this and is working on a fix. However, in the meantime, there is a workaround you can use to ensure your scripts will work.

What you need to do is force any query and entity objects to use the same version of the Microsoft.WindowsAzure.Storage.dll file as the Azure.Storage module. You can do this by saving the assembly’s full name to a variable and then specifying it when you create these objects.

If you look at the example below of a table query, you’ll see on line 8 we create the $assemblySN variable with the assembly’s full name. Then on line 11, we add that to the New-Object command for creating the query object from the TableQuery class.

#Define the storage account and context.
$Ctx = New-AzureStorageContext -StorageAccountName $StorageAccountName -StorageAccountKey $StorageAccountKey

#Get a reference to a table.
$table = Get-AzureStorageTable -Name $TableName -Context $Ctx

#get the $table Assembly FullName
$assemblySN = $table.CloudTable.GetType().Assembly.FullName

#Create a table query.
$query = New-Object -TypeName  "Microsoft.WindowsAzure.Storage.Table.TableQuery,$assemblySN"

#Execute the query.
$entities = $table.CloudTable.ExecuteQuery($query)

When you want to write or delete rows from the table, you need to use the TableOperation and DynamicTableEntity classes. For the DynamicTableEntity you can use the same trick you used with the TableQuery above. See line 11 below. However, you cannot call the TableOperation class using the New-Object cmdlet like you can with the other classes. In this case, you can use the Invoke-Expression cmdlet to load the class with the specific version. You can see this on line 14 of the example below.

#Define the storage account and context.
$Ctx = New-AzureStorageContext -StorageAccountName $StorageAccountName -StorageAccountKey $StorageAccountKey
    
#Get a reference to a table.
$table = Get-AzureStorageTable -Name $TableName -Context $Ctx

#get the $table Assembly FullName
$assemblySN = $table.CloudTable.GetType().Assembly.FullName

#Execute the Insert.
$entity = New-Object -TypeName "Microsoft.WindowsAzure.Storage.Table.DynamicTableEntity,$assemblySN" -ArgumentList $partitionKey, $rowKey
$entity.Properties.Add("columnA", $columnA)
$entity.Properties.Add("columnB", $columnB)
$result = $table.CloudTable.Execute((invoke-expression "[Microsoft.WindowsAzure.Storage.Table.TableOperation,$assemblySN]::InsertOrReplace(`$entity)"))

Hopefully, we’ll get a cleaner solution to this in the future, but for now, this solution is working both locally and in Azure Automation. Thanks to the Microsoft team on GitHub for working with me and others to get us functioning while they work on a permanent fix.


SCSM Orchestrator Error: An item with the same key has already been added


I recently ran into an issue with the Service Manager integration pack for Orchestrator not returning objects for a particular class. In my case it happened to be the Service Request class, but as I discovered, this same problem could happen with any class. When a runbook would execute, I would receive the error message, “An item with the same key has already been added.” Also, when I went into the Get-Object activity in Orchestrator, I would only see the “SC Object Guid” as an available property.

It turns out this problem is caused when two entries on an enumeration list have the same internal name value. This can happen if you use one of the external Enum Builder solutions or just edit the XML yourself. To find the rogue enum entry, I wrote a PowerShell script that queries a class in Service Manager, finds every enumeration list for the class and all classes it inherits from, checks each one for duplicate values, and outputs the results.
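If you just want a rough idea of the approach, something like the sketch below using the SMLets module may surface colliding internal names. This is only an illustration of the concept, not the published script; the server name is a placeholder, and it assumes SMLets is installed and can reach your management server.

# Rough illustration only (not the published script): group every enumeration value
# by its internal name and show any names that appear more than once
Import-Module SMLets
Get-SCSMEnumeration -ComputerName 'SCSM-MS01' |
    Group-Object -Property Name |
    Where-Object { $_.Count -gt 1 } |
    ForEach-Object { $_.Group | Select-Object Name, DisplayName }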

You can download the script here, from the Microsoft TechNet gallery.

To run the script, you just need to specify the display name of the class and your management server name. The display name should match the name listed in the Orchestrator activity.

The script will output the results from each enumeration list.

Once you identify the duplicate entry, all you have to do is export the management pack, change the internal name of one of the duplicates, and import the updated management pack back into Service Manager. Then if you run the script again, it should come back with no duplicates.

After that, the Orchestrator activities should start working again.

Install ElastiFlow on Ubuntu 18.04 – Part 1: Installing Ubuntu


ElastiFlow is a great open source NetFlow analyzer that works with Elastic Stack (formerly ELK Stack). Of all the NetFlow tools I’ve tested, it has, by far, the best visualizations. However, if, like me, you aren’t familiar with Elastic Stack, the setup can be rather intimidating. In this tutorial, I hope to make it easier for you and everyone who wants to use this awesome tool.

This tutorial is broken up into four parts: one for installing the Ubuntu server, one for installing and configuring Elastic Stack, one on how to implement ElastiFlow on top of it all, and finally one on how to properly maintain the solution.

Part 1: Installing Ubuntu
Part 2: Installing Elastic Stack
Part 3: Install ElastiFlow
Part 4: Solution Maintenance (coming soon)

Install and Setup of Ubuntu Server 18.04

I performed my installation of Ubuntu Server using the latest version of 18.04 on a Hyper-V virtual machine (VM), but the instructions will be the same regardless of which hypervisor you are using. The VM had a 40 GB hard drive and 4 GB of RAM.

Install Ubuntu 18.04

  1. Download Ubuntu server https://www.ubuntu.com/download/server
    Note: I found downloading via BitTorrent was actually much faster than downloading directly from the Ubuntu servers. https://www.ubuntu.com/download/alternative-downloads
  2. Create a new VM with a 40GB hard disk and at least 4GB of RAM.
  3. Insert the install media and start the VM.
  4. Select your preferred language
  5. Select your keyboard layout
  6. Choose Install Ubuntu
  7. At this step you have the choice to stick with DHCP or use a static address. If you choose to use a static address, it is best to set it up now, as the installer provides a nice, easy interface for it.
  8. Configure a proxy address if required
  9. On the Filesystem setup screen select Use An Entire Disk
  10. Press Enter to accept the default disk
  11. Select Done
  12. Select Continue
  13. Create a name for your server and set up the username and password for your administrative user
  14. Wait for the installation to complete
  15. When prompted select Reboot Now
  16. If prompted eject the installation media from the VM and press Enter to continue booting

Setup Ubuntu for ElastiFlow

If you set your IP address during the installation process the only remaining setup action is to install and configure SSH. This will allow you to use a tool like Putty to connect to the server and more easily configure the items in part 2 and 3. (copy and paste FTW!)

  1. Log into the VM using the username and password you created during the setup process
  2. Install SSH using the command below:
    sudo apt-get install -y openssh-server
  3. Start the SSH service so you can connect to the server
    sudo service ssh start
  4. On another computer open your preferred SSH client. I recommend PuTTY if you don’t have one. (https://www.putty.org/)
  5. Enter the IP address of your server, set the port to 22, select the SSH connection type, and click Open
  6. If you receive a Security Warning click Yes

You are now all set to start the installation process.

Part 2: Installing Elastic Stack

Install ElastiFlow on Ubuntu 18.04 – Part 2: Installing Elastic Stack


This blog is part of a series. Refer to the links below for the other posts in this series.

Part 1: Installing Ubuntu
Part 2: Installing Elastic Stack
Part 3: Install ElastiFlow
Part 4: Solution Maintenance (coming soon)

In this section, we will cover installing and configuring Elastic Stack 6.x, which will be used to power the ElastiFlow solution. Elastic Stack, often referred to as ELK Stack, consists of Elasticsearch, Logstash, and Kibana. Elasticsearch is a full-text search engine. Logstash is a data-collection and log-parsing engine, and Kibana is an analytics and visualization platform used to display the ElastiFlow dashboards.

Please note this tutorial is designed for personal or lab environment setups, so we are not going to cover security considerations for the Kibana website. I have provided links below to additional resources if you need to set up restricted access to the Kibana dashboards.

Installing Elastic Stack 6.x

Install Java

Logstash requires Java 8. Java 9 is not supported. So, we need to ensure that we install the proper version.

Add the Oracle Java PPA to apt

sudo add-apt-repository -y ppa:webupd8team/java

Update apt

sudo apt-get update

Install the latest stable version of Oracle Java 8

sudo apt-get install -y oracle-java8-installer

Install Elasticsearch

Import Elasticsearch Signing Key PGP key

wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -

Confirm apt-transport-https is installed

sudo apt-get install -y apt-transport-https

Add the repository definition to ensure you are getting the latest version

echo "deb https://artifacts.elastic.co/packages/6.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-6.x.list

Update apt

sudo apt-get update

Install Elasticsearch

sudo apt-get -y install elasticsearch

Configure Elasticsearch to start automatically when the system boots

sudo /bin/systemctl daemon-reload
sudo /bin/systemctl enable elasticsearch.service

Start the Elasticsearch service

sudo systemctl start elasticsearch.service

Install Kibana

Update apt

sudo apt-get update

Install Kibana

sudo apt-get -y install kibana

Configure Kibana to start automatically when the system boots

sudo /bin/systemctl daemon-reload
sudo /bin/systemctl enable kibana.service

Start the Kibana service

sudo systemctl start kibana.service

Install Logstash

Update apt

sudo apt-get update

Install Logstash

sudo apt-get -y install logstash

 

Configuring Elastic Stack

Before you can install ElastiFlow there are a few things that need to be setup in the Elastic Stack.

Configure Elasticsearch

Open the Elasticsearch configuration file for editing.

sudo nano /etc/elasticsearch/elasticsearch.yml

Edit the network.host entry in the Elasticsearch configuration to block access to Elasticsearch from outside the server.

Set – network.host: localhost

Restart the Elasticsearch service to force the changes to take effect

sudo systemctl restart elasticsearch

Configure Kibana

Open the Kibana configuration file for editing.

sudo nano /etc/kibana/kibana.yml

Edit the server.host entry in the configuration to allow external access to Kibana.
note: As I mentioned at the beginning of this tutorial, this will allow anonymous access to the Kibana dashboard. If you need to restrict access, I recommend installing and configuring Nginx.

Set – server.host: <Your Server’s IP Address>

Restart the Kibana service to force the changes to take effect

sudo systemctl restart kibana

To ensure that you can access the Kibana site externally, you will need to open the inbound port on the server's firewall for the computer you will be browsing from.

sudo ufw allow from <Your Computer's IP Address> to any port 5601 proto tcp

On your local computer open a web browser, navigate to the Kibana URL, and confirm Kibana loads

http://<Your Server’s IP Address>:5601/

If the Kibana page loads, then everything is set up and ready for you to install ElastiFlow.

Part 3: Install ElastiFlow

Install ElastiFlow on Ubuntu 18.04 – Part 3: Installing ElastiFlow


This blog is part of a series. Refer to the links below for the other posts in this series.

Part 1: Installing Ubuntu
Part 2: Installing Elastic Stack
Part 3: Install ElastiFlow
Part 4: Solution Maintenance (coming soon)

In parts 1 and 2 of this tutorial we installed the Ubuntu server and Elastic Stack (ELK Stack). Now we are ready to install and configure ElastiFlow.

Before beginning, I recommend setting up at least one network device to start sending flows to the server. In my environment, I configured my pfSense firewall to send IPv4 flows using port 9995. It is important that you make note of the port you set up in your environment, as we will need to configure ElastiFlow to receive flows on it as part of this tutorial.

The steps below are based on the directions found on the ElastiFlow GitHub site. I’ve just expanded upon them and given you the commands relevant to the Ubuntu and Elastic Stack 6.3 install we performed in parts 1 and 2. The instructions here are for ElastiFlow 3.x.

Set JVM heap size.

It is recommended to set the JVM heap size to at least 2GB. If you are going to be doing DNS lookups then 4GB is recommended.

Open the jvm.options for Logstash to set the heap size

sudo nano /etc/logstash/jvm.options

Edit the Xms and Xmx sizes in the jvm.options configuration

-Xms4g
-Xmx4g

Add and Update Required Logstash plugins

sudo /usr/share/logstash/bin/logstash-plugin install logstash-codec-sflow
sudo /usr/share/logstash/bin/logstash-plugin update logstash-codec-netflow
sudo /usr/share/logstash/bin/logstash-plugin update logstash-input-udp
sudo /usr/share/logstash/bin/logstash-plugin update logstash-filter-dns

Copy the pipeline files to the Logstash configuration path

Create a temp folder to hold install files

mkdir flowtemp

Navigate to the temp folder you just created

cd flowtemp

Download Elastiflow install files

wget https://github.com/robcowart/elastiflow/archive/master.zip

Install unzip, so you can extract the archive file you just downloaded

sudo apt-get install -y unzip

Unzip the Elastiflow files

unzip master.zip

Copy ElastiFlow configuration files to the Logstash directory

sudo cp -a elastiflow-master/logstash/elastiflow/. /etc/logstash/elastiflow/

Setup environment variable helper files

Copy the elastiflow.conf to systemd

sudo cp -a elastiflow-master/logstash.service.d/. /etc/systemd/system/logstash.service.d/

Add the ElastiFlow pipeline to pipelines.yml

Open the Logstash pipline configuration file for editing.

sudo nano /etc/logstash/pipelines.yml

Add the two lines below to the bottom of the pipelines.yml file

- pipeline.id: elastiflow
  path.config: "/etc/logstash/elastiflow/conf.d/*.conf"

Configure inputs

Open the elastiflow.conf file for editing.

sudo nano /etc/systemd/system/logstash.service.d/elastiflow.conf

The items you set here will be unique to your environment and setup. In my environment, I set the following:

ELASTIFLOW_NETFLOW_IPV4_HOST=<The Server’s IP Address>
ELASTIFLOW_NETFLOW_IPV4_PORT=9995

Remember 9995 is the port I configured the network equipment to send flows on.

I also set ELASTIFLOW_RESOLVE_IP2HOST to true and set my DNS server in ELASTIFLOW_NAMESERVER so that the dashboards will attempt to resolve DNS names instead of just displaying IP addresses. There is a performance hit for this, but since it is just my lab network, it should not be a problem.

Ensure that the port for the incoming flows is open on the firewall so that Logstash is able to receive them. NetFlow is sent over UDP, so the rule should allow UDP traffic.

sudo ufw allow from <IP Address> to any port 9995 proto udp

Create logstash system startup script

sudo /usr/share/logstash/bin/system-install

Reload systemd manager configuration and start logstash

sudo systemctl daemon-reload
sudo systemctl start logstash

Run the command below to check that logs are being received.

tail -f /var/log/logstash/logstash-plain.log

You should see log entries scrolling up the screen. Logstash can take some time to start, so wait a few minutes after running the command. If, after a little bit, it is just sitting there doing nothing, then either flows are not being sent or something is wrong with your configuration. If something is not configured correctly, you should see the error listed in the log. You can ignore any errors about there being nothing in “/etc/logstash/conf.d/*.conf”. This is because we added ElastiFlow as a second pipeline, so unless you previously set up anything on this server, that folder should be empty.

Note: If using NetFlow v9 or IPFIX you will likely see warning messages related to the flow templates not yet being received. They will disappear after templates are received from the network devices, which should happen every few minutes. Some devices can take a bit longer to send templates; Fortinet devices in particular send templates rather infrequently.

Hit Ctrl-C to exit from log tail

Setup Kibana

Assuming you are still in the flowtemp directory, run the command below to import the ElastiFlow indexes.

curl -X POST http://<Your Server's IP Address>:5601/api/saved_objects/index-pattern/elastiflow-* -H "Content-Type: application/json" -H "kbn-xsrf: true" -d @elastiflow-master/kibana/elastiflow.index_pattern.json

  1. On your local machine, download the ElastiFlow dashboards. Right-click the link below and choose Save As: https://github.com/robcowart/elastiflow/raw/master/kibana/elastiflow.dashboards.json
  2. Open your web browser and open the Kibana site.
  3. Navigate to Management > Advanced Settings
  4. Search for and set the recommended settings listed below. For details and additional information on what these are, refer to the ElastiFlow documentation.
    doc_table:highlight false
    filters:pinnedByDefault true
    state:storeInSessionStorage true
    timepicker:quickRanges see link
  5. Navigate to Saved Objects and import the elastiflow.dashboards.json file you downloaded in step 1.

Once the dashboard import completes, you are done. You can now navigate to the Dashboard page in Kibana and start exploring the different visualizations. You can also check out the ElastiFlow Dashboard Documentation.

    

PowerShell Script Searcher


If you are like me, then you are writing PowerShell scripts for pretty much everything nowadays, and even if you aren’t like me, chances are you still have a bunch of PowerShell scripts saved on your computer. I can’t tell you how many times I’m writing a script and realize that I’ve already written something similar. However, at my last count I had close to 4,000 PowerShell scripts on my computer. As you can imagine, finding the one I’m thinking of can be kind of difficult. So, I’ve written a function that I can use to search all the PowerShell scripts on my computer for a specific word or phrase.

All you have to do is pass the string to search for and the folder to look in. It will then find all the ps1 and psm1 files in that folder. Then it will check each one for the string you specified. For all matches, it will display the full path to the script and the last time it was written to.

If you don’t supply a value for Path, it will default to your user profile folder. You can also specify whether or not to perform a recursive search. By default, the search is not recursive, meaning it will only search the folder provided in the Path parameter and not its sub-folders. If you provide the Recurse parameter, it will also search the sub-folders.

You can also use the -Verbose parameter to display the lines that matched your string in each of the files it found.
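For example, to search a folder and its sub-folders and also display the matching lines, you could run something like the following (the search string and path are just placeholders):

# Search C:\Scripts and its sub-folders, showing the matching lines as verbose output
Search-PSScripts -SearchString "Invoke-RestMethod" -Path 'C:\Scripts' -Recurse -Verbose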

You can download the most current version of the script from my GitHub Gist, or copy it from below.

Function Search-PSScripts{
<#
.SYNOPSIS
Use to search the text inside PowerShell scripts for a particular string

.PARAMETER SearchString
The string to search for inside the script file
    
.PARAMETER Path
The folder path to search for PowerShell files in. Default to userprofile if not specified.

.PARAMETER Recurse
Indicates that this function gets the items in the specified locations and in all child items of the locations.

.EXAMPLE 
Search-PSScripts -searchString "Get-Help" -recurse

Description
-----------
This command searches all of the script files in the user's profile path and its subdirectories.

.EXAMPLE 
Search-PSScripts -searchString "Invoke-WebRequest" -path 'C:\Scripts' -recurse

Description
-----------
This command searches all of the script files in C:\Scripts and its subdirectories.

.EXAMPLE 
Search-PSScripts -searchString "Invoke-WebRequest" -path 'C:\Scripts' 

Description
-----------
This command searches only the script files in C:\Scripts, without searching its subdirectories.


#>
    [cmdletbinding()]
    param(
        [Parameter(Mandatory=$true)]
        [string]$SearchString, 
	    [Parameter(Mandatory=$false)]
        [string]$Path = $env:USERPROFILE,
        [Parameter(Mandatory=$false)]
        [switch]$Recurse
    )

    $filter = "*.ps1","*.psm1"

    # Confirm path is valid
    if(!(Test-Path $Path)){
        throw "'$Path' is not a valid folder or is not accessible."
    }

    # Get the name of this script to exclude it
    $invocation = (Get-Variable MyInvocation -Scope 1).Value;

    
    $progressParam = @{
        Activity = "Search for PowerShell Script in $Path"
        Status = "Depending on the number of scripts this may take some time"
        PercentComplete = 0
        id = 1
        }
    Write-Progress @progressParam
    
    # Get all files in the path
    if($Recurse){
        $fileList = Get-ChildItem $Path -Recurse -include $filter -Exclude $invocation.MyCommand -File
    } else {
        $Path = (Join-Path $Path '*.*')
        $fileList = Get-ChildItem $Path -include $filter -Exclude $invocation.MyCommand -File
    }

    [System.Collections.Generic.List[PSObject]] $results = @()
    $progress=1
    # Check each file for the string pattern
    Foreach($file in $fileList){
        $progressParam = @{
            Activity = "Search for '$SearchString' - $progress of $(@($fileList).count)"
            Status = "Found: $(@($results).count)"
            PercentComplete = $(($progress/$($fileList.count))*100)
            id = 1
            }
        Write-Progress @progressParam
        $progress++
        $found = Select-String -Path $file.fullname -pattern $SearchString 
        if($found){
            Write-Verbose ($found | Out-String)
            $results.Add(($file | Select-Object LastWriteTime, FullName))
        }
    }
    Write-Progress -Activity "Done" -Id 1 -Completed

    # Return found scripts sorted by last write time
    $results | sort LastWriteTime
}

 

Create Direct Link to a Log Analytic Query


This post will show you how you can create a URL to a specific Log Analytics query. When navigated to, this URL will automatically open the Log Analytics Query editor, input your query, and execute it. This URL can be embedded in your custom alerting solutions, used to create a bookmark for queries, or anything else you can think of. All you need to get started is a Log Analytics Workspace and a little bit of PowerShell.

The PowerShell functions you need can be downloaded from my GitHub Gist. To execute it you need your Log Analytics Workspace name, along with the Subscription GUID and Resource Group that contain your workspace. To find these values, you can navigate to your Log Analytics Workspace in the Azure Portal and copy them from the Overview blade.

Once you have this information, you simply need to call the Write-LogAnalyticsURL function to create your link. I have provided a couple of examples below.

# Create URL that you can change the computer name on
$ComputerName = 'SERVER01.BLOG.DEMO'
$queryString = @"
Heartbeat
| where TimeGenerated >= ago(1h)
| where Computer == "$ComputerName"
"@

$URL = Write-LogAnalyticsURL -SubscriptionId $SubscriptionId -ResourceGroup $ResourceGroup -Workspace $Workspace -QueryString $QueryString

 

# Create URL and Open it in Your default browser
$queryString = @'
Usage 
| where TimeGenerated > ago(3h)
| where DataType == "Perf" 
| where QuantityUnit == "MBytes" 
| summarize avg(Quantity) by Computer
| sort by avg_Quantity desc nulls last
| render barchart
'@

$URL = Write-LogAnalyticsURL -SubscriptionId $SubscriptionId -ResourceGroup $ResourceGroup -Workspace $Workspace -QueryString $QueryString
[System.Diagnostics.Process]::Start($URL) | Out-Null

The most recent version of this script, and many more scripts, are available on my GitHub Gist (https://gist.github.com/mdowst).

Find and Remove All WSUS Deadlines


Every admin at some point has had the call that an update just installed on someone’s machine in the middle of an important meeting or something similar. This can often be caused by deadlines set in WSUS. While deadlines can be good for forcing updates at a particular time, they can also come back to bite you after the deadline has expired. So, I wrote a quick script that you can run to remove deadlines from approved updates in WSUS. The script will allow you to either remove all deadlines, past and future, or use the OnlyPast switch to remove only deadlines that have already passed.

The script is available on my public GitHub Gist repository: Remove Wsus Deadline.
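To give you a rough idea of how the OnlyPast behavior works, the sketch below shows the general approach using the WSUS API. This is only an illustration, not the published script, and it assumes the WSUS administration tools are installed on the server.

# Illustration of the approach (not the published script): re-approve any update whose
# approval deadline has already passed, this time without a deadline
$wsus = Get-WsusServer
foreach ($update in $wsus.GetUpdates()) {
    foreach ($approval in $update.GetUpdateApprovals()) {
        # A Deadline of [datetime]::MaxValue typically means no deadline is set
        if ($approval.Deadline -ne [datetime]::MaxValue -and $approval.Deadline -lt (Get-Date)) {
            $group  = $approval.GetComputerTargetGroup()
            $action = $approval.Action
            $approval.Delete()
            $update.Approve($action, $group) | Out-Null
        }
    }
}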


Log Analytics: Alert on Domain Time Deviations


As we all know, accurate time synchronization in Active Directory domains is a must. I recently ran into a problem where one domain controller’s time was starting to drift. Once it got off by about 8 minutes, it started to wreak havoc in the environment, as some servers were synced to its time and others were synced to different domain controllers that had the correct time. As it turns out, this particular DC was a VM and had the hypervisor’s Time Synchronization integration service turned on. While turning this off should prevent it from happening again, I decided to go ahead and make an alert using Azure Log Analytics.

When I first started troubleshooting reports of authentication issues, I quickly checked Log Analytics to see if I had a domain controller offline. When I ran a query to get the last heartbeat on all servers, I found a large portion of them had the last TimeGenerated as 8 minutes ago. So, it appears that the TimeGenerated field is set to the system’s time, not the time in Log Analytics. I was able to confirm this by changing the time on a couple of test systems. So, I created the query below to show me all machines that have a heartbeat that is more than 5 minutes off from the current time in either direction.

Heartbeat
| where OSType == 'Windows' and TimeGenerated > ago(1d)
| summarize arg_max(TimeGenerated, *) by SourceComputerId
| where TimeGenerated < now(-5m) or TimeGenerated > now(5m) 
| extend TimeAgoMinutes = toint((now() - TimeGenerated)/1m)
| project Computer, LastHeartbeat=TimeGenerated, TimeAgoMinutes

When I created the alert rule, I set the alert logic as follows:


I set the threshold to greater than 4 because there are around 50 servers in this environment, so this will send me an alert if around 10% of the servers have their time off by more than 5 minutes. I set the Period to 15 minutes because, if I left it at the default of 5 minutes, it would not return any results that are more than 5 minutes off. I left the Frequency at 5 minutes because I want to know as soon as possible.

Query Explanation

For those interested here is a breakdown of the query line by line.

First, I start by getting all Windows devices that have sent a heartbeat in the last 24 hours.

Heartbeat
| where OSType == 'Windows' and TimeGenerated > ago(1d)

Next, I get the latest heartbeat time for each machine by summarizing on the max TimeGenerated by SourceComputerId, which is unique to each device.

| summarize arg_max(TimeGenerated, *) by SourceComputerId

Then I filter out all results that are not less than 5 minutes or greater than 5 minutes.

| where TimeGenerated < now(-5m) or TimeGenerated > now(5m)

Now that I only have the machines I want to report on, I’ll make the data a little more user friendly by calculating the actual number of minutes the time is off and presenting it as an integer in the results.

| extend TimeAgoMinutes = toint((now() - TimeGenerated)/1m)

Finally, I project only the fields that I care about and want to see in the alert email. I also rename TimeGenerated to the more relevant LastHeartbeat.

| project Computer, LastHeartbeat=TimeGenerated, TimeAgoMinutes

Creating an inexpensive Ping monitor for Azure Monitor


Using Azure Monitor to provide availability monitoring works extremely well for most configurations, but what about situations where you can’t install a Log Analytics agent on the system, whether because the OS is not supported or because it’s a device such as a router where installing an agent isn’t possible? For these use cases, we have found it useful to provide a ping-level monitor. This blog post provides details on the solution we developed, which delivers a ping-level monitor for an extremely low monthly cost.

What’s required:

The architecture we are using for this solution runs in Azure Automation using a watcher task, and it consists of three runbooks:

  • PingMonitor-Watcher.ps1: This script runs every 60 seconds to check whether any ping tests fail based on the criteria defined in the “PingMonitorDevices” variable (which contains JSON content populated by the PingMonitor-Updater.ps1 script).
  • PingMonitor-Updater.ps1: This script automates the population of the “PingMonitorDevices” variable, handling tasks like adding or deleting the items to be tested by the PingMonitor-Watcher.ps1 script.
  • PingMonitor-Action.ps1: This script activates when there is a failure to ping one of the systems defined in the “PingMonitorDevices” variable.

This solution also requires one or more Hybrid Runbook workers where the watcher and action scripts will execute.

Installing the solution:

Pre-requisites: This solution assumes that you already have the following:

  • An Azure subscription
  • A resource group where Azure Automation is stored
  • One or more Azure Hybrid Runbook workers

Adding the runbooks:

Once we have our Azure Automation environment, we can easily create the three required scripts by creating each of the three runbooks as the PowerShell runbook type with the names defined above (PingMonitor-Watcher, PingMonitor-Updater, PingMonitor-Action). These scripts are available for download here. Once these have been added you can save and publish them. After they are created, they should look like the screenshot below:

Defining variables:

Create the three following variables with their appropriate content (PingMonitorDevices, PingMonitorWorkspaceId, PingMonitorWorkspaceKey):

  • PingMonitorDevices – String which is created as blank, but populated with the correct JSON content by the PingMonitor-Updater.ps1 script (create this as NOT encrypted)
  • PingMonitorWorkspaceId – String which contains the Log Analytics Workspace ID (create this as encrypted).
  • PingMonitorWorkspaceKey – String which contains the Log Analytics Workspace Key (create this as encrypted).

Once created these should look like the following:

Populating the PingMonitorDevices variable:

The PingMonitor-Updater script populates the information required to ping the various systems. We run this script and provide the following information:

  • Device: Name of the device which will attempt to be pinged, required string field.
  • VariableName: Variable where the content is stored, optional string field – defaults to the value ‘PingMonitorDevices’
  • ThresholdMinutes: Variable which defines the threshold for how long the system should not respond to ping, optional int32. Defaults to the value 5.
  • SuppressMinutes: Variable which defines how long to suppress alerts before re-sending the alert, optional int32. Defaults to the value 15.
  • Delete: Delete the record instead of creating one, optional boolean. Default is “$false”
  • TestTime: This performs a one-time test of all the connections in your PingMonitorDevices variable. This allows you to see the execution time and confirm everything is set correctly. Default is “$true”

In my example I used a single value of “TestServer” for the device and took the defaults for the remainder. Below shows the JSON value which the script put into this variable:

Scheduling the watcher task:

Now that all of the scripts and variables are in place, we can configure the PingMonitor-Watcher runbook to run as a watcher task. This is done under Process Automation > Watcher tasks.

We add a watcher task, which I named “PingMonitor”, with a frequency of 1 minute, and I pointed it to the PingMonitor-Watcher runbook for the watcher and the PingMonitor-Action runbook for the action.

This creates the task and the user experience shows the last watcher status in this same pane.

Additionally, you can dig into the various watcher task runs to see if data is being written such as in the example below:

How does this all come together?

So how does this all work once it’s installed? The watcher task checks every minute for a ping failure. If it finds one, it writes the failure details to the PingMonitor_CL custom log in the Log Analytics workspace. If none of the pings fail, it does not write to the Log Analytics workspace. Once this information is logged to Log Analytics, we can use Azure Monitor to send an alert whenever a ping failure occurs. Additionally, we can surface this information via dashboards in Azure (both of these topics will be covered in the next blog post).
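As a simplified illustration of the watcher logic only (not the actual PingMonitor-Watcher.ps1), and assuming each JSON entry exposes a Device property the way the updater populates it, the core check might look something like this:

# Simplified illustration of the watcher idea (not the actual PingMonitor-Watcher.ps1)
$devices = Get-AutomationVariable -Name 'PingMonitorDevices' | ConvertFrom-Json

foreach ($entry in $devices) {
    if (-not (Test-Connection -ComputerName $entry.Device -Count 2 -Quiet)) {
        # In the real solution the failure details are handed to PingMonitor-Action.ps1,
        # which writes a record to the PingMonitor_CL custom log in Log Analytics
        Write-Output "Ping failed for $($entry.Device)"
    }
}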

Solution restrictions:

It is important to note that there is a 30-second maximum for the watcher task to complete. Additionally, as mentioned earlier in this blog post, this solution does NOT write data if systems are successfully contacted via ping. We only write data when errors are occurring (i.e., systems are offline).

Azure Cost breakdown:

To get the data into Log Analytics this solution uses both Azure Automation runbooks and the worker tasks. The prices on these are below:

  • Worker tasks:
    • Charged at $0.002 per hour.
    • This would be up to about $1.50 a month.
    • However, the first watcher task per month is free, so the worker task should be free if this is the only one.
  • Azure Automation runbooks:
    • Charged at $0.002 per minute.
    • On average these jobs run in about 1 minute to log the data into Log Analytics.
    • However, the first 500 minutes per month are free, so this task should be free if it is the only Azure Automation runbook.
  • In general, the cost for this regardless of whether existing worker tasks and Azure Automation runbooks are in place should be less than $5 a month.

Additional readings:

There are two other blog posts with a similar goal (pinging systems via Azure Monitor/Log Analytics). Their approaches focus on running scheduled tasks or running the tasks via an Azure Hybrid Runbook Worker. Those approaches are good as well and would be the best choice if you need ping statistics (response time, etc.), but if the goal is to keep the cost down, the approach in this blog post is significantly less expensive to run on a monthly basis.

Summary: If you are looking for a cost-effective method to provide notifications when systems or devices are not responding from Azure Monitor you will want to try this solution out! In the next part of this blog series, Cameron Fuller will show how we can alert from failed ping responses and will show how this data can be showcased in an Azure dashboard.

Format Data Returned from Get-PnPListItem


If you have ever used the SharePoint PnP PowerShell cmdlets, you know that the data returned from a list is not in the cleanest format. It is returned as a hashtable, and it includes all the internal columns. So, I created a function that will convert this hashtable to a standard PowerShell object and only return the columns you really care about: the custom columns you’ve added and some of the common ones like title, modified and created dates, and the author and last editor.

Function Get-ListValues{
<#
.SYNOPSIS
Use to create a PowerShell Object with only the columns you want,
based on the data returned from the Get-PnPListItem command.

.DESCRIPTION
Creates a custom PowerShell object you can use in your script.
It only creates properties for custom properties on the list and
a few common ones. Filters out a lot of junk you don't need.

.PARAMETER ListItems
The value returns from a Get-PnPListItem command

.PARAMETER List
The name of the list in SharePoint. Should be the same value
passed to the -List parameter on the Get-PnPListItem command

.EXAMPLE
$listItems = Get-PnPListItem -List $List 
$ListValues = Get-ListValues -listItems $listItems -List $List


#>
    param(
    [Parameter(Mandatory=$true)]$ListItems,
    [Parameter(Mandatory=$true)]$List
    )
    # begin by getting the fields that were created for this list and a few other standard fields
    begin{
        $standardFields = 'Title','Modified','Created','Author','Editor'
        # get the list from SharePoint
        $listObject = Get-PnPList -Identity $List
        # Get the fields for the list
        $fields = Get-PnPField -List $listObject
        # create variable with only the fields we want to return
        $returnFields = $fields | Where-Object{$_.FromBaseType -ne $true -or $standardFields -contains $_.InternalName} | 
            Select @{l='Title';e={$_.Title.Replace(' ','')}}, InternalName
    }
    
    process{
        # process through each item returned and create a PS object based on the fields we want
        [System.Collections.Generic.List[PSObject]] $ListValues = @()
        foreach($item in $listItems){
            # add a field with the SharePoint object in case you need it for Set-PnPListItem or Remove-PnPListItem
            $properties = @{SPObject = $item}
            foreach($field in $returnFields){
                $properties.Add($field.Title,$item[$field.InternalName])
            }
            $ListValues.Add([pscustomobject]$properties)
        }
    }
    
    end{
        # return our new object
        $ListValues
    }
}

To run this, all you have to do is pass the name of the list and the returned data from the Get-PnPListItem command.

$listItems = Get-PnPListItem -List $List 
$ListValues = Get-ListValues -listItems $listItems -List $List

 

Azure Update Management – Fix Failed to Start Status


Over the past couple of months, I’ve experienced an issue where an Azure Update Management deployment will fail to run on several servers. When I look at the deployment history, these servers are listed with the status of Failed to start.

Checking the logs for the individual servers, I see that they all had an exception message that stated, “Job was suspended. For additional troubleshooting, check the Microsoft-SMA event logs on the computers in the Hybrid Runbook Worker Group that tried to run this job.”

So, I checked the Microsoft-SMA event logs on the computers and found they all had an error event id 15105 with the task category of HybridErrorWhilePollingQueue. In each case the event showed that the remote server returned an error: (401) Unauthorized.

Going back into the Azure Automation account, I checked the System Hybrid Workers and saw that all the machines having this problem had a Last Seen Time of over a month ago. Anything over 60 minutes is considered to be in a troubled state by Update Management.

The Fix

To resolve this issue, you have to remove the device as a Hybrid Worker in Azure Automation. After doing this, the server will automatically add itself back as a Hybrid Worker and will be able to run update deployments again.

  1. Connect to the server and run the script below to stop the Microsoft Management Agent, clear the cache, and remove the Hybrid Worker configuration.
    Stop-Service -Name HealthService
    Remove-Item -Path 'C:\Program Files\Microsoft Monitoring Agent\Agent\Health Service State' -Recurse
    Remove-Item -Path "HKLM:\software\microsoft\hybridrunbookworker" -Recurse -Force
  2. Open the Azure Portal and navigate to the Automation Account with the Update Management solution
  3. Open the Hybrid Worker Group blade and select the System Hybrid Worker Groups
  4. Select the server that you are removing and click Delete
    Note: Alternatively, you can use the Remove-AzureRmAutomationHybridWorkerGroup cmdlet if you prefer
  5. Restart the Microsoft Management Agent on the server
    Start-Service -Name HealthService

After 5-15 minutes you should see the server reappear on the System Hybrid Worker Groups list in Azure. Once it does you are good to go.
