All posts by ThingBreaker

Scheduled Powershell Scripts without storing credentials

Sometimes I want to schedule a script to run with specific domain credentials without storing anything blatantly risky. For example here I wanted to schedule maintenance notifications to users that have been logged in for so long that their hosts are up for replacement…

The easiest way is to set up a gMSA (group managed service account) and use it for the scheduled tasks; the only caveat is that you can't select a g/MSA account for those tasks in the Task Scheduler UI. Hmm.

The workaround is to set the task account from either schtasks.exe or PowerShell; however, I also wanted to script the rest of the task setup. Nobody likes instructions that involve "do a dozen things by hand in the UI, then write some lines to modify it".
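For completeness, creating the gMSA itself looks roughly like this (account, DNS, and group names are placeholders; assumes the ActiveDirectory module and a domain that already has a KDS root key):

```powershell
# One-time per forest if you've never made a gMSA before
# (replication can take up to 10 hours; labs often backdate the key)
# Add-KdsRootKey -EffectiveImmediately

# Create the gMSA and allow the task host(s) to retrieve its password
New-ADServiceAccount -Name 'TaskGMSA' -DNSHostName 'TaskGMSA.contoso.com' `
    -PrincipalsAllowedToRetrieveManagedPassword 'TaskHosts$'

# On the host that will run the scheduled task:
Install-ADServiceAccount -Identity 'TaskGMSA'
Test-ADServiceAccount -Identity 'TaskGMSA'   # should return True
```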

The next step to low footprint bliss would be to say goodbye to all the files and ACLs, let’s just inline the scripts (if they’re short enough)! So here’s my script to create a file-less, credential-less (kind of) PowerShell scheduled task that in this case schedules desktop messages for Citrix sessions. Ironically this specific example drops transcript copies but you get the point.

#Encode script as Base64, send a Citrix message in this example
function EncodeMessageTaskScript ($MessageText, $AdminAddress) {
    $TaskScript = @"
Start-Transcript "C:\ScriptLogs\ScheduledPS.log" -Append
Add-PSSnapin @('Citrix.Host.Admin.V2','Citrix.Broker.Admin.V2')

`$CurrentSessions = Get-BrokerSession -AdminAddress "$AdminAddress" -MaxRecordCount 1000 | ? DesktopGroupName -eq 'Nope'
`$CurrentSessions | % { Send-BrokerSessionMessage -AdminAddress "$AdminAddress" -InputObject `$_ -MessageStyle Critical -Text "$MessageText`nMessage Sent [`$(Get-Date)]" -Title "Maintenance Warning"}
"@
#The closing "@ has to sit at column 0 (you can't end a herestring with whitespace), hence the ugly indentation

    #powershell.exe -EncodedCommand expects Base64 of the UTF-16LE (Unicode) bytes
    [Convert]::ToBase64String([Text.Encoding]::Unicode.GetBytes($TaskScript))
}


function CreateScheduledGMSATask ($EncodedPsScript, [datetime]$TriggerDateTime, $TaskName) {

    $Action = New-ScheduledTaskAction -Execute "powershell.exe" -Argument "-EncodedCommand `"$EncodedPsScript`" -NoLogo -NoProfile -ExecutionPolicy Bypass"
    #LogonType Password lets the task pull the gMSA password itself; nothing gets stored
    $Principal = New-ScheduledTaskPrincipal -LogonType Password -RunLevel Limited -UserId 'DOMAIN\[G]MSA$'
    $Settings = New-ScheduledTaskSettingsSet -Compatibility Win8
    $Trigger = New-ScheduledTaskTrigger -Once -At $TriggerDateTime

    $TaskObj = New-ScheduledTask -Action $Action -Principal $Principal -Trigger $Trigger -Settings $Settings
    Register-ScheduledTask -TaskName $TaskName -InputObject $TaskObj
}


$EncodedReminderTask = EncodeMessageTaskScript -MessageText "Your Message Text" -AdminAddress $CitrixAdminAddress
CreateScheduledGMSATask -EncodedPsScript $EncodedReminderTask -TriggerDateTime $MaintReminder -TaskName "MessageTask $($MaintStartTime.ToString('yyyyMMdd.HHmmss'))"

DPM Scripted VM Recovery Fails (Error 104 0x80041002, 3111)


From the Microsoft documentation for New-DPMRecoveryOption's -RecoveryType parameter:

"Specifies the recovery type. If you specify the HyperVDatasource parameter, the only valid value is Recover. The acceptable values for this parameter are: Recover or Restore."

The Microsoft documentation is flat-out wrong. It very explicitly states that the only valid RecoveryType for HyperVDatasource is Recover, yet when trying to recover to an alternate disk location their example does not work. Based on the example script you would expect the code below to work. Instead, if you try it you'll get a PowerShell error stating "The recovery point location that you have passed is invalid. Please try again with a different value (ID:31050)."

$BadOption = New-DPMRecoveryOption -HyperVDatasource -TargetServer "" -RecoveryLocation CopyToFolder -RecoveryType Recover -TargetLocation "D:\DestinationFolder"

So maybe instead of that you'd google around and then try the following, which appears to work… at first.

$BadOption = New-DPMRecoveryOption -HyperVDatasource -TargetServer "" -RecoveryLocation AlternateHyperVServer -RecoveryType Recover -TargetLocation "D:\DestinationFolder"

However, at some point that job will fail with "An unexpected error occurred while the job was running. (ID 104 Details: Unknown error (0x80041002) (0x80041002))", which is entirely unhelpful. If you go to the job details you'll get an equally unhelpful error 3111. Making some assumptions around that error code (a WMI object-not-found error), I'm thinking it's trying to import the VM into a Hyper-V instance running on that server. That doesn't work if there's no valid hypervisor running. Instead you need to use the parameters -RecoveryLocation CopyToFolder and -RecoveryType Restore.

$WorkingOption = New-DPMRecoveryOption -HyperVDatasource -TargetServer "" -RecoveryLocation CopyToFolder -RecoveryType Restore -TargetLocation "D:\DestinationFolder"
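For context, here's a rough sketch of how that option slots into a full scripted restore. Server names, datasource filter, and target path are placeholders, and the surrounding cmdlet parameters are from memory, so verify them against your DPM version:

```powershell
# Find the protected Hyper-V datasource on the DPM server
$ds = Get-DPMDatasource -DPMServerName 'DPMSERVER' | Where-Object Name -like '*MyVM*'

# Grab the most recent recovery point
$rp = Get-DPMRecoveryPoint -Datasource $ds |
    Sort-Object RepresentedPointInTime | Select-Object -Last 1

# The combination that actually works: CopyToFolder + Restore
$opt = New-DPMRecoveryOption -HyperVDatasource -TargetServer 'DPMSERVER' `
    -RecoveryLocation CopyToFolder -RecoveryType Restore -TargetLocation 'D:\DestinationFolder'

Recover-RecoverableItem -RecoverableItem $rp -RecoveryOption $opt
```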


Archived Ten Laws of Security 2.0

Archived Ten Laws response

Archived Ten Laws Re-Review

Krebs on Security

Raymond Chen's Blog

Barracuda Spam Firewall Rooting

Group Policy team blog

Aaron Stebner's Weblog (notes on .Net)

AskPerf Ask The Performance Team

AskDS Ask the Directory Services Team (archived; a lot of interesting deep dives on ESE)

Thomas Maurer's Blog (Azure Advocate)

Carl Stalhood's EUC Blog

Robin Hobo

Helge Klein's Blog

Brent Ozar's Corp Blog

DBA Reactions (Lighthearted fun)

VMM Migration Error 20413 (Hyper-V-VMMS 20770)

I was trying to migrate a VM from one of our less-used staging hosts when I started getting an exception at the Live Migration step.

--------------- Bucketing Parameters ---------------

SCVMM Version=4.0.2413.0


Base Exception Assembly name=ImgLibEngine.dll
Base Exception Method Name=Microsoft.VirtualManager.Engine.ImageLibrary.HyperVHAVM.AddDiskResourceToVMFromFilePath
Exception Message=Object reference not set to an instance of an object.


System.NullReferenceException: Object reference not set to an instance of an object.
   at Microsoft.VirtualManager.Engine.ImageLibrary.HyperVHAVM.AddDiskResourceToVMFromFilePath(String path, IVmmDbConnection dbConnection)
   at Microsoft.VirtualManager.Engine.VmOperations.DeployVmBase.MigrateVM(IVmmDbConnection dbConnection)
   at Microsoft.VirtualManager.Engine.VmOperations.DeployHost2Host.RunSubtask(IVmmDbConnection dbConnection)
   at Microsoft.VirtualManager.Engine.TaskRepository.SubtaskBase.Run(IVmmDbConnection dbConnection)
   at Microsoft.VirtualManager.DB.SqlContext.Connect(Action`1 action)
   at Microsoft.VirtualManager.Engine.TaskRepository.Task`1.SubtaskRun(Object state)

On the source host I saw a number of SMBClient errors with ID 30905:

The client cannot connect to the server due to a multichannel constraint registry setting.

Server name: \<TARGETHOST>

The client attempted to use SMB Multichannel, but an administrator has configured multichannel support to prevent multichannel on the client. You can configure SMB Multichannel on the client using the Windows PowerShell cmdlets: New-SmbMultichannelConstraint and Remove-SmbMultichannelConstraint.

The short answer was that the source server had some multichannel constraints configured (Get-SmbMultichannelConstraint) and I was in a position where I could just temporarily disable multichannel (Set-SmbClientConfiguration -EnableMultiChannel $false). Realistically the right answer would have been to validate the configuration and get it working correctly, but this host was up for decommissioning so we let it slide.
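The checks look roughly like this, run on the source host (the server name is a placeholder; both options below are blunt instruments, so prefer fixing the constraint properly on a host you're keeping):

```powershell
# List any multichannel constraints configured on this client
Get-SmbMultichannelConstraint

# Option 1: remove the constraint for just the target server
Remove-SmbMultichannelConstraint -ServerName 'TARGETHOST'

# Option 2: temporarily disable SMB multichannel entirely on the client
Set-SmbClientConfiguration -EnableMultiChannel $false -Force
```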

Error 0x8009030E Trying to Migrate VM in System Center VMM

Working with VMM 2016

Error (23008)
The VM BlahBlahBlah cannot be migrated to Host due to incompatibility issues. The Virtual Machine Management Service failed to establish a connection for a Virtual Machine migration with host '': No credentials are available in the security package (0x8009030E).
  1. Double-checked that hosts were set up with the correct Kerberos delegation settings (and set to Kerberos only; others say this doesn’t work, but I *think* you just have to wait a few minutes after doing klist purge -li 0x3E7 to clear the computer account tickets on each host and it will start working)
  2. Double-checked that our VMM management account was set up under Host Access > Host management credentials > Run As Account
  3. Double-checked that hosts are configured to use Kerberos as their Live Migration method
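The third check can be scripted with the Hyper-V module (host names are placeholders):

```powershell
# Verify which authentication type each host uses for live migration
Get-VMHost -ComputerName 'HV01','HV02' |
    Select-Object Name, VirtualMachineMigrationAuthenticationType

# Switch a host from CredSSP to Kerberos if needed
Set-VMHost -ComputerName 'HV01' -VirtualMachineMigrationAuthenticationType Kerberos
```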

Zabbix HTTP Agent LLD Rule Example

UPDATE: The DPM part of this whole ordeal was partially invalidated by the recent addition of event publishing for DPM. If you can, get the update and just set up Windows event monitors for backup actions. Although, in all honesty, I don't think I'd trust DPM's events for critical monitoring.

Jump to Zabbix Item Examples

TL;DR: Built an API to query a DPM view and spit out JSON that Zabbix could handle for both discovery and data. Put this here because there weren’t many resources on the whole HTTP LLD deal.

Rough draft: I built this whole project in about 5 hours. I imagine you're here for the Zabbix HTTP Agent LLD stuff so I left the API part out. If you want the whole shebang (API, code, setup) let me know with a comment. I don't want to clean up a whole project if it's just going to rot in my corner of the internet.

We’ve been using DPM for our backups only to be thwarted in our monitoring attempts. We could have used Operations Manager but the problem was that we weren’t using OM for anything else. The only thing worse than an incomplete dashboard is two incomplete dashboards. So I bit the bullet and now we can finally monitor DPM with Zabbix.

DPM's built-in reporting was a royal pain and took too much manual review time. The email alerts were pretty much all or nothing, and I'm loath to contribute to alert blindness, so I hammered this… thing… out.

I built a quick web API with two controllers, one to provide discovery data, and another for the details. The discovery URL (/api/DpmDiscovery/{HOST.NAME}) would hand back the LLD formatted JSON and the other URL (/api/DpmStatus/{#RECPOINT.BACKUPPATH}) would spit out details.

A call to the discovery URL would return the following JSON.

{
    "data": [
        {
            "{#RECPOINT.STATUS}": 2,
            "{#RECPOINT.IDSN}": "D:\\",
            "{#RECPOINT.SERVERNAME}": "",
            "{#RECPOINT.BACKUPPATH}": "",
            "{#RECPOINT.CREATIONTIME}": "2019-04-25T00:05:28-06:00",
            "{#RECPOINT.UNIXTIME}": "1556172328"
        },
        {
            "{#RECPOINT.STATUS}": 2,
            "{#RECPOINT.IDSN}": "E:\\",
            "{#RECPOINT.SERVERNAME}": "",
            "{#RECPOINT.BACKUPPATH}": "",
            "{#RECPOINT.CREATIONTIME}": "2019-04-25T00:05:42-06:00",
            "{#RECPOINT.UNIXTIME}": "1556172342"
        },
        {
            "{#RECPOINT.STATUS}": 2,
            "{#RECPOINT.IDSN}": "System State",
            "{#RECPOINT.SERVERNAME}": "",
            "{#RECPOINT.BACKUPPATH}": "",
            "{#RECPOINT.CREATIONTIME}": "2019-04-25T02:10:59-06:00",
            "{#RECPOINT.UNIXTIME}": "1556179859"
        }
    ]
}

Then the LLD rule creates an HTTP Agent item that calls the details URL for each discovered recovery point, returning:

{
    "status": 2,
    "interpretedDsn": "System State",
    "serverName": "",
    "backupPath": "",
    "creationTime": "2019-04-25T02:10:59-06:00",
    "unixCreationTime": "1556179859"
}

Technically my API returned the data as application/json; however I had accidentally checked “Convert To JSON” so you’ll see a body element in the JSON path below (e.g. $.body.status). In theory I could uncheck that and remove the body element. In practice it works as-is so it’ll stay that way for now.
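If you're building something similar, shaping rows into Zabbix's LLD discovery format is straightforward. Here's a minimal PowerShell sketch with made-up data (the real API pulled these rows from a DPM SQL view, and only a few of the macros are shown):

```powershell
# Hypothetical recovery point rows; the real data came from a DPM view
$recoveryPoints = @(
    [pscustomobject]@{ Status = 2; Dsn = 'D:\';          Server = 'FS01' }
    [pscustomobject]@{ Status = 2; Dsn = 'System State'; Server = 'FS01' }
)

# Zabbix LLD expects {"data":[ { "{#MACRO}": value, ... }, ... ]}
$lld = @{
    data = @($recoveryPoints | ForEach-Object {
        [ordered]@{
            '{#RECPOINT.STATUS}'     = $_.Status
            '{#RECPOINT.IDSN}'       = $_.Dsn
            '{#RECPOINT.SERVERNAME}' = $_.Server
        }
    })
}

$lld | ConvertTo-Json -Depth 4
```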

Example Screenshots

Zabbix Discovery Rule

Zabbix Data Item

Dependent Item

These dependent items use JSON Path processing to extract the actual data out of my details response.

(body element was inserted because I had checked “Convert to JSON”)

Citations Nonsense:
DPM SQL View Documentation:
Handy JSON validator:
Go-To JSON Browser:

Dynamics CRM Plugin Mistake

Quick one (i.e. not the prettiest article): I was building another CRM plugin and kept getting a really annoying exception, followed by an uncatchable exception.

System.NullReferenceException: Microsoft Dynamics CRM has experienced an error.

Useful, I know. If I turned on profiling and tried to replay the plugin, it would execute as expected. Turning to the CRM server event log, I saw this:

ASP.NET event 1309
Exception information: 
    Exception type: NullReferenceException 
    Exception message: Object reference not set to an instance of an object.
   at Microsoft.Crm.Application.InlineEdit.InlineEditJsonConverter.IsLocalizedAttribute(AttributeMetadata attributeMetadata)
   at Microsoft.Crm.Application.InlineEdit.InlineEditJsonConverter.AppendDataValueJson(StringBuilder dataValues, String attributeLogicalName, Entity entity, FormMediator formMediator, Boolean encodeValues, IOrganizationContext context)
   at Microsoft.Crm.Application.InlineEdit.InlineEditJsonConverter.GetEntityAttributeJsonContent(Entity entity, FormMediator formMediator, Boolean encodeValues, IOrganizationContext context)
   at Microsoft.Crm.Application.InlineEdit.InlineEditJsonConverter.<EntityPropertiesToJsonInternal>d__3.MoveNext()
   at System.Linq.Enumerable.WhereEnumerableIterator`1.MoveNext()
   at Microsoft.Crm.Application.InlineEdit.InlineEditExtensionMethods.WriteSeparatedValues(TextWriter writer, IEnumerable`1 values, Char separator)
   at Microsoft.Crm.Application.InlineEdit.InlineEditJsonConverter.WriteEntityProperties(TextWriter writer, Entity entity, FormMediator formMediator, NotificationCollection notifications, PrivilegeCheck privilegeChecks, Boolean appendEntriesForFirstTimeLoad, Dictionary`2 parameters, Boolean encodeValues)
   at Microsoft.Crm.Application.InlineEdit.ReadFormDataBuilder.WriteFormDataJson(TextWriter writer)
   at Microsoft.Crm.Application.InlineEdit.ReadFormDataBuilder.WriteFormattedEntityData(TextWriter writer, Boolean isTurboForm)
   at Microsoft.Crm.Application.Pages.Form.FormDataPage.Render(HtmlTextWriter writer)
   at System.Web.UI.Control.RenderControlInternal(HtmlTextWriter writer, ControlAdapter adapter)
   at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint)

I figured I was sending data that couldn't be rendered. After going back and forth trying to debug, I noticed that my attribute keys had a capital letter in the middle (i.e. "contoso_entity_customBlah") where CRM expects attribute logical names to be all lowercase. That was an hour of my life because of a capital letter.

p.s. I noticed that sometimes when debugging the profiler would throw an uncatchable exception, but only if a debugger was attached. The debugger couldn’t detach once the exception was thrown.

I’d replay the plugin: no exception.

Attach the debugger and replay: see a caught exception! Then the plugin tool would crash due to an uncaught Win32 exception. Of course I couldn't debug the plugin tool because I already had a debugger attached, and I couldn't detach the debugger because yadda yadda yadda. Turns out, if you try to debug a sandboxed plugin in some circumstances, the debugger's TraceInternal tries to get FileIOPermission and fails (because sandbox). So yeah, it was the debugger throwing an exception that it didn't catch.

I ended up attaching the debugger, hitting a breakpoint, detaching the debugger, then reattaching the debugger after the plugin tool threw an exception. Of course the solution was to debug outside the sandbox.

Remote bulk fix for VSS LLDP CAPI 513 error

I'm a stickler for keeping error logs clean where possible. I wanted to fix the VSS CAPI 513 error on my DPM-protected servers; however, I'm also lazy efficient and didn't want to do it manually. Here's my quick and dirty PowerShell function to apply the fix to all of the appropriate servers.

Automation is a fantastic way to break things with unprecedented speed, so make sure you understand a script before running it. Also, all the error decorations aren't necessary, but who's to say I can't have fun with a blog post? Caveat emptor.

function Repair-mslldpPermissions {

    param (
        [Parameter(Mandatory)]
        [string]$TargetComputer
    )

    #sc.exe sdshow returns a blank line plus the SDDL; flatten to one string
    $mslldpSDDL = (Invoke-Command -ComputerName $TargetComputer -ScriptBlock { sc.exe sdshow mslldp } | Out-String).Trim()

    $ntserviceSecString = '(A;;CCLCSWLOCRRC;;;SU)'

    if ($mslldpSDDL -match [regex]::Escape($ntserviceSecString)) {
        Write-Warning "mslldp service already has NT Service permission fix applied on $TargetComputer!"
        return
    }

    if ($mslldpSDDL -match '[OGS]:') {
        Write-Error "I'm not smart enough to understand the SDDL on $TargetComputer.
        I expect the SDDL for this service to match the default, which only contains DACL flags.
        Make me smarter if you want to continue!" -Category InvalidOperation
        return
    }

    #Append the service-account ACE to the end of the existing DACL
    $newSDDL = "$mslldpSDDL$ntserviceSecString"

    $output = Invoke-Command -ComputerName $TargetComputer -ScriptBlock { $sddl = $args[0]; sc.exe sdset mslldp $sddl } -ArgumentList $newSDDL

    switch -Wildcard ($output) {

        "*5*" {
            Write-Error "Insufficient permissions to alter SDDL of mslldp service. Failed to set SDDL" -Category PermissionDenied
        }

        "*SetServiceObjectSecurity SUCCESS*" {
            Write-Host "Successfully updated mslldp service SDDL"
        }

        Default {
            Write-Error "sc returned unexpected result:`n$output" -RecommendedAction "RTError" -Category InvalidResult
        }
    }
}
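Running it against the fleet is then a one-liner (server names are placeholders):

```powershell
# Apply the fix to each DPM-protected server
'SERVER01','SERVER02','SERVER03' | ForEach-Object {
    Repair-mslldpPermissions -TargetComputer $_
}
```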

DPM Azure Recovery Services Agent Crashing

Update: We did start having dependency issues after updating the MARS agent. It appears that the agent now depends on the management service. Not getting errors anymore though so we reset things back to normal. Stuff below is just for posterity.

DPM 2016 deployments have been filling up my error logs with crash reports for the Microsoft Azure Recovery Services Management Agent. Turns out that's the statistics agent for the Azure dashboards that don't work on the LTSC releases of DPM.

System Event ID: 7031 
The Microsoft Azure Recovery Services Management Agent service terminated unexpectedly
Application Event ID: 1000
Faulting application name: OBRecoveryServicesManagementAgent.exe
Application Event ID: 1026
Application: OBRecoveryServicesManagementAgent.exe
Description: The process was terminated due to an unhandled exception.
Exception Info: System.AccessViolationException
at .CTraceProvider.TraceToErrorFile(CTraceProvider, DLS_TRACE_EVENT)

Disable it if you’re on DPM 2016 or DPM 2012. No impact that we’ve seen.
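Disabling it is just a service change. A sketch, matching on the display name from the event log above rather than guessing the short service name:

```powershell
# Find the MARS management agent service by display name, stop and disable it
$svc = Get-Service -DisplayName 'Microsoft Azure Recovery Services Management Agent*'
$svc | Stop-Service
$svc | Set-Service -StartupType Disabled
```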

Shoretel Users Can’t Change Call Handling Mode or Agent Status

TL;WR Probably the SG90 acting up again. Those things are weird. Rebooting the SG90 and the Director server fixed it for me. YMMV.

While moving around some VMs, we had a Shoretel Director server running without a network connection for 4 hours during a maintenance window. Afterwards users couldn't change their Call Handling Modes or their agent logged in/out status. It failed from both the phones and from Communicator. At first I thought it was a CAS problem; however, the phone directory, history, options, and speed dial features were all working correctly.

I popped open the IPDSCASCfgTool (see bottom) to set the log levels for the CAS to include all the DB and CAS flags for a start. After that I used PowerShell to stream the logs with Get-Content. I use Measure-Object first to grab the line count of the file so that we can skip the first 393,000 lines straight to the live output. That works like tail -f on Linux and just continuously streams the logs to the console.

Note: You SHOULD be able to use Get-Content -Wait -Tail <Number of Lines> to skip to the end, but that wasn’t working on this particular server. Gremlins…

PS C:\Shoreline Data\Logs> Get-Content .\ipds-190225.000000.Log | measure
Count : 393693
Average :
Sum :
Maximum :
Minimum :
Property :
PS C:\Shoreline Data\Logs> Get-Content .\ipds-190225.000000.Log -Wait | where -Property ReadCount -gt 393700
17:49:28.837 ( 3264: 3512) >SetUserCHM. User: 123. CHM: 2
17:49:28.888 ( 3264: 3512)
15:52:01.574 ( 7508: 5168) >CDBWriter::SetUserCHM::CDBUpdateTable::Update() failed. Error: 0xc1200db5.

SetUserCHM was me (unsuccessfully) changing from CHM 1 (Standard) to CHM 2 (In a Meeting) from communicator (testing from a phone will also log here but it’s noisier). That error sent me off looking for database issues, communications problems, etc. No dice. The evt log showed some interesting output though:

15:55:17.013 ( 4080: 4476) [evtl] (Error) CEventLibImpl::sendReceiveIPC failed - 0xC126100C
15:55:17.029 ( 7508: 4036) [evtl] (Error) CEventLibImpl::sendReceiveIPC failed - 0xC126100C
15:55:17.183 ( 4080: 4436) [evtl] (Error) CEventLibImpl::sendReceiveIPC failed - 0xC126100C
15:55:17.183 ( 2992: 6552) [evtl] (Error) CEventLibImpl::sendReceiveIPC failed - 0xC126100C
15:55:17.183 ( 1764: 1856) [evtl] (Error) CEventLibImpl::sendReceiveIPC failed - 0xC126100C
15:55:17.187 ( 4972: 5232) [evtl] (Error) CEventLibImpl::sendReceiveIPC failed - 0xC126100C

After digging around for named pipe issues and doing traces, I tried the same on the voice switch and didn't see any interesting errors. Theoretically the phone's button and control traffic hits the voice switch and makes it to the Director, but I couldn't verify that because I couldn't get the switch to start a packet capture. Supposedly that's because of a cipher mismatch: the Director server tries to SSH into the voice switch to start the packet capture but fails to log in using its certificate.

Anyway, I ran out of ideas and just waited until I could restart the switch, and that worked. Everything was fine. Once again the SG90 blew up in my face (read: was behaving unexpectedly) and sent me one step closer to trying the vswitch.

Another one of those 1/1 google searches: