Archives for the month of: March, 2016



This datasource tells you the slowest queries (up to 10) and who ran them.

The PowerShell script runs 2 test SQL queries a few seconds apart. If both queries take longer than the threshold you specify, it runs the stored procedure mentioned above to find details of the 10 longest-running queries and the users who ran them. It writes this detailed info to the Windows event log (Application log, EventID 777, severity=ERROR), so LogicMonitor can alert on it.
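The double-check is the interesting part: a single slow run could be a fluke, so the script requires two slow runs before alerting. Here is a minimal Python sketch of that idea (the function names and the callable-based query are my own illustration, not the actual PowerShell):

```python
import time

def query_duration(run_query):
    """Time one test query; run_query is any callable that executes it."""
    start = time.monotonic()
    run_query()
    return time.monotonic() - start

def slow_twice(run_query, threshold_secs, pause_secs=5):
    """Return True only if two runs, a few seconds apart, BOTH exceed the
    threshold. Requiring two slow runs avoids alerting on one transient spike."""
    if query_duration(run_query) <= threshold_secs:
        return False
    time.sleep(pause_secs)
    return query_duration(run_query) > threshold_secs
```

Only when `slow_twice` returns True does the real script go on to run the stored procedure and write the event log entry.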


  1. Download this stored procedure to your SQL server, open it in Microsoft's SQL Server Management Studio, and click "Execute" (this installs it so it can be called in the future).
    Read and download here:
  2. Download the DataSource file and add it to your LogicMonitor account (Settings > DataSources > Add > From file)
  3. Change the script to use your test query and YOUR database. Both are at the beginning of the script (in the '$test_query' and '$DBname' variables).
  4. Run this PowerShell command on your SQL server so it will accept new events from this source (change 'localhost' to your server name if you run it from elsewhere):
    New-EventLog -ComputerName localhost -Source Top_SQL -LogName Application
  5. Set a property called ‘apply_Top_SQL’ on your device in LogicMonitor so this datasource is applied to your server. Or use the other typical methods.


One way to test this script is to add load by running the looping query shown below. Be careful not to do this on a production server. You can adjust how long it runs by changing the number on the 'flag' line. Then lower the threshold and look for the event log alert with EventID 777.

note: make sure to use YOUR values instead of the database name 'Acme-sales', table name 'products', and index name 'PK_products'

USE [Acme-sales]
declare @flag int
set @flag = 1
while (@flag < 99888)
begin
 alter index [PK_products] on [dbo].[products] rebuild
 set @flag = @flag + 1
end

The screenshot below shows the alert as it appears in LogicMonitor.

DISCLAIMER: Use at your own risk. This datasource is not covered by LogicMonitor tech support.



WMI communication *starts* on TCP port 135 but, by default, the communication continues on a port that's randomly selected between 1024 and 65535. Fortunately, you can configure WMI to use a static port instead; the default static port is TCP 24158. If you can, I recommend just using 24158.
If you’re willing to go through yet another step, you can specify the port of your choosing (see steps below).
If you need to automate this, it would be a challenge, but one way is to add this to the “RunOnce” entry in the registry, a logon script, or the “post install” steps of a robust deployment tool.


  1. Type this command to do the real magic (it switches WMI into its own standalone host process, which is what makes a fixed port possible):
    winmgmt -standalonehost
  2. You must restart WMI for this to take effect. You can use the normal Services.msc applet in Windows or these commands:
    net stop winmgmt
    net start winmgmt
  3. If Windows is running a firewall, then you must add a "rule" to allow this port. If you're using the built-in Windows Firewall, you can use the GUI or this command:
    netsh firewall add portopening TCP 24158 WMIFixedPort
    note: the 'netsh firewall' context is deprecated on newer Windows; there you can use: netsh advfirewall firewall add rule name=WMIFixedPort dir=in action=allow protocol=TCP localport=24158
If you want to specify a port different than the default port 24158:
  1. Start > Run > DCOMcnfg.exe
  2. Navigate deep into the tree to this path:
    Console Root > Component Services > Computers > My Computer > DCOM Config > Windows Management Instrumentation  (right click on this > Properties)
  3. Click on “EndPoints” tab
  4. Click “Use static endpoint”
  5. Type in the port number you desire (make sure you choose one that's not in use on YOUR network). I chose 134 so I can open the contiguous range of 134-135 for my LAN.
The screenshot below should help.
I suggest you test using the WBEMtest.exe utility that's built into Windows. Details in this article ( link  )

March 2016

This process was not very intuitive and not documented anywhere I could find, so I wanted to help in case others had the same confusion. SSH is not enabled by factory default.

  1. Use your web browser to log in to the web interface as usual.
  2. Click 'Security' tab > Access tab > SSH > Host keys management.
  3. Click "Generate RSA keys" and "Generate DSA keys", then click 'Apply'.
  4. Now click the parent (Security > Access > SSH > SSH configuration).
  5. Click "SSH admin mode = enable" and "SSH version 2", then click 'Apply'.
  6. Test using the free 'PuTTY' app or a similar SSH client.

See two screenshots below:

Step 1 (generate the keys)



Step 2: enable SSH



It shows each database (typically there are 1-20) as an instance, and a graph shows the size/growth over time. This is helpful to some email administrators because they want to balance the load evenly across the databases.
Note: Don't forget to apply it to your particular Exchange server. I think this requires Exchange 2010 or newer. Sorry, it probably won't work with super-old Exchange 2007.


Download the datasource file and add it into LogicMonitor (Settings > Datasources > Add > From file )


Sometimes a device or app sends email alerts, but you want to see and alert on them within LogicMonitor. So, I built a script EventSource to retrieve emails from a specified mailbox and alert if there is a keyword in the subject (or body). It retrieves one email at a time from the mailbox using IMAP or secure IMAP. The alert looks like the screenshot below.


  1. Download the EventSource and add it into your LogicMonitor account (Settings > EventSources > Add > From file )
  2. Set a property on any device to specify the IMAP mail password. The EventSource will show in the device tree where this device is.
    imap.pass   =  ********   (whatever your password is)
  3. Set/change the email settings at the beginning of the Groovy script:
    imapUsername = the username of the mailbox you’re retrieving from (e.g. ‘’ )
    imapServer = the name of your imap server (e.g. ‘’ )
    imapPort =   usually ‘993’ for SSL secure or ‘143’ if not secure.
  4. Set/change the keyword in the ‘Filters’ section of the EventSource screen. My example uses ‘purple’ in the subject.
  5. Create a folder in your mailbox called “DONE” so the script can move the messages it processes into this folder. If you want some other folder name, just change the name in the script to match.
  6. Test by sending an email with your keyword and watch for the alert in the “Alerts” tab.
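For anyone curious what "retrieve one email at a time over IMAP and file it away" looks like mechanically, here is a rough Python sketch using the standard imaplib module (the Groovy script's actual logic differs; the `matches_filter` helper and all parameter names are my own illustration):

```python
import email
import imaplib

def matches_filter(subject, body, keyword, check_body=False):
    """Case-insensitive keyword test, mirroring the EventSource 'Filters' idea."""
    kw = keyword.lower()
    return kw in subject.lower() or (check_body and kw in body.lower())

def process_one(server, user, password, keyword, port=993, done_folder="DONE"):
    """Fetch the oldest INBOX message, test it, then move it to DONE.
    IMAP has no universally supported MOVE, so this copies then deletes."""
    conn = imaplib.IMAP4_SSL(server, port)
    conn.login(user, password)
    conn.select("INBOX")
    _, data = conn.search(None, "ALL")
    ids = data[0].split()
    hit = None
    if ids:
        msg_id = ids[0]
        _, msg_data = conn.fetch(msg_id, "(RFC822)")
        msg = email.message_from_bytes(msg_data[0][1])
        hit = matches_filter(msg.get("Subject", ""), "", keyword)
        conn.copy(msg_id, done_folder)          # file it into DONE
        conn.store(msg_id, "+FLAGS", r"\Deleted")
        conn.expunge()
    conn.logout()
    return hit
```

The copy-then-delete dance is why step 5 requires the DONE folder to exist before the first run.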


DISCLAIMER:  Like most of my creations, this is not officially supported by LogicMonitor tech support.


This datasource for LogicMonitor shows you which users have the biggest mailboxes and monitors how big they are growing/shrinking over time.

It's a multi-instance datasource, meaning the "discovery" script finds the names of the biggest mailboxes (these are the 'instances') and the collector script retrieves the actual size of each of those mailboxes in MB. Since the top mailboxes probably don't change very often, I set it to discover (find the user names) at a 1-day interval and retrieve the sizes at the longest interval we can (1 hour). Of course, you can adjust these intervals.
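If you're unfamiliar with multi-instance datasources: the discovery script simply prints one line per instance, and the collection script then runs against each instance. A hedged Python sketch of the discovery side (LogicMonitor's discovery convention is one 'wildvalue##displayname' line per instance; the input dict here stands in for what the real script gets from Exchange):

```python
def discovery_lines(mailbox_sizes_mb, top_n=20):
    """Return LogicMonitor active-discovery lines ('wildvalue##displayname'),
    one per mailbox, biggest first. mailbox_sizes_mb is a {user: MB} dict
    standing in for the data the real script pulls from Exchange cmdlets."""
    top = sorted(mailbox_sizes_mb.items(), key=lambda kv: kv[1], reverse=True)
    return [f"{user}##{user}" for user, _ in top[:top_n]]
```

Sorting before truncating is what makes these the *top* mailboxes rather than an arbitrary twenty.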


  • Collector computer must have "Exchange Management Tools" installed on it (free on the installation media/DVD). note: these tools are only available in a 64-bit version, so Windows must also be 64-bit
  • Exchange 2010 or Exchange 2013
  • Both PowerShell 3.0 and 4.0 were tested and work fine. 5.0 will probably work; 2.0 will probably NOT work.



  • Download this DataSource file and add it into your LogicMonitor account (Settings > DataSources > Add > From file )
  • “Apply” the datasource to your server in the usual LogicMonitor way
  • Test, usually by looking at the 'Raw Data' tab

Sync-to-Chain diagram

Video demo:


  • Visual schedules are more intuitive and allow you to see that you have proper “coverage” the entire time
  • The Calendar can inform and remind your staff WHO is on-call/duty at any time.
  • Overcome the limitations of the "time-based chain", which currently can't do "overnight" shifts like "6pm to 8am the next morning" or rotations that change each week or on a specified date.


It’s a datasource (PowerShell script) that reads your calendar every ~5 minutes and verifies or changes your LogicMonitor Escalation chain(s) to match.



  • Calendars supported: Google, Office 365, On-Premise Exchange 2010 or newer
  • Your escalation chains must have the word “schedule” in their names
  • Your Calendars must have names that match the names of your "Escalation chains"
  • Your Calendar event names must specify the usernames to put on call
  • If you use stages, the event name must be in the form 1=username 2=username, with (sms) or (voice) after a name when that contact method isn't the default of (email)
  • You must not have 2 or more Calendar events on the same schedule at the same time
  • LogicMonitor collector must be newer than 20.0 (so embedded PowerShell scripting will work)
  • Disclaimer: This is not officially supported by LogicMonitor tech support. Use at your own risk. If you need help, email me mike [at]
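To make the naming rules above concrete, here is a Python sketch of how an event name like '1=Bob (sms) 2=Cole' or 'Al (voice)' could be parsed into stages (the regex and the output shape are my own illustration, not the script's actual code):

```python
import re

def parse_event_summary(summary):
    """Parse a calendar event name into on-call stages.
    '1=Bob (sms) 2=Cole' -> two stages; contacts default to 'email'
    when no '(sms)'/'(voice)' suffix is given; a bare 'Al (voice)'
    with no stage number is treated as stage 1."""
    pattern = r"(?:(\d+)=)?([A-Za-z]\w*)(?:\s*\((sms|voice|email)\))?"
    stages = []
    for num, name, method in re.findall(pattern, summary):
        stages.append({"stage": int(num) if num else 1,
                       "user": name,
                       "method": method or "email"})
    return stages
```

This also shows why the "(email) is the default" rule matters: a name with no suffix still needs a contact method.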


  • Sign up for a free Cronify account at and create your authentication “token”
  • Create a restricted user account in LogicMonitor for the API calls
  • Download the datasource and import it into your account
  • Set these properties on a device (preferably your collector) so the datasource is applied:
    api.pass = whatever
  • Set some settings at the top of PowerShell script in datasource
  • If you haven’t already, create your schedules and test.
  • Test. Look at graphs in the datasource for errors/alerts. A status of 0 or 1 is OK; other numbers indicate problems as follows:
    0=no changes needed
    1=ok – changes were successful
    2=Calendar named (CCCC) has no matching ‘chain’ in LogicMonitor
    3=Calendar event named (EEEE) does not specify a username that exists in LogicMonitor
    4=Calendar named (CCCC) has 2 or more events but only 1 is allowed
    5=For some unknown reason, after the script changed the chain (HHHH), the chain still doesn't match the event.
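Since the graphs only show the numeric status, a small lookup table can translate codes back into the messages above. A Python sketch (mapping the CCCC/EEEE/HHHH placeholders to keyword arguments is purely my own convenience):

```python
STATUS_MESSAGES = {
    0: "no changes needed",
    1: "ok - changes were successful",
    2: "Calendar named {cal} has no matching 'chain' in LogicMonitor",
    3: "Calendar event named {event} does not specify a username that exists in LogicMonitor",
    4: "Calendar named {cal} has 2 or more events but only 1 is allowed",
    5: "After the script changed the chain {chain}, the chain still doesn't match the event",
}

def describe_status(code, **context):
    """Translate a numeric datasource status into a human-readable message."""
    template = STATUS_MESSAGES.get(code, "unknown status {code}".format(code=code))
    return template.format(**context)
```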



Download this DataSource file and add it into your LogicMonitor account (Settings > DataSources > Add > From file )


If you need to debug, you can run the script in the interactive debug command window ( Settings > Collectors > Manage > Support > Run Debug Command)

Type “!posh” to open a window. Paste in the contents of the script. You will need to change the ##api.pass## to your real password. Then click “Submit”

returns 1
[20160316 04:32:51]-REST status: 200, 5 total escalation chains found for suding account.
[20160316 04:32:51]-2 escalation chains found that contain 'schedule'.
[20160316 04:32:52]-DEBUG: 6 calendars found for client_id xxxxxxxxx[redacted]
[20160316 04:32:52]-2 calendars found that contain 'schedule'.
[20160316 04:32:53]-DEBUG: 70 events found for client_id xxxxxxxx[redacted]
[20160316 04:32:53]-Processing calendar 'Network on-call schedule'.
[20160316 04:32:53]-Processing calendar 'Database on-call schedule'.
[20160316 04:32:53]-66 on call events found in 2 calendars.
[20160316 04:32:53]-Current Date is Wednesday, March 16, 2016 11:32:53 AM UTC.
[20160316 04:32:53]-The current event is evt_external_56e4380773767685b80808d4 - start 2016-03-16T00:00:00Z UTC, end 2016-03-16T15:00:00Z UTC, summary: 1=Bob (sms) 2=Cole.
[20160316 04:32:53]-The current event is evt_external_56e6f90a73767685b80b3050 - start 2016-03-15T23:00:00Z UTC, end 2016-03-16T16:00:00Z UTC, summary: Al (voice).
[20160316 04:32:54]-DEBUG: Event has 2 stages, listing contacts:
[20160316 04:32:54]-DEBUG: Stage 1, addr Bob, method sms
[20160316 04:32:54]-DEBUG: Stage 2, addr Cole, method email
[20160316 04:32:54]-DEBUG: Escalation chain has 2 stages, listing contacts:
[20160316 04:32:54]-DEBUG: Stage 1, addr Don, method email
[20160316 04:32:54]-DEBUG: Stage 2, addr Ed, method sms
[20160316 04:32:54]-Updating LM escalation chain.
[20160316 04:32:56]-Update successful, REST status 200.
[20160316 04:32:56]-DEBUG: Event has 1 stages, listing contacts:
[20160316 04:32:56]-DEBUG: Stage 1, addr Al, method voice
[20160316 04:32:56]-DEBUG: Escalation chain has 1 stages, listing contacts:
[20160316 04:32:56]-DEBUG: Stage 1, addr Al, method voice
[20160316 04:32:56]-No update to LM escalation chain required.
[20160316 04:32:56]-EXIT Status/Error 1: Changes applied successfully

I made a datasource to retrieve alert emails with certain keywords in them; it then runs a script specified in the body of the email. To prevent security problems, it only processes emails that match your specification.

This video shows an example and explains briefly how it works: ( link to video )


This assumes you have some experience with LogicMonitor and are familiar with datasources and how to apply them etc.

  1. Download this DataSource file and add it into your LogicMonitor account (Settings > DataSources > Add > From file )
  2. Since PowerShell doesn't natively support IMAP retrieval, you must also put IMAPX.DLL and 2 other files in \Program Files\LogicMonitor\agent\bin. You might have to right-click each of these 3 files and click "Unblock", since Windows tags any file downloaded from the internet as "blocked".
  3. Set the settings near the top of the script for your mailbox and preferences for key words and script folder. Set logging=false after you verify it works.
  4. Set properties on your collector for "imap.pass", "mail_username" and "mail_server". These settings are used in the script and also used to apply the datasource to a device. I recommend you apply this datasource to the collector since that's where it runs anyway.
  5. Set up a separate, dedicated mailbox (e.g. and add a folder called ARCHIVE, because the script needs to move emails into this folder after they are 'processed'.
  6. Add a "custom email delivery" under Settings > Integration. Set the destination to the email address mentioned above. The subject should contain "run this script" and the body should contain "run this script: myscript.ps1" (where myscript.ps1 is the name of your 'fix-it' script). You can use different words as long as they match your settings in the script.
  7. Use an “Alert Rule” in LogicMonitor that sends an alert email to the address above.
  8. Put your "fix-it" script on your collector in the folder that matches your setting in the script. In my example, the default is c:\scripts
  9. Test – test – test (by triggering the alert and making sure the script datasource works). The script can write to a log file to see details.
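The security-sensitive part of the steps above is pulling the script name out of the email body safely. Here is a Python sketch of one way to do it (the trigger phrase and the "no path separators" rule are illustrative, not necessarily the PowerShell script's exact checks):

```python
import re

def extract_script_name(body, trigger="run this script:"):
    """Return the .ps1 filename that follows the trigger phrase, or None.
    The character class allows only word chars, dots and dashes, so a
    crafted email can't smuggle in path separators (e.g. '..\\evil.ps1')
    to escape the scripts folder."""
    match = re.search(re.escape(trigger) + r"\s*([\w.-]+\.ps1)", body, re.IGNORECASE)
    return match.group(1) if match else None
```

Whatever the extractor returns would then be joined onto the fixed scripts folder (c:\scripts in my example) and executed; anything that fails the match is simply ignored.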


imapx.dll and related


Currently, this is not officially a feature of LogicMonitor so you probably cannot get help from tech support.