

LogicMonitor has a great DataSource for detecting when website certificates expire, but it requires the certificate to be exposed on a TCP port. A customer recently needed to check a RADIUS certificate that was not exposed via TCP. This DataSource takes a different approach: it uses a PowerShell command to retrieve some or all of the certificates on the specified server, then looks at the expiration date and does some math to show the number of days until each one expires.



  1. Download this DataSource and import it into your LogicMonitor account.  Settings > DataSources > Add > from File
  2. Set the ‘Applies to’ to the server that has the certificate.
  3. Set the ‘Filter’ to look for the certificate(s) you care about.
  4. If you haven’t already, enable PowerShell remoting (‘PS remoting’). This is needed because the script uses ‘Invoke-Command’ to run the command on the target server (not the collector). Some PowerShell cmdlets let you specify the target as a parameter, but this one and several others don’t allow that.
  5. Test by clicking the ‘Poll now’ button on the ‘Raw Data’ tab
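The number the DataSource reports is simple date math: subtract today’s date from the certificate’s ‘NotAfter’ date. The DataSource itself does this in PowerShell; below is a minimal Groovy sketch of just the calculation, using a made-up expiration date and a fixed “today” for illustration.

```groovy
import java.text.SimpleDateFormat

def fmt = new SimpleDateFormat("yyyy-MM-dd")
fmt.setTimeZone(TimeZone.getTimeZone("UTC")) // keep the example free of DST wobble
def notAfter = fmt.parse("2031-05-01")  // hypothetical certificate 'NotAfter' date
def now = fmt.parse("2031-04-01")       // use new Date() for the real thing
def msPerDay = 1000L * 60 * 60 * 24
def daysLeft = ((notAfter.time - now.time) / msPerDay) as long
println "days_until_expiration=" + daysLeft  // → days_until_expiration=30
```

A negative result would mean the certificate is already expired, which is an easy threshold to alert on.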


At my previous job I used the free edition of a ticketing system (FreshService). I liked it, so I integrated it with LogicMonitor to create tickets on certain alerts, change the ticket status to pending when I acknowledge the alert, and finally ‘close’ the ticket when the alert ‘clears’.


Below is how you configure the ‘webhooks’ in LogicMonitor. Copy and paste the values into the fields.
You need to add this to an escalation chain to send alerts/info to the ticketing system.

For ‘Active’ alerts (aka new alerts):

Method: POST


{ "helpdesk_ticket": { "subject": "##ALERTID##", "description": "##MESSAGE##", "email": "", "priority": 1 }}


For acked alerts:

Method: PUT


{ "helpdesk_ticket": { "status": 3, "custom_field": { "note1_138794": "ack note ##MESSAGE##" }}}

For cleared alerts:

Method: PUT


{ "helpdesk_ticket": { "status": 4, "custom_field": { "note2_138794": "close note ##MESSAGE##" }}}


[x] Include an ID provided in HTTP response when updating alert status
Format: JSON

JSON Path:  item.helpdesk_ticket.display_id



  1. The URL begins with https://. Also replace ‘’ with the domain you created on FreshService.
  2. The email should be one of your valid email addresses that FreshService allows to submit tickets.
  3. Status=3 means ‘pending’, status=4 means ‘resolved’, and status=5 means ‘closed’.
  4. note1 and note2 are custom text fields I added in FreshService > Admin > Field templates. The 138794 is my account ID – replace it with your own, which you get back from a ‘GET’ call. I needed this because the API would not let me add a ‘note’ and change the status in one step. Also, the API would not let me change the ‘subject’ or ‘description’ when updating with the ‘PUT’ method. Hopefully they’ll add a ‘PATCH’ method to their API soon.
  5. I learned how to do this from the help docs on the FreshService website.
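The ‘JSON Path’ above (item.helpdesk_ticket.display_id) tells LogicMonitor where to find the ticket ID in the create-ticket response so the later PUT calls can update the same ticket. A quick Groovy illustration of walking that path; the response body here is a trimmed, made-up example, not FreshService’s full payload.

```groovy
import groovy.json.JsonSlurper

// trimmed, hypothetical FreshService response to the ticket-create POST
def body = '{"item":{"helpdesk_ticket":{"display_id":1234,"status":2,"subject":"DS12345"}}}'
def parsed = new JsonSlurper().parseText(body)
def ticketId = parsed.item.helpdesk_ticket.display_id // the JSON Path: item.helpdesk_ticket.display_id
println "ticket id = " + ticketId  // → ticket id = 1234
```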
I created a ‘PropertySource’ that builds a shortened ‘sysinfo’ string in a new property I called ‘short_sysinfo’. I did it for Cisco devices, but it can easily be adapted to other devices.
See the example differences below.
It’s helpful for reports because it removes the unhelpful and redundant text that makes the string too long.
It’s fairly flexible, so it can be tweaked.
This is short_sysinfo…
Catalyst L3 Switch (CAT3K_CAA-UNIVERSALK9-M), Version 16.3.2, RELEASE (fc4)
instead of…
Cisco IOS Software [Denali], Catalyst L3 Switch Software (CAT3K_CAA-UNIVERSALK9-M), Version 16.3.2, RELEASE SOFTWARE (fc4) Technical Support: Copyright (c) 1986-2016 by Cisco Systems, Inc. Compiled Tue 08-Nov-16 17:31 b
This is the short_sysinfo…
C2900 (C2900-UNIVERSALK9-M), Version 15.4(3)M7, RELEASE (fc1)
instead of…
Cisco IOS Software, C2900 Software (C2900-UNIVERSALK9-M), Version 15.4(3)M7, RELEASE SOFTWARE (fc1) Technical Support: Copyright (c) 1986-2017 by Cisco Systems, Inc. Compiled Thu 02-Feb-17 21:48 by prod_rel_team
This is short_sysinfo…
(ucs-6100-k9-system), Version 5.0(3)N2(3.12f), RELEASE
instead of long…
Cisco NX-OS(tm) ucs, Software (ucs-6100-k9-system), Version 5.0(3)N2(3.12f), RELEASE SOFTWARE Copyright (c) 2002-2013 by Cisco Systems, Inc. Compiled 2/27/2017 8:00:00
If you’re curious, below is the Groovy script I wrote:
info = hostProps.get("system.sysinfo")
step_one   = info.replaceAll('Technical Support:', '') // replace the string with nothing (remove it)
step_two   = step_one.replaceAll('SOFTWARE', '') // remove the string 'SOFTWARE'
step_three = step_two.replaceAll('Software', '')
if (info.contains('Copyright')) {  // avoid errors if the string doesn't contain 'Copyright'
    index_of_copyright = step_three.indexOf('Copyright') // position where 'Copyright' starts
    step_four = step_three.substring(0, index_of_copyright) // keep from start up to 'Copyright'
} else {
    step_four = step_three
}
if (info.contains(',')) { // avoid errors if the string doesn't contain a comma
    index_of_first_comma = step_four.indexOf(',') + 1 // first comma is right after 'Cisco IOS Software'
    step_five = step_four.substring(index_of_first_comma) // keep from first comma to end
} else {
    step_five = step_four
}
step_six = step_five.trim() // trim leading and trailing spaces if they exist
println "short_sysinfo=" + step_six
return 0

A customer asked if these old operating systems would work so I tested them. Yes, they work. See my videos below.


Windows XP: I used Professional edition with SP3 (Service Pack 3), 32-bit, firewall off, joined to my Windows domain.
Since I couldn’t figure out how to download and install Windows Updates on XP, I tried a free utility called ‘Portable Update’ and installed ~150 updates.

Windows 2000 with SP4 (Service Pack 4): Windows 2000 only has a 32-bit architecture, and a firewall is not a feature. It’s joined to my Windows domain.
I tried ‘Portable Update’ on Windows 2000, but I got certificate errors and was unable to find a place to download the Microsoft fix KB931125 that several articles said would get it working.

In both cases, the LogicMonitor version was v90 and the collector version was 23.102. It worked in tests where the collector was Windows 2012 R2 and Windows 2008 R2.

Disclaimer/Warning: Microsoft stopped supporting Windows XP in 2014 and Windows 2000 in 2010. I don’t recommend using them.


Sometimes you want to monitor HP hardware directly without installing HP’s software on Windows or Linux. If you have iLO version 4 or newer, you can do this. I combed through the MIB files to find some good OIDs and created these datasources to monitor key metrics in the following categories:




  • Download the DataSource files and import them into your account ( Settings > DataSources > Add > From file )
  • Add a SysOID map ( –> HP_iLO ) to automate the setting of categories or manually add ‘HP_iLO’ to ‘system.categories’. This is used to ‘apply’ the datasources.
  • Adjust the thresholds if needed
  • Test by using the ‘Raw data’ tab and clicking ‘Poll now’, or wait.

Here’s a recording:

Here’s my slide deck Webinar- Kick it up a notch_FINAL


What does the ‘phone’ alert sound like from LogicMonitor?  Listen to a phone alert I got. I responded by scheduling downtime for 5 hours for that alert. I can also acknowledge or escalate immediately.



A customer wanted to use a freeware tool called SmartCtl.exe to check the health of physical drives in Windows. I wrote a PowerShell script to do this. It runs the command on the remote/target computers. It checks for ‘PASSED’ in the result and alerts if it didn’t pass. It does this for each physical drive that it finds.

The screenshot below shows the output from the regular command. I convert it to <attribute name> = <value> so it can be ‘interpreted’ more easily by the RegEx in LogicMonitor.


  1. Download this DataSource file (link) and install it in your account ( Settings > DataSources > Add > From file)
    Here’s the DataSource file ( link ) for the DataSource that retrieves the ‘attributes and their values’. My example only captures ‘Reallocated_Sector_Ct’ and ‘CRC_Error_Count’. Don’t forget to set your desired thresholds (if any).
  2. Set the ‘applies to’ in DataSource so it applies to the target computers you want to monitor.
  3. Install SmartCtl.exe on each of your target computers (download link)
  4. Enable PowerShell remoting feature if it isn’t already. (Microsoft link)
  5. Test by looking at ‘Raw Data’ tab.
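The conversion to <attribute name> = <value> mentioned above boils down to splitting each row of smartctl’s attribute table on whitespace and keeping the name and raw-value columns. The DataSource does this in PowerShell; here is a rough Groovy sketch of the same idea, using one typical smartctl table line.

```groovy
// one line from smartctl's attribute table:
// ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
def line = "  5 Reallocated_Sector_Ct   0x0033   100   100   010    Pre-fail  Always       -       0"
def cols = line.trim().split(/\s+/)
def pair = cols[1] + "=" + cols[-1]  // attribute name = raw value
println pair  // → Reallocated_Sector_Ct=0
```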


Sometimes you only need simple up/down monitoring. You can ‘hang’ this test on an existing device (no extra $ charge). I usually hang it on the collector, since that’s where the pinging happens anyway, but technically you can hang it off any existing device. There’s a similar datasource called ‘Port multi’ that can do a port check if your device or service listens on a specific port.

Below is my video because I think it’s easier to show you.

For people who prefer text instructions:

  1. Select the device (I recommend the collector)
  2. Pull down the blue arrow button that’s to the right of ‘Manage gear’ button then choose ‘add monitored instance’
  3. Type in ‘multi’ to find and select the datasource
  4. Type in the IP address or DNS name of what you want to ping in the ‘wildvalue’ field
  5. You should now see it under the datasource ‘branch’ on your device called ‘Ping (Multi)’
  6. To add more, click the datasource and then click the ‘Instances’ tab in the right pane. You can then add more ‘instances’.


It’s nice to know if the connection and login to a secure FTP site is working. This datasource uses a groovy script to connect then login and list the files. It even counts the number of files and folders in the root folder. If needed, you can add logic to alert you if a specified file exists.


  1. Download this DataSource file and import it into your account (Settings > DataSources > Add > from file )
  2. Set these properties so it can apply and work:
    sftp.user = mike
    sftp.pass = *****   (it auto hides)
    sftp.port = 22    (usually)
  3. Test by looking at the “Raw Data” tab. If it’s successful, the result is 0. Failure shows as 1.

DISCLAIMER:  official tech support for this is not available from LogicMonitor. Contact me and I will try to help you.


This Chrome Extension lets people use, create, modify, and run JavaScript scripts, and it has a GUI and a place to enter API credentials.
So far, it has a few example scripts in it.
My goal was to allow people to…
  • Run scripts – especially API calls without having to redo the credential overhead
  • Use a  GUI to see a description and ‘run’ a script
  • Run on most any operating system (Windows, Mac, Linux, Chromebook) without needing to install a scripting language.
You can see and edit the scripts by clicking the “edit mode” tab to switch back and forth between edit and run modes.
You can also use Chrome menu > Developer Tools  (F12 key on Windows) then click the “Console” tab to see it running and error messages.

Click HERE to get it from the Chrome Web Store.


LogicMonitor Tech support is not provided. If you have feedback or questions, I will try to help.


All the cool kids these days are using Docker for their apps. To keep cool, you want to monitor their load and status. LogicMonitor does this using cAdvisor (short for Container Advisor), a free container made by Google. There is a LogicMonitor DataSource that taps into the API of cAdvisor to monitor each of your containers. Below is a video showing how I set this up and how to troubleshoot.





Download and install the free container called cAdvisor.

You can do this when you run it or beforehand with this command:

docker pull google/cadvisor

The command you can use to start cAdvisor is shown below. Notice I used the ‘restart’ policy so it will run each time my computer boots, and ‘detach’ so I get my command prompt back after running the command.

docker run --restart unless-stopped --detach -v=/:/rootfs:ro -v=/var/run:/var/run:rw -v=/sys:/sys:ro -v=/var/lib/docker/:/var/lib/docker:ro --privileged=true -v=/cgroup:/cgroup:ro -p=8080:8080 --name=cadvisor google/cadvisor:latest

If the ‘Docker containers’ branch doesn’t show up on your server in LogicMonitor, you can try these troubleshooting steps. Make sure cAdvisor is running by going to your server IP address and port number (e.g. ). You should see the cAdvisor web page, and you should be able to click the link to open and see more details on each of your containers.

First, the disclaimers: Microsoft stopped providing tech support and updates back in 2015, and LogicMonitor tech support does not support running it as a collector. But I got the collector to install and work by installing this Microsoft patch (link). It seems this patch enables the more secure communications that LogicMonitor’s central servers require; specifically, the article says it allows the AES-128 and AES-256 encryption methods.

These are the first 2 symptoms you see during installation. This patch overcomes these errors.

Send http request failed
errcode: -2146893018 (0x80090326)
message: The message received was unexpected or badly formatted.

Read http response failed
errcode: 12017 (0x00002ef1)
message: The operation has been canceled

Note: This was Windows 2003 (32-bit) with SP2. I also did all the Microsoft updates that were available.

Some of Microsoft’s Hotfixes require you to submit your email address and they send you a download link.


If you use Scheduled Tasks, you probably want to know when they fail. This DataSource uses PowerShell to show all your enabled scheduled tasks and their LastResult code. 0 (zero) means success. Believe it or not, there are ~40 other codes. Microsoft, in its infinite wisdom, lists them here in hexadecimal, but PowerShell shows them as decimal. I will attach the most common conversions. I took advantage of the Discovery Filter feature of LogicMonitor to exclude the ~50 typical tasks that Microsoft includes on most Windows computers, plus 2 with “Optimize Start Menu” in the name.
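Converting between the hexadecimal codes Microsoft documents and the decimal LastResult values PowerShell shows is a one-liner in most languages. A Groovy sketch, using 0x80070002 (“the system cannot find the file specified”) as the example:

```groovy
def hex = "80070002"                  // the code as Microsoft documents it (0x80070002)
def decimal = Long.parseLong(hex, 16) // the code as PowerShell's LastResult shows it
println decimal                       // → 2147942402
println Long.toHexString(decimal)     // and back again → 80070002
```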


  1. Download the DataSource file and add them to your account (Settings > DataSources > Add > From file )
  2. Change the “Applies to” to apply them to the computer(s) that have Scheduled Tasks
  3. Test by making sure expected values show in “Raw Data” tab.

Disclaimer: LogicMonitor tech support cannot officially provide support for this DataSource.



It’s often helpful to know how fast your WAN is performing between various offices. This DataSource copies a 10MB file (or a size you specify) from any collector computer to any target server(s) specified by a UNC path. To more closely approximate real-life users, it automatically creates the test file and fills it with random content each time, so it doesn’t get cached and compressed by WAN-accelerator appliances.
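Once the copy has been timed, throughput is just file size over elapsed time. A Groovy sketch of the math the datapoints presumably boil down to, with a made-up 8-second copy time:

```groovy
def bytesCopied = 10 * 1024 * 1024  // the 10MB test file
def seconds = 8.0                   // measured copy time (hypothetical)
def mbps = (bytesCopied * 8) / (seconds * 1000000)  // megabits per second
println "throughput = " + mbps + " Mbps"
```

With those numbers the result is roughly 10.5 Mbps, which is the kind of value you would set your alert thresholds around.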


  1. Download the DataSource file and import it into your account ( Settings > DataSources > Add > From file )
  2. To do the file creation, I used a free utility called DUMMYCMD.exe (download here) and put it on the collector computer in any folder that’s in the PATH.
  3. On the device tree, select your desired collector and click the blue pull-down menu and click “add monitored instance” for each target to copy to. Fill in the blanks – see example below
  4. Test by looking at the “Raw Data” tab after the first few polls come in.
  5. Change the alert threshold to suit your needs.


  • This requires Windows collector because it’s a PowerShell script.
  • Don’t expect this datasource to be officially supported by LogicMonitor tech support.


The Collector listens on these ports:

162 UDP for SNMP traps
514 UDP for syslog messages
2055 UDP for Netflow
6343 UDP for Sflow
2056 UDP for jflow

‘within’ the collector:

7211 TCP Watchdog service (sbwinproxy.exe)
7212 TCP Collector Service (java.exe)
7213 TCP Watchdog service (java.exe)
7214 TCP Collector Service (java.exe)
31000 TCP Collector service (java.exe) establishes connection with wrapper.exe
31001+ TCP Watchdog service (java.exe) establishes connection with wrapper.exe

Common outbound ports:

135 TCP for Windows. Unfortunately, after the initial connection it uses one other port between 1000-65000 (decided on the fly, but you can lock it down to TCP 24158, or with more clicks you can specify any port)
22 TCP for SSH connections
80 TCP for HTTP
443 TCP for HTTPS
25 TCP for SMTP (email)
161 UDP for SNMP
1433 TCP for Microsoft SQL
1521 TCP for Oracle
3306 TCP for MySQL
5432 TCP for PostgreSQL
Various ports, both TCP and UDP, for various apps like JMX and others

In the case of LogicMonitor, I was trying to run the collector on a computer in domain1 while the target computer I wanted to monitor was in a different domain (say domain2). During the collector install I chose “use Local SYSTEM account”, and I specified the usual wmi.user and wmi.pass properties on the group. The datasources using WMI worked fine, but the datasources using PerfMon did not even show up. To get this to work, I created a local user account on the collector computer and then changed the two LogicMonitor services to use this account.

I experimented with PowerShell to try retrieving PerfMon counters. Unlike most cmdlets, “Get-Counter” cannot take alternate credentials, but you can wrap it in Invoke-Command.


LogicMonitor Datasources are designed for grabbing NUMBERS, not text. That’s good for performance info, but there’s a lot of useful info that is NOT numbers: model names/numbers, serial numbers (with letters, dashes, etc.), contact name, and location (street address, etc.). PropertySources can retrieve info using a Groovy script and set properties automatically. For the official help article, click here. Note: the web interface warns that you MUST have collector version 22.180 or newer (I think the help doc mentions 22.227 or newer because that’s required for the “Test” button). In any case, the current “early access” version is 22.253, so I suggest you install that or newer.

Here’s what the properties look like. Notice the auto. prefix.


Click Settings > PropertySources > Add (note: the name is being changed from Property Rules to PropertySources). Copy & paste the info below to fill in the fields. Be sure to use the TEST button before you save.


My next goal is to write more of these to detect additional things. This would be a nifty automatic way to feed dynamic groups, inventory reports, and searches.

  • What version of MS-SQL is running

Description: Sets ‘location=whatever’, ‘contact=whatever’, etc., from SNMP values under the common ‘system’ branch

Name: SNMP basic info

Applies to:  hasCategory(“SNMP”)

Groovy Script:

import com.santaba.agent.groovyapi.snmp.Snmp; // for SNMP get

def hostname = hostProps.get("system.hostname")
def OID_model = "1.3.6.1.2.1.1.1.0" // sysDescr
def model = Snmp.get(hostname, OID_model);
println "model=" + model // show on screen
def OID_uptime = "1.3.6.1.2.1.1.3.0" // sysUpTime
def uptime = Snmp.get(hostname, OID_uptime);
println "uptime=" + uptime
def OID_contact = "1.3.6.1.2.1.1.4.0" // sysContact
def contact = Snmp.get(hostname, OID_contact);
println "contact=" + contact
def OID_system_name = "1.3.6.1.2.1.1.5.0" // sysName
def system_name = Snmp.get(hostname, OID_system_name);
println "system_name=" + system_name
def OID_location = "1.3.6.1.2.1.1.6.0" // sysLocation
def location = Snmp.get(hostname, OID_location);
println "location=" + location
return 0

/* ==== EXAMPLE OUTPUT ========
location=11 Main Street, Goleta, CA 93111
contact=Mike Suding
model=RouterOS RB750Gr2
uptime=242 days, 12:32:43.00
system_name=PE-E48D8C982B35
======== */

Description: Set ‘serial_number=whatever’ and ‘BIOS_version=whatever’ using WMI (useful if you run Windows on bare metal, not a hypervisor)

Name:  Serial Number and BIOS from Windows

Applies to: system.model !~ "Virtual" && system.model !~ "domU" && isWindows()
(note: for efficiency, narrow to only Windows on bare metal, not virtual)

Groovy Script:

import com.santaba.agent.groovyapi.win32.WMI;
import com.santaba.agent.groovyapi.win32.WMISession;

def hostname = hostProps.get("system.hostname")
my_query = "Select * from Win32_BIOS"
def session = WMI.open(hostname); // open a WMI session to the target
def obj = session.queryFirst("CIMv2", my_query, 10);
println "serial_number=" + obj.SERIALNUMBER
println "bios_version=" + obj.SMBIOSBIOSVERSION
return 0

Description: Set ‘postgres=yes’ by checking whether a specific TCP port is open (useful for detecting lots of applications; e.g. Oracle uses port 1521, Postgres uses 5432, etc.)

Name:  Is Postgres database running

Applies to: Servers()
(note: for efficiency, narrow to only servers, not switches etc.)

Groovy Script:

def hostname = hostProps.get("system.hostname")
def port_number = 5432
try {
    new Socket(hostname, port_number).close() // try to open the REMOTE port
    println "postgres=yes" // remote port can be opened
} catch (Exception e) {
    println "connection on port " + port_number + " failed"
}
return 0

Description: Set ‘hyper-v=yes’. Detects Microsoft Hyper-V by checking whether the service named VMMS (Hyper-V Virtual Machine Management) is running

Name:  HyperV

Applies to: system.model !~ "Virtual" && system.model !~ "domU" && isWindows()
(note: for efficiency, narrow to only Windows on bare metal)

Groovy Script:

import com.santaba.agent.groovyapi.win32.WMI;
import com.santaba.agent.groovyapi.win32.WMISession;

def hostname = hostProps.get("system.hostname")
my_query = "Select name,state from Win32_Service Where Name ='vmms'"
def session = WMI.open(hostname); // open a WMI session to the target
def obj = session.queryFirst("CIMv2", my_query, 10);
if (obj.STATE == "Running") {
    println "hyper-v=yes"
} else {
    println "hyper-v=no"
}
return 0

Description: Set ‘Web_server=something’ property by retrieving the web page and grabbing text next to “SERVER:” (example:  Microsoft-IIS/8.5)

Name:  Web_server

Applies to:  isWindows()   (note: for efficiency, narrow to only Windows)

Groovy Script:

import com.santaba.agent.groovyapi.http.*; // needed for http.get 
def hostname=hostProps.get("system.hostname"); 
def my_url="http://" + hostname 
my_input_string = HTTP.get(my_url) 
def my_regex = /(?i)server: (.*)/ 
def my_matcher = (my_input_string =~ my_regex) 
my_stuff = my_matcher[0][1] 
println "web_server=" + my_stuff 
return 0; 

Description: Set ‘domain_controller=yes’ property if the service ‘NTDS’ (Active Directory Domain Services) is running

Name:  Active Directory

Applies to:   isWindows()  (note: for efficiency, narrow to only Windows)

Groovy Script:

import com.santaba.agent.groovyapi.win32.WMI;
import com.santaba.agent.groovyapi.win32.WMISession;

def hostname = hostProps.get("system.hostname")
my_query = "Select name,state from Win32_Service Where Name ='NTDS'"
def session = WMI.open(hostname); // open a WMI session to the target
def result = session.queryFirst("CIMv2", my_query, 10);
if (result.STATE == "Running") {
    println "active_directory=yes"
} else {
    println "active_directory=no"
}
return 0;

Description: Set ‘exchange=yes’ property if the service ‘MSExchangeADTopology’ (Microsoft Exchange Active Directory Topology) is running

Name:  Exchange
Applies to: isWindows()     (note: for efficiency, narrow to only Windows)

Groovy Script:

import com.santaba.agent.groovyapi.win32.WMI;
import com.santaba.agent.groovyapi.win32.WMISession;

def hostname = hostProps.get("system.hostname")
my_query = "Select name,state from Win32_Service Where Name ='MSExchangeADTopology'"
def session = WMI.open(hostname); // open a WMI session to the target
def result = session.queryFirst("CIMv2", my_query, 10);
if (result.STATE == "Running") {
    println "exchange=yes"
} else {
    println "exchange=no"
}
return 0

Description: Set ‘MS-SQL=yes’ if the service named MSSQLSERVER is running

Name:  MS_SQL

Applies to: isWindows()     (note: for efficiency, narrow to only Windows)

Groovy Script:

import com.santaba.agent.groovyapi.win32.WMI;
import com.santaba.agent.groovyapi.win32.WMISession;

def hostname = hostProps.get("system.hostname")
my_query = "Select name,state from Win32_Service Where Name ='MSSQLSERVER'"
def session = WMI.open(hostname); // open a WMI session to the target
def obj = session.queryFirst("CIMv2", my_query, 10);
if (obj.STATE == "Running") {
    println "MS-SQL=yes"
} else {
    println "MS-SQL=no"
}
return 0

Description: Set ‘MS_SQL_version=’ showing the version. Uses a SQL query with Windows auth, but can be changed to use SQL auth

Name:  MS_SQL version

Applies to: system.db.mssql     (narrow to only MS-SQL computers)

Groovy Script:

import groovy.sql.Sql // needed for SQL connection and query

def hostname = hostProps.get("system.hostname");
def url = "jdbc:sqlserver://" + hostname + ";databaseName=master;integratedSecurity=true";
def driver = "com.microsoft.sqlserver.jdbc.SQLServerDriver"; // Microsoft JDBC driver class
sql = Sql.newInstance(url, driver) // connect to the SQL server
def query_result = sql.firstRow("SELECT @@version as info")
full_version_info = query_result.info
sql.close() // close connection
input_string_1 = full_version_info
def regex_1 = /(?<=SQL Server)(.*)(?=-)/ // capture between 'SQL Server' and '-'
def matcher_1 = (input_string_1 =~ regex_1)
version = matcher_1[0][1].trim()
input_string_2 = full_version_info
def regex_2 = /(?<=\n)(.*)(?=Edition)/ // capture between line break and 'Edition'
def matcher_2 = (input_string_2 =~ regex_2)
edition = matcher_2[0][1].trim()
println "ms_sql_version=" + version + " " + edition
return 0;


Since many IT people use a ticketing system, they often want to track and handle the alerts and the associated “work” in a ticketing system like Zendesk.

Here’s what this integration does:

  1. When an alert triggers, LogicMonitor sends a notification (a webhook call) and this creates an incident “ticket” in Zendesk with a status of “new”
  2. When someone acknowledges the alert, LogicMonitor sends another notification (a webhook call) and this adds a comment to the Zendesk ticket and changes the status from “new” to “pending” to inform people.
  3. When the problem is fixed or goes away, LogicMonitor sends a ‘clear’ notification (via a webhook call) and this adds a comment to the Zendesk ticket and changes the status to “solved”


  • I made a LogicMonitor user called “Zendesk user”. It wasn’t required but it just makes things easier to understand when looking at settings. The role/permissions are not important.
  • I created an “Integration” called “Zendesk” (this makes an extra ‘contact method’ show up when you build an escalation chain). See below for details on 3 items (new, ack, and close) in this ‘integration’.
  • I created an Escalation chain called “Zendesk escalation chain” and I specified the Stage to go to user “Zendesk” and picked the contact method called “Zendesk test”
  • I created a rule called “Zendesk rule”. For testing, I only made it send notifications for one device and only for my one special datasource called “Mike ping”, which makes it less disruptive to other people.



Below is the info I typed in the “Alert Data” field. I chose JSON format. Notice that for this customer I needed to include one custom field (a phone number) because it was set as a ‘required field’ in Zendesk. If I didn’t specify it, Zendesk would not allow me to change the ticket status to “solved”.

Type of webhook:  Active alert (aka new alert)

HTTP method: POST

URL:  https://

Use Custom Headers: NO

Data type: RAW (not key/value)

Format: JSON

{"ticket": {"requester": {"name": "Mike Suding", "email": ""}, "subject": "LM alert ##ALERTID## ##LEVEL## on host named ##HOST## starting ##START##", "comment": { "body": "This is an alert from LogicMonitor via webhook \n ##MESSAGE##"}, "priority": "normal", "custom_fields": [{"id": 23842396, "value": "805 phone"}] }}

Type of webhook: Acknowledge

HTTP method: PUT

URL: https://

Use Custom headers: NO

Data type: RAW    and format: JSON

{"ticket": { "comment": { "body": "Since the LogicMonitor alert was acked, this comment is added to ticket and status changed to pending ##ALERTID## ##ADMIN.EMAIL##" }, "status": "pending" }}

Type of webhook: Cleared (ie close the ticket)

HTTP method: PUT

URL:  https://

Use custom headers: NO

Data type: RAW     and format: JSON

{"ticket": { "status": "solved", "comment": { "body": "The alert is cleared ##ALERTID## so we will set ticket status to SOLVED", "author_id": 21950241428 }}}

Note: you need to specify YOUR author_id (a Zendesk ID number for the agent/user). You can see it in the response when you create a ticket; I used the curl command line. It also shows in the response as ‘submitter_id’.
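If you’d rather not eyeball the raw response, here is a Groovy sketch of pulling the submitter ID out of the create-ticket response. The body below is a trimmed, made-up example; Zendesk’s real response has many more fields.

```groovy
import groovy.json.JsonSlurper

// trimmed, hypothetical Zendesk ticket-create response
def body = '{"ticket":{"id":42,"submitter_id":21950241428,"status":"new"}}'
def submitterId = new JsonSlurper().parseText(body).ticket.submitter_id
println "author_id to use: " + submitterId  // → author_id to use: 21950241428
```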

REQUIREMENTS: You need to use Firefox or Safari (on Mac); Chrome doesn’t support Java browser applets anymore. Also, on the target Windows computer, you need to disable NLA (Network Level Authentication), an RDP setting shown in the video.




Some disk drives and SSDs (solid state drives) have a feature called “S.M.A.R.T.”, which means “Self-Monitoring, Analysis & Reporting Technology”. On Windows computers, this information is stored in WMI. I wrote a PowerShell script-based datasource to collect this info. Since not all drives have the SMART feature, I put in a datapoint that also shows whether the drive has this capability (1=yes, 0=no).




  1. Download this datasource file and add it to your account:  Settings > Datasources > Add > From file
  2. Wait a few minutes for it to start collecting….Test it by looking in the “Raw Data” tab.
  3. I set the alert threshold to ‘warning’ if the value is anything other than “OK”, but you can change it.


As with most of my datasources, you cannot get official technical support from LogicMonitor.

I was experimenting and testing with my second-hand Cisco ASA 5505.  I learned some things that might be useful.

I think that starting with version 8.0 of Cisco’s ASA software, it shows your SNMP community string in the config as just ***** for security reasons. If you want to see the actual string, get into ‘enable’ mode and type the command shown below:

MikeASA1# more system:running-config | inc snmp
snmp-server host inside community mysnmp805
snmp-server location Atlanta office
snmp-server contact
snmp-server community mysnmp805
snmp-server enable traps snmp authentication linkup linkdown coldstart

In this case, my community string is set to “mysnmp805”. Notice it appears on two lines. The ‘snmp-server host’ line specifies the IP address that’s allowed to communicate; it’s also the address where the device sends its SNMP traps. ‘MikeASA1#’ is just the prompt showing the name of my device. I hope this helps.


This is useful if you want to be notified when you go over some desired quantity. For example, if your subscription commitment is 100 devices, you might send yourself a “warning” alert at 95 devices and a “critical” alert at 99 devices.
It might also be useful if you have “child accounts”, which are additional LogicMonitor accounts that are entirely separate but get billed to the same person/company.


It’s a DataSource using a Groovy script that calls the REST API to get a list of all devices in your account. Conveniently, the total device count is included in the API response. Maybe someday I’ll enhance it to alert when a device is added or deleted. To minimize load, and because the count usually doesn’t change rapidly, I set the collection interval to the maximum of 60 minutes.
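Because the total comes straight from the response metadata, the script doesn’t have to page through and count devices itself. A Groovy sketch of reading the total; the body below is a trimmed, made-up response, not the full payload the devices endpoint returns.

```groovy
import groovy.json.JsonSlurper

// trimmed, hypothetical response from the REST devices endpoint
def body = '{"status":200,"data":{"total":103,"items":[]}}'
def total = new JsonSlurper().parseText(body).data.total
println "device_count=" + total  // → device_count=103
```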

Qty devices and threshold


  1. If you haven’t already, create a separate LogicMonitor user account (called, for example, “api.user”) and generate an API Token so you can use the API. Normally I recommend creating a separate role with ‘manage’ permission on “devices” only (i.e. not dashboards, services, settings, reports, etc.), but this particular datasource only needs ‘read only’ permission.
  2. Set the following properties. For simplicity, I recommend you set them on the ‘root/base’ folder so it’s seen by any devices you apply this datasource to. = <the cryptic looking ID that you generated in step1>
    rest.api.key = <the cryptic looking key that you generated in step1>
    account = <the part of your URL before; usually your company name>
  3. Download the datasource file and add it to your company account via Settings > Datasource > Add > From file.
  4. Apply this datasource to any one of your servers. For simplicity, I suggest you apply it to one of your collectors.
  5. If desired, set an alert threshold and you might want a special Alert Rule and Escalation Chain to notify a person or group of people if it goes over a threshold.
  6. Test by waiting 1-3 minutes for the first poll/run then look at ‘Raw Data’ tab to make sure you see what you expect.


OpenVMS has a long history and was very popular because of its reliability and industry-specific applications. It supports SNMP, but the built-in datapoints are not very helpful. Luckily, a company named ComTek makes an SNMP agent with lots of good datapoints (click here for more details). Below are screenshots that show some examples of the datapoints and graphs.

vms-misc vms-disks vms-errors



  1. Buy (or get a trial of) ComTek and install it on your OpenVMS system.
  2. Download these DataSource files (disk, misc, errors) and add them to your LogicMonitor account using Settings > Datasources > Add > From file.
  3. You can add more OIDs to these datasources or create additional datasources. Look at the MIB file to figure out which OIDs to use.


I bought UniFi for my home for these reasons:

  • Relatively low cost
  • Designed for enterprise with centralized controller
  • Has an API
  • We use it in our office
  • To learn more

I made this datasource using Groovy so it will work on Linux or Windows collectors. It shows lots of info including:

List of Access Points with info on:

  • Qty of clients
  • Bytes used on uplink port Tx and Rx
  • Bytes used by 2GHz client devices and 5GHz client devices
  • CPU, memory, and uptime


List of Client devices connected with info on:

  • Bytes transmitted and received and rate transmit and receive
  • Signal strength in dBm
  • Uptime
Graphs for each client



Download the DataSource for APs and download the datasource for Clients.

Import them into your LogicMonitor account using Settings > Datasources > Add > From file

If you haven’t already, add your UniFi controller or CloudKey to LogicMonitor and set properties for Unifi.user and Unifi.pass

If you have multiple sites, clone the datasource to make a copy and edit the site name at the top of both scripts in the datasource. I also made a datasource (download) to show the site name because the UniFi web interface has you changing the ‘description’ but the site name is assigned by the system and doesn’t show in the GUI. At least for me, the first site is named ‘default’ even if you change the site name in the GUI.

Test by looking at ‘Raw Data’ tab and graphs.


Thanks to Erik Slooff on the Ubiquiti community forum; he made the API calls easier to find, since Ubiquiti has not officially documented them.

Thanks to my co-worker Jerry Wiltse for helping me on the Groovy script. I did my initial one with cURL then PowerShell.

As usual, these datasources are not officially supported by LogicMonitor tech support (until/unless they get reviewed and published in the core repository)


LogicMonitor can easily alert you when a service fails, but it sure would be nice to TRY to restart it automatically (and alert you if it cannot). This datasource does just that.
If it’s running, a value of “1” will show.
If it’s stopped, it will try to restart it; a value of “2” will show.
If it’s still stopped, it will try a second time; if that succeeds, a value of “3” shows.
If it fails to start after two tries, a value of “4” will show.
I chose this simple numbering so it can trigger a warning if the value is 2 or greater, an error if 3 or greater, and a critical alert if 4 or greater.
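The four status values above amount to a small decision procedure. Here is a sketch in Python, with hypothetical `is_running`/`try_start` callables standing in for the datasource’s actual WMI service calls:

```python
def monitor_service(is_running, try_start):
    """Return the status code described above.

    is_running: callable returning True when the service is up.
    try_start:  callable that attempts a start and returns True on success.
    (Hypothetical helpers standing in for the datasource's real calls.)
    """
    if is_running():
        return 1            # running normally
    if try_start():
        return 2            # was stopped; first restart attempt succeeded
    if try_start():
        return 3            # second restart attempt succeeded
    return 4                # still stopped after two tries

# e.g. a service that refuses to start:
print(monitor_service(lambda: False, lambda: False))  # 4
```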


  1. Download this DataSource file then import it into your account. Settings > Datasources > Add > From file
  2. While you have your desired computer selected in the Device tree, add a monitored instance (blue down arrow > Add monitored instance). Type the service name (e.g. Spooler) in the “wildcard” field. Give this instance a name and description. You can add additional instances on the “Instances” tab of the right pane.
  3. Test by looking in the “Raw Data” tab and “Graph”. A value of 1 should show normally. Manually stop the service and you should see a 2 on the next ‘cycle’ which is at 1 minute interval.

LIMITATIONS/DISCLAIMER: Official LogicMonitor tech support is not available. Use at your own risk.
This currently requires the typical setup whereby your collector is running as a domain account and computer you are monitoring is also on that domain. i.e. I haven’t enhanced it to allow it to use wmi.user and wmi.pass properties specified on the device.

This shows when a service is restarted





Citrix stores its license counts in WMI. If you buy several licenses for the same product, you probably want them to add up, and this datasource does exactly that (e.g. if you have 3 licenses: 200 users + 100 users + 50 users = 350 users). The instance names are the license names, for example MPS_ENT_CCU. I set it to trigger alerts when your licenses run low (90% = warning, 95% = error, 98% = critical).
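The summing and threshold logic can be sketched like this (illustrative Python, not the datasource’s actual code):

```python
def license_alert(installed_counts, in_use):
    """Sum multiple license packs for one product and grade usage.

    Thresholds match the datasource: 90% warning, 95% error, 98% critical.
    """
    total = sum(installed_counts)          # e.g. 200 + 100 + 50 = 350
    pct = 100.0 * in_use / total
    if pct >= 98:
        level = "critical"
    elif pct >= 95:
        level = "error"
    elif pct >= 90:
        level = "warning"
    else:
        level = "ok"
    return total, round(pct, 1), level

print(license_alert([200, 100, 50], 332))  # (350, 94.9, 'warning')
```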



  1. Download this DataSource file and add it to your account. Click Settings > Datasources > Add > From file
  2. Apply it to your Citrix License server(s).
  3. Test by looking at your graphs and ‘Raw data’ tab


It helps to make sure your Database Availability Group(s) are healthy and replicated so they are ready to fail over in the event of a problem. Copies are often kept in different sites for “site resiliency”.




  • Download this DataSource file and add it into your LogicMonitor account (Settings > Datasources > Add > From file)
  • Make sure your collector computer has Microsoft’s Exchange Mgmt tools installed (here for Exchange 2013)
  • Set these properties on each Exchange server so it applies and can authenticate
    • ex.fqdn  (the name of the Exchange server)
    • ex.user (a username that has Exchange permissions)
    • ex.pass  (the password for this user)


Sometimes it’s important to monitor the speed of a file copy. I wrote this datasource because a customer wanted to monitor the time to copy a file from their London office to their Baltimore office. I chose to script it in PowerShell. To detect problems, I set it to alert if the file doesn’t exist and I set it to alert if the target file is not the same size as the source file. For his 10MB test file, I set the time thresholds to
>3 seconds = warning
>5 seconds = error
>7 seconds = critical

but you can easily adjust as needed.
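Mapping the measured copy time to a severity is straightforward; here is a sketch using the thresholds above (illustrative Python, not the datasource’s PowerShell):

```python
def copy_time_severity(seconds, warn=3, error=5, critical=7):
    """Grade the measured copy time against the example thresholds above."""
    if seconds > critical:
        return "critical"
    if seconds > error:
        return "error"
    if seconds > warn:
        return "warning"
    return "ok"

print(copy_time_severity(6))  # error
```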


  1. Download this DataSource file. Add it to your LogicMonitor account  (Settings > Datasources > Add > From file)
  2. On the device add a property called “copy_file” to specify the file you want to copy (e.g.  \\server2\share1\ )
  3. Adjust the alert threshold if necessary.
  4. As usual, don’t forget to test. I suggest you look at the “Raw data” tab to make sure the numbers look correct.



Below is an example link that takes you directly to a dashboard without requiring login. Basically, you must specify your company/account, username, and password in the URL. I recommend you minimize the risk by making sure the user has a READ-ONLY role, or permissions to only the dashboard(s) you want. I also suggest a password that uses only URL-friendly characters (letters, numbers, and _ - ~ . ).

To get and build this URL, navigate to your desired dashboard and add the portion


in the middle (after the index.jsp and before the #dashboard=XX)




It’s helpful sometimes to know the database names and the size of each, plus available space, user count, and status. This uses the JDBC database connection method to get a list of databases as ‘instances’, then gets detailed info on each of those databases. I filtered out the default system databases (master, model, msdb, and tempdb), but you can change that. In Nov 2016 I updated it to also work with Azure SQL, mainly because Azure doesn’t let you query master.sysdatabases; it uses sys.databases instead. I also changed it to use SQL authentication, which is commonly used with Azure.
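The Azure vs. on-premise difference boils down to which catalog view the discovery query reads. An illustrative sketch of choosing the query text (the real datasource’s SQL may differ):

```python
def database_list_query(is_azure):
    """Return a discovery query; Azure SQL has no master.sysdatabases,
    so sys.databases is used there. System databases are filtered out,
    matching the datasource's behavior."""
    system_dbs = ("master", "model", "msdb", "tempdb")
    table = "sys.databases" if is_azure else "master.sysdatabases"
    filt = " AND ".join("name <> '{}'".format(db) for db in system_dbs)
    return "SELECT name FROM {} WHERE {}".format(table, filt)

print(database_list_query(True))
```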


  1. Download this DataSource file and add it into your LogicMonitor account (Settings > DataSources > Add > From file ). It’s set to apply to all Windows servers but it will only show where there are instances (where the database query succeeds).
  2. On the device, set properties for jdbc.mssql.user and jdbc.mssql.pass if you use SQL authentication or change the URL/connection-string to use “integratedAuthentication=true” and remove the user= & password= .
  3. If desired, create threshold(s) to alert you.



This shows you how to configure Windows to overcome a limitation of many monitoring tools that retrieve events: most can only read from the “basic 4” logs (System, Application, Security, Setup). Back in the Windows Vista days (circa 2007), Microsoft changed the event log architecture slightly and added a new category called “Applications and Services” logs, which many software vendors now use. The “subscription” feature described here can copy events from those newer logs into one of the basic 4 mentioned above. In most cases, it’s appropriate to use the “Application” log.


Please see all the steps in this video I made:




This is helpful to avoid getting a flood of alerts when a device that others depend on (like a router or switch) fails: you don’t want alerts on every device connected behind it. It does a simple ping test on a device and, if the result is outside the threshold, it uses the LogicMonitor API to set an SDT on all devices in the same group as the applied device, including devices in subgroups (if any). The duration can be adjusted if needed.



  1. If you don’t already have a user account for API usage, please create one with just the minimum role that has permissions that are needed which is ‘modify’ for devices (to set the SDT).
  2. Download this DataSource file and add it into your LogicMonitor account (Settings > DataSources > Add > From file)
  3. Edit the script with settings for API usage (company (aka account name), api username, API password)
  4. Apply the datasource to the desired devices (usually router or a switch) by some clever method (perhaps name or property)
  5. Add these properties to the device that you apply it to so the script can use them
    ping_loss_threshold = 9   (note: this is in percent)
    SDT_duration = 8   (note: this is minutes)
  6. Please test. You may need to modify the default ping datasource (or create/use a different one) with settings that won’t trigger an alert too soon (ie increase the “consecutive” setting by a few so it doesn’t trigger before the SDT).
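The core decision the script makes can be sketched as follows. LogicMonitor’s SDT API expects epoch-millisecond start/end times; `sdt_decision` is an illustrative helper (not the script’s actual code), and the property names mirror ping_loss_threshold / SDT_duration above:

```python
import time

def sdt_decision(loss_percent, loss_threshold=9, sdt_minutes=8, now_ms=None):
    """If packet loss exceeds the threshold, return the (start, end)
    epoch-ms window to pass to the SDT API call; otherwise None."""
    if loss_percent <= loss_threshold:
        return None
    start = now_ms if now_ms is not None else int(time.time() * 1000)
    return (start, start + sdt_minutes * 60 * 1000)

print(sdt_decision(50, now_ms=1_000_000))  # (1000000, 1480000)
```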


There are 3 ways that I have used to setup permissions to allow WMI to work.
Note: These are all for the situation where the collector and target computers are all in the same Windows domain.

  1. Normally, the easiest setup is to create a separate dedicated user account for monitoring and add it to the ‘domain admins’ group. This works because, by default, the ‘domain admins’ group are in the local ‘Administrators’ group.
  2. Since WMI is designed to be used by a ‘local admin’, you can achieve this by using a GPO (Group Policy Object) to add a group to the local ‘Administrators’ group on every computer in your domain. (I may cover this in a separate article later.)
  3. The lowest permission method I know of involves setting very specific permissions as described below but the user does not even need to be a local admin (except on the collector computer).


The downside of method #3 is it takes ~2 minutes to click/configure for each target computer you monitor. I think this could be partially or fully automated using PowerShell or other scripting language but I haven’t found it or built it yet.



Summary Steps:

Step 1:

On target computer, put your “logicmonitor” user in these local groups:
– Distributed COM users
– Performance Monitoring users

  • note: If you do this, you shouldn’t need to use DCOMcnfg to set permissions.

Step 2:

  • If the computers are on a windows domain, put your “logicmonitor” user in the domain groups called ‘Distributed COM users’, and ‘Performance Monitoring Users’.
  • On the collector computer, put your ‘logicmonitor’ user in the local Administrators group (so the service can install and run properly).

Step 3:

  • On target computer, start ‘Computer Management’  by either running WMImgmt.msc or ‘Control Panel > Administrative Tools > Computer Management’
  • Right click on “WMI control” and click “Properties”
  • Click the “Security” tab
  • Click on “Root” then click the “Security” button
  • Add “logicmonitor” user to the list (or the group “Performance Monitoring users”)
  • Click the permissions checkboxes to allow “Execute methods”, and “Enable account” and “Remote enable”. Click ‘advanced’, then click the user, then click ‘edit’ and set to ‘this namespace and all subs’. Click OK all the way out.

If you’re monitoring Domain Controllers:

Control Panel > Administrative tools > Local Security Policy

  • Once inside, expand Security Settings > Local Policies > User Rights Assignment.
    Assign your new group at least the following rights:
  • Act as part of the operating system
  • Log on as a batch job
  • Log on as a service
  • Replace a process level token

If you want to monitor Windows Services or Processes, you need to give this ‘logicmonitor’ user access permissions to ‘see’ all the services

  1. Show the current permissions with this command:
    C:\>sc sdshow scmanager
  2. Get the SID (Security ID) for your user or group.
    I like using the dsquery command because it allows you to copy the long cryptic SID to your clipboard whereas the ADSIedit GUI doesn’t
    C:\>dsquery user -name non-admin | dsget user -sid
    dsget succeeded
  3. Set the new permissions (note: you must add 1 entry but also include existing permissions).
    Combine your SID with permission settings and your existing permissions shown above into a command to set permissions (as shown in example below)
    SC SDSET SCMANAGER D:(A;;CCLCRPRC;;;S-1-5-21-453481523-53804703-39165276-1624)(A;;CC;;;AU)(A;;CCLCRPRC;;;IU)(A;;CCLCRPRC;;;SU)(A;;CCLCRPWPRC;;;SY)(A;;KA;;;BA)(A;;CC;;;AC)S:(AU;FA;KA;;;WD)(AU;OIIOFA;GA;;;WD)
  4. Check your work by showing the new permissions with this command:
    C:\>sc sdshow scmanager
    D:(A;;CCLCRPRC;;;S-1-5-21-453481523-53804703-39165276-1624)(A;;CC;;;AU)(A;;CCLCRPRC;;;IU)(A;;CCLCRPRC;;;SU)(A;;CCLCRPWPRC;;;SY)(A;;KA;;;BA)(A;;CC;;;AC)S:(AU;FA;KA;;;WD)(AU;OIIOFA;GA;;;WD)
    Notice the output now includes the new entry containing your SID.
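For the curious, the SDDL editing in steps 3-4 is just string surgery: keep all the existing entries and add one allow ACE for your SID into the DACL, before the SACL (“S:”) section. An illustrative sketch with a hypothetical helper and a simplified SDDL string:

```python
def add_scm_ace(sddl, sid, rights="CCLCRPRC"):
    """Insert an allow ACE for `sid` into an SCM SDDL string, keeping
    the existing entries. Splits the string at the SACL marker 'S:' and
    appends the new ACE at the end of the DACL."""
    ace = "(A;;{};;;{})".format(rights, sid)
    if ace in sddl:
        return sddl            # already present; nothing to do
    dacl, sep, sacl = sddl.partition("S:")
    return dacl + ace + sep + sacl

before = "D:(A;;CC;;;AU)S:(AU;FA;KA;;;WD)"
print(add_scm_ace(before, "S-1-5-21-1-2-3-1624"))
```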

Hat tip to John Cardenas, and to this article I found



I suggest you test with WBEMtest on the collector computer.
Make sure you already did the EASY STUFF:
Firewalls are fully off on both computers, or a rule is open for WMI (remote management)
I suggest that you disable UAC on both computers (slider at bottom; changes require a reboot)

Hat tip to this article


Sometimes you need to know when a certain quantity of old files are sitting in a folder and shouldn’t be. For example, a daily process normally moves/transfers files to a different server but somehow it missed one.


  1. Download this DataSource file and add it into your LogicMonitor account (Settings > DataSource > Add > From file)
  2. Set these properties on the device so the datasource ‘applies to’ some device (for example)
    file_spec =  \\server1\share\*.txt
    minutes_age = 33
  3. If desired, set the threshold to alert you on a certain quantity of files. Do this by setting the datapoint within the datasource as shown in screenshot.


Screenshot below shows datasource and related graph:


Below shows where to set threshold for Qty of files:


For the technically curious, below is the PowerShell script I’m using:

$minutes_age = ##minutes_age##
$file_spec = "##file_spec##"   # from the file_spec property set in step 2
$file_list = Get-ChildItem -Path $file_spec | Where-Object {($_.LastWriteTime -lt (Get-Date).AddMinutes(-$minutes_age)) -and (-not $_.PSIsContainer)}

Write-Host "The file list is... $file_list" # just show the list

$Qty = @($file_list).Count # counts the qty in list

Write-host "minutes_age threshold: $minutes_age"
Write-host "qty of files older than threshold minutes: $Qty"


Sometimes people want to know if one of their co-workers or subordinates disabled a SQL job.


Download the datasource file and add it into to LogicMonitor (Settings > Datasources > Add > From file)

It should alert you if enabled is not 1. If you want to ignore some jobs, just click on that instance and turn off alerting or monitoring.

It’s set to use Windows Integrated Authentication, but if you want to use SQL authentication, you can change the datasource and specify a SQL username and password.


I did this with a datasource that uses JDBC collection type to run a SQL query on the database to list the jobs and show those job names as instances. The collection portion of datasource runs another SQL query for each of those jobs to retrieve the current setting for “enabled”.
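In other words, the design uses two queries against the standard SQL Server Agent table. A sketch of what they might look like (the datasource’s actual SQL may differ; parameterize in real code):

```python
# Discovery: one query lists the job names (these become the instances).
# Collection: one query per job reads its current 'enabled' flag.
# msdb.dbo.sysjobs is SQL Server Agent's job table, holding both columns.
DISCOVERY_SQL = "SELECT name FROM msdb.dbo.sysjobs"

def collection_sql(job_name):
    # shown as a plain string for clarity; use bind parameters in real code
    return "SELECT enabled FROM msdb.dbo.sysjobs WHERE name = '{}'".format(job_name)

print(collection_sql("nightly backup"))
```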




This will alert you via LogicMonitor when a MS-SQL ‘job’ fails


  1. Create a LogicMonitor “EventSource” named “SQL job failures” or similar. I suggest that you clone the default “SQL eventsource”. Set these settings:
    LogName   EQUAL    “Application”
    EventID    In    “1304|208”
    SourceName    Contains    “SQL”
  2. Use Microsoft’s SQL Management Studio to set each job on each of your SQL servers so that when a job fails, it will write a message to the Windows Application event log (screenshot below)
    Right click on job and click “Properties”
    Click on “Notifications” on left pane
    Click on “Write to Windows event logs” checkbox and click “When the job fails”
  3. Test. I suggest you create and run a bogus backup job to test. Look in the Windows Event viewer and you should notice an EventID 1304. 


Optional: In LogicMonitor you can create an Alert Rule that notifies a specific person or team. Do this by selecting the name of the EventSource you created above.

Below is screenshot showing how to create/clone a LogicMonitor EventSource


Below shows how to set “Write to event log” on each SQL Job


Below shows what the alert looks like in LogicMonitor.




Tells you the slowest queries (up to 10) and who ran them.

The PowerShell script runs 2 test SQL queries a few seconds apart. If both queries take longer than the threshold you specify, then it will run the stored procedure mentioned above to find details of the 10 longest running queries and the users who ran them. It writes this detailed info to the Windows event log (application log EventID 777 severity=ERROR) and therefore LogicMonitor can alert on it.
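Requiring both samples to exceed the threshold filters out one-off blips; the gating logic amounts to this (illustrative Python; the real script is PowerShell):

```python
def should_investigate(sample1_ms, sample2_ms, threshold_ms):
    """Only drill into the top-10 queries when BOTH timed test queries
    exceed the threshold, so a single momentary spike is ignored."""
    return sample1_ms > threshold_ms and sample2_ms > threshold_ms

print(should_investigate(900, 1200, 800))  # True
```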


  1. Download this Stored Procedure to your SQL server and open it in Microsoft’s SQL Management Studio and click “Execute!” ( this installs it) so it can be called in the future.
    Read and download here:
  2. Download the DataSource file  and add it to your LogicMonitor account (Settings > DataSources > Add > From file)
  3. Change the script to use your test query and YOUR database. They are at the beginning of the script (in ‘$test_query’ and ‘$DBname’ variable).
  4. Run this PowerShell command on your SQL server so it will accept new events from this source (substitute your server name for ‘localhost’ if you run it remotely):
    New-EventLog -ComputerName localhost -Source Top_SQL -LogName Application
  5. Set a property called ‘apply_Top_SQL’ on your device in LogicMonitor so this datasource is applied to your server. Or use the other typical methods.


One way to test this script is to add load by running the looping query shown below. Be careful not to do this on a production server. You can adjust how long it runs by changing the number on the ‘flag’ line. Then adjust the threshold and watch for an event log alert with EventID 777.

note: make sure to use YOUR values instead of database name of 'Acme-sales', table name of 'products' and index name of 'PK_products'

USE [Acme-sales]
declare @flag int
set @flag = 1
while(@flag < 99888)
begin
 alter index [PK_products] on [dbo].[products] rebuild
 set @flag = @flag + 1
end

The screenshot below shows the alert as it appears in LogicMonitor.

DISCLAIMER: Use at your own risk. LogicMonitor tech support is not responsible for tech support on this datasource.



WMI communication *starts* on TCP port 135 but, by default, continues on a port that’s randomly selected between 1024-65535. Fortunately, you can configure WMI to use a static port instead. The default static port is TCP 24158; if you can, I recommend just using 24158.
If you’re willing to go through yet another step, you can specify the port of your choosing (see steps below).
If you need to automate this, it would be a challenge, but one way is to add this to the “RunOnce” entry in the registry, a logon script, or the “post install” steps of a robust deployment tool.


  1. Type this command to do the real magic:
    winmgmt -standalonehost
  2. You must restart WMI for this to take effect. You can use the normal Services.msc applet in Windows or these commands:
    net stop winmgmt
    net start winmgmt
  3. If Windows is running a firewall, then you must add a “rule” that will allow this port. If you’re using the built-in Windows firewall then you can use the GUI or this command:
    netsh firewall add portopening TCP 24158 WMIFixedPort
If you want to specify a port different than the default port 24158:
  1. Start > Run > DCOMcnfg.exe
  2. Navigate deep into the tree to this path:
    Console Root > Component Services > Computers > My Computer > DCOM Config > Windows Management Instrumentation  (right click on this > Properties)
  3. Click on “EndPoints” tab
  4. Click “Use static endpoint”
  5. Type in the port number you desire  (make sure you choose one that’s not in use in YOUR network). I chose 134 to make it so I can open the range of 134-135 for my LAN.
The screenshot below should help.
I suggest you test by using the utility that’s built-into Windows called WBEMtest.exe. Details in this article ( link  )

March 2016

This process was not very intuitive and not documented (that I could find), so I wanted to help in case others have the same confusion. SSH is not enabled by factory default.

  1. Use your web browser to Login to the web interface as usual.
  2. Click ‘Security’ tab > Access tab > SSH > Host keys management.
  3. Click “Generate RSA keys” and “Generate DSA keys” then click ‘Apply’.
  4. Now  click the parent (Security > Access > SSH > SSH configuration)
  5. Click “SSH admin mode = enable”  and “SSH version 2” then click ‘Apply’
  6. Test using free ‘Putty’ app or similar SSH app.

See two screenshots below:

Step 1 (generate the keys)



Step 2: enable SSH



It shows each database (typically there are 1-20) as an instance and a graph shows the size/growth over time. This is helpful to some Email administrators because they want to balance the load evenly on each database.
Note: Don’t forget to apply it to your particular Exchange server. I think this requires Exchange 2010 or newer; it probably won’t work with the much older Exchange 2007.


Download the datasource file and add it into LogicMonitor (Settings > Datasources > Add > From file )


Sometimes a device or app sends email alerts, but you want to see and alert on them within LogicMonitor. So I built a script EventSource to retrieve emails from a specified mailbox and alert if a keyword appears in the subject (or body). It retrieves one email at a time from the mailbox using IMAP or secure IMAP. The alert looks like the screenshot below.


  1. Download the EventSource and add it into your LogicMonitor account (Settings > EventSources > Add > From file )
  2. Set a property on any device to specify the IMAP mail password. The EventSource will show in the device tree where this device is.
    imap.pass   =  ********   (whatever your password is)
  3. Set/change the email settings at the beginning of the Groovy script:
    imapUsername = the username of the mailbox you’re retrieving from (e.g. ‘’ )
    imapServer = the name of your imap server (e.g. ‘’ )
    imapPort =   usually ‘993’ for SSL secure or ‘143’ if not secure.
  4. Set/change the keyword in the ‘Filters’ section of the EventSource screen. My example uses ‘purple’ in the subject.
  5. Create a folder in your mailbox called “DONE” so the script can move the messages it processes into this folder. If you want some other folder name, just change the name in the script to match.
  6. Test by sending an email with your keyword and watch for the alert in the “Alerts” tab.


DISCLAIMER:  Like most of my creations, this is not officially supported by LogicMonitor tech support.


This datasource for LogicMonitor shows you which users have the biggest mailboxes and monitors how big they are growing/shrinking over time.

It’s a multi-instance datasource, meaning the “discovery” script finds the names of the biggest mailboxes (these are the ‘instances’) and the collection script retrieves the actual size of each of those mailboxes in MB. Since the top mailboxes probably don’t change very often, I set it to discover (find the user names) at a 1-day interval and retrieve the sizes at the longest interval available (1 hour). Of course, you can adjust these intervals.


  • Collector computer must have the “Exchange Management Tools” installed (free on the installation media/DVD). Note: the tools are 64-bit only, so Windows must also be 64-bit
  • Exchange 2010 or Exchange 2013
  • Both PowerShell 3.0 and 4.0 were tested and work fine. 5.0 will probably work. 2.0 will probably NOT work



  • Download this DataSource file and add it into your LogicMonitor account (Settings > DataSources > Add > From file )
  • “Apply” the datasource to your server in the usual LogicMonitor way
  • Test. Usually by looking at the ‘Raw Data’ tab

Sync-to-Chain diagram

Video demo:


  • Visual schedules are more intuitive and allow you to see that you have proper “coverage” the entire time
  • The Calendar can inform and remind your staff WHO is on-call/duty at any time.
  • Overcome the limitations of the “time-based chain” which can’t currently do “overnight” shifts like “6pm to 8am the next morning” and rotations that change each week or on a specified date.


It’s a datasource (PowerShell script) that reads your calendar every ~5 minutes and verifies or changes your LogicMonitor Escalation chain(s) to match.



  • Calendars supported: Google, Office 365, On-Premise Exchange 2010 or newer
  • Your escalation chains must have the word “schedule” in their names
  • Your Calendar must have names that match the names of your “Escalation chains”
  • Your Calendar event name must specify usernames in the event name
  • If you use stages, the event name must use the form “1=username 2=username”, with “(sms)” or “(voice)” after a name if it’s not the default of (email)
  • You must not have 2 or more Calendar events on the same schedule at the same time
  • LogicMonitor collector must be newer than 20.0 (so embedded PowerShell scripting will work)
  • Disclaimer: This is not officially supported by LogicMonitor tech support. Use at your own risk. If you need help, email me mike [at]
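As an illustration of the event-name convention above (e.g. “1=Bob (sms) 2=Cole”), the name could be parsed into stages roughly like this sketch (not the datasource’s actual PowerShell):

```python
import re

def parse_event_stages(summary):
    """Parse the calendar-event naming convention into stages.
    Method defaults to email when no '(sms)'/'(voice)' suffix is given;
    a bare name with no '1=' prefix is treated as stage 1."""
    stages = []
    for m in re.finditer(r"(?:(\d+)=)?(\w+)(?:\s*\((sms|voice|email)\))?", summary):
        num, user, method = m.groups()
        stages.append({"stage": int(num) if num else len(stages) + 1,
                       "user": user,
                       "method": method or "email"})
    return stages

print(parse_event_stages("1=Bob (sms) 2=Cole"))
```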


  • Sign up for a free Cronify account at and create your authentication “token”
  • Create a restricted user account in LogicMonitor for the API calls
  • Download the datasource and import it into your account
  • Set these properties on a device (preferably your collector) so the datasource is applied:
    api.pass = whatever
  • Set some settings at the top of PowerShell script in datasource
  • If you haven’t already, create your schedules and test.
  • Test. Look at graphs in datasource for errors/alerts. A status of 0 or 1 are ok, other numbers indicate problems as follows:
    0=no changes needed
    1=ok – changes were successful
    2=Calendar named (CCCC) has no matching ‘chain’ in LogicMonitor
    3=Calendar event named (EEEE) does not specify a username that exists in LogicMonitor
    4=Calendar named (CCCC) has 2 or more events but only 1 is allowed
    5=For some unknown reason, after the script changed the chain (HHHH) ; the chain doesn’t match the event.



Download this DataSource file and add it into your LogicMonitor account (Settings > DataSources > Add > From file )


If you need to debug, you can run the script in the interactive debug command window ( Settings > Collectors > Manage > Support > Run Debug Command)

Type “!posh” to open a window and paste in the contents of the script. You will need to change ##api.pass## to your real password. Then click “Submit”.

returns 1
[20160316 04:32:51]-REST status: 200, 5 total escalation chains found for suding account.
[20160316 04:32:51]-2 escalation chains found that contain 'schedule'.
[20160316 04:32:52]-DEBUG: 6 calendars found for client_id xxxxxxxxx[redacted]
[20160316 04:32:52]-2 calendars found that contain 'schedule'.
[20160316 04:32:53]-DEBUG: 70 events found for client_id xxxxxxxx[redacted]
[20160316 04:32:53]-Processing calendar 'Network on-call schedule'.
[20160316 04:32:53]-Processing calendar 'Database on-call schedule'.
[20160316 04:32:53]-66 on call events found in 2 calendars.
[20160316 04:32:53]-Current Date is Wednesday, March 16, 2016 11:32:53 AM UTC.
[20160316 04:32:53]-The current event is evt_external_56e4380773767685b80808d4 - start 2016-03-16T00:00:00Z UTC, end 2016-03-16T15:00:00Z UTC, summary: 1=Bob (sms) 2=Cole.
[20160316 04:32:53]-The current event is evt_external_56e6f90a73767685b80b3050 - start 2016-03-15T23:00:00Z UTC, end 2016-03-16T16:00:00Z UTC, summary: Al (voice).
[20160316 04:32:54]-DEBUG: Event has 2 stages, listing contacts:
[20160316 04:32:54]-DEBUG: Stage 1, addr Bob, method sms
[20160316 04:32:54]-DEBUG: Stage 2, addr Cole, method email
[20160316 04:32:54]-DEBUG: Escalation chain has 2 stages, listing contacts:
[20160316 04:32:54]-DEBUG: Stage 1, addr Don, method email
[20160316 04:32:54]-DEBUG: Stage 2, addr Ed, method sms
[20160316 04:32:54]-Updating LM escalation chain.
[20160316 04:32:56]-Update successful, REST status 200.
[20160316 04:32:56]-DEBUG: Event has 1 stages, listing contacts:
[20160316 04:32:56]-DEBUG: Stage 1, addr Al, method voice
[20160316 04:32:56]-DEBUG: Escalation chain has 1 stages, listing contacts:
[20160316 04:32:56]-DEBUG: Stage 1, addr Al, method voice
[20160316 04:32:56]-No update to LM escalation chain required.
[20160316 04:32:56]-EXIT Status/Error 1: Changes applied successfully

I made a datasource to retrieve alert emails with certain key words in it and then it will run a script specified in the body of the email. To prevent security problems, it only processes emails that match your specification.

This video shows an example and explains briefly how it works: ( link to video )


This assumes you have some experience with LogicMonitor and are familiar with datasources and how to apply them etc.

  1. Download this DataSource file and add it into your LogicMonitor account (Settings > DataSources > Add > From file )
  2. Since PowerShell doesn’t natively support IMAP retrieval, you must also put the IMAPX.DLL and 2 other files in \Program Files\LogicMonitor\agent\bin. You might have to right click on each of these 3 files and click “unblock” since Windows tags any file downloaded from internet with “blocked”.
  3. Set the settings near the top of the script for your mailbox and preferences for key words and script folder. Set logging=false after you verify it works.
  4. Set properties on your collector for  “imap.pass”, “mail_username” and “mail_server” These settings are used in the script and used to apply the datasource to a device. I recommend you apply this datasource to the collector since that’s where it runs anyways.
  5. Setup a separate dedicated mailbox (e.g. and add a folder called ARCHIVE because the script needs to move the emails to this folder after they are ‘processed’.
  6. Add a “custom email delivery” under Settings > Integration. Set the destination to the email address mentioned above. The subject should contain “run this script” and the body should contain “run this script: myscript.ps1” (where myscript.ps1 is the name of your ‘fix-it’ script). You can use different words as long as they match your settings in the script.
  7. Use an “Alert Rule” in LogicMonitor that sends an alert email to the address above.
  8. Put your “fix-it” script on your collector in the folder that matches your setting in the script. In my example, the default is c:\scripts.
  9. Test – test – test (by triggering the alert and making sure the script datasource works). The script can write to a log file so you can see details.
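The keyword-matching and dispatch logic described in the steps above can be sketched roughly as follows. This is a simplified illustration, not the datasource's actual code: the variable names, trigger phrase, and folder path are assumptions you would adjust to match your own settings in the script.

```powershell
# Hypothetical sketch of the keyword check and script dispatch
# ($triggerPhrase and $scriptFolder are assumed names, not the
# datasource's real variables).
$triggerPhrase = 'run this script'   # must match your custom email delivery
$scriptFolder  = 'c:\scripts'        # folder on the collector holding fix-it scripts

function Invoke-FixItFromEmail {
    param([string]$Subject, [string]$Body)

    # Only act on emails whose subject contains the agreed key words
    if ($Subject -notmatch [regex]::Escape($triggerPhrase)) { return }

    # Pull the script name out of the body, e.g. "run this script: myscript.ps1"
    if ($Body -match "$([regex]::Escape($triggerPhrase)):\s*(\S+\.ps1)") {
        $scriptPath = Join-Path $scriptFolder $Matches[1]

        # Refuse anything that isn't a known script in the designated folder
        if (Test-Path $scriptPath) {
            & $scriptPath
        }
    }
}
```

Requiring both the trigger phrase and an existing file in a fixed folder is what keeps a stray or malicious email from running arbitrary commands.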


imapx.dll and related


Currently, this is not officially a feature of LogicMonitor so you probably cannot get help from tech support.


A Windows computer, especially a Terminal Server (or similar XenApp server), can become slow because a user is running a program that hogs all the CPU or memory. Sometimes it’s caused by part of the operating system. The problem often comes and goes within a few minutes, so you have to log in quickly and use Task Manager or a similar tool to see the hogging process(es).

Solution: The first step of the solution is to find out which process(es) are hogging CPU or memory. This script and datasource do just that.

“CPU_Top_5” is a PowerShell script in a LogicMonitor datasource. It would typically be set to run every 2-10 minutes.



Use at your own risk. Not officially supported by LogicMonitor tech support.



Top-5 checks CPU or memory usage on a local or remote computer. If usage is above the specified threshold, it gets the top 5 processes and optionally logs them in the event log of the target computer.
target_computer:     computer name of the local or remote computer
type_of_check:       CPU or memory
threshold:           percent of CPU or percent of memory (without the % sign); details are gathered only when usage exceeds this threshold
severity (optional): error or warning in the System event log (EventID is 888)

LogicMonitor by default collects the system event log and alerts on severity level of error and above.
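Because LogicMonitor alerts on error-level System events by default, the script only has to write a well-formed event. A minimal sketch of that step is below; the source name "CPU_Top_5" is an assumption (the real script may use a different source), and a custom source must be registered once, with administrator rights, before the first write.

```powershell
# Register the event source once (assumed name "CPU_Top_5"; requires admin).
if (-not [System.Diagnostics.EventLog]::SourceExists('CPU_Top_5')) {
    New-EventLog -LogName System -Source 'CPU_Top_5'
}

# Write the threshold-breach event that LogicMonitor's default
# Windows event log collection will pick up and alert on.
Write-EventLog -LogName System -Source 'CPU_Top_5' `
    -EntryType Error -EventId 888 `
    -Message "CPU above threshold. Top processes:`n$topFiveText"
```

Using EntryType Warning instead of Error would log the detail without triggering LogicMonitor's default error-level alerting.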

The datasource includes a graph that shows CPU percent, the threshold, and error detection.

Compatibility:  Tested on Windows server 2008 and newer. Version 3 or newer of PowerShell is required (free).

Requirements:  You must run this as a user that’s an administrator on both the target computer and the collector computer. Usually this isn’t a problem because the LogicMonitor collector service is set to run with a “service account” that is a user with these permissions.
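A quick way to confirm the permission and remoting prerequisites before applying the datasource is a one-line remoting test from the collector (TARGETPC is a placeholder for your target computer name):

```powershell
# If this returns the remote hostname, the collector's service account
# can run commands on the target and the datasource should work.
Invoke-Command -ComputerName 'TARGETPC' -ScriptBlock { hostname }
```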


OUTPUT / message in log:

Name           PID         Owner         CPU %
-------       -------     ----------     ------
cpustres        4567     domain\mike      94
excel           6543     domain\bob        3
svchost         876      SYSTEM            1
wmiprvse        9876     SYSTEM            0
explorer        6545     domain\bob        0

Name        Memory (MB)
-------    -------------
outlook     566
explorer     64
svchost      22
excel        11
myapp         8
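A memory table like the one above can be produced with plain PowerShell; this is one way to do it, not necessarily the datasource's exact code:

```powershell
# Top 5 processes by working set, rounded to whole megabytes.
Get-Process |
    Sort-Object WorkingSet64 -Descending |
    Select-Object -First 5 Name,
        @{ Name = 'Memory (MB)'; Expression = { [math]::Round($_.WorkingSet64 / 1MB) } } |
    Format-Table -AutoSize
```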


The challenge I found in this project is that there are a few utilities, scripts, and methods to do this, but most show the output as “CPU time”, which means you have to take 2 samples a few seconds apart, subtract, and calculate the percentage. One utility I found, PSLIST.exe by Microsoft Sysinternals, can do this, but it displays a lot more information than I needed, has no threshold capability or write-to-event-log capability, and for some reason wouldn’t exit automatically as documented when I used the -s parameter (task manager mode).
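The two-sample approach described above can be sketched as follows. This is a simplified illustration of the technique, not the datasource's actual script; the 5-second interval is an arbitrary choice.

```powershell
# Sample cumulative CPU seconds twice, subtract, and convert to a
# percentage of total capacity (interval x number of logical cores).
$intervalSec = 5
$cores = [Environment]::ProcessorCount

$first = Get-Process | Select-Object Id, Name, CPU
Start-Sleep -Seconds $intervalSec
$second = Get-Process | Select-Object Id, Name, CPU

$results = foreach ($p in $second) {
    $before = $first | Where-Object { $_.Id -eq $p.Id }
    if ($before -and $null -ne $p.CPU -and $null -ne $before.CPU) {
        [pscustomobject]@{
            Name    = $p.Name
            PID     = $p.Id
            'CPU %' = [math]::Round(100 * ($p.CPU - $before.CPU) / ($intervalSec * $cores))
        }
    }
}
$results | Sort-Object 'CPU %' -Descending | Select-Object -First 5
```

Dividing by the core count is what keeps a single-threaded hog from reporting more than 100% on a multi-core machine.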


  1. Download this datasource file ( link ) and add it into your LogicMonitor account (Settings > DataSources > Add > From file )
  2. Apply it to device(s). I suggest creating a property on the device called “apply_Top_5”. You must type something in the ‘value’ field.
  3. Set the ‘threshold’ in the script or as properties on the device. The default is 90%.
  4. If you haven’t already, make sure you allow unsigned scripts to run using command “Set-ExecutionPolicy unrestricted” or similar.
  5. You might have to enable the “remoting” feature of Windows using this command: Enable-PSRemoting -Force
  6. Test by using the ‘Raw Data’ tab. I used the free CPUSTRES utility to simulate high CPU.
    If you think you have errors, I suggest copying the PowerShell script to your collector, typing in the computer name, and testing it right at the PowerShell prompt or in the ISE (PowerShell editor).