Thursday 25 October 2012

Finding CA eHealth Missing Elements

I had an issue recently where we were unable to identify elements in our eHealth install that are still being polled but are no longer valid. eHealth doesn't flag in the database whether elements are missing or not, and in some cases sub-elements can still be polling data even though they are invalid. So you can have elements that still have data but are no longer found on the target devices.

For example, if someone changes the volumes on a Solaris box with a SystemEDGE agent, the old disk names are still discovered and polled (and if the SNMP index is the same as a new disk's, it will still have a value). Say we used to have a diskX with index 12 and now have diskY with index 12: you get two disk elements with the same metrics being read per poll, but one is valid and one is 'missing'.

I wrote this script this week to identify all the 'missing' elements in eHealth and export them to a single file. Then you can investigate and clean them up with nhDeleteElements or whatever.

The script scans all the scheduled autodiscovery log files looking for the 'Missing Elements' section and spits out the lines containing the element names.

#!/bin/ksh
# /appl/ehealth/missingelements.ksh
# THIS SCRIPT WILL GENERATE A FILE CONTAINING ALL THE MISSING EHEALTH ELEMENTS FROM SCHEDULED AUTO DISCOVERY LOGS
host=`hostname`
# this is the final list of all missing elements
masterlistfile="/var/tmp/ehealthreport/MissingElementList.$host"

# this is the temp directory for working files
ehtemp="/appl/ehealth/tmp"
temp1="$ehtemp/missingelements1"
temp2="$ehtemp/missingelements2"
temp3="$ehtemp/missingelements3"

# configure this to the directory where your ehealth logs are
logpath="/appl/ehealth/log"

# days of log files to search
goback=7

################# MAIN STARTS HERE #################
rm -f $masterlistfile
rm -f $temp1 $temp2 $temp3

files=`find $logpath/discoverScheduled* -mtime -$goback`

for file in $files; do
    if grep "Missing Elements" $file >/dev/null; then
        echo $file
        # line number where the missing elements section starts
        missingstart=`grep -n "Missing Elements" $file | head -1 | sed 's/:.*//'`
        missingstart=$((missingstart + 1))
        echo "from $missingstart"
        tail +$missingstart $file > $temp1

        # the section ends where the group assignment details begin
        missingend=`grep -n "GROUP ASSIGNMENT DETAILS" $temp1 | head -1 | sed 's/:.*//'`
        missingend=$((missingend - 1))
        echo "to $missingend"

        head -$missingend $temp1 > $temp2
        # element lines contain a parenthesised element type
        grep '(' $temp2 > $temp3

        cat $temp3 >> $masterlistfile

    else
        echo "$file has no missing elements!"
    fi

done

echo " MISSING ELEMENT REPORT WRITTEN TO: $masterlistfile"
#cat $masterlistfile
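As an aside, the grep/tail/head dance above could be collapsed into a single awk pass per log file. This is just a sketch under the same assumptions as the script (the 'Missing Elements' and 'GROUP ASSIGNMENT DETAILS' marker strings are taken from the script above, and the extract_missing function name is mine; check the markers against your own log format):

```shell
# print only the lines between the "Missing Elements" header and the
# "GROUP ASSIGNMENT DETAILS" header, keeping just the element lines
# (the ones containing a parenthesised element type)
extract_missing() {
    awk '/GROUP ASSIGNMENT DETAILS/ {insec=0}
         insec && /\(/ {print}
         /Missing Elements/ {insec=1}' "$1"
}

# usage: append each log file's section to the master list, e.g.
# extract_missing $file >> $masterlistfile
```

This avoids the three temp files entirely and is less fragile if the sed strips ever stop matching the log wording exactly.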

Thursday 19 July 2012

Automatic CA eHealth Script to Delete Elements with old Data

I wrote this ksh script this week so we can have a scheduled job to purge elements that are no longer generating any database data, for CA eHealth 6.3.

For our site I found that nhListElements -nodbdata didn't actually return all of the elements we wanted to delete that were not polling properly.

We added this to the crontab for the ehealth user to automate management. It writes a day-of-month file listing the elements deleted that day.


# PurgeOldElements
# 1 6 * * * /appl/ehealth/telecom/PurgeOldElements

# ./"PurgeOldElements"
# nhElementStatus -noDbDataFor 129600 >/var/tmp/ehealth.report.nodbdatafor.129600.20.txt
# NUMBER OF MATCHING ELEMENTS=
# 153
# nhDeleteElements -inFile /var/tmp/ehealth.elements.nodbdatafor.129600.20.txt
# ./"PurgeOldElements"
# nhElementStatus -noDbDataFor 129600 >/var/tmp/ehealth.report.nodbdatafor.129600.20.txt
# NUMBER OF MATCHING ELEMENTS=
# 0
# no elements to delete

# delete elements with no dbdata for 90 days (60*24*90)=129600
# get a list in a file
daynumber=`date +"%d"`
threshold=129600
report=/var/tmp/ehealth.report.nodbdatafor.$threshold.$daynumber.txt
elementlist=/var/tmp/ehealth.elements.nodbdatafor.$threshold.$daynumber.txt
rm -f $report $elementlist $report.2

echo "nhElementStatus -noDbDataFor $threshold >$report"
nhElementStatus -noDbDataFor $threshold >$report

# strip off the headers
tail +5 $report > $report.2

echo "NUMBER OF MATCHING ELEMENTS="
grep -c "Was not polled in $threshold" $report.2
if [ $? -ne 0 ]; then
    echo "no elements to delete"
else
    # get the first field (element name) from each line
    awk '{print $1}' $report.2 >> $elementlist

    cd $NH_HOME
    echo "nhDeleteElements -inFile $elementlist"
    nhDeleteElements -inFile $elementlist
fi
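One thing to watch: the extraction step takes the first field of every line in the stripped report, not just the 'Was not polled' ones. If your nhElementStatus output ever contains other line types after the headers, a filtered awk like this sketch is safer (the 'Was not polled in' text is taken from the grep in the script, and extract_stale is a name I made up; verify the wording against your actual report):

```shell
# extract only the element names (first field) from lines that report
# the element was not polled within the threshold period
threshold=129600
extract_stale() {
    awk -v t="$threshold" '$0 ~ "Was not polled in " t {print $1}' "$1"
}

# usage:
# extract_stale $report.2 > $elementlist
```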


Monday 28 May 2012

Line6 Pocket Pod Quick Reference Chart

I love my Line6 Pocket Pod - but it's got a lot of functionality in such a tiny device... so I made this quick reference chart - essentially a 1-page printable A3 summary of the reference guide.


It shows the Line6 Pocket Pod default user presets by title and bank, the button controls, and a quick description of all the amp models and what historical amps they are based on.

Wednesday 16 May 2012

CA eHealth Oracle Queries for Systemedge Agents

At our site we wanted to export some CA eHealth data daily to use in another system. These queries go directly to Oracle to export data, using the regularised tables in the database for SystemEDGE agents.

Here are some eHealth SQL queries that use the aggregate hourly sample tables.

These queries use &1 and &2 as variables for the epoch timestamps bounding the data range, but you can hardcode values.
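For a scheduled daily export you can compute the epoch bounds in the shell and pass them in as &1 and &2. A sketch (the sqlplus login and query filename are placeholders, and date +%s is a GNU date feature - older Solaris date may not support it):

```shell
# export the last 24 hours of hourly samples
end=`date +%s`
start=$((end - 86400))   # 24 hours = 86400 seconds

echo "exporting samples between $start and $end"
# hypothetical invocation - user/password and the .sql file are placeholders:
# sqlplus -s user/password @sysedge_hourly.sql $start $end
```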

SystemEDGE agent host key stats - NHV_ST_H_SYSEDGEUNIXSYSTEM 

COLUMN AVGCPUUTILIZATION FORMAT 999.9;
COLUMN CPUSYSTEMUTILIZATION FORMAT 999.9;
COLUMN CPUUSERUTILIZATION FORMAT 999.9;
COLUMN CPUWAITUTILIZATION FORMAT 999.9;
COLUMN LOADAVERAGE FORMAT 9999.99;
COLUMN PHYSICALMEMORYUTILIZATION FORMAT 999.9;
COLUMN TOTALCPUUTILIZATION FORMAT 9999.9;
COLUMN VIRTUALMEMORYUTILIZATION FORMAT 999.9;
COLUMN PAGESCANRATE FORMAT 999999999;
COLUMN PROCESSES FORMAT 99999;
COLUMN RUNQUEUELENGTH FORMAT 99999;
COLUMN ACTIVEVIRTMEMORY_MB FORMAT 999999999;
COLUMN PHYSICALMEMORYFREE_MB FORMAT 999999999;
COLUMN PHYSICALMEMORYUSED_MB FORMAT 9999999999;
COLUMN TOTALBYTES_MB FORMAT 999999999;
COLUMN TOTALPHYSICALMEMORY_MB FORMAT 999999999;
COLUMN TOTALVIRTUALMEMORY_MB FORMAT 999999999;
COLUMN VIRTUALMEMORYFREE_MB FORMAT 999999999;
COLUMN VIRTUALMEMORYUSED_MB FORMAT 999999999;
COLUMN CPUIMBALANCE FORMAT 999999999.9;
COLUMN NUMSWITCHES FORMAT 999999999;

Select el.NAME, data.* from
(select distinct ELEMENT_ID, replace(NAME,'-1691-SH','') AS NAME
from NH_ELEMENT
where ELEMENT_TYPE = 104108) el, 
(select ELEMENT_ID,SAMPLE_TIMESTAMP,ACTIVEVIRTMEMORY/1048576 AS ACTIVEVIRTMEMORY_MB,AVGCPUUTILIZATION,CPUIMBALANCE,CPUSYSTEMUTILIZATION,
CPUUSERUTILIZATION,CPUWAITUTILIZATION,LOADAVERAGE,NUMSWITCHES,PAGESCANRATE,
PHYSICALMEMORYFREE/1048576 AS PHYSICALMEMORYFREE_MB,PHYSICALMEMORYUSED/1048576 AS PHYSICALMEMORYUSED_MB,PHYSICALMEMORYUTILIZATION,PROCESSES,
RUNQUEUELENGTH,TOTALBYTES/1048576 AS TOTALBYTES_MB,TOTALCPUUTILIZATION,TOTALPHYSICALMEMORY/1048576 AS TOTALPHYSICALMEMORY_MB,
TOTALVIRTUALMEMORY/1048576 AS TOTALVIRTUALMEMORY_MB,
VIRTUALMEMORYFREE/1048576 AS VIRTUALMEMORYFREE_MB,VIRTUALMEMORYUSED/1048576 AS VIRTUALMEMORYUSED_MB,VIRTUALMEMORYUTILIZATION
from NHV_ST_H_SYSEDGEUNIXSYSTEM 
--where SAMPLE_TIME > 1333050481 and SAMPLE_TIME < 1333057681) data
where SAMPLE_TIME > '&1' and SAMPLE_TIME < '&2') data
where 
el.ELEMENT_ID = data.ELEMENT_ID;

Disk partition for Systemedge - NHV_ST_H_DISKPARTITION
COLUMN INODEUTILIZATION FORMAT 999.9;
COLUMN PARTITIONUTILIZATION FORMAT 999.9;
COLUMN PCTPARTITIONFREE FORMAT 999.9;
COLUMN PARTITIONSTORAGECAPACITY_MB FORMAT 999999999;
COLUMN PARTITIONSTORAGEFREE_MB FORMAT 999999999;
COLUMN PARTITIONSTORAGEUSED_MB FORMAT 999999999;

Select el.NAME, data.* from
(select distinct ELEMENT_ID, replace(NAME,'-1691-SH','') AS NAME
from NH_ELEMENT
where ELEMENT_TYPE = 104045 or ELEMENT_TYPE =104039) el, 
(SELECT SAMPLE_TIME,ELEMENT_ID,SAMPLE_TIMESTAMP,INODEUTILIZATION,
PARTITIONSTORAGECAPACITY/1048576 AS PARTITIONSTORAGECAPACITY_MB,
PARTITIONSTORAGEFREE/1048576 AS PARTITIONSTORAGEFREE_MB,
PARTITIONSTORAGEUSED/1048576 AS PARTITIONSTORAGEUSED_MB,
PARTITIONUTILIZATION,PCTPARTITIONFREE
From NHV_ST_H_DISKPARTITION
where -- ELEMENT_ID>=1000013 and ELEMENT_ID <=1000020
-- SAMPLE_TIME > 1333050481 and SAMPLE_TIME < 1333057681
SAMPLE_TIME > '&1' and SAMPLE_TIME < '&2') data
where 
el.ELEMENT_ID = data.ELEMENT_ID;


SCOM Operations Manager Alarm Export Report

This simple PowerShell script will export SCOM / Operations Manager 2007 alarms to a CSV file using the command shell.

The second script is particularly useful for answering one of the long-unanswerable questions of Operations Manager: how to tell whether alarms are generated by a monitor or a rule. This report answers it with the column 'IsMonitorAlert'. Generally speaking:

  • IsMonitorAlert=true means the alarm comes from a monitor and will resolve itself when the situation is fixed
  • IsMonitorAlert=false means the alarm comes from a rule and won't resolve itself when the situation is fixed.
Rule-generated alarms can create problems when not resolved by operators, as repeat alarms are suppressed as duplicates.

One common example is the 'service terminated unexpectedly' rule in the Windows Server management pack.

We can also categorize the alarms and tell which management pack they came from using the column MonitoringObjectFullName.

# this script will make some csv reports of System Center Operations Manager Alarms
#**********setup root ms connection assume local ***************

$cs = get-wmiobject win32_computersystem
$rootMS = $cs.Name + "." + $cs.Domain
$rootMS = $rootMS.ToUpper() 

#*****************************************************************
#Connect to the Mgmt Group (SDK)

if (Get-PSSnapin | select-string -pattern "OperationsManager")
{echo "OPs mgr snap in is installed already";}
else
{Add-PSSnapin "Microsoft.EnterpriseManagement.OperationsManager.Client" -ErrorVariable errSnapin;}

Set-Location "OperationsManagerMonitoring::" -ErrorVariable errSnapin;
new-managementGroupConnection -ConnectionString:$rootMS -ErrorVariable errSnapin;
set-location $rootMS -ErrorVariable errSnapin;

############ ALL including resolved Alerts ########################
$title="ALLAlertsByMonitoringObjectFullName"
$Body =get-alert | select Id,Name,Priority,Severity,TimeRaised,ResolutionState,TimeResolved,ResolvedBy,RepeatCount,LastModified,MonitoringObjectDisplayName,MonitoringObjectPath,Category,Description,MonitoringObjectId,MonitoringClassId,MonitoringObjectFullName,IsMonitorAlert | sort-object MonitoringObjectFullName 
$Body | Export-Csv c:\temp\$title.csv

############ OPEN Alerts ########################
$title="OpenAlertsByUTCDateModDesc"
$Body =get-alert -criteria 'ResolutionState = "0"' | select Id,Name,Priority,Severity,TimeRaised,RepeatCount,LastModified,MonitoringObjectDisplayName,MonitoringObjectPath,Category,Description,MonitoringObjectId,MonitoringClassId,IsMonitorAlert | sort-object LastModified -descending
$Body | Export-Csv c:\temp\$title.csv


Thursday 29 March 2012

How to get the current epoch time in ...

This site is awesome: when you need to know how to get an epoch from a date, or vice versa, on many different platforms (SQL, Unix, VBScript, ASP, etc...)

http://www.epochconverter.com/
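For the Unix/ksh case used in the scripts on this blog, the common one-liners look like this (date +%s is a GNU date feature, and older Solaris date may not support it, in which case the perl fallback works if perl is installed):

```shell
# current epoch time in seconds
now=`date +%s`
echo $now

# fallback for systems where date +%s isn't supported
now=`perl -e 'print time'`
echo $now
```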

Sunday 25 March 2012

exporting attachments from outlook pst file

I made the final move of my legacy local email into 'the cloud' this week, and needed a solution to export the years of pictures and files out of my wife's 700 MB Outlook pst file that we never got round to saving outside Outlook.

It's funny how all those quick email checks over the years, thinking 'I'll save those pictures into My Pictures later', add up into a massive job.

This plugin for Outlook, "Outlook Attachment Remover Add-in", is magic and did the job perfectly, so I'd highly recommend it to anyone faced with the same dilemma (or even as a way of keeping the pst file size down, which is what it's actually intended for). It ground away for about 10 minutes and now I have several thousand files in one nice folder on my hard disk. AWESOME!

What is really cool is it replaces your attachment with a shortcut in the pst to where the file is saved on the computer so you can keep the pst file size down.

Personally I've always thought pst files were devil spawn (I used to be an IT system admin and they were a scourge... we had 50 MB mailbox allowances back then and people would build up GB-sized pst files and store them on our shared network drives). Now we have the 'cloud' and what was once an unthinkable amount of free online storage...


Tuesday 28 February 2012

loving xkpasswd

It seems a sad reality of modern life that you have a million passwords to manage, often with different rules.

I'm loving www.xkpasswd.net for generating memorable, secure passwords.