Some SNMP Basic Grok’age

I’m digging into SNMP (Simple Network Management Protocol) to see what it offers in terms of application / server monitoring.

The specific scenario I have in mind is the monitoring of queues in MSMQ for our NServiceBus Windows services. We need email alerts to be triggered if a message is moved to the error queue or if the number of items in a queue exceeds a certain threshold.

I’m working on a Windows 7 x64 laptop, so the first thing I had to do was install the SNMP service (under Control Panel > Programs and Features).
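For reference, the same feature can also be switched on from an elevated command prompt with DISM. The feature name below is how it shows up on my machine; if yours differs, check the output of the first command:

dism /online /get-features | findstr /i snmp
dism /online /enable-feature /featurename:SNMP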

Once that was installed I had to restart the Services applet, right-click the SNMP Service > Properties > Security, and create a “public” community.
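If you would rather script this step, the GUI is (as far as I can tell) just writing a value under the SNMP service’s ValidCommunities registry key, where a DWORD of 4 means READ ONLY; restart the service afterwards for it to take effect:

reg add "HKLM\SYSTEM\CurrentControlSet\Services\SNMP\Parameters\ValidCommunities" /v public /t REG_DWORD /d 4
net stop SNMP && net start SNMP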

After this I had to download a command-line tool, Net-SNMP, for issuing SNMP queries to test all was well.
I opted for the basic installation and unchecked all the other options.
Open a new command prompt so that it picks up the PATH variable added during installation, then run the following command:

snmptranslate -IR -Td IF-MIB::linkDown

See the README.txt for the correct response, but if you see no errors you are in good shape.

Next, download SnmpTools and follow the instructions. I opted to install the 32-bit version using regagentWow6432.reg.
Restart the Microsoft SNMP Service, then run the command:
snmpget -v1 -c public localhost 1.3.6.1.4.1.15
If all is well you should see:
SNMPv2-SMI::enterprises.15 = STRING: “snmptools by erwan.l@free.fr”
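You can also walk the whole snmptools subtree to see every OID the agent is currently exposing under that enterprise number:

snmpwalk -v1 -c public localhost 1.3.6.1.4.1.15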

That’s all for now but I will update this post again.

As Promised Part 2

I have made a little more progress accessing the MSMQ performance counters.

I have added the following two lines to the Counters.ini file and restarted the SNMP Service:

[1.3.6.1.4.1.15.5]

counter=MSMQ Queue\Messages in Queue\acerbotic\private$\distributordatabus
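For the error-queue scenario I mentioned at the start, I would expect a second entry along the same lines to expose that queue’s depth on another OID (the queue name here is hypothetical; use whatever your NServiceBus error queue is actually called):

[1.3.6.1.4.1.15.6]

counter=MSMQ Queue\Messages in Queue\acerbotic\private$\error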

Now when I issue the following snmpget query from the command line:

snmpget -v1 -c public localhost 1.3.6.1.4.1.15.5

I see the response:

SNMPv2-SMI::enterprises.15.5 = INTEGER: 2

Thus indicating there are two messages in the private$\distributordatabus queue on my laptop.
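As a sanity check, the same value can be read straight from the performance counter with typeperf and compared with what snmpget returns. This is the standard PerfMon form of the counter path I put in Counters.ini (the machine and queue names are specific to my laptop):

typeperf "\MSMQ Queue(acerbotic\private$\distributordatabus)\Messages in Queue" -sc 1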

The End Game

At the time of writing it seems that in order to allow our SNMP monitoring software to access this OID I need to supply it with a MIB file.

The story continues…

I have found this page that talks about using a MIB editor to create such a file.

Conclusion

SNMP is a great way to monitor your application, but it involves a good deal of upfront investment if you don’t already have a developer on your team who is proficient in it. We don’t have such a developer, so we have decided to use the .NET Framework to expose performance counters and use WMI to monitor those counters. If the thresholds we set are exceeded we will get alerts.

My first UK SQL Server User Group meeting

Yesterday I attended my first UK SQL Server User Group meeting.
Event web page

Topics

– SQL Server I/O – Tony Rogerson
– Understanding how SQL Server performs joins with Nested Loops, Merge & Hash – Ami Levin
– DbSophic QURE – Workload Tuning – Ami Levin
– SQL Server StreamInsight – Allan Mitchell – twitter: @allanSQLIS

I was most interested in the first topic but missed the talk, as I did not arrive until 18:30 and the evening started at 17:30. I don’t finish work until 18:00 and had to get across town. Of the three talks that I did catch, the content and delivery were of a very high standard.

Understanding how SQL Server performs joins with Nested Loops, Merge & Hash – Ami Levin

If you write a lot of SQL queries or are often called upon to tune SQL Server, then you should learn how and why the query optimizer selects from three physical operators to perform joins, and how it can sometimes be caught out. This talk was given by Ami Levin, who is a SQL Server MVP and works for DBSophic. It was really interesting and the first time I have had any real understanding of what these terms that appear in the execution plan mean. One really interesting thing that Ami pointed out is that sometimes the number of rows SQL Server estimates and the actual number of rows returned will differ by a large amount; this can result in a sub-optimal operator being selected to perform the join. If, when tuning, you spot this sort of discrepancy, further investigation is warranted. The following is a scenario Ami gave in which this can occur:

SELECT *
FROM SalesLT.SalesOrderHeader SOH
INNER JOIN SalesLT.SalesOrderDetail SOD ON SOH.SalesOrderID = SOD.SalesOrderID
INNER JOIN SalesLT.Product P ON P.ProductID = SOD.ProductID
WHERE SOH.TotalDue > 1500
AND SOD.UnitPrice > 300
AND P.StandardCost > 300

So in the SQL above the optimizer will use statistics to estimate roughly how many rows the query will return, by looking at each table in turn and estimating how many rows each WHERE clause will return. In this case, say each WHERE clause returns roughly 1% of the rows in a table containing 100k rows: the optimizer estimates 1% of 100,000 = 1,000 rows, then 1% of 1,000 = 10, then 1% of 10 < 1. So the optimizer thinks one row or fewer will be in the result set. Unfortunately, what the optimizer cannot know is that the WHERE clauses are correlated in the data: in this case, orders with a high total due tend to contain lines with a higher unit price and products with a higher standard cost, so the actual number of rows returned will be nearer to 1,000. This is why the optimizer can sometimes choose a sub-optimal physical operator to perform a join.
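One way to spot this kind of discrepancy for yourself (my own quick sketch, not something from the talk) is to run the query with SET STATISTICS PROFILE ON, which adds Rows (actual) and EstimateRows columns to the plan output so the two can be compared for each operator:

SET STATISTICS PROFILE ON;
SELECT *
FROM SalesLT.SalesOrderHeader SOH
INNER JOIN SalesLT.SalesOrderDetail SOD ON SOH.SalesOrderID = SOD.SalesOrderID
INNER JOIN SalesLT.Product P ON P.ProductID = SOD.ProductID
WHERE SOH.TotalDue > 1500
AND SOD.UnitPrice > 300
AND P.StandardCost > 300;
SET STATISTICS PROFILE OFF;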

If you think the wrong operator is being chosen you can use the OPTION keyword; this article discusses it in more detail:
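In the meantime, a quick sketch of my own (not taken from that article): a join hint supplied via OPTION applies to every join in the statement, so it is a fairly blunt instrument and worth using only once you are sure the optimizer’s choice really is the problem.

SELECT *
FROM SalesLT.SalesOrderHeader SOH
INNER JOIN SalesLT.SalesOrderDetail SOD ON SOH.SalesOrderID = SOD.SalesOrderID
INNER JOIN SalesLT.Product P ON P.ProductID = SOD.ProductID
WHERE SOH.TotalDue > 1500
AND SOD.UnitPrice > 300
AND P.StandardCost > 300
OPTION (HASH JOIN);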