Export hostgroups to XML via API with Python

According to the Zabbix docs, the only way to export hostgroups is through the API.  My exposure to the Zabbix API is limited, but I knew there were coding giants out there whose shoulders I could stand on.

I would like to give credit to someone directly, but the code I found had no author listed.  Here’s the link to the original on the zabbix.org wiki site for reference.


The code, as is, works great for exporting templates, but I needed to make some changes to get it to export hostgroups.  Luckily, the API reference pages on the Zabbix website are very helpful.

I’ll leave it up to you to diff the two versions to see exactly what changed, but in summary: modify a couple of parameters and a couple of object properties, and the script can be used to export many other things.

See the API reference pages for the hostgroup method details.  https://www.zabbix.com/documentation/2.4/manual/api/reference/hostgroup/get
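The differences between object types boil down to three strings: the *.get method name, the id property in its result, and the key under "options" in the configuration.export call. Here is a rough sketch of that mapping; the helper name and the table itself are mine, with values taken from the Zabbix 2.4 API reference, not from the original script:

```python
# Per-object-type API strings: (get method, id property in its result,
# key under "options" in configuration.export). Values per the Zabbix 2.4
# API reference; the names here are illustrative.
EXPORT_KINDS = {
    "hostgroups": ("hostgroup.get", "groupid",    "groups"),
    "templates":  ("template.get",  "templateid", "templates"),
    "hosts":      ("host.get",      "hostid",     "hosts"),
}

def build_export_params(kind, object_id):
    """Build the params dict for a configuration.export request."""
    option_key = EXPORT_KINDS[kind][2]
    return {"options": {option_key: [object_id]}, "format": "xml"}
```

Swapping the script from hostgroups to templates, for example, is then just a matter of using the "templates" row everywhere the "hostgroups" row is used.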

Here’s what I ended up with and it works great!  This will export all the hostgroups into separate xml files and put them into the ./hostgroups directory.



# pip install py-zabbix
# source: https://www.zabbix.org/wiki/Python_script_to_export_all_Templates_to_individual_XML_files
# usage: python zabbix_export_hostgroups_bulk.py --url https://<zabbix server name>/zabbix --user <api user> --password <user passwd>

import argparse
import logging
import os
import xml.dom.minidom
from zabbix.api import ZabbixAPI

parser = argparse.ArgumentParser(description='This is a simple tool to export zabbix hostgroups')
parser.add_argument('--hostgroups', help='Name of specific hostgroup to export', default='All')
parser.add_argument('--out-dir', help='Directory to output hostgroups to.', default='./hostgroups')
parser.add_argument('--debug', help='Enable debug mode, this will show you all the json-rpc calls and responses', action="store_true")
parser.add_argument('--url', help='URL to the zabbix server (example: https://monitor.example.com/zabbix)', required=True)
parser.add_argument('--user', help='The zabbix api user', required=True)
parser.add_argument('--password', help='The zabbix api password', required=True)
args = parser.parse_args()

if args.debug:
    logging.basicConfig(level=logging.DEBUG, format='%(asctime)s %(message)s', datefmt='%m/%d/%Y %I:%M:%S %p')
logger = logging.getLogger(__name__)


class ZabbixHostgroups:

    def __init__(self, _url, _user, _password):
        self.zapi = ZabbixAPI(url=_url, user=_user, password=_password)

    def exportHostgroups(self, args):
        request_args = {
            "output": "extend"
        }

        # Narrow the request to a single hostgroup if one was asked for
        if args.hostgroups != 'All':
            request_args["filter"] = {
                "name": [args.hostgroups]
            }

        result = self.zapi.do_request('hostgroup.get', request_args)
        if not result['result']:
            print("No matching name found for '{}'".format(args.hostgroups))
            return

        for t in result['result']:
            dest = args.out_dir + '/' + t['name'] + '.xml'
            self.exportHostgroup(t['groupid'], dest)

    def exportHostgroup(self, tid, oput):
        print("groupid: {} output: {}".format(tid, oput))
        export_args = {
            "options": {
                "groups": [tid]
            },
            "format": "xml"
        }

        result = self.zapi.do_request('configuration.export', export_args)
        hostgroup = xml.dom.minidom.parseString(result['result'].encode('utf-8'))
        # We are backing these up to git, sterilize the date so it doesn't
        # appear to change each time we export the hostgroups
        date = hostgroup.getElementsByTagName("date")[0]
        date.firstChild.data = '2016-01-01T01:01:01Z'
        with open(oput, 'wb') as f:
            f.write(hostgroup.toxml('utf-8'))


def main():
    # Create the output directory if it doesn't already exist
    if not os.path.isdir(args.out_dir):
        os.makedirs(args.out_dir)

    zm = ZabbixHostgroups(args.url, args.user, args.password)
    zm.exportHostgroups(args)


if __name__ == '__main__':
    main()


Install latest Ruby version on CentOS/RHEL with RVM

While trying to install rack and passenger on a CentOS 6.8 box I ran into errors…

ERROR:  Error installing rack:
        rack requires Ruby version >= 2.2.2.
ERROR:  Error installing passenger:
        rake requires Ruby version >= 1.9.3.

It seems newer versions of rack and passenger require more recent versions of Ruby than what the CentOS/RHEL RPMs provide.  I hadn’t needed to upgrade Ruby beyond the RPM-provided version before, so I had to research a bit to get past this roadblock.

I found a utility called RVM (Ruby Version Manager) that can be used quite easily to upgrade Ruby to pretty much any version you need.  I chose to install the latest stable version of Ruby.  RVM also allows you to have multiple versions of Ruby installed on your system and quickly switch between them.

It’s pretty easy to use.  Here’s what I did:

  • Install some required RPMs
yum -y install gcc-c++ patch readline readline-devel zlib zlib-devel libyaml-devel libffi-devel openssl-devel make bzip2 autoconf automake libtool bison iconv-devel sqlite-devel
  • Download and install the latest stable version of RVM
curl -sSL get.rvm.io | bash -s stable
  • Set up environment for Ruby
source /etc/profile.d/rvm.sh
rvm install 2.3.1
  • Set default version once installation completes
rvm use 2.3.1 --default
  • Finally, check that the version is correct
ruby --version

It’s that easy! Once I updated Ruby to the latest version I was able to successfully install both Rack and Passenger. Problem solved!

Thanks to the references I used to get this working:

RVM: Ruby Version Manager

How to Install Ruby 2.1.8 on CentOS & RHEL using RVM

Install Puppet Server CentOS 6.5/6.4

chroot sftp with OpenSSH


This describes configuring an OpenBSD server specifically, but the sshd_config settings should work on any distribution running OpenSSH.

The result will be users with sftp only privileges where upon login they will be jailed into a directory and only have write access to a subdirectory.


All you need is a recent version of OpenBSD, or some other Linux variant, running openssh-server.


Add the following to your /etc/ssh/sshd_config file:

# override default of no subsystems
#Subsystem      sftp    /usr/libexec/sftp-server

# sftp configuration
Subsystem       sftp    internal-sftp

  Match Group sftponly
    ChrootDirectory %h
    ForceCommand internal-sftp
    X11Forwarding no
    AllowTCPForwarding no
    PasswordAuthentication yes

jail directory and user configuration

A quick and dirty bash script to configure the user directories.

 #!/bin/bash
 # usage: ./add_sftp_user.sh <username>
 SFTPUSER=$1
 SFTPDIR=/sftp   # parent directory holding each user's jail -- adjust to taste
 getent group sftponly > /dev/null || groupadd sftponly
 useradd -d $SFTPDIR/$SFTPUSER -s /sbin/nologin -g sftponly $SFTPUSER
 mkdir -p $SFTPDIR/$SFTPUSER/upload
 chown root:sftponly $SFTPDIR
 chmod 700 $SFTPDIR
 chown root:sftponly $SFTPDIR/$SFTPUSER
 chown $SFTPUSER:nobody $SFTPDIR/$SFTPUSER/upload
 chmod 700 $SFTPDIR/$SFTPUSER/upload


  • User will not be allowed to write to their home directory, but they will be allowed to write to the ‘upload’ subdirectory.
  • Users will have read-only access to their home directory.
  • Restart the sshd server after making any changes to /etc/ssh/sshd_config

Puppet agent on Windows 7

Puppet is very particular about the Ruby version on Windows.  While 2.2 and 2.3 versions of Ruby are available, puppet only runs without complaint on Ruby 2.1 on my Windows 7 box.

As of May 2016, I installed ruby 2.1.8 and puppet-agent 3.8.7.  I also had to install some gems to make puppet-agent happy.

 gem install win32-security win32-dir win32-process win32-service

Here are the links to downloads for puppet-agent and ruby:



No issues once the right version of Ruby and the right gems are in place.

Using ntpstat to check NTPD status with Zabbix

The standard way of checking a service in Zabbix verifies only that the service is running, but I wanted to know not just that the NTPD service was running, but also that the time was synchronized.  ntpstat is a great utility that does both: it checks that the ntpd service is running and then tells you whether the server is synchronized.  ntpstat reports the synchronization state of the NTP daemon running on the local machine through its exit code:

  • 0: the clock is synchronized
  • 1: the clock is not synchronized
  • 2: the clock state is unknown, for example if ntpd can’t be contacted
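Since the check ultimately hinges on that exit code, here is a minimal Python sketch of how a script could consume it; the function and mapping names are mine, and ntpstat must be installed on the host for the call to succeed:

```python
import subprocess

# Mapping of ntpstat exit codes to states, per the description above.
NTPSTAT_STATES = {
    0: "synchronized",
    1: "not synchronized",
    2: "state unknown (e.g. ntpd can't be contacted)",
}

def ntp_sync_state():
    """Run ntpstat, discard its output, and return (exit code, description)."""
    rc = subprocess.call(["ntpstat"],
                         stdout=subprocess.DEVNULL,
                         stderr=subprocess.DEVNULL)
    return rc, NTPSTAT_STATES.get(rc, "unexpected exit code")
```

This is the same "discard output, keep the exit code" trick the Zabbix items below use with `&> /dev/null ; echo $?`.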

I created a Zabbix item to use ntpstat.  Here are the 2 ways I have used this new check:

The first way to use ntpstat with Zabbix is to simply create an item using the system.run function.

Name - ntpstat status
Type - Zabbix agent (active)
Key  - system.run[ntpstat &> /dev/null ; echo $?]
Type of Information - Text

Ensure EnableRemoteCommands=1 is set in your zabbix_agentd.conf file for this to work.

The second way to create the item is to use custom user parameters.  This requires a file modification on the monitored instance, so if you have a lot of instances to monitor or do not have a good way to automate this file modification, you may want to stick with option 1.

I like creating new userparameter files for custom parameters.

UserParameter=custom.net.ntpstat,ntpstat &> /dev/null ; echo $?

Then create an item similar to above but with a change to the key

Name - ntpstat status
Type - Zabbix agent (active)
Key  - custom.net.ntpstat
Type of Information - Numeric (unsigned)
Data Type - Decimal

Once your custom userparameter file is placed you’ll need to restart the zabbix agent. The last step with either item creation option is to create a trigger that alerts when the returned value is not 0.
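For the trigger, a simple comparison against the returned value should do.  Something along these lines works in Zabbix 2.x trigger syntax (the host name here is a placeholder):

```
{ntp-host:custom.net.ntpstat.last()}<>0
```

With the system.run variant, substitute the full system.run[...] key for custom.net.ntpstat.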

I like this check much better than my original one that just alerted when the ntpd service was down.  Now I get alerted before time synchronization issues become an issue for the applications.

This was tested on both CentOS 6.7 and CentOS 7.1, but this should work on your Linux distro of choice as long as you have ntpstat installed.

Hope this helps



Device eth0 is not present after cloning RHEL/CentOS in VMWare

#ifup eth0
Device eth0 does not seem to be present, delaying initialisation

Easy fix:

Remove the networking interface rules file, it will be regenerated

# rm -f /etc/udev/rules.d/70-persistent-net.rules

Update your interface configuration file

# vim /etc/sysconfig/networking/devices/ifcfg-eth0

Remove the MACADDR and UUID entries

Save and exit the file

Restart the networking service

# service network restart

Use dstat on command line for quick system resource stat collection

 dstat -tv --output /tmp/${HOSTNAME}-dstat-$(date +"%Y%m%d-%H%M%S").csv 10

Starts a dstat process at 10-second intervals, writing output to a file in /tmp named <hostname>-dstat-yyyymmdd-hhmmss.csv.  Append & to the command to run it in the background.

Kill it with

kill `ps -ef | grep dstat | grep -v grep | awk '{print $2}'`

This csv file is easy to open in Excel to chart performance metrics.  Consider adding it to a script for automation.