AWS S3 on CentOS/RHEL for off-site backups

AWS S3 has been around for a long time now.  Yet, I am just now getting around to using it for an off-site backup location.

Here’s my plan: back up to a local RAID disk on my backup server, then make an off-site copy up to S3.  In my mind, this covers the 3-2-1 backup rule.  Here’s how I see it broken down.  Reach out if you think my thought process is off.

3 copies: The first copy is your primary data, the second is the local backup on your backup server, and the third is what gets put into S3.

2 media types: This is where I might be off.  One is the local backup disk and the second is S3.  I question this a little because some interpretations call for two different physical media types (i.e., if the first copy is on a hard drive, the second can’t also be on a hard drive), but I think that is overly strict as long as you ensure that your off-site backup is secure.  What do you think?

1 off-site copy:  The copy out to S3.

This seems like a pretty solid backup policy.

How to set it up

yum install gcc libstdc++-devel gcc-c++ curl-devel libxml2-devel openssl-devel mailcap automake fuse-devel fuse-libs git libcurl-devel make
git clone https://github.com/s3fs-fuse/s3fs-fuse.git
cd s3fs-fuse/
./autogen.sh
./configure
make && make install
ln -s /usr/local/bin/s3fs /usr/bin/s3fs
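A quick sanity check that the build and the symlink worked (the exact version string will vary with whatever you cloned):

s3fs --version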
  • Once you have fuse and s3fs installed, create a bucket in S3 and record the credentials for a user with access to that bucket. s3fs uses /etc/passwd-s3fs for credential storage (see the note on file permissions after the fstab entry below). Enter your bucket credentials in /etc/passwd-s3fs as follows:
accessKeyId:secretAccessKey
  • If you have multiple buckets that will be mounted to this machine, add the credentials in /etc/passwd-s3fs as follows:
bucketName:accessKeyId:secretAccessKey
  • create a directory for mounting the s3 bucket
mkdir -p /mnt/s3fs-bucketname
  • manually mount the bucket into the mount point
s3fs -o use_cache=/tmp/cache bucketname /mnt/s3fs-bucketname

The -f switch is helpful for running the process in the foreground to troubleshoot mounting.

  • Once you confirm the mount is successful, you can enter the mount attributes in /etc/fstab so it mounts at startup.
s3fs#bucketname /mnt/s3fs-bucketname fuse allow_other,use_cache=/tmp/cache 0 0
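Two quick checks at this point: s3fs will refuse a credentials file that is readable by anyone other than its owner, so lock down /etc/passwd-s3fs, and then confirm the bucket is actually mounted:

chmod 600 /etc/passwd-s3fs
mount | grep s3fs
df -h /mnt/s3fs-bucketname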


Set up your backup client to put an extra copy in your /mnt/s3fs-bucketname directory.  If you were really paranoid about data loss, you could also use an S3 lifecycle rule to age your data into Glacier after a certain time.  I need to run with this for a little while and see what works best for my use case.  Let me know if this works for you.
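As a rough sketch of that aging idea (the bucket name and the 90-day threshold are placeholders, and you should check the current AWS CLI docs before relying on this), a lifecycle rule that transitions objects to Glacier could look like:

cat > lifecycle.json <<'EOF'
{
  "Rules": [
    {
      "ID": "age-backups-to-glacier",
      "Status": "Enabled",
      "Filter": { "Prefix": "" },
      "Transitions": [ { "Days": 90, "StorageClass": "GLACIER" } ]
    }
  ]
}
EOF
aws s3api put-bucket-lifecycle-configuration --bucket bucketname --lifecycle-configuration file://lifecycle.json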

 

 


export hostgroups to xml via API with python

According to the Zabbix docs, the only way to export hostgroups is through the API.  My exposure to the Zabbix API is limited, but I knew there were coding giants out there whose shoulders I could stand on.

I would like to give credit to someone directly, but the code I found had no author listed.  Here’s the link to the original on the zabbix.org wiki site for reference.

https://www.zabbix.org/wiki/Python_script_to_export_all_Templates_to_individual_XML_files

The code, as is, works great for exporting templates, but I needed to make some changes to get it to export hostgroups.  Luckily, the API reference pages on the Zabbix website are very helpful.

I’ll leave it up to you to diff the 2 versions to see exactly what changed, but the basic summary is: modify a couple of parameters and a couple of object properties, and the script can be used to export many other things.

See the API reference pages for the hostgroup method details.  https://www.zabbix.com/documentation/2.4/manual/api/reference/hostgroup/get

Here’s what I ended up with and it works great!  This will export all the hostgroups into separate xml files and put them into the ./hostgroups directory.

 

#!/usr/bin/python

#
# pip install py-zabbix
#
# source: https://www.zabbix.org/wiki/Python_script_to_export_all_Templates_to_individual_XML_files
#
# usage: python zabbix_export_hostgroups_bulk.py --url https://<zabbix server name>/zabbix --user <api user> --password <user passwd>
#

import argparse
import logging
import time
import os
import json
import xml.dom.minidom
from zabbix.api import ZabbixAPI
from sys import exit
from datetime import datetime

parser = argparse.ArgumentParser(description='This is a simple tool to export zabbix hostgroups')
parser.add_argument('--hostgroups', help='Name of specific hostgroup to export', default='All')
parser.add_argument('--out-dir', help='Directory to output hostgroups to.', default='./hostgroups')
parser.add_argument('--debug', help='Enable debug mode, this will show you all the json-rpc calls and responses', action="store_true")
parser.add_argument('--url', help='URL to the zabbix server (example: https://monitor.example.com/zabbix)', required=True)
parser.add_argument('--user', help='The zabbix api user', required=True)
parser.add_argument('--password', help='The zabbix api password', required=True)
args = parser.parse_args()

if args.debug:
    logging.basicConfig(level=logging.DEBUG, format='%(asctime)s %(message)s', datefmt='%m/%d/%Y %I:%M:%S %p')
logger = logging.getLogger(__name__)


def main():
    global args
    global parser

    if args.url is None:
        print "Error: Missing --url\n\n"
        exit(2)

    if args.user is None:
        print "Error: Missing --user\n\n"
        exit(3)

    if args.password is None:
        print "Error: Missing --password\n\n"
        exit(4)

    if not os.path.isdir(args.out_dir):
        os.mkdir(args.out_dir)

    zm = ZabbixHostgroups(args.url, args.user, args.password)

    zm.exportHostgroups(args)


class ZabbixHostgroups:

    def __init__(self, _url, _user, _password):
        self.zapi = ZabbixAPI(url=_url, user=_user, password=_password)

    def exportHostgroups(self, args):
        request_args = {
            "output": "extend"
        }

        # Only filter when a specific hostgroup name was requested
        if args.hostgroups != 'All':
            request_args["filter"] = {
                "name": [args.hostgroups]
            }

        result = self.zapi.do_request('hostgroup.get', request_args)
        if not result['result']:
            print "No matching name found for '{}'".format(args.hostgroups)
            exit(-3)

        if result['result']:
            for t in result['result']:
                dest = args.out_dir + '/' + t['name'] + '.xml'
                self.exportTemplate(t['groupid'], dest)

    def exportTemplate(self, tid, oput):

        print "groupid:", tid, " output:", oput
        export_args = {
            "options": {
                "hostgroups": [tid]
            },
            "format": "xml"
        }

        result = self.zapi.do_request('configuration.export', export_args)
        hostgroup = xml.dom.minidom.parseString(result['result'].encode('utf-8'))
        date = hostgroup.getElementsByTagName("date")[0]
        # We are backing these up to git; pin the export date so the files
        # don't appear to change each time we export the hostgroups
        date.firstChild.replaceWholeText('2016-01-01T01:01:01Z')
        f = open(oput, 'w+')
        f.write(hostgroup.toprettyxml().encode('utf-8'))
        f.close()


if __name__ == '__main__':
    main()

 


chroot sftp with OpenSSH

Overview

This describes configuring an OpenBSD server specifically, but the sshd_config settings should work on any distro.

The result will be users with sftp only privileges where upon login they will be jailed into a directory and only have write access to a subdirectory.

Requirements

A recent version of OpenBSD, or a Linux distro, running openssh-server

/etc/ssh/sshd_config

Add the following to your /etc/ssh/sshd_config file:

# override default of no subsystems
#Subsystem      sftp    /usr/libexec/sftp-server

# sftp configuration
Subsystem       sftp    internal-sftp

  Match Group sftponly
    ChrootDirectory %h
    ForceCommand internal-sftp
    X11Forwarding no
    AllowTCPForwarding no
    PasswordAuthentication yes

jail directory and user configuration

A quick and dirty bash script to configure the user directories.


#!/bin/bash
SFTPUSER=<username>
SFTPDIR=/sftp_jail

# Create the sftp-only user with no shell, homed inside the jail
useradd -d $SFTPDIR/$SFTPUSER -s /sbin/nologin -g sftponly $SFTPUSER
mkdir -p $SFTPDIR/$SFTPUSER/upload

# The chroot directory and every path component above it must be owned by
# root and not writable by group or other, or sshd will refuse the chroot
chown root:sftponly $SFTPDIR
chmod 700 $SFTPDIR
chown root:sftponly $SFTPDIR/$SFTPUSER
chmod 750 $SFTPDIR/$SFTPUSER

# Only the upload subdirectory is writable by the user
chown $SFTPUSER:nobody $SFTPDIR/$SFTPUSER/upload
chmod 700 $SFTPDIR/$SFTPUSER/upload
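One assumption in the script above: the sftponly group referenced by both the Match block and useradd must already exist. Create it first if it doesn’t:

groupadd sftponly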

Notes

  • Users will have read-only access to their home directory; they can only write to the ‘upload’ subdirectory.
  • Restart the sshd server after making any changes to /etc/ssh/sshd_config.
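After restarting sshd, a quick way to verify the jail is to sftp in as the new user and confirm that an interactive ssh login is refused (with the /sbin/nologin shell and ForceCommand in place, it should be):

sftp <username>@<server>
ssh <username>@<server>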

Using ntpstat to check NTPD status with Zabbix

The standard way of checking a service in Zabbix verifies that the service is running, but I wanted to know not only that the NTPD service was running, but also that the time was synchronized.  ntpstat is a great utility that does both: it checks that the ntpd service is running and reports the synchronization state of the NTP daemon on the local machine.  ntpstat returns 0 if the clock is synchronized, 1 if the clock is not synchronized, and 2 if the clock state is unknown, for example if ntpd can’t be contacted.

I created a Zabbix item to use ntpstat.  Here are the 2 ways I have used this new check:

The first way to use ntpstat with Zabbix is to simply create an item using the system.run function.

Name - ntpstat status
Type - Zabbix agent (active)
Key  - system.run[ntpstat &> /dev/null ; echo $?]
Type of Information - Text

Ensure EnableRemoteCommands=1 is set in your zabbix_agentd.conf file for this to work.
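Assuming the agent also accepts passive checks from the machine you are testing from, you can verify the key by hand with zabbix_get before creating the item (the IP below is just a placeholder):

zabbix_get -s 192.0.2.10 -k 'system.run[ntpstat &> /dev/null ; echo $?]'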

The second way to create the item is to use custom user parameters.  This requires a file modification on the monitored instance, so if you have a lot of instances to monitor or do not have a good way to automate this file modification, you may want to stick with option 1.

I like creating new userparameter files for custom parameters.

/etc/zabbix/zabbix_agentd.d/userparameter_custom_linux.conf
UserParameter=custom.net.ntpstat,ntpstat &> /dev/null ; echo $?

Then create an item similar to above but with a change to the key

Name - ntpstat status
Type - Zabbix agent (active)
Key  - custom.net.ntpstat
Type of Information - Numeric (unsigned)
Data Type - Decimal

Once your custom userparameter file is placed, you’ll need to restart the zabbix agent. The last step with either item creation option is to create a trigger that alerts when the returned value is not 0 (see the example below).
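For reference, with the userparameter key from option 2, a Zabbix 2.4-style trigger expression along these lines should do it (the host name is just a placeholder):

{ntp-server-01:custom.net.ntpstat.last()}<>0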

I like this check much better than my original one that just alerted when the ntpd service was down.  Now I get alerted before time synchronization problems become an issue for the applications.

This was tested on both CentOS 6.7 and CentOS 7.1, but this should work on your Linux distro of choice as long as you have ntpstat installed.

Hope this helps

 

 


Device eth0 is not present after cloning RHEL/CentOS in VMWare

# ifup eth0
Device eth0 does not seem to be present, delaying initialisation

Easy fix:

Remove the networking interface rules file, it will be regenerated

# rm -f /etc/udev/rules.d/70-persistent-net.rules

Update your interface configuration file

# vim /etc/sysconfig/network-scripts/ifcfg-eth0

Remove the HWADDR and UUID entries

Save and exit the file

Restart the networking service

# service network restart
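If you clone machines often, the file edit can be scripted; a rough one-liner like this removes both entries before the restart (adjust the path if your interface config lives elsewhere):

sed -i -e '/^HWADDR/d' -e '/^UUID/d' /etc/sysconfig/network-scripts/ifcfg-eth0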

Use dstat on command line for quick system resource stat collection


dstat -tv --output /tmp/${HOSTNAME}-dstat-$(date +"%Y%m%d-%H%M%S").csv 10

This starts a dstat process at 10 second intervals and writes its output to a file in /tmp called <hostname>-dstat-yyyymmdd-hhmmss.csv.  Append & (or use nohup) if you want it running in the background.

Kill it with

kill `ps -ef | grep dstat | grep -v grep | awk '{print $2}'`

This csv file is easy to open in Excel to chart performance metrics.  Consider adding this to a script for automation, as in the sketch below.
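A minimal sketch of that automation, assuming roughly an hour of samples is enough (the duration and output path are arbitrary):

#!/bin/bash
# Collect dstat samples every 10 seconds for one hour, then stop the collector.
OUTFILE=/tmp/${HOSTNAME}-dstat-$(date +"%Y%m%d-%H%M%S").csv
dstat -tv --output "$OUTFILE" 10 > /dev/null &
DSTAT_PID=$!
sleep 3600
kill "$DSTAT_PID"
echo "dstat output written to $OUTFILE"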


Use SCP to copy directories in Linux

scp -r <source_directory> username@ip:destination_directory
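For example, to recursively copy a local web root to a hypothetical backup host (the paths and address are placeholders):

scp -r /var/www/html backupuser@192.0.2.10:/srv/backups/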

It’s that easy.