Using AWS SES for Zabbix email notifications

I’ve been using Amazon Simple Email Service (SES) as the SMTP server for my home lab notifications.  I don’t have to worry about running my own SMTP server, and the cost of this configuration is minimal.  My SES account is in ‘sandbox’ mode, which limits daily email to 200 messages.  I sent a request to Amazon Support and they raised the quota to 1,000.  The pricing schedule says it’s $0.10 per 1,000 emails.  I send about 20 per month, so I see a $0.01 charge every couple of months.  A good deal to me.

To get this going you’ll first need an SES account set up on AWS.  I won’t go into detail here, but I’d write up a post if someone wants more detail.

Once your account is set up, you’ll need to note your SMTP credentials: from the SES homepage, under Email Sending, click SMTP Settings.

You’ll need the following to set up a media type in Zabbix:

  • SMTP username
  • SMTP password
  • SMTP server name
  • SMTP port

Once you have these credentials, jump over to the Zabbix dashboard and click Administration, then Media types.  Click Create media type and give your new type a name (e.g. Email – AWS SES).  Here’s what I entered in the media type to get this working:

Type - Email
SMTP server - servername from the SES SMTP Settings page
SMTP server port - 587
SMTP helo - servername from SES SMTP Settings page
SMTP email - an email that's been verified by SES
Connection Security - STARTTLS
Authentication - Normal Password
Username - SES SMTP Username
Password - SES SMTP Password
Enabled - checked
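Before wiring the credentials into Zabbix, you can sanity-check them against SES by hand.  SMTP AUTH LOGIN expects the username and password base64-encoded, so encode them first; the credentials and the region endpoint below are placeholders, so substitute your own from the SMTP Settings page.

```shell
# Placeholder credentials -- substitute the values from your SES SMTP Settings page.
SMTP_USER='AKIAEXAMPLEUSER'
SMTP_PASS='examplePassword123'

# SMTP AUTH LOGIN expects each value base64-encoded on its own line:
printf '%s' "$SMTP_USER" | base64
printf '%s' "$SMTP_PASS" | base64

# Then open an interactive STARTTLS session (endpoint varies by region)
# and paste "AUTH LOGIN" followed by the two encoded strings:
#   openssl s_client -starttls smtp -crlf -connect email-smtp.us-east-1.amazonaws.com:587
```

If the server answers `235 Authentication successful`, the same credentials will work in the media type.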

Click Add to save your config, and let’s move on to configuring the new media type for a user.

Under Administration then Users, select the user who will be receiving trigger notifications and click the Media tab for that user.  Add a new Media type of Email – AWS SES then enter an email address in Send to.  Modify When active and Use if severity if desired.  Make sure it’s Enabled, then click Add.

Click Update and move on to Configuration, Actions.

Make sure the Event source is set to Triggers.  Create an Action if you don’t already have one: enter a descriptive name and add Conditions to the Action.  Good starting conditions are Trigger severity >= High and Maintenance status not in maintenance.  Together, these send an email when a trigger is at least a severity of High, but never while the host is in maintenance status.  Change these to your needs.

Once you finish the Action tab, move to the Operations tab.  Here you decide how the email message will be formatted and how the email will be sent.

I configure my actions to send an email at some interval (1hr in this example) until the problem is resolved or acknowledged.  This way no one can say they didn’t see the alert…
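For the operation’s message, the stock macros work well.  Here’s a subject and body along the lines of the Zabbix defaults, shown as a reference; all of the placeholders are standard Zabbix macros:

```
Subject: {TRIGGER.STATUS}: {TRIGGER.NAME}

Trigger: {TRIGGER.NAME}
Trigger status: {TRIGGER.STATUS}
Trigger severity: {TRIGGER.SEVERITY}
Host: {HOST.NAME}
Item value: {ITEM.NAME1} ({HOST.NAME1}:{ITEM.KEY1}): {ITEM.VALUE1}
```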

Click the Recovery operations tab and add an entry under Operations.  Same as you did on the Operations tab, click New, then enter:

Operation type - Send message
Send to users - <select a user>
Send only to - All or Email - AWS SES

Click Add to save your operation details.

Once you’ve added the Operations and Recovery operations steps, save your changes by clicking Add.

Now, it’s time to test out your new email alerts.

An easy way to test this is to stop a zabbix-agent service or otherwise activate a trigger that has a severity of at least High.  If this is a lab test box, one trick I use regularly is to create an item that watches a file then set a trigger to alert when the file is missing.

Name - Trigger Test
Type - Zabbix Agent (active)
Key - vfs.file.exists[/tmp/zabbix-trigger-test.txt]
Type of Information - Numeric (unsigned)
Data Type - Decimal

Name - Trigger Test
Severity - Disaster
Expression - {hostname001:vfs.file.exists[/tmp/zabbix-trigger-test.txt].last()}<>1

Now just create a file called /tmp/zabbix-trigger-test.txt on your host.  When you want to test the trigger, simply rename/delete the file and the trigger will activate.  Add the file back and the trigger goes to RESOLVED state.
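Flipping the trigger from the shell is just file manipulation.  A quick sketch (the commented `zabbix_agentd -t` line runs the item check locally, if the agent is installed on that host):

```shell
# Create the watched file -- the trigger stays in OK state.
touch /tmp/zabbix-trigger-test.txt

# Optional: ask the local agent what it would report (1 = file exists).
# zabbix_agentd -t 'vfs.file.exists[/tmp/zabbix-trigger-test.txt]'

# Remove the file to fire the trigger and exercise the email alert.
rm /tmp/zabbix-trigger-test.txt

# Recreate it to move the trigger back to RESOLVED.
touch /tmp/zabbix-trigger-test.txt
```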

Troubleshooting the email send happens in Monitoring, Problems.  Show recent problems, and in the Actions column you will see either Done or Failures.  Done is good: everything is working as expected.  Failures will show an error message to help track down the issue.  Click on Failures and hover over the ‘i’ to see what the error message is.  You are on your own for figuring out what the error means.  It’s most likely a wrong port or connection security setting on your media type, but you’ll have to track it down.

This write-up, in general, will work with any SMTP server you’d like to connect to.  I hope this helps someone get their email alerts working in Zabbix.


AWS S3 on CentOS/RHEL for off-site backups

AWS S3 has been around for a long time now.  Yet, I am just now getting around to using it for an off-site backup location.

Here’s my plan.  Backup to local RAID disk on my backup server, then make an off-site copy up to S3.  In my mind, this covers the 3-2-1 backup rule.  Here’s how I see it broken down.  Reach out if you think my thought process is off.

3 copies: The first copy is your primary data, second copy is the local backup on your backup server, and the 3rd copy is what gets put into S3.

2 media types: This is where I might be off.  One is the local backup and the second is S3.  I question this a little because some people on the internet interpret this as different physical media types (i.e., if the first is a hard drive, the second can’t be a hard drive), but I think that is overly redundant as long as you ensure that your off-site backup is secure.  What do you think?

1 off-site copy:  The copy out to S3.

This seems like a pretty solid backup policy.

How to set it up

yum install gcc gcc-c++ libstdc++-devel libcurl-devel libxml2-devel openssl-devel mailcap automake fuse-devel fuse-libs git make
git clone https://github.com/s3fs-fuse/s3fs-fuse.git
cd s3fs-fuse/
./autogen.sh
./configure
make && make install
ln -s /usr/local/bin/s3fs /usr/bin/s3fs
  • Once you have fuse and s3fs installed, create a bucket in S3, and record the credentials for a user with access to the bucket. s3fs will use /etc/passwd-s3fs for credential storage. For a single bucket, enter your credentials in /etc/passwd-s3fs as follows:
ACCESS_KEY_ID:SECRET_ACCESS_KEY
  • If you have multiple buckets that will be mounted to this machine, prefix each line in /etc/passwd-s3fs with the bucket name, as follows:
bucketname:ACCESS_KEY_ID:SECRET_ACCESS_KEY
  • s3fs requires that the credential file be readable only by its owner:
chmod 600 /etc/passwd-s3fs
  • create a directory for mounting the s3 bucket
mkdir -p /mnt/s3fs-bucketname
  • manually mount the bucket into the mount point
s3fs -o use_cache=/tmp/cache bucketname /mnt/s3fs-bucketname

The -f switch runs the process in the foreground, which is helpful for troubleshooting mounting.
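Before pointing backups at the directory, it’s worth confirming the bucket actually mounted; writing to an unmounted mount point would silently land on the local disk.  A minimal check, assuming the example mount point from above:

```shell
# Example mount point from above -- adjust to yours.
MNT=/mnt/s3fs-bucketname

# /proc/mounts lists every active mount; fields are space-separated,
# so match the mount point surrounded by spaces.
if grep -qs " $MNT " /proc/mounts; then
    echo "mounted"
else
    echo "not mounted"
fi
```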

  • Once you confirm the mount is successful, you can enter the mount attributes in /etc/fstab so it mounts at startup.
s3fs#bucketname /mnt/s3fs-bucketname fuse allow_other,use_cache=/tmp/cache 0 0

Set up your backup client to put an extra copy in your /mnt/s3fs-bucketname directory.  If you were really paranoid about data loss, you could always age your data in S3 with a lifecycle rule that transitions it to Glacier after a certain time.  I need to run with this for a little while and see what works best for my use case.  Let me know if this works for you.
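For the off-site copy itself, something as simple as an rsync from the local backup set into the mounted bucket works; the paths below are hypothetical, so adjust them to your own layout:

```shell
# Hypothetical paths -- local backup set on the RAID disk, destination in the bucket.
SRC=/backup/local/
DST=/mnt/s3fs-bucketname/backup/

# -a preserves attributes, -v lists what transfers; only changed files are copied.
rsync -av "$SRC" "$DST"
```

Run it from cron after the local backup finishes and you have your third, off-site copy.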