
Running it through TAttr (part 2)

2018-08-08 (148) by Ton Kersten, tagged as ansible, sysadm

Some time ago I created a playbook to show the content of a rendered template. If you keep digging through the Ansible documentation, you suddenly stumble over the template lookup plugin, and then it turns out that my playbook was a bit clumsy.

A nicer and shorter way to do it:

---
#
# This playbook renders a template and shows the results
# Run this playbook with:
#
#       ansible-playbook -e templ=<name of the template> template_test.yml
#
- hosts: localhost
  become: false
  connection: local

  tasks:
    - fail:
        msg: "Bailing out. The play requires a template name (templ=...)"
      when: templ is undefined

    - name: show templating results
      debug:
        msg: "{{ lookup('template', templ) }}"

Ansible, loop in loop in loop in loop in loop

2018-06-08 (147) by Ton Kersten, tagged as ansible, loop, sysadm

A couple of days ago a client asked me if I could solve the following problem:

They have a large number of web servers, all running a plethora of PHP versions. These machines are locally managed with DirectAdmin, which also manages the PHP configuration files. They are also running Ansible for all kinds of configuration tasks. What they want is a simple playbook that ensures a certain line is present in all PHP ini files, for all PHP versions, on all web servers.

All the PHP directories match the pattern /etc/php[0-9][0-9].d.

Thinking about this, I came up with the following solution (it took me some time, though) :-)

---
- name: find all ini files in all /etc/php directories
  hosts: webservers
  user: ansible
  become: True
  become_user: root

  tasks:
    - name: get php directories
      find:
        file_type: directory
        paths:
          - /etc
        patterns:
          - php[0-9][0-9].d
      register: dirs

    - name: get files in php directories
      find:
        paths:
          - "{{ item.path }}"
        patterns:
          - "*.ini"
      loop: "{{ dirs.files }}"
      register: phpfiles

    - name: show all found files
      debug:
        msg: "Files is {{ item.1.path }}"
      with_subelements:
        - "{{ phpfiles.results }}"
        - files

The with_subelements part did the trick. Of course, that loop can also be written as:

loop: "{{ query('subelements', phpfiles.results, 'files') }}"
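
The playbook above only shows the files it found. For the client's actual requirement, making sure a certain line is present in every ini file, the debug task can be swapped for lineinfile. A minimal sketch, in which the setting name and value are made up:

    - name: ensure the setting in all PHP ini files
      lineinfile:
        path: "{{ item.1.path }}"
        regexp: '^;?\s*memory_limit\s*='   # hypothetical setting
        line: 'memory_limit = 256M'        # hypothetical value
      with_subelements:
        - "{{ phpfiles.results }}"
        - files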

Ditched Disqus

2018-05-31 (146) by Ton Kersten, tagged as gdpr, privacy

As the new GDPR finds its way all over Europe, I decided to have a closer look at my website. I had been using the Disqus comment system for quite some time, but hardly anyone ever takes the time to comment.

As the Disqus system uses a lot of JavaScript and cookies, I decided it was time to get rid of these tools and make my site fly again.

At Disqus: So long and thanks for all the fish.

Did you run it through TAttr

2017-08-15 (145) by Ton Kersten, tagged as ansible, sysadm

During my last Ansible training the students needed to create some Ansible templates for themselves. As I do not want to run a template under test against some, or all, of the machines under Ansible control, I created a small Ansible playbook to test templates.

Read more »

Stupid Fedora

2016-05-26 (144) by Ton Kersten, tagged as sysadm

Yesterday I removed a simple package from my Fedora 23 machine and after that I got the message:

error: Failed to initialize NSS library

WTF??????

Searching the interwebs I found out I wasn’t the first, and probably not the last, to run into this problem.

It seems that, one way or another, the DNF package doesn't know about its dependency on SQLite. So, when a package removal requires removing SQLite, DNF removes it without question, and thus breaks itself.

But how to fix this? DNF doesn't work, and neither does RPM, so there is no way to reinstall the SQLite packages.

Tinkering and probing I found this solution:

#!/bin/bash
# Fetch the SQLite packages straight from a Fedora mirror, unpack them
# with rpm2cpio/cpio and copy the files back into /usr, then reinstall
# SQLite properly once DNF works again.
url="http://ftp.nluug.nl/os/Linux/distr/fedora/linux/updates/23/x86_64/s/"
ver="3.11.0-3"

wget ${url}/sqlite-${ver}.fc23.x86_64.rpm
wget ${url}/sqlite-libs-${ver}.fc23.x86_64.rpm
rpm2cpio sqlite-${ver}.fc23.x86_64.rpm | cpio -idmv
rpm2cpio sqlite-libs-${ver}.fc23.x86_64.rpm | cpio -idmv
cp -Rp usr /
dnf --best --allowerasing install sqlite.x86_64

This downloads the SQLite package and the SQLite library package, extracts them, and copies the missing files to their /usr destination. After that, DNF and RPM work again. It could be that I downloaded an older version of SQLite, so to make sure I have a current version, the last line reinstalls SQLite properly.

Maybe a good idea to fix that in DNF!

Building an Ergodox

2015-03-03 (143) by Ton Kersten, tagged as news

After a lot of thought I decided it was time for a new project: one I would enjoy and one that would be useful for a long time.

Searching the web and reading articles I found the ErgoDox.

The ErgoDox is a split-hand ergonomic keyboard with mechanical switches and open source, layer-based firmware running on a Teensy microcontroller. While other keyboards offer dip-switches or GUI config tools, the firmware and layouts can be built from source on the command line or through a layout configuration tool. Flashing a new build onto the ErgoDox is easy with the multi-platform Teensy loader.

I immediately got interested and, after searching a bit more, I ordered the kit from Falbatech. Unfortunately they do not supply the keycaps, so I ordered those from Signature Plastics.

Read more »

Stable Internet

2014-10-01 (142) by Ton Kersten, tagged as internet

My stable internet connection

For a couple of years now I have been running a fiber connection to the Internet, supplied by XMS-Net.

I also have an Atlas probe to do some internet measurements for RIPE.

Today I got a status email from RIPE with last month's connection status. I guess I can say I have a stable internet connection. :-)

This is your monthly availability report for probe xxxx (TonKs Atlas).

Calculation interval    : 2014-09-01 00:00:00 - 2014-10-01 00:00:00
Total Connected Time    :  30d 00:00
Total Disconnected Time :   0d 00:00
Total Availability      :    100.00%

+---------------------+---------------------+------------+--------------+
| Connected (UTC)     | Disconnected (UTC)  | Connected  | Disconnected |
|---------------------+---------------------+------------+--------------+
| 2014-08-26 07:09:17 | Still up            |  30d 00:00 |     0d 00:00 |
+---------------------+---------------------+------------+--------------+

Puppet environments

2014-05-26 (141) by Ton Kersten, tagged as puppet

For my job I do a lot of Puppet and I thought it was about time to write some tips and tricks down.

The first part of this post is about my environment setup. In my test setup I use a lot of environments. They are not at all useful, but that's not the point: it is my lab environment, so things need to break once in a while. For multiple environments, Puppetlabs says you should switch to directory environments (PuppetDoc), but one way or another I cannot get that to work properly with my Puppet version (3.4.3, Puppet Enterprise 3.2.3). So I started implementing dynamic environments, which is a simple way of specifying the directories for your environments.

Part of my puppet.conf looks like this:

[master]
    environment = production
    manifest    = $confdir/environments/$environment/manifests/site.pp
    manifestdir = $confdir/environments/$environment/manifests
    modulepath  = $confdir/environments/$environment/modules:/usr/share/puppet/modules
    templatedir = $confdir/environments/$environment/templates

So, my default environment is production and a client can specify another environment to be in. The command

puppet agent --environment=test

would place this node in the test environment. A simple module places a new puppet.conf file on the client, stating this new environment. Couldn't be simpler.

Well, that’s what you think. But what if you need to deploy 10.000+ hosts of which there are about a third in environment test and about a 1000 in environment development? It would take a lot of time to ssh into all these servers and run Puppet with the correct environment.

There has to be a way around that. And, of course, there is. In Puppet version 3 and up, Hiera is integrated into Puppet, and we already use it a lot. Why not put the environment in Hiera? Well, our hiera.yaml is now:

---
:hierarchy:
    - "%{environment}/hiera/%{::fqdn}"
    - "%{environment}/hiera/%{::hostname}"
    - "%{environment}/hiera/%{::domainname}"
    - "%{environment}/hiera/%{::systemtype}"
    - "%{environment}/hiera/%{::osfamily}"
    - "%{environment}/hiera/common"

:backends:
    - yaml

:yaml:
    :datadir:
        /etc/puppetlabs/puppet/environments

This presents me with a chicken-and-egg problem: to get the environment, I need to know the environment. But what if I turn Hiera into an ENC (External Node Classifier) and let it deliver the environment? Can this be done? Yes, it can.

This is how I did it:

First create a part of the Hiera hierarchy that does not depend on the environment, for example like this:

---
:hierarchy:
    - "hiera/%{::fqdn}"
    - "hiera/default"
    - "%{environment}/hiera/%{::fqdn}"
    - "%{environment}/hiera/%{::hostname}"
    - "%{environment}/hiera/%{::domainname}"
    - "%{environment}/hiera/%{::systemtype}"
    - "%{environment}/hiera/%{::osfamily}"
    - "%{environment}/hiera/common"

:backends:
    - yaml

:yaml:
    :datadir:
        /etc/puppetlabs/puppet/environments

And in the directory /etc/puppetlabs/puppet/environments/hiera I place a very small file, called default.yaml, which contains:

---
environment: 'production'

This makes sure that any node without a specific file will get the production environment. This is the default for Puppet as well, so nothing changes there.

To test this, run:

hiera environment ::fqdn=$(hostname -f)

This will give you something like environment: production. For every host in an environment other than production, create a small file named after the FQDN of the host, with its contents stating the wanted environment.

(Watch the :: in front of fqdn. It means that the fqdn variable is a top-scope variable, as all Facter variables are.)
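
For illustration, such a per-host file could look like this (the hostname is made up):

---
# /etc/puppetlabs/puppet/environments/hiera/web01.example.com.yaml (hypothetical host)
environment: 'test'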

Now integrate this into Puppet. First create a little script that executes the command above and returns the wanted output.

My script is called getenv and placed in /etc/puppetlabs/puppet/bin

#!/bin/bash
# Query Hiera for this node's environment and print it in the
# YAML format that Puppet expects from an ENC.

penv="$(/opt/puppet/bin/hiera                       \
            -c /etc/puppetlabs/puppet/hiera.yaml    \
            environment ::fqdn="${1}")"

echo "environment: ${penv}"

This returns a string like environment: production.

And last, but not least, place these settings in the [master] section of your puppet.conf:

node_terminus  = exec
external_nodes = /etc/puppetlabs/puppet/bin/getenv

It took some work to get things started: a small shell script read the file with all 10,000+ hosts and their required environments, and created the Hiera files for all nodes that are not in the production environment.

Just one thing left to do: with a lot of host files in a single directory, lookups could become slow. I could place all definitions in a simple database, but that would make things complicated again, and that's not what I want. I could also split things up per letter, but I'm not sure yet whether I really want that.

When I have resolved this, this entry will be continued.

Docker panics

2014-04-14 (140) by Ton Kersten, tagged as sysadm

This morning I was messing around with Docker and I wanted to build me a nice, clean container with Ubuntu in it, to test Ansible thingies. I've done that before and everything worked like a charm. Until today.

I have this Dockerfile (I’ve stripped it to the bare bones that still fail):

FROM ubuntu:latest
MAINTAINER Ton_Kersten
RUN echo "deb http://archive.ubuntu.com/ubuntu precise main universe" > /etc/apt/sources.list
RUN apt-get -y update
RUN apt-get -y upgrade
RUN apt-get -y install git git-flow
RUN apt-add-repository -y ppa:mozillateam/firefox-next
RUN apt-get install -y firefox

and when I run

docker build .

I end up with a beautiful kernel panic. Whatever I try: panic. Nothing in any logfile.

I’m running kernel version Linux lynx 3.2.0-60-generic #91-Ubuntu SMP Wed Feb 19 03:54:44 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux which had no problems before.

The Docker version is Docker version 0.10.0, build dc9c28f

Is there anybody out there that knows what’s happening?

Thanks.

Ansible @ Loadays

2014-04-05 (139) by Ton Kersten, tagged as ansible, sysadm

Last Saturday I attended Loadays in Antwerp, Belgium.

After listening to Jan Piet Mens’s talk about Ansible, I was up for it.

At 11:30 sharp, I started my own presentation for an almost packed room. It's called "Ansible, why and how I use it" and you can find it on SpeakerDeck.

It was a lovely talk, with a very knowledgeable crowd.

Please, have a look at it and if you have any questions, let me know.

Thanks to the crew for organizing such a lovely event, every year.

Photos of the event were taken by Robert Keerse and you can see them on his Google Plus page. Do enjoy!!

For those of you with a strong stomach, the complete presentation is on YouTube as well. Have a look at the YouTube stream.