Wednesday, November 27, 2013

Old Thinkpad X40

In 2005 I taught a Python class and made enough money to buy myself an IBM (pre-Lenovo) Thinkpad X40:

  IBM ThinkPad X40 2371
  Dimensions (WxDxH) 10.6 in x 8.3 in x 1.1 in
  Weight 2.6 lbs
  Processor Intel Pentium M 713 1.1 GHz ULV
  Cache Memory 1 MB - L2 cache
  RAM 256 MB (installed) / 1.28 GB (max) - DDR SDRAM - PC2700 - 333 MHz
  W7P9838   FREE 512MB PC2700 CL2.5 NP DDR  $0.00
  Hard Drive 30 GB - 4200 rpm
  Display 12.1" TFT active matrix XGA (1024 x 768) - 24-bit (16.7 million colors)
  Telecom Fax / modem - CDC - 56 Kbps
  Networking Network adapter - Ethernet, Fast Ethernet, Gigabit Ethernet, IEEE 802.11b, IEEE 802.11g
  IBM 11 A/B/G WIRELESS LAN MINI PCI ADAPTER II 

I am in a state where I'll be between laptops, so I've installed an XFCE spin of Fedora 20 Beta RC5 and I'm surprised how usable it is. I bought a new battery (IBM-22-4400) from laptopbatterydepot.com so I can travel with it, and I'm getting decent battery life (~5 hours so far), but my bottleneck is disk IO. sweclockers.com has a nice write-up on installing a $15 PATA-to-mSATA adapter along with a ~$100 SSD on the exact same model of laptop to speed it up and improve the reliability of an 8-year-old laptop that doesn't owe anyone anything anymore. Now the temptation sets in. Do I want to spend ~$115 and a few hours on an old laptop?
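Before spending the money, it's easy to confirm the bottleneck. The following is a rough, hypothetical check (the scratch file path and size are arbitrary): a synced sequential write with dd, which on a 4200 rpm PATA drive should come out well below what an SSD behind the adapter would manage.

```shell
# Write 16 MB with fdatasync so the timing reflects the disk, not the page cache.
# /tmp/ddtest is an arbitrary scratch file; dd prints the throughput on stderr.
dd if=/dev/zero of=/tmp/ddtest bs=1M count=16 conv=fdatasync
ls -l /tmp/ddtest
```

Run it a couple of times and compare against the numbers in the sweclockers write-up before deciding whether the adapter is worth it.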

Saturday, November 9, 2013

My Puppet Module Writing Environment

I previously documented my Foreman/Puppet environment. In this entry I am documenting how I set things up so I could comfortably develop puppet modules inside of this environment. This includes writing a very simple module to make /etc/hosts the same on all of the hosts in my environment. This hosts module reinvents the wheel, as there are many more advanced Puppet Forge modules for doing this; I am only solving a simple problem in an environment that is exclusively my own. What I hope is more relevant in this entry is how the development tools were set up and the conventions being used.

Puppet Mode for Emacs

I plan to write my manifests locally in Emacs, so I will get puppet-mode for Emacs.


$ cd ~/elisp/
$ git clone http://github.com/puppetlabs/puppet-syntax-emacs.git
Cloning into 'puppet-syntax-emacs'...
remote: Counting objects: 63, done.
remote: Compressing objects: 100% (46/46), done.
remote: Total 63 (delta 20), reused 55 (delta 17)
Unpacking objects: 100% (63/63), done.
$ 
Add the following to ~/.emacs and evaluate the S-expressions:
(add-to-list 'load-path "~/elisp/puppet-syntax-emacs")
(require 'puppet-mode)
(add-to-list 'auto-mode-alist '("\\.pp\\'" . puppet-mode))


Set up versioning for a module

At work we use an authoritative (by convention) git depot. I also keep a personal authoritative git depot on a VPS on the Internet, so I will create a bare repository there; which depot you use is arbitrary.

ssh me.xen.prgmr.com
cd /opt/git/
mkdir hosts.git
cd hosts.git
git init --bare
logout
I will then clone this empty repository down into a place on my laptop to work:
$ cd ~/src/puppet
$ git clone me@me.xen.prgmr.com:/opt/git/hosts.git
Cloning into 'hosts'...
warning: You appear to have cloned an empty repository.
$ 
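The same bare-repo-then-clone cycle can be sketched entirely with local paths (a stand-in for the VPS; /tmp/depot and /tmp/work are arbitrary scratch directories):

```shell
# Create a bare "depot" repository, then clone it as a working copy,
# mirroring the ssh steps above but without needing a remote host.
mkdir -p /tmp/depot
git init --bare /tmp/depot/hosts.git
git clone /tmp/depot/hosts.git /tmp/work/hosts   # warns about cloning an empty repository
```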

Write a simple module to manage /etc/hosts

Inside the empty repository that I cloned I will create the module:

$ mkdir -p {files,templates,manifests}
$ touch manifests/init.pp templates/hosts.erb
Put the following in manifests/init.pp:
class hosts {
  file { '/etc/hosts':
    ensure  => file,
    owner   => 'root',
    group   => 'root',
    mode    => '0644',
    content => template('hosts/hosts.erb'),
  }
}
Put the following in templates/hosts.erb:
127.0.0.1 localhost.localdomain localhost
127.0.0.1 <%= @fqdn %> <%= @hostname %>
# continue with other hosts as you see fit
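To sanity-check what the template should render to, you can simulate the ERB expansion with sed (a rough stand-in only; the real rendering happens on the master using the agent's facts, and the fqdn/hostname values below are made up):

```shell
# Substitute example fact values into the template the way the master would.
FQDN=client.example.com
SHORT=client
sed -e "s/<%= @fqdn %>/$FQDN/" -e "s/<%= @hostname %>/$SHORT/" <<'EOF'
127.0.0.1 localhost.localdomain localhost
127.0.0.1 <%= @fqdn %> <%= @hostname %>
EOF
```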

Commit the changes and push them back to the depot:

$ cd ~/src/puppet/hosts
$ git add *
$ git status
# On branch master
#
# Initial commit
#
# Changes to be committed:
#   (use "git rm --cached ..." to unstage)
#
# new file:   templates/hosts.erb
# new file:   manifests/init.pp
#
$ git commit -m "first version of hosts"
[master (root-commit) 524416d] first version of hosts
 2 files changed, 23 insertions(+)
 create mode 100644 templates/hosts.erb
 create mode 100644 manifests/init.pp
$ 
$ git push origin master
Counting objects: 8, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (4/4), done.
Writing objects: 100% (8/8), 754 bytes, done.
Total 8 (delta 0), reused 0 (delta 0)
To me@me.xen.prgmr.com:/opt/git/hosts.git
 * [new branch]      master -> master
$ 


On the Puppet Server

A note about some conventions in my setup. My Puppet server has /etc/puppet/modules, but it's a symlink to environments/common, and I have used an ACL to give myself access to it so I can rapidly make changes without being root (this will come in handy later when I talk about speeding up the cycle):

[root@puppet puppet]# pwd
/etc/puppet
[root@puppet puppet]# ls -l modules
lrwxrwxrwx. 1 root root 19 Sep 24 02:10 modules -> environments/common
[root@puppet puppet]# cd environments
[root@puppet environments]# setfacl -m u:me:rwx common/
[root@puppet environments]# 

Import the new module into the puppet server from git:
[me@puppet ~]$ cd /etc/puppet/modules/
[me@puppet modules]$ git clone me@me.xen.prgmr.com:/opt/git/hosts.git 
Initialized empty Git repository in /etc/puppet/environments/common/hosts/.git/
remote: Counting objects: 8, done.
remote: Compressing objects: 100% (4/4), done.
Receiving objects: 100% (8/8), done.
remote: Total 8 (delta 0), reused 0 (delta 0)
[me@puppet modules]$ 

On Foreman

Login to the Foreman web interface and navigate to the puppet classes:
More > Configuration > Puppet Classes

Click "Import from puppet". You should then see the class name "hosts". Select it and then you should see something like the following:

Select the environments you want to import the new module into and then select "Update". Once the module is imported, select the servers you want to have the new module. For example, you can apply the new module to the puppet-agent client.


On a Puppet client
From the puppet-agent server you can test the new configuration and observe the change.


[root@puppet-agent ~]# md5sum /etc/hosts
b259bdfd326846029c25beff10fc5ac6  /etc/hosts
[root@puppet-agent ~]# puppet agent --test
Info: Retrieving plugin
Info: Caching catalog for puppet-agent.example.com
Info: Applying configuration version '1384016280'
Notice: Finished catalog run in 0.29 seconds
[root@puppet-agent ~]# md5sum /etc/hosts
fd34f5442b4209a472d6ea8ac8e24d77  /etc/hosts
[root@puppet-agent ~]# 

Speeding up the Cycle
I am used to having a git depot at work, so I've shown how to put one in the middle of the cycle, but doing a commit and update for each revision of code is not practical. To get around this I can write code on the puppet VM itself (I am the only user of this environment) but use a local editor on my laptop. I prefer to do this with Emacs TRAMP and have added the following to my .emacs:


(setenv "puppet" "/ssh:me@puppet.example.com:/etc/puppet/modules/")

These conventions could be applied when developing puppet modules in a development environment like the one described here and then applied to a production environment. For example, I could write a module in this development puppet environment and check it into the git depot at work as I reached the appropriate points in my development. When ready it could then get checked into the puppet modules directory on a production puppet server but in the staging or QA environment of that puppet server (see Chapter 3 of Pro Puppet) so it could be further tested before getting pushed into production. The assumed goal here is that you would want to develop a puppet module without needing to be tethered to the servers at work.

Update: a previous version of this post had a bug: /etc/hosts was in files and not a template. This would not work correctly, as there would be no reference to the loopback device relative to the hostname. The need for a variable here means you need a template, not a static file.


Sunday, November 3, 2013

Foreman Environment Inside VirtualBox

This entry describes how I set up an environment on my laptop to learn Foreman. There are other ways to do this, but this worked for me. My goal was to be able to use Foreman to provision VMs with no intervention beyond turning the VM on, using only VirtualBox as my hypervisor. I also wanted to be able to provision CentOS systems with my laptop not connected to the Internet (e.g. suppose I am on an airplane). As a result of these goals there are a variety of network tweaks, which required some reading of Chapter 6 of the VirtualBox manual.


Network Overview

Foreman provisions systems using PXE, which requires DHCP and TFTP. Foreman can take care of TFTP but in this case I am running a separate machine for DHCP which is also acting as a router for the VMs inside my environment. I am calling this machine netman (short for network manager). I am using a couple of NAT hacks to make this work for the situations I will encounter; e.g. if Foreman tells a VM to pull a repo from the Internet, assuming dom0 is on the network. I am also keeping a local CentOS mirror for when not on the Internet. Here is a diagram of the network ranging from my laptop's wifi connection down into VirtualBox and then into the VMs.


Here is a list of systems and their configurations.

1. dom0: this is my laptop running VirtualBox. It has two network devices:

* en0
** Wifi card which is DHCP'd some NAT'd address depending on my location
** let's assume 192.168.1.10x (RFC1918-C)

* vboxnet1
** This is how my laptop communicates with my VMs
** It is made by VirtualBox: VirtualBox > Preferences > Network > +
** In this case I specified 172.16.1.1/24 with no DHCP server  
** I am using a subset of RFC1918-B for this network

2. netman: this is my network manager VM. It has two devices:

* eth0:
** In vbox terms it is Adapter 1, running NAT to reach the Internet
** vbox DHCPs to eth0 from a subset of RFC1918-A with gateway 10.0.2.2 
** /etc/sysconfig/network-scripts/ifcfg-eth0 is configured for DHCP

* eth1:
** In vbox terms it is Adapter 2, running Host-only Networking
** The vbox name it uses is vboxnet1 
** /etc/sysconfig/network-scripts/ifcfg-eth1 is statically configured
*** 172.16.1.2 

netman acts as a router and DHCP server for hosts within 172.16.1.0/24. It uses iptables masquerading to NAT for those hosts so their Internet-bound packets can come in eth1 and go out eth0. It is true that eth0 was itself NAT'd by VirtualBox, and its traffic might then go out the wifi, which was probably NAT'd as well, so we have NAT hacks spanning the three regions of RFC1918 space: B > A > C. It is ugly, but within my VirtualBox environment for learning I found this sufficient to achieve my goals.


3. client: this could be any VM within my environment

* eth0:
** In vbox terms it is Adapter 1, running Host-only Networking
** The vbox name it uses is vboxnet1 
** /etc/sysconfig/network-scripts/ifcfg-eth0 is configured for DHCP
** IP Address is DHCP'd from within 172.16.1.0/24
** Gateway is DHCP'd to 172.16.1.2 (eth1 of netman)
** DNS handed out by DHCP server is google's 8.8.8.8, 8.8.4.4
** Static IPs are assigned by mac address
** dom0 can use its vboxnet1 device (172.16.1.1) to SSH to client


Configure dom0

Download and install VirtualBox. Virtual machines cannot PXE boot unless you install the extpack:
$ VBoxManage extpack install Oracle_VM_VirtualBox_Extension_Pack-4.2.18-88780.vbox-extpack 
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Successfully installed "Oracle VM VirtualBox Extension Pack".
$ 

Create the vboxnet1 network: VirtualBox > Preferences > Network > +

Configure the IP 172.16.1.1 and netmask 255.255.255.0 and leave DHCP disabled since netman will be the DHCP server.

Configure netman

I keep a minimally installed RHEL6 VM within VirtualBox to clone, so I cloned that into netman.

Add two network devices

Configure Adapter 1 for NAT:

Configure Adapter 2 for Host-only networking using vboxnet1:

Start netman in VirtualBox. As eth0 and eth1 try to come up you might see a message like:

Device eth0 does not seem to be present, delaying initialization
Edit /etc/sysconfig/network-scripts/ifcfg-eth{0,1} to set the correct MAC addresses and then run:
rm /etc/udev/rules.d/70-persistent-net.rules
udevadm trigger
Configure eth0 for DHCP:
[root@netman ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=dhcp
HWADDR=08:00:27:20:C2:8E
[root@netman ~]# 
Configure eth1 with a static address and to act as a router:
[root@netman ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth1
BOOTPROTO=none
PEERDNS=yes
HWADDR=08:00:27:62:E4:C1
TYPE=Ethernet
IPV6INIT=no
DEVICE=eth1
NETMASK=255.255.255.0    
BROADCAST=""
IPADDR=172.16.1.2     # Gateway of the LAN
NETWORK=172.16.1.0 
USERCTL=no
ONBOOT=yes
[root@netman ~]# 
Configure iptables so it will NAT
# Flush the firewall
iptables -F
iptables -t nat -F
iptables -t mangle -F
iptables -X
iptables -t nat -X
iptables -t mangle -X

# Set up IP FORWARDing and Masquerading
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
iptables -A FORWARD -i eth1 -j ACCEPT

# Enable packet forwarding by kernel 
# You should save this setting in /etc/sysctl.conf too
echo 1 > /proc/sys/net/ipv4/ip_forward

# Apply the configuration
service iptables save
service iptables restart
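The "save this setting in /etc/sysctl.conf" step from the comment above can be scripted. This sketch operates on a stand-in file so it's safe to dry-run; on the real netman you would point $CONF at /etc/sysctl.conf.

```shell
# Make ip_forward persistent: update the key in place if present, append otherwise.
CONF=/tmp/sysctl.conf.example          # stand-in for /etc/sysctl.conf
touch "$CONF"
if grep -q '^net\.ipv4\.ip_forward' "$CONF"; then
  sed -i 's/^net\.ipv4\.ip_forward.*/net.ipv4.ip_forward = 1/' "$CONF"
else
  echo 'net.ipv4.ip_forward = 1' >> "$CONF"
fi
grep 'net.ipv4.ip_forward' "$CONF"
```

After editing the real file, `sysctl -p` applies it without a reboot.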

Configure a DHCP server

This borrows from RedHat's guide.

Install the DHCP server:

yum install dhcp

Configure /etc/dhcp/dhcpd.conf with something like this
option domain-name "example.com";
option domain-name-servers 8.8.8.8, 8.8.4.4;

default-lease-time 600;
max-lease-time 7200;

log-facility local7;

subnet 172.16.1.0 netmask 255.255.255.0 {
  range 172.16.1.13 172.16.1.23;
  option routers 172.16.1.2;
}

host client {
  hardware ethernet 08:00:27:59:59:33;
  fixed-address 172.16.1.3;
}
A few things of note:
  • I'm using Google's Public DNS servers since it works well everywhere I have been and is sufficient for a test environment.
  • I have a range of IPs I will hand out between 172.16.1.13 and 172.16.1.23;
  • Hosts that I provision are also getting static assignments by MAC address; e.g. I have an entry for the MAC address of "client" which I will set up next.

Start dhcpd and configure it to start on boot.

/etc/init.d/dhcpd start
chkconfig dhcpd on

Configure a static CentOS Mirror

netman will serve this mirror over HTTP so install Apache.

yum install httpd
service httpd start
chkconfig httpd on

Next we will get content to serve the mirror.

  • Download a CentOS ISO file and store it somewhere on dom0.
  • Within VirtualBox's entry for netman, go to Settings > Storage and add a
    virtual IDE device which is the ISO file.
Next we will mount the ISO file under the web tree. To do this I made a directory:
mkdir -p /var/www/html/6.4/os/x86_64
and then put the following in my /etc/fstab:
/dev/cdrom1  /var/www/html/6.4/os/x86_64 iso9660 ro 0 0

After a "mount -a" I was able to point a browser at http://172.16.1.2/6.4/os/x86_64/ and see what I expected to see. The mirror of a static ISO might get out of date, but it will get a system booted when it doesn't have Internet access, and I can yum upgrade it later.


Configure Client

I am showing the network configuration as it would apply to a generic RHEL6 or CentOS6 box. Later we will let Foreman provision these configurations.

Set up a basic RHEL6 box (perhaps from a clone) and configure Adapter 1 as a Host-only Adapter using the vboxnet1 network. Boot the system and configure eth0 for DHCP:

[root@client ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0 
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=dhcp
HWADDR=08:00:27:59:59:33
[root@client ~]# 

Restart networking and "tail -f /var/log/messages" on netman to see your client boot and get an address from your DHCP server.

Oct 18 18:00:41 netman dhcpd: DHCPREQUEST for 172.16.1.3 from 08:00:27:59:59:33 via eth1
Oct 18 18:00:41 netman dhcpd: DHCPACK on 172.16.1.3 to 08:00:27:59:59:33 via eth1

If you can then SSH into the client and resolve Internet hosts from the client, then you have a working network configuration and can move on to configuring Foreman.


Set up some form of DNS (even a hack)


Until I set up a DNS server in my environment, the /etc/hosts file on all of my systems for this environment contains the following:
172.16.1.1   laptop laptop.example.com
172.16.1.2   netman netman.example.com
172.16.1.3   client client.example.com
172.16.1.4   puppet puppet.example.com
172.16.1.4   foreman foreman.example.com
172.16.1.5   unattended unattended.example.com
Note that we will configure 172.16.1.4 next and that it will run Puppet and Foreman. At the end we'll configure unattended, which will be an unattended install.

Build a host to run Puppet/Foreman

Next I will configure a server running Puppet and Foreman. I will start with a RHEL6 host configured on the network as described above for the generic client. I called my host puppet. Remember to create a DHCP entry for it in netman's dhcpd.conf:

host puppet {
  hardware ethernet 08:00:27:B9:B3:E8;  
  fixed-address 172.16.1.4;
}

If you booted this host and it got a DHCP'd address from the dynamic range, and you now want it to pick up the static address described above, you can revert the lease file:

cd /var/lib/dhcpd/
mv dhcpd.leases~ dhcpd.leases
service dhcpd restart

Once you have a booting vanilla RHEL6 box you can install Puppet and Foreman.

First install EPEL

wget https://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
yum localinstall epel-release-6-8.noarch.rpm

Install Puppet from their repos. To do this I added the following to /etc/yum.repos.d/puppet.repo

[Puppet]
name=Puppet
baseurl=http://yum.puppetlabs.com/el/6Server/products/x86_64/
gpgcheck=0

[Puppet_Deps]
name=Puppet Dependencies
baseurl=http://yum.puppetlabs.com/el/6Server/dependencies/x86_64/
gpgcheck=0
Install Puppet:
yum -y install puppet facter puppet-server

Then run the Foreman Installer

yum -y install http://yum.theforeman.org/releases/1.3/el6/x86_64/foreman-release.rpm
yum -y install foreman-installer
Answer yes to all defaults. Near the end you should see something like:
...
Notice: Finished catalog run in 294.38 seconds

 Okay, you're all set! Check
/usr/share/foreman-installer/foreman_installer/answers.yaml for your
config.

 You can apply it in the future as root with:
  echo include foreman_installer | puppet apply --modulepath
/usr/share/foreman-installer -v
# 

Once Foreman is running you should be able to login to its web interface at https://puppet.example.com. Read the documentation on what the default username/password is. Once you're in, configure Foreman with a smartproxy.

GUI > More > Configuration > Smart Proxies

Make sure the FQDN in /etc/hosts is consistent with the FQDN as defined in /etc/foreman-proxy/settings.yml.
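That consistency check can be scripted. The following is a hypothetical sketch against a stand-in file (the real file is /etc/foreman-proxy/settings.yml, and its exact key names vary by version):

```shell
# Compare the FQDN this host resolves to with what the proxy config references.
SETTINGS=/tmp/settings.yml.example     # stand-in for /etc/foreman-proxy/settings.yml
echo ':foreman_url: https://puppet.example.com' > "$SETTINGS"   # made-up sample content
FQDN=puppet.example.com                # on the real host: FQDN=$(hostname -f)
if grep -q "$FQDN" "$SETTINGS"; then
  echo "FQDN consistent"
else
  echo "FQDN mismatch: check /etc/hosts and settings.yml"
fi
```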


Configure a client to be managed by Puppet


As described in the client section above, set up a basic RHEL6 box (perhaps from a clone) and configure Adapter 1 as a Host-only Adapter using the vboxnet1 network. Take note of its MAC address so you can put an entry in netman's DHCP server. Boot the system and configure eth0 for DHCP.


First install EPEL

wget https://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
yum localinstall epel-release-6-8.noarch.rpm

Install Puppet from their repos. To do this I added the following
to /etc/yum.repos.d/puppet.repo

[Puppet]
name=Puppet
baseurl=http://yum.puppetlabs.com/el/6Server/products/x86_64/
gpgcheck=0

[Puppet_Deps]
name=Puppet Dependencies
baseurl=http://yum.puppetlabs.com/el/6Server/dependencies/x86_64/
gpgcheck=0

Install ruby, puppet, and facter:

yum install ruby ruby-libs puppet facter

Now that puppet (the agent) is installed, add "server=puppet.example.com" to /etc/puppet/puppet.conf. Then run "puppet agent --server=puppet.example.com --no-daemonize --verbose" and observe:

[root@puppet-agent ~]# puppet agent --server=puppet.example.com --no-daemonize --verbose
info: Creating a new SSL key for puppet-agent.example.com
info: Caching certificate for ca
info: Creating a new SSL certificate request for puppet-agent.example.com
info: Certificate Request fingerprint (md5): EA:5E:82:B8:A8:BA:8D:7E:5B:3D:20:2C:68:06:B0:04
...
notice: Did not receive certificate
...

The above sends a certificate signing request to the master. The agent will then check every two minutes to see whether a signed certificate is available.

Back on puppet.example.com, sign the certificate:

[root@puppet ~]# puppet cert --list
  "puppet-agent.example.com" (EA:5E:82:B8:A8:BA:8D:7E:5B:3D:20:2C:68:06:B0:04)
[root@puppet ~]# 

[root@puppet-master public_keys]# puppet cert --sign puppet-agent.example.com
notice: Signed certificate request for puppet-agent.example.com
notice: Removing file Puppet::SSL::CertificateRequest puppet-agent.example.com at '/var/lib/puppet/ssl/ca/requests/puppet-agent.example.com.pem'
[root@puppet-master public_keys]# 

Then on puppet-agent continue to observe the output of "puppet agent --server=puppet.example.com --no-daemonize --verbose". You should see news of the cert's acceptance:

...
info: Caching certificate for puppet-agent.example.com
notice: Starting Puppet client version 2.6.18
info: Caching certificate_revocation_list for ca
info: Caching catalog for puppet-agent.example.com
info: Applying configuration version '1369603392'
info: Creating state file /var/lib/puppet/state/state.yaml
notice: Finished catalog run in 0.01 seconds
^c

Now run the puppet agent as a daemon and configure it to run on boot:

[root@puppet-agent ~]# service puppet start
Starting puppet:                                           [  OK  ]
[root@puppet-agent ~]# chkconfig puppet on
[root@puppet-agent ~]#

You can see how your agent is doing anytime:

[root@puppet-agent ~]# puppet agent --test
Info: Retrieving plugin
Info: Caching catalog for puppet-agent.example.com
Info: Applying configuration version '1383503512'
Notice: Finished catalog run in 0.03 seconds
[root@puppet-agent ~]# 

Now that your host is in puppet, you should be able to see it in the Foreman web GUI under Hosts.


Install a module from the puppet forge and push it to puppet-agent


On the puppet master, install jeffmccune/motd.

[root@puppet ~]# cd /usr/share/puppet/modules
[root@puppet modules]# puppet module install jeffmccune/motd
Installed "jeffmccune-motd-1.0.3" into directory: motd
[root@puppet modules]# ls
motd
[root@puppet modules]# 
On the same puppet master, define $puppetserver and the node list in /etc/puppet/manifests/site.pp:

import 'nodes.pp'
$puppetserver = 'puppet.example.com'
Define the nodes in /etc/puppet/manifests/nodes.pp:
node 'puppet-agent.example.com' {
     include motd
}

In the above I am making a place for my puppet-agent node and asking that the motd module be applied to the agent.


On the agent, wait the default amount of time (30 minutes) or reload puppet.

[root@puppet-agent ~]# service puppet reload
Restarting puppet:                                         [  OK  ]
[root@puppet-agent ~]# tail -f /var/log/messages
May 26 19:50:24 puppet-agent puppet-agent[13924]: Restarting with '/usr/sbin/puppetd '
May 26 19:50:25 puppet-agent puppet-agent[14060]: Reopening log files
May 26 19:50:25 puppet-agent puppet-agent[14060]: Starting Puppet client version 2.6.18
May 26 19:50:28 puppet-agent puppet-agent[14060]: (/File[/etc/motd]/content) content changed
'{md5}d41d8cd98f00b204e9800998ecf8427e' to '{md5}1b48863bff7665a65dda7ac9f57a2e8c'
May 26 19:50:28 puppet-agent puppet-agent[14060]: Finished catalog run in 0.02 seconds

Now if I SSH into the agent, I see my new MOTD.

$ ssh puppet-agent -l root
Last login: Sun Nov  3 13:44:16 2013 from dom0
-------------------------------------------------
Welcome to the host named puppet-agent
RedHat 6.4 x86_64
-------------------------------------------------
Puppet: 3.3.0
Facter: 1.7.3

FQDN: puppet-agent.example.com
IP:   172.16.1.6

Processor: Intel(R) Core(TM) i7-3667U CPU @ 2.00GHz
Memory:    490.63 MB

-------------------------------------------------
[root@puppet-agent ~]$ 

So far we have verified that puppet-agent can be managed by Puppet directly after editing nodes.pp on the Puppet server. If we were to now tell Foreman to manage this host and apply the same motd puppet module, we'd see an error. For example, in the GUI select the puppet-agent host under Hosts, then under Edit select "Manage host" and:

Edit > Manage Host > Puppet Classes > motd > Submit
If you were to then re-run puppet you'd see:
[root@puppet-agent ~]# puppet agent --test
Info: Retrieving plugin
Error: Could not retrieve catalog from remote server: 
Error 400 on SERVER: Duplicate declaration: Class[Motd] 
is already declared; cannot redeclare on node puppet-agent.example.com
Warning: Not using cache on failed catalog
Error: Could not retrieve catalog; skipping run
[root@puppet-agent ~]#
My goal is to have Foreman manage my host, and Foreman maintains its list of hosts to manage via Puppet's ENC API. So rather than editing nodes.pp on my Puppet server directly to tell Puppet which modules to apply to my hosts, I will remove the entry for motd so that my nodes.pp looks like the following:
node 'puppet.example.com' {
}
node 'puppet-agent.example.com' {
}
and then re-run my puppet test:
[root@puppet-agent ~]# puppet agent --test
Info: Retrieving plugin
Info: Caching catalog for puppet-agent.example.com
Info: Applying configuration version '1383513022'
Notice: Finished catalog run in 0.03 seconds
[root@puppet-agent ~]# 
The above works because I had already told Foreman to apply the motd module.

I can test this further by installing the puppetlabs/ntp module.

[root@puppet ~]# cd /etc/puppet/modules
[root@puppet modules]# ls
[root@puppet modules]# puppet module install puppetlabs/ntp -i common
Notice: Preparing to install into /etc/puppet/modules/common ...
Notice: Created target directory /etc/puppet/modules/common
Notice: Downloading from https://forge.puppetlabs.com ...
Notice: Installing -- do not interrupt ...
/etc/puppet/modules/common
└─┬ puppetlabs-ntp (v2.0.0-rc1)
  └── puppetlabs-stdlib (v4.1.0)
[root@puppet modules]# 
From there I can have the Foreman GUI tell puppet-agent to use that module.
More > Configuration > Puppet Classes > Import from $host > Update

I now see my class name of "ntp" with 19 keys. If I click ntp I can override settings; e.g. add the address of my NTP server. To apply the module to a host I can use the Foreman GUI with:

Hosts > select your host > Edit > Puppet Classes
Then select one of the available classes, like ntp, make it an included class, and click Submit. After that you can have your host check in:
[root@puppet-agent ~]# puppet agent --test
Info: Retrieving plugin
Info: Caching catalog for puppet-agent.example.com
Info: Applying configuration version '1383513022'
Notice: /Stage[main]/Ntp/Service[ntpd]/ensure: ensure changed 'stopped' to 'running'
Info: /Stage[main]/Ntp/Service[ntpd]: Unscheduling refresh on Service[ntpd]
Notice: Finished catalog run in 1.47 seconds
[root@puppet-agent ~]# 

You can then view your host and see a report about it. If you click the yaml button, then you can see the config file that was applied to that host.


Configure a VM to be provisioned by Foreman


We'll now configure a VM which is defined in VirtualBox, given a DHCP reservation, and defined in Foreman, such that simply turning the VM on results in the VM PXE booting, having its OS installed, and ending up with a listening puppet agent that will apply whatever configurations you deem appropriate for the host.

Assuming you have the environment described above you should be able to set up something similar to what you see in Dominic Cleal's Foreman Quickstart: unattended installation screen cast.

Create a new virtual machine called "unattended" in virtualbox and configure Adapter 1 as a Host-only Adapter using the vboxnet1 network. Take note of its mac address so you can put an entry in netman's DHCP server. This host's DHCP entry should also contain the IP of the TFTP server as well as the file to request from the TFTP server that it can use to boot.

host unattended {
  hardware ethernet 08:00:27:3c:b0:a4;
  fixed-address 172.16.1.5;
  next-server 172.16.1.4;
  filename "pxelinux.0";
}
Next we have to define the host in Foreman so that Foreman can configure it. First we define the components Foreman needs for that host, starting with an operating system in the Foreman GUI:
More > Provisioning > Operating Systems 
Define a CentOS 6.4 OS. Then set up a Provisioning Template:
More > Provisioning > Provisioning Templates
Select "PXE Default" and edit the template to point to the local mirror we set up on netman's IP address:
"url --url http://172.16.1.2/6.4/os/x86_64/"
Looking over the template you will see that the default installs Puppet using EPEL. This is not as current as Puppet Labs' repository, but that can be fixed later. Under the Association tab, tick CentOS 6.4 and click Save.

Select "PXE Default PXELinux" and edit the Template so that the last five lines look as follows with a reference to the provisioning template on the Puppet server and click Save.

<% if @host.operatingsystem.name == "Fedora" and @host.operatingsystem.major.to_i > 16 -%>
append initrd=<%= @initrd %> ks=http://172.16.1.4/unattended/provision ks.device=bootif network ks.sendmac
<% else -%>
append initrd=<%= @initrd %> ks=http://172.16.1.4/unattended/provision ksdevice=bootif network kssendmac
<% end -%>
Under Association, associate this template with CentOS 6.4 and click Save.

Associate a partition table with your CentOS 6.4 OS by selecting:

More > Provisioning > Partition Tables
and then selecting RedHat as the Operating System Family and clicking Submit.

Define an operating system mirror by selecting:

More > Provisioning > Installation Media > CentOS Mirror
and then edit as follows:
Name: CentOS mirror
Path: http://172.16.1.2/$major.$minor/os/$arch/
Operating System Family: RedHat
Associate this mirror with the OS.
More > Provisioning > Operating Systems > CentOS 6.4
Tick the Architecture, Partition tables, and Installation Media. Then, under the Templates tab, set Provision to "Kickstart Default" and PXELinux to "Kickstart Default PXELinux" and click Submit.

Define a subnet to build the host within Foreman. In this case I am going to enter the criteria of vboxnet1. From the Foreman GUI:

More > Provisioning > Subnets > New Subnet
Then define the subnet as follows:
Name: quickstart
Network Address: 172.16.1.0
Netmask: 255.255.255.0
Domain: example.com
TFTP Proxy: puppet
Leave the other fields blank and click Submit.

Define the host itself within the Foreman GUI:

Hosts > New Host  
Then define the host as follows:
Host Tab:
   Name: unattended
   Deploy on: Bare Metal
   Environment: Production
   Puppet CA: puppet
   Puppet Master: puppet
Operating System:
   Architecture: x86_64 
   Operating System: CentOS 6.4
   Media: CentOS Mirror
   Partition Table: RedHat Default
You will also need to fill in the items under the Network tab based on the entries you made for DHCP above. Finally, click Submit to enter the host into Foreman's database and have Foreman contact the TFTP smart proxy, which will load the TFTP server with the images needed to build this host.

Next you can check that the TFTP server entries were created properly. On the puppet master, /var/lib/tftpboot/pxelinux.cfg should contain a file whose name is the same as the mac address of unattended.example.com and /var/lib/tftpboot/pxelinux.cfg/boot should contain CentOS-6.4-x86_64-initrd.img and CentOS-6.4-x86_64-vmlinuz, which can be used to boot.

Now that you have defined the host in Foreman, test the TFTP server.

[root@netman ~]# tftp puppet
tftp> get pxelinux.0
tftp> quit
[root@netman ~]# ls -lh pxelinux.0 
-rw-rw-r--. 1 root root 27K Oct 19 13:46 pxelinux.0
[root@netman ~]# 

You should then be able to turn the host on and watch it PXE boot.

If you have trouble getting the host to PXE boot, see the Unattended Provisioning Troubleshooting flow chart or the Unattended Provisioning wiki entry.


Update 11/16/13: I forgot to mention that you need to edit /etc/sudoers:

foreman-proxy ALL = NOPASSWD: /usr/bin/puppet cert *
Defaults:foreman-proxy !requiretty
and that /var/log/foreman-proxy/proxy.log is very helpful when debugging.

selinux authorized_keys context

IMHO, Googling "selinux authorized_keys context" produces too many results that lean on restorecon. To me it makes more sense to change the context directly:
chcon -t ssh_home_t .ssh/
chcon -t ssh_home_t .ssh/authorized_keys
If you want this context change to persist across future runs of restorecon, use semanage.
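A minimal semanage sketch (the /home/alice path is a placeholder; substitute the real home directory):

```
semanage fcontext -a -t ssh_home_t '/home/alice/\.ssh(/.*)?'
restorecon -Rv /home/alice/.ssh
```

The first command records the rule in the local policy, so the second (and any future) restorecon re-applies ssh_home_t instead of reverting it.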

Sunday, October 27, 2013

xterm gems link

I've been using xterm for 10 years but here are some features in it I was not aware of: Hidden gems of xterm. This is one of those bookmark posts for me.

Tuesday, September 10, 2013

Disable Zimbra Bubbles

Ever since Zimbra 7 there has been support for "bubbles". A bubble is an object that results when a user supplies a name or an email address. The UI converts the text into a bubble that you can then drag and drop:

The UI is supposed to parse the supplied text so that only the address portions are turned into bubbles. The problem is that if the text contains quotes, the parser gets confused. To disable this:

1. Login to webmail and then under the Preferences, tab select General and then look under Other. You should see something like the following:

2. Uncheck "Email Addresses" (display names etc)
3. Uncheck "Bubbles" (show email addresses in bubbles)
4. Click Save

Sunday, August 18, 2013

My first taste of Foreman

I installed The Foreman on one of my test systems to configure other nodes in my toy cluster. I used the following quick start video:


What follows are my notes on doing what's in the video above on a CentOS 6.4 box and following the documentation.

* EPEL

wget https://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
yum localinstall epel-release-6-8.noarch.rpm

* Puppet
Added the following to /etc/yum.repos.d/puppet.repo

[Puppet]
name=Puppet
baseurl=http://yum.puppetlabs.com/el/6Server/products/x86_64/
gpgcheck=0

[Puppet_Deps]
name=Puppet Dependencies
baseurl=http://yum.puppetlabs.com/el/6Server/dependencies/x86_64/
gpgcheck=0
Verified that puppet 3.2.x would be installed from Puppet's repo
yum install puppet
...
=================================================================================
 Package              Arch        Version                 Repository        Size
=================================================================================
Installing:
 facter               x86_64      1:1.7.2-1.el6           Puppet            83 k
 puppet               noarch      3.2.4-1.el6             Puppet           1.0 M
Installing for dependencies:
 augeas-libs          x86_64      0.9.0-4.el6             base             317 k
 hiera                noarch      1.2.1-1.el6             Puppet            21 k
 libselinux-ruby      x86_64      2.0.94-5.3.el6_4.1      updates           99 k
 ruby                 x86_64      1.8.7.352-12.el6_4      updates          534 k
 ruby-augeas          x86_64      0.4.1-1.el6             Puppet_Deps       21 k
 ruby-irb             x86_64      1.8.7.352-12.el6_4      updates          313 k
 ruby-libs            x86_64      1.8.7.352-12.el6_4      updates          1.6 M
 ruby-rdoc            x86_64      1.8.7.352-12.el6_4      updates          376 k
 ruby-rgen            noarch      0.6.5-1.el6             Puppet_Deps       87 k
 ruby-shadow          x86_64      1.4.1-13.el6            Puppet_Deps       11 k
 rubygem-json         x86_64      1.5.5-1.el6             Puppet_Deps      763 k
 rubygems             noarch      1.3.7-1.el6             base             206 k

Transaction Summary
=================================================================================

* Foreman Installer

yum -y install http://yum.theforeman.org/releases/1.1/el6/x86_64/foreman-release.rpm
yum -y install foreman-installer
Answer yes to all the defaults. The run ended with:
...
Notice: Finished catalog run in 294.38 seconds

 Okay, you're all set! Check
/usr/share/foreman-installer/foreman_installer/answers.yaml for your
config.

 You can apply it in the future as root with:
  echo include foreman_installer | puppet apply --modulepath
/usr/share/foreman-installer -v
# 
Update iptables:
# tail /etc/sysconfig/iptables
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 80 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 443 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 8140 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 8443 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
COMMIT
# 
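If those rules are missing on your system, something along these lines should add them (ports assumed from the listing above: 8140 for the puppet master, 8443 for the smart proxy); this is a sketch, not a tested recipe:

```
iptables -I INPUT -m state --state NEW -p tcp --dport 8140 -j ACCEPT
iptables -I INPUT -m state --state NEW -p tcp --dport 8443 -j ACCEPT
service iptables save
```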

* Configure Foreman with a smartproxy

GUI > Gear > Configuration > Smart Proxies
If you are doing this at home you might need to tweak your /etc/hosts to have the FQDN in your cert as defined in /etc/foreman-proxy/settings.yml.

* Configure your host with Puppet

puppet agent --test

Admin GUI > Hosts > See yourself

* Install a module from puppet forge
[root@james ~]# cd /usr/share/puppet/modules
[root@james modules]# ls
[root@james modules]# puppet module install puppetlabs/ntp -i common
Notice: Preparing to install into /etc/puppet/modules/common ...
Notice: Created target directory /etc/puppet/modules/common
Notice: Downloading from https://forge.puppetlabs.com ...
Notice: Installing -- do not interrupt ...
/etc/puppet/modules/common
└─┬ puppetlabs-ntp (v2.0.0-rc1)
  └── puppetlabs-stdlib (v4.1.0)
[root@james modules]# 

Admin GUI > Gear > Configuration > Puppet Classes > Import from $host > Update

I now see my class name of "ntp" with 19 keys. Clicking ntp lets me override settings; e.g., add the address of my NTP server.

* Apply the puppet module to my server

Admin GUI > Menu (lines) > Hosts > select your host > Edit > Puppet Classes

Then select one of the available classes, such as ntp, make it an included class, and click Submit. After that, have your host check in with "puppet agent --test".

You can then view your host and see a report about it. If you click the yaml button, then you can see the config file that was applied to that host.

I think my next step is to read about Provisioning with foreman.

Friday, August 16, 2013

Why a Zimbra zmmboxmove can result in seemingly missing folders

On Zimbra 7, if you use zmmboxmove to migrate an account between mail store servers, the command reports that the migration was a success, yet the folders under the Mail tab remain blank until about 30 minutes after the move. This is the case even with a test account containing only one email.

This is because the mailboxes are being reindexed. You can check the indexing status of an account, and you can also start or cancel it manually:

$ zmprov rim user@domain.com start
status: started

$ zmprov rim user@domain.com status
status: running
progress: numSucceeded=96, numFailed=0, numRemaining=623

$ zmprov rim user@domain.com cancel
status: cancelled
progress: numSucceeded=139, numFailed=0, numRemaining=580

Monday, July 29, 2013

My Thunderbird Extensions on Fedora 19

This is only for my notes.

I recently upgraded to Fedora 19 (XFCE Spin) and wanted to document what my Thunderbird Extensions were.

Saturday, June 29, 2013

hp psc1200 mac os x print black white

<not_technically_interesting>
Googling "hp psc1200 mac os x print black white" produced unsatisfactory results IMHO. This post aims to correct that:
System Preferences > Print & Scan > Options & Supplies > Driver > Installed Print Cartridge > Black


</not_technically_interesting>
<!-- hopefully future posts will be technically_interesting -->

Tuesday, June 25, 2013

Cisco EOL'd a screw

Fitting that it's a screw.

Sunday, May 26, 2013

RHEL6 Puppet install of master/agent

This is based on Chapter 1 of Pro Puppet by James Turnbull and Jeffrey McCune but for RHEL6.4. Also, I have SELinux running on both the master and the agent.


I. Install Packages

Master
yum install ruby ruby-libs
wget https://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
yum localinstall epel-release-6-8.noarch.rpm
yum -y install puppet facter puppet-server 
Agent
yum install ruby ruby-libs
wget https://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
yum localinstall epel-release-6-8.noarch.rpm
yum -y install puppet facter
Versions
Versions for the master as of May 26, 2013 are below. The agent will be the same except it won't have puppet-server.
[root@puppet-master ~]# rpm -qa | egrep "ruby|pup|fact"
libselinux-ruby-2.0.94-5.3.el6_4.1.x86_64
ruby-augeas-0.4.1-1.el6.x86_64
ruby-shadow-1.4.1-13.el6.x86_64
ruby-1.8.7.352-10.el6_4.x86_64
facter-1.6.18-3.el6.x86_64
puppet-server-2.6.18-3.el6.noarch
ruby-libs-1.8.7.352-10.el6_4.x86_64
puppet-2.6.18-3.el6.noarch
[root@puppet-master ~]# 

Update (6/14/2013): use Puppet Labs' own repo so you get newer packages.

/etc/yum.repos.d/puppet.repo

[Puppet]
name=Puppet
baseurl=http://yum.puppetlabs.com/el/6Server/products/x86_64/
gpgcheck=0

[Puppet_Deps]
name=Puppet Dependencies
baseurl=http://yum.puppetlabs.com/el/6Server/dependencies/x86_64/
gpgcheck=0


II. Initialize Services

Master

1. Added the following to /etc/puppet/puppet.conf:
[master]
    certname=puppet.example.com
2. Created empty file in /etc/puppet/manifests/site.pp
3. Opened only 8140 in iptables
4. Start puppetmaster and configure it for boot:
service puppetmaster start
chkconfig puppetmaster on
5. Observe certs in /var/lib/puppet/

Agent

1. Added "server=puppet.example.com" to /etc/puppet/puppet.conf
2. Run "puppet agent --server=puppet.example.com --no-daemonize --verbose" and observe:
[root@puppet-agent ~]# puppet agent --server=puppet.example.com --no-daemonize --verbose
info: Creating a new SSL key for puppet-agent.example.com
info: Caching certificate for ca
info: Creating a new SSL certificate request for puppet-agent.example.com
info: Certificate Request fingerprint (md5): EA:5E:82:B8:A8:BA:8D:7E:5B:3D:20:2C:68:06:B0:04
...
notice: Did not receive certificate
...
The above sends a certificate signing request to the master. The agent will check every two minutes to see whether the cert has been signed.

Master

Sign the certificate
[root@puppet-master public_keys]# puppet cert --list
  "puppet-agent.example.com" (EA:5E:82:B8:A8:BA:8D:7E:5B:3D:20:2C:68:06:B0:04)
[root@puppet-master public_keys]# 

[root@puppet-master public_keys]# puppet cert --sign puppet-agent.example.com
notice: Signed certificate request for puppet-agent.example.com
notice: Removing file Puppet::SSL::CertificateRequest puppet-agent.example.com at '/var/lib/puppet/ssl/ca/requests/puppet-agent.example.com.pem'
[root@puppet-master public_keys]# 

Agent

Continue to observe the output of "puppet agent --server=puppet.example.com --no-daemonize --verbose" and you should see news of the cert's acceptance:
...
info: Caching certificate for puppet-agent.example.com
notice: Starting Puppet client version 2.6.18
info: Caching certificate_revocation_list for ca
info: Caching catalog for puppet-agent.example.com
info: Applying configuration version '1369603392'
info: Creating state file /var/lib/puppet/state/state.yaml
notice: Finished catalog run in 0.01 seconds
^c
Now run the puppet agent as a daemon and configure it to run on boot:
[root@puppet-agent ~]# service puppet start
Starting puppet:                                           [  OK  ]
[root@puppet-agent ~]# chkconfig puppet on
[root@puppet-agent ~]#

III. Push a puppet forge module to the agent

In this example I will use jeffmccune/motd.

Master

Before I can install puppet-module I need the gem binary, which you can't get on RHEL6 without the optional channel.
[root@puppet-master manifests]# rhn-channel --list
rhel-x86_64-server-6
[root@puppet-master manifests]#
[root@puppet-master manifests]# rhn-channel -a --channel=rhel-x86_64-server-optional-6
Username: $username
Password: 
[root@puppet-master manifests]# rhn-channel --list
rhel-x86_64-server-6
rhel-x86_64-server-optional-6
[root@puppet-master manifests]# 
Install rubygems:
yum install rubygems
Install puppet-module
[root@puppet-master manifests]# gem install puppet-module
******************************************************************************

  Thank you for installing puppet-module from Puppet Labs!

  * Usage instructions: read "README.markdown" or run `puppet-module usage`
  * Changelog: read "CHANGES.markdown" or run `puppet-module changelog`
  * Puppet Forge: visit http://forge.puppetlabs.com/
  * If you don't have Puppet installed locally by your system package
    manager, please install it with:

        sudo gem install puppet


******************************************************************************
Successfully installed puppet-module-0.3.4
1 gem installed
Installing ri documentation for puppet-module-0.3.4...
Installing RDoc documentation for puppet-module-0.3.4...
Could not find main page README.rdoc
Could not find main page README.rdoc
Could not find main page README.rdoc
Could not find main page README.rdoc
[root@puppet-master manifests]# 

Puppet looks for modules in /etc/puppet/modules, so create that directory:

[root@puppet-master ~]# cd /etc/puppet/
[root@puppet-master puppet]# mkdir modules
[root@puppet-master puppet]# cd modules/
[root@puppet-master modules]# 
Install the jeffmccune/motd module from the puppet forge:
[root@puppet-master modules]# puppet module install jeffmccune/motd
Installed "jeffmccune-motd-1.0.3" into directory: motd
[root@puppet-master modules]# ls
motd
[root@puppet-master modules]# 
Define $puppetserver and node list in /etc/puppet/manifests/site.pp
import 'nodes.pp'
$puppetserver = 'puppet.example.com'
Define the nodes in /etc/puppet/manifests/nodes.pp
node 'puppet-agent.example.com' {
     include motd
}
In the above case I am making a place for my puppet-agent node and asking that the motd module be on the agent.

Agent

Wait the default amount of time (30 minutes) or reload puppet.

[root@puppet-agent ~]# service puppet reload
Restarting puppet:                                         [  OK  ]
[root@puppet-agent ~]# tail -f /var/log/messages
May 26 19:50:24 puppet-agent puppet-agent[13924]: Restarting with '/usr/sbin/puppetd '
May 26 19:50:25 puppet-agent puppet-agent[14060]: Reopening log files
May 26 19:50:25 puppet-agent puppet-agent[14060]: Starting Puppet client version 2.6.18
May 26 19:50:28 puppet-agent puppet-agent[14060]: (/File[/etc/motd]/content) content changed '{md5}d41d8cd98f00b204e9800998ecf8427e' to '{md5}1b48863bff7665a65dda7ac9f57a2e8c'
May 26 19:50:28 puppet-agent puppet-agent[14060]: Finished catalog run in 0.02 seconds
Now if I SSH into the agent, I see my new MOTD.
me@workstation:~$ ssh puppet-agent
me@puppet-agent's password: 
Last login: Sun May 26 19:07:17 2013 from 192.168.1.50
-------------------------------------------------
Welcome to the host named puppet-agent
RedHat 6.4 x86_64
-------------------------------------------------
Puppet: 2.6.18
Facter: 1.6.18

FQDN: puppet-agent.example.com
IP:   192.168.1.67

Processor: Intel(R) Xeon(R) CPU           E5462  @ 2.80GHz
Memory:    486.71 MB

-------------------------------------------------
[me@puppet-agent ~]$ 

todo: crowbar in virtualbox

Todo: Get Crowbar running in VirtualBox VMs

Review of Puppet NTP

Posting an example that helps me remember what I learned about Puppet a week ago: an NTP class, not yet converted into a module, from Learning — Modules and Classes (Part One). I borrowed the HTML/CSS from puppetlabs.com.

    class ntp {
      case $operatingsystem {
        centos, redhat: { 
          $service_name = 'ntpd'
          $conf_file    = 'ntp.conf.el'
        }
        debian, ubuntu: { 
          $service_name = 'ntp'
          $conf_file    = 'ntp.conf.debian'
        }
      }
      
      package { 'ntp':
        ensure => installed,
      }
      
      service { 'ntp':
        name      => $service_name,
        ensure    => running,
        enable    => true,
        subscribe => File['ntp.conf'],
      }
      
      file { 'ntp.conf':
        path    => '/etc/ntp.conf',
        ensure  => file,
        require => Package['ntp'],
        source  => "/root/learning-manifests/${conf_file}",
      }
    }
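To exercise the class, it can be declared for a node in site.pp, along these lines (node name hypothetical):

```
node 'agent.example.com' {
  include ntp
}
```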

Sunday, May 19, 2013

Learning Puppet: Modules & Classes

I am having a good time learning puppet and have finished up to Modules & Classes for today. One thing we need where I work is a way to ensure that PHP is set up consistently, and thias/php might do the trick. Checking it out...
[root@learn ~]# puppet module install thias-php
Preparing to install into /etc/puppetlabs/puppet/modules ...
Downloading from http://forge.puppetlabs.com ...
Installing -- do not interrupt ...
/etc/puppetlabs/puppet/modules
└── thias-php (v0.2.5)
[root@learn ~]# 

Tuesday, May 14, 2013

USB to RS-232

Today I received my USB to Serial Converter for connecting my laptop to an RS-232 serial device. Cisco has a cheat sheet for using screen to connect to their gear.

Sunday, May 12, 2013

Personal OpenStack via Crowbar

Crowbar was made to easily deploy OpenStack. Here are two 10 minute videos showing how to install Crowbar:

and then install OpenStack on top of it.

The narrator, Rob Hirschfeld, does it all on one physical box using a couple of VMs.

Saturday, May 11, 2013

toy cluster

I built a quick lab at home to learn some new cluster tools. I got four Dell OptiPlex 755s (Intel Core 2 Duo, 3G RAM, 50G disk) named James, Lars, Kirk, and Cliff. They are connected with a Cisco 806. As previously documented I am using minicom to reach the Cisco. My next move is to uplink the Cisco to my WRT54G. Time for a crash course in IOS.

Router>show version
Cisco Internetwork Operating System Software 
IOS (tm) C806 Software (C806-K9OSY6-M), Version 12.3(26), RELEASE SOFTWARE (fc2)
Technical Support: http://www.cisco.com/techsupport
Copyright (c) 1986-2008 by cisco Systems, Inc.
Compiled Mon 17-Mar-08 20:31 by dchih

ROM: System Bootstrap, Version 12.2(1r)XE2, RELEASE SOFTWARE (fc1)

Router uptime is 3 hours, 35 minutes
System returned to ROM by power-on
System image file is "flash:c806-k9osy6-mz.123-26.bin"
...
CISCO C806 (MPC855T) processor (revision 0x301) with 30720K/2048K bytes of memory.
Processor board ID JAD05510XLM (908819059), with hardware revision 0000
CPU rev number 5
Bridging software.
2 Ethernet/IEEE 802.3 interface(s)
128K bytes of non-volatile configuration memory.
16384K bytes of processor board System flash (Read/Write)
2048K bytes of processor board Web flash (Read/Write)

Configuration register is 0x2102

Router>


Tuesday, April 16, 2013

Good old tape?

The need for large storage that can be very slow is becoming clearer at work. If that data must be kept on site then getting 150T usable (in R1-like config) in 5U via a TS3310 is appealing. So is being able to grow to 2.5 PB in a 41U configuration.

Thursday, March 21, 2013

Cisco UCS lsmod on VIC

A UCS blade's most popular daughter card is a VIC, which can provide both Fibre Channel and Ethernet connectivity. Support for this hardware is in a vanilla RHEL6 Linux kernel; a screen cap of running lsmod and modinfo on a machine with this hardware is below. This hardware allows you to present virtual Ethernet NICs or HBAs directly to a virtual machine on RHEV. Cisco's name for this is VM-FEX, and Red Hat has benchmarks on VM-FEX and KVM performance.



Update: Cisco has documentation on Configuring VM-FEX for KVM.

Tuesday, March 19, 2013

Tried UCS in a Lab

I got to try Cisco UCS in a lab environment yesterday. So far I like it more than my Dells. You can SSH into either fabric interconnect and find NX-OS running:

and use NX-OS to display information about your servers:

The GUI lets you copy information about any object including its XML:

Tuesday, March 12, 2013

querying sip SRV records

box:~ dude$ nslookup
> set q=srv
> _sip._udp.colab.sipfoundry.com
Server:  141.133.112.120
Address: 141.133.112.120#53

Non-authoritative answer:
*** Can't find _sip._udp.colab.sipfoundry.com: No answer

Authoritative answers can be found from:
_sip._udp.colab.sipfoundry.com
 origin = ns1.sedoparking.com
 mail addr = hostmaster.sedo.de
 serial = 2007021501
 refresh = 86400
 retry = 10800
 expire = 604800
 minimum = 86400
> 
> _sip._udp.sip.redhat.com
Server:  141.133.112.120
Address: 141.133.112.120#53

Non-authoritative answer:
_sip._udp.sip.redhat.com service = 3 0 5060 sipx-sbc03.geo.redhat.com.
_sip._udp.sip.redhat.com service = 4 0 5060 sipx-sbc04.geo.redhat.com.
_sip._udp.sip.redhat.com service = 1 0 5060 sipx-sbc01.geo.redhat.com.
_sip._udp.sip.redhat.com service = 1 0 5060 sipx-sbc02.geo.redhat.com.
_sip._udp.sip.redhat.com service = 2 0 5060 sipx-sbc03.geo.redhat.com.

Authoritative answers can be found from:
> _sip._udp.sbc.lafayette.edu
Server:  141.133.112.120
Address: 141.133.112.120#53

** server can't find _sip._udp.sbc.lafayette.edu: NXDOMAIN
> ^D 
box:~ dude$ 
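The same query is terser with dig (a sketch; the answer depends on live DNS, so output will vary):

```
dig +short SRV _sip._udp.sip.redhat.com
```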

Thursday, February 14, 2013

glusterFS

I am thinking more about using GlusterFS with commodity hardware (e.g. Infortrend or SuperMicro) to bring slower but larger disk to my users. One thing that comes to mind is mounting a large Zimbra HSM volume to the mail servers and increasing everyone's mail quota dramatically.

Saturday, February 2, 2013

Beamer

Todo: Try Beamer for my next presentation.

Thursday, January 31, 2013

Novell

My institution recently completed a project to replace Novell with Samba. So we just don't renew our Novell contract right? Wrong. I now have to speak with someone in Novell about conducting an Exit Audit. I wonder what that will entail and what we really have to do for them.

Update: They said they wouldn't be auditing us.