Camshaft timing fail

My car broke down a week or two ago while I was driving on the highway. The blue dragon (as I call her) had been running a bit rough and missing every now and again – I had originally thought it might be water in the fuel. But when it broke down, fuel was coming out of the intake manifold and into the air cleaner, which sounded like a timing issue. After about a week of after-work effort and a few bloody knuckles, look what I found:

About 3/4 of the teeth on the camshaft’s timing gear are missing! This is an “Iron Duke” engine from late 1987; the cam and crank timing gears mesh directly, with no chain or belt between them.

When I got my puller on the gear and pulled, this happened. That’s OK – I’ve decided I’m just going to Dremel the rest of the cog off of the camshaft.

The gear was really chewed up.

Catastrophic failure.

Using [this .config file] with the git repository located [here], you can build your own pandaboard kernel.

Simply run "git clone" on the repo URL posted above, copy my config file into the checkout and rename it to .config (add a dot to the front of the filename), then run "make oldconfig" to update any options that may have changed since I posted it. Once that's done, a "make uImage && make modules && make modules_install" should finish all the compiling. Then move arch/arm/boot/uImage from the root of the git tree (where you just ran make) to /boot (which for me is the first partition on the SD card from which the pandaboard boots).
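
Collected into shell commands, the whole thing looks roughly like this – a sketch assuming you're building natively on the board (if you cross-compile instead, prefix the make calls with ARCH=arm and a suitable CROSS_COMPILE=), with the repo URL and config filename standing in for the links above:

git clone <repo-url> kernel && cd kernel
cp /path/to/my-config .config       # note the leading dot
make oldconfig                      # answer the prompts for any new options
make uImage && make modules && make modules_install   # uImage needs mkimage from u-boot
cp arch/arm/boot/uImage /boot/      # /boot = first partition of the SD card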

dhcp2static

I wrote this bash script to run after a kickstart and convert the network information received via DHCP into a static network configuration. It should work on Fedora, RHEL, PUIAS, CentOS, etc. If your distro is totally different, you can expand upon this script fairly easily by writing 7 functions and grepping for your distro in /etc/issue or what have you. If anyone out there ever does expand upon it and add support for their distro, I’ll be happy to update my copy.

Download: [As always, hope it’s useful].
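
To give a feel for the approach without the download, here’s a minimal sketch of the idea for a RHEL-style distro. The interface name, the tools (ip and RHEL’s ipcalc), and the file paths are all assumptions, and the real script handles much more:

#!/bin/bash
# Sketch: freeze the current DHCP lease into a static ifcfg file.
IFACE=eth0
CIDR=$(ip -4 addr show ${IFACE} | awk '/inet /{print $2; exit}')   # e.g. 192.168.1.50/24
GATEWAY=$(ip route | awk '/^default/{print $3; exit}')
NETMASK=$(ipcalc -m ${CIDR} | cut -d= -f2)                         # RHEL ipcalc prints NETMASK=x.x.x.x
cat > /etc/sysconfig/network-scripts/ifcfg-${IFACE} <<EOF
DEVICE=${IFACE}
BOOTPROTO=static
ONBOOT=yes
IPADDR=${CIDR%/*}
NETMASK=${NETMASK}
GATEWAY=${GATEWAY}
EOF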

In switching all of my repositories to a local in-house RPM repository, I needed to figure out which packages were installed on machines in the network that did not come from the mirror to which I synchronize. These packages needed to be added manually into the repository. I wrote this little one-liner to list packages that did not come from the official RHEL repo, and therefore have a different vendor.

for i in `rpm -aq`; do if [ "`rpm -qi ${i} | grep 'Vendor: Red Hat, Inc.'`" == "" ]; then echo $i; fi; done

You could use this easily on a non-RHEL repo, as long as your system uses RPM packages. Just update the vendor name in the middle there and you’re all set.
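
The loop above forks rpm once per package, so it can be slow on a big system. If you prefer, the same idea in a single rpm invocation using a query format (equivalent output, just faster):

rpm -qa --queryformat '%{VENDOR}\t%{NAME}-%{VERSION}-%{RELEASE}\n' | grep -v '^Red Hat, Inc.' | cut -f2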

In our environment, we make decisions about the importance of hosts based on their specific role. Puppet is great and has gone a long way towards simplifying our standard build. Finding out it has built-in types for working with nagios was exciting, but the problem became: how can we have our hosts automatically added into nagios through puppet, yet easily augment those additions with human-decided service levels? Leveling is the technique we use to determine during which hours of the day we’re alerted to problems, and through which medium. For example, level1 means “any hour, day or night, via both email and text message,” and level5 means “email me during work hours only.”
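
On the nagios side, these levels are just ordinary templates that host and service definitions inherit from via "use" (hosts get analogous host templates). As a hypothetical sketch – the timeperiod and contact group names here are made up, not our actual config:

# level1: page and email around the clock (register 0 marks this as a template)
define service {
        name                    level1
        register                0
        notification_period     24x7
        contact_groups          admins-email,admins-pager
}

# level5: email only, during work hours
define service {
        name                    level5
        register                0
        notification_period     workhours
        contact_groups          admins-email
}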

The initial solution, unsurprisingly, came from puppet exported resources. You can read more about them [here]. Briefly, putting a @ character in front of an object definition makes it virtual. That is, the object will exist, but will not get sent to the client. Putting two @ characters in front of a definition also exports the virtual resource, making it available to other hosts. A bit of initial work is needed to enable exported resources: mysql (or some other database) must be set up for storeconfigs. So, to do that:

[root@puppet ~]# cat /etc/puppet/puppet.conf
[main]
logdir = /var/log/puppet
rundir = /var/run/puppet
ssldir = $vardir/ssl

[puppetmasterd]
storeconfigs = true
dbadapter = mysql
dbuser = puppet
dbpassword = whatever
dbserver = localhost
dbsocket = /var/lib/mysql/mysql.sock
downcasefacts = true
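
The puppet database itself also has to exist, with the puppet user granted rights on it to match the settings above (you’ll also want the ruby mysql bindings on the master). Something along these lines, the password obviously being whatever you put in puppet.conf:

mysql -u root -p -e "CREATE DATABASE puppet; GRANT ALL PRIVILEGES ON puppet.* TO 'puppet'@'localhost' IDENTIFIED BY 'whatever';"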

Now that that’s out of the way, let’s see how we create nagios hosts for each puppet client. In our environment, my base standard-config class is called every-server. So, in everyserver.pp, we find this:

@@nagios_host { "${fqdn}":
    ensure  => present,
    alias   => "${hostname}",
    address => "${ipaddress}",
    use     => "level5",
    target  => "/etc/nagios/dynamic.cfg";
}

@@nagios_service { "${hostname}_check_ping":
    ensure                 => present,
    host_name              => "${fqdn}",
    notification_interval  => 60,
    flap_detection_enabled => 0,
    service_description    => "Ping",
    check_command          => "check_ping!300.0,20%!500.0,60%",
    use                    => "level5",
    target                 => "/etc/nagios/dynamic.cfg";
}

@@nagios_service { "${hostname}_check_ssh":
    ensure                 => present,
    host_name              => "${fqdn}",
    notification_interval  => 60,
    flap_detection_enabled => 0,
    service_description    => "SSH",
    check_command          => "check_ssh",
    use                    => "level5",
    target                 => "/etc/nagios/dynamic.cfg";
}

So what does this do? Every client that connects to puppet creates these objects. However, they do not end up on the client in /etc/nagios/dynamic.cfg, because the @@ characters mark each resource as virtual and exported. So basically, each host creates its own nagios host object and two nagios service objects. As you can see, every host and service defaults to level5, the least important level. Now the question becomes: how do we get these objects into nagios, and more specifically, how do we override the use parameter (the level) before writing out these objects?

In my nagiosmonitor.pp file, I have this:

class nagios-monitor inherits every-server {
    # If the dynamic file changes, restart nagios so it picks up the new definitions...
    file {
        "/etc/nagios/dynamic.cfg":
            mode   => 644,
            notify => Service["nagios"];
    }

    # Collect/overwrite the service level used by a specific host and service...
    Nagios_service <<| title == "hostname1_check_ssh" |>> {
        use => "level3"
    }

    # Collect everything else, which will be staying at the default level5...
    Nagios_host <<||>>
    Nagios_service <<||>>
}

This looks a lot more complex than it really is. The <<| |>> bit is the collection operator, which turns virtual resources into real resources that will now be sent to the host. Since this appears only in the nagios-monitor manifest, only nagios hosts will receive the file /etc/nagios/dynamic.cfg, which is what we want. Also, since nagios-monitor inherits every-server, the nagios server creates nagios_host and nagios_service objects for monitoring itself. Between the pipes of the <<| |>> operator, one can specify attributes to select which resources to realize; after the <<| |>>, one can add curly braces to override any of the parameters in the resource. So, first, I realize one specific service, whose title (as set in everyserver.pp) is the hostname plus the check name. When I realize the check_ssh service on host hostname1, I bump its level from level5 to level3, making it a more important service in a simple one-liner. This can be repeated as many times as you wish, and it is fairly easy to augment and maintain since it is so compact.
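
The same trick works on host objects, or on several resources at once. For example, to bump an entire host before collecting it (the hostname here is made up):

# Make one host's nagios_host object level1 instead of level5:
Nagios_host <<| title == "hostname2.example.com" |>> {
    use => "level1"
}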

What follows this overwrite section is the collection of all nagios hosts and services for which we are not overriding any attributes, which is why the area between the pipes is empty. The result of these manifests is a file on the nagios servers named /etc/nagios/dynamic.cfg which contains host definitions for every host currently in puppet, plus ping and SSH checks for all of those servers. The checks are all at level5, except for the check_ssh check running against hostname1, which will be at level3.