cpan[5]> o conf urllist
    urllist
        [ftp://archive.progeny.com/CPAN/]
        [ftp://carroll.cac.psu.edu/pub/CPAN/]
        [ftp://cpan.calvin.edu/pub/CPAN]
        [ftp://cpan.cs.utah.edu/pub/CPAN/]
        [ftp://cpan.mirrors.redwire.net/pub/CPAN/]
Type 'o conf' to view all configuration items
cpan[6]> o conf urllist shift
cpan[7]> o conf urllist
    urllist
        [ftp://carroll.cac.psu.edu/pub/CPAN/]
        [ftp://cpan.calvin.edu/pub/CPAN]
        [ftp://cpan.cs.utah.edu/pub/CPAN/]
        [ftp://cpan.mirrors.redwire.net/pub/CPAN/]
Type 'o conf' to view all configuration items
cpan[8]> o conf commit
commit: wrote '/usr/lib/perl5/5.8.5/CPAN/Config.pm'
cpan[9]>

You can also "o conf urllist push ftp://..." to add URLs.
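For example, to append a mirror and save the change (the URL below is just a placeholder, not a recommendation):

cpan> o conf urllist push ftp://ftp.example.org/pub/CPAN/
cpan> o conf commit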
Friday, November 30, 2007
cpan urllist
qmail
Thursday, November 29, 2007
Disk I/O
dd if=/dev/emcpowerb of=/dev/null bs=512 count=100000000000 &

In a multipath setup iostat might look like:
avg-cpu:  %user   %nice    %sys %iowait   %idle
           0.01    0.00    0.02    0.01   99.96

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
...
sdb               0.00         0.00         0.00       2496          0
sdc               0.01         2.52         0.00    2583090        496
sdd               0.00         0.00         0.00       2496          0
sde               0.01         2.52         0.00    2581664        952
...
emcpowerb         0.02         5.02         0.00    5150338       1448

Note that emcpowerb's numbers are the totals of sdc and sde combined. You can use several dd commands like the one above to drive the I/O load up (a rough sketch of doing this is at the end of this post). Look at all the time spent in wa (iowait: the amount of time the CPU has been waiting for I/O to complete):
top - 22:27:44 up 12 days,  3:26,  3 users,  load average: 7.05, 2.78, 1.04
Tasks: 124 total,   1 running, 123 sleeping,   0 stopped,   0 zombie
Cpu0 :  1.4% us,  4.8% sy,  0.0% ni,  0.0% id, 93.9% wa,  0.0% hi,  0.0% si
Cpu1 :  1.7% us,  7.1% sy,  0.0% ni,  0.0% id, 91.2% wa,  0.0% hi,  0.0% si
Cpu2 :  0.0% us,  0.3% sy,  0.0% ni, 81.8% id, 17.9% wa,  0.0% hi,  0.0% si
Cpu3 :  0.3% us,  3.4% sy,  0.0% ni,  6.1% id, 90.1% wa,  0.0% hi,  0.0% si
Mem:   8310532k total,  1028576k used,  7281956k free,   767020k buffers
Swap:  2031608k total,        0k used,  2031608k free,   128680k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 2400 root      18   0  4836  420  356 D    2  0.0   0:00.83 dd
 2352 root      18   0  5364  420  356 D    2  0.0   0:06.15 dd
 2358 root      18   0  5036  420  356 D    2  0.0   0:04.05 dd
 2359 root      18   0  5204  420  356 D    2  0.0   0:04.06 dd
 2402 root      18   0  4492  420  356 D    2  0.0   0:00.86 dd
 2345 root      18   0  4112  420  356 D    2  0.0   0:09.21 dd
 2348 root      18   0  3884  416  356 D    2  0.0   0:06.40 dd
 2401 root      18   0  5276  420  356 D    2  0.0   0:00.81 dd
 2353 root      18   0  4348  420  356 D    1  0.0   0:06.09 dd

and see how your multipath device handles it (note the 5 and 3 queued I/Os):
# powermt display dev=emcpowerb | egrep "sdc|sde"
   1 qla2xxx                  sdc       SP A1     active  alive       5      0
   2 qla2xxx                  sde       SP A0     active  alive       3      0
# iostat
...
avg-cpu:  %user   %nice    %sys %iowait   %idle
           0.01    0.00    0.02    0.04   99.93

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
...
sdb               0.00         0.00         0.00       2496          0
sdc               0.08        17.64         0.00   18117818        496
sdd               0.00         0.00         0.00       2496          0
sde               0.08        17.26         0.00   17725688        952
...
emcpowerb         0.15        34.89         0.00   35829090       1448

Note that there are obviously more reads. Note also how you can get similar stats directly from /proc/diskstats:
# cat /proc/diskstats | egrep "emc|sdc|sde"
   8   32 sdc 121192 15 26827106 2477633 51 0 496 467 7 641017 2478212
   8   64 sde 120903 15 26285720 2529294 45 0 952 472 2 653448 2529891
 120   16 emcpowerb 240323 6396919 53098682 5097396 96 85 1448 1013 10 .. ..
                    ~~~~~~ ~~~~~~~ ~~~~~~~~ ~~~~~~~ ~~ ~~ ~~~~ ~~~~ ~~
                      1       2        3       4    5  6   7    8  *9*

Note the 9th column (# of I/Os currently in progress), which is the same queued I/O that "powermt display dev=emcpowerb" displayed. If I stop the load test (killall dd) you can see the queued I/O drop:
# cat /proc/diskstats | egrep "emc|sdc|sde"
   8   32 sdc 160459 15 35087122 3570191 51 0 496 467 0 879727 3570647
   8   64 sde 160766 15 34445608 3638404 45 0 952 472 0 892158 3638873
 120   16 emcpowerb 319453 8370224 69518314 7328191 96 85 1448 1013 0 .. ..

Here's an easy way to focus on the 9th column:
cat /proc/diskstats | grep sdc | awk '{print $12}'
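For reference, here's a rough sketch of the whole load test in one script (the device name comes from above; the choice of eight dd readers and the one-second watch interval are arbitrary):

#!/bin/sh
# Rough I/O load-test sketch: start several background dd readers against the
# multipath device, watch the "I/Os currently in progress" column, then clean up.
DEV=/dev/emcpowerb        # the multipath device from above

for i in 1 2 3 4 5 6 7 8; do
    dd if=$DEV of=/dev/null bs=512 &
done

# Column 12 of /proc/diskstats (the 9th stat field) is I/Os currently in progress.
# watch blocks here; quit it with Ctrl-C when you've seen enough.
watch -n1 "egrep 'emc|sdc|sde' /proc/diskstats | awk '{print \$3, \$12}'"

# Stop the load.
killall dd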
Wednesday, November 28, 2007
Tuesday, November 27, 2007
dd-wrt
Monday, November 26, 2007
dotlockfile
- In terminal 1, run the shell script below
- In terminal 2, run the same script (within 5 seconds)
- In terminal 3, view the PIDs with "cat /tmp/lock_test"
Example Shell Script:
#!/bin/sh
# --------------------------------------------------------
# This program uses dotlockfile(1) to assure that no other
# instances of itself will run. Only useful as an example.
# It works because other instances will also try to create
# a lockfile of the same name and will find that the file
# already exists. It only locks a resource used by the same
# program. I.e. another program could choose to ignore the
# lock file.
# --------------------------------------------------------

dotlockfile -p -r2 /tmp/lock_test || exit 1;  # lock this instance; give up if the lock can't be had

TIME=5;                                       # do something with the resource (just sleeps)
echo "Sleeping for $TIME";
echo "I.e. no other instances of me will run for $TIME";
sleep $TIME;

echo "Done, about to unlock for other instances";
dotlockfile -u /tmp/lock_test;                # unlock this instance
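If you save the script as, say, lock_test.sh (the name is made up), you can see the same behavior from a single shell:

# Start two copies a second apart; the second one's dotlockfile call waits
# until the first instance unlocks (or its retries run out).
./lock_test.sh &
sleep 1
./lock_test.sh &

cat /tmp/lock_test   # shows the PID written by whichever instance holds the lock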
Note that this is just a way to create lock files as part of locking a process rather than a regular file, since "the resource to be controlled is not a regular file at all, so using methods for locking files does not apply".
Moodle advises having cron do this while mirroring. I'm using it because I've got a cron job that's still running when another instance of it starts.
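Here's roughly how that looks as a cron wrapper (the lock path, wrapper name, and job script below are made up for the example):

#!/bin/sh
# Hypothetical wrapper so cron never starts a second copy of a still-running job.
# crontab entry (example): */15 * * * * /usr/local/bin/nightly_wrapper.sh
LOCK=/var/tmp/nightly_job.lock        # assumed lock path
JOB=/usr/local/bin/nightly_job.sh     # assumed long-running job

dotlockfile -p -r 2 "$LOCK" || exit 1   # previous run still holds the lock; skip this run
$JOB
dotlockfile -u "$LOCK"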
To install dotlockfile on RedHat you can get an RPM:
$ rpm -qlp dotlockfile-1.06.1-1mdv2007.0.i586.rpm 2> /dev/null
/usr/bin/dotlockfile
/usr/share/man/man1/dotlockfile.1.bz2
$

I.e. I couldn't easily find it on RHN. It seems to be installed by default on Ubuntu, but if you don't have it, it is available in Ubuntu's liblockfile1 package.
find old files
find $BDIR -name \*.gz -ctime +$DAYS -exec rm '{}' \;

My backup script wasn't cleaning things up correctly. The above did the trick. The find command is worth noting: -ctime +$DAYS matches files whose status changed more than $DAYS days ago, and -exec rm removes each match.
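In context, the cleanup step looks something like this (the directory and retention values are placeholders; the -print pass is just a sanity check before deleting anything):

#!/bin/sh
# Cleanup step from a backup script (sketch; BDIR and DAYS values are made up).
BDIR=/backup/dumps
DAYS=14

# Dry run: list the .gz files whose status changed more than $DAYS days ago.
find $BDIR -name \*.gz -ctime +$DAYS -print

# Then remove them for real.
find $BDIR -name \*.gz -ctime +$DAYS -exec rm '{}' \;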
Wednesday, November 21, 2007
Cisco IOS CLI
Monday, November 19, 2007
nfslock vs imaps

# egrep "^10:" 2007-11-19.log | grep -i imap | head -1
10:06:32.88 1 IMAP failed to start listener on [123.456.78.9:993]. Error Code=network address (port) is already in use
#

I hadn't expected another service to grab that port, but there it was:
# netstat -tulpn | grep 993
tcp        0      0 0.0.0.0:993       0.0.0.0:*       LISTEN      5899/rpc.statd
#

From man rpc.statd: "The rpc.statd server implements the NSM (Network Status Monitor) RPC protocol... used by the NFS file locking service, rpc.lockd, to implement lock recovery when the NFS server machine crashes and reboots." This server used to use NFS but doesn't anymore, so I stopped the service:
service nfslock stop
chkconfig nfslock off

and made sure that last chkconfig would prevent it from coming back up:
# chkconfig --list | grep nfslock
nfslock         0:off   1:off   2:off   3:off   4:off   5:off   6:off
#

While investigating this I saw that others had seen rpc.statd running on various ports. The man page said that "rpc.statd will ask portmap(8) to assign it a port number. As of this writing, there is not a standard port number that portmap always or usually assigns. Specifying a port may be useful when implementing a firewall" (thus the -p option). I find it odd that it just happened to grab a port that my server needed.
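If a machine actually does need nfslock, the -p option mentioned above is the way to keep rpc.statd off ports you care about. A rough sketch (the port is arbitrary, and STATD_PORT being honored via /etc/sysconfig/nfs is an assumption; check how your distribution's nfslock init script passes options to rpc.statd):

# Pin rpc.statd to a known port so it can't land on something like 993/imaps.
echo 'STATD_PORT=4000' >> /etc/sysconfig/nfs   # assumes the init script reads this
service nfslock restart

# Verify where rpc.statd ended up listening.
netstat -tulpn | grep rpc.statd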
Saturday, November 17, 2007
zimbra rhel5 install
Monday, November 12, 2007
Zimbra 1
ldap_filter: (&(uid=%u))
ldap_search_base: o=domain

I'm now trying to import a list of users. Since Zimbra uses OpenLDAP to store account data, I think I'll have to use that as my interface. I'm able to export them:
openldap/sbin/slapcat -f /opt/zimbra/conf/slapd.conf -l /tmp/ldap.ldif

But even if I used the last 14 lines of the ldif file, I don't think I could just re-import it. I might be able to feed the file to a script which would re-create the accounts in the mail store (a rough sketch is below), but I'm speculating. Time to read more documentation. I want to pilot on multiple servers:
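Back to the import: the kind of script I have in mind would look something like this (untested sketch; the zmprov createAccount usage, the domain, and the "user password" list format are my assumptions):

#!/bin/sh
# Sketch: read "user password" pairs from a file and create each account
# in the Zimbra mail store with zmprov. Domain and file format are made up.
DOMAIN=example.com

while read user password; do
    /opt/zimbra/bin/zmprov createAccount "${user}@${DOMAIN}" "${password}"
done < /tmp/users.txt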

Sunday, November 4, 2007
xli
down with regex
Regular expressions tend to be overkill, especially for simple things. User input should almost never be turned into a regex. A lot of string operations can be handled more simply: look at the string and try to make a rule based on index math and substrings.
I wanted to know if a string ended with a substring. I tried this:
import re

host_re = re.compile('\.domain\.tld$')
if (host_re.search(host)):
    # do something

Instead we ended up with:
def ends_with(x, y):
    return len(x) == x.rfind(y) + len(y)

If y is found within x, rfind returns the index where it was last found; adding that index to the length of y has to equal the length of x for x to end with y. This is better because a regex tends to introduce complications. Here's a variation of the above which also covers the case where the other string is longer:
pos = host.rfind(zone_line)
if (pos > -1 and len(host) == pos + len(zone_line)):
    # do something