Sunday, 15 January 2012

The Fundamentals of building Client Networks

Recently I've been thinking a lot about the best way to help my clients understand and engage with their IT networks and systems. I have also been thinking about how best to manage and look after these systems for my clients in a sustainable way. To do this I've been looking at the fundamental building blocks of my client base and considering the commonalities. The reason for understanding these commonalities is to put in place simple guidelines for developing and maintaining a network. Each network will of course have unique circumstances, but if the fundamental infrastructure is well understood, those unique aspects become easier to manage.

So thinking of all of these things, I've looked at the commonalities in my clients and found they can be grouped into several broad categories:


  • sites with a single server, single location and a small (less than 30) number of users. They may have some mobility but generally only a small requirement.
  • sites with multiple servers but only a single location and between 25 and 50 users. Again, some mobility but not a lot - potentially they'll want more.
  • sites with multiple servers, multiple locations and 25 plus users. Requirements for mobility including file access and remote VPN access.
Although these categories are quite broad, they cover 90% of the small to medium business clients I tend to deal with. These clients are all important to me, and given I have a finite amount of time to work with them, it's critical that the fundamentals and underlying structure of each network don't need to be re-discovered every time I'm on site. How, then, to ensure efficient support of clients?

Firstly, by grouping sites into the broad categories I mentioned earlier, I get a quick, high-level understanding of each site. By building each network following standard procedures there is plenty of efficiency to be gained, and it's also a lot easier to explain to a client what is on their network.

Secondly, good documentation is key. It's not just writing stuff down, but having it available to review when you are on site - and this means several things have to be in place:
  • data must be accessible remotely, while on the move
  • data must be secure
  • data must be organised and detailed
Having the data secure is incredibly important - if it's available via some type of web interface, it has to be secured with SSL and the passwords have to be strong. Although this seems obvious, it doesn't seem to be well executed. Having the data organised and detailed is the key to keeping client networks well looked after.

Thirdly, using the same basic ideas to build each network type means that if the key support staff member is not available, other support staff can easily work out what is where and how it's set up. These basic ideas also make it far more efficient to produce quotes and proposals, and, I've found, make it easier to work new ideas into proposals and integrate them into networks.

Recently I have been speaking with businesses that aren't currently clients of mine and I've found some over-complicated and under-documented networks. By applying some of the basic principles I've touched on in this post I'm able to start getting these networks back under control. I've found that the easiest way to do this is the following:
  • determine what the client needs
  • determine what the client wants
  • determine what the client already has
  • determine what the client actually can have
  • document it all and discuss at length and in as non-technical a manner as possible
These are the fundamentals of building client networks and they are also the fundamentals of recovering a client network from a state of disrepair. The major difference is that the former causes a lot less pain than the latter.

Questions?

AB out.

Saturday, 14 January 2012

Understanding a network

Recently I've been spending time with several prospective clients and I've found a few quite horrible things. The common, awful things stem from a complete lack of disclosure by the incumbent IT support consultants. In one instance, the client isn't even allowed to have administrator access to their own systems! They can't add or remove users, or perform any basic administrative functions. They are being kept in the dark and spoonfed bullshit by the IT guys. So when they get a hugely expensive proposal to upgrade their systems, they fall for it the first time, maybe even the second, before finally calling someone else in to look at it.

What I've found is awful - barely ethical behaviour by the IT consultants, systems running non-genuine software, and lies to the client. Networks that are probably capable of so much more are being poorly managed, even by basic standards. For example, several of them have multiple sites with poor data delivery - but rather than looking at the bandwidth as an issue, the IT guy is telling them the servers are under-performing, even though an analysis of the systems shows plenty of overhead in disk, CPU and memory capacity. The bandwidth is the problem, but rather than work on that and fix some poorly configured routers, there are inaccurate reports of server issues - for example "the server is running out of RAM, that's why it goes slow...", when checking the RAM shows there is plenty free and the system isn't swapping at all.

I just find this completely unethical. Why not consider some different options if things aren't working properly? It's been my experience that a client is willing to accept that new ideas come up, and to consider different options that make the office more productive. It's also been my experience that a client won't look to replace an IT consultant unless they are very unhappy and willing to risk potential damage to their systems for the opportunity to get a more reliable setup they can trust - and at the end of the day that's what this is all about: trust.

Without trust, the relationship is over. It's very obvious, but people get lazy, and without checking that they are really looking after their clients, sloppy behaviour becomes prevalent. Then it's time for someone else to take over, with the client paying a great deal in time and in the pain of the changeover, plus the loss of valuable site knowledge.


Sunday, 8 January 2012

Useful script for unraring files in multiple directories

A friend of mine recently asked me to help with a problem he had. When he downloaded files from the internet, no doubt legitimate, many of them contained nested directories with a RAR archive and its associated parts inside. Some of these downloads look like this (for example):

  • Main Folder
    • Sub-Folder 1
    • Sub-Folder 2
    • Sub-Folder n etc

Going through each sub-folder and unraring each archive by hand is really tedious, so I wrote a simple one-liner for him to run straight from the Linux/*BSD command line:


angus@server:~$ directory=/path/to/directory ; for dir in $( ls "$directory" ) ; do cd "$directory/$dir" ; unrar e *.rar ; cp *.avi /path/to/end/directory ; cd .. ; done

It seems to work relatively well. An expansion of this as a bash script:

#!/bin/bash
# Script to extract RAR files downloaded in torrents - usually TV series type torrents
# This is the directory your torrents are downloaded to
echo "Please input torrent directory: "
read -r input_torrent
echo "$input_torrent"
# This is the directory you want the extracted files to be copied to
echo "Please input directory for extraction: "
read -r output_dir
echo "$output_dir"
# enable for loops over items with spaces in their names
IFS=$'\n'
for dir in $(find "$input_torrent" -type d)
do
        cd "$dir" || continue
        # ls # uncomment this line and comment the two lines below for testing
        unrar e *.part001.rar # or this can be unrar e *.rar
        cp *.avi "$output_dir"
        cd "$OLDPWD"
done

Notes about this script:
  • unrar e *.part001.rar
    • I've found that this may need to be altered depending on the torrent. The directory may have the files set up in the pattern above (file.partXXX.rar), OR, also commonly found, file.XXX pieces with a file.rar that is the key file to the archive
  • The input_torrent and output_dir variables need to be written without backslashes i.e.
    • /path/to/files with a space in the name
    • NOT /path/to/files\ with\ a\ space\ in\ the\ name as you would usually expect in a *nix environment
      • This is because I'm learning bash scripting and making things all neat and tidy is more than I'm capable of doing :-)
  • It's set up to copy the extracted avi file elsewhere
The bit of the script between the "do" and the "done" can be modified to do different things, which might be handy for you down the track.
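As one illustration (my own variation, not part of the original script), the loop body could handle both archive naming patterns mentioned in the notes above and copy .mkv files as well - the extensions and behaviour here are assumptions to adjust as needed:

        cd "$dir" || continue
        # extract whichever naming pattern is present in this folder
        if ls *.part001.rar >/dev/null 2>&1; then
                unrar e *.part001.rar
        elif ls *.rar >/dev/null 2>&1; then
                unrar e *.rar
        fi
        # copy any extracted video files to the output directory
        cp *.avi *.mkv "$output_dir" 2>/dev/null
        cd "$OLDPWD"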

Modify as you require and drop a comment if you have anything to add to the script!

AB out.

Saturday, 7 January 2012

rtorrent - the friendly torrent application

I use rtorrent for my legitimate torrent requirements. I find it extremely useful and here is why:

  • I run it on a linux server I have under a screen session so it's always available
  • it's set to have an upload and a download limit for torrents
  • stops after I've uploaded double what I've downloaded
  • reliable
  • easy to drive
Of course, getting it to this point wasn't totally straightforward. I had to set up the .rtorrent.rc file in my home directory to get all this to work properly. It doesn't use 100% of rtorrent's capabilities, merely the ones I find most useful. For example, I don't have it set to watch a particular directory for new torrents - I add them manually for an additional measure of control, and so torrents I've finished seeding aren't accidentally added back in. It does send me an email when a download finishes, retains info about where each torrent is up to, and stops if disk space becomes low (which it occasionally does).

Here is my .rtorrent.rc contents - lines beginning with # are comments:
#=================================================================
# This is an example resource file for rTorrent. Copy to
# ~/.rtorrent.rc and enable/modify the options as needed. Remember to
# uncomment the options you wish to enable.
# Maximum and minimum number of peers to connect to per torrent.
#min_peers = 40
#max_peers = 100
# Same as above but for seeding completed torrents (-1 = same as downloading)
#min_peers_seed = 10
#max_peers_seed = 50
# Maximum number of simultaneous uploads per torrent.
#max_uploads = 15
# Global upload and download rate in KiB. "0" for unlimited.
download_rate = 200
upload_rate = 5
# Default directory to save the downloaded torrents.
directory = /home/angus/torrents
# Default session directory. Make sure you don't run multiple instances
# of rtorrent using the same session directory. Perhaps using a
# relative path?
session = ~/torrents/.session
# Watch a directory for new torrents, and stop those that have been
# deleted.
#schedule = watch_directory,15,15,load_start=/home/angus/torrent/.torrent
#schedule = untied_directory,5,5,stop_untied=
# Close torrents when diskspace is low.
schedule = low_diskspace,5,60,close_low_diskspace=100M
# Stop torrents when reaching upload ratio in percent,
# when also reaching total upload in bytes, or when
# reaching final upload ratio in percent.
# Enable the default ratio group.
ratio.enable=
# Change the limits, the defaults should be sufficient.
ratio.min.set=150
ratio.max.set=200
ratio.upload.set=20M
# Changing the command triggered when the ratio is reached.
system.method.set = group.seeding.ratio.command, d.close=, d.erase=
# The ip address reported to the tracker.
ip = xxx.xxx.xxx.xxx
#ip = rakshasa.no
# The ip address the listening socket and outgoing connections is
# bound to.
#bind = 127.0.0.1
#bind = rakshasa.no
# Port range to use for listening.
port_range = 6900-6999
# Start opening ports at a random position within the port range.
#port_random = no
# Check hash for finished torrents. Might be useful until the bug is
# fixed that causes lack of diskspace not to be properly reported.
#check_hash = no
# Set whether the client should try to connect to UDP trackers.
#use_udp_trackers = yes
# Alternative calls to bind and ip that should handle dynamic ip's.
#schedule = ip_tick,0,1800,ip=rakshasa
#schedule = bind_tick,0,1800,bind=rakshasa
# Encryption options, set to none (default) or any combination of the following:
# allow_incoming, try_outgoing, require, require_RC4, enable_retry, prefer_plaintext
#
# The example value allows incoming encrypted connections, starts unencrypted
# outgoing connections but retries with encryption if they fail, preferring
# plaintext to RC4 encryption after the encrypted handshake
#
# encryption = allow_incoming,enable_retry,prefer_plaintext
# Enable peer exchange (for torrents not marked private)
#
# peer_exchange = yes
#
# Do not modify the following parameters unless you know what you're doing.
#
# Hash read-ahead controls how many MB to request the kernel to read
# ahead. If the value is too low the disk may not be fully utilized,
# while if too high the kernel might not be able to keep the read
# pages in memory thus end up thrashing.
#hash_read_ahead = 10
# Interval between attempts to check the hash, in milliseconds.
#hash_interval = 100
# Number of attempts to check the hash while using the mincore status,
# before forcing. Overworked systems might need lower values to get a
# decent hash checking rate.
#hash_max_tries = 10
# First and only argument to rtorrent_mail.sh is completed file's name (d.get_name)
system.method.set_key = event.download.finished,notify_me,"execute=~/scripts/rtorrent_mail.sh,$d.get_name="
#===================================================================
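The last line of the config calls a small notification script in ~/scripts. The actual rtorrent_mail.sh isn't shown here, but a minimal sketch might look something like this (assuming a working local mail command; the address is a placeholder):

#!/bin/sh
# hypothetical sketch of ~/scripts/rtorrent_mail.sh - not the original script
# rtorrent passes the completed torrent's name as the first (and only) argument
TORRENT_NAME="$1"
echo "rtorrent has finished downloading: $TORRENT_NAME" | \
        mail -s "Torrent complete: $TORRENT_NAME" user@example.com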

I hope this is useful for you.

Friday, 6 January 2012

Restoring OTRS on an Ubuntu Server

Some time ago I relocated our OTRS server from a failing physical server to a virtual machine under Microsoft Hyper-V. While the change to a virtual machine ran smoothly and I used the details in a previous post to set it up, after a month I noticed some strange errors creeping into the installation - the nightly log emails had inconsistencies in them. Fortunately I was able to run a full backup of the OTRS installation using the built-in backup tool, and very shortly thereafter the server fell in a heap. Rebooting it caused a complete failure of the virtual disk. Now, how the hell something like that happens is beyond me. It was like the virtual disk dropped a head or something... Ridiculous, I know, but the fsck I ran basically told me the disk had failed and corruption had crept into everything on it. Realising I was fighting a losing battle, I decided to create a new virtual machine and transfer the data back across.

The recovery procedure described here: http://doc.otrs.org/3.0/en/html/restore.html doesn't really cover everything that needs to happen. Here is a short breakdown of the notes I made while running the recovery process:


  • make sure you set the MySQL (or whatever database you use) password to be the same.
  • in fact, make sure you match up all the passwords where possible.
  • Install OTRS first using the Ubuntu install method - which is well described here: http://wiki.otrs.org/index.php?title=Installation_on_Ubuntu_Lucid_Lynx_(10.4) 
    • make sure you run all the right commands, including the cron ones (which I initially forgot - oops!)
  • Run the restore as per the link above and then restart apache and cron (a rough sketch of these commands follows this list).
  • Test your installation and see how it goes.
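As a rough sketch of that restore-and-restart step (run as root or with sudo; the timestamped backup directory name is a placeholder for whatever backup.pl created):

cd /opt/otrs
scripts/restore.pl -b /home/user/backup/<backup-directory>/ -d /opt/otrs/
/etc/init.d/apache2 restart
/etc/init.d/cron restart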
Since this error, I've written a very simple script that runs the backup and scp's it across to another server I have. This in turn is backed up to my FreeNAS box, hopefully protecting my useful data. Here is the script:

---------------------------------------------------------------------------------------------------
#!/bin/bash
# the timestamp format matches the directory name created by the OTRS backup.pl script
NOW=$(date +"%Y-%m-%d_%H-%M")
# run the built-in OTRS backup into a local directory
/opt/otrs/scripts/backup.pl -d /home/user/backup
# copy the dated backup directory off to another server
scp -r "/home/user/backup/$NOW" user@server:/home/user/backup/OTRS/
---------------------------------------------------------------------------------------------------

The $NOW variable is configured to match the output of the OTRS backup.pl script and then I simply scp it across to my server. It's date organised and works pretty nicely. rsync might be a nicer way to do it, but this virtual machine only provides OTRS and nothing else so I'll keep it simple. 
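If you want the backup script to run automatically, a nightly cron entry along these lines would do it (the script path and time here are examples only, not from my actual setup):

# crontab entry: run the OTRS backup script at 2am every day
0 2 * * * /home/user/scripts/otrs_backup.sh >> /home/user/backup/otrs_backup.log 2>&1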

If you can use any of this then please do.

AB out.

Thursday, 5 January 2012

Service Delivery in bandwidth poor locations

Being in the country presents some interesting challenges, and one I come up against frequently at the moment is, as the title suggests, getting needed services into various remote sites. ADSL is quite widespread, and where it isn't available, various wireless services (NextG and the like) can cover basic connectivity. But where one is attempting to link sites via a VPN, 512/512Kbps is really not enough for modern applications, particularly if you're pushing internet traffic as well as mail and remote desktop connections over that link. Even an ADSL2+ link with speeds of up to 24Mbps down and 1Mbps up is not really adequate for the task at hand.

So how to get around this? I'm thinking along the lines of a division of services, decentralising where possible and using cloud technologies to take the burden off the VPN links - that is, push email out to the cloud, along with whatever other services can reasonably live on the internet, thereby reducing the outgoing bandwidth requirements at the central site. Hopefully this frees up more bandwidth for RDP and the like. Additional ADSL services can further reduce the burden if I push HTTP traffic out that way (using a Netgear dual-WAN router or similar).

I have recently had to put this theory into practice and it seems to be working out, but it's not entirely solving the problems. Perhaps a packet-caching device at either end, such as the ones produced by Riverbed, might be the answer. It's a difficult question and it gets worse when people want to put voice over the link as well. Clever calling plans can get cheap inter-office calls more easily than implementing a whole separate VPN just for voice. And at the end of the day, let's not forget that ADSL is provided on a "best effort" basis and no provider in the country guarantees bandwidth availability.

Tricky tricky tricky....

Wednesday, 4 January 2012

Skyrim issues

I really like playing the Elder Scrolls games - I've played and completed Morrowind, Oblivion and now I'm working through Skyrim. The issue I've got is frequent freezes. Now, I play it on the PlayStation 3 and do that for a very specific reason - I don't have to worry about compatible hardware or any of that jazz, I just want to play the damned game. So when I find that a game configured for very specific hardware crashes like this, it's extremely irritating. I've got both the PS3 and the game patched to the latest updates, so that's all current and I'm not missing any potential fixes.

Generally I find the gameplay very good, and I enjoy the skill system and the levelling. I try to avoid using online walkthroughs or FAQs - that's cheating! This means I occasionally screw things up and have to go back to a recent save (of which I have a lot because of the aforementioned crashes), and it costs me time. In the 45 minutes I've played today it has crashed twice. I turn off the PS3, turn it back on, go through the disk recovery and then I can eventually get the game started again.

I hope they can get things sorted with it. It will be a much better game once its stability issues are improved.

AB out.

Tuesday, 3 January 2012

Migrating to Blogger

Previously I had been using Google Sites to host www.ryv.id.au. Sites is great, don't get me wrong, but the main purpose of my webpage is to host this blog and I don't think Sites does that well. For example, it doesn't list the entries in date order, rather alphabetically down the left-hand side. While this is OK for a webpage, it makes a blog-oriented site difficult to navigate. My other webpage - www.zenpiper.com - has a similar issue, only it also has other content that is not so easily migrated to Blogger.

It's horses for courses naturally. I've used Blogger previously and been reasonably happy with it. I'll stick with it for now and review what's happening with Google Sites as I go. Naturally, as a Google Reseller, I'm trying to keep up with it to the best of my ability to offer it to my valued clients.

AB out.

Adventures with OpenBSD - OpenBSD 5.0 on Sun Blade 1500

The scenario:

Installation of OpenBSD 5.0 on a Sun Blade 1500. I've replaced the default XVR-600 piece of proprietary junk video card with a Sun PGX-64 PCI graphics card that uses the mach64 chipset for rendering. Instantly I had a much nicer console and a far more workable X configuration. The only trick was getting the bloody thing to use 1280x1024 at 24-bit colour on my 19" Dell monitor. Here are the notes from the exercise:

Default installation
man afterboot

Dell E198FP Sync rates:
  • 30 kHz to 81 kHz (automatic)
  • 56 Hz to 76 Hz
Make sure to copy the above sync rates into the Monitor section of /etc/X11/xorg.conf (a sketch of that section follows below) and also add:
Section "Screen"
        Identifier "Screen0"
        Device     "Card0"
        Monitor    "Monitor0"
        DefaultDepth    24
                SubSection "Display"
                Viewport   0 0
                Depth     24
                Modes   "1280x1024"
        EndSubSection
EndSection
- to force it to use 1280x1024
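For reference, the sync rates listed earlier end up in the Monitor section looking something like this (a sketch - the Identifier needs to match whatever X generated in your xorg.conf):

Section "Monitor"
        Identifier  "Monitor0"
        HorizSync   30.0 - 81.0
        VertRefresh 56.0 - 76.0
EndSection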

Add to .profile: PKG_PATH=http://mirror.aarnet.edu.au/pub/OpenBSD/5.0/packages/`machine -a`/

Installing Fluxbox (to play with more than anything):

pkg_add -i -vv fluxbox feh

Make sure to add exec /usr/local/bin/startfluxbox to .xinitrc by doing:

$ echo "exec /usr/local/bin/startfluxbox" > .xinitrc

Also do the same to .xsession so it's picked up straight away as well:

$ echo "exec /usr/local/bin/startfluxbox" > .xsession

pkg_add -i -vv midori -> lightweight browser, and tends to install a billion dependencies (mostly media playing type stuff which isn't bad)
pkg_add -i -vv firefox36
pkg_add -i -vv mousepad (lightweight text editor)
pkg_add -i -vv filezilla (FTP and stuff)
pkg_add -i -vv goffice (some kind of office thing - need to examine it more closely)
pkg_add -i -vv ristretto (basic image editing and viewing)
pkg_add -i -vv epdfview (PDF viewing)
pkg_add -i -vv conky (for checking out the system loads)
pkg_add -i -vv eterm (my favourite terminal program)

Note: fluxbox menus need a lot of work - I've deleted/commented out a *lot* of stuff to clean this all up.

pkg_add -u (check for any updates or errata)

Also look at this : http://www.gabsoftware.com/tips/tutorial-install-gnome-desktop-and-gnome-display-manager-on-openbsd-4-8/ for using Gnome and GDM

Further adventures with OpenBSD - XFCE vs Gnome

So, continuing the great adventure - recently whenever I've used Gnome there is a string of "Starting file access" or something similar that appears in multiple tabs down the bottom. This continues endlessly and the load on my Blade 1500 gets up to about 5, which is unacceptable. So I hit the net and looked into using something different. I found a great blog (which I neglected to bookmark or make any other notes about) that explained a bit about how to do it. Basically I did this:

# pkg_add -i -vv pkg_mgr

which is an easy way to do searches and install large numbers of packages - then go to X11 and pick all the XFCE packages. How easy is that? Download and install and off you go. The load on my machine is now:

angus@blade:~$ w
11:43AM  up 13 days, 21:04, 3 users, load averages: 0.71, 0.63, 0.59

That's with 792MB of RAM in use (of 2048MB), and with Firefox running while I write this entry.

Overall I find XFCE to be more responsive than Gnome - which is hardly surprising and for the basic features I require it looks quite nice and drives quite well. 

I do tend to find that the machine struggles when I'm looking at various webpages - it doesn't handle processor-intensive work all that well - and after all, why should it? This computer is old and only has 1GHz processors, so it will be slow. As a basic server-type machine - running with the encrypted file systems and the like, with SSH access in - it's working quite well.

Configuring an Ubuntu server under Microsoft Hyper-V

It's fairly straightforward to make this happen. Do a basic config of the system and then:

$ sudo vi /etc/initramfs-tools/modules
    and add the lines below:
hv_vmbus
hv_storvsc
hv_blkvsc
hv_netvsc

Save the file, then: 

$ sudo update-initramfs -u

$ sudo reboot

$ sudo ifconfig -a

$ sudo vi /etc/network/interfaces
    Add the lines below for DHCP:
auto eth0
iface eth0 inet dhcp

    Or add the lines below for a static IP (substitute your own address, subnet mask and default gateway):
auto eth0
iface eth0 inet static
address 10.0.0.100
netmask 255.255.255.0
gateway 10.0.0.1

Now restart networking service & reboot:

$ sudo /etc/init.d/networking restart
$ sudo reboot

And you will be good to go!
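As a quick sanity check (not part of the original notes), you can confirm the Hyper-V modules actually loaded after the reboot:

$ lsmod | grep hv_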

Further adventures with OpenBSD - Encrypting File Systems

So I decided to create an encrypted folder on my workstation to use as storage for work-related files (which typically have passwords etc. in them). After some trial and error I found a way to do it. Blog entries and the like that reference this material mention using the svnd0 vnode device for the encryption, but that didn't work for me. I'm not sure if this is an OpenBSD 5.0 peculiarity or something to do with my SPARC install, but I eventually sorted it out.

Note: do all commands as the root user - it's a lot easier.

I created the file that will hold the encrypted data:
    # dd if=/dev/zero of=/location/of/secret/file/.cryptfile bs=1024 count=1024000

Note that it's 1GB in size and has a preceding "." so it's at least a little bit hidden from a casual ls.

I have to mount .cryptfile somewhere so I created a folder for that too:

    # mkdir /media/crypt (or wherever you'd like to put it)

I have to check what vnodes are available:

    # vnconfig -l
vnd0: not in use
vnd1: not in use
vnd2: not in use
vnd3: not in use

I can choose any of these to associate with my virtual encrypted device. I will use vnd0. Using vnconfig again:

    # vnconfig -ck -v vnd0 .cryptfile
Encryption key: (use something good)
vnd0: 1048576000 bytes on .cryptfile

OK so now we need to create a file system on our device (which is only a single partition) so we need to newfs the "c" slice as this is the whole disk:

    # newfs /dev/vnd0c
/dev/rvnd0c: 1000.0MB in 2048000 sectors of 512 bytes
5 cylinder groups of 202.47MB, 12958 blocks, 25984 inodes each
super-block backups (for fsck -b #) at:
 32, 414688, 829344, 1244000, 1658656,

So now to mount our encrypted filesystem to store our secret files!

    # mount /dev/vnd0c /media/crypt

Probably a good idea to make it usable for me:

    # chown -R angus:wheel /media/crypt

And we're off and racing:

# df -h
Filesystem     Size    Used   Avail Capacity  Mounted on
/dev/wd0a     1005M   42.2M    913M     4%    /
/dev/wd0k     42.8G    1.0G   39.7G     2%    /home
/dev/wd0d      3.9G    224K    3.7G     0%    /tmp
/dev/wd0f      2.0G    450M    1.4G    24%    /usr
/dev/wd0g     1005M    135M    820M    14%    /usr/X11R6
/dev/wd0h      8.6G    1.9G    6.3G    23%    /usr/local
/dev/wd0j      2.0G    2.0K    1.9G     0%    /usr/obj
/dev/wd0i      2.0G    2.0K    1.9G     0%    /usr/src
/dev/wd0e      7.9G   42.7M    7.4G     1%    /var
/dev/vnd0c     984M    2.0K    935M     0%    /media/crypt

I'll be re-creating this whole thing again soon so watch out for any updates or errata.

Check out: 
http://www.backwatcher.org/writing/howtos/obsd-encrypted-filesystem.html for some handy mounting/unmounting scripts.
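The link above has proper mount/unmount scripts; as a rough idea of their shape, based only on the commands used earlier in this post (run as root, with the same paths as above):

#!/bin/sh
# crypt-mount.sh - hypothetical helper: attach the vnode device and mount it
vnconfig -ck -v vnd0 /location/of/secret/file/.cryptfile && mount /dev/vnd0c /media/crypt
# and the reverse, to unmount and detach:
#   umount /media/crypt && vnconfig -u vnd0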

FreeNAS Upgrade from i386 to x64

To get reporting working properly do the following:

SSH to the box (or use the console)

[root@freenas] ~# service collectd stop
    Stopping collectd.
    Waiting for PIDS: 4002.
[root@freenas] ~# find /data -name "*.rrd" -exec rm -rf {} \;
[root@freenas] ~# find /var/db/collectd -name "*.rrd" -exec rm -rf {} \;
[root@freenas] ~# service collectd start
    Starting collectd.

... and reporting will be fixed.

FreeNAS version is FreeNAS-8.0.2-RELEASE-amd64 (8288)

*BSD vs Linux for Home Server

I have a few simple needs for my home server - it needs to be stable, functional on older hardware (a P4 2GHz with 1 or 2GB of RAM) and able to run a few simple applications:
  • rtorrent (for... ahem... legitimate torrent requirements)
  • irssi - the bestest IRC client (and the one I've spent ages getting a nice config file for)
  • screen (for teh awesomeness!)
  • SSH - for remote work, and for sshfs so I can rsync and back up data remotely
  • and a bit of storage space - 100GB is nice
  • nagios - monitoring work sites as required
  • DHCP
  • DNS
Currently I'm running Ubuntu 10.04.3 LTS on a P4 3GHz HP USDT that has a noisy fan in it, and I'm going to migrate back to the Dell P4 2GHz box I was running before. It has a slower processor, but it's quiet and reliable. It's also more power efficient than the current one. I've been considering getting my hands on an Atom-powered box or the like with very low power requirements for home. After all, this server really doesn't have to do a lot of work - it just needs to chug quietly away and provide the basic services I need. So why change?

Well several reasons I guess. Security is the big one. Reliability is the next one. A rolling distribution would be handy too - one with easy, in place, headless upgrades.

Most Linux variants will support the apps I listed, as will FreeBSD and DragonFly BSD, my two preferred BSD variants (even though I've had great success with OpenBSD on my Sun Blade - see earlier posts). I'm thinking FreeBSD may be the option to go with, so I'm playing with it under VMware Player at the moment. DragonFly's HAMMER file system is mighty attractive though, so I'm thinking very carefully about this choice. I'll keep notes on the adventure as it goes.
