Thursday, 14 November 2013

HOW TO: POP3 on Windows 8

I have a few clients still clinging to POP3 mail accounts and the newest version of the Windows Mail client doesn't work with it. Instead you will need to install the desktop version of Windows Live Mail. This can be found here: http://download.live.com

When you grab it, only install the mail component. When that is done, go into Start, find the Windows Live Mail shortcut and set up your beloved POP3 account. Enjoy

Saturday, 12 October 2013

ownCloud - your in house cloud file service

As usual, a client of mine requires something out of the ordinary - they love Dropbox and Drive, using them extensively at home, and now want something similar for themselves at work. The catch is that the work they do requires their data to be housed in Australia - some legal requirement, I gather, for their funding. After searching the mailing list archives of SAGE-AU (www.sage-au.org.au) for wisdom and nuggets of gold, I stumbled across ownCloud. Several of its members expressed admiration for this software, so I thought I'd check it out.

You can find out a lot more about ownCloud on their website - www.owncloud.org but here is a brief overview of my experiences using this application.

Firstly I installed it on a spare PC I had, after putting Ubuntu 12.04 LTS on there. Installation was straightforward; securing it was a bit less so - there's a bit of work to be done there, but it's reasonably well documented. I then installed the client application on my Windows 7 laptop. It all just worked, and worked quite well. The client on my Windows 7 box works almost exactly like Drive/Dropbox - copy files in and they sync up to my ownCloud server. They are then available from the web page - secured with SSL - and accessible anywhere. I've punched a hole through the firewall for testing purposes to make this available externally. My clients love it.
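
For anyone wanting to reproduce the server side, the install was roughly this shape - a hedged sketch from memory, assuming Ubuntu 12.04's stock Apache/PHP 5/MySQL stack, with the ownCloud version number as a placeholder:

sudo apt-get install apache2 mysql-server php5 php5-gd php5-mysql
cd /var/www
sudo tar -xjf ~/owncloud-x.y.z.tar.bz2        # tarball downloaded from owncloud.org
sudo chown -R www-data:www-data /var/www/owncloud
# then browse to http://yourserver/owncloud and run through the setup wizard

The hardening steps (SSL and so on) are covered in the ownCloud admin documentation.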

The configuration options are quite straightforward and although I did get rid of a lot of the helpful folder customisations, I can see why these would be useful to others.

In terms of load on the server it's been quite minimal so far. The PC is running quite happily, only a low end dual core box with 2GB of RAM and a 160GB hard disk drive.

The production box will be either a Dell PowerEdge T110 II or a HP N54L - depending on funding. Requirements are quite low and, depending on the volume of data to be stored, either of these servers will do a great job. Cloud in the office for under $1000!

If you have half an hour (that's all it took for me to get it going) check out ownCloud :-)

Friday, 4 October 2013

Goodbye Cisco SRP527 and hello... what?

Recently I learned the SRP527 was being End Of Life'd. This is a sad moment - we started using the Netgear DG834s to provide a low cost VPN solution between two sites, then graduated to the Cisco at double the price, and now I'm on the hunt again.

The issue we have is so many of our clients only need a single VPN tunnel between ADSL enabled sites. Cisco 800 series routers are too expensive and provide far more bang than the client usually needs or wants.

Currently we are investigating the Draytek range of routers. I've ordered two and I'll review them in my next post.

The SRP527s have been quite reasonable units for us. I've found that the wireless connectivity, particularly to Apple devices, is a bit hit and miss, but they've generally provided us with a stable connection that has been 1Mb/s faster than other ADSL2+ devices I've used.

Bring on the next challenger!

Sunday, 15 September 2013

A tale of survival - FreeNAS and ZFS - and four disks with failed sectors

Recently, one of the FreeNAS storage devices we have at the office started to generate failed sectors on two of the disks. While an eyebrow raising event in and of itself, I wasn't particularly concerned. Living in the virtual outback as we do, I ordered some more disks. About 8 days later, the third of the four disks in our NAS started to throw out errors! Uh oh, it appears that we were on a slippery slope towards Doom!

I called the supplier demanding my disks only to find out they'd ordered WD Green drives! Noooo. I amended the order to get WD Red drives (which are designed for a NAS) and was informed it would take a day or two. The next morning the final disk was generating errors. We were getting close to some serious error thresholds on two of the disks and the third and fourth were well on the way...

Impatiently waiting for the new disks to arrive, I kept a close eye on the NAS. FreeNAS emailed me with alarming frequency about disk failure, imminent apocalypse and the like. The next day the WD Red drives arrived and three of the four disks were now generating large numbers of errors. I shut the NAS down (not taking the pool offline like I should have!) and replaced the most error prone disk. Restarting the NAS I added it back to the pool, replacing one of the dead disks and let it rebuild. Gradually I replaced all the disks until the pool was degraded with a corrupted file. 
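
FreeNAS drives the disk replacement through its web GUI, but underneath it's the standard ZFS replace-and-resilver cycle - a rough sketch of the equivalent commands, assuming a pool called tank:

zpool status tank                                # identify the failing member
zpool replace tank <failing-disk> <new-disk>     # swap in the new disk
zpool status tank                                # watch the resilver, then repeat for the next disk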

On this filesystem are all my virtual machines, so I was a bit concerned about which file was corrupt. Thankfully it was an old backup of my current Windows 7 workstation so I deleted it. Oddly, I was unable to remove two of the old disks - every time I tried it would add them back.

After a bit of head scratching I realised I needed to delete the snapshots and once I did that, I was able to remove the disks from the pool and it changed from Degraded to Online and services were all restored. I've checked over the disks and every single one has failed since. One file lost out of almost 3TB of data - thank you ZFS and FreeNAS! Note to self - sort the backups out!
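
For the curious, the command line equivalents of that final bit of tidying up look roughly like this (pool, dataset and snapshot names are placeholders):

zfs list -t snapshot                     # find the old snapshots holding the removal up
zfs destroy tank/vms@manual-20130901     # remove each one
zpool detach tank <old-disk>             # the replaced disks can now actually leave the pool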

Friday, 6 September 2013

How to update XenServer 6.2

Since XenServer 6.2 came out, things have changed for the Citrix developed virtualisation environment. Specifically, the free licence no longer applies - you get a fully featured system out of the box and licensing is applied per socket. What this means in a practical sense for keeping your server updated is that using XenCenter to update it is not an option unless you've licensed it.

If you're like me and running XenServer in a quasi test / virtual environment, you won't have paid for a licence (as much as you'd like to). Therefore updating your VM hosts is a bit different and you have to use the command line. Now, it's important to note a couple of things that the XenServer FAQs don't necessarily specify. You have to use the xe command line program to upload and apply the patches. Do this in the following way:

Open up a cmd process (Windows+r, then type cmd and hit enter).

Go to the correct directory: cd C:\Program Files (x86)\Citrix\XenCenter and hit enter.

In this folder is the xe.exe application we'll need. The next thing to do is download the patches from Citrix. Usually they'll land in your Downloads folder. Extract the files from within and note where you've put them - typically it ends up being c:\users\ryv\Downloads\XS62E002\ for example.

You have to use xe.exe to upload and apply these patches to your server - send it to the Pool Master using either its IP or hostname (if you have DNS set up correctly).

The syntax is:

xe patch-upload -s <hostname / IP> -u root -pw <password> file-name=<path to file>\XS62E002.xsupdate

In real life this command might look like this:

xe patch-upload -s 10.0.0.100 -u root -pw s3cret123 file-name=c:\users\ryv\Downloads\XS62E002\XS62E002.xsupdate

Once the file is uploaded, it gives you a UUID of the hotfix. It might look like this: 59128f15-92cd-4dd9-8fbe-a0115d1b07b4

Make a note of this - we'll need it in a tick.

To apply the hotfix to your hosts in the pool, the syntax looks like this:

xe -s <hostname / IP> -u root -pw <password> patch-pool-apply uuid=<hotfix UUID>

then hit enter.

In real life it might look like this:

xe -s 10.0.0.100 -u root -pw s3cret123 patch-pool-apply uuid=59128f15-92cd-4dd9-8fbe-a0115d1b07b4

You'll really need to reboot the hosts after this. Verifying the update is easy - go to XenCenter and choose your Pool, then click the General tab and expand Updates. Note that the update is "Fully Applied" and you're done!
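
If you'd rather skip the Windows command prompt entirely, the same two steps can be run from the pool master's own console, where xe is already present and needs no host or credential flags - a minimal sketch, assuming the hotfix has been copied to /root on the pool master:

UUID=$(xe patch-upload file-name=/root/XS62E002.xsupdate)   # patch-upload returns the hotfix UUID; capture it
xe patch-pool-apply uuid="$UUID"                            # apply it to every host in the pool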

Have fun!

Sunday, 1 September 2013

Further adventures of XenServer on the HP N40L Microserver - a follow up

If you've read my previous post on this matter you'll know how delighted I was to get the SBS2003 server running so well under XenServer. Well it's been about 300 days since I did this work. How do I know it's been 300 days? Easy - the SBS2003 server crashed for other reasons and I was compelled to reboot both - with an uptime of 281 days! Apparently it has been running flawlessly - indeed the logs from both servers would suggest this.

Backups to a NAS were set up shortly after the server was commissioned and these have been running like a freight train - for which I'm profoundly grateful. Fortunately in this instance the server simply needed a restart and it was up, running and doing its job quite happily.

I should also note that I blew a tonne of dust and dirt out of it and the whole time the N40L was like the little train that could :-)

If you have a low usage server, you're looking for a simple solution and doing a P2V migration doesn't scare you, then this is a fine option.

I now have two N40L's at home - one running FreeNAS and the other running Mint - both are running very well. I note that the price for them has gone up to $300 from eBay now....

Tuesday, 25 June 2013

Who owns the data?

Firstly, let me make it clear I am not a lawyer - IANAL. So naturally I was left stressed and scratching my head at a recent and very difficult situation.

A client of mine was having a shake up at the top end of the company. I don't know why and I didn't ask. The boss was on leave, directed in some way by the Board of Directors, and I was being given conflicting requirements - one boss saying "Don't do anything - stuff is happening in the background" while the boss appointed by the Board was asking me to do various things to essentially keep the business running.

The way I've dealt with it, and I think it's the approach to remember for the future, is to request a document, in writing and signed by a large number of the Board members, giving you permission to do as requested by the Board-appointed boss. I asked for the Board chairman, two other board members and the CEO or boss to sign the document - thus ensuring a majority of high level stakeholders were involved in the process.

The key thing about this to remember is that my organisation was working with their organisation not individuals working with other individuals. As much as I might have a relationship with a member of that organisation, the important thing is that it's a relationship between two businesses. The Board is the controlling entity of that other business and the document you get from them means that if they aren't behaving there is a limitation to liability for me - I have been directed in a manner that I can reasonably indicate was from a legitimate controlling entity.

At the highest level too, the business's data all belongs to that business - not to the people working there. This can be tricky of course if there is some intellectual property involved, but that's what the courts are for. At any rate, in this particular instance, once I had that paper shield (as it were) I went ahead and performed the tasks as requested. If I'm ever challenged on that, I can simply say - here is the document signed and presented to me by the Board of Directors. There isn't a higher power in the organisation, so as far as I'm concerned as an IT professional I have to do as they ask, or we lose a client. I think as long as what I'm asked to do isn't in contravention of any ethical or moral strictures then this can work well.

I hope you, gentle reader, don't get caught in the middle like this. It's very uncomfortable and must be handled with some care. Good luck and have a good lawyer - like I do (thanks AP!)

Saturday, 8 June 2013

Reducing the winsxs folder - why it won't happen until Microsoft change Windows Updates

Recently I, along with thousands of other system administrators, have been swearing more than usual. If you're one of these people, you know why - the winsxs folder starts to use huge amounts of space on your hard disk. If you've gone by Microsoft recommended practice and created a 50GB C drive, then you also know how quickly this screws you.

Is there a way to reduce this folder? If you apply a Service Pack and then go through the mechanism that makes it permanent (removing the option to uninstall it), you'll grab a little space back, but as you apply more updates, patches and new drivers the winsxs (or Windows side-by-side) folder will continue to grow. Theoretically it allows you to roll back if something goes wrong with some component of the operating system. In real life though, this isn't really done - we can just reinstall the busted component or whatever.
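
That mechanism, on Windows 7 and Server 2008 R2 with SP1 for example, is the DISM service pack cleanup, run from an elevated command prompt - note that it makes the Service Pack permanent:

dism /online /cleanup-image /spsuperseded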

The system that controls the winsxs folder is tied to Windows Updates - and as we all know in Windows Vista, 7, 8, Server 2008, 2008R2 and 2012 Windows Updates can be hit and miss. Updates failing to apply, updates taking forever to apply or the ever annoying "Shutdown and install updates" that drags on forever. Vastly different to Windows XP of course.

Most often the fix is to replace the disk, or expand your hard disk size but this is frustrating and annoying. It's also expensive for our clients. Unfortunately it seems to be the best way to solve this issue - there is no short term fix or even any hint from Microsoft this will be solved.

I recommend that for new servers, a 100GB C drive is configured to give you breathing time until Microsoft finally manage to solve this issue. If you know better - hit the comments.

Thursday, 16 May 2013

Why I won't use MelbourneIT for domains any more

Recently I have had three new clients come to our company for IT support. Each one uses MelbourneIT's domain name services - MelbourneIT is the registrar for the domain, hosts its DNS and hosts the web sites. In all three cases I've needed to make changes to DNS records and have been unable to get usernames and passwords for the managed services from MelbourneIT, despite having the authorised users available and requesting the changes themselves. Emails with reset account details have never arrived and after an estimated 3.5 hours on the phone I've given up. We'll set the DNS records up somewhere else and migrate the domain to a new registrar.

Poor customer service like this is not unusual in the IT world. Often a client will ask who to use for the domain names, where to host their DNS etc. I've always had very good experiences with Westnet and now with iiNET, reasonable experiences with Telstra on the Business Broadband and associated services and good experiences with Netregistry. While my company is a Telstra Fixed and Data dealer, I'm not personally involved in that part of the business and we don't resell any of the other companies' services. I've used Netregistry personally for some years now, and I still have my DNS hosted with Westnet. Both organisations have been great to work with and I'm very pleased with the experiences I've had. I recommend them both to people I have as clients and friends. Contrast that with MelbourneIT's apparently poor customer service and I won't recommend them to anyone - not until I see a real improvement there.

I don't quite understand why so many companies - some with great products - skimp on customer service. Most of the time, the client is buying the sales chap or the customer service rep as much as or more than the product in question. I always keep this in mind with my own clients - great customer service makes for sticky clients - they won't leave. Admitting mistakes can be seen as very detrimental, but I've always found that the admission, an apology and a plan for reparation have been very positive. I've read too that in medical circles, some hospitals and doctors are apologising when things go wrong and people forgive them enough to drop lawsuits. That's a bit off topic, so I'll drag it back.

The technical support staff I spoke to at MelbourneIT offered support but have failed to follow through, and this is now impacting on my relationship with the clients - they've started seeking a scapegoat for the things that aren't happening in a timely manner and unfortunately we IT people all get tarred with the same brush to a greater or lesser degree. It's been frustrating enough that I'll take my business elsewhere - and my clients' business too. Unfortunate for MelbourneIT but good for others.

Monday, 13 May 2013

Adventures with migrating Windows SBS2008 to Windows SBS2011 - Part 2


We take up the exciting adventures in migrating Windows SBS2008 to Windows SBS2011. The day is getting older and the laborious task of migrating Exchange data looms before us. We start by creating new Public Folder stores and configuring them. There is quite a bit of jumping backwards and forwards from the source (old server) to the destination (new server) during this process. Note there is a bit of command line work here – I highly recommend using tab complete where possible. If you haven't used this before, type the first bit of a command or location and hit the TAB key – it will bring up the first match to those characters. Keep hitting Tab until you get what you want. Typically if it's a multipart command, I'll type a few letters, TAB, then a few more letters, another TAB etc until I minimise the number of letters I have to type to the bare minimum. It's very *nix-y :-)

The mailboxes for the users – fortunately small – are in the process of migrating. Next will be data files and shares. We expect this to take the bulk of the time for the process. Note that the public folders suggested waiting 24 hours for them to complete the migration (No way!). This particular site has no data in the public folders so we can safely blow past this part.

The exchange migration was relatively straightforward and simple for us – not a lot of data and all over in about 30 minutes. On to the file migration and starting with the UserShares I was pleased to see that the command used was Robocopy. As you’d know from this blog I really like robocopy and its venerable cousin xcopy. Those applications have been great tools in my arsenal. We also set up the second partition – all the line-of-business data will go in here and we’ve got around 70GB of data to transfer for that. The robocopy transfer of the UserShares folder ran at about 500MB / minute so we’re looking at about 140 minutes for the big data transfer. We thought we’d get a bit of a jump on a few other bits of the migration – namely WSUS but the source server was running flat out keeping up with its new brother’s demands, so that was a no-go. Much of the migration of Fax, internal website and a few other features we were able to skip as this organisation doesn't use them. We’re at 9.25 hours so far and making good time (touch wood).

We RDP'd into the old server – it turned out the console was misbehaving – and started cleaning up WSUS. This process took 10 minutes by itself. Then it was time to set it up to migrate to the new server. The data copy was still proceeding and the log file was over 6MB in size already. We reviewed WSUS and decided to stop the migration – we'll download it afresh and configure it only for the existing machines, cleaning up lots of other stuff in the meantime.

Creating a spreadsheet of all the share permissions is a handy thing to do. If you've got a lot of folders with complex permissions, I find this to be a good way to keep them all straight. It's also a good opportunity to review Security Group membership and how these groups are applied to folders. Robocopy, with the switches the documentation suggests (/COPY:SOU), brings across all your ACL information, so security is pretty easy to get going.
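
For reference, the robocopy invocations are of this general shape – a hedged example with placeholder paths rather than the exact command from the documentation (/COPYALL is shorthand for copying the data plus the security, owner and auditing info):

robocopy \\OLDSERVER\UserShares D:\Shares\UserShares /E /COPYALL /R:1 /W:1 /LOG:C:\Logs\usershares.log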

We’re up to the finish of the migration. We need to demote the old server, remove Exchange and a few other tasks. It’s a bit scary – this is the end stage. Luckily we still have the backup in case things go pear shaped. Here we go.

Removing Exchange 2007 from the old server proved challenging. There were a few difficult moments, an early "Who's your daddy!" cry, then some silent weeping and finally success. I won't bore you with details – sufficient to say it's a bit of a process but our Google-Fu was up to it. The actual process of removing Exchange was surprisingly slow too – the files took a long time to delete. Not sure why – they aren't that big, and yet, 15 minutes after it started deleting them, it was still going. Once this process finished, we removed AD Certificate Services and then demoted the server, rebooted it and removed it from the network. Apparently we should now be done with the old server.

Uh Oh! It turns out that the users' roaming profiles weren't properly copied across! Oh Noes! We powered up the dirty old server and started copying data across to a USB memory stick. Although it's only 5GB, the files are all little and so the copy time is suggesting 1 hour 20 minutes (!) I think this is uncool and my colleague agrees. Fortunately this does give us time to try and fix another issue that's cropped up – the desktop PCs haven't updated where they are supposed to get the redirected folders (Desktop and Start Menu) from. They're still looking at the old server. So, with more swearing – as it is now 6:30PM and we've been at this since 8am – we tackled the next issue. The desktop PCs' registries suggest they are looking in the right place but the folder redirection still fails (it's still looking at the wrong place, so the issue of having no data there isn't yet a big one). Running gpupdate /force hasn't seemed to fix it yet. We'll update the files, then try again – especially because the RedirectedFolders share is empty – we had copied that data across, so we're not sure what happened there.

This is a longer than anticipated process – the 5 or so GB of data is all very small files and so takes ages to copy across – more than an hour to copy it off, and only about 20 minutes to copy back in.

We found that the desktops were continuously looking in the wrong place – they were using the old server's name in the UNC paths. Rather than update a million shortcuts and keep fighting the desktops, I added a CNAME in DNS to point the old server name at the new server. Everything started working! We did find a multitude of strange, legacy shares that required recreation, so try to get all of this down on paper before you get started. By this time we'd been at it for 15 hours each and we were starting to get a bit pissy. Thankfully Outlook and most other apps continued to work – we didn't need to create new profiles or anything.
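
The CNAME can be added through the DNS Management console, or from the command line on the new server – a hedged example, with example.local, OLDSERVER and NEWSERVER as placeholders for your own domain and server names:

dnscmd . /recordadd example.local OLDSERVER CNAME NEWSERVER.example.local.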

We found that Exchange also had no outgoing connector to send email out. Internal email worked fine and the server was receiving mail but we couldn't spam, uh, I mean email, anyone. There was no Hub Transport Send Connector created. We got this going and suddenly the queued mail we had flowed through. It was quite spectacular really – we'd sent a *lot* of test emails :-)

It was at this stage we made the executive decision that all the major boxes had been ticked and the network was operational. We had allotted 16 hours for each of us and we came in at 15.5 hours. Not too bad at all. We’ll no doubt have some problems on Monday but for now we’ll knock off and collapse at home. I hope some of this information is informative or at least entertaining. Here’s hoping your migration goes as well or better.

Sunday, 12 May 2013

Adventures with migrating Windows SBS2008 to Windows SBS2011 - Part 1


Approaching a major migration can be a very stressful event, especially with a Small Business Server involved in the mix. Migrating one from an existing server to a new one is even more fraught with danger. Over the course of this weekend, we are migrating a Windows SBS2008 server to a brand new Windows SBS2011 box. There is some great documentation from Microsoft about this process and I'd like to share some of the experiences we had.

Firstly, it's critical to assemble and test the new server before anything else. Give yourself enough time to do this. Even a pre-built server delivered by HP/Lenovo/IBM still needs testing on your part – disk, RAM and CPU at the least. Build your RAID arrays too, and have them prepped and ready to go for installation. Also, identify what drivers you will need – particularly RAID controller drivers and network adaptor drivers. If you've got the NIC drivers, then you can download any others you might need.

Getting the right documentation is helpful – the Microsoft Migration documents are very thorough (and long – 60+ pages) but are pretty well step-by-step. A very large USB drive is also handy and a laptop with internet connectivity is always a must.

We ran through the initial stages, getting the 2008 server patched up to the levels required for the migration. Then a backup of the C drive – we’ll migrate data via the network later. This reduced our backup from 250GB+ to under 90GB, which took a little over 30 minutes to complete. Then, the SBS2011 disk went into the old server and the migration prep tool was run. We created an Answer File (needed for the new server) and called it a night – it was a Friday after all and around 8pm.

Next morning – installation of the new server started. Drivers for the RAID controller and NIC were needed pretty quickly. When you run the installation it runs in Attended Migration Mode – the migration process gives you 3 weeks to complete the migration, with the possibility of having two domain controllers on the network at once. After this time, the initial server stops working and that's that. We experienced a BSOD trying to get the network card drivers to work – ouch! Here's hoping it recovers back to the same point in the installation…. And it more or less did, except the server doesn't have the option to install a network card driver like it did before. Two options only – Test Network Connection and "How to troubleshoot network issues", which opens the Help documentation. There's nowhere to install a driver or configure the NIC. Hitting cancel shut the server down. Uh oh. We'll crank that mofo up again and see what happens. Remember too, the network adaptor's IP has been pre-configured via the Answer File – this actually worked.

The server rebooted and now it wants to activate because the activation period has expired. How the hell did we get there? OK, we've chosen to enter the activation code (cause we haven't put any codes in yet). We got through that and now we're back at the same screen about the server being unable to proceed because it doesn't have a network connection. A three finger salute (CTRL/ALT/DEL) allowed us to access Task Manager and get to the Device Manager from there. Remember how I mentioned it would be great to have the right drivers handy? We thought we had the right ones, but alas, we did not. After hunting around and testing several different drivers, still no luck. I'm sure there's more on the HP website and there is… let's try some others!

While that process was going on, I looked into some details about Windows SBS 2011. From the Microsoft OEM site:
“Designed and priced especially for small businesses with up to 75 users, Windows SBS 2011 Standard is a complete solution designed for customers who want enterprise-class technologies in an affordable, all-in-one suite.
Built on Windows Server 2008 R2, Windows SBS 2011 Standard includes Microsoft Exchange Server 2010 SP1, Microsoft SharePoint Foundation 2010, and Windows Software Update Services.
Windows SBS 2011 Standard is a great opportunity for small businesses with prior versions of Windows SBS to upgrade their servers and to simultaneously take advantage of the advancements in security, reliability, and connectivity technology.”

Interesting indeed. SBS2011 is the last SBS. Windows Server 2012 Essentials doesn't include Exchange and therefore becomes very pricey for small operations once Exchange licensing is added – bring on Google Apps for that. At any rate, we've now downloaded the HP ProLiant Support Pack and are installing it – 33 minutes to complete! Once this finished, all the hardware driver issues were resolved and the server continued on its merry installation way. Sadly the Server Activation issue cropped up again and suggested we were victims of software counterfeiting – oh noes! Attempts to resolve this ended up in another reboot – which is usually OK but it does take a while. After the restart we found the DNS wasn't working – we adjusted it to use the ISP DNS and the server rebooted again, without much in the way of a by-your-leave. With the server live again the installation/migration continued.

The latest reboot resulted in the migration continuing and the Windows Activation window popping up – this time the Activate Online option was successful and the migration tool continued to expand and install files. 30 minutes until it finishes!

So it turns out 30 minutes was optimistic. An hour on and the process was still running. We had time for coffees, pies and sandwiches. Hopefully it will finish soon…. During the interminable time while it does the migration, we noted that DHCP had stopped working on the network. This was being delivered via the SBS2008 server and shouldn't have been affected. We restarted the service and DHCP was restored. We're not sure why that failed.

OK so it was more like 90 minutes than 30 minutes. The server rebooted – hopefully because it’s supposed to and Windows is starting again. Sadly the process is still continuing after the reboot and the internets tells us it could do it 2 or 3 times!

After two hours the expand and install files process completed. It is now time to run the Migration Wizard, starting with File locations. This includes Exchange files and data files. We went through and used the default locations, then detected the network. It picked all this stuff up correctly and the exciting journey continued </sarcasm>.  We’ll take up the migration in the next blog post – where we begin with migrating Exchange data.

Friday, 10 May 2013

Adventures with the Cisco SRP527 ADSL2+ Router

This is a review of the Cisco SRP527 ADSL2+ router as much as it is an overview of my experience
with it. For a long time I was a huge fan of the Netgear DG834 series routers - for around $100 you got a router capable of wireless and 5 VPN tunnels, with a reasonable (but not fantastic) firewall, and it was very reliable. It's only been since Netgear cancelled this excellent series, pushing users to the higher end models for VPN and to other, non-VPN capable routers for home use, that I started casting further afield for a new, reasonably priced, VPN capable ADSL2+ router. WiFi wasn't that important - TP-Link do a reasonable wireless access point for around $60 that we've deployed very successfully and I don't mind the separation of devices. One of my colleagues heard me bitching about the Netgear changes and suggested I check out the Cisco SRP range of routers. Usually the only Cisco routers I've played with are 800 series or 1900 series - routers that require care and patience to set up, plus command line skills that I don't really have - I'm a *nix dude after all.

He was using the SRP547, the model above the one I use now, at home and loved it - he was able to control his kids' access to YouTube and Facebook, killing their WiFi so they'd sleep and allowing him full access to his bandwidth :-) More importantly, they are reasonably priced and capable of both WiFi and VPN support. We started to sell a few of these devices and recent events enabled me to pick one up for home. I need a VPN to the office for remote backups, maintenance and monitoring so it was an excellent choice.

The Cisco SRP527 is an unassuming looking beast. It's in the same chassis as the 547 (and the RVS4000 for that matter) and offers a wealth of configurability. First things first - it has a web based front end that is clear and easy to navigate around. There are a lot of options, but they are fairly intelligently divided up and you can follow your nose looking for things. I set up my ADSL credentials, configured my firewall - note that you have to set up the Port Forwards, then the Advanced Firewall to get things moving in the right directions. I made the error of assuming that since I'd set up the Advanced Firewall options I didn't need to do the Port Forwarding - you do. But you don't have to set up the Advanced Firewall if you're allowing any access to the port forwards.

The 527 has 4 10/100 ports, one of which can be used as a second WAN port. It has 802.11n wireless capabilities and 2 phone ports.

Rightly placed under the Cisco Small Business SRP500 Series Services Ready Platforms on the Cisco support pages, these are terrific devices. Not only did I have it up and running, with the VPN connected successfully to the SRP547 at the office, but I picked up almost 1Mb in speed on my ADSL line. Not bad at all for 15 minutes work. Even setting up the SRP547 at the office, with *many* more port forwards and some quite complex routing only took about 45 minutes.

The thing that probably impresses me most about this device is the reporting. The status page gives you a breakdown of so many different things it's amazing. I can see how much data the port forwards are doing - individually - along with WiFi stats, ADSL stats, VPN traffic stats and so much more. For someone like me it's awesome - I can watch this while I'm testing various pieces of hardware and things that I'm doing - useful if I'm trying to work out what's sucking the life from my internet connection.

Things to note - when upgrading the firmware make sure you pick the right one. I inadvertently used the SRP520 firmware instead of the SRP520U firmware. Luckily a restore from backup fixed everything. With a reasonable price tag and lots of stuff it can do - it's well worth checking out the Cisco SRP527. With a bit of extra coin go for the SRP547 and get gigabit network ports!

Saturday, 27 April 2013

Linux and the Desktop

Often as I read my Internet news I note the perennial question of "Is Linux ready for the desktop?" being asked yet again, and I started thinking about it. I've also recently been playing with Windows 8 and I said to my co-worker "Is Windows 8 ready for the desktop?" We both laughed. As an outgrowth of this I started to really look at what it would take for me, as a consultant with a range of small and medium business clients, to move them to Linux.

Obviously there are a range of applications that are not cross platform - usually either financial (MYOB etc.) or industry specific stuff. Wine could potentially take care of this, or even running Windows in a virtual machine - which is common with people needing Windows XP to run specific software and being unable to run it on Windows 7 (and also 8). As for the common applications like word processing, spreadsheets and presentations, there are a range of Open Source versions like NeoOffice, or even Google Apps or Office 365.

If you consider application support beyond the specific software noted above, there are plenty of applications available in your Linux of choice for all the stuff you want to do - video / audio / image editing, video / audio play etc. The options are numerous.

Taking into account Linux's robust architecture and resilience against viruses, trojans and malware, the operating system starts to really look good. It's not uncommon for me to have a Linux desktop with 30 plus days of uptime. Updates are easy and encompass the OS as well as apps, all in one reasonably easy to use package (I like Linux Mint personally and it's great for all these things).

Take Android for example. Prior to it gaining popularity I bet there were a lot of people asking if Android was ready for the mobile market place. Time has certainly shown that it is. The difference between the mobile phone marketplace and the desktop is simply that the competition is so much more fierce. Microsoft's stranglehold over the desktop is strengthened by their bundled browser Internet Explorer, by Windows Media Player and by buy-in from all the big software firms.

Chromebooks - Linux on the desktop! Obviously it's ready - it just needs the brilliant packaging and design of something like MacOS X, and probably someone big to push it. Dell have been offering it for some time now, and the excellent Ubuntu Windows installer (also available for Mint) gives the new user a chance to play with it.

Give Linux a try - if only in a virtual machine - and make up your own mind.

Saturday, 30 March 2013

Further adventures of XenServer on the HP N40L Microserver

We all know my delight in using the excellent N40L for all sorts of things. Recently a client of mine had issues with their Dell server - a server that had cost them over $20,000 5 years ago. It runs Windows SBS2003 and does a bit of file serving and not much else. I've migrated them to Google Apps for mail/calendar etc so they aren't even using Exchange. Unfortunately this client has fallen on hard times with the GFC so when this huge and expensive server of theirs began to fail, they asked for a low cost option to save their data and have a minimum of downtime.

I had just purchased an N40L for my test lab and as their disks continued to decline was able to get a complete image of the system. What surprised me was they had a 5 year old server with 7 year old disks in it! What the? I acquired some Western Digital Red Drives and installed them and 8 GB of RAM into the N40L. My initial idea was to use Acronis or similar to do a Universal Restore of the data to the N40L, update drivers and software and put the machine back in. After all, this server lives in their main office space - you can imagine what a Dell 2950 Tower server sounds like in your ear day after day.

Unfortunately my imaging project was unsuccessful. Windows SBS 2003 did not want to play the game and so I was left pondering my next move. I could buy a new copy of Windows SBS (2011 in this case) and migrate data across, a time consuming effort and with the Microsoft Tax on Australian software not an inexpensive option. I could do something dodgy and get a.... no no no. Life is too short to pirate software. At any rate, the option of a physical to virtual migration was available. So I installed XenServer 6 on the HP N40L. I installed to one disk and set up the hardware (really software) RAID via the BIOS. I'm not sure if this mirroring will actually work, because XenServer only sees the two disks. I reasoned that if software RAID is running and I install to one disk, then the BIOS level RAID should mirror both the disks.... when I have the leisure I'll test this. At any rate, 15 minutes later XenServer was up and running and ready for stuff to happen.

Because I was in a hurry I slammed a copy of XenCenter on my notebook, connected to the server and configured a Windows 2003 SBS guest with roughly the same parameters (disk, RAM etc.) as the original server, imaged it across as if it were a physical server and held my breath. The server booted in the virtual environment successfully! It was running like a bucket of pus, but after installing the Xen drivers it was running better than it ever had before - this made my clients very happy. I configured an external USB drive to act as the backup device and kicked a backup off. It failed and has continued to fail - there seems to be some odd conflict with the device.... at any rate, the server is running and now I need to put a small NAS in for backup purposes - one which I will mirror to an offsite location.

So for a relatively short amount of downtime, and much less than the cost of a new full sized server, they are operational. When it's time for a proper new server, I'll set up another XenServer - using hardware RAID this time (which will work) - and simply migrate. The server isn't forward facing and the firewall allows only file serving, with all other services disabled or firewalled off. It makes for minimal disruption for the client, and once I manage to convince them to migrate to FreeBSD or GNU/Linux for their file serving the basic platform will be ready to go - I won't even need to buy another server, simply configure an additional VM and away we go.

Friday, 29 March 2013

HP N40L and FreeNAS 8.3.0

My existing HP N40L Microserver is running out of disk space. 2 TB is not enough it turns out. So I thought why not add another N40L to my network? After all, it's been a success with my existing one thus far.... So on to eBay I went, and I found an Australian company selling them for $209 delivered! I'm amazed these are so cheap - after all even low end PCs are more than this. So I ordered one up and it arrived three days later. I put a couple of 2 TB disks into the box, an 8 GB RAM DIMM and an 8 GB usb drive. Half an hour later I had FreeNAS 8.3.0 installed and a 2 TB array set up.

With an NFS share I can access the 2 TB array from my media PC and it all runs brilliantly. I've got space to add two extra drives, and once I get two more disks I'll install them - running two 2TB mirrors and sharing out data easily. The N40L runs very quietly and efficiently and even running two of them is very quiet in the lounge room. I've used Western Digital Green disks from 2 TB external USB drives. At $109 each, plus $209 for the N40L, for $436 I've got a reasonable little NAS here. Another $218 and I've got a 4 TB NAS! It's stable and runs brilliantly. FreeNAS is an excellent platform for this, easy to upgrade and very stable, with a wide range of network protocols available for connecting to it. I'd heartily suggest using a server like this for a backup server or simple data storage. Add a couple of extra gigabit ethernet ports via the PCI Express card slots and LAGG them together for greater throughput, and this simple and inexpensive NAS has even more applications in the business arena. I would strongly recommend 8 GB or more of RAM so caching can work effectively - this will improve data delivery.

As I type this I note an update for FreeNAS has become available so I'll grab that and install it!

FreeNAS details here: http://www.freenas.org

HP N40L Microserver details can be found here - HP N40L Microserver (URL truncated because it's awful)

SSDs - a new lease on life for older hardware

Solid State Disks have been on the market for a while now and the price per gigabyte is coming down, which is starting to bring them into the realm of affordability. While recently looking to upgrade the disk in my Dell Inspiron 1102 netbook I was offered a 128GB SSD. I didn't really think much about what it would do in the computer until I'd installed it. Once I got the thing imaged and transferred across into the netbook I was pleasantly surprised by both the performance boost and also the boost in battery life. I was amazed actually. It was a much better machine than it had ever been, running Windows 7 quite well along with most basic Office apps.

I've procured a second SSD for a venerable Lenovo R500. The specs on this notebook are pretty reasonable, but with a 6 cell battery its lifespan wasn't great. An upgrade to a 9 cell battery gave it a boost, but not a huge one. Installing an SSD made a significant difference. 6 hours of battery life is easily achievable while using the net or office productivity applications. Speed hasn't really been such an issue with this particular laptop but now it's even better.

For high end gaming rigs SSDs are the norm and even in servers now we're seeing them more often. I've changed the set up in my desktop PC - boot from an SSD, with a 2TB SATA disk for data. As the price for SSDs continues to drop they are definitely worth considering for even lower end applications. My two older laptops now are going to be useful for longer and perform better than they ever have before. Consider it for your older SATA capable notebooks!

Friday, 8 February 2013

Traffic Monitoring using Ubuntu Linux, ntop, iftop and bridging

This is an update of an older post - as the utilities change, so has this concept of a cheap network spike. I use it to troubleshoot network issues, usually between a router and the network, to understand what traffic is going where. The concept involves a transparent bridge between two network interface cards, and then looking at the traffic crossing that bridge with a variety of tools to determine network traffic specifics. Most recently I used one to determine whether a 4Mb SDSL connection was saturated or not. It turned out the router was incorrectly configured and the connection had a maximum usage under 100Kb/s (!) At $1600 / month it's probably important to get this right - especially when the client was considering upgrading to a faster (and more expensive) link based on their DSL provider's advice.

Hardware requirements:


I'm using an old Dell Vostro desktop PC with a dual gigabit NIC in it - low profile and fits into the box nicely. Added a bit of extra RAM and a decent disk and that's really it. I'm also running this on an old Dell D420 with a gigabit PCMCIA adaptor - useful for the out and about jobs.

Software requirements:


  • Ubuntu 12.04 LTS - I've chosen this for longevity purposes, previously I'd used non-LTS operating systems and the updates naturally ran out. I tried this with FreeBSD 9.1 but had issues with packages and getting traffic across the network bridge effectively (probably more my screw up than FreeBSD's)
  • ntop - network traffic analysis monitor from www.ntop.org. They have version 5 available from the repositories on the site, version 4 is included in Ubuntu 12.04
  • iftop - a neat command line package that shows network usage from a terminal screen. Highly configurable
  • tcpdump or equivalent for deeper packet analysis.

Configuration

Setting up the box and the ethernet bridge
Setting up the box is straightforward - go through the usual Ubuntu installation, then use aptitude or apt-get to install bridge-utils and iftop.
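
For example (ntop comes separately from the ntop.org repositories, as described below):

sudo apt-get install bridge-utils iftop tcpdump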

We want the network bridge between our ethernet adaptors to come up automatically. To do so edit /etc/rc.local and pop this into it (assuming eth1 and eth2 are the interfaces you want to bridge. I have eth0 configured statically in this instance so I can browse from other machines to it)

/etc/rc.local
brctl addbr br0                    # create the bridge interface
ifconfig eth1 0.0.0.0 promisc up   # bring both NICs up with no IP, in promiscuous mode
ifconfig eth2 0.0.0.0 promisc up
brctl addif br0 eth1               # add both NICs to the bridge
brctl addif br0 eth2
ip link set br0 up                 # bring the bridge itself up
This will bring the bridge up at boot time.

ntop

After you've added the necessary repositories to your aptitude configuration, install ntop5 using apt-get install ntop5

I run this from the command line - as a service it seems to fail fairly consistently. The command is:

ntop -P /var/lib/ntop -Q /usr/local/share/ntop/spool/ -i br0 -u ntop -m 192.168.0.0/24 -d

-P sets the database file path
-Q sets the spool file path
-i sets the interface (br0 as per /etc/rc.local)
-m sets the local subnet - in this case 192.168.0.0/24 (change to suit)
-d sets it to become a daemon freeing up your terminal
Browse to localhost:3000 to find your ntop installation, or if you have a third network card go to the address on the network e.g. 192.168.0.30:3000 and view your traffic stats.

iftop

To get what I want out of iftop, I run a script that calls it and configure the /etc/iftoprc file. The script is:
bridge_monitor.sh
#!/bin/sh
# customisable settings
LOCALNET="192.168.0.0/24"
IFACE="br0" # the bridged interface
CONF="/etc/iftoprc"
/usr/sbin/iftop -p -n -N -i $IFACE -F $LOCALNET -c $CONF

The contents of /etc/iftoprc are:
dns-resolution: yes
port-resolution: yes
show-bars: yes
promiscuous: no
port-display: source-only
#hide-source: yes
#hide-destination: yes
use-bytes: yes
sort: 2s
#line-display: one-line-both
show-totals: yes

Again customise to suit and start monitoring that network!
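
Finally, for the deeper packet analysis mentioned in the software list, tcpdump can listen on the same bridge - a quick example, with the host and port filters as placeholders:

sudo tcpdump -i br0 -n -s 0 -w /tmp/capture.pcap host 192.168.0.50 and port 80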

Wednesday, 6 February 2013

OTRS Restore Procedure and backup script

As I note in my previous post, I managed to kill my OTRS install and as usual had to trawl around the net to remember how to restore it. In a nutshell:

# mysql -u root -p
mysql> drop database otrs;
mysql> create database otrs;
mysql> exit
# /opt/otrs/scripts/restore.pl -b path_to_backup -d /opt/otrs

You did back up right?

Nightly I run a script with the following in it:

otrs_backup.sh


#!/bin/bash
# Variables below - change these to suit
NOW=$(date +"%Y-%m-%d_%H-%M") # this gets the correct file name for OTRS backup
LOCAL=/root/backup # a local directory for OTRS to backup to
REMOTE="user@backupserver:~/backup/OTRS/" # remote backup destination (copied via scp below)
/opt/otrs/scripts/backup.pl -d $LOCAL # OTRS internal backup (files and DB)
tar -cf $LOCAL/$NOW.tar $LOCAL/$NOW # creates a file from the OTRS backup folder - more efficient to copy over a network
gzip $LOCAL/$NOW.tar
rm -rf $LOCAL/$NOW # tidy up
scp -r $LOCAL/$NOW.tar.gz $REMOTE # scp to remote directory


You may wish to run this from crontab after copying otrs_backup.sh to /usr/local/bin:

0 20 * * * /usr/local/bin/otrs_backup.sh

This will run at 8pm each night - theoretically you could run it more frequently. OTRS databases with a lot of attachments get quite large though, so be mindful of that (I have a couple I manage that are 1GB and only 5 months old)

Enjoy

Upgrading OTRS 3.1 to 3.2.1

After noting that our OTRS (www.otrs.org) was complaining about a major release update pending I took the plunge this morning and set about upgrading it. Initially I ran through the normal upgrade procedure and couldn't log on. Oops. Maybe I need to pay more attention here? Turns out there are quite a few caveats about this upgrade, and I'm hoping that what I note here will assist you - especially the database upgrade stuff. That was a bit of a surprise!

Initially I ran my normal otrs_pre_upgrade.sh script which stops services and backs everything up. That script looks like this:

#!/bin/bash
service cron stop
service apache2 stop
NOW=`date +%F`
mkdir /root/backup/$NOW
BDIR=/root/backup/$NOW
cp -R /opt/otrs/Kernel/Config.pm $BDIR
cp -R /opt/otrs/Kernel/Config/GenericAgent.pm $BDIR
cp -R /opt/otrs/Kernel/Config/Files/ZZZAuto.pm $BDIR
cp -R /opt/otrs/var/ $BDIR
/opt/otrs/scripts/backup.pl -d $BDIR

Usually I then ln -s otrs-new otrs and run my upgrade script - but something failed along the way. Here is what I found:

Firstly, there are a lot more PERL modules required in 3.2 - these three caught me out:

  • YAML::XS
  • DBD::ODBC
  • JSON::XS
I added them using aptitude - my OTRS install is on Ubuntu 12.04 LTS (www.ubuntu.com) - easy enough to do and then checked the modules again. If you are following the UPGRADING documentation, you should run:
  • /opt/otrs/bin/otrs.CheckModules.pl
This will tell you what modules you require. I didn't bother with the Oracle or PostgreSQL modules as I'm not using those databases, nor am I interested in the Encode::HanExtra (no Chinese characters). 
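
On Ubuntu those three modules map to Perl packages along these lines - the package names are from memory, so double-check them with apt-cache search before relying on this:

sudo apt-get install libyaml-libyaml-perl libdbd-odbc-perl libjson-xs-perl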

Secondly, there are database changes to be made. MySQL uses INNODB as the default storage engine. I've never even thought about this before - OTRS had always just run happily without asking me fancy questions about this sort of thing. Now, in step 9 of the UPGRADING document, I had to apply database changes, including changing the storage engine on the tables - INNODB for most of them, with the search_* tables switched to MyISAM.

Fortunately I stumbled across some nice scripts to do this at Techusers.net - here they are tailored to suit OTRS:

Step 1. Get all the table names from OTRS (you'll have to put in your password and you might have to change root to something more site applicable):
mysql -u root -p -e "SHOW TABLES IN otrs;" | tail -n +2 | xargs -I '{}' echo "ALTER TABLE {} ENGINE=INNODB;" > alter_table.sql

Step 2. Adjust the generated SQL so that the search_* tables use MyISAM instead of INNODB:
perl -p -i -e 's/(search_[a-z_]+ ENGINE=)INNODB/\1MYISAM/g' alter_table.sql 

Step 3. Applying the change to your SQL Database:
mysql -u root -p otrs < alter_table.sql 
Much thanks and kudos to the writers at www.techusers.net - this saved me from doing each table by hand! The search_profile table refused to change from INNODB to MyISAM but when I checked the dbupgrade scripts, this particular table isn't mentioned. It did not seem to affect the overall upgrade.

The rest of the upgrade went fairly smoothly. It's important to note, however, that you must go to Admin -> Packages -> Update Online Repository and then upgrade your packages to get better speed from your OTRS install. I found that after I did this, I restarted the apache service (service apache2 restart) and OTRS began humming along quite nicely. I'm still exploring the new features. Enjoy

Wednesday, 23 January 2013

Privacy in the modern times



It seems to me that with the advent of all our social media applications – Facebook, MySpace, Twitter, Flickr, Tumblr etc. – the ability for us to get our thoughts out there is the easiest it's ever been. The detail that this provides to people is remarkable. Ad companies use it for focused advertising, other companies use it for various nefarious means and criminals use it to steal our identities. Less insidious, I think, is that people can know us in a way they never have before. The cost to our privacy seems to be one we're happy to bear though – the most popular consumer mobile devices have Facebook and the like built in and integrated with everything – messages, photos, GPS locations etc. Our internal thoughts and feelings are now able to be externalised quickly and limitlessly. Nine times out of ten this is incredibly boring stuff (let's face it, we're not as interesting as we'd like to be), but the fodder for bullies, abuse and misuse is extraordinary. It's very much like posting a sticky note to the wall at school with your latest thoughts and opening yourself up to complete, uncontrolled scrutiny. We all know how much impact putting yourself out there at school can have. Now we do it on a global scale, and we don't seem to consider the value of our private lives important any more.

Never mind the fact that once something gets to the Net it never seems to leave. Those embarrassing moments, which once passed leaving only an uncomfortable memory, now linger – sometimes that moment makes it to YouTube and can live forever. These little moments, often of excruciating embarrassment, now have the potential to harm us forever. One can seek an injunction to have them removed – a costly and time consuming procedure which often brings even more attention to the moment and so is only a partial remedy. It doesn't stop people from downloading and keeping these images and movies forever on their own personal machines. This is even exploited, as people do stupid things for attention (and get it). The slapstick comedy of the Three Stooges seems to have morphed into Jackass and our collective intelligence has taken a mighty hit. But back to privacy.

I see the youth of today posting details, photos and information about themselves that as a young person I would never have done (and as an old person am even less likely to). The generation older than mine are so reticent about their personal feelings and lives that it can be like pulling teeth getting any information out of them, even under the best and most appropriate of circumstances. It certainly adds to their mystery – another underrated and mostly lost commodity in the world. Whether it's the endless tweets of a person summing up their thoughts in 140 characters or their barely-there clothing, mystery is a lost art. Privacy and mystery are inextricably linked, and we don't seem to realise that as you give up one, you give up the other. Potential partners or even potential employers can look into what you are doing, often without appropriate context, and make judgements on you and your behaviour without having other critical information – for while we do tend to post a lot of information to the net, most of it requires a certain amount of local knowledge (i.e. you had to be there) to interpret.

It is incumbent upon IT professionals to help non-technical people navigate this quagmire. The privacy settings of Facebook (for example) are not clear cut and there have been many times I've seen a profile completely exposed to all and sundry – birthdate, address, phone etc. – everything the budding identity thief needs to acquire and then sell your identity. We need to help people understand what they can and should share with the world. There is a vulnerability to such openness and most lay people don't understand the potential for harm. IT professionals have an obligation therefore to protect people from this potential loss through education and technical assistance. If you consider your current visibility on the Internet – where are you vulnerable?
