
Work Term

Below are my reflections on my time working with the data center at the NSCC Institute of Technology campus! The first half covers my time with the team as a work term student and the work I completed for them. The second half covers the summer months I spent as a part-time Junior Systems Administrator, plus my volunteer experience since I started attending classes again. I loved my time with this amazing team of professionals! Hopefully that is evident in these reflections.

Work term collage

Part 1


I had an excellent experience during my time working in the Data Center at the NSCC IT Campus. Andre was a great supervisor who had a new challenge for me every day and made a genuine effort to include me in as many daily processes of the workplace as possible. From day one, I was involved in the same work as any other member of the team, including some very interesting and informative meetings (Strategic Plan, Dell, and planning for Toll with Bell).

The first major project I was a part of was the replacement of the core switches on the network. This was a six-hour process in which we conferred with three other locations to retool the core of the network from four older Cisco switches to two new ones, with 10 Gbit modules included for dark fiber links between the colleges and universities involved. During my time here, the master switch has gone down twice due to (possibly hardware) failure, so I was also involved in reworking the network to function on redundant links, troubleshooting the main switch, and even getting a look at the programming of the switch itself through Denise (the Cisco rep).

I have also handled several projects on my own, including developing three fully functioning automation scripts in PowerShell that will be used regularly by other techs with the college, plus a handful of smaller scripts to assist the techs here. One of them is already in use by all of the techs, thanks to Andre distributing it. It goes without saying that writing these scripts required a lot of research into many different areas, including PowerShell itself, the Windows API and Task Scheduler, and the Dell iDRAC system for rack server management.

The first script I wrote takes input from the user and remotely creates a PowerShell job on a target machine to reboot it at a specific time. This required studying how Windows schedules its tasks on the back end. The script can schedule a job to execute a PowerShell script block with virtually any timed trigger you could want, and it also allows listing already-scheduled jobs, removing them, and editing the trigger for a given job. Scheduling is supported for a one-time run, daily, and weekly (which covers monthly). I also developed a second, sister script that does essentially the same thing but simply schedules a one-time reboot on the machine instead. That script will actually be the more widely used of the two, since it handles a regular task that used to have to be done manually.
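The simpler one-time version could be sketched roughly like this. This is a minimal illustration, not the actual script: it assumes the PSScheduledJob module that ships with Windows PowerShell, that remoting is enabled on the target, and every name here is made up for the example.

```powershell
# Hypothetical sketch of a one-time remote reboot scheduler.
param(
    [Parameter(Mandatory)][string]   $ComputerName,
    [Parameter(Mandatory)][datetime] $RebootTime
)

Invoke-Command -ComputerName $ComputerName -ScriptBlock {
    param($When)
    # New-JobTrigger/Register-ScheduledJob sit on top of the Windows Task
    # Scheduler back end, which is why that research was needed.
    $trigger = New-JobTrigger -Once -At $When
    Register-ScheduledJob -Name 'OneTimeReboot' `
                          -Trigger $trigger `
                          -ScriptBlock { Restart-Computer -Force }
} -ArgumentList $RebootTime
```

The fuller script would swap the `-Once` trigger for `-Daily` or `-Weekly` variants and wrap `Get-ScheduledJob`, `Unregister-ScheduledJob`, and `Set-JobTrigger` for the list/remove/edit operations.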

The third major script was actually a pair of scripts I developed to help manage the iDRAC system on the Dell servers. Andre was looking for a way to enter that system, detect the jobs with a status of Completed, and remove them so that bulk jobs wouldn't just lie around in memory. He also wanted a way to leave a work note behind in the system whenever he ran a given script on one of the machines. I researched the iDRAC, its work note system, and how the job queue is handled, and came up with a script to find and clear the completed jobs from the system. I then wrote a script (really more of a module) that adds a work note simply stating that the script (by name) had been run; since the system automatically date- and time-stamps every note, this proved to be the perfect way to track when scripts have been run on these machines.
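The pair could look something like the sketch below, shelling out to Dell's racadm CLI. The address, credentials, script name, and the output-parsing regex are all assumptions for illustration; the exact `jobqueue view` output format varies by iDRAC generation, so a real version would need to match the format of the hardware in question.

```powershell
# Hedged sketch: clear completed iDRAC jobs, then leave a work note.
$idrac = '192.0.2.10'   # placeholder iDRAC address
$user  = 'root'         # placeholder credentials
$pass  = 'calvin'

# Scan the job queue for IDs whose nearby lines report Status=Completed
$view = racadm -r $idrac -u $user -p $pass jobqueue view
$completed = $view | Select-String 'Job ID=(JID_\d+)' -Context 0,3 |
    Where-Object { $_.Context.PostContext -match 'Status=Completed' } |
    ForEach-Object { $_.Matches[0].Groups[1].Value }

foreach ($jid in $completed) {
    racadm -r $idrac -u $user -p $pass jobqueue delete -i $jid
}

# The sister script's idea: leave a note naming the script that ran.
# iDRAC date- and time-stamps the entry itself, so no timestamp is needed.
racadm -r $idrac -u $user -p $pass lclog worknote add -m "CleanupScript.ps1 was run"
```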

I also had two other major projects that will (I hope) have tangible results across the college when they go fully into production. I took each of these all the way through the testing phase and into pre-production, so I am quite pleased with that.

The first major project was developing a WSUS server for the college that will serve as the go-between for all of the servers on the network for Microsoft updates. I set up the server from scratch, worked out a system for managing which updates were applicable to our Server 2012 machines, set up a synchronization schedule with the Microsoft servers, and developed a plan for automatic update approvals for the test group of machines I was working with. Furthermore, I had to research and plan the specific GPO settings needed for the client servers to integrate seamlessly with the WSUS server. Once all of this was in place, it was a simple matter of ironing out some rough edges, and at the time of writing everything seems to be working exactly as planned on the ten servers I am monitoring. This has required a bit of management and further learning along the way thanks to major OS updates, GPO tweaks, and some other minor odds and ends.
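For context, the client-side GPO boils down to a handful of registry values under the Windows Update policy key. The snippet below shows the equivalent settings applied directly; the server name is a placeholder, and 8530 is WSUS's default HTTP port. In production these would come from the GPO, not a script.

```powershell
# Registry equivalents of the WSUS client GPO settings (illustrative).
$wu = 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate'
New-Item -Path $wu -Force | Out-Null
Set-ItemProperty -Path $wu -Name WUServer       -Value 'http://wsus.example.local:8530'
Set-ItemProperty -Path $wu -Name WUStatusServer -Value 'http://wsus.example.local:8530'

New-Item -Path "$wu\AU" -Force | Out-Null
Set-ItemProperty -Path "$wu\AU" -Name UseWUServer -Value 1   # point Windows Update at WSUS
Set-ItemProperty -Path "$wu\AU" -Name AUOptions   -Value 4   # auto download, schedule install
```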

The other, more hands-on project was a network USB hub setup that had been sitting on the back burner here for some time. It was handed to me to get up and running, so I physically brought the system online under Andre's supervision, then went into full research mode again, digging for the configuration needed to set it up successfully. Rather than detail this in full, I have opted to include the documentation I already produced for the team to illustrate what I did on this project.

Taking a broad view, I would say I have learned an awful lot from this placement. Every project or script I have worked on, I have needed to research from the ground up, and thankfully I haven't experienced many failures. For the failures I did experience, I was able to dig a little deeper and find immediate fixes. I look forward to continuing to contribute as well, since it looks like I might be staying on in some capacity!

Part 2


Throughout my continued summer work placement, I was tasked with improving WSUS and expanding it to a broader client base of servers. We expanded the server base to 150 servers across multiple campuses. I continued to monitor and refine the update approval scheme and work out bugs in the system, which essentially came down to identifying which servers were not communicating properly with the service. In most cases, the cause turned out to be that Windows Update was turned off on the server itself, so it would not report to WSUS.

We needed a simple and fast way to determine which servers had this issue and to resolve it, preferably all at once. In came PowerShell, once again. Once we expanded the service, I created a suite of scripts for WSUS management.

The first was a script to facilitate server cleanup and manage the space in the WSUS content directory. I scheduled it to run the cleanup once a month to help manage our storage space. Despite this, we still ran short on space, and after analysis and future projections I suggested that we expand the storage by another 20 GB to allow for future growth.

Next, I created a script for managing the client servers. A regular practice while working on servers in the DC is to disable the Windows Update service once initial updates are run. This keeps the servers from rebooting at less-than-optimal times due to updates and spares the systems administrators some aggravation while working with them. Unfortunately, without this service active, WSUS can't properly communicate with the client servers. So we needed a way to find out whether a server's Windows Update service was disabled, and possibly when it had last received updates. I created a script that does both of these things, tying it into a script from my first work term that (in the case of a disabled WU service) launches and lets the admins schedule a job to start the service and install any updates that were parked. This automatically causes the server to reboot and report to WSUS, bringing it into line to use WSUS for future Windows Updates.
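The detection half of that check could be sketched as follows. This assumes Windows PowerShell 5.1 (where `Get-Service` still accepts `-ComputerName`), remote access to each server, and a hypothetical servers.txt list; it is an illustration of the approach rather than the actual script.

```powershell
# Hypothetical sketch: report each server's Windows Update service state
# and a rough last-update time.
$servers = Get-Content .\servers.txt   # placeholder list of server names

foreach ($s in $servers) {
    $svc  = Get-Service -Name wuauserv -ComputerName $s
    $last = Get-HotFix -ComputerName $s |
        Sort-Object InstalledOn -Descending |
        Select-Object -First 1

    [pscustomobject]@{
        Server        = $s
        WUStartType   = $svc.StartType     # 'Disabled' servers won't report to WSUS
        LastInstalled = $last.InstalledOn  # most recent hotfix as a proxy for last update
    }
}
```

Any server showing `Disabled` here would then be handed off to the scheduling script to re-enable the service and install the parked updates.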

Over the summer, I also had the opportunity to attend the HPCS (High Performance Computing Symposium) conference. It was a very high-level technology conference, speaking to the ability of high-speed networking and very powerful server tech to make large-scale global data networks a reality. Some companies already proved to be off to a good start as well. Altogether, the conference was more applicable to the university crowd, with big projects on the go that would require a lot of storage and fast transfer between campuses, but the whole thing was still very informative.

I also got involved with the new profile server project for Citrix (Anywhere Apps), helping Clint set up and spec out the new profiles used for the service this school year. After that was more or less done (as much as I could do with it), I was going to move on to a System Center 2012 update rollout. Then what I have lovingly termed PS-Gate (power supply) occurred. Within the month of August, the data center lost a total of 50+ power supplies across servers, switches, SANs, and other equipment. Obviously, this came at a great cost to the data center and ended up requiring a complete rethinking of the way the server room was wired (in fact, the room had to be completely rewired and re-grounded). Getting to witness, and in a small way be involved in, this process was extremely informative and educational from a disaster-management perspective.

This project is currently still underway downstairs and likely won't be completed until sometime in November. In the meantime, I am continuing to work with the data center team to further automate their day-to-day tasks using PowerShell. I am also helping with any projects that need an extra set of hands, such as the recent server build-up for the Toll (Aliant) data center.

Update...


I continue to volunteer my time with the data center team. My involvement is very small, since classes have to be my primary focus. I have helped develop some new scripts for the team and continue to monitor and work on the WSUS project. The reconstruction of the server room has been completed, and full normal operations have been restored. It has been a busy year thus far, and I am glad to still be able to help in any small way I can in the DC.

- Rob Gauthier