TPR6: Transitioning to a Multi-tier Web Environment

Steven Lewis 
Web Manager/Information Security Coordinator, The College at Brockport, State University of NY


The audio for this podcast can be downloaded at

Steve Lewis: Just a little bit about my background. I have an undergraduate degree in Math and Computer Science from the University of Rochester. Immediately after that, I started working on web stuff at a college about 20 miles to the west of Rochester, New York — at a public institution, as their lone full-time web professional, really.

So if you're talking about the web shop of one, that was me. We finally hired a couple of people, and at least one of those staff people is here with me at the conference. So it's good not to be alone, but when it comes to technology, I'm still really the head of the web unit at Brockport.

So dealing with the technology issues as they come up and keeping things up to date is really what falls to me. And so I hope to take you through a change that we made about 18 months ago with respect to our setup, so that if you're faced with something similar, you can maybe take some of the lessons I learned as part of this process and apply them to your own situation.

So just a brief look at where we are going today. I wanted to give you a sense of the server setup that we began this process with, how we accomplished the transition without any significant downtime — more than a few seconds — during the process, what our new production environment looks like, and how we kind of fell into a new pre-production environment that was better for our setup. I'll talk a little bit about change management, and how we now work on projects before and after this arrangement — how this new setup helps us, really. And then a little about future plans, if we have time: we have a pretty good setup now, but there are pieces that we think are still missing along the way.

So, to take us back to 2009, what we had was a setup…

Do you guys use permanent markers on this board? We seem to be having a minor technical difficulty.

02:01

So this setup was essentially on a Sun box. Sun, I guess, maybe doesn't exist as such anymore, but we were essentially handed off a machine that was coming off another project. They said, "Well, your current hardware went end-of-life about six months ago, but we want to give you this new box." Well, thanks for telling me that now. Without any real thought as to where we wanted to go longer term, their easy solution was to give us more hardware than we'd ever need. So we went from a 250 to a 450, which was really overkill for our web server. But hey, the box was supported whenever it went down — we got nice fast response from Sun to get us back up and running.

We were running Solaris 10, which allowed us to use zones — Sun's virtualized environment within the OS at that time. I don't think we were really playing around with any of their fun features like ZFS or things like that. But at least we were up to date on the OS on our then-current platform.

However, we didn't use any of the tools or services with respect to Apache that Sun provided. What we found is that the versions Sun provided through their own OS were really out of date and didn't seem to be as easily configurable in terms of the Apache modules we needed to run our environment.

So really from the start, when we first switched to UNIX — which was about six months after I got to Brockport — the web server that we'd been running in our environment was essentially compiled by me. That was something our sysadmin apparently didn't want to get into at that time. I'm not sure how that fell to me, but I guess I'd done enough compilation in college that I managed to pull it off. And so that ends up being the platform that we were running on.

Is anyone still using Apache 1.3 out there? Thank God. Because that's one of those things that we just never had a solid reason to upgrade from, and the prospects of 2.0 or 2.2 were just out there in terms of "Well, I don't know how different all these things are going to be," versus "It seems to be working for us, so why change?" So this project was one of those many things that had to change as we moved forward.

04:11

Anyone still using MySQL 4 or PHP 4? A couple, maybe. I think there are four people here from RIT, and last I heard that institution was still running PHP 3. So there is still hope out there for people, and some of these upgrade projects really depend on how much of an installed base you have out there.

We also used mod_perl, and we used that to really just do some filtering on all our pages. We used the server-side include implementation that was part of mod_perl, along with some other tools. And I'll get into one of the tricks that we pulled off with the mod_perl setup to dynamically change the pages that were being served, in order to help facilitate testing of the new environment.

So we had two boxes in our old setup, and I'll just draw with my very basic diagram these two virtual boxes. We had the box that faced the web, and we had the box that people uploaded web pages to. We called the production server Bender, and the testing box — the box people uploaded their stuff to — we called Flexo. I happen to be a fan of Futurama, so I stole the names and we used them for this particular project.

In terms of Bender, again, our primary web server: we didn't have a separate database server. We just ran MySQL on the same system. We figured, "Hey, it'd cut down on latency," and really, nothing ever overwhelmed the box that we had.

Flexo was the account machine where people FTPed. We figured keeping the user accounts off the production machine added a level of security to the setup. We did share, however, the production web folder between the pre-production system — the FTP system — and the production system. Zones have a mechanism where you can actually share a directory between two different zones, and that's how we had it set up.

06:23

Unfortunately, that caused us a particular problem. About a year prior to this, we had adopted a PHP MVC framework — Model View Controller — called CakePHP, and we had it installed split up into three pieces, so that every Cake application we wrote didn't have to have a copy of the framework. We had one common location for the framework. We had the app files, which lived in a folder that wasn't accessible from the web. And we had just a fragment of code left to tie those pieces together, which is what went in the public web files, along with any other images or files and whatnot that got put up as part of that system.

And that really caused us a problem, because our testing system was the same as the system that people uploaded files to in production, and those folders were the same. So if you were testing a web application, you were effectively uploading files to your production directory — there was no separation between your testing system and your production system. That's the problem we ran into over and over again: we were deploying applications where we had to change a setting in a file somewhere.

We already had a file segmented off for the database configuration, so that our production system wouldn't be talking to our pre-production database. We kept that file exported out, in the web auth directory, so that when copying from production to pre-production or vice versa, you never changed the database credentials.

Even though we had that setup in place — which you can kind of pull off by just putting a constant in that file — we had problems because, again, we weren't being consistent about our development at that point. So we knew that we needed another development environment, one that would be more like our production environment. That ended up being one of the bonuses that we got out of this process.

08:09

Another interesting observation that I really have to mention at this point: we didn't have a clear separation of responsibilities when it came to the system administrator group. I'm in IT, and the systems group is in IT. I report to the CIO. They report to a manager of systems and networking, who reports to a director, who reports to the CIO — so different levels of organization there. And they had very interesting perspectives on things, like the website not being a critical piece of the college's infrastructure. If the website went down, they somehow didn't think the President's office would be screaming until it was back online.

So, things like letting the support on our hardware expire and not really telling me until after the fact — things like that, where communication wasn't great. We weren't really putting a lot of intentionality into the systems support of the web. It was just, "We're going to give you whatever hardware we have left over at the end of the day."

So I think we've… That being one problem, there's the question of how we segregate the duties. Being a techie and being able to go in and make changes with root on these boxes is helpful and streamlines a lot of things. But when we talk about segregation of responsibilities, that really shouldn't be on my plate. That should be the systems guys' job to take care of. They can give me tools to do what I need and not much more. So that segregation has changed with this new system.

The systems guys, well, they didn't want to be responsible for compiling the web server, but were complaining at times about our Apache being out of date compared to the versions that were available. And reading through the release notes when a new release came out — that was a judgment call that I could make, because I knew what services and features of Apache we were using. So if there's a security hole in a feature that we're not using, that's not a critical patch for us in our environment. So there was some tension there that we were experiencing as part of our wonderful relationship.

10:15

And as a division, or a unit of the college, we really decided that Linux was the future. We had really been bitten, I guess, by the virtualization bug, saying we can save energy, we can save resources, by having common hardware and virtual images that we can move between machines as resources dictate. So we had finally really started thinking about our platform. We had embraced something, and it seemed to be the way to go for us.

So when it came to the transitional period, it was really clear to me that the first thing that needed to move was the databases. And the real reason for that was: if you move everything at once, then you have a real transition problem. You have a copy of the database on the old server and a copy on the new server, and how do you effect that transition when you're talking about DNS resolution propagating out to the world? That can take a couple of days, sometimes.

So what happened is, instead of the database server being on the same box, we built a new production system for databases off to the side. And instead of the databases being here, we connected them to the new box over time. We staggered that transition as resources allowed.

So we stood up the new server for MySQL on our Linux VM. Once we got it up and running, we started moving our production systems over — our databases — one at a time. We had 72 on the system, and like, I think, most web applications — though that's probably declining, really — in our setup it was almost universal at the time that the public side of our web applications was read-only.

And so the cut-over ends up being a really seamless transition. You change your configuration file, and all of a sudden you're reading from the new database server.

12:19

On the administrative side, though, if you take down administrative access for a couple of minutes while you do a transition, that's not really a big deal. In most cases it doesn't seriously affect your institution and how you operate. And so that got us to the process through which we would migrate a typical application — a cycle we would wash, rinse, and repeat several times. First, we disable administrative access so we can actually have a static database to move. If the database isn't changing anymore, that makes it easy to migrate from one system to another.

As part of this process, we started creating accounts. A lot of the web applications, out of convenience of course, followed that one adage where you have one administrative account. Rather than create separate accounts for each application, you just give the whole world to every web application that you have. So with any database errors or coding errors or SQL injection, you could pretty much own the database server.

Luckily, that never happened. As part of this process, we created read-only and read-write accounts with very restricted, minimally necessary permissions, such that as we went through this process, we were improving the security of our web applications. So when we migrated, we created the accounts as well.
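That account-creation step can be sketched as a small script. This is a hypothetical sketch, not Brockport's actual script: the database name, user names, host, and passwords below are all placeholders, and the generated statements use standard MySQL GRANT syntax.

```shell
# Generate least-privilege MySQL accounts for one application:
# a read-only account for the public site, and a read-write account
# for the administrative side. All names here are placeholders.
DB=events_db
RO_USER=events_ro
RW_USER=events_rw
WEB_HOST=webserver

cat > /tmp/grants.sql <<EOF
CREATE USER '$RO_USER'@'$WEB_HOST' IDENTIFIED BY 'changeme-ro';
GRANT SELECT ON $DB.* TO '$RO_USER'@'$WEB_HOST';
CREATE USER '$RW_USER'@'$WEB_HOST' IDENTIFIED BY 'changeme-rw';
GRANT SELECT, INSERT, UPDATE, DELETE ON $DB.* TO '$RW_USER'@'$WEB_HOST';
FLUSH PRIVILEGES;
EOF

# On a real system this file would then be fed to the server, e.g.:
#   mysql -h dbserver -u root -p < /tmp/grants.sql
```

The point of the two accounts is that a SQL injection against the public, read-only side can no longer write to anything, let alone own the server.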

So once we created the accounts, we could take the public applications — the read-only applications — and change the configuration file so that instead of pointing to localhost, they now connect to the new database server. That's what we did. And once we got that working and confirmed it, we could move the administrative side over so that it was also connecting to the new database server.

14:01

And then finally, once we had it configured right, we could turn access back on — change the htaccess file to let the application determine who can access it again. And then one of our applications, one of our databases, has migrated.

With 72 databases, once you get it down to a script, this process becomes pretty painless, because MySQL comes with tools that let you dump a database into a SQL file that another MySQL box can then read in to import all that data. So once the database is frozen, we run a command — essentially a one-liner — that migrates all the data from the old box to the new, and we put it in a shell script along with the account creation and those other things. And all of a sudden we've migrated a database pretty easily.
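That dump-and-pipe one-liner, wrapped in a loop, might look something like the sketch below. It's a dry run with placeholder host and database names: the script only prints the pipelines it would execute, since the real command needs live servers and credentials.

```shell
# Dry-run sketch of the per-database migration one-liner.
# "bender" (old combined box) and "db01" (new database VM) are placeholders.
OLD_HOST=bender
NEW_HOST=db01

migrate_db() {
    db="$1"
    # mysqldump streams the database out as SQL; mysql on the new host reads it in.
    echo "mysqldump -h $OLD_HOST --opt $db | mysql -h $NEW_HOST $db"
}

for db in events_db survey_db quickreg_db; do
    migrate_db "$db"
done
```

In a real run you would drop the `echo` (and add `-u`/`-p` credentials), but the shape is the same: one pipeline per database, repeated down the list.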

And those 72 databases weren't really as bad in the end as I thought. I didn't really have a good sense of this at the time, but as we went through, it turned out to be an opportunity for some spring cleaning. I moved 39 databases in the end, because those were the only ones backing web applications that were still operational at the time.

So 17 of the databases had outlived their projects. We had gone in, done something quick, and it solved the problem — and then the problem was over, but the database was never retired. In some cases we were going back to departments about projects, satisfaction surveys from 2002 or something, asking whether they still needed the data.

We'd say: I'll send you the Excel or CSV files; if not, it's gone as of such-and-such a date. Some of them we contacted and sent the data along, and some of them didn't want it anymore. So 17 of those databases never had to move. It turns out nine — and this is probably not too unusual, though I was surprised the number was so high — nine were projects that we started: we created a database, maybe had some tables, started writing some programs, and then never really finished the project. Either priorities changed or we went a different way, and these databases were still around. So they were gone.

16:11

It turned out we had six databases related to the association here. Most of those were either registration data or the annual survey for the conference — salary stuff, how units are organized, where people are, those types of things. Those databases we migrated to the association. The association finally had its own server space, so we could just move that stuff over. We didn't have to worry about keeping it around.

And then we got one for free. One of those 72 databases was the mysql database itself, which is where things like permissions are stored. So as we were migrating the tables over, creating the databases and creating the users, the mysql database stuff kind of happened automatically. The databases just migrated to the live system, and things were pretty smooth sailing.

And this is one of those decisions we made early on, back when we didn't have a really robust, sound development environment: the fact that we had central configuration files for our systems meant we didn't have to change every single bit of a program. We change one central config file, and our settings change throughout the application.

So when we went in to migrate an application, we didn't have to change ten files to change which database server it was connecting to. We had to change, in most cases, one file, and that got them all.
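As a sketch of why that one central file mattered: repointing an application at the new database server is a single edit. The file name and keys below are hypothetical stand-ins for whatever each application actually used.

```shell
# Build a stand-in for an application's central database config file.
CONF=/tmp/database.conf
cat > "$CONF" <<EOF
db_host = localhost
db_name = events_db
EOF

# Repointing the application at the new database server is one edit
# (a hand edit works just as well as sed here).
sed 's/^db_host = localhost/db_host = db01/' "$CONF" > "$CONF.new" \
    && mv "$CONF.new" "$CONF"
```

Multiply that by 70-odd applications and the difference between "edit one file" and "hunt through ten files per app" is the whole migration schedule.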

So once we had the databases set up, it really became time to get stuff moved over to the new server and test it out.

Before I move on to that: I seem to recall I may have forgotten to say something I usually say before my presentations, which is, if anyone has any questions at any point, feel free to interrupt me. Raise your hand, get my attention, shout out "Hey, Steve…" We'll try to get to it.

I would like, however, to pause now and ask if there are any questions about that database migration process. Pretty straightforward? OK, then I'll move on.

18:08

So, moving on — and really starting to build the new environment. I'm color coding here so you can follow along; the blue is our new build, our new environment — one step at a time.

So we wanted to build our new environment, and there were lots of differences. We went over the version changes before: mod_perl connecting to an Apache 2.2 server. All the programs we had written in PHP 4 or old versions of Perl were being migrated over to the latest and greatest versions, wherever there were issues we were going to encounter. We were anticipating some serious problems with our CakePHP framework, since some of the naming conventions you use with PHP 4 are different under PHP 5, and we had to work through some of those.

And the one thing we were really sensitive about when testing the system was that we didn't want to disrupt our normal database. If the Office of College Events was going in and making test changes to the web system to make sure it all worked, we didn't want them entering that test data into our production database.

So we wanted, really, to build our new web server, but at the same time let it play against a test copy of the database, so that we wouldn't have problems with people going in and testing stuff, and then breaking the live website or putting garbage information onto the live website.

So what we did, really, at a point in time, was pick a day and say, "OK, we're going to copy all the data over to our new production system." That, however, creates another problem, but we'll get into that in a second.

So once we had a copy of the system over, the question was, "OK, how do we access the database?" I used red to represent our new development environment, which I guess says a little something.

20:06

But we had already moved our database once, and we had moved it onto a modern architecture: Linux, MySQL 5. And now that we were in a virtual environment, we could easily clone it. So that's what we did at first: we cloned it. The challenge then becomes, "OK, eventually your data gets stale." It's no longer as useful being a two-month-old copy of your production database.

So not only did we clone it, we started a nightly process through which we would copy the most recent version of our production database into our pre-production system. We understood that, as part of that process, we might want to make some changes along the way, now that it was the testing system. So we added an event to the events calendar for the current day saying, "This is a test database." We would add an announcement to our homepage database saying, "Oh, by the way, you're on the test system. You're pulling from the test database."

So we cloned the existing server. We had an rsync process to synchronize the data. We added data to customize it. And we adjusted the host fields of the MySQL database: all the permissions on our production database server pointed at our old production web server, and if we wanted the new web server to be able to connect to the database server, we needed to rewrite the MySQL tables. There are about five tables in the mysql database that have a host field. So we ran a script every night, when we backed up the database, that would change the host over. Instead of granting permissions to the production system, we were granting permissions to the new web system — the new environment that we were then testing.
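The nightly host-field rewrite can be sketched as SQL generation. This is an approximation, not the actual Brockport script: the five grant tables with a Host column in MySQL 5 are user, db, tables_priv, columns_priv, and procs_priv, and the host names below are placeholders.

```shell
# Generate the nightly SQL that repoints grant-table Host fields from the
# old production web server to the test web server. Names are placeholders.
OLD_WEB=bender
TEST_WEB=testweb
OUT=/tmp/rehost.sql

: > "$OUT"
for tbl in user db tables_priv columns_priv procs_priv; do
    echo "UPDATE mysql.$tbl SET Host = '$TEST_WEB' WHERE Host = '$OLD_WEB';" >> "$OUT"
done
echo "FLUSH PRIVILEGES;" >> "$OUT"
```

Run against the test clone right after each nightly refresh, this keeps the copied-over permissions usable by the test web server without touching production's grants.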

And then finally, we were aware that we could just redirect all traffic on the testing web system by changing all those configuration files to point at the test copies of the database. But that creates a problem: when we wanted to go live, all of a sudden we'd be rewriting another 70 configuration files along the way.

22:17

So rather than do that, we did an OS hack, so that when this server tried to look up and connect to that server, the OS would redirect it, and it would connect to the test system. When you went to the test web server, it thought it was connecting to the production database, but the OS was really redirecting it to the test system.
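The OS hack amounts to one line in /etc/hosts on the test web server: resolve the production database's host name to the test database's IP, so unchanged application configs quietly connect to the test copy. Sketched here against a temporary file, with a placeholder name and IP.

```shell
# Map the production database host name to the test database's IP address.
# On the real box this line goes into /etc/hosts; we use a demo file here.
PROD_DB_NAME=dbprod.example.edu   # the name all the app configs point at
TEST_DB_IP=10.1.1.50              # hypothetical IP of the test database clone
HOSTS_FILE=/tmp/hosts.demo

: > "$HOSTS_FILE"
printf '%s  %s\n' "$TEST_DB_IP" "$PROD_DB_NAME" >> "$HOSTS_FILE"
```

Since /etc/hosts is consulted before DNS, every application on the box follows the redirect, and deleting the line later restores normal resolution with no config edits.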

So that allowed us to set up a test web environment that was only connected to the test database — could only connect to the test database — and the administrators of applications could then go in, test them, and make sure there were no problems. And we really did push most of our testing onto the end users of our web applications.

So again, we had the PHP upgrade to deal with. As part of the new environment, we decided we were going to retire at least one of our Apache 1.3 modules rather than try to convert it to something compatible with Apache 2.

That was mod_auth_external, which lets you run an arbitrary program to do HTTP authentication. We rewrote that as a mod_perl program, and since we were essentially doing what we could do with LDAP anyway, we used LDAP for it too.

So I want to talk about how we involved the application owners in the process of reviewing their own applications. One of the critical pieces was really putting out fires: when things were broken, we had to get the new system up and running, because we were on expired hardware, or pretty close to expired.

And so one thing we really did concentrate on — and being a small enough shop, we could do this — was halting production for a month on anything new and just going through and fixing identified problems.

24:08

I have student staff who also do programming for me, so they were involved in making sure the applications were working and doing some of the testing as well. When they weren't doing more routine HTML updates, they were fixing the applications that were breaking in the new setup.

And then, of course, we didn't want to launch the website with month-old data, so we also started a synchronization process that would copy any changes made to our current production site over to the soon-to-be production site.
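A synchronization pass like that is classically an rsync job. The sketch below is a dry run with placeholder paths and host: it prints the command rather than running it, since the exact flags and paths used at Brockport aren't stated in the talk.

```shell
# Print (rather than run) the recurring content sync from the current
# production docroot to the soon-to-be production server. Placeholders only.
SRC=/web/htdocs/
DEST=newweb:/web/htdocs/

sync_cmd() {
    # -a preserves permissions and timestamps; --delete removes files on the
    # destination that have been removed on the source side.
    echo "rsync -a --delete $SRC $DEST"
}

sync_cmd
```

Run on a schedule, a pass like this keeps the new box at most one interval behind production, so the final cutover sync has almost nothing left to copy.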

Finally, the last piece of our production system we really needed to replace was user access. The box to which people FTPed — or in this case, now, secure FTPed — we needed to get those accounts copied over. Previously, we had just been creating user accounts with separate passwords, which may not have been as good as we'd like them to be, but they were separate passwords, and some people would check the box to remember them in Dreamweaver, or routinely forget them and need a reset.

And so, since we had this new environment that could easily integrate with LDAP, our authentication environment, we said, "Well, we're going to get rid of all local accounts wherever we can, and we're going to migrate them to standard LDAP authentication with our NetIDs."

And so, one of the challenges — or observations, or benefits — of that process was that when we initially set up the accounts, they were the same logins; there would just be different passwords. So it was very easy to get things transferred over to the new environment. Our system administration team really worked on this, for the most part, to get the new setups working.

And then, finally, the big day arrived. We had set up our new environment and gotten it all to work, and it came time to point the world at our new web server instead of our old one.

26:06

And so what did we do? The first thing — and having sat through a project management session earlier at this conference, it really drove home this point — communicate. You need to really talk to people. They need to be aware: "Oh, by the way, your configuration is going to have to change." "Oh, by the way, it's going to be integrated and combined with your LDAP, your NetID password." And, "Things are going to be different; you need to pay attention. You need to be doing your testing, and let us know if there are any problems that you encounter on your sites, whether they're applications or just flat HTML."

So when it came time for the weekend rollover to happen, we cut off access to FTP so people couldn't make changes to their sites anymore. We did a final synchronization of files so any final changes were reflected on the new system, and we duplicated the production database host permissions. What we needed to do was change the permissions on this server — or rather, add permissions for this one — because these were different names. This was Bender, and this was — we didn't use the fun names so much — W31, they called it.

So we had to duplicate every permission from here to here so this server could talk to that database, because it hadn't been able to do that before. And in fact, we didn't want it to, in case something was configured wrong. So those permissions didn't exist until the point where we made that migration.

And then we removed that /etc/hosts hack we had done — the one redirecting the production database name to the test system. All of a sudden, our new web system was talking to the same database as the old web environment.

And then, the big moment: we changed the DNS entry to the new IP address and let it propagate throughout the world. Once we did that, we essentially changed the DNS record for this box so that connections from our campus would be made to the new machine, and page changes would happen the same way.

28:14

And then, once the project was done, we let people know they could go in and change their sites again. We then turned off the old system as the hits went away — as soon as the hits went to zero. There's a propagation window of two weeks, or two days, or however long it takes the DNS change to reach the world.

At this point, we had migrated all the production stuff off our old environment, but we still had some things on the other server — that user server, the development and testing stuff. I mean, we weren't really done, because we couldn't just turn off the old machine and be done with it.

So what we realized when we got to this point was, "Oh, we already have half of our new environment done," because we already had that testing server. We already had the database stuff being copied over on a nightly basis, which really provides a wonderful opportunity to test a new application against your current data — unless you have to generate something new and unique for your testing.

So what we had to do was stand up a development server. And essentially, rather than split user access from the main production setup, we really didn't intend to give out any access to this server beyond the web group, beyond the web team. So it didn't really make sense to worry about the segmentation of roles and people potentially hacking this box. It's certainly possible — no system is impenetrable. But we really thought it was just one extra layer of complexity. It wasn't a hundred percent identical to the current environment, but we didn't think it had any real production-level differences. And so we had one development box that we stood up.

30:09

This time, though, we decided that it wasn't worth cloning everything. Taking an entire copy of the website — all the HTML, all the images, all the other cruft that is part of our web presence — really wasn't worth it. The flat HTML files, we really didn't think we needed in the development environment. And the applications, well, we were setting up an environment as identical as we could make it to the production environment.

So what we did was move our applications, our CGI bin — we made a copy of that — and we moved the Cake stuff in there. As we worked on anything else, we would refresh it. We would just build a copy of what's in production in our development environment, do our testing, do our programming — all against this testing database — and then deploy that system to production once we had confirmed that our changes worked.

So we decided not to copy everything. We made that change. We did, however, adjust the process by which we were cloning our database server over time. We had to change what we had here: we had access to the production system here, and the host configuration stuff in MySQL.

Here we had to update it again, because it wasn't coming from the production system anymore; it was coming from the testing system. That was just an automatable thing that we ran every night.

Whenever we built anything new, we would add a PPRD extension, just so we knew it was a new database — a database that doesn't exist in production yet. When we deployed it to production, we changed the name of the database, updated the application's configuration to point there, and then deleted the PPRD version of it.

In terms of modifications to schema — I mean, how often, when you change a web application, do you have to actually change the database that underlies it? A lot of the time you're adding new functionality, and you have to add some place to store that new functionality. So: often.

32:13

So what we did was also automate the changing of the database. Whenever we did our nightly copy over here, we would run a variant script — we'd run SQL to change the database the way we intended to when we rolled the new version into production. That meant we got to test that upgrade every day until we rolled the change into production.
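One way to picture the variant script: keep the pending schema changes in a SQL file and replay it against the pre-production copy right after each nightly refresh. The file name, table names, and columns below are hypothetical illustrations, not Brockport's actual schema.

```shell
# Pending schema changes for the next release, replayed nightly against
# the freshly refreshed pre-production database. Contents are hypothetical.
VARIANT=/tmp/pending_changes.sql
cat > "$VARIANT" <<'EOF'
ALTER TABLE registrations ADD COLUMN netid VARCHAR(32) NULL;
ALTER TABLE events ADD COLUMN capacity INT NULL;
EOF

# After the nightly copy finishes, the real job would run something like:
echo "mysql -h testdb quickreg_db < $VARIANT"
```

Because the nightly refresh wipes out yesterday's ALTERs and the variant script reapplies them, the production upgrade gets rehearsed against fresh data every single day before release.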

So another benefit is that the process through which we upgrade our applications became more robust. That was a benefit of this project that we really didn't expect; it just happened to be one of those things we thought about along the way that made a whole lot of sense.

How good am I on time, John? About ten minutes.

Well, I can take you on a quick tour of our QuickReg system here, just to show… What I have here is… We have a web calendar. And a lot of people, it turns out, needed some kind of mechanism to register people for things. And after a while, taking care of it over email just seems like a challenge when you could just store it in a database.

So we created a generic registration system that you could then configure — toggling which fields are turned on and off — but that would otherwise do the types of things you need to do in a basic, simple registration system.

So one of the examples I have here is our Center for Excellence in Learning and Teaching. The application frontend is not one that people usually saw. What they saw over here is a list of events from this office. And so, if you want to attend an event, you click on the link. You get the description — and, well, that's not as usable as I would like it to be — but we could go in and click through to the registration.

34:14

In this case, this event is full, because we also embedded capacity planning and turned that on in the system. So I can't actually register for that particular event here, but it would bring up a form as part of that process.

The initial version of that application did not actually have any optional fields: a field was either "on and required" or "off". So one of the things we did was add a third option, "on but not required". And that involved changing the database. It involved adding the new fields you wanted, and it involved adding checks — like we can require someone to log in with a college NetID in order to process the registration. We even made a work-around so that I can log in with my NetID and register someone else. So if I'm a secretary registering a chair, that's possible too.

So what we did, in this case with a student working on it, was copy everything on the application side from our production environment to testing. We made a whole lot of changes, asked people to test it, and as part of that process identified, "OK, this isn't working how people want it to. We need to tweak this, because it isn't as easy to use as it should be."

And so we kind of went back and forth with the people who use the system to identify what those core needs were. And they could go into the test system, test it, and play around with registration data without having to worry about corrupting the production system.

And then, anything that they happened to do during one day was gone the next, because it was overwritten by the nightly feed. And once that process was done and the semester was approaching, we put out that final call for testing, didn't hear any buzz, and rolled it into production by, essentially, just copying.

36:06

 

Well, this is what we did. We backed everything up, in case something happened, so that we could go back. We temporarily disabled the applications so people couldn't register against a database — or a series of databases — we were about to change. We changed the database in production using that script we had previously been running every day in our test environment. We copied the application update — all that code we had been changing — over to the production server. For any programs that accept files, that accept uploads from people, we needed to preserve those on the public website side of things, but mostly it's copying over and replacing existing content. And then we tested it and made sure it worked.

And then, from that same day on, we can't run those SQL updates again on development once they're made in production, because they'll conflict with changes that are already there. So we needed to remove that fragment of SQL code that was running every day to update the application in development, because we had rolled it into production.
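Put together, the rollout checklist above might look roughly like this shell sketch (paths, hostnames, and the maintenance-flag mechanism are my own assumptions; the commands are printed rather than run):

```shell
# Dry-run sketch of the rollout: back up, disable, apply SQL, copy code
# while preserving user uploads, re-enable. All paths are illustrative.
deploy_app() {
    app="$1"
    echo "cp -a /var/www/$app /backup/$app.bak"                       # 1. back up
    echo "touch /var/www/$app/maintenance.flag"                       # 2. disable registrations
    echo "mysql $app < /var/local/pending/$app.sql"                   # 3. apply schema changes
    echo "rsync -a --exclude uploads/ /srv/test/$app/ /var/www/$app/" # 4. copy code, keep uploads
    echo "rm /var/www/$app/maintenance.flag"                          # 5. re-enable, then test
}

deploy_app quickreg
```

The `--exclude uploads/` flag is one way to express "preserve user-submitted files while replacing existing content"; the real rollout would also remove the nightly SQL fragment from the development refresh afterwards.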

Very briefly, some changes we expect to be making. VMware seems to be an environment that's working well for us. Outside of some capacity issues and some of the computational load on the web servers, it really ends up not being too bad. We don't think we need, right now, to spin off a second web server. But as traffic continues to grow, we expect that some kind of caching proxy, or some kind of second web server, is something we're going to need to invest in to keep our infrastructure capable of handling it.

And so, being able to copy our one web server to a second one over time really gives us the capacity to grow and handle those unexpected events that might happen. That was one of the things we expect to do at some point in the not-too-distant future.

38:06

In terms of versioning, we don't right now have a robust versioning system — CVS or Subversion or something else — other than the copies that we have in lots of different places. And it seems very logical to have some kind of system up there that can deploy things to the production and development environments, alongside the tools we developed as part of this process. So it's something we expect to explore; we know it's something we're missing. It's one of those aspirational things we want to work towards — a more robust environment to really smooth out any process or programming issues that we've had.

And that is the conclusion of my presentation. I'd like to open the floor up again for questions, and remind everyone to fill out TPR6 on their evaluation forms. And thank you all for coming. Are there any questions?

Audience 1: What kind of environment do you use, and what difficulties would you anticipate in moving from one to another? For example, you're [39:07 Unintelligible]

Steve Lewis: OK. It was a two-part question: first, what Linux environment are we running now? We ended up with Red Hat Enterprise 5 as our primary environment — all the servers I talked about here on the Linux side are Red Hat Enterprise 5. In terms of the challenges of upgrading between Linux environments, I think you run into the same kinds of problems we had going between different versions here. The more similar you can keep the systems, the fewer problems you'll have. So keeping the same version of Apache, the same version of your database, the same version of your programming language — that will help minimize the challenges as you move from the one environment to the other.

40:11

I would suspect that… I mean, I was honestly surprised during this process by how much just worked. Despite going through so many versions, we were able to isolate and solve the problems through the process, which we tracked — we have job tracking, at least, on my campus. So those were things that helped.

So I would definitely have a testing opportunity, and make sure you really, affirmatively, look at all your applications. If you don't have good documentation of who your primary customers are for each of your applications, you should really develop that, and I would encourage you, like we did, to involve them in testing their applications under the new environment.

So you said you're moving from Ubuntu to Red Hat, you think?

Audience 1: Yeah. For [40:58 Unintelligible].

Steve Lewis: So with the…

Audience 1: I know that it's different, you hack next [41:15 Unintelligible]. 

Steve Lewis: I would say, if you're moving at the behest of, say, central IT — as it sounds like you are — that you should involve them in the process of defining which roles and responsibilities belong to whom. I mean, as I mentioned, I was building the web server from scratch, and we went to, essentially, a packaged version in Red Hat. And so with Apache on Red Hat, you can essentially just install a new package using yum, and it will handle all the dependencies for you.

So if you install PHP 5 with the graphics utilities, that has certain implications, I think, based on the shared libraries, in terms of not being thread-safe. So you need to be sure you're using the prefork model of Apache 2, and not the one that uses threads. And yum handles a lot of those dependencies for you. I was shocked at how seemingly robust the Red Hat package manager was.
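After an install along the lines of `yum install httpd php php-gd` (RHEL-5-era package names; exact names may differ on your release), `httpd -V` reports which MPM Apache was built with. A small check like this could confirm you're on prefork — the sample string below stands in for real `httpd -V` output:

```shell
# mod_php is not thread-safe, so Apache must run the prefork MPM, not a
# threaded one. check_mpm inspects `httpd -V`-style output; the string
# passed at the bottom is a sample, not live server output.
check_mpm() {
    if echo "$1" | grep -qi 'Server MPM:.*prefork'; then
        echo "ok: prefork MPM"
    else
        echo "warning: threaded MPM; not safe with mod_php"
    fi
}

check_mpm "Server MPM:     Prefork"
```

In practice you would pipe the real output through: `httpd -V | grep MPM`.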

But if you're moving to an environment where you're being supported and they're kind of forcing your hand on this, what I would do is definitely sit down with them and develop at least an MOU, informally, in terms of who's going to be responsible for what.

42:26

And again, the more similar you can make the environments, the better. A lot of the stuff that will be different between the two systems is probably going to be stuff that you won't have to worry about anymore.

So other questions? Yeah.

Audience 2: Is there a specific reason for choosing Linux instead of another Unix?

Steve Lewis: The question was, why did we choose Linux as opposed to some other Unix-like environment? I think that was one of those decisions made above my pay grade. I think they liked the concept of Linux; I think they saw it as the one that seemed to have the most development energy and thinking behind it. I think there was a desire to be a little bit less proprietary — in the education, head-in-the-clouds, ivory-tower sense — and really go with something we could recruit system administrators to support. The other challenge we had around that same time was turnover in our sysadmin department. We had essentially two and a half system administrators, two of whom had been in the department for probably under two years at that point. With that type of turnover, it becomes a problem if you can't find the skill set to support something. I think that's why we made that business decision.

There are probably also some additional integrations and tie-ins with VMware, and we were following other people's recommendations as well.

Moderator: [43:55 Unintelligible]

Steve Lewis: SIG. OK, we’re done. Thanks everybody.

They'll collect your evals at the door, TPR6. Thanks.