Small Linux Server

jerry1234
Hello,

I've been maintaining my own Linux server for approximately umpteen years. I try to update it once every couple of years. The last update was painful, because I had not touched it for at least four years, and I had forgotten half of the stuff I did, where the config files were, etc. But got through it. Installed Slackware on a little Dell desktop, latest Perl, latest Apache etc etc etc.

Right now, it would be easy to upgrade again, because I relearned everything and it's fresh. The other day, I got to thinking - one thing I have always wanted in a server was ECC RAM. I don't exactly lay awake at night thinking about cosmic rays messing up my business data, but it's a concern.

Once upon a time, ECC RAM was an option on ordinary clone motherboards (back when they were called PC clones). That was then. Nowadays the CPUs all have memory controllers built in, and the ordinary desktop ones have no ECC support. PC innovation is driven by gamers, and gamers couldn't care less if their PCs drop a bit or two in the middle of the night.

So I started poking around to see what's available in server CPUs and motherboards. Hmm, this Intel Xeon E3 lineup looks real interesting - like an i7, only no GPU? At one fifth the price? Very cool. Started making up spreadsheets - adding it all up - CPU, motherboard, quality case, etc. About a grand.

Then I stumbled on the Dell T110 II - wow, that's pretty much the exact machine I was thinking of, and I don't have to bolt it together! $950 later, I have one of these bad boys winging its way to me. Bought it on eBay because (I think) the seller has it already assembled, and I don't have to wait the two weeks for Dell to build it.

The eBay deal came without disks, so I ordered a pair of WD Blacks from Newegg. I am thinking seriously about SSDs, but every single SSD I looked at has some non-zero percentage of people getting BSODs and random failures. So despite the speed, I think I'll just use hard disks. SSDs can wait till the next server upgrade, by which time they will be more mature.

In general, I like to use one disk for the system stuff, and another one for the user data. Possibly put the swap on the "user data" one. The aim is to be able to do common things at the same time and minimize disk thrashing. So if an executable in /bin or /usr/bin uses a library in /lib or /usr/lib, you can get some parallelism if the bin stuff is on one disk and the lib stuff on another. Etc. Of course, it's possible to overthink all of this.

If I did use an SSD, I would arrange for it to hold directories that don't change much: /bin, /usr/bin, /lib, /usr/lib, /usr/local/lib, etc. These directories contain the binaries that run the system, and mostly you just read them a lot. Volatile directories like /var and /home would definitely go on hard disk.
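
To make the split concrete, here's a hypothetical /etc/fstab sketch for that kind of layout - the device names and partition sizes are made up, not the actual config, just one way the two-disk arrangement could look:

    # /etc/fstab - hypothetical two-disk layout (sda = system, sdb = data)
    /dev/sda1   /       ext4    defaults    1 1   # /, /bin, /lib - mostly read
    /dev/sda2   /usr    ext4    defaults    1 2   # more mostly-read system files
    /dev/sdb1   /home   ext4    defaults    1 2   # user data on the second spindle
    /dev/sdb2   /var    ext4    defaults    1 2   # volatile stuff with the data
    /dev/sdb3   swap    swap    defaults    0 0   # swap on the "user data" disk

One spindle serves the executables and libraries while the other absorbs the /home, /var, and swap traffic.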

The eBay deal came with 8 gigs of ECC DRAM, which isn't a whole lot by modern standards. They offered an upgrade to 16 for $200. I passed on that because I'm pretty sure I can score 16 gigs of unbuffered ECC on the open market for more like $100. The world of RAM moves fast!
 
I've been maintaining my own Linux server for approximately umpteen years. I try to update it once every couple of years. The last update was painful, because I had not touched it for at least four years, and I had forgotten half of the stuff I did, where the config files were, etc. But got through it.

I am so bad about this as well, not so much the update interval, but the failure to document settings and changes I made to config files when I set things up originally. You'd think I'd learn my lesson after having to retrace my steps a few times. :o

Anyway, sounds like a fun project. I wouldn't know what to do with that much server power though. My Linux server at the shop is just a lowly old Optiplex GX520 desktop w/ 3GB. Currently it serves as a samba server (shared files, WSUS offline update repo, music files for the shop, and back-up storage for my rsync scripts) and provides FOG/PXE services for my small shop network.
 
I mostly use the server for Samba. The real driver is PaperPort (running on the Windows machines). PaperPort is extremely demanding of network and server speed. With a direct gigabit connection and the server I have now (a first-generation Core 2 Duo Dell) it is barely tolerable.

Besides fileserving, it also does the basic routing for the internal net and runs my paranoid firewall, adapted from David Ranch's TrinityOS work (url goes here that I am not allowed to post). The paranoid firewall not only filters incoming packets, but also outgoing, for defense in depth.
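
For flavor, here's a bare-bones sketch of what filtering in both directions looks like with iptables - these rules are illustrative only, not taken from the TrinityOS ruleset, and the allowed ports are assumptions:

    #!/bin/sh
    # Default-deny in BOTH directions - the "defense in depth" part.
    iptables -P INPUT   DROP
    iptables -P OUTPUT  DROP
    iptables -P FORWARD DROP
    # Loopback is always fine.
    iptables -A INPUT  -i lo -j ACCEPT
    iptables -A OUTPUT -o lo -j ACCEPT
    # Let replies to established connections through.
    iptables -A INPUT  -m state --state ESTABLISHED,RELATED -j ACCEPT
    iptables -A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
    # Inbound: just ssh and https (adjust to taste).
    iptables -A INPUT -p tcp --dport 22  -j ACCEPT
    iptables -A INPUT -p tcp --dport 443 -j ACCEPT
    # Outbound: DNS, mail, and web only - anything else the box tries
    # to send gets dropped, which is the point.
    iptables -A OUTPUT -p udp --dport 53 -j ACCEPT
    iptables -A OUTPUT -p tcp --dport 25 -j ACCEPT
    iptables -A OUTPUT -p tcp -m multiport --dports 80,443 -j ACCEPT

The outbound rules are what make a firewall "paranoid": a compromised daemon can't phone home over an arbitrary port.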

I also have a set of scripts that help run my business. They are structured as dynamic web pages, written in Perl and using MySQL databases. My scripts support a system of logins and permissions, and I have given accounts to my staff. They log in via https and book rents. Whee, I don't have to!

The Internet is a wild and hairy place. Automated hacker scripts constantly bang at my server trying to get in. Security logs show this. Endless tries for various usernames, especially "root". So I wrote a Perl script that monitors the logs, and if a particular IP address makes more than X ssh login attempts in time Y, it is attacking us, and the script adds it to an IP blacklist (using Ivan Ristic's blacklist utility) that drops all packets from that host at the earliest possible opportunity - for one day.
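
A toy shell sketch of the same idea - the real script is Perl and uses Ristic's blacklist utility, but a raw iptables DROP plus an at(1) job to lift the ban gives the flavor. The log path and threshold are assumptions:

    #!/bin/sh
    # Toy version, NOT the actual Perl script: ban IPs with too many
    # failed ssh logins. Assumes failures land in /var/log/secure (the
    # path varies by distro) and ignores the time window Y - running it
    # from cron against a freshly rotated log would approximate that.
    THRESHOLD=10
    grep 'sshd.*Failed password' /var/log/secure \
      | awk '{print $(NF-3)}' | sort | uniq -c \
      | while read count ip; do
          if [ "$count" -ge "$THRESHOLD" ]; then
            iptables -I INPUT -s "$ip" -j DROP                      # drop everything from that host
            echo "iptables -D INPUT -s $ip -j DROP" | at now + 1 day  # unban after a day
          fi
        done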

The server also supports OpenVPN for remote access, and a couple of webmail services. It is also a mailserver, running sendmail.

In the past, I built these servers on PCs that originally came with Windows. I would run VirtualBox and load the same version of Windows - and it would be legal with the existing COA.

-jerry1234
 
These are very interesting projects. I see one of them is based on Ubuntu Server. I really like the Ubuntu package management system - which it actually inherits from Debian.

I tried out Ubuntu Server a year ago. I was not able to get everything working to my satisfaction, and went back to Slackware. Slackware is the oldest surviving Linux distribution, being the successor to the very first CD-based distribution - SLS. Back in the day, I used SLS, and before that, the "MCC Interim Distribution", which was strictly a toy.

I'll be using Slackware again, mainly because I know where everything is. The only real decision is whether to just clone the server I recently set up, or to set up a new system from scratch. Cloning is awful tempting - I would have the new machine up in less than a day. Installing from scratch would get me the latest and greatest of everything, but it would probably languish in the computer room for multiple months before I get confident enough in it to cut over.

OTOH, if I build the new server from scratch, I don't have to take the old one offline for cloning.

If I want to be really daring, I could just yank the disks out of the old server and stick 'em in the new one. If I did this with a Windows machine, I would reasonably expect nothing but a blue screen. But it might just work with a Linux distribution. They compile in support for lots of hardware in the standard kernel, and I think that the Xeon E3 is a fairly mainstream processor. I don't think I'll do that though. I want the new server to have brand new drives - with no wear and tear on them.

So it's looking like a cloning procedure. I can stick the new drives in the old server and clone the partitions with a series of dd commands. dd is very efficient - disk drives love it! Just start at the first byte and proceed in order till you get to the last byte. The read/write head never needs to move except from cylinder to cylinder.
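
Something like this, with hypothetical device names (double-check them with fdisk -l first - dd will cheerfully overwrite the wrong disk):

    #!/bin/sh
    # Clone the old disks to the new ones, partition by partition.
    # sda/sdb = old drives, sdc/sdd = new drives - names are assumptions.
    dd if=/dev/sda1 of=/dev/sdc1 bs=64M    # system partition
    dd if=/dev/sdb1 of=/dev/sdd1 bs=64M    # user-data partition
    mkswap /dev/sdc2                       # no need to copy swap, just remake it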

The SATA III interface is rated at 6 gigabits/second, about 600 megabytes/second of actual payload. I'll assume with friction in the system I get half of that, or 300 megabytes/second. The total amount of data to be transferred is 247 gigabytes, giving me a transfer time of about 14 minutes (247,000 MB / 300 MB/s is roughly 823 seconds).

I'll start writing the script today.

- Jerry
 
Whups, make that 128 megabytes/second for the 1 TB drive and 150 for the 500 GB drive - the drives' own sustained transfer rates. The speed of the SATA interface is irrelevant; the mechanical drive is the bottleneck.
 
If I want to be really daring, I could just yank the disks out of the old server and stick 'em in the new one. If I did this with a Windows machine, I would reasonably expect nothing but a blue screen. But it might just work with a Linux distribution. They compile in support for lots of hardware in the standard kernel, and I think that the Xeon E3 is a fairly mainstream processor. I don't think I'll do that though. I want the new server to have brand new drives - with no wear and tear on them.

Depends on the RAID controller (if any) and the differences... or, with no RAID controller, at least on the HDD controller differences. Similar chipsets, especially if all Intel based, should be good. It gets weird sometimes with motherboards that have oddball controllers like JMicron or others.
 
These are very interesting projects. I see one of them is based on Ubuntu Server. I really like the Ubuntu package management system - which it actually inherits from Debian.

I tried out Ubuntu Server a year ago. I was not able to get everything working to my satisfaction, and went back to Slackware. Slackware is the oldest surviving Linux distribution, being the successor to the very first CD-based distribution - SLS. Back in the day, I used SLS, and before that, the "MCC Interim Distribution", which was strictly a toy.

I'll be using Slackware again, mainly because I know where everything is. The only real decision is whether to just clone the server I recently set up, or to set up a new system from scratch. Cloning is awful tempting - I would have the new machine up in less than a day. Installing from scratch would get me the latest and greatest of everything, but it would probably languish in the computer room for multiple months before I get confident enough in it to cut over.

Former (briefly) Slackware user here as well. I've always been a fan of uncomplicated Linux distros that favor "UNIX-like" philosophies in their implementation. However, I really didn't care for Slackware's package manager at the time, and its lack of dependency resolution. After many months of distro-hopping, I discovered Arch Linux and never looked back. It gave me the simplicity and flexibility I desired with a great package manager (pacman) in a binary (i686/x86_64), rolling-release distro.

As big a fan of Arch as I am, however, I don't always use it in a server role, although it is quite capable of it. In the shop, I am currently using Ubuntu Server, the LTS release (much quicker to install and configure). I would also consider using any of the traditional server workhorses (Debian, CentOS, ClearOS, etc.), whichever I felt most comfortable with... and if that was Slackware, that's probably what I'd go with too. That's one of the great things about Linux/FOSS: there are so many choices. :D
 
I just tried out Arch in a VirtualBox VM on my Windows 7 workstation. Looks nice. EXTREMELY simple and straightforward. Reading about systemd... Looks like it has excellent support for hot-swapping external drives.
 
Whups, make that 128 megabytes/second for the 1 TB drive and 150 for the 500 GB drive - the drives' own sustained transfer rates. The speed of the SATA interface is irrelevant; the mechanical drive is the bottleneck.

The actual speed of the byte copy was 107 megabytes per second on the first tracks, sagging to 95 megabytes per second on the inner tracks. This was with the Linux "dd" command using a 64-megabyte block size.

I figured that was due to the cache on the destination drive - the system could throw the data at the dest drive, then go read the source drive while the dest drive was digesting it (writing data from cache onto the magnetic media). That's assuming the drive actually does write caching, which I'm not sure it does.
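
One way to check, for what it's worth - hdparm ships with most distros, Slackware included (device name here is hypothetical, substitute your own):

    # Report whether write caching is enabled on the destination drive.
    hdparm -W /dev/sdb

    # Typical output:
    #  write-caching =  1 (on)

Note that hdparm -W with no value just reports the setting; giving it a 0 or 1 would change it, so mind the syntax.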

- jerry
 