[WARNING] Linux still not so much fun

In my mind, SSDs have replaced any speed advantage RAID 0 had, and they do so at a safety level a few orders of magnitude higher than RAID 0. After all, a conventional hard drive still has a 9 ms access time. Even RAID 0, where whichever drive head is closest to the data starts reading first, only drops that number by about half (~4.5 ms?). An SSD has a 0.09 ms access time. And once the SATA bus is filled (which a single SSD can do), it doesn't matter how fast data can be read; there's no place to go with it.
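To put rough numbers on that, here's a back-of-the-envelope Python sketch using the access times above; it's purely illustrative, not a benchmark:

# Access-time comparison using the figures from the post above.
hdd_ms = 9.0            # typical spinning-disk access time
raid0_ms = hdd_ms / 2   # best case: whichever head is closest to the data wins
ssd_ms = 0.09           # SSD access time

print(f"RAID 0 of HDDs: ~{raid0_ms:.1f} ms ({hdd_ms / raid0_ms:.0f}x quicker than one HDD)")
print(f"Single SSD:     ~{ssd_ms} ms ({hdd_ms / ssd_ms:.0f}x quicker than one HDD)")

That prints a 2x gain for the striped pair and a 100x gain for the SSD, which is why striping spinning disks never gets anywhere near SSD latency.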

My younger days are littered with RAID 0 failures, but please don't let all of our negativity dissuade you from healthy geek exploration. Just understand where it's coming from.
 
I have a nice Gigabyte AMD motherboard with a Phenom II X6 1100T 3.3 GHz CPU and 8 GB of RAM. Even though it's 8 years old, that's still a very capable CPU.

Since I had three identical SATA II 160 GB drives, I thought I'd build a fast little RAID 0 array.

Oh FFS.

You're not doing this for performance no matter what you say. This is putting a straight pipe and a fart can on a Civic or putting a watercooling rig on a Celeron so you can overclock it - or even sinking the whole thing in mineral oil or Fluorinert. This is so you can say "I did this horribly impractical thing to prove that I can."

When the thing you're trying to do is stupid and impractical for anyone not starting with "Hold my beer," don't complain that hardware and software developers haven't left things in place that are only useful to the stupid and impractical.
 
Oh FFS.

You're not doing this for performance no matter what you say. This is putting a straight pipe and a fart can on a Civic or putting a watercooling rig on a Celeron so you can overclock it - or even sinking the whole thing in mineral oil or Fluorinert. This is so you can say "I did this horribly impractical thing to prove that I can."

When the thing you're trying to do is stupid and impractical for anyone not starting with "Hold my beer," don't complain that hardware and software developers haven't left things in place that are only useful to the stupid and impractical.
This ^^
It had to be said...
 
In my mind SSDs replaced any speed advantages RAID 0

Two/four SSDs are faster than one.

Once the SATA bus is filled (which a single SSD can do) it doesn't matter how fast data can be read, there's no place to go with it

Dual/quad SATA buses perhaps? I believe at least two SATA still do not saturate PCI-E.

safety level a few orders of magnitude higher than RAID 0

RAID 10 will happily give you 4x read speed if the buses can handle it, with no downside on reliability. The only problem is that you still need to boot from what is essentially a RAID 0.
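As a rough sanity check on the "4x read speed if the buses can handle it" part, here's a small Python sketch; the per-drive and per-port figures are my assumptions, not measurements:

# Hypothetical setup: four SATA III SSDs, each on its own port.
drives = 4
per_drive_mb_s = 500    # assumed sequential read speed of one SATA SSD
per_port_mb_s = 600     # SATA III ceiling per port (6 Gb/s with 8b/10b encoding)

# A RAID 10 set can read from every member at once, so the aggregate
# is capped by whichever is smaller: the drive itself or its port.
aggregate_mb_s = drives * min(per_drive_mb_s, per_port_mb_s)
print(f"Best-case RAID 10 read: ~{aggregate_mb_s} MB/s across {drives} drives")

Whether the controller's uplink can actually move ~2000 MB/s is the PCI-E question that comes up below.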
 
...
Dual/quad SATA buses perhaps? I believe at least two SATA still do not saturate PCI-E...

But a single SSD is still faster than any hard drive's access time.

Then we get into PCIe 1, 2, or 3. Of those, PCIe 3 is the only one whose per-lane rate beats SATA III: 8 Gb/s vs. 6 Gb/s. Even that doesn't tell the whole story, as the PCIe bus is usually busy with other traffic and the associated overhead.
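To make the per-lane comparison concrete (my numbers, accounting for line-encoding overhead; treat this Python sketch as an illustration, not gospel):

# Usable per-lane/per-port bandwidth after line-encoding overhead.
# PCIe 1.x/2.x and SATA III use 8b/10b (10 bits on the wire per data byte);
# PCIe 3.x switched to the much leaner 128b/130b.
links = {
    "PCIe 1.x lane": (2.5e9, 8 / 10),
    "PCIe 2.x lane": (5.0e9, 8 / 10),
    "PCIe 3.x lane": (8.0e9, 128 / 130),
    "SATA III port": (6.0e9, 8 / 10),
}
for name, (raw_bits_per_s, efficiency) in links.items():
    mb_s = raw_bits_per_s * efficiency / 8 / 1e6
    print(f"{name}: ~{mb_s:.0f} MB/s usable")

So per lane it's roughly 250, 500, and 985 MB/s against SATA III's 600 MB/s, which is why only PCIe 3 wins the single-lane comparison.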
 
Using a real RAID card will allow one to create a bootable RAID 0 with little effort. But, other than having more than one drive, it's hardly redundant in my book. In this case, to me, redundant implies having some kind of mechanism to keep things going in case of a drive failure. RAID 0 is the only one that can't withstand that failure even with a real RAID card.
 
Using a real RAID card will allow one to create a bootable RAID 0 with little effort. But, other than having more than one drive, it's hardly redundant in my book. In this case, to me, redundant implies having some kind of mechanism to keep things going in case of a drive failure. RAID 0 is the only one that can't withstand that failure even with a real RAID card.


Don't forget JBOD! XD
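Since JBOD came up, here's a quick Python reference for how many drive failures each common layout can ride out (a sketch assuming equal-sized drives; RAID 10's figure is the guaranteed minimum):

# Drive failures each layout is guaranteed to survive with n equal drives.
# Neither RAID 0 nor JBOD offers any protection against a failed member.
def failures_tolerated(layout, n):
    return {
        "RAID 0":  0,
        "JBOD":    0,
        "RAID 1":  n - 1,   # any surviving copy is enough
        "RAID 5":  1,
        "RAID 6":  2,
        "RAID 10": 1,       # more if the losses land on different mirror pairs
    }[layout]

for layout in ("RAID 0", "JBOD", "RAID 1", "RAID 5", "RAID 6", "RAID 10"):
    print(f"{layout:7} with 4 drives survives {failures_tolerated(layout, 4)} failure(s)")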
 
Then we get into PCIe 1, 2, or 3. Of those, PCIe 3 is the only one whose per-lane rate beats SATA III: 8 Gb/s vs. 6 Gb/s

My understanding is that PCI-E is rated in gigatransfers per second (GT/s) per lane, with PCI-E 2.0 at 5 GT/s per lane. That is, 5 Gbit/s per lane, and with 8b/10b encoding (10 bits per byte on the wire) that works out to 500 MB/s per lane. RAID controllers, as far as I'm aware, typically use 8 lanes, for 4000 MB/s in total. However, if someone knows better, please do tell.
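Scaling that per-lane figure by lane count and comparing it against SATA ports (same rough numbers as above, so take this Python sketch with a grain of salt):

# An x8 PCIe 2.0 uplink versus fully saturated SATA III ports.
lanes = 8
per_lane_mb_s = 500                 # PCIe 2.0: 5 GT/s with 8b/10b encoding
uplink_mb_s = lanes * per_lane_mb_s

sata_port_mb_s = 600                # SATA III ceiling per port
ports_needed = uplink_mb_s / sata_port_mb_s
print(f"x{lanes} PCIe 2.0 uplink: ~{uplink_mb_s} MB/s")
print(f"That's roughly {ports_needed:.1f} fully saturated SATA III ports")

In other words, an x8 gen-2 card has headroom for six or seven flat-out SATA III devices before the uplink itself becomes the bottleneck.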
 
RAID controllers use however many lanes the card is built for. There are a great many x8 cards out there, but there are even more x4s! I have to admit I haven't seen any x16 cards, but servers are being built to house them, so they must exist.

But it is far more common to see x8 cards, and PCIe v3 runs at about 985 MB/s per lane, which is plenty fast for some seriously crazy stuff.
 
I played with Linux desktop distros (Ubuntu & Fedora) a few years ago, and drive issues were what annoyed me most. I've stayed away since then and chose a MacBook Pro as my main working machine. However, I still use Linux servers for web hosting. It's good to learn things with Linux, and it's fun to play with, at least for me.
 