[WARNING] Linux still not so much fun

So I've been thinking about this. All you supposed tech experts seem as afraid of RAID 0 as the Catholic priest's altar boy on confession Sunday.

Immediately it comes to me that we have been taught to use the wrong acronym. The very first word the acronym RAID stands for is "Redundant", and there is nothing redundant about a stripe. So if you throw the word RAID away and think in terms of disk striping, it helps. Striping is all about performance and absolutely nothing about safety or redundancy. I mean c'mon, you're not going to buy a Corvette to haul your 24' camping trailer.
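If it helps to see the striping idea in code, here's a rough Python sketch of the block layout. The 3 disks match the array being talked about here; the 64 KiB stripe unit is just a number I picked for illustration.

```python
# Rough sketch of how a RAID 0 stripe lays data out: logical blocks go
# round-robin across the member disks. The stripe unit size is an assumed
# example value, not anything from a real controller.

STRIPE_UNIT = 64 * 1024   # bytes per stripe unit (assumed)
N_DISKS = 3

def locate(logical_offset: int):
    """Map a logical byte offset to (disk index, byte offset on that disk)."""
    unit = logical_offset // STRIPE_UNIT     # which stripe unit this byte falls in
    disk = unit % N_DISKS                    # round-robin disk choice
    offset = (unit // N_DISKS) * STRIPE_UNIT + logical_offset % STRIPE_UNIT
    return disk, offset

for off in (0, 64 * 1024, 128 * 1024, 192 * 1024):
    disk, offset = locate(off)
    print(f"logical {off:>7} -> disk {disk}, offset {offset}")
```

Consecutive units land on different spindles, which is exactly where the performance comes from.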

And even if you guys are scared to death of Striping I think it's irrational. I'm no statistician but the probability of a catastrophic failure of multiple disks in a stripe is only slightly higher than the probability of any single disk failing. And if that single disk failed you'd have to recover it or reinstall. There is absolutely zero difference in your actions between a single disk failing and a stripe failing.

So what's the extra scary technician-cringing god-awful failure point on a stripe? As near as I can tell the only extra vulnerability would be an unrecoverable read error in the array signature. Again, an extremely small probability. And even if it happens what do you do? Replace the drive, reconfigure the array controller, write the new signature, then reinstall or re-image the machine. (Technically speaking, if you did a complete clone of the failed drive and then simply re-wrote the array signature on the new drive the system should be 100% usable again.)

Striping is a performance enhancement and it can be quite significant. There's no reason to fear it since modern disk controllers will continue to sense and correct the vast majority of disk errors that occur. The fear is a holdover from old disk controllers that had little or no error correction built into them.

It's like that Corvette. You will have higher maintenance costs than you would with the Corolla, but you know that up front. Striping is the same way: it's higher maintenance, but it's not brain surgery or rocket science. I can rebuild my 3-drive stripe, boot from a Win7 DVD, and restore from my USB-3 external drive in less than 30 minutes.

It's not for the casual end-user and I never said it was. So get outta that confessional and quit trying to excommunicate me.
 
Actually, a 3-disk stripe is roughly 3x more likely to fault than a single disk. That's statistically huge.
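Quick sanity check on the math, assuming independent failures and a purely illustrative 3% annual failure rate per drive:

```python
# P(stripe fails) = P(at least one member disk fails) = 1 - (1 - p)^n,
# assuming independent failures. p here is an assumed, illustrative number.

p = 0.03  # assumed annual failure probability for one drive (illustrative)

def stripe_failure_prob(p: float, n_disks: int) -> float:
    """Probability that at least one of n independent disks fails."""
    return 1 - (1 - p) ** n_disks

for n in (1, 2, 3):
    print(f"{n} disk(s): {stripe_failure_prob(p, n):.4f}")
# 1 disk(s): 0.0300
# 2 disk(s): 0.0591
# 3 disk(s): 0.0873   <- just shy of 3x the single-disk risk
```

For small per-drive failure rates, 1 - (1 - p)^n is almost exactly n * p, which is where the 3x comes from.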

I've used RAID 0 before, just not for a boot volume. There are some technically ugly hoops to jump through to do that with OS-provided RAID 0, which is where this thread started.

Hardware-provided RAID 0 would work.

The car analogy isn't bad, but even a sports car has a starter: a small electric motor designed to start the far more powerful engine. That's your boot partition. Every machine needs one.

P.S. Odds of fault when using commodity junk drives? Even higher. Not that it matters if it's simply an experimental platform.
 
In my mind SSDs replaced any speed advantages RAID 0 had and do so at a safety level a few orders of magnitude higher than RAID 0. After all, a conventional hard drive still has a 9 ms access time. Even in RAID 0, where the drive head closest to the data starts reading first, that only drops the number in half (~4 ms?). An SSD has a 0.09 ms access time. Once the SATA bus is filled (which a single SSD can do) it doesn't matter how fast data can be read, there's no place to go with it.
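Rough numbers behind that, all ballpark assumptions rather than measurements:

```python
# Back-of-the-envelope comparison. Every figure here is a rough assumption
# for illustration, not a benchmark result.

HDD_ACCESS_MS   = 9.0    # typical spinning-disk access time (assumed)
RAID0_ACCESS_MS = 4.5    # best case if the closest head wins the race (assumed)
SSD_ACCESS_MS   = 0.09   # typical SATA SSD access time (assumed)

SATA3_MAX_MBPS  = 600    # SATA III ceiling: 6 Gb/s minus 8b/10b encoding overhead
SSD_SEQ_MBPS    = 550    # sequential read of one decent SATA SSD (assumed)

print(f"single HDD vs SSD access time: {HDD_ACCESS_MS / SSD_ACCESS_MS:.0f}x slower")
print(f"striped HDDs vs SSD:           {RAID0_ACCESS_MS / SSD_ACCESS_MS:.0f}x slower")
print(f"one SSD fills {100 * SSD_SEQ_MBPS / SATA3_MAX_MBPS:.0f}% of a SATA III port")
```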

My younger days are littered with RAID 0 failures but please don't let all of our negativism dissuade you from healthy geek explorations. Just understand where it's coming from.
 
I have a nice Gigabyte AMD motherboard with a Phenom II X6 1100T 3.3GHz CPU and 8GB of RAM. Even though it's 8 years old, that's still a very capable CPU.

Since I had 3 identical SATA-II 160GB drives, I thought I'd build a fast little RAID 0 array.

Oh FFS.

You're not doing this for performance no matter what you say. This is putting a straight pipe and a fart can on a Civic or putting a watercooling rig on a Celeron so you can overclock it - or even sinking the whole thing in mineral oil or Fluorinert. This is so you can say "I did this horribly impractical thing to prove that I can."

When the thing you're trying to do is stupid and impractical for anyone not starting with "Hold my beer," don't complain that hardware and software developers haven't left things in place that are only useful to the stupid and impractical.
 
You're not doing this for performance no matter what you say...
This ^^
It had to be said...
 
In my mind SSDs replaced any speed advantages RAID 0

Two/four SSDs are faster than one.

Once the SATA bus is filled (which a single SSD can do) it doesn't matter how fast data can be read, there's no place to go with it

Dual/quad SATA buses perhaps? I believe even two SATA links still do not saturate PCI-E.

safety level a few orders of magnitude higher than RAID 0

RAID 10 will happily give you 4x read speed if the buses can handle it, and with no downside on reliability. The only problem is that you still need to boot from what is essentially a RAID 0.
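Rough math on "if the buses can handle it", using assumed per-drive numbers (real drives and controllers vary):

```python
# Aggregate read bandwidth of a few SATA SSDs vs. the controller's uplink.
# The per-SSD figure and the uplink list are assumptions for illustration.

SSD_READ_MBPS = 550                # one SATA III SSD, assumed sequential read
uplinks = {
    "PCIe 2.0 x4": 4 * 500,        # ~500 MB/s per lane after 8b/10b encoding
    "PCIe 2.0 x8": 8 * 500,
    "PCIe 3.0 x8": 8 * 985,        # ~985 MB/s per lane after 128b/130b encoding
}

for drives in (2, 4):
    total = drives * SSD_READ_MBPS
    print(f"{drives} SSDs read in parallel: ~{total} MB/s")
    for name, cap in uplinks.items():
        verdict = "fits" if total <= cap else "bottlenecked"
        print(f"    vs {name} (~{cap} MB/s): {verdict}")
```

So a 4-way read only starts to strain a narrow PCIe 2.0 link; anything wider has headroom.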
 
...
Dual/quad SATA buses perhaps? I believe even two SATA links still do not saturate PCI-E...

But a single SSD is already faster than any hard drive's access time.

Then we get into PCIe 1, 2, or 3. Of those, PCIe 3 is the only one rated faster per lane than SATA III: 8 Gb/s vs. 6 Gb/s, but even that doesn't tell the whole story, as the PCIe bus is usually busy with other traffic and carries its own protocol overhead.
 
Using a real RAID card will allow one to create a bootable RAID 0 with little effort. But, other than having more than one drive, it's hardly redundant in my book. In this case, to me, redundant implies having some kind of mechanism to keep things going in case of a drive failure. RAID 0 is the only one that can't withstand that failure even with a real RAID card.
 
...RAID 0 is the only one that can't withstand that failure even with a real RAID card.


Don't forget JBOD! XD
 
Then we get into PCIe 1, 2, or 3. Of those, PCIe 3 is the only one rated faster per lane than SATA III: 8 Gb/s vs. 6 Gb/s

My understanding is PCI-E is rated in gigatransfers per second per lane, with PCI-E 2.0 at 5 GT/s per lane. That is, 5 Gbit/s per lane with 10 bits on the wire per byte (8b/10b encoding). So, 500 MB/s per lane. RAID controllers, as far as I'm aware, typically utilize 8 lanes, 4000 MB/s in total. However, if someone knows better, please do tell.
 
RAID controllers use however many lanes the card is built to use. There are a great many 8-lane cards out there, but there are even more x4s! I have to admit, I haven't seen any 16-lane cards, but servers are being built to house them, so they must exist.

But it is far more common to see x8 cards, and PCIe v3 runs at 985 MB/s per lane, which is plenty fast for some seriously crazy stuff.
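For anyone who wants the per-lane arithmetic spelled out, here's a quick sketch; the transfer rates and encodings are the published PCIe and SATA figures:

```python
# Effective per-lane bandwidth = transfers/sec * encoding efficiency / 8 bits per byte.
# PCIe 1.x and 2.0 use 8b/10b encoding; PCIe 3.0 uses 128b/130b.

gens = {
    "PCIe 1.x": (2.5e9, 8 / 10),
    "PCIe 2.0": (5.0e9, 8 / 10),
    "PCIe 3.0": (8.0e9, 128 / 130),
}

for name, (transfers_per_s, efficiency) in gens.items():
    mb_per_s = transfers_per_s * efficiency / 8 / 1e6
    print(f"{name}: ~{mb_per_s:.0f} MB/s per lane, x8 link ~{8 * mb_per_s:.0f} MB/s")

sata3 = 6.0e9 * (8 / 10) / 8 / 1e6
print(f"SATA III for comparison: ~{sata3:.0f} MB/s per port")
```

That works out to roughly 250/500/985 MB/s per lane for PCIe 1/2/3, against about 600 MB/s for a SATA III port.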
 
I played with Linux desktop distros (Ubuntu & Fedora) a while ago, and drive issues were what annoyed me the most. I stay away from it now and chose a MacBook Pro as my main working machine. However, I still use a Linux server for web hosting. It's good to learn things with Linux, and it's fun to play with, at least for me.
 