Help with RAID 5

That's good news. It means the disk headers are most likely intact if R-Studio sees them as an array. So, hopefully, when you mount them, which R-Studio can do, they will mount as an array. It's odd that it's showing it as a 3 TB array, though.

Is this the link you were referring to? - https://www.technibble.com/guide-using-ddrescue-recover-data/

That's the one. You're saying R-Studio will be able to mount the *.img created by ddrescue? (would suck if I'd imaged all night for nothing!)
I'm curious why R-Studio isn't able to see the individual disks within the array. I should have grabbed a screenshot, but I wasn't thinking that far ahead. It did show the "Intel RAID volume" with the proper volume size... just not the individual disks.

With as slow as ddrescue is going, I may pull the external drive out of its enclosure and connect it via SATA. (It's a borrowed drive, so I was trying to avoid that.)
 
USB 2 is SLOW. Imaging all three drives may take several days depending on your machine, and an equal amount of time or more to get the data off.

Back to the RAID thing. When an HD is prepared for use, information is written to the beginning of the drive. This tells OSes and BIOSes how to handle the drive, and it happens even with RAID. Think about it: how can a computer know how to handle a drive if it is not told what type of drive it is? If nothing were on the drive, no one would know what to do, and moving it to a new computer would mean starting from scratch every time. This even happens with real hardware RAID like PERCs. It's rare, but sometimes a RAID config gets corrupted, usually on the card; when you boot the machine you will see a message about a mismatch between what is on the card and what is stored on the HD.
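If you're curious what that on-disk metadata actually looks like, both of these are read-only and safe to poke at (the device name is just an example):

dd if=/dev/sda bs=512 count=4 2>/dev/null | hexdump -C | head -n 20    # first few sectors: partition table / boot area
mdadm --examine /dev/sda    # on Linux, reports any RAID metadata it recognizes, including Intel's IMSM format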
 
root@partedmagic: /media/sdf1# ddrescue -d -r3 /dev/sda raid5disk1.img raid5disk1.logfile

Took that straight from a TN guide to using ddrescue, fwiw
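For anyone following along, those switches break down as:

-d  use direct disc access for the source drive, bypassing the kernel cache
-r3  retry bad sectors for up to 3 passes before giving up
/dev/sda  the source drive; raid5disk1.img  the destination image; raid5disk1.logfile  the logfile ddrescue uses to track what has been read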

Drive being imaged is connected via SATA... destination is a 4TB external USB (2.0... that machine doesn't have 3, sadly)

Now, I use dd, so check whether ddrescue has an equivalent option for changing the block size (bs=xx). This will make a world of difference in the speed at which you can clone. I know everyone here is hot on ddrescue, but dd works just fine.

I recently cloned one of my RAID 1 disks because I screwed up the 2nd disk. Running without the bs switch gave me a throughput of about 35 MB/s. I stopped it, added the bs option, and obtained a speed of about 150 MB/s as I remember. This is with a USB 3 docking station.


dd if=/dev/sda of=/dev/sdb conv=noerror,sync bs=2M

if= the drive or file to be cloned (source)
of= where to clone it to (drive or img file)
conv=noerror,sync noerror = keep going instead of aborting on read errors; sync = pad failed/short reads with zeros so offsets stay aligned
bs= read/write N bytes at a time (M = megabytes)

Try this and see if your speed greatly increases. At some point (2, 3, 4M) you hit diminishing returns; I settled on 2M. Your mileage may vary on your system.
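If you want to find that sweet spot empirically, a quick read-only test along these lines works (it only reads from the source and writes nothing; /dev/sda is an example):

for bs in 512K 1M 2M 4M; do
  echo "bs=$bs:"
  dd if=/dev/sda of=/dev/null bs=$bs count=200 iflag=direct 2>&1 | tail -n 1
done

iflag=direct bypasses the cache so the numbers aren't skewed by earlier runs.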
 
Are you familiar with the ddrescue log file? If not, you'll want to be. Run through once with settings that don't try to pull data from bad areas, just to get the vast majority of the drive; then run through again with the same logfile, and ddrescue will only go after the areas not previously imaged.
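In practice that two-pass approach looks something like this (same image and logfile both times):

# pass 1: -n skips the slow scraping of bad areas and grabs the easy data first
ddrescue -n /dev/sda raid5disk1.img raid5disk1.logfile
# pass 2: same logfile, so only the areas still missing get retried
ddrescue -d -r3 /dev/sda raid5disk1.img raid5disk1.logfile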
 

Yes, I am very familiar with it. I've just never needed it; I do very well with dd.
 
[Attached image: good-news-everyone.png]
I just started imaging disk 2, which is the one that had originally failed. So far it's moving right along with just a single error, and at a much faster speed than disk 1. I ended up throwing a USB 3 card in that machine to speed up the transfer. I tried pulling the external drive out of its enclosure and connecting it via SATA, but I don't think the controller supports that large a drive (it's an old board). Still, 35,870 kB/s is a helluva lot better than 18,600.
 
Update:
The 2nd disk has finished imaging, and I'm able to open the first two images in R-Studio as expected. Only 14-ish more hours of imaging to go! :rolleyes:
(Is that normal, btw, or is my machine just stupid slow? Even Clonezilla takes hours.)

A couple of you have offered your help, remotely or otherwise. I appreciate the offers, and anticipate being ready to start the recovery portion in the next day or two. I'll be in touch ;)
 
[Attached screenshot: Capture_zpstqgusy6b.png]


I have a 1 TB external... figured I would move one of the images there to free up space.
Image size: 931.5 GB
HDD size: 931.48 GB

#FML
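Lesson learned: compare exact byte counts before shuffling images around. Something like this would have saved me the trip (device and file names are from my setup, so adjust):

blockdev --getsize64 /dev/sdb    # destination drive capacity, in bytes
stat -c %s raid5disk1.img        # image file size, in bytes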
 
Hey guys, remember how frustrated I was that the RAID array never showed as "degraded" and never gave me the option to rebuild with a new disk? Well, I think I figured out why. The disk I was trying to use as a replacement had a couple of red flags in its SMART attributes. I picked up a brand-new drive today, and when I popped it in and started the computer, it FINALLY showed the array as degraded and gave me the option to rebuild.

So. I guess that's a good lesson to walk away with... I just wish it hadn't taken 2 weeks of my time to figure out!
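If you want to spot those SMART red flags from the command line before trusting a drive, smartmontools makes it quick (/dev/sdX is a placeholder):

smartctl -A /dev/sdX | grep -Ei 'realloc|pending|uncorrect'

Non-zero raw values on the reallocated, pending, or uncorrectable sector attributes are the classic warning signs.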

I was able to recover all the data, by the way. I'll review the steps real quick for anyone who wasn't following along.

  1. Imaged each member drive using ddrescue (following the guide HERE)
  2. Saved images to external storage
  3. Opened images in R-Studio
  4. Created a Virtual Block Raid, and added each image in the correct order
  5. Determined (through trial and error) the correct RAID parameters.
    In my case, with a RAID5 on the Intel RST controller, the parameters were:
    1. 0 sector offset
    2. Left Asynchronous block order (see the layout sketch after this list)
    3. No parity delay
    4. 4 rows
    5. 128K block size
  6. Recovered data to external storage
  7. WIN
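For anyone puzzling over the "Left Asynchronous" name: as best I understand R-Studio's terminology, parity rotates from the last disk toward the first, and data blocks are numbered straight across each row, skipping the parity block. A sketch for a 3-disk set:

           Disk 1    Disk 2    Disk 3
  Row 1:   D0        D1        P
  Row 2:   D2        P         D3
  Row 3:   P         D4        D5
  Row 4:   D6        D7        P      (then the pattern repeats)

If the parameters are wrong, the reconstructed volume tends to come out garbled, which is presumably why trial and error works: you tweak until real files appear.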

Hopefully this helps someone in the future. Thanks to everyone who helped out with tips and guidance!
 
I was going to show you how to get the RAID config in about 10 seconds. ;) Happy that you got it recovered, though. I assume that you figured out which drive failed first and made sure that it wasn't in the virtual rebuild so as to avoid recovering corrupted data, right?
 

That would have been helpful! LOL
I did include the "bad" drive in the virtual rebuild, but only because I saw during the imaging process that it was just a single sector that had gone bad. I'm not exactly sure what files, if any, were affected by that, but everything of importance is good to go. I did notice a folder full of video files that had an X on them in R-Studio... but even those recovered just fine.

Just out of curiosity... how long would it have taken to just let the array rebuild with a new disk? (The array ended up having about 1 TB of data total)
 
> That would have been helpful! LOL
That is why I offered. For some reason, there are those on this forum who would rather spend 2 weeks getting a less-than-perfect recovery than take 2 minutes to talk directly to one of the few members on this forum who is a full-time data recovery professional.
> I did include the "bad" drive in the virtual rebuild, but only because I saw during the imaging process that it was just a single sector that had gone bad. I'm not exactly sure what files, if any, were affected by that, but everything of importance is good to go. I did notice a folder full of video files that had an X on them in R-Studio... but even those recovered just fine.
So, basically, any file that sits on the drive after the bad sector will have a very good chance of being corrupt, as no sectors after the bad sector will have been rebuilt from parity.
> Just out of curiosity... how long would it have taken to just let the array rebuild with a new disk? (The array ended up having about 1 TB of data total)
This I cannot answer, as it is not something a data recovery professional would ever need or want to do. We always clone the drives and reconstruct virtually from the clones. This entire project should not have taken more than 3 days, taking into consideration the time it takes to transfer the data to the destination drive.
 
> Just out of curiosity... how long would it have taken to just let the array rebuild with a new disk? (The array ended up having about 1 TB of data total)

If you are talking about the new, good drive in the original machine, it depends a lot on the drives. I have two R710s, both with PERC 6 controllers; one has SAS drives and one has regular SATA drives. A rebuild should take about as long as a build. It's been a while since I did the SAS box, but 6 x 300 GB 2.5" 10k drives took maybe 4-5 hours to build a RAID5. The second one, 6 x 2 TB SATA 7.2k drives, took something like 30 hours to build a RAID5. Since you are using software RAID, that adds to the overhead. But you should be able to boot and use the machine while it runs in the background.
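As a rough floor, a rebuild has to write every sector of one member disk, so member size divided by sustained write speed gives a best case. For a 1 TB member (the 120 MB/s below is just an assumed sustained rate):

echo "scale=1; 1000000 / 120 / 3600" | bc    # 1 TB / 120 MB/s -> ~2.3 hours, best case

Real-world numbers like the 30 hours above show how much controller overhead and concurrent I/O stretch that.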
 

Thanks, Mark.
 