Speeding up ddrescue

Haole Boy

Aloha. I use ddrescue to image every hard drive that comes my way. I'm creating an image file, not doing a clone operation. I just imaged a 2TB drive and it took 12 hours, so I'm wondering if there are some different parameters I can specify on the ddrescue command line that might speed this operation up.

NOTE: I understand that a drive with many bad sectors will take a long time to image. For this question, I'm only referring to creating an image of a "healthy" drive. Yeah, this one has a few errors, but other 2TB drives I've imaged also took around 12 hours.

Here's what ddrescue reported after the image finished. Take a look at the "average rate" figure. I have no reference to know if this is "slow" or "about right".

harryz@LinuxBench /media/Cust_Data/Don $ ddrescue -d /dev/sda don.img don.logfile
GNU ddrescue 1.22
       ipos:   32360 MB,  non-trimmed:        0 B,   current rate:       0 B/s
       opos:   32360 MB,  non-scraped:        0 B,   average rate:  45898 kB/s
  non-tried:        0 B,   bad-sector:    32768 B,     error rate:     128 B/s
    rescued:    2000 GB,    bad areas:        8,         run time: 12h 6m 23s
pct rescued:    99.99%,   read errors:       73,   remaining time:         n/a
                          time since last successful read:              2m 46s
Finished
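For reference, the reported "average rate" can be sanity-checked from the rescued size and run time in the report above (ddrescue uses decimal units, so GB = 10^9 bytes and kB = 10^3 bytes):

```shell
# Sanity-check ddrescue's "average rate": rescued bytes / total run time.
rescued_bytes=2000000000000               # rescued: 2000 GB (decimal)
run_seconds=$(( 12*3600 + 6*60 + 23 ))    # run time: 12h 6m 23s
rate_kbs=$(( rescued_bytes / run_seconds / 1000 ))
echo "${rate_kbs} kB/s"                   # within rounding of the reported 45898 kB/s
```

So the reported figure is simply total data over total time; the question is whether ~46 MB/s is reasonable for the hardware involved.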


I'm running ddrescue on a Linux Mint 18.1 (I think) machine. The 'source' hard drive is in a SATA connected dock that supports SATA III. The target "drive" is a Synology DS416 with 4 x 2TB WD Black drives, connected via ethernet (wired) to the same router as the PC.

Mahalo for your time,

Harry Z
 
You could play around with -c to adjust cluster size, but you're not going to speed it up much, if at all. But aren't you imaging a whole lot of empty space? Why not use fsarchiver or something that gets only the allocated space?
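For what it's worth, fsarchiver works per filesystem (a partition, not the whole disk) and saves only allocated blocks, so free space is skipped entirely. A minimal sketch with hypothetical paths, printed as a dry run so nothing actually executes:

```shell
# Hypothetical fsarchiver usage: archives only the allocated blocks of a
# filesystem, skipping free space. Device and destination are examples.
DEV=/dev/sda1                          # a partition, not the whole disk
ARCHIVE=/media/Cust_Data/Don/don.fsa   # example destination path
# Dry run: drop the 'echo' to execute for real.
echo fsarchiver savefs "$ARCHIVE" "$DEV"
```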
 
There's not much you can do to speed up ddrescue except for possibly your hardware setup. Is the network Gigabit to the NAS? Is Linux using your ESATA/USB3.0 controller correctly ("Driver" working)?

That drive doesn't look very healthy from the report, and 45MB/s is pretty slow for a 2TB drive. Unfortunately, ddrescue can't make a drive read faster than it is capable of, so the drive itself is usually the weak link.

You say you image your drives routinely over the network.. is this a "normal speed" that you commonly deal with, or does it go faster usually?

You can test your bottlenecks by dropping in an SSD for cloning. Personally I put all my drives direct to SATA, no USB or ESATA anything. No network, etc.
 
What they said, plus I suspect that the Synology is the bottleneck. Presuming that those 4xWD Blacks are in a RAID, that will add considerable overhead to the write.

So, for otherwise serviceable drives, use any software that only copies used sectors and stick to copying to a local drive. If you want to store an archive on the Synology, you can always transfer from the local backup drive later, when you're not waiting on it finishing.
 
I have 2 specially made DDRescue rigs. They were both made from desktops that users didn't want fixed. I don't remember the specs offhand but I know they are not identical. One rig has a dual core CPU and the other is a Core i5. On either rig, it takes 6 hours to run DDRescue on a 1 TB drive. Based on that info, 12 hours for 2 TB is exactly right.

phaZed is right "ddrescue can't make a drive read faster than it is capable, so it is usually the drive that is the slow-link."
 
A friendly comment from a data recovery perspective on degraded drives: often you cannot image just the data, because the damage lies within important file-system structures (e.g. the catalog file, MFT, or superblock), so whatever a file-level imaging tool attempts to copy may be incomplete. A sector-by-sector copy is necessary for later examination, reconstruction, scans, etc. And given the limitations of free imaging tools, there aren't many options there.

I also agree that the best setup for optimal stability and speed when working with slightly degraded drives is SATA-to-SATA. Ideally, even though it may seem like forensic overkill, a write blocker (read only) would help, as it prevents the OS from constantly beating up the drive with automatic mount and write attempts.
 
A friendly comment from a data recovery perspective on degraded drives:
He is imaging ALL drives that come in to the shop. Not just the iffy ones.

I only image when the drive is iffy or there is some software that cannot be easily reinstalled. Macrium is my choice.
I do, however, get a Fab's backup of the rest, just in case.
 
If they are working drives, use something else (faster).
Save ddrescue for bad-drive imaging.

Aloha Porthos. Thanx for the reply. Question... how do you determine that a drive is "good" or "bad" before doing the backup? Or do you just run your backup program and if it has issues, then run ddrescue?

Obviously, if SMART shows a lot of reallocated sectors or other error indicators, then it's ddrescue time. But, I think it's safe to say that we've all seen bad drives where SMART shows no errors, and my understanding is the more you try to read a bad drive (e.g.: by running a file system backup), the worse it's going to become. This is why I use ddrescue on every drive - one time through the drive to minimize damage. But, it does take a long time compared to file system backup programs.
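If it helps, smartmontools can do that quick pre-image check from the command line. Device path is an example; printed as a dry run so nothing runs against a real drive:

```shell
# Quick SMART triage with smartmontools before deciding how to image.
# Dry run: commands are printed, not executed; /dev/sda is an example.
DEV=/dev/sda
echo smartctl -H "$DEV"   # overall health self-assessment (PASSED/FAILED)
echo smartctl -A "$DEV"   # attributes: watch Reallocated_Sector_Ct and Current_Pending_Sector
```

As noted, a clean SMART report doesn't guarantee a healthy drive, but reallocated or pending sectors are a clear signal to go straight to ddrescue.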


Aloha phaZed. Even though this particular drive did have some errors on it, the speed I reference is consistent even with good drives. When I originally set up the Synology, the speed was much slower until I figured out how to connect to the Synology using the SMB3 protocol.

Not sure why I did not think of this before, but I just tried a ddrescue on this same disk with the target being a directory on the local drive. I let it run for a few minutes (not much free space available), and the transfer rate was much higher, with an estimated time of just over 4 hours. The control panel on the Synology lists my network connection as 1000 Mbps, full duplex, MTU 1500. So, my bottleneck appears to be in the Synology itself. Guess I'm off to the Synology forums to see if there is anything that can be done.
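A rough cross-check of those numbers (decimal megabytes, shell integer math):

```shell
# Gigabit Ethernet tops out around 125 MB/s on the wire, so the observed
# ~46 MB/s to the NAS is well under the link speed -- pointing at the
# Synology's write path (RAID overhead, SMB) rather than the network itself.
gbe_mbs=$(( 1000 / 8 ))                  # 1000 Mbps / 8 = 125 MB/s
local_mbs=$(( 2000000 / (4 * 3600) ))    # ~2 TB in ~4 h to a local disk
echo "link: ${gbe_mbs} MB/s, local-target estimate: ${local_mbs} MB/s"
```

The local-drive estimate (~138 MB/s) would itself exceed what Gigabit can carry, so even a perfectly tuned NAS could never match imaging to a local SATA disk.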

Thanx for getting me pointed in the correct direction!

Mahalo to everyone for responding!

Harry Z.
 
All computers I work on are booted with a PE disk, and CrystalDiskInfo and GSmartControl are run IN FRONT of the client. I don't care if they only need a screen or keyboard.

If any issues arise at that point, a new drive is offered then. Now the client interview begins, and the discussion on data importance is top priority. New drives almost always get a clean install.

The client is now informed what the costs will be and it will be up to them where we go next.

If the data is worth the cost of lab recovery, or I can tell or am told they really NEED the data, I stop. I will not do anything else to that drive at that point. I will sell and install a new drive and move on, and bag and box the old one till they're ready ($$) to get the data recovered.

If I think I need to use something like ddrescue, it goes to a recovery lab. I usually let the client send the drive to the lab and end my involvement with the drive there.

I have been burned in the past and follow the below for those who do not "care" about the data.

If it is not important enough to spend $300-500 on, and after a separate release is explained and signed (video/audio recorded as well), I will attempt to recover the data for a fee (a non-refundable $50 attempt fee for my time, charged in advance).

Disclaimer: to date, I have had only two clients pay for the lab in 15 years. The others were either lucky that what was needed was recovered by Fab's, or did not care/need it. Around here, home users are not willing to spend money on data recovery.


Since I am 99% home break/fix, I don't see much software: mostly Office (most of the time not legit), some Quicken and QuickBooks, and the odd offline genealogy program from time to time.
Printer drivers are no issue.

Drives that have no smart issues get backed up with Fab's while it is still booted with the PE disk.

All computers leave my shop with Win 10. The people I deal with care more about how much fixing the computer will cost since most of them are sub $500 systems.

If Win 10 is an issue for them (has been only once) I charge accordingly for the extra time and hassle it is for me.

I don't do upgrades, only CLEAN installs.

If the computer is older than Win 7, it is replaced with a new or refurb computer, no exceptions, unless they are willing to pay for a retail copy of Win 10, enough RAM, the cost of data backup, my install labor, and sometimes a new SSD. (That always costs more than a refurb replacement, so it has happened only once. That was a good custom build that came in with a bootleg OS.)

I have only seen 2 infected computers since Win 10 was released.

When I see some of those weird issues many here report, 99% of the time it will be an N&P (nuke and pave) on that system.

Now for those wondering, My general prices are below. I am lucky to get these amounts.
All parts and software are extra of course.

Install OS (win 10)/tune up/ setup new computer. $80. Other supported OS's $199.

This includes total backup/transfer of up to 40 gigs of data, as calculated by Fab's. Over 40, it's $0.50 a gig. Additional user accounts are $20 extra each to set up on top of that. (Takes time to customize each additional one.)

Office (legal) Reinstall

2007/2010 $30 no outlook $45 with outlook restore.

2013/2016/365 $40 no outlook $55 with outlook. Must know office account credentials and will verify account from shop computer before work is started.

Libre office installed for free. :)

HD, screen, and KB replacements for laptops depend on the difficulty of the job, which is determined after I know the model #, and are flat-rated.

I do not do any motherboard work at all. To me, bad boards mean replacement systems and setups. Again my clients usually have sub $500 systems. :rolleyes:

I will only sell or recommend business class, new or refurbished.
 
If you feel that you want the images on the server, yet want to speed up the imaging process, perhaps you need to break it into steps.

1. Image to a file on a local drive connected as a slave for imaging
2. After the image is complete, decide whether or not you want to compress the image
3. Copy the image to the NAS and let it run in the background while you move forward with your work
4. When you are sure all is good, clear the local space for the next project
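The steps above might look something like this (all paths are hypothetical, and the commands are printed as a dry run so nothing touches a real disk):

```shell
# Stage 1: image to a fast local SATA disk; Stage 2: push to the NAS in
# the background. Dry run: drop the 'echo's to run for real.
SRC=/dev/sda
LOCAL=/media/scratch            # local imaging disk (example path)
NAS=/media/Cust_Data/Don        # mounted NAS share (example path)
LABEL=don

echo ddrescue -d "$SRC" "$LOCAL/$LABEL.img" "$LOCAL/$LABEL.logfile"
# Optional: compress before the slower network hop.
echo gzip "$LOCAL/$LABEL.img"
echo rsync -av "$LOCAL/$LABEL.img.gz" "$LOCAL/$LABEL.logfile" "$NAS/"
```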

As for me, unless I'm dealing with something under 100GB, I'm a drive-to-drive kind of guy. Not only is the NAS always going to be a slower option, it could also pose an issue across multiple projects if it fails while you still need the data stored on it.
 
Same here. Most streamlined and worry free generally speaking.
 
@Porthos Mahalo for the info. Certainly a lot for me to think about. I also have a mostly residential client set and have only had 1 client agree to send a failing drive to a professional recovery company. The rest are OK with best effort to recover whatever I can.

@lcoughey and @labtech and whoever else suggested that I image to a local drive then migrate the image to the NAS: Mahalo for this suggestion.

Thanx to everyone who responded. I learn so much every time I post here.

Harry Z
 