Fab's AutoBackup 7 Pro - a must-have tool for techs

When I selected the Google folder, I assumed that included all sub-folders, including the Chrome\User Data folder.
Exclusion does not prevent Fab's from looking into such folders; it only prevents it from listing what's inside for copying.
The search engine crawls inside folders. If it finds a file that is part of an exclusion, it will not add it to the copy list. I don't know if I can tweak that a bit so it will not even look into it.
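A minimal sketch of the kind of pruning being discussed, assuming a Python-style directory walk (the exclusion path and function name here are hypothetical, not Fab's actual implementation):

```python
import os

# Hypothetical exclusion list; Fab's real matching rules are not public.
EXCLUDED_DIRS = {os.path.join("Google", "Chrome", "User Data")}

def find_files(root):
    """Walk `root`, pruning excluded sub-folders so they are never entered."""
    found = []
    for dirpath, dirnames, filenames in os.walk(root):
        # Prune in place: os.walk will not descend into removed entries,
        # so a folder sitting on bad sectors is never even opened.
        dirnames[:] = [
            d for d in dirnames
            if not any(os.path.join(dirpath, d).endswith(ex)
                       for ex in EXCLUDED_DIRS)
        ]
        found.extend(os.path.join(dirpath, f) for f in filenames)
    return found
```

The key point is modifying `dirnames` in place before `os.walk` descends: the excluded folder is skipped entirely rather than crawled and then filtered out afterwards.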
 
In cases like mine, where I suspect there are bad sectors within one of the sub-folders, it would be best not to look, if that's possible. :)
 
I'm currently working on it. Hopefully I'll find something. So far, not so good.

Edit : I've found something that looks suitable. Needs some more testing.
 
@Larry Sabo : Test results are good. In your case, it would display that it's processing the Google Chrome profile, but only for the blink of an eye (not sure you could even see it), then skip to the next item without starting any file search.
 

Is it not possible to have Fabs ignore bad sectors, or only repeat trying X number of times on a given sector it cannot read?

It's always been my preference that any "mass data copying" utility, whether cloning software or something such as Fabs, can be told to ignore sectors marked as bad and/or to make only a very few read attempts when it hits something akin to "a needle skip and jump back" on an old vinyl album. If it can't be read on the first pass, I don't need or want 500 more attempts; just a couple are fine, thanks, then give up and move along.
 
That's not possible, as Fab's operates at the file level, not at the sector level like cloning software does. However, there is a timeout feature that will skip such uncopiable files and move on to the next one.
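The timeout-and-skip behaviour described could be sketched like this in Python (the function name and default timeout are illustrative; Fab's actual mechanism is not public):

```python
import shutil
import threading

def copy_with_timeout(src, dst, timeout=30.0):
    """Try to copy one file; give up after `timeout` seconds instead of
    hanging indefinitely on a file sitting on bad sectors. Sketch only."""
    errors = []

    def worker():
        try:
            shutil.copy2(src, dst)
        except OSError as e:
            errors.append(e)

    # A daemon thread does the copy so the main loop can abandon it.
    t = threading.Thread(target=worker, daemon=True)
    t.start()
    t.join(timeout)
    if t.is_alive():
        return False, "timed out"       # skip to the next file
    if errors:
        return False, str(errors[0])    # unreadable; log and move on
    return True, "ok"
```

One caveat with this approach: the abandoned worker thread cannot actually be killed in Python, so a truly stuck read keeps a thread alive in the background; a real implementation would need OS-level cancellation.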
 
Aren't exception cases a grand PITA, no matter where or when?!

When everything goes according to plan, far less of the code gets hit than when all sorts of exception handling come into the picture!
 
I appreciate the response. That being said, based on what @Larry Sabo described happening, it seems to me like the number of retries needs to be adjusted down, way down. Or, if retries is not the way to look at it, the amount of time taken on retries needs to be reduced significantly.

I think if @fabs is getting into the data recovery business instead of just the data copying business he's going to need to increase his software rates a LOT. I understand the frustration when things go badly but when evidence of disk damage appears it's time to make a clone, then fix the clone, then pull the data after the sectors are fixed.
 

Well, I think there can be a happy medium.

Sometimes you don't necessarily know that there is any disk damage; the evidence presents itself during the mass copy. If that copy is already in progress, having some reasonable (and I know that definition differs from person to person) timeout mechanism in place for whenever damage is encountered "on the fly" is that happy medium.
 
This. I think everyone is asking to fix something that isn’t really broken. It’s beyond the normal operating conditions for the utility.
 

Something being broken and being beyond the typical operating conditions for a given thing are not mutually exclusive.

If Fabs is hanging for an inordinate amount of time when it encounters "something off" during its normal processing, that, to me, is broken. Given the volumes of data anticipated, and the distinct possibility of a "less than perfect" source drive in many circumstances, handling an unreadable condition with grace should be expected.

This isn't a direct criticism of Fabs, specifically, but I find most copying utilities (regardless of specific type) devote way, way, way too much time to retrying when a fault is encountered rather than, after a very finite (100 or fewer) number of tries, moving along. We all know that if you do not get something to read from disk in the first couple of tries, the probability that it will ever read is very small indeed. That's why I like things to have very low retry counts/times whenever possible. When conditions are normal, it just doesn't matter and never comes into the picture. When an exception condition on reading is hit, it allows the process to continue in a timely manner, and an error message can be presented, whether during processing, at the end of processing, in the log, or all of the above.
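The bounded-retry idea being argued for could be sketched as follows (the function, retry count, and delay are illustrative assumptions, not any utility's actual code):

```python
import time

def read_with_bounded_retries(path, offset, size, max_tries=3, delay=0.1):
    """Attempt a read at most `max_tries` times, then give up and report,
    instead of hammering a bad spot hundreds of times. Sketch only."""
    last_err = None
    for attempt in range(1, max_tries + 1):
        try:
            with open(path, "rb") as f:
                f.seek(offset)
                return f.read(size)
        except OSError as e:
            last_err = e
            time.sleep(delay)  # brief pause; a bad sector rarely recovers
    # Fail loudly after the retry budget is spent, so the caller can log
    # the error and move on to the next file.
    raise OSError(f"gave up after {max_tries} tries: {last_err}")
```

The design point is simply that the retry budget is a small, explicit parameter rather than an open-ended loop, so a failed read costs fractions of a second instead of minutes.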
 
If you must have this, a skip button is probably the best answer.
 
On a similar topic I find it frustrating that Fabs will continue trying to copy every file even if the drive drops out completely. Does a file copy failure not give any indication of the drive not being present? Maybe there's an easy way to check after a file copy failure if the volume is even present, then Fabs could prompt asking to retry or abort.

I've had this happen because of sick drives, or dodgy USB ports.

I don't see these potential features being discussed today as data recovery features, but as improvements to error handling. A large percentage of source drives being used with Fabs will have bad sectors or other problems that are perhaps unknown to the user (e.g. not showing in a SMART check).
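The suggested "check whether the volume is even present after a failure" behaviour could look something like this (a sketch of the proposed logic, assuming hypothetical names; not Fab's actual code):

```python
import os
import shutil

def copy_all(files, source_root, dest_root):
    """Copy a list of relative paths, but stop as soon as the source
    volume itself disappears instead of failing on every remaining file."""
    failed = []
    for rel in files:
        src = os.path.join(source_root, rel)
        dst = os.path.join(dest_root, rel)
        try:
            os.makedirs(os.path.dirname(dst), exist_ok=True)
            shutil.copy2(src, dst)
        except OSError as e:
            # Distinguish "this file is unreadable" from "the drive is gone".
            if not os.path.exists(source_root):
                raise RuntimeError("source volume dropped out; "
                                   "prompt user to retry or abort") from e
            failed.append((rel, str(e)))  # just this file; carry on
    return failed
```

A single `os.path.exists` check on the source root after each failure is cheap, and it is enough to turn thousands of doomed per-file errors into one retry/abort prompt.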
 
This! That's a real concern, and there's clearly an improvement to make here. Handling troubles while (just) listing the files to copy is probably not the right thing to chase, as that sounds like an alarm saying the data may be at risk. Since Fab's may have to adjust permissions on folders when it can't get at what's inside (so it writes to the disk somehow), it's better to work with a clone instead of letting it cause even more damage.
The fact that there is already a retry feature while copying, with a timeout before skipping to the next file, sounds like a good compromise to me.

Having it try to copy files while the source drive has dropped is a real problem and definitely needs to be addressed.
 