[REQUEST] Clone or Copy NAS Drive

Blues
I have a 2TB NAS that needs to be upgraded to a 4TB NAS. It's a simple home setup: OMV running on a Raspberry Pi with a USB enclosure housing a 2TB drive, and I have a 4TB replacement drive. I don't have a PC that runs Linux, so copying/replicating the drive from Windows will be difficult or impossible: the drive is ext4 format, which Windows can't read, and most Windows tools for that are poorly reviewed.

So what would you recommend for copying the data from the 2TB to the 4TB? Is there a cloning tool that will do this and leave the full 4TB accessible after cloning? I don't do this type of thing often, so I'm not well versed in the job. I have plugged the drive into a PC, booted Ubuntu from a USB thumb drive, and noticed the root folder on the drive matches the drive label in OMV. So I'm wondering whether OMV will automatically recognize the new drive, or whether I need to initialize the new drive in OMV first and then just do a straight copy from the 2TB to the 4TB within a Linux boot, after the 4TB has been initialized and set up under OMV.
 
I know nothing about OMV, but normally the easiest way to do that is a full image copy to the new drive using dd, then extend the partition to fill the drive using gparted. All that can be done under Ubuntu live.
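As a rough sketch of that dd-then-grow approach: the device names sdX/sdY below are placeholders (confirm the real ones with lsblk before touching anything). The runnable part demonstrates the clone-and-verify step on ordinary temp files so it is safe to execute as-is.

```shell
# Safe demonstration of the clone step on ordinary files; on real disks you
# would substitute the device paths shown in the comments at the bottom.
SRC=$(mktemp)   # stand-in for the 2TB source disk
DST=$(mktemp)   # stand-in for the 4TB target disk
dd if=/dev/urandom of="$SRC" bs=1M count=4 2>/dev/null   # fake some data
dd if="$SRC" of="$DST" bs=1M 2>/dev/null                 # the actual clone
cmp -s "$SRC" "$DST" && echo "clone verified"

# On the real drives (as root, NAS powered off, both disks attached to the
# Ubuntu live box, and NEITHER drive mounted):
#   dd if=/dev/sdX of=/dev/sdY bs=64M status=progress
# then open gparted, grow the partition on sdY to fill the disk; gparted will
# normally resize the ext4 filesystem for you (it calls resize2fs itself).
```

The clone copies the partition table too, so the 4TB drive ends up with a 2TB partition until you grow it; that is why the gparted step is needed.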

Personally, though, I'd use it as an opportunity to switch to btrfs, mainly for the defrag capability. Or if you don't want btrfs, to ext3, as ext4 has a higher risk of data loss in case of a power failure. Changing the filesystem is usually best done via a new OS install, then copy the data.
 
Do you have a backup of everything on the 2TB? If you don't, then you have no protection in the event of a meltdown. The OMV device is just a Debian-based NAS, so I'd research that. Personally I use ownCloud and FreeNAS for my mass storage.

I'm not so sure I'd use dd for copying, though; more likely cp. The data is going through a SATA-to-USB bridge and then a USB-to-SATA bridge. My preference would be to use whatever they have within the OMV GUI. You will need to hook up the new drive and then initialize it for it to be seen. I agree with @backwoodsman that you'd be better off using btrfs; it's a better file system for storage compared to ext4. You should be able to have two drives, each with a different file format. You'll just have to make sure you have a mount point for the new one.
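A file-level copy, as opposed to a block-level dd, looks something like the sketch below once both drives are mounted. The mount-point names are hypothetical; the runnable part uses temp directories so it executes safely.

```shell
# Demonstrated on temp directories; on the NAS the paths would be the real
# mount points, e.g. /srv/dev-disk-by-label-data (hypothetical OMV-style name).
OLD=$(mktemp -d)
NEW=$(mktemp -d)
mkdir -p "$OLD/movies"
echo "film" > "$OLD/movies/a.mkv"
cp -a "$OLD/." "$NEW/"          # -a preserves permissions, timestamps, symlinks
diff -r "$OLD" "$NEW" && echo "copy verified"

# rsync -aH --progress "$OLD/" "$NEW/" is a good alternative to cp here: it
# shows progress and can be re-run to resume if the USB link drops mid-copy.
```

For 1.75TB over two USB bridges, the resume-ability of rsync is worth having.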
 
It's actually set up with two 2TB drives; both will be upgraded with new 4TB drives, and the second drive is a backup of the first. I'm honestly not terribly familiar with drive formats, so I'll look into what's supported before selecting a new format. OMV has a scheduled job to copy the primary drive to the secondary. I have considered taking this opportunity to switch to a software RAID1, but I'm not sure how that would impact performance, and a daily backup job seems sufficient so far. I'm thinking now I might pull drive 1, set up the new drive as drive 1, and copy over from the backup on drive 2 to drive 1; then replace drive 2, configure it as before, and let the scheduled backup populate the new backup drive.
 
Ok. What are the clients running? So I'd check to make sure the drives are reporting the same number of files. The advantage with RAID is the updates are instantaneous. Using scheduled jobs doesn't do that. To be on the safe side you can unplug the existing drives, plug in the new ones, power up the Pi to init the drives as RAID 1. Then shut it down and replace one of the 4TB with one of the existing drives and manually copy it over. When complete shut it down, pull the old drive and put back the new drive. Power up and it should sync. But both of the operations will take days if the disk is full because you're going from USB to USB.
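That "same number of files" sanity check can be done with find; a minimal sketch on temp directories (the real mount points of the primary and backup drives would replace them):

```shell
A=$(mktemp -d)   # stands in for the primary drive's mount point
B=$(mktemp -d)   # stands in for the backup drive's mount point
touch "$A/one" "$A/two"
cp -a "$A/." "$B/"
COUNT_A=$(find "$A" -type f | wc -l)   # count regular files recursively
COUNT_B=$(find "$B" -type f | wc -l)
[ "$COUNT_A" -eq "$COUNT_B" ] && echo "file counts match: $COUNT_A"
```

Matching counts don't prove the contents match, but a mismatch is a quick red flag before pulling the old drives.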
 
A Samba share and Plex. It's for a friend; he set it up initially but doesn't know what to do for the upgrade, and before I dive into moving 1.75TB of movies, music, and photos I wanted a good plan.

OMV is running the share and has Plex running via Docker. I think he switched because the last time he ran a software RAID, on a desktop, the entire RAID got corrupted since the corruption was replicated across it, so he's wary of basic RAID setups now. He's one of those friends who is techy but not a tech: he'll find a guide for the first setup, but changes and growth are out of his reach without a whole new setup.
 
That's a valid concern with software RAID systems, which includes those machines using "fake" RAID cards. And you should mention to them as well that RAID is not a backup system. It's a usage continuity tool meant to minimize downtime.
 
RAID is only as good as the system and software it runs on. One reason why FakeRAID cards give RAID a bad rap, because they are buggy and poorly implemented.

Most of the *nix variants of RAID are fairly robust from what I've seen. But it does require a good amount of learning; you have to know how things work. That's why in most cases for end users, a (insert brand name here) RAID or "Backup Drive" isn't really a smart choice: it's basically dumbing down and limiting a client's ability to understand what is really happening to their data, a "Trust me!" with Robotnik as the poster boy.
 
RAID is not a backup system. It's a usage continuity tool meant to minimize downtime.
Since he wants it as a backup, I think keeping the copy job rather than going with RAID stays more in line with his intended use, so I'll forgo the RAID. It's good to hear further good reports of Linux-based RAID, though.

When you guys say fake RAID cards, what specifically do you mean? I see little need for RAID setups outside of server systems, where support is there from the get-go, so picking up or adding RAID hasn't really been something I've looked at on the lower end. I want to make sure that if I ever do look into it, I'm avoiding any problems.
 
In that case, yes, a copy job would be more in line with a backup. But a backup should also include a couple of other things: data verification and offsite storage.
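Data verification can be as simple as generating checksums on the source and checking them against the backup. A sketch on temp directories (all paths are placeholders for the real drives):

```shell
SRC=$(mktemp -d)    # stand-in for the primary drive
DST=$(mktemp -d)    # stand-in for the backup drive
SUMS=$(mktemp)
echo "photo" > "$SRC/img.jpg"
cp -a "$SRC/." "$DST/"                                   # stand-in for the backup job
# Record a checksum for every file on the source...
(cd "$SRC" && find . -type f -exec sha256sum {} +) > "$SUMS"
# ...then verify the backup copy against that list.
(cd "$DST" && sha256sum --quiet -c "$SUMS") && echo "backup verified"
```

sha256sum -c exits non-zero and names any file that doesn't match, so this can run as a scheduled job after the nightly copy.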

What I'm calling fake RAID cards are cards that advertise RAID functionality but actually use the underlying OS and CPU to do the heavy lifting. In my experience the fake ones are typically $200 or less, including the built-in Intel ones you can find on Dells. You can do software RAID in Linux using mdadm (macOS has its own built-in software RAID through Disk Utility). The real cards start at $400-500 and should have a battery as well.
 
I agree it isn't a complete backup solution, and I'll talk to him about that and about options to expand it, but I do think something local like this is a good part of a backup.
 
You can do software RAID in Linux using mdadm. The real cards start at $400-500 and should have a battery as well.
And Windows. Surprisingly easy inside of Windows.


And yes, what he said; Battery so that it can retain writes during power outages/shutdowns/etc.
 
Good point. I avoid software RAID, so I tend to forget about Storage Spaces, which might provide some utility for a small group of users.
 
I think I'd be more apt to use OS-specific RAID than some weird hardware card that is still driven by the CPU. ZFS is AMAZING. Unless you need the benefits of a real hardware card, software is fine; one of the biggest benefits is offloading the workload from the CPU to a dedicated card/processor, plus battery-backed RAM, extra (faster) ports, a faster interface, and faster interaction. Lots of benefits to SERIOUS hardware cards, but some can get by with just software. I'd never use one of those quasi-cards.

Edit: I used to be sneaky. Wholesaler of mine was selling refurb 1U and 2U servers with multiple drive bays and GOOD RAID cards. Bought them for $50 each, and more than that in shipping, and sold the dual RAID cards in both on ebay for $250-$300 per card... I was sad when I ran out. :D
 