natethebrewer

Members
  • Posts

    2
  • Joined

  • Last visited

  1. Didn't think about the parity drive... So it does sound like disabling parity, writing to all the disks, then re-enabling parity and letting it rebuild is what I should do in the future. Since I already did a parity check when I spun up the array and have about half the data moved (doing reconstructed writes), I guess I will just let it run at this point, because even if I cut the transfer time in half, running a parity check after that would take longer than the remaining file copy (a rough back-of-envelope for this tradeoff is sketched after these posts). I should have asked sooner, but I can use that strategy with the next server. Thank you for the info!
  2. TLDR: Is it safe to write data (via FTP) directly to /mnt/disk1/share, /mnt/disk2/share, etc. while initially populating the array with a large amount of data, or will that break file permissions or the high-water strategy when files are later written normally to /mnt/user/share?

     Full: I've currently got two Unraid servers containing the same data, and after recently phasing out some old hardware, I spun up a third. I had 8 relatively new 2 TB drives, and the controller card in the server could only handle drives of that size, so I figured it was perfect as a tertiary backup for my data. I have been transferring data to the array in /mnt/user/datastore, and with the high-water method it is doing the usual fill of each drive to 1 TB before moving on to the next, but these are SMR drives, so they are slow to move 9 TB of data in total. I would normally just wait, but this server is going to be moved offsite and is currently eating up overhead on the UPS for all of my other equipment. The server the data is being copied from is a fast ZFS array over a 10 Gb link, so the bottleneck is the speed of a single drive in the new Unraid server.

     My question is: are there any downsides to using my FTP client to write a 500 GB set of files to the appropriate share on one disk, then setting up a transfer of another 500 GB set of files to the same share on another disk, and so on, so that I can saturate the connection and get the transfer done quicker (see the transfer sketch after this post)? Basically, manually distributing the files, but doing so simultaneously using multiple FTP sessions. Then, when writing files to the server now and then under normal use, will the high-water method level the data distribution out across the disks? Or will this cause issues with file permissions or something else I am not seeing?
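Below is a minimal sketch of the multi-session idea from the question above. The hostname, credentials, and directory names are invented placeholders, and it assumes plain FTP access to the Unraid box with the per-disk paths (/mnt/disk1/datastore, /mnt/disk2/datastore, ...) reachable from the FTP root; it illustrates one session per disk, nothing more.

```python
# One FTP session per destination disk, so every drive receives its own
# sequential stream. Hostname, credentials, and paths are placeholders.
import os
from concurrent.futures import ThreadPoolExecutor
from ftplib import FTP

HOST = "unraid.local"           # placeholder hostname
USER, PASS = "user", "secret"   # placeholder credentials

# (local source dir, destination path on a specific disk), one pair per disk.
BATCHES = [
    ("/tank/datastore/set1", "/mnt/disk1/datastore"),
    ("/tank/datastore/set2", "/mnt/disk2/datastore"),
    # ... one entry per remaining disk ...
]

def upload_batch(src_dir, dest_dir):
    """Upload one directory tree over its own FTP connection."""
    ftp = FTP(HOST)
    ftp.login(USER, PASS)
    for root, _dirs, files in os.walk(src_dir):
        rel = os.path.relpath(root, src_dir)
        remote = dest_dir if rel == "." else f"{dest_dir}/{rel}"
        try:
            ftp.mkd(remote)     # fails harmlessly if the dir already exists
        except Exception:
            pass
        for name in files:
            with open(os.path.join(root, name), "rb") as fh:
                ftp.storbinary(f"STOR {remote}/{name}", fh)
    ftp.quit()

# One worker thread per batch saturates the link instead of a single drive.
with ThreadPoolExecutor(max_workers=len(BATCHES)) as pool:
    for src, dest in BATCHES:
        pool.submit(upload_batch, src, dest)
```

On the permissions worry: files uploaded this way end up owned by whatever account the FTP session authenticates as, while Unraid expects nobody:users on user-share data, so a one-time ownership fix after the bulk load (for example Unraid's New Permissions tool, or a manual chown to nobody:users) is a sensible follow-up.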
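And a rough back-of-envelope for the "just let it run" call in the first post: the question is whether the hours saved by parallelizing the rest of the copy outweigh the parity build that must follow. Every figure below is an invented placeholder, not a number from the posts.

```python
# Compare "finish the copy as-is" with "disable parity, copy in parallel,
# then rebuild parity". All figures are invented placeholders.
TB = 1e12  # bytes

remaining_bytes = 4.5 * TB   # assumed data still left to copy
drive_rate      = 100e6      # bytes/s, one sequential stream to one disk
parity_bytes    = 2.0 * TB   # capacity of the parity drive
rebuild_rate    = 60e6       # bytes/s, assumed slow parity build on SMR disks

def hours(nbytes, rate):
    return nbytes / rate / 3600

as_is    = hours(remaining_bytes, drive_rate)             # keep copying as-is
parallel = as_is / 2 + hours(parity_bytes, rebuild_rate)  # halved copy + rebuild

print(f"finish as-is:                   {as_is:5.1f} h")    # ~12.5 h
print(f"parallel copy + parity rebuild: {parallel:5.1f} h") # ~15.5 h
```

Under these particular assumptions, letting the transfer run holds up; with a faster parity build the balance could tip the other way.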