JonathanM

Everything posted by JonathanM

  1. Right click on one of the drives, select Change Drive Letter and Paths... What path does it show to access it?
  2. Most likely you will have to view the screen on the server at some point to troubleshoot, so I'd work towards that regardless. You should be able to connect the USB stick to your desktop computer, look in the previous folder, and copy everything out of that folder back into the root of the flash drive and get back running again.
  3. The flash drive is mounted at /boot, array drives are mounted at /mnt/diskXX, and pool drives are mounted at /mnt/poolname. Since the flash is FAT32, files stored there can't carry the full set of linux attributes, so if you store your files there you need to copy them elsewhere and apply the correct perms with your script.
  4. That path exists only in RAM, and is recreated every boot. If you make changes there you will need to script the change to occur on every reboot.
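     A minimal sketch of scripting such a change on every boot, assuming a stock Unraid flash layout where /boot/config/go runs once at startup. The tool name mytool and the /boot/extra folder are hypothetical examples; the FLASH and DEST variables default to the real paths but are overridable so the sketch can be exercised anywhere.

```shell
#!/bin/bash
# Sketch of a boot-time snippet for /boot/config/go, which Unraid runs
# once at every boot. Anything placed under the RAM-backed root (such as
# /usr/local/bin) vanishes on reboot, so it must be recreated here.
# "mytool" and /boot/extra are hypothetical example names.
FLASH="${FLASH:-/boot}"
DEST="${DEST:-/usr/local/bin}"

# Copy the tool off the FAT32 flash into the RAM-backed root...
cp "$FLASH/extra/mytool" "$DEST/mytool"
# ...and restore the executable bit, which FAT32 cannot store.
chmod 755 "$DEST/mytool"
```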
  5. I think what you are asking for is pool-to-pool mover support in addition to pool-to-array and array-to-pool. Personally I've wanted a pool-to-pool mover ever since multiple pool support was implemented. Perhaps after the addition of native ZFS pool support we might get our wish.
  6. Currently pools can be set one of four ways for each share:
     • pool:YES - new files targeted to the share are written to the pool; mover moves them to the parity array.
     • pool:ONLY - new files targeted to the share are written to the pool; mover leaves them there.
     • pool:NO - new files targeted to the share are written to the array; mover leaves them there.
     • pool:PREFER - new files targeted to the share are written to the pool, overflowing to the array; mover moves any array content back to the pool if there is space.
     If the share is designated pool:ONLY then new files go to the pool and stay there, just like you are asking. I'm unsure what you are saying about not wanting it the way it works currently.
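     The four modes above can be summarized in a tiny sketch; this is purely illustrative (the function name target_for is mine, not anything in Unraid) and just prints where a new write lands and what mover does for each setting.

```shell
# Illustrative sketch only: map each per-share pool mode to where new
# files are written and what the mover does with them afterwards.
target_for() {
  case "$1" in
    YES)    echo "write: pool; mover: pool -> array" ;;
    ONLY)   echo "write: pool; mover: leaves files on pool" ;;
    NO)     echo "write: array; mover: leaves files on array" ;;
    PREFER) echo "write: pool, overflow to array; mover: array -> pool" ;;
    *)      echo "unknown mode"; return 1 ;;
  esac
}

target_for ONLY
```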
  7. I think some motherboard and BIOS combinations can make it work, however it's not a typical use case, so finding documentation may be difficult.
  8. Assuming all the drives involved are perfectly healthy, just replace one at a time starting with parity and Unraid will rebuild your data to the new drives.
  9. Why does xteve have 11GB of temp files?
  10. Probably, but why throw away the progress you made? The command line you posted is specifically tailored to resuming a transfer. Just run the same command and it should pick up where it left off. After that rsync finishes, you can check the transfer by issuing rsync with the same source and destination, but substitute -narcv for the -avPX part. It will compare the source and destination, and list any differences. If it returns to the command line without listing any files, it means the source and destination are identical.
  11. Can you SSH into it from another PC? If so, type powerdown -r and see what happens.
  12. Be sure you set up and verify that you get notifications so you have immediate feedback of any issues. Since Unraid works with the entire surface of all the drives, it's a good idea to do an extended smart test on all your drives before you trust them. The parity build and check will also help you gain confidence in the drives.
  13. If you are using my preferred command line for rsync (one line for the initial copy, a second run through to verify) then you could do a file system check on the destination drive to be sure it's clean, then issue the same copy command again and it will pick up where it left off, and the rsync verify command will catch any inconsistencies. You didn't say what exact commands you were using, so it's tough to advise on the best options.
  14. Don't use disks in Unraid that are ready to fail. The way Unraid parity works, it uses the entire capacity of ALL remaining drives to reconstruct a failed drive, so if a second drive dies you will lose all the data on both drives. The moment a drive fails, ALL the rest of the data drives plus the parity drive are read to emulate the failed drive, and writes to the missing drive's slot update the parity drive. As an example, say you have replaced all your older disks except one, and it doesn't have much content, so you aren't worried about replacing it. Now, out of the blue without warning, one of your new disks, full of data, dies on you. Now that old drive that was on its last legs is called into constant use to emulate the failed drive, and it must survive reading its entire capacity to rebuild the failed new drive. You hope it survives long enough to get through the rebuild process, but chances are it's not going to. You must always be able to trust all your drives to perform perfectly, so that when one of them inevitably fails the rest are in good shape. Any drive that shows signs of failure must be replaced ASAP.
  15. https://forums.unraid.net/topic/43651-plug-in-unbalance/?do=findComment&comment=1098819
  16. Rootfs has nothing to do with the flash drive space-wise. The flash is mounted at /boot; the command given was to check space at /, which is the root. Problems sometimes occur when there are typos, like capitalization errors.
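     A quick way to see that these are two separate filesystems (the second line is guarded only so the sketch also runs on systems without a /boot mount):

```shell
# rootfs lives in RAM and is checked at /; the flash is a separate
# filesystem mounted at /boot. Paths are case-sensitive, so a typo like
# /Boot names a different (normally nonexistent) path, not the flash.
df -h /
df -h /boot 2>/dev/null || echo "no /boot mounted on this system"
```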
  17. Attach your diagnostics zip file to your next post in this thread.
  18. They won't. https://forums.unraid.net/topic/35866-unraid-6-nerdpack-cli-tools-iftop-iotop-screen-kbd-etc/ Read the recommended post and proceed from there.
  19. Running Unraid as a VM is not supported. It's not forbidden, and there is a section on the forum for people running it as a VM so they can compare notes and help each other, but if you have an issue that you need support on, you will need to boot bare metal and reproduce the issue so as to prove it's not caused by some setting in your hypervisor.
  20. I'm sure you know this, but your quote sums up why nerdpack is gone. Nobody stepped up to keep the huge number of necessary ducks in a row.