BorisBee

Members · 7 posts

  1. OK, thanks for the information. I'm currently running a parity check; it's about 50% done, but I noticed the "sync errors corrected" count jumped from about 135 to 521,884. What exactly is the parity check doing? My array otherwise seems fine, apart from a few files missing from the corrupted disk. Luckily I have a list of the files that were on that disk, and afterwards I'll compare it against the disk to find out exactly what's missing (roughly as sketched below). The rest of the files appear to be there and can be browsed.
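     In case it helps anyone else, this is roughly the comparison I plan to run. The list location and mount point are placeholders for my actual paths, so adjust before copying:

        # Old list: one path per line, saved before the crash (placeholder path).
        # Make sure both lists use the same path format (e.g. both relative,
        # with or without a leading ./) before comparing.
        cd /mnt/disk2
        find . -type f | sort > /tmp/disk2-now.txt
        sort /boot/disk2-files.txt > /tmp/disk2-before.txt
        # comm -23 prints lines only in the first (old) list: the missing files.
        comm -23 /tmp/disk2-before.txt /tmp/disk2-now.txt > /tmp/disk2-missing.txt
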
  2. OK, I ended up running xfs_repair -L and the disk will now mount in my array. I do have some files in lost+found, but not too many. My question now: if I run a parity check, will those files be restored? I also noticed that many of the files in lost+found are named with a number rather than the actual filename. Is there a way to find out what the original filename was? (My rough triage is sketched below.) I'm going to prioritize getting this server on a UPS so I don't have any more sudden shutdowns.
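     From what I've read, xfs_repair names orphaned files after their inode numbers, so the original names are generally gone unless the directory entry survived; the file type can at least be identified. A rough triage, assuming disk2 is mounted at /mnt/disk2 (placeholder path):

        # Report the detected type of each recovered file; media, archives,
        # etc. can then be renamed by hand or matched against my old list.
        cd /mnt/disk2/lost+found
        file * | less
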
  3. I ran xfs_repair -n on the drive while in maintenance mode, and it came back with several issues it wants to fix. I may run the actual repair and see if that lets me mount the disk; worst case, I'll just format and rebuild parity if no one has another solution. (The exact commands are below for reference.)
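     For reference, these are the commands in question. The device name is a placeholder: on my install the maintenance-mode device for disk2 is /dev/md2, but newer Unraid releases reportedly use /dev/md2p1, so check yours before running anything:

        xfs_repair -n /dev/md2   # dry run: report problems, change nothing
        xfs_repair /dev/md2      # run the actual repair
        xfs_repair -L /dev/md2   # only if repair refuses due to a dirty log;
                                 # -L zeroes the log and can drop recent metadata
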
  4. I had a sudden power failure, and after powering back up, my Docker containers gave an execution 403 error and the array automatically started a parity check. From some searching, the cause could be a corrupted Docker image, or one that can no longer be found (for example if it was split across several disks), and a clean reboot of the system might solve it. So I paused the parity check and did a clean reboot, but when the server came back up, disk2 was unmounted and now shows "Unmountable: Wrong or no file system". What are the recommended steps here? Is there a way to force-mount the disk and check it against parity, do I need to format the drive and then rebuild parity after adding it back to the array, or is there another solution? (A couple of quick checks I'm thinking of running are below.) *Edit* I'm also realizing now that the server reports I have no Docker containers anymore. I hope this isn't as bad as it sounds to me. I've attached the diagnostic files below. Thanks. nas-diagnostics-20230701-1939.zip
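     A couple of quick checks that might show what state the disk is in; /dev/md2 is a placeholder for the actual device of the unmountable disk:

        blkid /dev/md2       # does the device still show an XFS signature?
        dmesg | tail -n 50   # the mount failure usually explains itself here
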
  5. I'm sorry if this has been asked a hundred times before, but I've tried searching and haven't been able to find an answer. My Unraid server has been running for a few months now, and everything seemed to be working as intended with regard to high-water splitting files evenly across disks. Lately, though, I've noticed that when I write data to the array, it wants to put everything onto disk 1. I can force it to write to the other disks if I exclude disks from the share, but obviously that's not ideal, and I'm curious why this is happening, or whether I'm misunderstanding something. I've attached images of my share settings and of my disk usage. As you can see, it was clearly working at some point, since the 16TB drives were filled up to the halfway point. Should I just ignore this? It seems odd that it isn't writing data to the rest of the disks in the array and is defaulting to disk 1. (The per-disk free space I'm comparing is below.)
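     As I understand it, high-water picks a disk based on its free space relative to the current water mark, so the per-disk numbers are what matter. On a stock Unraid layout the data disks are mounted at /mnt/disk1, /mnt/disk2, and so on:

        df -h /mnt/disk*   # free space per data disk on a standard install
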
  6. Are the backend fields editable when using the webGUI? I'd like to change the number of transfers from 4 to 1, but while it looks like I can type a number into the field, the value isn't saved. Is there somewhere else I'm supposed to change this? It's working great for me otherwise. Thanks. *Edit* I figured it out: you need to add the --transfers flag to the RCLONE USER FLAG section (example below).
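     For anyone finding this later, the entry in the RCLONE USER FLAG section is just the flag and its value; the command-line equivalent would look like this (the remote name and paths are examples, not mine):

        --transfers 1

        # Command-line equivalent with placeholder remote and paths:
        rclone copy /mnt/user/backup remote:backup --transfers 1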