bilbo6209

Members
  • Posts: 117
  • Gender: Undisclosed


bilbo6209's Achievements

Apprentice (3/14) · 2 Reputation

  1. Itimpi, so I set a minimum free space on the cache and all shares, hit the mover button again, and files are moving. It doesn't seem like this setting should have caused the problem, but it did! I set Use cache back to Yes too.
  2. I thought Prefer meant prefer to use the cache but don't stop the transfer if the cache fills up; the files would go to the array directly instead. I started to adjust the minimum free space on the shares... but if every share includes all drives, won't each share always see the same free space on those drives? And will this correct the current issue? (There's a rough sketch of the placement logic after this list.)
  3. I built my daughters' Unraid servers (they live out of state, which gives me off-site redundancy and lets them access some shared files). When I built the servers I threw in a handful of smaller drives, thinking I wouldn't mirror that much data... Well, their data drives filled up and their cache drives filled up.
     I added more data drives and tried kicking off the mover to dump the cache data onto the new drives, but received this error in the active log: "step 3 {time} server name file path Dest path (28) no space left on device". I then used unBALANCE to move some files off the full drives to the new drives and tried again: same error. I changed the cache setting for each share to Prefer (I want it that way in the future so transfers don't hang if the cache fills up): same error. I read on a different post to change the split level; it was set to automatically split, and I changed it to each of the other options and received the same error every time.
     Does anyone have any other suggestions? The servers are going back out of state tomorrow morning, gaining physical access to them will be out of the question, and web interface access is a pain but doable.
  4. Of course this is the one disk I forced a reformat on to get the full 8TB vs the 7.8TB available due to the 4K formatting 🙄 OK, I'll drop it in a different system with a trial key. What a pain 😞 Thank you JorgeB!
  5. I had some issues with drives and upgraded all my spinning disks... I changed from 2 very low-hour 12TB parity drives and 6 used 8TB data drives to new 14TB parity drives, 2 new 14TB data drives, and the 2 12TB parity drives becoming data drives. I swapped a new 14TB drive in place of each 12TB parity drive (one at a time, letting each rebuild before replacing the next). I then swapped a new 14TB (or one of the 2 former 12TB parity drives) in one at a time and let it rebuild, for 4 of the 6 existing data drives. I pulled the 2 remaining 8TB data drives and did a new config to remove them from the array.
     Using the Unassigned Devices plugin I was able to mount one of the 2 8TB drives and copy its data to the array. I have attempted to mount the other 8TB drive, and each time it attempts to mount and fails... Hitting the log button in the GUI, it says "mount of sdl1 failed. Mount /mnt/disks/8tb (mount 2) system call failed: function not implemented." If I look in Krusader the mount doesn't show... I have rebooted to ensure it wasn't something hung, etc. bob-diagnostics-20230719-2217.zip
  6. I had some issues with drives and upgraded all my spinning disks... I changed from 2 very low-hour 12TB parity drives and 6 used 8TB data drives to new 14TB parity drives, 2 new 14TB data drives, and the 2 12TB parity drives becoming data drives. I swapped a new 14TB drive in place of each 12TB parity drive (one at a time, letting each rebuild before replacing the next). I then swapped a new 14TB (or one of the 2 former 12TB parity drives) in one at a time and let it rebuild, for 4 of the 6 existing data drives. I pulled the 2 remaining 8TB data drives and did a new config to remove them from the array.
     Using the Unassigned Devices plugin I was able to mount one of the 2 8TB drives and copy its data to the array. I have attempted to mount the other 8TB drive, and each time it attempts to mount and fails... Hitting the log button in the GUI, it says "mount of sdl1 failed. Mount /mnt/disks/8tb (mount 2) system call failed: function not implemented." If I look in Krusader the mount doesn't show... I have rebooted the server today to ensure it wasn't a pending reboot, etc. (There's a rough mount-diagnosis sketch after this list.) I have attached logs, so hopefully someone can point me to whether this is an OS issue that belongs here, or a plugin issue that should be posted on their forum. bob-diagnostics-20230719-2217.zip
  7. That is my fear too... All internal slots are full 😞 BUT I am condensing a couple of drives down onto a bigger drive. All disks that have logged errors in Unraid have been in that disk slot, so I'm just going to mark it as bad so I don't use it. Sucks to have a nice hot-swap chassis with a bad slot, but as long as it doesn't spread to other slots I can live with it. Thanks for your help, JorgeB!
  8. Well, I'm back. The initial disk has been solid: a full SMART test (2 corrected delayed read errors that have been there for a while), and I copied 2 or 3TB of data to the drive with no issues. BUT the replacement drive has no SMART errors, yet Unraid is reporting almost 22,000 errors 😞 My guess is the server's backplane has a flaky port, etc... but I have attached the diags for anyone who wants to look and make suggestions! bob-diagnostics-20230702-1619.zip
  9. OK, the disk issues appear to be sorted... hopefully. I just started getting one of the symptoms of the "Warning: file_put_contents():" error: messages saying the server can't access GitHub, and I can't update Dockers. The difference is I updated to 6.12.1 within the last week or so... maybe that is masking the original error? I cannot start Dockers (I tried to start Krusader to do some file maintenance); I receive an "execution error, server error" message. I have attached diags taken after attempting to start several Dockers. After a reboot everything is working correctly: I can start Dockers and access Community Apps. bob-diagnostics-20230626-2232.zip
  10. The diags are from after the disk was disabled, so there are no diags from when the disk was throwing the errors in Unraid. I had to reboot the server due to another issue I have another thread open for (an error showing Unraid is unable to write to /usr/local/, which basically stops Unraid from accessing GitHub, updating plugins/Dockers, etc., and causes an unclean shutdown and parity check when using the reboot or shutdown option in the GUI). I can try doing a preclear on the disabled drive and see what it does. I just rebuilt the data on a different drive and would really rather not put the disabled drive back in and rebuild again. I don't know if I will see the errors while the disk is unassigned.
  11. If these are "nothing to worry about", why did disk 5, with only 2 corrected delayed read errors, throw over 1.5 million errors in Unraid while rebuilding data (drive replaced), and why did Unraid disable the drive? Sorry, not trying to be confrontational, just trying to understand why a drive with so few errors, of a kind that is usually "nothing to worry about", would be toasted by Unraid... While I'm not happy about the drives having errors, I'm more worried about something else happening on the server that I was hoping would have shown in the logs.
  12. The disk above would have been disk 5 in the diagnostics. Here is disk 2, with more errors in its log BUT 0 errors showing on the main screen in Unraid, and no Unraid errors or warnings.
  13. In the SMART scan logs they are all corrected. I attached a pic of the logs for the drive Unraid disabled; it showed somewhere around 1.5 million errors on the main screen while rebuilding the drive, even though SMART only shows 2 errors. (There's a sketch for pulling these SMART counters after this list.) Bill
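
For the mover / minimum-free-space behaviour discussed in items 1-3, here is a minimal Python sketch of the general decision rule, assuming the simplified model that a target (cache pool or array disk) is only used if writing the file would leave at least the share's configured minimum free space. This is not Unraid's actual code, and the paths and sizes are made-up examples.

```python
import shutil

def pick_target(candidates, file_size, min_free_bytes):
    """Return the first mount point that can take the file without
    dropping below the configured minimum free space."""
    for mount in candidates:
        free = shutil.disk_usage(mount).free  # bytes currently free
        if free - file_size >= min_free_bytes:
            return mount
    # Nothing qualifies: same errno (28, ENOSPC) that the mover logged.
    raise OSError(28, "No space left on device")

# Hypothetical usage: cache pool first, then array disks.
# target = pick_target(["/mnt/cache", "/mnt/disk1", "/mnt/disk2"],
#                      file_size=4 * 1024**3,        # a 4 GiB file
#                      min_free_bytes=50 * 1024**3)  # 50 GiB minimum
```

The point of the rule is that a share spanning several disks does not see one big pool of free space: each candidate disk is judged on its own free space against the minimum, which is why a full disk can still trigger errno 28 even though other disks have room.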
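The "function not implemented" mount failure in items 5-6 can have several causes; one possible cause on Linux is that the running kernel (or a needed helper) lacks support for the partition's filesystem. The check below is a generic, hedged sketch, not an Unassigned Devices feature: it asks blkid for the filesystem type and compares it against /proc/filesystems. Run as root; the device name is taken from the log snippet and may differ.

```python
import subprocess

def mount_diagnosis(device):
    """Compare the filesystem type reported by blkid with the list of
    filesystems the running kernel says it supports."""
    fstype = subprocess.run(
        ["blkid", "-o", "value", "-s", "TYPE", device],
        capture_output=True, text=True, check=True,
    ).stdout.strip()

    with open("/proc/filesystems") as fh:
        supported = {line.split()[-1] for line in fh if line.strip()}

    print(f"{device}: filesystem reported as {fstype!r}")
    print("kernel support:", "listed" if fstype in supported else "not listed")

# mount_diagnosis("/dev/sdl1")
```

If the type is listed, the problem lies elsewhere (a damaged superblock, for instance), and the attached diagnostics zip is the better lead.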
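On the SMART-versus-Unraid numbers in items 8-13: the error counter on Unraid's main page and the drive's own SMART attributes count different things, so they can legitimately disagree, and a climbing UDMA CRC count tends to implicate cabling or a backplane port rather than the platters, which fits the bad-slot theory above. Below is a hedged Python sketch for pulling the handful of SMART fields people usually compare; it simply wraps the standard smartctl -H and -A commands, and the attribute names and device path are common examples rather than a definitive list.

```python
import subprocess

# Error-related SMART attributes commonly checked first; exact names
# vary by drive vendor, so treat this list as illustrative.
WATCH = ("Reallocated_Sector_Ct", "Current_Pending_Sector",
         "Offline_Uncorrectable", "UDMA_CRC_Error_Count")

def smart_summary(device):
    """Print overall SMART health plus a few error-related attributes.
    Needs smartmontools installed and root privileges."""
    for flags in (["-H"], ["-A"]):
        out = subprocess.run(["smartctl", *flags, device],
                             capture_output=True, text=True).stdout
        for line in out.splitlines():
            if "overall-health" in line or any(name in line for name in WATCH):
                print(line.strip())

# smart_summary("/dev/sdl")   # hypothetical device name
```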