Marauder

Everything posted by Marauder

  1. With that process, should I do each drive one at a time, or can I repair both at the same time?
  2. overview of all drives. How do I tell it to rebuild those drives? Will the system see the data on them and just do a parity check, or does it wipe them and restart? Can it do both drives at the same time?
  3. Thanks, here are the diagnostics. On my last reboot I moved the ZR118ZZN serial device to another SATA port, so if you see that in the diag, that's why. Either way, they all get read. unraid-diagnostics-20220516-1412.zip
  4. I added another drive to my machine. When I went to the UI to stop the array and add the device, I saw that two of my devices are now showing as offline with contents being emulated. I restarted the server; no change. I checked all the cables and I don't see an issue with any of them. I started the array in maintenance mode and ran a quick smartctl scan and a filesystem check on both drives. They both passed and came back clean. But Unraid is still showing the drives as missing.
  5. Good to know. The pages happened to have the correct steps, but you never know if that'll change, so it's good to have it here for anyone who comes looking in the future.
  6. Resolved: PHEW. Unmounted the array, remounted it in maintenance mode, then ran a filesystem check on each drive, then restarted the server. I first tried just remounting the array outside of maintenance mode, and that didn't work. Before doing the filesystem check, I tried unmounting and remounting the array and rebooting; neither worked. I also tried remounting after the filesystem check, and that didn't work either; I had to do a full reboot. Pages used: https://wiki.unraid.net/Check_Disk_Filesystems#Checking_and_fixing_drives_in_the_webGui https://wiki.unraid.net/Check_Disk_Filesystems
  7. This morning I updated some of my plugins, as I saw updates were available. I updated Nvtop, Unassigned Devices, and unBALANCE. After that, my docker containers crashed, and when I went to start them I'd get an error. Trying to access my share, I get this error; I tried unmounting and rebooting. root@unraid:/mnt# ls /bin/ls: cannot access 'user': Transport endpoint is not connected cache/ disk1/ disk2/ disk3/ disk4/ disk5/ disk6/ disk7/ disk8/ disk9/ disks/ download/ remotes/ rootshare/ user/ user0/ root@unraid:/mnt# cd user -bash: cd: user: Transport endpoint is not connected root@unraid:/mnt# Panicking a bit here, thinking I've broken my system; any help would be appreciated.
  8. Thanks again. I had Minimum free space set to 0KB, and the drive still had 200KB of space left; I assume that's why it didn't trigger. So I corrected the minimum free space to 100MB.
  9. Thanks, I'll switch it now, and see what happens. I saw the drive reporting as 1.5tb when I had it as raid0, so I thought it was right.
  10. I noticed another issue: I have the cache set up as Prefer. The drive is now full, and my current downloads have all failed. They're not being transferred over to the array like they're supposed to be.
  11. I set up a cache pool using a 500GB and a 1TB SSD. I then set the data portion of the cache to RAID0. On the main screen it shows a 1.5TB size, but under used space, when I hit 1TB the free space is 0. How can I correct this to get the proper 1.5TB of space?
  12. Under balance I converted it to RAID0. RAID0 has no parity and basically just combines the disks together into one large disk.
  13. Figured out how to change it: it was under the cache settings and balance. I set it to RAID0.
  14. So I changed out my motherboard (same CPU) from an MSI Z77A-G43 to an ASRock Fatal1ty Z77. They both use the same chipset. When booting on the new ASRock board, Unraid gets through the majority of the boot process and then kernel panics. I swapped the motherboards back and the machine boots right up. Any suggestions?
  15. So I'm setting up multiple cache pools. The first is a single disk; the second pool is a 1TB and a 500GB SSD. When I formatted and initialized that pool, it gave me a size of 750GB with 500GB free. I was expecting it to be 1.5TB. How do I change it so that it's 1.5TB? Also, when setting up my docker mounts, do I point to the share location or the cache location? For example, if I have a share called downloads, do I point my docker mount to /mnt/user/downloads or /mnt/download, assuming the cache is called download?
  16. Just a brain fart. I thought I'd checked and that it didn't have it.
  17. I just went with a VM approach. It's not ideal, but I did a Debian VM with a max of 2 CPUs and 1GB of RAM.
  18. I know this is a dead thread, but I'm wondering if anything has changed since. Looking for a good option for downloading F1 races automatically.
  19. I was looking to have my drives spin down when not in use. I'm currently running Emby Media Server and was wondering if that would cause any issues with my library, as I have the system set to monitor the folders for changes. Does the share still display all the files from a cache/index and then spin up the appropriate drive when needed, so that Emby wouldn't see that the drive is spun down and wouldn't think the files have changed?
  20. Thank you, that fixed the issue. Did you have a similar problem, or did you see something in my logs that pointed to it?
  21. I am having the same issue: rebooted my server, and after it came up, no exportable shares. This is my thread, with the diagnostics zip posted.
  22. I rebooted my system to replace a failed drive. After booting up, I am getting "There are no exportable user shares". The array is started, and the only disk showing an alert is the one I replaced. I am running dual parity; all other drives are showing fine. I then shut the system down and put the old drive back in, and got the same error. The system is doing an automatic parity-sync/data-rebuild. As I have dual parity, I'm wondering why all my shares are gone? How do I get them back? Under shares, it shows disk shares and has all of my drives, showing green. I've attached the diagnostics from my server. unraid-diagnostics-20181010-1950.zip
  23. I hope that's not the case, as it's a brand-new SAS breakout cable. It might just be a loose connection, as this happened after I moved the server.
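
A few of the posts above mention running a quick smartctl scan when drives drop offline. For anyone searching later, a minimal console pass looks something like this (the device name is a placeholder; substitute the identifier shown on the Main page):

```shell
# Placeholder device: replace /dev/sdX with your actual drive.
smartctl -H /dev/sdX           # overall SMART health assessment (PASSED/FAILED)
smartctl -t short /dev/sdX     # start a short self-test (usually ~2 minutes)
smartctl -l selftest /dev/sdX  # read the self-test log once it finishes
```

A passing SMART check doesn't rule out cabling or controller problems, which is why reseating cables is still worth doing.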
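
The filesystem-check fix described above (array in maintenance mode, check each drive, reboot) can also be done from the console. This is only a sketch: it assumes XFS-formatted array disks and the array started in maintenance mode, and it runs against the md device so parity stays in sync; device naming can vary between Unraid versions.

```shell
# Sketch, assuming XFS array disks and the array in maintenance mode.
# Disk 1 shown; repeat for each affected drive.
xfs_repair -n /dev/md1   # -n = dry run: report problems, change nothing
xfs_repair /dev/md1      # actual repair once the dry-run output looks sane
```

The webGUI route linked in the resolved post does the same thing per disk from each drive's Check Filesystem section.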
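
For the cache-pool RAID0 conversion mentioned above, the webGUI balance option corresponds to a btrfs balance with convert filters. A command-line sketch, assuming the pool is mounted at /mnt/cache (and keeping in mind that raid0 data means one failed device loses the whole pool):

```shell
# Sketch: convert the pool's data profile to raid0 (metadata kept as raid1).
# /mnt/cache is an assumed mount point for the pool.
btrfs balance start -dconvert=raid0 -mconvert=raid1 /mnt/cache

# Confirm the new profiles and that the full combined capacity is usable:
btrfs filesystem usage /mnt/cache
```

With raid0 data across a 500GB and a 1TB device, roughly the full 1.5TB becomes allocatable, which matches the size the posts above were expecting.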