Fiala06

Members
  • Posts: 121
Everything posted by Fiala06

  1. I've noticed lately I've been getting a bit of lag, and 9 times out of 10 a reboot fixes it. The GUI will time out or take over 90 seconds to load. I have no idea what's causing it. Even my docker images are becoming super slow. Yesterday I replaced one of my parity drives, which is currently rebuilding, and everything is running even slower. Hoping there is something I'm missing, as these 10TB parity drives take forever lol. Thanks. Attached is my diag. unraid-diagnostics-20190813-0333.zip
  2. Trying to install this, and I'm getting this error during the install:
  3. Since adding this app yesterday my docker img (21GB) is now 95% full and growing (was 82%). I'm not seeing any variables to change for where the data is stored. Am I missing something?
  4. I did just find this:
     Mar 22 11:44:36 UNRAID root: Active pids left on /dev/md*
     Mar 22 11:44:36 UNRAID root: Generating diagnostics...
     Mar 22 11:44:38 UNRAID emhttpd: Unmounting disks...
     Mar 22 11:44:38 UNRAID emhttpd: shcmd (694): umount /mnt/cache
     Mar 22 11:44:38 UNRAID root: umount: /mnt/cache: target is busy.
     Mar 22 11:44:38 UNRAID emhttpd: shcmd (694): exit status: 32
     Mar 22 11:44:38 UNRAID emhttpd: Retry unmounting disk share(s)...
     Mar 22 11:44:43 UNRAID emhttpd: Unmounting disks...
     Mar 22 11:44:43 UNRAID emhttpd: shcmd (695): umount /mnt/cache
     Mar 22 11:44:43 UNRAID root: umount: /mnt/cache: target is busy.
     Mar 22 11:44:43 UNRAID emhttpd: shcmd (695): exit status: 32
     Mar 22 11:44:43 UNRAID emhttpd: Retry unmounting disk share(s)...
     So something with my cache. I'm running mover now just to see if that helps.
     Edit: After some fiddling I ran "New Permissions". Took about 45 mins, but once complete the array stopped! Not sure if it was a fluke yet or not, but so far so good. Oh, and disabling my built-in 10G Ethernet solved the original issue.
  5. Ok, I'll have to wait until this evening to check out the flash drive; I'm at work. It's been on "array stopping" for over 20 mins at this point.
  6. Is there a command or something I can run to see what is being used? None of my computers and devices are on except my unraid box. No SSH sessions, movies, etc.
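A couple of stock Linux commands can answer the question above, i.e. what is still holding the array open when it refuses to stop. A minimal sketch, assuming /mnt/cache is the share from the "target is busy" messages:

```shell
# List the processes with files open on the stuck filesystem.
# -m reports every PID using that mount; -v adds user/command detail.
fuser -vm /mnt/cache

# lsof gives per-file detail (which exact files are held open).
# +D recurses the whole directory tree, so it can be slow on large shares.
lsof +D /mnt/cache
```

Once the offending PID is known, stopping that service (or killing the process as a last resort) usually lets the unmount proceed.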
  7. Well, that seems to have helped, but I ran into a different issue now. It's just stuck on "Array stopping Retry unmounting disk shares". I even stopped Docker and VMs in settings before trying to stop the array. Tried an unclean shutdown and stopping the array again, same thing.
  8. I've been unable to figure this one out. I get this almost every time I stop my array to do some maintenance. The machine locks up. Does this mean anything to anyone? I'm on 6.6.6 and details are in my signature. Thanks for any tips!
  9. Thanks, just did a force update and mine started right back up! Appreciate all you do for us!
  10. Interestingly enough, I just found out that if I ping unraid.local and put that in the browser, it works. How can I remove the .local part, as it used to work without it?
      C:\Users\Fiala06>ping unraid.local -4
      Pinging unraid [192.168.1.101] with 32 bytes of data:
      Reply from 192.168.1.101: bytes=32 time<1ms TTL=64
      Reply from 192.168.1.101: bytes=32 time<1ms TTL=64
      Reply from 192.168.1.101: bytes=32 time<1ms TTL=64
      Reply from 192.168.1.101: bytes=32 time<1ms TTL=64
  11. When I first set up unraid I was able to browse to the machine via hostname. It now only works via IP. Not entirely sure when it quit. Some things I've tried:
      • Reboots
      • Added Dynamix Local Master
      • Cleared my cache
      • Multiple browsers and machines/devices
      • Changing hostnames
      • Unplugged ethernet cables to see if the IP was being used by something else
      • Switched from AMD Ryzen to Intel i9 (different motherboard/CPU/RAM)
      The network is all Ubiquiti UniFi, everything on the same network. Pinging the hostname results in unreachable, but the direct IP works fine. (Screenshots attached: Unraid eth0; my primary desktop, a physical machine.) Any ideas?
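One way to narrow down a symptom like this is to exercise each name-resolution path separately from another machine on the LAN. A sketch, assuming the hostname is unraid and the router at 192.168.1.1 serves LAN DNS (both are assumptions):

```shell
# Test each resolution mechanism independently:
ping -c1 unraid               # NetBIOS/LLMNR or DNS-suffix resolution
ping -c1 unraid.local         # mDNS (Avahi/Bonjour) resolution
nslookup unraid 192.168.1.1   # query the LAN DNS server directly
```

If only the .local form answers, mDNS is fine and it is the NetBIOS/DNS registration of the bare hostname that broke.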
  12. I've been unable to get port forwarding working with PIA (Switzerland, which allows port forwarding). Any ideas what I need to change to make it open? Currently I see: The reason I ask is that if I do the PIA speed tests I get great speeds, but when actually seeding it seems like I'm limited, and that's with 1k torrents and multiple trackers. Any suggestions?
  13. Ahh, that's the part I missed. Thanks for your help, really appreciate it!
  14. Thanks, I just started it but didn't have the "parity is already valid" option. It now says:
      Unraid Parity sync / Data rebuild: 02-27-2019 09:09
      Notice [UNRAID] - Parity sync / Data rebuild started
      Size: 8 TB
      Guess I'll just let it rebuild. Looks like everything is there. Mini heart attack averted!
  15. So I added the drives to match what I have in the screenshot above. Didn't add the parity drives, since they said they would be formatted. Figured it would be best to start the array without them to verify everything works first? Do I need to create a new config file before starting it? I'm going to wait to hear back from someone before I start the array. Don't want to wipe it if there's a small chance of getting it back up and running again.
  16. Yes, I do have a screenshot of the drives. Really hope there is a chance of getting it all back. It was on my list to create backups today, since I finally got everything working.
  17. I was removing a drive from my array: went to Tools > New Config, clicked All and "Yes I want to do this", then clicked Apply. The machine locked up for some reason. Now I don't see any of my data drives in the array; they are all unassigned devices now. unraid-diagnostics-20190227-0836.zip
  18. Ahh, thanks! I guess it's possible this transferred over from the ryzen build.
  19. Thanks, I'll check, but I don't think that is the case. I had this issue with my Ryzen 2700X, RAM, and mobo, which I used for a month before returning them Friday. Now it happened again with my i9, new mobo, and new RAM running at their stock speeds. Do I need to format the cache drives again?
  20. This is the 2nd time I've had this issue and would love to know what causes it. I've attached my diag file. Here is what I see in my log (the same line repeats, twice per timestamp, every ~12 seconds from 21:32:39 through 21:34:15):
      Feb 24 21:32:39 UNRAID kernel: BTRFS critical (device nvme0n1p1): corrupt leaf: root=5 block=284124921856 slot=0 ino=109145, name hash mismatch with key, have 0x00000000c7e8ba55 expect 0x0000000006a3fa50
      Any ideas what could be causing it? unraid-diagnostics-20190224-2132.zip
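For "corrupt leaf" messages like the ones above, btrfs ships non-destructive checks that can confirm the extent of the damage before attempting any repair. A sketch, assuming the cache pool is mounted at /mnt/cache and the device is /dev/nvme0n1p1 as in the log:

```shell
# Scrub verifies checksums on the mounted pool and reports errors.
btrfs scrub start -B /mnt/cache   # -B: run in the foreground, print a summary
btrfs scrub status /mnt/cache

# With the array stopped (device unmounted), a read-only metadata check:
btrfs check --readonly /dev/nvme0n1p1
```

Neither command modifies the filesystem, so both are safe to run before deciding whether to restore the pool from backup or rebuild it.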
  21. My machine locked up during a permission rebuild. Upon reboot, I see a ton of this: Do I need to let it finish or should I cancel the parity check and run xfs_repair? If so how? Thanks for any tips!
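On Unraid, the usual route for the question above is to start the array in Maintenance mode (disks assigned but not mounted) and run xfs_repair against the md device, so parity stays in sync with the repair. A sketch, assuming the affected disk is disk 1 (the /dev/md1 device name is an assumption; newer Unraid releases use /dev/md1p1):

```shell
xfs_repair -n /dev/md1   # -n: dry run, report problems without changing anything
xfs_repair /dev/md1      # actual repair once the dry-run output looks sane
# If it refuses and suggests -L (zero the log), that discards the journal
# and can lose the most recent writes -- treat it as a last resort.
```

Letting an in-progress parity check finish first is harmless but not required; the check can be cancelled and re-run after the repair.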
  22. So I recently decided to move my plex metadata to its own SSD, which isn't in the array, since it's getting huge. Fix Common Problems started giving me an error:
      Event: Fix Common Problems - UNRAID
      Subject: Errors have been found with your server (UNRAID).
      Description: Investigate at Settings / User Utilities / Fix Common Problems
      Importance: alert
      **** Docker application binhex-plexpass has volumes being passed that are mounted by Unassigned Devices, but they are not mounted with the slave option ****
      Now I've done some searching, and it seems I need to change the access mode for the dir. Unfortunately I can't do that for this dir, as there is no edit/options. Any ideas?
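For the warning above, volumes that live on an Unassigned Devices mount need to be passed to the container with slave mount propagation, so the container tracks the mount appearing and disappearing. In the Docker template this is the path's "Access Mode" set to "RW/Slave" (visible in Advanced View when editing the path); it corresponds to the following -v syntax. The host and container paths here are hypothetical; only the ":rw,slave" suffix is the point:

```shell
# Hypothetical mapping -- substitute the real UD mount and container path:
-v /mnt/disks/plex_ssd/metadata:/config:rw,slave
```

If the path was added as a plain extra parameter rather than a template path entry, it has no edit dropdown; re-adding it as a proper path entry exposes the Access Mode selector.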