Everything posted by glave

  1. Last week I moved my unRAID over to an Active Directory setup. I put everything back to default nobody/users, removed all permissions from world, and then applied AD permissions using AD groups. Today when my weekly extended tests ran, I discovered that the plugin really doesn't like AD permissions... This section then listed every single file I have, lol!
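     For reference, the reset looked roughly like this; a minimal sketch, where the share and AD group names are examples and the AD group has to resolve through winbind:

        # Reset a share back to the unRAID defaults (share name is an example)
        chown -R nobody:users /mnt/user/Media
        # Strip all permissions from world/other
        chmod -R o-rwx /mnt/user/Media
        # Grant an AD group access via a POSIX ACL (group name is an example;
        # winbind must be able to resolve it)
        setfacl -R -m "g:MYDOMAIN\media-users:rwx" /mnt/user/Media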
  2. It's not so much that the dockers refuse to start, but more that things can break. A perfect example: if the Kodi headless docker gets up and running before the MariaDB docker, Kodi can't talk to the DB and won't work correctly until it has been completely restarted.
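     A user script along these lines could enforce the ordering; a rough sketch, assuming containers named mariadb and kodi-headless and the mysqladmin client being present inside the MariaDB image:

        #!/bin/bash
        # Start the database first, then wait until it actually answers
        docker start mariadb
        until docker exec mariadb mysqladmin ping --silent; do
            sleep 5
        done
        # Only now start the container that depends on it
        docker start kodi-headless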
  3. Is this possible? Some of my dockers rely on the others to already be up and running and I'd like to ensure they start in a proper order.
  4. Instead of trying to empty a drive completely, is it possible to designate moving things off of that drive until it has X% free? For example, drive 17 is currently 92% full, and I want to get it down to 75% full without completely emptying it.
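     As a stopgap while the plugin can't do this, a shell sketch of the idea (the disk numbers, share path, and 75% target are all examples):

        #!/bin/bash
        # Move files off disk17 one at a time until it is 75% full or less
        SRC=/mnt/disk17/Movies
        DEST=/mnt/disk18/Movies
        for f in "$SRC"/*; do
            used=$(df --output=pcent /mnt/disk17 | tail -1 | tr -d ' %')
            [ "$used" -le 75 ] && break
            # -a preserves attributes; --remove-source-files deletes each file once copied
            rsync -a --remove-source-files "$f" "$DEST"/
        done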
  5. During the monthly parity check, disk1 dropped out due to what appeared to be a hardware error, but not a drive failure. I cold booted the server, re-added disk1 to the array, and started a rebuild. Not long after the rebuild began, I noticed that all of my user shares had disappeared. Looking in the log, I see lots of the errors listed below. I have run an extended SMART check on that drive, and it passes with flying colors. How can I rebuild on this drive? If I cannot, I currently cannot even remove it from the array and start with it missing, as the array says too many disks are missing and will not even start unprotected.

     Aug 6 16:54:41 arcade kernel: XFS (md1): Metadata corruption detected at xfs_da3_node_read_verify+0xeb/0xf3, xfs_da3_node block 0x8404ec80
     Aug 6 16:54:41 arcade kernel: XFS (md1): Unmount and run xfs_repair
     Aug 6 16:54:41 arcade kernel: XFS (md1): First 64 bytes of corrupted metadata buffer:
     Aug 6 16:54:41 arcade kernel: ffff8800927c1000: 76 da fd f3 a8 44 d3 b9 f8 00 e5 8f 49 35 5b b3  v....D......I5[.
     Aug 6 16:54:41 arcade kernel: ffff8800927c1010: 9c 71 cf b4 2a 39 cd 38 70 cc 58 13 ce d3 38 52  .q..*9.8p.X...8R
     Aug 6 16:54:41 arcade kernel: ffff8800927c1020: fd 4b f3 f4 30 5a ca f1 ef 14 de 30 b3 74 69 ce  .K..0Z.....0.ti.
     Aug 6 16:54:41 arcade kernel: ffff8800927c1030: 6c 8b b0 1e 2d 0a 12 cf dc f1 88 51 ed 52 82 38  l...-......Q.R.8
     Aug 6 16:54:41 arcade kernel: XFS (md1): metadata I/O error: block 0x8404ec80 ("xfs_trans_read_buf_map") error 117 numblks 8
     Aug 6 16:54:41 arcade shfs/user: shfs_readdir: fstatat: 890 (117) Structure needs cleaning
     Aug 6 16:54:41 arcade shfs/user: shfs_readdir: readdir_r: /mnt/disk1/downloads/queue (117) Structure needs cleaning
     Aug 6 16:54:41 arcade kernel: XFS (md1): Metadata corruption detected at xfs_da3_node_read_verify+0xeb/0xf3, xfs_da3_node block 0x8404ec80
     Aug 6 16:54:41 arcade kernel: XFS (md1): Unmount and run xfs_repair
     Aug 6 16:54:41 arcade kernel: XFS (md1): First 64 bytes of corrupted metadata buffer:
     Aug 6 16:54:41 arcade kernel: ffff8801e7f35000: 76 da fd f3 a8 44 d3 b9 f8 00 e5 8f 49 35 5b b3  v....D......I5[.
     Aug 6 16:54:41 arcade kernel: ffff8801e7f35010: 9c 71 cf b4 2a 39 cd 38 70 cc 58 13 ce d3 38 52  .q..*9.8p.X...8R
     Aug 6 16:54:41 arcade kernel: ffff8801e7f35020: fd 4b f3 f4 30 5a ca f1 ef 14 de 30 b3 74 69 ce  .K..0Z.....0.ti.
     Aug 6 16:54:41 arcade kernel: ffff8801e7f35030: 6c 8b b0 1e 2d 0a 12 cf dc f1 88 51 ed 52 82 38  l...-......Q.R.8
     Aug 6 16:54:41 arcade kernel: XFS (md1): metadata I/O error: block 0x8404ec80 ("xfs_trans_read_buf_map") error 117 numblks 8
     Aug 6 16:54:41 arcade shfs/user: shfs_readdir: fstatat: 890 (117) Structure needs cleaning
     Aug 6 16:54:41 arcade shfs/user: shfs_readdir: readdir_r: /mnt/disk1/downloads/queue (117) Structure needs cleaning
     Aug 6 16:54:50 arcade kernel: XFS (md1): Metadata corruption detected at xfs_da3_node_read_verify+0xeb/0xf3, xfs_da3_node block 0x8404ec80
     Aug 6 16:54:50 arcade kernel: XFS (md1): Unmount and run xfs_repair
     Aug 6 16:54:50 arcade kernel: XFS (md1): First 64 bytes of corrupted metadata buffer:
     Aug 6 16:54:50 arcade kernel: ffff88009088a000: 76 da fd f3 a8 44 d3 b9 f8 00 e5 8f 49 35 5b b3  v....D......I5[.
     Aug 6 16:54:50 arcade kernel: ffff88009088a010: 9c 71 cf b4 2a 39 cd 38 70 cc 58 13 ce d3 38 52  .q..*9.8p.X...8R
     Aug 6 16:54:50 arcade kernel: ffff88009088a020: fd 4b f3 f4 30 5a ca f1 ef 14 de 30 b3 74 69 ce  .K..0Z.....0.ti.
     Aug 6 16:54:50 arcade kernel: ffff88009088a030: 6c 8b b0 1e 2d 0a 12 cf dc f1 88 51 ed 52 82 38  l...-......Q.R.8
     Aug 6 16:54:50 arcade kernel: XFS (md1): metadata I/O error: block 0x8404ec80 ("xfs_trans_read_buf_map") error 117 numblks 8
     Aug 6 16:54:50 arcade kernel: XFS (md1): Metadata corruption detected at xfs_da3_node_read_verify+0xeb/0xf3, xfs_da3_node block 0x8404ec80
     Aug 6 16:54:50 arcade kernel: XFS (md1): Unmount and run xfs_repair
     Aug 6 16:54:50 arcade kernel: XFS (md1): First 64 bytes of corrupted metadata buffer:
     Aug 6 16:54:50 arcade kernel: ffff8801e61ee000: 76 da fd f3 a8 44 d3 b9 f8 00 e5 8f 49 35 5b b3  v....D......I5[.
     Aug 6 16:54:50 arcade kernel: ffff8801e61ee010: 9c 71 cf b4 2a 39 cd 38 70 cc 58 13 ce d3 38 52  .q..*9.8p.X...8R
     Aug 6 16:54:50 arcade kernel: ffff8801e61ee020: fd 4b f3 f4 30 5a ca f1 ef 14 de 30 b3 74 69 ce  .K..0Z.....0.ti.
     Aug 6 16:54:50 arcade kernel: ffff8801e61ee030: 6c 8b b0 1e 2d 0a 12 cf dc f1 88 51 ed 52 82 38  l...-......Q.R.8
     Aug 6 16:54:50 arcade kernel: XFS (md1): metadata I/O error: block 0x8404ec80 ("xfs_trans_read_buf_map") error 117 numblks 8
     Aug 6 16:54:50 arcade kernel: XFS (md1): Metadata corruption detected at xfs_da3_node_read_verify+0xeb/0xf3, xfs_da3_node block 0x8404ec80
     Aug 6 16:54:50 arcade kernel: XFS (md1): Unmount and run xfs_repair
     Aug 6 16:54:50 arcade kernel: XFS (md1): First 64 bytes of corrupted metadata buffer:
     Aug 6 16:54:50 arcade kernel: ffff8801e6201000: 76 da fd f3 a8 44 d3 b9 f8 00 e5 8f 49 35 5b b3  v....D......I5[.
     Aug 6 16:54:50 arcade kernel: ffff8801e6201010: 9c 71 cf b4 2a 39 cd 38 70 cc 58 13 ce d3 38 52  .q..*9.8p.X...8R
     Aug 6 16:54:50 arcade kernel: ffff8801e6201020: fd 4b f3 f4 30 5a ca f1 ef 14 de 30 b3 74 69 ce  .K..0Z.....0.ti.
     Aug 6 16:54:50 arcade kernel: ffff8801e6201030: 6c 8b b0 1e 2d 0a 12 cf dc f1 88 51 ed 52 82 38  l...-......Q.R.8
     Aug 6 16:54:50 arcade kernel: XFS (md1): metadata I/O error: block 0x8404ec80 ("xfs_trans_read_buf_map") error 117 numblks 8
     Aug 6 16:54:50 arcade kernel: XFS (md1): Metadata corruption detected at xfs_da3_node_read_verify+0xeb/0xf3, xfs_da3_node block 0x8404ec80
     Aug 6 16:54:50 arcade kernel: XFS (md1): Unmount and run xfs_repair
     Aug 6 16:54:50 arcade kernel: XFS (md1): First 64 bytes of corrupted metadata buffer:
     Aug 6 16:54:50 arcade kernel: ffff880091f23000: 76 da fd f3 a8 44 d3 b9 f8 00 e5 8f 49 35 5b b3  v....D......I5[.
     Aug 6 16:54:50 arcade kernel: ffff880091f23010: 9c 71 cf b4 2a 39 cd 38 70 cc 58 13 ce d3 38 52  .q..*9.8p.X...8R
     Aug 6 16:54:50 arcade kernel: ffff880091f23020: fd 4b f3 f4 30 5a ca f1 ef 14 de 30 b3 74 69 ce  .K..0Z.....0.ti.
     Aug 6 16:54:50 arcade kernel: ffff880091f23030: 6c 8b b0 1e 2d 0a 12 cf dc f1 88 51 ed 52 82 38  l...-......Q.R.8
     Aug 6 16:54:50 arcade kernel: XFS (md1): metadata I/O error: block 0x8404ec80 ("xfs_trans_read_buf_map") error 117 numblks 8
     Aug 6 16:55:09 arcade kernel: XFS (md1): Metadata CRC error detected at __read_verify+0xaa/0xb4, xfs_dir3_leafn block 0x8404ec80
     Aug 6 16:55:09 arcade kernel: XFS (md1): Unmount and run xfs_repair
     Aug 6 16:55:09 arcade kernel: XFS (md1): First 64 bytes of corrupted metadata buffer:
     Aug 6 16:55:09 arcade kernel: ffff8801e5ed4000: 76 da fd f3 a8 44 d3 b9 f8 00 e5 8f 49 35 5b b3  v....D......I5[.
     Aug 6 16:55:09 arcade kernel: ffff8801e5ed4010: 9c 71 cf b4 2a 39 cd 38 70 cc 58 13 ce d3 38 52  .q..*9.8p.X...8R
     Aug 6 16:55:09 arcade kernel: ffff8801e5ed4020: fd 4b f3 f4 30 5a ca f1 ef 14 de 30 b3 74 69 ce  .K..0Z.....0.ti.
     Aug 6 16:55:09 arcade kernel: ffff8801e5ed4030: 6c 8b b0 1e 2d 0a 12 cf dc f1 88 51 ed 52 82 38  l...-......Q.R.8
     Aug 6 16:55:09 arcade kernel: XFS (md1): metadata I/O error: block 0x8404ec80 ("xfs_trans_read_buf_map") error 74 numblks 8
     Aug 6 16:55:09 arcade kernel: XFS (md1): xfs_do_force_shutdown(0x1) called from line 315 of file fs/xfs/xfs_trans_buf.c. Return address = 0xffffffff81294958
     Aug 6 16:55:09 arcade shfs/user: shfs_unlink: unlink: /mnt/disk1/downloads/queue/3571 (117) Structure needs cleaning
     Aug 6 16:55:09 arcade shfs/user: shfs_unlink: lookup: downloads/queue/3571s (5) Input/output error
     Aug 6 16:55:09 arcade shfs/user: shfs_open: lookup: downloads/queue/n199.log (5) Input/output error
     Aug 6 16:55:09 arcade kernel: XFS (md1): I/O Error Detected. Shutting down filesystem
     Aug 6 16:55:09 arcade kernel: XFS (md1): Please umount the filesystem and rectify the problem(s)
     Aug 6 16:55:09 arcade shfs/user: shfs_open: lookup: downloads/queue/n203.log (5) Input/output error
     Aug 6 16:55:09 arcade shfs/user: shfs_open: lookup: downloads/queue/n199.log (5) Input/output error
     Aug 6 16:55:09 arcade shfs/user: shfs_open: lookup: downloads/queue/n199.log (5) Input/output error
     Aug 6 16:55:09 arcade shfs/user: shfs_open: lookup: downloads/queue/n199.log (5) Input/output error
     Aug 6 16:55:09 arcade shfs/user: shfs_open: lookup: downloads/queue/n199.log (5) Input/output error
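     The log itself is asking for xfs_repair; the usual approach would be to start the array in Maintenance mode (so md1 exists but is not mounted) and run something like the following. A sketch, not step-by-step advice for this exact situation:

        # Check first without writing anything
        xfs_repair -n /dev/md1
        # Then run the actual repair
        xfs_repair -v /dev/md1
        # If it refuses because of a dirty log, -L zeroes the log
        # (this can lose the most recent metadata changes):
        # xfs_repair -L /dev/md1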
  6. Just for good measure, here is another diagnostic; this one is from after I shut down the server and did a cold boot. I reseated my cable as well. arcade-diagnostics-20160206-1135.zip
  7. If a drive truly dies, would SMART still be available? I checked all connections very well when I put the new PSU in a few months ago; however, I did not replace those cables.
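     For what it's worth, one way to test is to ask the drive directly (the device path is an example): if it still answers on the bus, smartctl returns data, while a truly dead drive usually won't respond at all.

        # Dump all SMART information for the drive, if it responds
        smartctl -a /dev/sdb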
  8. I had some problems back in October and November with red balls during parity checks, which seemed to be from my power supply not being able to keep up. I remedied that by adding an EVGA SuperNOVA 850 G2 80+ Gold. Since then (up until yesterday) I've not had any issues. Previously when I'd have a red ball, I could have SEVERAL drives drop out at once, or maybe just one. This time I only had one, but I'm leery of it. The drive dropped out during the parity check: it started with some parity corrections, then failed ATA commands, then md disk read errors. arcade-diagnostics-20160205-1136.zip
  9. Worth pointing out that you can set overrides for individual disks by going to the settings for that disk (in case you also missed those) And now my day is complete! Thank you very much for that, sir! Now my enormous drives will let me know when they get truly low on space, and my cache drive will let me know when Sonarr has gone on a frenzy before the mover could kick in to give cache some wiggle room back.
  10. Where at? I've looked in the Stats settings & Disk Settings and I do not see the thresholds. Geez, I have no clue what I'm drinking today, because I STILL didn't see it when I went back to the page convinced that you were somehow wrong! Maybe I need to get my glasses updated... Thanks
  11. Where at? I've looked in the Stats settings & Disk Settings and I do not see the thresholds.
  12. In regards to the stats plugin, I have unRAID send notifications to me via Pushbullet. The stats plugin has default values for disk space usage of 70% = High (Warning) and 90% = Low on space (Alert). Is there somewhere I can tweak those values? It would be really great if it's possible to not only choose the threshold, but also choose it on an individual drive level (or at least the cache drive), and change the notification level that it is (Warning, Alert, etc.). Is any piece of this currently possible?
  13. I'll debunk a few theories: it's not Plex, and it's not auto-updating containers. At noon today, my utilization was at 79% of 20GB. It jumped up to 92% in two hours... I've watched it grow in the past week or so, but this was the most severe jump I've seen. I cannot think of any activity that would cause me to lose 13% of 20GB in just two hours. I don't use Plex, and in the last 2 days, none of my apps have auto-updated. Even if they had, they wouldn't have been restarted for the changes to go into effect. I'm running Sync, Sonarr, RDP-Calibre, Pf-logstash, NZBGet, MariaDB, Couchpotato, and Cadvisor. Cadvisor thinks all of my containers are using less than 1GB each in virtual size. unRAID is reporting much larger use:

     Label: none  uuid: dc11f2e2-8281-4f64-9bd5-89139eb04b02
         Total devices 1 FS bytes used 15.64GiB
         devid    1 size 20.00GiB used 19.29GiB path /dev/loop0

     NZBGet, Sonarr, and Couchpotato have their download/temp directories mapped out of the containers. What is the effect of memory usage? If you are running too many containers, or one container starts going rogue on memory consumption, would that affect the docker filesystem with swaps?
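     A couple of commands that can help narrow down which container is writing inside the image rather than into a mapped volume; a sketch, assuming docker.img is mounted at unRAID's usual /var/lib/docker:

        # Per-container writable-layer size; a large SIZE here means data is
        # being written inside the container instead of to a mapped volume
        docker ps --size
        # Space usage of the btrfs filesystem backing docker.img
        btrfs filesystem df /var/lib/docker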
  14. Awesome, I got it up and running. Thanks for the quick response! First off, take note of the MAC address the container currently uses; you can do this with docker exec and looking at the interface settings. Within the docker interface, edit the config file, go to advanced view, and under extra or additional parameters enter --mac-address= followed by the MAC address you wrote down earlier. You will also need to set the hostname, which you can append to the same field where you just entered the MAC address; that option is --hostname. You can find more info here: https://docs.docker.com/engine/reference/run/
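     Concretely, the two steps might look like this (the container name, MAC address, and hostname are placeholders):

        # 1. Read the MAC address the container currently has
        docker exec TinyMediaManager cat /sys/class/net/eth0/address
        # 2. Pin it (plus a hostname) via the template's Extra Parameters field:
        #    --mac-address=02:42:ac:11:00:05 --hostname=tmm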
  15. In your TinyMediaManager container, how can I get the registered version to stick? I saw on the Docker Hub page you mentioned it's based on MAC address. I use the register command and it verifies me just fine; however, if I restart TMM, it shows me as unregistered again. I originally registered TMM when I was using it on my Windows desktop, so if it's somehow bound to that MAC address, I can't really tell my container to use that address since my desktop is still in use on the same LAN.
  16. This is sounding eerily like the exact issue I've been fighting this week. I upgraded my 4TB parity to a 5TB last week, and the array rebuilt parity just fine. I then replaced a 1.5TB with the old 4TB parity drive, and about 4-5 hours into the data rebuild, the 4TB dropped out, along with several other drives. Rebooted, tried again: same thing, same time frame. I tried putting the 1.5TB back in, but it wouldn't take it (too small now). I got a new 5TB, and during rebuild several drives dropped after a few hours. I rechecked all cable connections, reseated PCI cards, and have now upgraded from a 650W to an 850W single-12V-rail PSU. Started another rebuild, and a drive failed.....
  17. Does the file browser on the Main view reflect the accurate contents of a disk that is currently failed? It shows a status of failed, contents emulated, but browsing it, I'm not seeing a lot of data that I would normally expect to be there.
  18. Now that you've reinforced that possibility, I looked, and I've just added a 5TB drive that consumes 11.3 watts at load, so that's making a lot of sense now. I ordered an EVGA SuperNOVA 850 220-G2-0850-XR, so I'll install it and see how things handle after that.
  19. I've had some issues recently that make me wonder if this card isn't handling 4TB & 5TB drives very well. When the system is doing something high-load, such as a data rebuild, a parity rebuild, or sometimes (rarely) a parity check, I'll suddenly have 5 or so drives disappear. All of the drives that drop out are attached to the same card, though it's not always the same card this happens to. Could it be power related? I'm using an Antec Truepower 650 with 2x5TB, 4x4TB, 8x3TB, 4x2TB, and 2x120GB SSDs. Everything else in my build is the 20 drive beast build.
  20. Just for reference, I was able to fix these issues for myself by getting into the container and installing pyopenssl 0.13.1:

     apt-get install libffi-dev python-dev libssl-dev python-pip
     pip install -U pyopenssl==0.13.1
  21. Any chance Couchpotato will be updated soon to handle SSL changes? It seems like a lot of search providers are throwing errors now due to pyopenssl changes. Lots of errors like this:

     SSL routines:SSL23_GET_SERVER_HELLO:tlsv1 alert internal error

     It makes CP think the user/pass was incorrect when logging in to a provider.
  22. That did indeed fix the issue. I have no idea how it got mangled either. I've never had issues with any shares or config files. I did blow away the whole drive and start from scratch a long while back, though...?
  23. I have since upgraded to 6.1 and the issue still exists.
  24. I have 15 Samba shares and 5 NFS shares. Four of the NFS shares are also Samba shares, but the 5th one has its Samba export turned off. I have found that this 5th NFS-only share is exporting one of my other 4 NFS shares as a duplicate.

     NFS shares:
       Comics
       Movies
       TV
       backups
       temp

     The backups share is my NFS-only share, and when mounted on another machine, it shows the contents of the Movies NFS share. From the server console itself, I go into /mnt/user/backups and it shows as empty (correct). I'm on unRAID 6.0.1. EDIT: https://dl.dropboxusercontent.com/u/189156/arcade-diagnostics-20150901-1709.zip
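     For anyone else hitting this, a quick way to compare what the server thinks it is exporting versus what clients see; a sketch, where the hostname matches my diagnostics (substitute your own):

        # From another machine, list what the server advertises over NFS
        showmount -e arcade
        # On the server itself, list what is actually being exported
        exportfs -v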