Videodude

Everything posted by Videodude

  1. I've been having an issue with my Docker containers for the past few months. I click "check for updates" and 7 of my 8 containers will say "update ready". I'll go through the motions of updating them all one by one. After updating, clicking "check for updates" over the next few days shows them all as "up-to-date"; however, after 1-2 weeks those same containers show as "update ready" again, even though the application version is the same after the update (a manual digest check, sketched after this list, might confirm whether an update is really pending). I don't know what logs or information to post here that would help with troubleshooting the issue. Please let me know and I'll post away. This issue is so annoying!!
  2. Awesome, thanks so much! I didn't realize there was an advanced view in the docker setup page.
  3. Mr. Balls - Is the VERSION variable something that you can set via the UNRAID web interface, or is that command line only? Can you see what version it's running from the web interface once the container is up and running? (A command-line sketch of setting and reading VERSION is after this list.)
  4. Honestly, I'm actually considering removing my SASLP cards. My UnRAID system was built back in 2011 with 2 of those cards and 15 HDD trays (I had 10 of the slots filled), and I just recently upgraded everything to the latest version of UnRAID and replaced all the drives. My mobo has 6 SATA on-board connections, and that's more than enough for my system today (I only need 4 of them with these new high capacity drives).
  5. Good point, and thanks for the correction. Once my build is complete and my data is on the array, I'll run a check to see the times.
  6. Just set up a new UnRAID server with 3 of these drives - 1 parity + 2 data. My preclear times were the same as jbuszkie's. My parity check is way faster though: I'm at ~15.5 hours to complete the initial parity sync of the 3-drive array. These 3 drives are plugged into my AOC-SASLP-MV8 card. Oh, and those clicks are annoying. I have them on all 3 drives though, so it must be normal. The SMART data is all fine and they all passed their preclear with flying colors.
  7. 1TB Mushkin SSD is $250 on Newegg today: http://www.newegg.com/Product/Product.aspx?Item=N82E16820226596&cm_re=mushkin_1tb-_-20-226-596-_-Product Here's the AnandTech review: http://www.anandtech.com/show/8949/mushkin-reactor-1tb-ssd-review According to their review, it seems to hold up really well over prolonged writes. Very similar to the Samsung EVO performance.
  8. I'm almost finished building a new UnRAID setup with 3 of these drives (1 parity + 2 data). Just thought I'd share my numbers. My preclear time was just over 60 hours for the 3 drives. This was with the fast post-read flag set, although I may have messed that up. I'm almost finished building my parity now as well. I'm about 11 hours in and 75% done; the total should be about 15.5 hours when finished, which honestly isn't too bad in my mind considering the parity drive is 8TB.
  9. Would you mind linking to that script? My drives arrive today, so I'll start my preclear tonight. Edit: Nevermind, found it. http://lime-technology.com/forum/index.php?topic=32564.0
  10. Check out this post: https://lime-technology.com/forum/index.php?topic=39526.0 TLDR: It seems to be on-par with a WD Red.
  11. I just posted this deal to slickdeals.net, and wanted to share it here as well. Crazy cheap price for this drive. Comes out to around $22.17 per TB. It also comes with a 3 year warranty, which is cool. http://slickdeals.net/f/8232764-seagate-archive-8-tb-internal-hard-drive-177-35-shipped-jet-com-new-customers?p=79342642#post79342642
  12. Would it be possible to remove the "Solved" thumb from this thread?
  13. The cache drive is disabled on the user share where I get the Stale File Handle errors.
  14. I'm looking at the unRAID Processes log, and I noticed the following:
      root    1437    2  0 09:06 ?  00:00:01 [nfsd]
      root    1438    2  0 09:06 ?  00:00:01 [nfsd]
      root    1439    2  0 09:06 ?  00:00:02 [nfsd]
      root    1440    2  0 09:06 ?  00:00:01 [nfsd]
      root    1441    2  0 09:06 ?  00:00:02 [nfsd]
      root    1442    2  0 09:06 ?  00:00:01 [nfsd]
      root    1443    2  0 09:06 ?  00:00:01 [nfsd]
      root    1444    2  0 09:06 ?  00:00:01 [nfsd]
      root    1446    1  0 09:06 ?  00:00:00 /usr/sbin/rpc.mountd
      nobody  2221 1314  1 09:24 ?  00:01:24 /usr/sbin/smbd -D
      root    6259    2  0 10:55 ?  00:00:00 [kworker/0:0]
      root    6260    2  0 10:55 ?  00:00:00 [flush-9:1]
      I'm not certain if this is important information, but I do notice some processes logged at 10:55am, which is right around when I think the Stale NFS File Handle error occurred. (Commands for grabbing the same snapshot are sketched after this list.)
  15. I just got the Stale NFS Handle error again. I believe this happened around 11am. The only thing in my system logs that correlates around that time are the spindown 4 & spindown 5 events at 10:58am. I should note that when I was using 5.0b14, I disabled disk spin down, and this error still occurred. Here is the error on my WD Live Box:
      # ls -al
      ls: ./zTemp: Stale NFS file handle
      ls: ./Videos: Stale NFS file handle
      drwxr-xr-x 7 root   root 140 Dec 31 16:45 .
      drwxr-xr-x 3 root   root  60 Dec 31 16:00 ..
      drwxrwxrwx 1 nobody 100  416 Jun  9  2012 .HD_Videos
      d-wxr----t 3 root   root  60 Dec 31 16:45 .wd_tv
      drwxr-xr-x 3 root   root  60 Dec 31 16:00 USB2
      Here is my unRAID system log for the past ~2 hours:
      Jun 11 09:06:27 MrTower emhttp: Start NFS...
      Jun 11 09:06:27 MrTower emhttp: shcmd (51): /etc/rc.d/rc.nfsd start |& logger
      Jun 11 09:06:27 MrTower logger: Starting NFS server daemons:
      Jun 11 09:06:27 MrTower logger: /usr/sbin/exportfs -r
      Jun 11 09:06:27 MrTower logger: /usr/sbin/rpc.nfsd 8
      Jun 11 09:06:27 MrTower logger: /usr/sbin/rpc.mountd
      Jun 11 09:06:27 MrTower mountd[1445]: Kernel does not have pseudo root support.
      Jun 11 09:06:27 MrTower mountd[1445]: NFS v4 mounts will be disabled unless fsid=0
      Jun 11 09:06:27 MrTower mountd[1445]: is specfied in /etc/exports file.
      Jun 11 09:06:27 MrTower emhttp: shcmd (52): /usr/local/sbin/emhttp_event svcs_restarted
      Jun 11 09:06:27 MrTower emhttp_event: svcs_restarted
      Jun 11 09:08:54 MrTower mountd[1446]: authenticated mount request from 172.16.0.92:1001 for /mnt/user/Media/_isVideo/HD/.Playlists/Playlists (/mnt/user/Media)
      Jun 11 09:08:57 MrTower mountd[1446]: authenticated mount request from 172.16.0.92:716 for /mnt/user/Media/_isVideo/HD (/mnt/user/Media)
      Jun 11 09:09:00 MrTower mountd[1446]: authenticated mount request from 172.16.0.92:852 for /mnt/user/Media/_isVideo/zTemp (/mnt/user/Media)
      Jun 11 09:21:58 MrTower kernel: mdcmd (39): spindown 0
      Jun 11 09:23:59 MrTower kernel: mdcmd (40): spindown 1
      Jun 11 09:24:00 MrTower kernel: mdcmd (41): spindown 3
      Jun 11 09:24:02 MrTower kernel: mdcmd (42): spindown 4
      Jun 11 09:24:05 MrTower kernel: mdcmd (43): spindown 5
      Jun 11 09:24:08 MrTower kernel: mdcmd (44): spindown 2
      Jun 11 09:54:45 MrTower in.telnetd[3637]: connect from 172.16.0.150 (172.16.0.150)
      Jun 11 09:54:51 MrTower login[3638]: ROOT LOGIN on '/dev/pts/0' from '172.16.0.150'
      Jun 11 10:07:19 MrTower kernel: mdcmd (45): spindown 4
      Jun 11 10:07:21 MrTower kernel: mdcmd (46): spindown 5
      Jun 11 10:08:14 MrTower kernel: mdcmd (47): spindown 1
      Jun 11 10:08:15 MrTower kernel: mdcmd (48): spindown 3
      Jun 11 10:58:38 MrTower kernel: mdcmd (49): spindown 4
      Jun 11 10:58:50 MrTower kernel: mdcmd (50): spindown 5
      (The fsid=0 warning from mountd is what the /etc/exports sketch after this list refers to.)
  16. to the WD Live box, or from the WD Live box? Sorry, you are absolutely correct. I am getting those errors when connecting from the WD Live box to my NFS user share on my unRAID system. I disabled all plugins on my unRAID system this morning, and I am attempting to recreate the issue. This NFS error sometimes happens almost immediately, while other times takes hours. As soon as I have recreated the error, I will be posting the appropriate unRAID system logs.
  17. I recently upgraded from 5.0b14 to 5.0rc4, and I am still having Stale NFS Handle errors when connecting to my WD Live box. This has been reported a number of times in the past, and the only solution has been to either switch to 4.7, or switch to SMB connections. http://lime-technology.com/forum/index.php?topic=17679.0 My current workaround is a little janky - feed an SMB connection to a Windows 7 box, and rewrap that as NFS using HaneWin NFS Server. It looks like I still need to do this workaround with RC4. (A quick SMB mount test is sketched after this list.)
  18. Please see my PM. I'm interested in part of it
  19. I'm also an inexperienced Linux guy, and I'm running into the exact same issue. I ran through the steps here using the 3.1.1 kernel, and my results are the exact same as Alex's! Any help is greatly appreciated.
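
A few command-line sketches to go with the posts above; any names, images, and addresses that don't appear in the posts themselves are placeholders.

For the "update ready" question in post 1: a rough, manual way to see whether an image has actually changed is to compare digests and re-pull from the unRAID console. The image name linuxserver/plex is only a stand-in for whichever container is affected.

  # List local images with their content digests (image name below is a placeholder)
  docker images --digests | grep plex

  # Show the digest(s) recorded for the local copy of the image
  docker inspect --format '{{.RepoDigests}}' linuxserver/plex

  # Pull the tag again; "Image is up to date" means nothing new was downloaded
  docker pull linuxserver/plex:latest

If the pull reports the image is up to date while the web interface still flags "update ready", that would suggest the mismatch is on the update-check side rather than a genuinely new image.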
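For the VERSION question in post 3: on the command line an environment variable is passed with -e and can be read back out of the running container; a template variable in the web interface ends up as the same kind of -e flag. Container and image names here are placeholders.

  # Start a container with VERSION passed as an environment variable (names are placeholders)
  docker run -d --name mycontainer -e VERSION=latest some/image

  # Read the value back from inside the running container (assumes printenv exists in the image)
  docker exec mycontainer printenv VERSION

  # Or show every environment variable Docker recorded for the container
  docker inspect --format '{{.Config.Env}}' mycontainer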
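For the process list in post 14: a way to grab the same snapshot again and line it up against syslog timestamps, in case the 10:55 entries matter. /var/log/syslog is the stock unRAID location; adjust if yours differs.

  # Snapshot the NFS and mount daemon processes
  ps -ef | grep -E '[n]fsd|[r]pc.mountd'

  # Pull recent NFS/mountd-related syslog lines to compare timestamps
  grep -E 'nfsd|mountd' /var/log/syslog | tail -n 50

  # Show what the server currently thinks it is exporting
  exportfs -v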
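About the mountd warning in post 15 ("NFS v4 mounts will be disabled unless fsid=0 is specfied in /etc/exports file"): it only matters if a client attempts NFSv4, and unRAID generates /etc/exports itself, so hand edits can be overwritten the next time shares are updated. Purely as an illustration of the option the warning refers to; the network range is a guess based on the 172.16.0.x addresses in the log.

  # /etc/exports - example of exporting the Media share as the NFSv4 pseudo-root
  /mnt/user/Media 172.16.0.0/24(rw,fsid=0,async,no_subtree_check)

  # Re-export after editing, without restarting the NFS daemons
  exportfs -r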
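On the "switch to SMB connections" option in post 17: one way to check whether SMB sidesteps the stale-handle problem is to mount the same share over CIFS from any spare Linux box and exercise it for a while. The server name, share name, and paths are guesses based on the logs (MrTower, Media); add credentials if guest access is disabled.

  # Mount the unRAID share over SMB/CIFS instead of NFS (requires cifs support on the client)
  mkdir -p /mnt/media-smb
  mount -t cifs //MrTower/Media /mnt/media-smb -o guest

  # Browse it the same way the WD Live box would
  ls -al /mnt/media-smb/_isVideo/HD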