zAdok

Members
  • Posts: 45

Everything posted by zAdok

  1. Just in case anyone else is struggling to get their Proxmox PBS datastore connected to NFS on unRAID - this post has the info you need. I was specifically getting an "EPERM: Operation not permitted" error when trying to add the datastore. Adding no_root_squash to the export solved it (there's an example export rule after this list).
  2. How did you go with this? Did this solution work?
  3. No dramas mate. Just happy to be able to test this app. Already dropped array utilisation by 2%.
  4. I also see one core maxed out when idle. It jumps from core to core as well. As soon as I stop the docker that behaviour stops. @Josh.5, can I get some logs for you on this? Just let me know what log files you need.
  5. You rock, @Josh.5. So far so good - getting past the 40MB point, which I wasn't before.
  6. Just checked mine and it's the same. Gets 12% in and stops encoding.
  7. Working well for me, thanks @Josh.5. Looking forward to seeing some new features. Thanks again for your work on this.
  8. My first re-encode has just completed and wow. Quality is fantastic and the audio is nice too (in the previous build the audio was very choppy). File size has gone from about 3GB to 480MB.
  9. OK, so far I've found a few bugs. 1 - When the docker was pinned to cores 2 and 3 (I have only 4 cores) it used 100% of core 3 but wasn't actually scanning anything. Upon switching it to use all cores 0-3 it scanned the library OK and started building the queue. 2 - When you set the scheduler to 0 (to disable the schedule), the option to scan on startup does nothing. 3 - A few times when I've stopped the container and started it again, the number of workers has gone back to 3 (I've set it to 1) and the library scan went back to 60 (I had it at 180). This hasn't occurred when restarting it, only on a stop/start. 4 - It can't parse apostrophe characters when scanning the library, which results in ' being added instead. Not sure if the file outputs this way though.
  10. Great work Josh. The previous build didn't work too well for me but I'm going to try this one out now. Thanks for your work on this.
  11. Cache2 hit 71, Cache1 was at 63 I think.
  12. So this morning I changed the SSDs onto the onboard controller as advised. After a few hours of running I started getting temp alarms from the SSDs again. I stopped the array and the temps immediately dropped 20 degrees Celsius. Any ideas why this is happening? zadok-nas-diagnostics-20170425-1055.zip
  13. OK cheers. Thanks for the helpful feedback.
  14. OK so this just happened again, and this time I was able to run diagnostics. The GUI reported that one cache disk was missing and that the remaining cache disk was running hot. The one that was missing was NOT the one with the reported errors last time. btrfs device stats /mnt/cache reports no errors at all since I last cleared them a few days back (the check/reset commands are sketched after this list). Docker didn't corrupt this time. zadok-nas-diagnostics-20170424-2019.zip
  15. Thanks, heaps of write, read and flush IO errors on one of the SSDs. I'll replace that cable and keep an eye on it.
  16. Yeah that's what I was thinking. I've done this a few times in the past, and replaced the SSDs as I thought they might have been causing it. Any ideas as to what else could be causing this to happen?
  17. System became unresponsive, was able to reboot cleanly. System booted but web interface was extremely slow, cache SSDs were missing. Stopped array, clean reboot again. Cache drives returned but docker will not start. zadok-nas-diagnostics-20170421-2050.zip
  18. Just advising that I got some data off one of the SSDs but not the other 2. I've purchased 2 new drives to replace the 3 and will ensure the new pool is RAID1.
  19. I'll give this a go when I get home. Thanks
  20. Sorry, what am I trying to achieve by physically disconnecting the faulty SSD? I've done this but the cache drives don't show as a pool anymore and they show as new devices. I have checked the diagnostics from the URL you provided, but the scrub requires the pool to be mounted, which it's not. Unless I'm reading it wrong. The btrfs restore doesn't seem to be available in unRAID... unless I don't know where to find it? (There's an example restore command after this list.)
  21. Hi All, I upgraded to 6.2.1 and after the restart one of my cache drives won't mount. The cache pool says "Unmountable - No file system (no btrfs UUID)". I downgraded back to 6.2 in case it was a driver issue, but it's still the same. I have checked cables etc. Where to from here? Diagnostics attached. Thanks in advance. zadok-nas-diagnostics-20161011-2217.zip
  22. Very informative thread! Can anyone confirm what the speeds are like when replacing a failed drive?
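
For post 1, a minimal sketch of an exports(5)-style rule with no_root_squash. The share path and subnet are placeholders, and on unRAID the equivalent options would go into the share's NFS export rule rather than a hand-edited /etc/exports:

    # hypothetical export for the PBS datastore share - path and subnet are placeholders
    /mnt/user/pbs-datastore 192.168.1.0/24(rw,sync,no_subtree_check,no_root_squash)

Without no_root_squash the server maps root to the anonymous user, so the root user in Proxmox Backup Server can't take ownership of the datastore directory and the add fails with EPERM.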
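
For posts 14 and 15, the per-device error counters mentioned there come from btrfs device stats; the mount point below matches the cache pool in those posts:

    # show per-device write/read/flush error counters for the cache pool
    btrfs device stats /mnt/cache
    # reset the counters (e.g. after replacing the cable) so any new errors stand out
    btrfs device stats -z /mnt/cache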
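
For post 20: btrfs restore ships with btrfs-progs, so it should be usable from the unRAID terminal even though the web UI has no button for it, and unlike scrub it works on an unmounted pool. The device name and destination below are placeholders:

    # read-only recovery of files from an unmountable pool onto an array disk
    # /dev/sdX1 and the destination directory are placeholders
    btrfs restore -v /dev/sdX1 /mnt/disk1/cache_rescue

The -v flag just lists each file as it is copied out; nothing is written to the source device.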