Scorpionhl

Everything posted by Scorpionhl

  1. Sorry, I was referring to it in terms of how SSD manufacturers do for warranty: TBW = Terabytes Written. But yes, it's under the SMART data: Data Units Written: 187,917,605 [96.2 TB]
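     For anyone who wants to pull the same number, here's a rough sketch of how to read it (assumes nvme-cli is installed and the cache drive is /dev/nvme0; smartctl -a from smartmontools reports the same field):

     ```
     # Pull the Data Units Written counter from the drive's SMART log
     nvme smart-log /dev/nvme0 | grep -i written

     # Per the NVMe spec, one "data unit" is 1,000 x 512-byte sectors, so:
     #   187,917,605 units x 512,000 bytes ~= 96.2 TB written
     echo $(( 187917605 * 512000 / 1000000000000 ))   # prints 96 (TB, truncated)
     ```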
  2. I started noticing this a few weeks ago; I just happened to look at my cache's TBW and thought it was pretty high. This is a single new 1TB NVMe drive, formatted BTRFS, installed in a brand-new Unraid box/install in November 2019. Having tracked the TBW for over a week now, it was writing close to 450GB/day while my system was doing nothing remotely close to that. I found this thread and issued the remount no_cache command some people have suggested, and I'm currently looking at 43GB/day (a 90% reduction). I'm caught between waiting for a fix and clearing the drive off to reformat as XFS. Since I have no plans to expand this pool, I'll probably just reformat as XFS.
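     For reference, the workaround was a remount along these lines (a sketch, assuming the pool is mounted at /mnt/cache; note it does not persist across a reboot or array restart):

     ```
     # Disable the btrfs free-space cache on the live mount
     mount -o remount,nospace_cache /mnt/cache
     ```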
  3. For about a month, I was seeing this once or twice a day on my Linux Mint VM and couldn't figure out the cause (and had to umount/mount to correct it). I tried using NFS, various SMB versions, and various SMB options; all resulted in the same issue. As others have mentioned, it didn't seem to be based on how long the share had been mounted, but rather the time since the previous write to that mount. I never discovered the source of the issue, but I have since switched to autofs so that it unmounts and remounts the share dynamically; my setup is sketched below. Since making that change, I've only had one instance of this issue come up, and that was simply because I was inside the mounted share via the command line and autofs couldn't unmount it. My setup uses a cache drive for writes on one of these shares but not the other, and both experienced the problem. The VM runs off the cache drive, in case that's relevant at all. I also run the Dynamix Cache Directories plugin, but have it exclude one of these shares (the one that uses the cache disk for writes); again, it still happens to both shares. This has only occurred on the 6.8 and 6.8-RC releases for me, but I built this server new and went straight to RC1 a few months ago, so I never attempted to run it on 6.7 or older.
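     In case it helps anyone, the autofs setup looks roughly like this (a sketch — the mount point, map file, server, and share names are placeholders, not anything Unraid-specific):

     ```
     # /etc/auto.master -- hand /mnt/unraid to autofs and unmount
     # shares after 60 seconds of inactivity
     /mnt/unraid  /etc/auto.unraid  --timeout=60

     # /etc/auto.unraid -- one line per share (NFS shown; cifs works too)
     media    -fstype=nfs,rw    tower:/mnt/user/media
     backups  -fstype=nfs,rw    tower:/mnt/user/backups
     ```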
  4. Has anyone found a solution to this issue, or did those kernel parameters fix your problem? I've only been running Unraid for 2 months now, on the 6.8 RC code (and now 6.8), and have come across this twice. In each case, the server had been up for almost two weeks with Plex running as a docker using the iGPU the whole time (barring a restart of the docker for the occasional update). As an additional symptom, the whole Unraid box freezes (I can't log in to the directly attached console or anything), and for some reason that I cannot explain it takes down my whole wired network (I even replaced the attached switch the last time this happened and rebooted the router; wireless ran fine), but unless I unplugged the Unraid box I got no network connectivity on the other machines. Machine specs: i7-6700, Asus Z170-A on the latest firmware, 3 PCIe cards (additional network, graphics, LSI SAS-to-SATA). From the syslog (remote):

     2019-12-27 16:59:39 Error kern kernel [drm:gen8_reset_engines [i915]] *ERROR* rcs0: reset request timeout
     2019-12-27 16:59:39 Error kern kernel i915 0000:00:02.0: Failed to reset chip
     2019-12-27 16:59:38 Error kern kernel [drm:gen8_reset_engines [i915]] *ERROR* rcs0: reset request timeout
     2019-12-27 16:59:38 Error kern kernel [drm:gen8_reset_engines [i915]] *ERROR* rcs0: reset request timeout
     2019-12-27 16:59:38 Error kern kernel [drm:gen8_reset_engines [i915]] *ERROR* rcs0: reset request timeout
     2019-12-27 16:59:38 Notice kern kernel i915 0000:00:02.0: Resetting chip for hang on rcs0
     2019-12-27 16:59:38 Error kern kernel [drm:gen8_reset_engines [i915]] *ERROR* rcs0: reset request timeout
     2019-12-27 16:59:38 Notice kern kernel i915 0000:00:02.0: Resetting rcs0 for hang on rcs0
     2019-12-27 16:59:38 Information kern kernel [drm] GPU HANG: ecode 9:0:0x96d1ccef, in Plex Transcoder [14234], reason: hang on rcs0, action: reset
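     For anyone else unsure where such kernel parameters would even go on Unraid: they get appended to the boot line in syslinux.cfg (editable under Main > Flash > Syslinux Configuration). A sketch is below — the i915 option shown is a hypothetical placeholder, not the specific parameters from this thread:

     ```
     # /boot/syslinux/syslinux.cfg -- add parameters to the end of the append line
     label Unraid OS
       menu default
       kernel /bzimage
       append initrd=/bzroot i915.enable_guc=0   # hypothetical example parameter
     ```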
  5. The best thing about Unraid is definitely the community engagement; everyone does a terrific job of responding to issues quickly. If I were to add something in 2020, it'd probably be support for multiple Unraid pools on the same server.
  6. Great! Thanks for the help, info, and fix.
  7. Could you elaborate on the long-term fix? Should I be adding this md command to my go file?
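     For context, the go file lives at /boot/config/go on the flash drive. A sketch of what that might look like, assuming the setting can safely be applied at boot before emhttp starts (which is exactly what I'm asking):

     ```
     #!/bin/bash
     # /boot/config/go -- runs once at boot

     # hypothetical: apply the md tunable before the webGUI starts
     /usr/local/sbin/mdcmd set md_restrict 1

     # start the Unraid management interface (stock line)
     /usr/local/sbin/emhttp &
     ```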
  8. I was just looking at it, and everything is stable so far! No hint of a corrupted database (in the past I could tell by episodes not getting marked as watched properly, and the interface generally loading slower over time).
  9. Wanted to throw my hat in the ring. I have a brand-new server (as of Thursday) running 6.8 RC4, and after migrating the Plex server/data over to it, I've been experiencing a corrupted database after about 24 hours (meaning I've restored the database 3 times so far; that's how long it's taken me to notice the corruption each time). I restored the Plex database to its original version (repair attempts didn't work for me). This morning I took the array offline, issued the 'mdcmd set md_restrict 1' command, then restarted the array and Plex. Crossing my fingers for the next 24 hours.
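     For anyone watching for the same corruption, this is roughly how to spot it before Plex misbehaves (a sketch; the appdata path assumes a stock Plex docker mapping — adjust to your setup, and stop the container first so the database isn't mid-write):

     ```
     # Run sqlite's integrity check against the Plex library database
     DB="/mnt/user/appdata/plex/Library/Application Support/Plex Media Server/Plug-in Support/Databases/com.plexapp.plugins.library.db"
     sqlite3 "$DB" "PRAGMA integrity_check;"   # prints "ok" when the database is healthy
     ```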
  10. I think you're looking at the wrong user; the zoidberg user has 0 read access in the dashboard view.
  11. I've been following this thread and recently upgraded to RC4 (new Unraid server). I found that the write values are now displayed properly, but it appears the read values are still off. Here's the dashboard pic, and the share params from the new user edit interface: